User Perceptions and Trust of Explainable Machine Learning Fake News Detectors

Authors

  • Jieun Shin, University of Florida
  • Sylvia Chan-Olmsted, University of Florida

Keywords

AI, fake news, media literacy, trust, explainability

Abstract

This study explored the factors that explain users’ trust in, and intent to use, a leading explainable artificial intelligence (AI) fake news detection technology. Toward this end, we examined the relationships between various human factors and software-related factors using a survey. Regression models showed that users’ trust in the software was shaped both by individuals’ inherent characteristics and by their perceptions of the AI application. Users’ adoption intention was ultimately driven by trust in the detector, which explained a substantial share of the variance. We also found that trust was higher when users perceived the application to be highly competent at detecting fake news, to be highly collaborative, and to have greater autonomy in how it works. Our findings indicate that trust is a focal element in determining users’ behavioral intentions. We argue that identifying positive heuristics of fake news detection technology is critical for facilitating the diffusion of AI-based detection systems in fact-checking.
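The analytic structure described in the abstract, regression models predicting trust from individual characteristics and perceived attributes of the detector, with trust in turn predicting adoption intention, can be illustrated with a minimal sketch. The sketch below uses Python with statsmodels on synthetic data; the variable names, coefficients, and data are hypothetical stand-ins, not the authors’ instrument or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data described in the abstract;
# all variable names and effect sizes are hypothetical illustrations.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "media_literacy": rng.normal(size=n),        # individual characteristic
    "perceived_competence": rng.normal(size=n),  # perceptions of the AI app
    "perceived_collaboration": rng.normal(size=n),
    "perceived_autonomy": rng.normal(size=n),
})
df["trust"] = (0.3 * df["perceived_competence"]
               + 0.2 * df["perceived_collaboration"]
               + 0.2 * df["perceived_autonomy"]
               + 0.1 * df["media_literacy"]
               + rng.normal(scale=0.5, size=n))
df["adoption_intention"] = 0.6 * df["trust"] + rng.normal(scale=0.5, size=n)

# Model 1: trust predicted by individual and software-perception factors.
trust_model = smf.ols(
    "trust ~ media_literacy + perceived_competence"
    " + perceived_collaboration + perceived_autonomy",
    data=df,
).fit()
print(trust_model.summary())

# Model 2: adoption intention predicted by trust; R^2 here is the share
# of variance in usage intent that trust accounts for.
adoption_model = smf.ols("adoption_intention ~ trust", data=df).fit()
print(adoption_model.rsquared)
```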

Author Biographies

Jieun Shin, University of Florida

Assistant Professor, Department of Media Production, Management, and Technology, University of Florida

Sylvia Chan-Olmsted, University of Florida

Professor, Department of Media Production, Management, and Technology; Director of Media Consumer Research

Published

2022-12-29

Section

Articles