Explainable AI and trust: How news media shapes public support for AI-powered autonomous passenger drones

Justin C. Cheung, Shirley S. Ho*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This study examines the relationships between attention to AI in news media, perceived AI explainability, trust in AI, and public support for autonomous passenger drones. Using structural equation modelling (N = 1,002), we found significant associations between perceived AI explainability and all three trust dimensions (i.e., performance, purpose, and process). We also found that attention to AI in the news media was associated with perceived AI explainability, indicating that the public acquires such perceptions partly through news coverage. Of the three trust dimensions, however, only trust in performance was significantly associated with support for autonomous passenger drones. Our findings underscore the importance of ensuring AI explainability for the public and highlight the pivotal role of news media in shaping public perceptions of emerging AI technologies. Theoretical and practical implications are discussed.
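To make the hypothesized path structure concrete, the following is a minimal sketch of how a structural equation model like the one described in the abstract might be specified in Python with the semopy package. This is not the authors' code: the latent variable names, the observed indicators (att1 ... supp2), and the data file are illustrative assumptions inferred from the abstract.

```python
# Hypothetical SEM specification mirroring the abstract's path structure
# (news attention -> perceived explainability -> trust dimensions -> support).
# All variable names and the data file are illustrative assumptions.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model (hypothetical survey indicators)
attention      =~ att1 + att2 + att3
explainability =~ exp1 + exp2 + exp3
trust_perf     =~ perf1 + perf2
trust_purp     =~ purp1 + purp2
trust_proc     =~ proc1 + proc2
support        =~ supp1 + supp2

# Structural paths
explainability ~ attention
trust_perf ~ explainability
trust_purp ~ explainability
trust_proc ~ explainability
support ~ trust_perf + trust_purp + trust_proc
"""

data = pd.read_csv("survey_responses.csv")  # assumed survey data (N = 1,002)
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # parameter estimates and standard errors
```

Under this sketch, the abstract's key finding would correspond to a significant coefficient on the support ~ trust_perf path, with the trust_purp and trust_proc paths non-significant.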

Original language: English
Journal: Public Understanding of Science
DOIs
Publication status: Accepted/In press - 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

ASJC Scopus Subject Areas

  • Communication
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)

Keywords

  • autonomous passenger drones
  • explainable AI
  • perceived explainability
  • public opinion
  • trust in AI
  • XAI
