Explainable AI and trust: How news media shapes public support for AI-powered autonomous passenger drones

Justin C. Cheung, Shirley S. Ho*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This study examines the relationships between attention to AI in news media, perceived AI explainability, trust in AI, and public support for autonomous passenger drones. Using structural equation modelling (N = 1,002), we found significant associations between perceived AI explainability and all three trust dimensions (performance, purpose, and process). We also found that the public acquired perceptions of AI explainability through attention to AI in the news media. In turn, only the performance dimension of trust was associated with support for autonomous passenger drones. Our findings underscore the importance of ensuring AI explainability for the public and highlight the pivotal role of news media in shaping public perceptions of emerging AI technologies. Theoretical and practical implications are discussed.
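For readers unfamiliar with how such an analysis is structured, the sketch below shows one way a structural equation model of this kind (news attention → perceived explainability → trust dimensions → support) might be specified in Python with semopy. It is illustrative only; the construct indicators, variable names, and data file are hypothetical assumptions, not the authors' actual measures or model.

import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model (latent constructs with hypothetical survey items)
attention =~ att1 + att2 + att3
explainability =~ exp1 + exp2 + exp3
trust_perf =~ tperf1 + tperf2 + tperf3
trust_purp =~ tpurp1 + tpurp2 + tpurp3
trust_proc =~ tproc1 + tproc2 + tproc3
support =~ sup1 + sup2 + sup3

# Structural model: attention -> explainability -> trust -> support
explainability ~ attention
trust_perf ~ explainability
trust_purp ~ explainability
trust_proc ~ explainability
support ~ trust_perf + trust_purp + trust_proc
"""

def fit_sem(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the sketched SEM to item-level survey data (one row per respondent)."""
    model = Model(MODEL_DESC)
    model.fit(data)
    return model.inspect()  # table of estimated loadings and structural paths

# Hypothetical usage with a CSV of survey responses:
# estimates = fit_sem(pd.read_csv("survey_responses.csv"))
# print(estimates)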

Original language: English
Pages (from-to): 344-362
Number of pages: 19
Journal: Public Understanding of Science
Volume: 34
Issue number: 3
DOIs
Publication status: Published - Apr 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

ASJC Scopus Subject Areas

  • Communication
  • Developmental and Educational Psychology
  • Arts and Humanities (miscellaneous)

Keywords

  • autonomous passenger drones
  • explainable AI
  • perceived explainability
  • public opinion
  • trust in AI
  • XAI
