Abstract
Artificial intelligence (AI) offers potential to enhance healthcare decision-making but is limited by its ‘black box’ nature. Explainable AI (XAI) and Uncertainty Quantification (UQ) address this challenge by improving interpretability and reliability. Despite their potential, the impact of XAI and UQ on medical students’ perceptions of AI in healthcare remains unclear. This study explores that impact through a mixed-methods study with 131 medical students from Singapore and China, assessing the effects of varying AI methods on trust, usability, and decision-making. Results show that XAI and UQ enhance AI usability but highlight the need for clinically relevant explanations and contextualised uncertainty reasoning to optimise AI adoption in healthcare.
Original language | English |
---|---|
Title of host publication | CHI EA 2025 - Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems |
Publisher | Association for Computing Machinery |
ISBN (Electronic) | 9798400713958 |
DOIs | |
Publication status | Published - Apr 26 2025 |
Externally published | Yes |
Event | 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025 - Yokohama, Japan. Duration: Apr 26 2025 → May 1 2025 |
Publication series
Name | Conference on Human Factors in Computing Systems - Proceedings |
---|---|
Conference
Conference | 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025 |
---|---|
Country/Territory | Japan |
City | Yokohama |
Period | 4/26/25 → 5/1/25 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s).
ASJC Scopus Subject Areas
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
- Software
Keywords
- Explainable AI
- Human-AI Interaction
- Medical AI
- Uncertainty Quantification