Mortal vs. Machine: A Compact Two-Factor Model for Comparing Trust in Humans and Robots

Andrew Prahl*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Trust in robots is often analyzed with scales built for either humans or automation, making direct human–robot comparisons imprecise. Addressing that gap, this paper distils decades of trust scholarship, from clinical vs. actuarial judgement to modern human–robot teaming, into a lean two-factor framework: Mortal vs. Machine (MvM). We first surveyed classic technology-acceptance and automation-reliance research and then integrated empirical findings in human–robot interaction to identify diagnostic cues that can be instantiated by both human and machine agents. The model comprises (i) ability—perceived task competence and reliability—and (ii) value congruence—alignment of decision weights and trade-off priorities. Benevolence, often included in trust studies, was excluded because current robots cannot manifest genuine goodwill and existing benevolence items elicit high dropout. The resulting scale travels across contexts, allowing researchers to benchmark a robot against a human co-worker on identical terms and enabling practitioners to pinpoint whether performance deficits or priority clashes drive acceptance. By reconciling the anthropocentric and technocentric trust literatures in a deployable diagnostic, MvM offers a field-ready tool and a conceptual bridge for future studies of AI-empowered robotics.

Original language: English
Article number: 112
Journal: Robotics
Volume: 14
Issue number: 8
DOIs
Publication status: Published - Aug 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2025 by the author.

ASJC Scopus Subject Areas

  • Mechanical Engineering
  • Control and Optimization
  • Artificial Intelligence

Keywords

  • human factors
  • human–machine communication
  • human–robot interaction
  • implementation
  • model
  • performance
  • survey
  • trust
