Abstract
There is growing concern regarding the potential for automated decision making to discriminate against certain social groups. However, little is known about how people’s social identities influence their perceptions of biased automated decisions. Focusing on the context of racial disparity, this study examined whether individuals’ social identities (White vs. people of color [POC]) and social contexts that entail discrimination (discrimination target: the self vs. the other) affect perceptions of algorithmic outcomes. A randomized controlled experiment (N = 604) demonstrated that a participant’s social identity significantly moderated the effects of the discrimination target on these perceptions. Among POC participants, algorithms that discriminated against the self decreased perceived fairness and trust, whereas among White participants the opposite pattern was observed. The findings imply that social disparity and inequality, along with different social groups’ lived experiences of existing discrimination and injustice, should be central to understanding how people make sense of biased algorithms.
| Original language | English |
| --- | --- |
| Pages (from-to) | 677-699 |
| Number of pages | 23 |
| Journal | International Journal of Communication |
| Volume | 18 |
| Publication status | Published - 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 (Soojong Kim, Joomi Lee, and Poong Oh). Licensed under the Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd). Available at http://ijoc.org. All Rights Reserved.
ASJC Scopus Subject Areas
- Communication
Keywords
- artificial intelligence
- automated decision making
- bias
- discrimination
- emotion
- fairness
- race
- trust