Questioning Artificial Intelligence: How Racial Identity Shapes the Perceptions of Algorithmic Bias

Authors

  • Soojong Kim University of California Davis
  • Joomi Lee University of Georgia
  • Poong Oh Nanyang Technological University

Keywords:

automated decision making, artificial intelligence, race, discrimination, bias, fairness, trust, emotion

Abstract

Growing concerns indicate that automated decision-making (ADM) may discriminate against certain social groups, but little is known about how people’s social identities influence their perceptions of biased automated decisions. Focusing on the context of racial disparity, this study examined whether individuals’ social identities (White vs. People of Color) and social contexts that entail discrimination (discrimination target: the self vs. the other) affect perceptions of ADM. A randomized controlled experiment (N = 604) demonstrated that a participant’s social identity significantly moderated the effects of the discrimination target on perceptions of ADM. Among POC participants, ADM that discriminated against the participant decreased perceived fairness and trust in ADM, whereas the opposite pattern was observed among White participants. The findings imply that social disparity and inequality, and the lived experiences of discrimination and injustice among different social groups, should be at the center of understanding how people make sense of biased algorithms.

Author Biographies

Soojong Kim, University of California Davis

Assistant Professor

Joomi Lee, University of Georgia

Postdoctoral Researcher

Poong Oh, Nanyang Technological University

Assistant Professor

Published

2023-12-26

Section

Special Section: Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues