TY - JOUR
T1 - At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3
AU - Palacios Barea, M.D.L.A.
AU - Boeren, D.
AU - Goncalves, J. F. Ferreira
PY - 2023
Y1 - 2023
N2 - Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny in recent years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes in such systems. This research analyzes the presence of discursive biases in text generated by GPT-3, a natural language processing model (NLPM) praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The pertinence of this research object is substantiated by the identification of race, gender and religious biases in the model’s completions in recent research, suggesting that the model is heavily influenced by human cognitive biases. To this end, this research asks: how does the natural language processing model GPT-3 replicate existing social biases? This question is addressed by scrutinizing GPT-3’s completions using Critical Discourse Analysis (CDA), a method deemed valuable for this research because it aims to uncover power asymmetries in language. The analysis accordingly centers on gender and race biases in the model’s generated text. Research findings suggest that GPT-3’s language generation model significantly exacerbates existing social biases while reproducing dangerous ideologies akin to white supremacy and hegemonic masculinity as factual knowledge.
UR - http://www.scopus.com/inward/record.url?scp=85178168044&partnerID=8YFLogxK
U2 - 10.1007/s00146-023-01804-z
DO - 10.1007/s00146-023-01804-z
M3 - Article
JO - AI & SOCIETY
JF - AI & SOCIETY
ER -