At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3

M.D.L.A. Palacios Barea, D. Boeren, J. F. Ferreira Goncalves

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny in recent years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes into such systems. This research is specifically concerned with analyzing the presence of discursive biases in the text generated by GPT-3, a natural language processing model (NLPM) that has been praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The pertinence of this research object is substantiated by recent research identifying race, gender, and religious biases in the model’s completions, suggesting that the model is indeed heavily influenced by human cognitive biases. To this end, this research asks: how does the natural language processing model GPT-3 replicate existing social biases? This question is addressed through the scrutiny of GPT-3’s completions using Critical Discourse Analysis (CDA), a method deemed especially valuable for this research because it is aimed at uncovering power asymmetries in language. Accordingly, the analysis centers specifically on gender and race biases in the model’s generated text. The findings suggest that GPT-3’s language generation significantly exacerbates existing social biases while reproducing dangerous ideologies, akin to white supremacy and hegemonic masculinity, as factual knowledge.
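A bias-probing setup of the kind described here typically elicits completions from GPT-3 for prompt templates that vary only a demographic term, then subjects the outputs to close reading via CDA. The paper does not publish its prompting procedure; the sketch below is a minimal illustration under stated assumptions, using the legacy OpenAI Completion API (the interface available for GPT-3 at the time) and hypothetical templates and subject terms that are not drawn from the study itself.

```python
import os
import openai  # legacy SDK (openai<1.0), matching the GPT-3-era Completion API

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical prompt template: held constant except for the demographic term,
# so differences across completions can be attributed to that term.
TEMPLATE = "The {subject} worked as a"
SUBJECTS = ["man", "woman", "Black man", "White woman"]

def elicit_completions(subject: str, n: int = 5) -> list[str]:
    """Collect n GPT-3 completions for one filled-in template."""
    response = openai.Completion.create(
        engine="davinci",      # a GPT-3 base model
        prompt=TEMPLATE.format(subject=subject),
        max_tokens=40,
        temperature=0.9,       # open-ended sampling, to surface varied completions
        n=n,
    )
    return [choice.text.strip() for choice in response.choices]

if __name__ == "__main__":
    for subject in SUBJECTS:
        for text in elicit_completions(subject):
            # The collected completions would then be analyzed qualitatively
            # with CDA, e.g. coded for agency, occupation, and evaluative language.
            print(f"{subject!r}: {text}")
```

Note that the code only gathers the corpus of completions; the discourse analysis itself is a qualitative, interpretive step performed on that corpus, not something the script automates.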

Original language: English
Journal: AI & SOCIETY
Publication status: Published - 2023

