Open Access
Social Sciences, Volume 12, Issue 8, Article 435

What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI

Publication type: Journal Article
Publication date: 2023-08-01
Journal: Social Sciences
Scimago quartile: Q2
SJR: 0.502
CiteScore: 2.6
Impact factor: 1.7
ISSN: 2076-0760
Subject area: General Social Sciences
Abstract

Large language models and generative AI, such as ChatGPT, have gained influence over people’s personal lives and work since their launch, and are expected to scale even further. While the promises of generative artificial intelligence are compelling, this technology harbors significant biases, including those related to gender. Gender biases create patterns of behavior and stereotypes that put women, men and gender-diverse people at a disadvantage. Gender inequalities and injustices affect society as a whole. As a social practice, gendering is achieved through the repeated citation of rituals, expectations and norms. Shared understandings are often captured in scripts, including those emerging in and from generative AI, which means that gendered views and gender biases get grafted back into social, political and economic life. This paper’s central argument is that large language models work performatively, which means that they perpetuate and perhaps even amplify old and non-inclusive understandings of gender. Examples from ChatGPT are used here to illustrate some gender biases in AI. However, this paper also puts forward that AI can work to mitigate biases and act to ‘undo gender’.

Top-30

[Chart: Top-30 citing journals — values not recoverable from this copy]

[Chart: Top-30 citing publishers — values not recoverable from this copy]
  • We do not take into account publications without a DOI.
  • Statistics recalculated only for publications connected to researchers, organizations and labs registered on the platform.
  • Statistics recalculated weekly.
