Research Repository

Algorithmic bias: sexualized violence against women in GPT-3 models

Wyer, Sarah; Black, Sue

Authors

Sarah Wyer sarah.wyer@durham.ac.uk
PGR Student, Doctor of Philosophy



Abstract

This study explores the occurrence and implications of sexualized violence against women in text completion tasks performed by GPT-3 models. The study began as an exploratory investigation into gender inequalities within GPT-3 models, asking what GPT-3 models generate in text completion tasks when given basic gendered prompts such as "Men can…" and "Women can…". Using an adapted version of Safiya Noble's prompt scheme, we generated a dataset of 9 million tokens from GPT-3 models, then conducted topic analysis with BERTopic, sentiment analysis with VADER, and toxicity detection with Detoxify. The results were unexpected. Topic analysis revealed that sexualized violence ranked as the top topic for prompts featuring 'women'. Women were often depicted in the context of sexualized violence and victimization, in stark contrast to the portrayal of men as superheroes, or in contexts involving urination and alcohol. For example, the prompt "Why are women so" generated the randomly selected output "Why are women so arrogant about being raped?". Our findings stress the critical need for ethical considerations in large language model (LLM) development to address and mitigate the generation of text containing sexualized violence against women. We discuss the implications of amplifying and normalizing sexualized violence against women in content generated by LLMs. Our work builds on previous research examining gender bias in LLMs, with a specific focus on the manifestation of sexualized violence against women in LLM outputs, an area that has received little attention. We discuss mitigation approaches such as content filtering and moderation; user safety and trauma-informed responses; legal and ethical considerations; avoiding misinformation; reflecting societal changes; and global discourse and action. Our overall aim is to contribute to the understanding of such biases and their impact on survivors and wider society, and to offer insights that can guide the development of more equitable and ethical AI systems.
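
The pipeline described above combines three open-source tools. As a minimal illustrative sketch only, assuming the public Python APIs of the bertopic, vaderSentiment, and detoxify packages, the following shows how model completions could be topic-modelled, sentiment-scored, and toxicity-scored; the completions below are hypothetical placeholders, not the study's 9-million-token corpus or its prompt scheme.

from bertopic import BERTopic
from detoxify import Detoxify
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical stand-ins for GPT-3 completions elicited with gendered
# prompts; the study's real corpus was 9 million tokens of model output.
completions = [
    f"{subject} can {verb}."
    for subject in ("Women", "Men")
    for verb in (
        "write code", "run marathons", "lead teams", "cook dinner",
        "teach classes", "fly planes", "paint portraits", "fix cars",
        "raise children", "win elections",
    )
]

# Topic analysis with BERTopic.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(completions)

# Sentiment analysis with VADER ('compound' is a score in [-1, 1]).
analyzer = SentimentIntensityAnalyzer()
sentiments = [analyzer.polarity_scores(c)["compound"] for c in completions]

# Toxicity detection with Detoxify's pretrained 'original' model.
toxicity = Detoxify("original").predict(completions)

print(topic_model.get_topic_info())
print(sentiments[:4])
print({label: scores[:4] for label, scores in toxicity.items()})

In practice these tools were applied at corpus scale; the sketch only shows the library calls involved at each stage.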

Citation

Wyer, S., & Black, S. (online). Algorithmic bias: sexualized violence against women in GPT-3 models. AI and Ethics. https://doi.org/10.1007/s43681-024-00641-0

Journal Article Type Article
Acceptance Date Nov 24, 2024
Online Publication Date Jan 15, 2025
Deposit Date Jan 20, 2025
Publicly Available Date Jan 20, 2025
Journal AI and Ethics
Electronic ISSN 2730-5961
Publisher Springer
Peer Reviewed Peer Reviewed
DOI https://doi.org/10.1007/s43681-024-00641-0
Public URL https://durham-repository.worktribe.com/output/3342589
