Research Repository

All Outputs (2)

Algorithmic bias: sexualized violence against women in GPT-3 models (2025)
Journal Article
Wyer, S., & Black, S. (online). Algorithmic bias: sexualized violence against women in GPT-3 models. AI and Ethics. https://doi.org/10.1007/s43681-024-00641-0

This study explores the occurrence and implications of sexualized violence against women in text completion tasks performed by GPT-3 models. The study began as an exploratory investigation into gender inequalities within GPT-3 models to discover what...

Self-Regulated Sample Diversity in Large Language Models (2024)
Presentation / Conference Contribution
Liu, M., Frawley, J., Wyer, S., Shum, H. P. H., Uckelman, S. L., Black, S., & Willcocks, C. G. (2024, June). Self-Regulated Sample Diversity in Large Language Models. Presented at NAACL 2024: 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Mexico City.