Towards Equal Gender Representation in the Annotations of Toxic Language Detection
(2021)
Presentation / Conference Contribution
Excell, E., & Al Moubayed, N. (2021, August). Towards Equal Gender Representation in the Annotations of Toxic Language Detection. Presented at the 3rd Workshop on Gender Bias in Natural Language Processing (GeBNLP 2021), International Joint Conference on Natural Language Processing (IJCNLP 2021), Bangkok, Thailand
Classifiers tend to propagate biases present in the data on which they are trained. Hence, it is important to understand how the demographic identities of the annotators of comments affect the fairness of the resulting model. In this paper, we focus...