Understanding and Countering Stereotypes: We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM), a comprehensive causal theory from social psychology. We further explore strategies for automatically countering stereotypical beliefs.
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2024) How Does Stereotype Content Differ across Data Sources? In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), Mexico City, Mexico, June 2024. [pdf]
Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, and Svetlana Kiritchenko. (2024) Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes. In Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, May 2024. [pdf] [data]
Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof. (2023) What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon), Toronto, ON, Canada, July 2023. [pdf]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2022) Computational Modelling of Stereotype Content in Text. Frontiers in Artificial Intelligence, April 2022. [paper]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2022) Extracting Age-Related Stereotypes from Social Media Texts. In Proceedings of the Language Resources and Evaluation Conference (LREC-2022), Marseille, France, June 2022. [pdf] [project webpage]
Kathleen C. Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. (2021) Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), August 2021. [pdf]
Biases in Vision-Language Systems: We investigate bias and diversity in the outputs of state-of-the-art text-to-image and large vision-language systems.
Kathleen C. Fraser and Svetlana Kiritchenko. (2024) Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Malta, March 2024. [paper]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2023) Diversity is Not a One-Way Street: Pilot Study on Ethical Interventions for Racial Bias in Text-to-Image Systems. In Proceedings of the 14th International Conference on Computational Creativity (ICCC), Waterloo, ON, Canada, June 2023. Best Short Paper Award [pdf]
Kathleen C. Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. (2023) A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified? In Proceedings of the Creative AI Across Modalities Workshop (CreativeAI @ AAAI), Washington, DC, USA, Feb. 2023. [pdf]