UW News

Aylin Caliskan


October 31, 2024

AI tools show biases in ranking job applicants’ names according to perceived race and gender


University of Washington researchers found significant racial, gender and intersectional bias in how three state-of-the-art large language models ranked resumes. The models favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.


November 29, 2023

AI image generator Stable Diffusion perpetuates racial and gendered stereotypes, study finds

Four images created by the AI image generator Stable Diffusion with the prompt "person from Oceania" show four light-skinned people.

University of Washington researchers found that when prompted to make pictures of “a person,” the AI image generator over-represented light-skinned men, failed to equitably represent Indigenous peoples and sexualized images of certain women of color.