Aylin Caliskan
November 10, 2025
People mirror AI systems’ hiring biases, study finds

In a new UW study, 528 participants worked with simulated AI systems to select job candidates. The researchers simulated different levels of racial bias in recommendations for resumes from white, Black, Hispanic and Asian men. Without AI recommendations, participants' choices exhibited little bias. But when provided with recommendations, participants mirrored the AI's biases.
October 31, 2024
AI tools show biases in ranking job applicants’ names according to perceived race and gender

University of Washington researchers found significant racial, gender and intersectional bias in how three state-of-the-art large language models ranked resumes. The models favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.
November 29, 2023
AI image generator Stable Diffusion perpetuates racial and gendered stereotypes, study finds

University of Washington researchers found that when prompted to make pictures of "a person," the AI image generator over-represented light-skinned men, failed to equitably represent Indigenous peoples and produced sexualized images of certain women of color.