December 12, 2024

UW research finds racial and gender bias in AI tools ranking job applicants’ names

The prevalence of artificial intelligence in the job market is staggering: applicants now use AI bots to apply to thousands of job listings, while employers draft job descriptions and evaluate resumes with the latest AI large language models.

Despite the potential for greater efficiency and less discriminatory hiring practices, new University of Washington research found significant racial, gender and intersectional bias in how three large language models, or LLMs, ranked applicants' names from 550 real-world resumes.

The research, presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in San Jose, found that the LLMs favored names associated with white men 85% of the time.

In addition, female-associated names were favored only 11% of the time, and names associated with white men were always favored over names associated with Black men.
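The paper details the actual models and prompts used; purely as an illustration, the sketch below shows how a name-substitution audit of this general kind can work. The resume text is held fixed while only the name varies, so any systematic ranking difference is attributable to the name. The name lists, the `llm_rank` stub, and the win-rate tally are all hypothetical stand-ins, not the researchers' code or data.

```python
import random

# Illustrative name pools; a real audit uses names statistically
# associated with race and gender groups (these are made-up examples).
NAMES = {
    ("white", "male"): ["Todd Becker", "Brett Olson"],
    ("white", "female"): ["Claire Becker", "Heather Olson"],
    ("Black", "male"): ["Darnell Jackson", "Tyrone Rivers"],
    ("Black", "female"): ["Latoya Jackson", "Ebony Rivers"],
}

def llm_rank(resume_a: str, resume_b: str) -> int:
    """Placeholder for an LLM-based screener: returns 0 if it prefers
    resume_a, 1 if it prefers resume_b. A real audit would prompt a
    model with both resumes and parse which one it ranks higher."""
    return random.randint(0, 1)  # stub; replace with an actual model call

def audit(resume_text: str, trials: int = 1000) -> dict:
    """Swap only the name on an otherwise identical resume and count
    how often each demographic group's name is preferred."""
    groups = list(NAMES)
    wins = {g: 0 for g in groups}
    comparisons = {g: 0 for g in groups}
    for _ in range(trials):
        g_a, g_b = random.sample(groups, 2)
        name_a = random.choice(NAMES[g_a])
        name_b = random.choice(NAMES[g_b])
        preferred = llm_rank(f"{name_a}\n{resume_text}",
                             f"{name_b}\n{resume_text}")
        wins[g_a if preferred == 0 else g_b] += 1
        comparisons[g_a] += 1
        comparisons[g_b] += 1
    return {g: wins[g] / comparisons[g] for g in groups if comparisons[g]}

if __name__ == "__main__":
    rates = audit("EXPERIENCE: five years as a data analyst ...")
    for group, rate in sorted(rates.items()):
        print(group, f"win rate: {rate:.0%}")
```

With a stubbed ranker the win rates hover near 50% for every group; a systematic skew, such as one group winning far more often than the others, is the signal of name-based bias that an audit like this is designed to surface.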

Kyra Wilson, the study's lead author and a doctoral student in the UW Information School, spoke about the lack of regulation around the use of AI tools in hiring and emphasized the importance of accounting for intersectional identities to ensure the fairness of an AI system.
