AI Hiring Bias - Statistics and Facts
We estimate that 87% of organizations now use AI at some point in the hiring process.
Despite this widespread adoption, few recruiters understand the biases these tools can introduce.
Let's discuss the impact of AI hiring biases on diversity and equity in recruitment.
Key Statistics and Facts
- AI resume screening is 50% more likely to choose candidates with white-sounding names over those with Black-sounding names
- 49% of job seekers believe that AI recruiting tools are more biased than human recruiters
- In 82% of AI-based recruitment decisions, human recruiters blindly followed the algorithm's recommendations
- Only 33% of job seekers prefer to be evaluated by an algorithm
AI resume screening is 50% more likely to choose candidates with white-sounding names over those with Black-sounding names (UW)
A research article published by the University of Washington found that LLMs favored white-sounding names 85% of the time, favored female-sounding names only 11% of the time, and never favored Black male-sounding names over white male-sounding names.
According to the UW researchers, AI resume screening is 50% more likely to choose candidates with white-sounding names over those with Black-sounding names.
The problem is that the texts available for training AI hiring assistants often reflect long-standing societal stereotypes.
If the training data more often associates competence with white-sounding names, the model is likely to reproduce this association when rating resumes.
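This kind of association is straightforward to audit: hold the resume text fixed, swap only the candidate's name, and compare the model's scores. Below is a minimal Python sketch of that audit. The scorer and its name-based bonus are synthetic stand-ins (planted so the audit has a disparity to detect), not any real screening model, and the names are purely illustrative.

```python
from statistics import mean

RESUME = "Candidate: {name}\nSkills: Python, SQL, 5 years of experience."

# Toy stand-in for a trained screening model. A real audit would call the
# model under test; the name-based bonus below is synthetic, planted so the
# audit has a disparity to detect.
LEARNED_NAME_PRIOR = {"Emily": 0.12, "Greg": 0.10, "Lakisha": -0.08, "Jamal": -0.10}

def score_resume(text: str) -> float:
    base = 0.5 + 0.05 * text.count("Python") + 0.05 * text.count("SQL")
    bonus = sum(b for name, b in LEARNED_NAME_PRIOR.items() if name in text)
    return base + bonus

def name_swap_gap(names_a: list[str], names_b: list[str]) -> float:
    """Average score difference between two name groups on identical resumes."""
    score = lambda n: score_resume(RESUME.format(name=n))
    return mean(score(n) for n in names_a) - mean(score(n) for n in names_b)

if __name__ == "__main__":
    gap = name_swap_gap(["Emily", "Greg"], ["Lakisha", "Jamal"])
    print(f"Mean score gap on identical resumes: {gap:+.3f}")  # nonzero => bias
```

A nonzero gap on otherwise identical resumes signals exactly the kind of name-based disparity the UW researchers measured.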
49% of job seekers believe that AI recruiting tools are more biased than human recruiters (ASA)
In a survey published by the American Staffing Association, 49% of job seekers say AI-powered recruiting tools are more biased than humans.
While AI is often marketed as an objective tool, biases can still be embedded in algorithms through biased training data or flawed design.
For example, Amazon developed an experimental AI recruiting tool designed to rate candidates on a five-star scale.
The company discovered that its algorithm systematically discriminated against women applying for technical positions such as software development.
The algorithm, trained on resumes submitted to Amazon over a ten-year period, picked up patterns from a predominantly male applicant pool and interpreted male dominance as a predictor of success; reportedly, it even penalized resumes containing the word "women's," as in "women's chess club captain."
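To see how this failure mode arises, here is a minimal synthetic sketch in Python: a logistic regression trained on invented "historical hires" in which past recruiters penalized a gendered token. All features, data, and coefficients are fabricated for illustration; this is not Amazon's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)                 # actual qualification signal
womens_token = rng.integers(0, 2, size=n)  # 1 if resume contains a gendered token

# Synthetic historical labels: past decisions rewarded skill BUT penalized
# the gendered token, so the bias is baked into the training data.
hired = (skill - 1.2 * womens_token + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, womens_token])
model = LogisticRegression().fit(X, hired)

print("coef on skill:          %+.2f" % model.coef_[0][0])
print("coef on gendered token: %+.2f" % model.coef_[0][1])  # negative: bias learned
```

The model faithfully reproduces the pattern in its training data, including the penalty, which is precisely the mechanism behind Amazon's result.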
In 82% of AI-based recruitment decisions, human recruiters blindly followed the algorithm's recommendations (Salarship)
According to data we gathered from our partner Salarship, 29 out of 35 recruiters blindly followed the AI algorithm's recommendations for hiring decisions.
In other words, recruiters rarely override AI: they made hiring decisions that contradicted the algorithm only 18% of the time.
We find this statistic quite concerning.
If recruiters blindly follow AI recommendations without questioning their fairness or accuracy, it could significantly increase the risk of hiring bias.
Compounding the problem, AI algorithms are often "black boxes," meaning their decision-making processes are not transparent or easily understood.
Only 33% of job seekers prefer to be evaluated by an algorithm (Journal of Business Ethics)
In a study published in the Journal of Business Ethics, only 33% of job seekers say they prefer to be evaluated by an algorithm.
Job seekers worry that AI can inadvertently perpetuate data bias, where the algorithms replicate past biased hiring decisions, disadvantaging certain groups.
Selection bias is another concern, as AI systems may favor candidates with similar backgrounds to those already in the data, excluding diverse talent.