ChatGPT can make managing people easier. You can use it to create SMART goals. You can use it to create a script for a fun open enrollment video. And many other things.
But ChatGPT and other AI software tools come with their own problems. They’re big enough that the EEOC issued a warning (Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964).
That’s government-speak for pay attention.
The EEOC doesn’t say “Don’t use AI to hire and manage people”, but it does say you’re responsible for what AI does.
A lawyer found this out the hard way when he submitted a brief to the court that contained a “hallucinated” case.
Side note: Hallucinated is the term people use to describe the information that ChatGPT makes up. And it does happen a lot.
In that lawyer’s experience, ChatGPT made up a court case, and the lawyer didn’t catch it. He’s now in hot water with the court.
You don’t want to be in trouble with the court for not knowing ChatGPT can make things up. And when working in HR, you also don’t want to be in trouble because ChatGPT is indeed biased.
How biased? We don’t know the full extent of the biases, but we know it has preferences.
ChatGPT was trained on the internet, and the internet was written by humans with their own biases, so it’s no surprise that some of those biases show up in its output.
Now that this is clear, here’s what you need to know about the EEOC’s warning.
Watch out for disparate impact
Disparate impact is the legal term for a practice that looks neutral on its face but produces unequal outcomes for a protected group.
For instance, you require everyone to have a college degree to work as a barista in your coffee shop, which results in fewer members of underrepresented groups working there. Because a college degree isn’t necessary for the job, that could be considered illegal discrimination through disparate impact.
Ogletree Deakins attorneys explain:
“Specifically, the EEOC reinforced for employers that, under disparate impact theory, if an employer uses an employment practice that has a disproportionate impact based on race, color, religion, sex, or national origin, an employer must show that the procedure is job-related and consistent with business necessity.”
How could this be an issue with ChatGPT?
Because you can’t see the ‘thought’ processes behind its decision-making, you don’t know what it considers. The requirement is that anything that results in disparate impact must be “job-related and consistent with business necessity.”
The EEOC writes: “The selection procedure must evaluate an individual’s skills as related to the particular job in question.”
When you have a black box algorithm (after all, you don’t see how ChatGPT makes decisions), you cannot say that the tools used to evaluate someone are consistent with business necessity.
But ultimately, you’re responsible for your decisions even when you can’t see the reasoning behind them, much like the lawyer who didn’t realize ChatGPT can in fact hallucinate court cases.
Does this mean ChatGPT and other AI tools are banned in hiring?
No! It’s not banned. You can use it to help you do any number of things. Your ATS probably already does. Workable itself uses AI technology, as does just about everyone else.
But, regardless of whether or not you use AI in the hiring process, you remain responsible for the hiring decision.
Here’s how you can check to see if your tools are causing disparate impact:
1. Do your own analysis
Take a look at the results from any AI tool and compare them to the candidate population. If there are substantial differences between races or genders, then you are right to be concerned.
The EEOC uses the four-fifths rule as a rule of thumb: if the selection rate for one group is less than four-fifths (80%) of the selection rate for the group with the highest rate, you need to be concerned about disparate impact.
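To make the rule of thumb concrete, here is a minimal sketch of the four-fifths check in Python. The group names and applicant counts are invented for illustration; this is a screening heuristic, not legal advice or a substitute for a proper adverse-impact analysis.

```python
def four_fifths_check(groups):
    """Apply the four-fifths rule of thumb to selection data.

    groups: dict mapping group name -> (selected, applicants).
    Returns (ratio, flagged): ratio is the lowest group's selection
    rate divided by the highest group's rate; flagged is True when
    that ratio falls below 0.8, suggesting possible disparate impact.
    """
    rates = {name: selected / applicants
             for name, (selected, applicants) in groups.items()}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < 0.8

# Invented numbers: 48 of 80 applicants selected in group A (60%),
# 12 of 40 in group B (30%). 30% / 60% = 0.5, below the 0.8 threshold.
ratio, flagged = four_fifths_check({"A": (48, 80), "B": (12, 40)})
print(round(ratio, 2), flagged)  # 0.5 True
```

A ratio below 0.8 doesn’t prove discrimination on its own, but it’s the point at which the EEOC says you should look more closely at the selection procedure.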
2. Ask your vendors how AI is used
If you don’t know whether your applicant tracking system uses AI technology, you need to act now. Ask! Ask your vendors how it works. It’s their job to give you all the information you need.
3. Proactively change your processes as needed
If there appears to be a disparate impact, you need to change how your selection process works. If the AI tool you use comes from a vendor, work with them to ensure a better selection process focusing on job necessities.
4. Create and enforce an AI policy
Remember, all aspects of the hiring process can be subpoenaed – including queries in ChatGPT, Bard, or any other AI software. If hiring managers use these tools to compare candidates, you must know how and when they do. Create your guidelines in consultation with your employment attorney.
Better safe than sorry
The EEOC’s new guidance is not binding, but you must pay attention to it and plan your AI usage accordingly.
AI can help greatly, but ensure you don’t inadvertently discriminate against qualified candidates.
Frequently asked questions
- What is disparate impact in recruitment?
- Disparate impact occurs when a seemingly neutral action leads to unequal results between different demographic groups during hiring.
- How does the four-fifths rule work?
The four-fifths rule states that if the selection rate for any demographic group is less than 80% of the rate for the group with the highest selection rate, it raises concerns about possible disparate impact discrimination.
- Can we still use ChatGPT or other AI tools in our hiring process even with EEOC guidelines?
Yes, but you are ultimately responsible for the decisions made while using them. Make sure your selection procedures are job-related and consistent with business necessity to avoid legal issues.
- What should I ask my vendors regarding their AI technology usage?
- Request information about how their system works, especially its logic behind decisions relating to your candidates' attributes.
- How can my organization create an effective AI policy for hiring practices?
- Involve employment attorneys as advisors when developing guidelines that outline which aspects of candidate evaluation may be supported by AI technologies – including transparency requirements from data analytics providers.