
Avoid unintended bias: learn to navigate EEOC guidance on AI in hiring

Learn how to comply with the EEOC's latest warning on using AI in the workplace: identify potential issue areas, ensure your processes meet legal requirements, and get the most out of these powerful tools by using them responsibly.

Suzanne Lucas

Suzanne, the Evil HR Lady, shares expertise, guidance, and insights based on 10+ years of experience in corporate human resources.


ChatGPT can make managing people easier. You can use it to create SMART goals, write the script for a fun open enrollment video, and handle plenty of other tasks.

But ChatGPT and other AI software tools come with their own problems. They’re big enough that the EEOC issued a warning (Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964).

That’s government-speak for pay attention.


The EEOC doesn’t say “Don’t use AI to hire and manage people”, but it does say you’re responsible for what AI does.

A lawyer found this out the hard way when he submitted a brief to the court that contained a “hallucinated” case.

Side note: Hallucinated is the term people use to describe the information that ChatGPT makes up. And it does happen a lot.

In that lawyer’s experience, ChatGPT made up a court case, and the lawyer didn’t catch it. He’s now in hot water with the court.

You don’t want to be in trouble with the court for not knowing ChatGPT can make things up. And when working in HR, you also don’t want to be in trouble because ChatGPT is indeed biased.

How biased? We don’t know the extent of the biases, but we know it has preferences.

Because ChatGPT was trained on the internet, and the internet is written by humans with their own biases, it makes sense that some of those biases show up in its output.

Now that this is clear, here’s what you need to know about the EEOC’s warning.

Watch out for disparate impact

Disparate impact is the legal term for when a practice looks neutral on its face but produces a disproportionate outcome for a protected group.

For instance, suppose you require everyone to have a college degree to work as a barista in your coffee shop, and that requirement results in fewer members of underrepresented groups working there. Because a college degree isn't necessary for the job, the requirement could be considered illegal discrimination through disparate impact.

Ogletree Deakins attorneys explain:

“Specifically, the EEOC reinforced for employers that, under disparate impact theory, if an employer uses an employment practice that has a disproportionate impact based on race, color, religion, sex, or national origin, an employer must show that the procedure is job-related and consistent with business necessity.”

How could this be an issue with ChatGPT?

Because you can’t see the ‘thought’ processes behind its decision-making, you don’t know what it considers. The requirement is that anything that results in disparate impact must be “job-related and consistent with business necessity.”

The EEOC writes: “The selection procedure must evaluate an individual’s skills as related to the particular job in question.”

When you have a black-box algorithm (after all, you don't see how ChatGPT makes decisions), you cannot show that the tools used to evaluate someone are consistent with business necessity.

But ultimately, you're responsible for your decision even if you can't see how it was reached, just like the lawyer who didn't realize ChatGPT can hallucinate court cases.

Does this mean ChatGPT and other AI tools are banned in hiring?

No! It’s not banned. You can use it to help you do any number of things. Your ATS probably already does. Workable itself uses AI technology, as does just about everyone else.

But, regardless of whether or not you use AI in the hiring process, you remain responsible for the hiring decision.

Here’s how you can check to see if your tools are causing disparate impact:

1. Do your own analysis

Take a look at the results from any AI tool and compare them to the candidate population. If there are substantial differences between races or genders, then you are right to be concerned.

The EEOC uses the four-fifths rule as a rule of thumb: if the selection rate for any group is less than four-fifths (80%) of the rate for the group with the highest selection rate, you need to be concerned about disparate impact. A quick sketch of that check appears below.
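To make that arithmetic concrete, here's a minimal Python sketch of a four-fifths check. The group names and applicant counts are hypothetical, and a real analysis should use your actual selection data and involve your employment attorney before drawing conclusions.

```python
# A minimal sketch of a four-fifths (80%) rule check.
# The group names and counts below are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who passed the screening step."""
    return selected / applicants

# Hypothetical outcomes of an AI screening step, by group.
rates = {
    "Group A": selection_rate(selected=48, applicants=80),  # 60%
    "Group B": selection_rate(selected=12, applicants=40),  # 30%
}

# Compare each group's rate to the highest-rate group.
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio vs. the highest-rate group
    verdict = "possible disparate impact" if ratio < 0.8 else "within the 80% guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.0%} -> {verdict}")
```

In this hypothetical, Group B's impact ratio is 50%, well below the 80% threshold, so you'd want to investigate. Failing the four-fifths check isn't proof of illegal discrimination, but it's the EEOC's rule of thumb for when to look closer.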

2. Ask your vendors how AI is used

If you don't know whether your applicant tracking system uses AI technology, act now. Ask your vendor how it works; it's their job to give you all the information you need.

3. Proactively change your processes as needed

If there appears to be a disparate impact, you need to change how your selection process works. If the AI tool you use comes from a vendor, work with them to ensure a better selection process focusing on job necessities.

4. Create and enforce an AI policy

Remember, all aspects of the hiring process can be subpoenaed – including queries in ChatGPT, Bard, or any other AI software. If hiring managers use these tools to compare candidates, you must know how and when they do. Create your guidelines in consultation with your employment attorney.

Better safe than sorry

The EEOC’s new guidance is not binding, but you must pay attention to it and plan your AI usage accordingly.

AI can help greatly, but ensure you don’t inadvertently discriminate against qualified candidates.
