Don’t blame AI for gender bias – blame the data

Nikoletta Bika

In early October, Reuters reported that Amazon – which emphasizes automation as a major part of its brand – had scrapped its experimental automated recruiting tool. The reason: its resume-analyzing AI discriminated against women by penalizing their resumes.

This reported malfunction doesn’t mean the system was a sexist failure, nor does it say anything about the merits of machine learning or AI in recruitment. Rather, the failure likely lay in how the system was trained.

You are what you eat

According to Reuters, the objective of Amazon’s AI was to score job candidates on a scale of 1 to 5 to assist hiring teams. But, as reported, the data the system was fed to learn how to score candidates consisted of successful and unsuccessful resumes from the past 10 years. Most of those resumes came from men, so the patterns the AI detected led it to downgrade resumes from women. Essentially, Amazon unwittingly taught its AI to replicate the bias that already existed in its overall hiring process.
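To make the mechanism concrete, here’s a minimal sketch (in Python, with invented toy resumes and labels, not Amazon’s actual system or data) of how a model trained on historically skewed hiring outcomes absorbs the skew:

```python
# A minimal sketch (not Amazon's actual system) of how a model trained on
# historically biased hiring outcomes absorbs that bias. Requires scikit-learn;
# the toy resumes and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past outcomes from a male-dominated process: resumes mentioning "women's"
# were mostly rejected, purely as an artifact of who was hired historically.
resumes = [
    "software engineer java python",          # hired
    "captain men's rugby team java",          # hired
    "software engineer women's chess club",   # rejected
    "python developer women's college",       # rejected
    "java developer open source",             # hired
    "women's coding society python java",     # rejected
]
hired = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model has
# encoded the historical bias as if it were a signal about candidate quality.
idx = vectorizer.vocabulary_["women"]  # "women's" tokenizes to "women"
print(f"learned weight for 'women': {model.coef_[0][idx]:.2f}")
```

The model has no notion of candidate quality; it simply learned that a word correlated with past rejections, which is exactly the trap Reuters describes.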

Amazon isn’t alone

This isn’t the first time a company has seen its AI break down this way; the same has happened to other companies experimenting with machine learning. For example, when researchers tested Microsoft’s and IBM’s facial-recognition features in early 2018, they found that the machines had trouble recognizing women with darker skin. The reason, again, was skewed input data: in short, if you feed a system more pictures of white men than of black women, it will be better at recognizing white men. Both companies said they had taken steps to increase accuracy.
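The same dynamic is easy to reproduce with synthetic data. The sketch below (invented numbers and features, not the researchers’ actual benchmark) trains a classifier on 900 samples from one group and only 100 from another, then shows the resulting accuracy gap:

```python
# A minimal sketch with synthetic data (invented numbers, not the researchers'
# benchmark) of how underrepresentation in training data hurts accuracy for
# the smaller group. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group draws features from its own distribution; labels depend on
    # a group-specific threshold, so one linear rule cannot fit both equally.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > shift * 5).astype(int)
    return X, y

# Train on 900 samples from group A but only 100 from group B.
Xa, ya = make_group(900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh samples: the well-represented group scores clearly higher.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
```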

You can find countless other examples, from linguistic bias in algorithms, to Google’s ad engine serving ads for high-paying jobs mostly to men, to Twitter users turning a friendly chatbot into a villain.

Hope on the horizon

Those of us fascinated with AI and its potential to improve our world may feel dejected when we realize the technology isn’t quite ready yet. But, despite our disappointment, it’s actually good news that these ‘failures’ come to light. Trial and error is what helps us learn to train machines properly. The fact that machines are not 100% reliable yet shouldn’t discourage us; it should make us even more eager to tackle design and training problems.

As SpaceX and Tesla mogul Elon Musk affirms: “Failure is an option here. If things are not failing, you’re not innovating.” In that spirit, according to Reuters, Amazon has formed a new team in Edinburgh to give automated employment screening another try, this time taking diversity into account.

AI is not a panacea

Despite growing concern that machines will take over people’s jobs, AI is unlikely to replace human critical thinking and judgment (we’ll still have the ability to create and control machines). This is especially true in hiring, where people’s careers are on the line, so we need to be careful about how we use technology. Our VP of Customer Advocacy and HR thought leader Matt Buckland sums it up nicely: “When it comes to hiring, we need to have a human process, not process the humans.”

This means that artificial intelligence is a service tool that gives us initial information and analysis to speed up the hiring process. A good system can provide you with data you can’t find yourself (or don’t have the time to find). But it shouldn’t make the final hiring decision. We humans, with our intelligence, must be the ones to select, reject or hire other humans.

At Workable, we keep all this in mind when developing People Search and Auto-Suggest, our very own AI features.

Our VP of Data Science, Vasilis Vassalos, explains: “Our efforts center on rendering our data more neutral by excluding demographics and gendered language when training our models. And, of course, to train our AI, we use a wide range of anonymized data, not only our own as Workable, but also data from the millions of candidates that have been processed in our system, so we can cancel out the bias of each individual hiring process.”
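As a rough illustration, and an assumption about one possible approach rather than Workable’s actual pipeline, excluding gendered language before training might look something like this (the term list and function are hypothetical):

```python
# A rough sketch of one possible debiasing step: stripping gendered tokens
# before a model ever sees the text. This is an assumption for illustration,
# not Workable's actual pipeline; the term list is hypothetical and far from
# complete (a production system would use a curated lexicon).
import re

GENDERED_TERMS = {
    "he", "she", "him", "her", "his", "hers", "himself", "herself",
    "man", "woman", "men", "women", "men's", "women's", "male", "female",
}

def neutralize(text: str) -> str:
    """Drop gendered tokens so a downstream model cannot key on them."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(t for t in tokens if t not in GENDERED_TERMS)

print(neutralize("Captain of the women's chess club; she led her team."))
# -> captain of the chess club led team
```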

We’re also careful about how our tool will be used. “Perhaps the most important thing,” Vasilis adds, “is that we don’t allow our AI to make significant choices. The very name ‘Auto-Suggest’ implies it’s used to make suggestions, not decisions.”

Of course, our methods and artificial intelligence itself will continue to improve. “We recognize the difficulty of algorithmically promoting diversity and training machines to be fair,” says Vasilis. “But, as the technology advances, we’ll keep improving our practices and product to make hiring even more effective.”

Looking for an all-in-one recruiting solution? Workable can improve candidate sourcing, interviewing and applicant tracking for a streamlined hiring process. Sign up for our 15-day free trial today.


Nikoletta Bika

Nikoletta Bika is a senior writer at Workable and holds an MSc in HR. She writes about all things HR and recruiting, with a particular interest in bias, data, technology and the future of work. She hates meaningless jargon and dreams about space travel. She tweets @Nikoletta_Bika.

