Don’t blame AI for gender bias – blame the data
In early October 2018, Reuters reported that Amazon, which emphasizes automation as a major part of its brand, had scrapped its experimental automated recruiting tool. The reason: the resume-screening AI discriminated against women by downgrading their resumes.
This reported malfunction doesn’t mean the technology itself is sexist, nor does it settle the merits of machine learning or AI in recruitment. Rather, the failure likely lies in how the system was trained.
You are what you eat
Reuters identifies the objective of Amazon’s AI as scoring job candidates on a scale of 1 to 5 to assist hiring teams. But, as reported, the system learned how to score candidates from the “successful” and “unsuccessful” resumes Amazon had collected over the previous 10 years. Most of those resumes came from men, so the patterns the AI detected led it to downgrade resumes from women; reportedly, it even penalized resumes that contained the word “women’s.” Essentially, Amazon unwittingly taught its AI to replicate the bias that already existed in its overall hiring process, according to Reuters.
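To make the mechanism concrete, here’s a toy sketch in Python with scikit-learn. It is emphatically not Amazon’s system: the resumes, hiring labels and word choices are invented. It simply shows how a classifier trained on historically skewed outcomes absorbs the bias baked into its labels.

```python
# Toy illustration (not Amazon's actual system): a text classifier
# trained on historically biased hiring outcomes learns to treat a
# gendered word as a negative signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical history: resumes mentioning "women's" were rarely
# among the "successful" ones, because most past hires were men.
resumes = [
    "captain of the chess club, python developer",
    "software engineer, led backend team",
    "machine learning engineer, built data pipelines",
    "captain of the women's chess club, python developer",
    "women's coding society organizer, software engineer",
    "women's hackathon winner, machine learning engineer",
]
hired = [1, 1, 1, 0, 0, 0]  # 1 = hired in the past, 0 = rejected

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative: the model has
# simply absorbed the bias already present in the labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.2f}")
```

Nothing in this code “decides” to be sexist; the negative weight falls straight out of the skewed labels, which is exactly why biased training data produces biased scores.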
Amazon isn’t alone
This isn’t the first time a company has seen its AI go wrong, and the same has happened to other companies experimenting with machine learning. For example, when researchers tested Microsoft’s and IBM’s facial-recognition features in early 2018, they found that the systems had trouble recognizing women with darker skin. The reason, again, was skewed input data: in short, if you feed a system more pictures of white men than of black women, it will be better at recognizing white men. Both companies said they had taken steps to increase accuracy.
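One way researchers surface this kind of failure is disaggregated evaluation: instead of reporting a single overall accuracy number, they break results down by subgroup. Here’s a minimal sketch; the groups and results below are made up for illustration.

```python
# Disaggregated evaluation: report accuracy per subgroup instead of
# one overall number. The data below is invented for illustration.
from collections import defaultdict

# (subgroup, was_the_prediction_correct) for each test image
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")
```

An aggregate score of 62.5% would hide the fact that one group sees 100% accuracy and the other only 25%; the per-group breakdown makes the gap impossible to miss.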
You can find countless other examples, from the linguistic bias of algorithms, to Google’s engine serving ads for high-paying jobs mostly to men, to Twitter users turning a friendly chatbot into a villain.
Hope on the horizon
Those of us fascinated with AI and its potential to improve our world may feel dejected when we realize the technology isn’t quite ready yet. But, despite our disappointment, it’s actually good news that these ‘failures’ come to light. Trial and error is how we learn to train machines properly. The fact that machines aren’t 100% reliable yet shouldn’t discourage us; it should make us even more eager to tackle design and training problems.
As SpaceX and Tesla mogul Elon Musk puts it: “Failure is an option here. If things are not failing, you’re not innovating.” In that spirit, according to Reuters, Amazon has formed a new team in Edinburgh to give automated employment screening another try, this time taking diversity into account.
AI is not a panacea
Despite growing concern that machines will take over people’s jobs, AI is unlikely to replace human critical thinking and judgment; we’ll still be the ones who create and control the machines. This is especially true in hiring, where people’s careers are on the line, so we need to be careful about how we use the technology. HR thought leader Matt Buckland, who was VP of Customer Advocacy at Workable for two years, sums it up nicely: “When it comes to hiring, we need to have a human process, not process the humans.”
This means that artificial intelligence is a support tool: it gives us initial information and analysis to speed up the hiring process. A good system can provide you with data you can’t find yourself (or don’t have the time to look for). But it shouldn’t make the final hiring decision. We humans, with our own intelligence, must be the ones to select, reject or hire other humans.
At Workable, we keep all this in mind when developing our own AI features, People Search and AI Recruiter.
Our VP of Data Science, Vasilis Vassalos, explains: “Our efforts center on rendering our data more neutral by excluding demographics and gendered language when training our models. And, of course, to train our AI we use a wide range of anonymized data, not only Workable’s own, but also data from the millions of candidates processed through our system, so we can cancel out the bias of any individual hiring process.”
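For a sense of what “excluding gendered language” can look like in practice, here’s a simplified sketch of one such preprocessing step. The word list, replacements and neutralize function are illustrative assumptions, not Workable’s actual implementation.

```python
import re

# A simplified sketch of one debiasing step: stripping or replacing
# gendered language before text reaches a model. The word list and
# replacement scheme are illustrative, not Workable's implementation.
GENDERED_TERMS = {
    r"\bhe\b": "they",
    r"\bshe\b": "they",
    r"\bhis\b": "their",
    r"\bher\b": "their",
    r"\bwomen'?s\b": "",
    r"\bmen'?s\b": "",
    r"\bchairman\b": "chair",
    r"\bchairwoman\b": "chair",
}

def neutralize(text: str) -> str:
    """Remove or replace gendered terms (a deliberately crude pass)."""
    for pattern, replacement in GENDERED_TERMS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()  # tidy leftover spaces

print(neutralize("Captain of the women's chess club; she led the team"))
# -> Captain of the chess club; they led the team
```

A real pipeline would be far more careful (about capitalization, grammar and context, for a start), but the idea is the same: signals that encode gender never make it into training.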
We’re also careful about how our tool will be used. “Perhaps the most important thing,” Vasilis adds, “is that we don’t allow our AI to make significant choices. The AI Recruiter feature is designed to make suggestions, not decisions.”
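In code terms, a “suggestions, not decisions” design means the model ranks candidates for a human to review rather than accepting or rejecting anyone on its own. The sketch below is a hypothetical illustration of that pattern; the function and field names are ours, not Workable’s.

```python
# A sketch of "suggestions, not decisions": the model ranks
# candidates for a recruiter to review instead of auto-rejecting
# anyone. Function and field names are hypothetical.
from typing import Dict, List

def suggest_candidates(scored: List[Dict], top_n: int = 5) -> List[Dict]:
    """Return the top-scored candidates for human review.

    No one is filtered out: lower-ranked candidates stay in the
    pipeline, and the final call always belongs to the recruiter.
    """
    return sorted(scored, key=lambda c: c["score"], reverse=True)[:top_n]

candidates = [
    {"name": "A", "score": 0.91},
    {"name": "B", "score": 0.72},
    {"name": "C", "score": 0.85},
]
for c in suggest_candidates(candidates, top_n=2):
    print(f"Suggested for review: {c['name']} ({c['score']:.2f})")
```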
Of course, our methods and artificial intelligence itself will continue to improve. “We recognize the difficulty of algorithmically promoting diversity and training machines to be fair,” says Vasilis. “But, as the technology advances, we’ll keep improving our practices and product to make hiring even more effective.”
Frequently asked questions
- How does Amazon use AI in HR?
- Amazon experimented with AI-driven software that scored applicants’ resumes to make sorting easier for its recruiters. According to Reuters, the first experimental tool was scrapped after it penalized women’s resumes, and a new team has since been formed to try automated screening again.
- How is AI used in recruitment?
- Recruiters can use artificial intelligence to save time and energy. AI for recruiting represents an opportunity to reduce the hours spent on repetitive, laborious tasks such as resume screening, while leaving the final hiring decisions to humans.
- Is Amazon working on AI?
- Yes. After scrapping its first experimental screening tool, Amazon reportedly formed a new team in Edinburgh to take another shot at automated employment screening, this time with diversity in mind.