How organizations can help shape the future of AI in recruiting – and reap the benefits

Excited about a world where AI in recruiting will immensely improve your hiring process? We live in a fascinating time because this scenario is right around the corner – and you, the HR professional, may be able to bring it even closer.

Engineers who build AI systems need data to train their machines, and they also need feedback to determine what works and what doesn’t. This is where organizations can contribute: they have access to data and they’re in a position to actually test the technology in the field.

This topic was part of my conversation with Matt Alder, the reputable British HR thought leader and host of the Recruiting Future podcast. During an hour-long phone conversation, we discussed concrete ways businesses can play their part in shaping a world of powerful recruiting AI tools.

See also our discussion on the state and future of AI in recruiting and whether machines can really take recruiters’ jobs.

Technology in our own image

The data we use to train our machines is essential to a successful AI-driven recruitment strategy. If the data is inaccurate, incomplete, skewed or one-dimensional, the machine’s “intelligence” will suffer.

So, we need to choose our data carefully. This is tougher than it sounds, because sometimes we don’t even realize we’re looking at biased or incomplete data samples. Being only human, we have inherent difficulty identifying our own shortcomings, and flawed data causes machines to replicate our biases, opinions and behaviors. The old adage of “garbage in, garbage out” applies readily here.

One example is the apparent apathy, evasion, or occasional positive response of virtual assistants Siri and Alexa when faced with verbal sexual abuse from users. They were programmed to respond in certain ways to various forms of harassment that human creators might have thought were “OK” (they’re not). This is something companies that make these AIs are trying to tackle, as Quartz reported.

In the recruiting world, automated tools don’t make final hiring decisions, so how much does bias matter? There’s an interesting caveat here. Matt discussed this in a recent Recruiting Future podcast when he interviewed Miranda Bogen from Upturn, a non-profit think tank promoting equity and justice in the design and use of digital technology.

Upturn recently published a report on bias in hiring algorithms. Based on that report, Miranda explained that, while AI in recruiting doesn’t decide who gets hired, it can decide who won’t get hired – and that may often be people with certain characteristics. One example is Google’s ad algorithm, which showed ads for higher-paying jobs far more often to men than to women because it predicted men were more likely to click on them. In effect, it limited women’s chances of even learning about those job opportunities. Upturn’s report also notes that this bias can persist even when you obscure attributes like gender and race during training. That’s partly because the datasets we have available are inherently correlated with systemic biases.
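To make that last point concrete, here is a minimal Python sketch with entirely fabricated toy data (the groups, the “proxy” feature and the numbers are all illustrative, not from Upturn’s report): even when the protected attribute is removed from training, a correlated feature can carry the same signal.

```python
# Fabricated toy data: past hiring decisions that favored group A,
# plus a 'proxy' feature (say, a hobby keyword on the CV) that
# happens to correlate with group membership.
rows = [
    {"group": "A", "proxy": 1, "hired": 1},
    {"group": "A", "proxy": 1, "hired": 1},
    {"group": "A", "proxy": 0, "hired": 0},
    {"group": "B", "proxy": 1, "hired": 1},
    {"group": "B", "proxy": 0, "hired": 0},
    {"group": "B", "proxy": 0, "hired": 0},
]

# A "blind" model never sees 'group', yet it can learn to reproduce
# the old decisions from the proxy alone.
predictions = [row["proxy"] for row in rows]

def predicted_hire_rate(group):
    """Share of a group's candidates the blind model would advance."""
    members = [p for p, r in zip(predictions, rows) if r["group"] == group]
    return sum(members) / len(members)

# The protected attribute was hidden, but the disparity remains.
print(predicted_hire_rate("A"), predicted_hire_rate("B"))
```

The model here is deliberately trivial (it just copies the proxy), but the mechanism is the same one Miranda describes: hiding gender or race doesn’t help if other features stand in for them.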

So there’s a legitimate philosophical question: could we really create technology that doesn’t replicate our limitations and biases? Well, we have done so in other branches of tech: for example, our naked eye can’t see details far away in space, but our telescopes can. Intelligent machines could work the same way – complementing and enhancing our abilities.

How we can do that is less clear. Matt reflects on this:

“I think this is perhaps the biggest dilemma over the next few years; how do we actually make technology be better than humans?”

When humans are the designers, therein lies the challenge.

We need to go smarter

As Matt emphasizes, the first step in building machines that make recruiting objective rather than subjective is to consciously understand our own biases. That involves not only the ‘what’, but also the ‘how.’ “If we’re going to make HR technology that doesn’t share human bias,” says Matt, “then we need to understand more about where that kind of bias comes in.”

Recruiting professionals are probably in the best position to identify these issues in the hiring process. Monitor your hiring metrics for patterns: gender and race bias, for example, can surface when you compare the rates at which female or non-white applicants apply and move through each hiring stage. Also, regularly talk to your hiring teams about the criteria they use to make decisions, and be on the lookout for criteria that aren’t strictly job-related.
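As a sketch of what that monitoring can look like, the Python below compares pass-through rates between groups using made-up stage counts, and applies the EEOC’s well-known “four-fifths rule” heuristic, under which a ratio below 0.8 is a possible sign of adverse impact. The numbers and group labels are illustrative, not from the article; in practice you would export these counts from your applicant tracking system.

```python
# Illustrative stage counts; in practice, export these from your ATS.
applicants = {"women": 200, "men": 300}
advanced = {"women": 30, "men": 90}  # e.g., moved to interview stage

# Pass-through rate: share of each group's applicants who advanced.
rates = {g: advanced[g] / applicants[g] for g in applicants}

# Adverse-impact ratio: lowest selection rate / highest selection rate.
# The EEOC's "four-fifths rule" treats ratios below 0.8 as worth
# investigating; it is a screening heuristic, not proof of bias.
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)
if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f} is below 0.8 - investigate")
```

A ratio like this won’t tell you *why* one group falls out of the funnel, only *where*, which is exactly the kind of self-awareness the next step builds on.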

Once you have started collecting this type of data and insights, make a systematic effort to mitigate biases wherever they appear. For example, you could try out more objective hiring tools, like structured interviews, and train your interviewers to overcome their unconscious (and occasionally conscious) prejudices.

Also, it’d be useful to participate in the discussion with fellow recruiters in forums or in person to exchange information about existing biases and possible strategies to deal with them. Our collective knowledge and awareness of biases can help companies that make AI in recruiting tools design their products more effectively.

We also need variety

When it comes to AI in recruiting, one of the problems is that the data we’ve used hasn’t been very creative, as Matt points out:

“I think the problem is we still work off CVs which are hopeless in actually telling you what someone’s performance is going to be,” Matt says, “which is why we’re seeing more of other data points coming in, whether it’s facial recognition or tone of voice or various assessments. A CV isn’t going to give even the cleverest form of artificial intelligence enough information to make proper decisions.”

This relates to cases like the Amazon AI recruiting tool which reportedly rejected female candidates because it was mainly trained with resumes of men – in other words, Amazon’s attempt at AI-driven recruitment failed because of an overreliance on past datasets. If we train models using multiple data points, we might avoid those biases and inconsistencies that come with a single dataset.

So, if your company builds AI for HR, or you collaborate closely with an AI vendor, consider using a variety of hiring methods, such as assessments and video interviews, to enrich the types of data used for training AI tools.

You can also help ensure we model what’s meaningful for our purpose. “It’s modeling around what high performers look like,” says Matt. “If we’re modeling their facial expressions, is that going to give us the right match? So we’re modeling their behaviors, their attitudes, their values, but what aspect are we looking for? What aspects are actually repeatable in terms of finding someone who matches what we want?”

Trial and Error

Experimenting is how we learn. And that’s perhaps the most important way a company can contribute to how machines are trained: with real-life data. Try out AI tools and measure the results systematically. That way, we’ll soon have more evidence of what works and what doesn’t.

To start experimenting with AI in recruiting, consider these four steps:

1. Understand your current process

In addition to identifying biases in your hiring process, dissect your existing hiring strategies. “I think a lot of it is about understanding current process,” Matt says. “How does it work? Where are the problems with it? What’s the experience like? In a large business, it could be really complicated. There could be [many] stakeholders and moving parts and people might not fully understand exactly what’s going on.”

Audit your recruiting process, and identify the stakeholders and their roles. Use recruiting metrics to spot issues and bottlenecks. Then you’ll have an indication of which aspects might benefit from a level of automation or AI tools.
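One simple way to put numbers on those bottlenecks is to compute stage-to-stage conversion rates for your hiring funnel. The sketch below uses hypothetical stage names and counts; substitute the stages and volumes of your own process.

```python
# Hypothetical pipeline counts for one role; replace with your own
# recruiting metrics.
funnel = [
    ("applied", 500),
    ("screened", 120),
    ("interviewed", 40),
    ("offered", 8),
    ("hired", 6),
]

# Conversion rate between consecutive stages; an unusually low step
# flags a bottleneck that may deserve automation or a closer look.
conversions = []
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    conversions.append(rate)
    print(f"{stage} -> {next_stage}: {rate:.0%}")
```

In this made-up funnel, for instance, only a fifth of interviewed candidates receive an offer, which would prompt a closer look at the interview stage before reaching for any tooling.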

“Gaining that understanding and that self-awareness of what’s going on within the organization is a good place to start,” says Matt.

2. Feel the pulse

Another aspect is to understand the environment. Matt clarifies: “Understanding what the technology can or can’t do, looking at companies that are trying [AI in recruiting] and looking at their results is equally important.

“And then it’s about matching the two together. How can this technology realistically solve our niche problems? And if it can, how do we implement it in a way that actually works?”

3. See what AI in recruiting is available

Once you’ve delved into your hiring process and followed what other companies are doing, look for available tools. “Understanding what’s available and what’s out there is important,” says Matt.

“Look into the market and see what can now be done. Someone could have created something that’s the answer to all your problems and you just don’t know it exists,” he says. “And that’s […] confusing and difficult because there’s so much noise out there. But actually having a good view of what’s available is critical.”

Of course, when vendors claim that their AI tools are completely unbiased, take those claims with a grain of salt. As Miranda Bogen said in the Recruiting Future podcast: “As predictive tools have access to more and more data, there’s more risk this data is closely associated or even a proxy for protected categories [which tools shouldn’t take into account in order to be bias-free].”

If you’re already using automated tools, work with vendors to test and validate them regularly.

4. Remember the candidate

Candidates’ reactions to AI in recruiting are just as important as the effectiveness of tools themselves. “Do the people I’m trying to hire actually like being interfaced with automatically in this way?” asks Matt. “Because if they don’t, and my competitor is taking a more human approach, then I might miss out on some great talent.”

As Matt mentions, there may be cases where candidates welcome automation; for example, when it improves communication about the status of their application. “The biggest complaint candidates have is the black hole that comes through recruitment, where they just don’t know what’s going on, what stage they’re in the process, what the next steps are, what people think of them. And I think technology can fill that gap.”

Sometimes though, candidates may be confused as to the role of technology in the hiring process.

“There’s maybe some fear and misunderstanding about how technology is used to screen out and select people,” says Matt. “And certainly some of the publicity that has come out recently around bias isn’t good. I tend to find that people overestimate how much AI in recruiting is actually responsible for whether they are chosen or not.”

People worry that they’re being screened out of a job by a faceless machine, without a human ever getting the chance to consider them.

And that can be especially true with tools like face-recognition software. “It’s very easy to get carried away and think ‘the expressions on my face is how people are going to decide whether I’m going to be a high performer in this job or not.’”

This brings us back to the importance of using multiple data points in AI recruiting tools, to lessen dependence on any single one, Matt reminds us. “[Face-recognition software] is just one data point amongst many other things.” Hiring can rarely be reduced to a single decision anyway, as Upturn’s report stresses.

Things are already happening

“There are some businesses where people are effectively being hired with an automated process,” says Matt, “and they might not go actually talk to someone until their first day. It’s a really interesting time. I think that we don’t really know what the answers are going to be in all of this, and a lot of it is experimentation and feedback.”

Matt mentions some companies are trying out automation for volume hiring and graduate hiring. For example, replacing multiple interviews with one video interview at the start reduces the number of candidates you’ll have to meet in person, and means candidates don’t have to go through as many hiring stages as before. It’s an effort to improve efficiency and the overall candidate experience.

“Now again, it’s still early days,” Matt reminds us. “Will they revisit that in three or four years’ time and say ‘the people we hired weren’t as good as the people we used to hire when humans did it’? But still, it certainly makes sense in terms of recruitment and selection process improvement.”

And having real-life examples and data will transform how AI in recruiting is built and applied, which benefits organizations in many ways. Matt recalls another time when new technology was put to the test:

“I remember back in the late ’90s, early 2000s, when recruiting on the internet became a thing. There was a huge amount of mistakes, and horrible things happened, but that didn’t mean that online recruitment wasn’t going to be big. It just wasn’t perfect straight away.”

Matt adds, “Several companies experimented and stuck with it, and contributed to the debate, and gave feedback, and helped shape what the vendors were offering. They’re the companies that benefited the most in the long term.”

So, don’t be afraid to open up to new technology. If you’re an early adopter, you’ll also be the first to benefit when AI technology becomes a smoothly operating aspect of the mainstream recruitment process. Matt reminds us that automation is already widely used and you can find many tools to apply to your recruitment efforts. Experiment with them.

“Be very critical, very analytical about what the results actually are and whether they’re what you want or not.”
