Products infused with AI are being introduced into recruiting at an increasing pace, and it seems like a question of when, not if, AI will start replacing recruiters. While there’ll always be barber school to fall back on, consider what levels AI has to reach before the technology becomes an existential threat to recruiters.
Where AI Is a Threat
Any AI product that exists today, in recruiting or elsewhere, can only perform a very limited and specific task. Take Google’s Duplex, which launched with a demonstration of the system calling a hair salon and booking a haircut appointment in a natural-sounding conversation. The demonstration is impressive, but Duplex can only book haircut appointments and restaurant reservations. Google’s own announcement notes that Duplex is limited to “… closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.”
When evaluating the impact of AI on recruiting, look at the types of jobs being filled, in terms of the qualifications required. On the low end there are millions of jobs in retail, food service, hospitality, and even healthcare, where the qualifications required can be summed up as having a pulse and lacking a felony. Often, the deciding criterion is simply whether a candidate is available to work a certain shift or schedule. For employers, the challenge is finding candidates. What’s needed is pattern recognition that can search data, such as social media posts, to determine who is unemployed or otherwise available and has done similar work before. Combining that with a messaging application (email, or perhaps a version of Duplex) creates a complete sourcing solution that eliminates any need for human intervention. Based on current trends, we may be only a few years, perhaps fewer than five, from automating how jobs like these are filled.
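To make that concrete, here is a minimal sketch, in Python, of what such a pipeline might look like: a pattern scan over public posts to flag people who appear to be available, followed by a templated outreach message. The signal phrases, names, and message template are invented for illustration; any real product would rely on far richer data and models.

```python
import re

# Hypothetical availability signals an employer might look for in public posts.
# These phrases and the outreach template are illustrative, not a real product's rules.
AVAILABILITY_PATTERNS = [
    r"open to work",
    r"looking for (a )?(new )?(job|role|shift)s?",
    r"last day at",
    r"just got laid off",
]

def flag_available_candidates(posts):
    """Return candidates whose recent posts match an availability pattern."""
    flagged = []
    for candidate, text in posts:
        if any(re.search(p, text.lower()) for p in AVAILABILITY_PATTERNS):
            flagged.append(candidate)
    return flagged

def draft_outreach(candidate, shift):
    """Fill a message template -- the 'messaging application' half of the pipeline."""
    return (f"Hi {candidate}, we're hiring for the {shift} shift nearby. "
            "Interested in a quick chat?")

# Invented sample data to show the flow end to end.
posts = [
    ("Dana", "Just got laid off from the cafe, looking for a new job asap"),
    ("Riley", "Great weekend hiking with the dogs!"),
]
for name in flag_available_candidates(posts):
    print(draft_outreach(name, "weekend evening"))
```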
How Much AI Has to Advance
Filling other types of jobs is a more complex undertaking. There’s a lot more involved in sourcing, screening, and selection for jobs with higher skill requirements, and consequently the AI needed has to be more advanced. AI experts Judea Pearl and Dana Mackenzie describe three levels of the technology in their book The Book of Why.
The first is Association: knowing that certain items or patterns in data are likely indicators of certain facts. For example, certain combinations of words in a resume indicate that a candidate has a particular skill, or certain events signal that a candidate is more likely to be job hunting (12 percent more likely before a birthday and 16 percent more after a class reunion). This is the level that most AI today has attained; facial recognition represents some of the most advanced development at this level.
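A toy example of association in a recruiting context might look like the sketch below. The skill names and indicator terms are invented, and real systems learn these associations from data rather than hard-coding them, but the idea is the same: certain patterns of words stand in for certain facts about the candidate.

```python
import re

# A minimal sketch of association: co-occurring resume terms treated as
# indicators of a skill. The indicator lists here are invented for illustration.
SKILL_INDICATORS = {
    "food service": {"barista", "cook", "servsafe", "pos"},
    "patient care": {"cna", "vitals", "charting", "phlebotomy"},
}

def infer_skills(resume_text, min_hits=2):
    """Associate a resume with a skill when enough indicator terms appear."""
    tokens = set(re.findall(r"[a-z]+", resume_text.lower()))
    return [skill for skill, terms in SKILL_INDICATORS.items()
            if len(tokens & terms) >= min_hits]

print(infer_skills("Line cook for three years, ServSafe certified, ran the POS"))
# -> ['food service']
```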
The second level is Intervention: acting, then reacting to the result of that action. A product like Duplex is an early-stage version of AI using intervention: the application adjusts its responses based on what it hears until it completes a reservation. A career site that matches candidates to jobs and then adjusts its messaging based on the actions candidates take would be another example. In both cases, the AI can only work within a very narrow domain.
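A stripped-down sketch of intervention could look like this: the system sends one of two invented message variants, observes whether candidates reply, and drifts toward the variant that works better. The variant names and reply probabilities are made up, and real systems would run far more sophisticated experiments, but the act-observe-adjust loop is the point.

```python
import random

# A toy sketch of intervention: act, observe the result, adjust. A career site
# picks between two invented message variants and shifts toward whichever one
# candidates actually respond to.
stats = {"short_note": [1, 2], "detailed_pitch": [1, 2]}  # [replies, sends]

def choose_variant(epsilon=0.1):
    """Mostly exploit the variant with the better observed reply rate; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / stats[v][1])

def record_outcome(variant, replied):
    stats[variant][1] += 1
    if replied:
        stats[variant][0] += 1

# Simulate 200 candidates; suppose the detailed pitch truly gets more replies.
for _ in range(200):
    variant = choose_variant()
    replied = random.random() < (0.35 if variant == "detailed_pitch" else 0.15)
    record_outcome(variant, replied)

print({v: round(replies / sends, 2) for v, (replies, sends) in stats.items()})
```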
The third and top level is Counterfactual: being able to imagine results, reflect on actions, and assess other scenarios. In a sense, this is what it means to be human. AI would have to know how to develop a sourcing strategy for different types of candidates; how to engage a candidate; how to have a conversation with a hiring manager to determine what they value most among the job requirements; and then how to evaluate a candidate on that basis. After being turned down by candidates, the AI application would have to ask itself what it could have done differently and act on that in the future. An AI product that can conduct an actual conversation outside of a specific domain would have to attain this level, because spoken language is so complex and people react differently based on context, intonation, pauses, and volume. We’re a very long way from AI being able to do that.
Even applications like IBM’s Deep Blue, which defeated Garry Kasparov at chess, Watson, which won Jeopardy!, and Google DeepMind’s AlphaGo, which beat the world’s best Go player, have only attained the level of intervention. Their success did not come from blindly trying every possible combination but from adjusting strategy based on the results of previous moves and games. Even so, each could only do that in one specific domain.
When AI Will Start to Become Intelligent
Judea Pearl, a professor at UCLA, expects it will take decades for AI systems to attain that top level. AI today is based on probabilities: the likelihood that a certain pattern means something. A pattern of words, a pattern of shapes, a pattern of sounds. This works well up to a point, for matching resumes to job descriptions, associating pictures with traits, and attributing meaning to sounds. Larger data sets can make the probability estimates better, but each model is still limited to one task in a specific domain. The algorithms can be chained together to complete more complex tasks, as when an Alexa speaker is asked to search for something, but the result is still only an illusion of intelligence.
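To show how little machinery pattern-probability matching requires, here is a sketch of resume-to-job matching as a plain bag-of-words similarity score. The sample texts are invented and real matching engines use far richer representations, but the underlying question is the same: how closely do the patterns overlap?

```python
import math
import re
from collections import Counter

# A sketch of pattern matching: score a resume against a job description
# with a simple bag-of-words cosine similarity. Sample texts are invented.
def vectorize(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

job = "Retail associate for weekend shifts, cash register and customer service"
resume = "Customer service experience, comfortable on a cash register, weekend availability"
print(round(cosine_similarity(vectorize(job), vectorize(resume)), 2))
```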
For AI to advance further, applications need to understand cause and effect, which in turn requires them to be equipped with a model of the environment they have to work in. If an application does not have a model of reality, it cannot behave intelligently in that reality. After that, applications may be able to create such models on their own and refine them based on empirical evidence.
For most recruiters there’s no need to click on that barber-school application just yet.