
Warning: Do Not Use AI in Virtual Hiring

Jun 24, 2020

Lately there’s been a mass adoption of video interviewing and virtual recruitment tools, many of which tout various AI features. But beware: AI is a dangerous game.

The Use of AI in Recruitment

It takes a typical U.S. employer six weeks to fill a role, which costs roughly $4,000. So the desire to reduce hiring costs and speed up the recruitment process has understandably piqued people’s curiosity about AI.

In turn, AI vendors promise to help find the right person for the job and screen out unfit candidates more quickly and affordably. For instance, AI-driven candidate assessments analyze people’s facial movements, word choice, and tone of voice in an attempt to determine their employability.

But what ethical, legal, and privacy implications does all this introduce into the hiring process? There are many yet-unanswered questions about the accuracy of AI assessments, and there are serious concerns over AI in recruitment, particularly in video interviewing.

All of which raises the question: Is AI worth the risk?

Public Perception of AI-Driven Assessments

AI is certainly trendy. It’s sparked some serious discussion, and the subject keeps popping up in news headlines. As reported by The Washington Post, the Electronic Privacy Information Center (EPIC), a prominent rights group, is urging the Federal Trade Commission (FTC) to stop the “biased, unprovable, and not replicable” results of AI video interview assessments.

Another Post article points to other concerns from researchers who are equally skeptical of AI’s ability to predict employability. Critics interviewed argue that AI is:

  • “Digital snake oil — an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact”
  • “Profoundly disturbing”
  • “Really troubling”
  • “Worryingly imprecise”
  • “Woefully unprepared for the vast cultural and social distinctions in how people show emotion or personality”
  • “Dehumanizing, invasive, and built on flawed science that could perpetuate discriminatory hiring practices”

Most of these concerns stem from the perceived pseudoscience of AI and the very real workplace anti-discrimination laws with which it may not comply. The U.S. Senate, too, expressed worry that facial analysis technologies might have negative implications for equal opportunity among job candidates. 

There are further reservations that machines might take our jobs. Research by PwC shows that automated bots could take up to 38% of jobs in the United States, 30% in the United Kingdom, 21% in Japan, and 35% in Germany. Meanwhile, Forbes points out that no matter how sophisticated AI platforms become, “human judgment must always remain at the helm.”

What use is accelerating hiring if it costs you your job?

At the same time, some employers worry that the use of AI may alienate candidates. AI is still a relatively new way of analyzing applicants, and being assessed by it can be off-putting, even leading candidates to underperform in an interview. That makes for a bad candidate experience that could hurt an organization’s reputation.

Factors That May Negatively Affect AI Accuracy

Current AI technology is notoriously prone to misunderstanding meaning and intent. A big reason for this is the vast cultural and social variations in how people express themselves. Speech-recognition software may not accurately assess people with regional and non-native accents. And facial analysis systems can struggle to read some faces, such as people with darker skin or women wearing certain shades of lipstick.

Technology can also limit accuracy. If an applicant’s video quality or camera angle isn’t perfect, the algorithm could make a mistake. The same potential problem applies to poor audio connections. Automated audio transcriptions are not 100% accurate yet, which can lead to the wrong keywords being picked up and incorrectly interpreted by the AI engine.
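
To make the transcription risk concrete, here is a toy Python sketch (the keyword list, example phrases, and helper function are all hypothetical, not any vendor’s actual pipeline) showing how a couple of plausible speech-recognition errors can make a candidate’s genuine skills invisible to a keyword-matching screen:

```python
# Toy illustration: how transcription errors can change what a
# keyword-matching screen "hears". All names and data are invented.

REQUIRED_KEYWORDS = {"kubernetes", "terraform", "python"}  # hypothetical job keywords

def matched_keywords(transcript: str) -> set[str]:
    """Return the required keywords found in a lower-cased transcript."""
    return REQUIRED_KEYWORDS & set(transcript.lower().split())

accurate = "I deploy services with Kubernetes and Terraform daily"
garbled = "I deploy services with cooper netties and terra form daily"  # plausible ASR errors

print(matched_keywords(accurate))  # {'kubernetes', 'terraform'} (order may vary)
print(matched_keywords(garbled))   # set() -- the candidate's real skills vanish
```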

But the critiques go deeper than accuracy. Research by Upturn, a nonprofit organization that claims to promote equity and justice in the design, governance, and use of digital technology, points out that some “skeptics question the legitimacy of using physical features and facial expressions that have no credible, causal link with workplace success to make or inform hiring decisions.” Others worry AI rewards “irrelevant or unfair factors, like exaggerated facial expressions,” and penalizes “visible disabilities or speech impediments.”

The Potential for Bias 

Experts and activists warn that AI-driven hiring tools can be just as biased as humans. After all, computer algorithms are programmed by humans. And as we all know, humans often come with biases. As Meredith Whittaker, executive director of AI Now Institute, points out, “We need to ask ourselves: What assumptions about worth, ability, and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded?”

The Upturn report also showed that predictive hiring tools (a.k.a. AI) can reflect institutional and systemic biases. It matter-of-factly states that “when the training data for a model is itself inaccurate, unrepresentative, or otherwise biased, the resulting model and the predictions it makes could reflect these flaws in a way that drives inequitable outcomes.” Yikes.

The study further explains automation bias, a phenomenon that occurs when people “give undue weight to the information coming through their monitors. When predictions, numerical scores, or rankings are presented as precise and objective, recruiters may give them more weight than they truly warrant.” That’s some serious food for thought.

Lack of Transparency 

People want — and deserve — to know if, when, and how AI assesses job candidates. But employers and job-seekers alike currently have little visibility into how it works.

Vendors argue that their technology is proprietary and must be kept secret to stay protected. A fair point, but this makes vendor claims difficult to verify or challenge. The lack of transparency has ushered in new regulation, such as the Artificial Intelligence Video Interview Act in Illinois, meant to protect jobseekers and ensure more visibility.

The reality is this: Sufficient safeguards are not yet in place to ensure that companies are responsibly using AI for hiring purposes. AI vendors should allow independent auditing of their predictive hiring tools — and make these evaluations public. If they have nothing to hide, this should be a non-issue. Vendors should also take steps to detect and remove bias, detailing how they accomplished this and how they will monitor for it in the future.
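
One form of bias detection is already well established outside of AI: the EEOC’s “four-fifths rule” holds that if any group’s selection rate falls below 80% of the highest group’s rate, the process may have adverse impact. A minimal sketch of that check, using invented numbers (a real audit would draw on actual hiring outcomes and require proper statistical and legal review):

```python
# Minimal sketch of the EEOC "four-fifths rule" adverse-impact check.
# Applicant and hire counts below are invented for illustration.

applicants = {"group_a": 200, "group_b": 150}  # hypothetical applicant counts
hired = {"group_a": 50, "group_b": 21}         # hypothetical hires

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "FLAG: possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```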

Why is transparency so important in the AI recruitment debate? A report from AI Now Institute found that “the gap between those who develop and profit from AI — and those most likely to suffer the consequences of its negative effects — is growing larger, not smaller.” As such, employers must disclose information to job applicants when AI plays a role in their hiring processes. Candidates should be educated about the technologies being used to evaluate them, and they should have to opt in actively before participating in a video interview.

Structured Interviews: A Safer Alternative

In light of all the criticism of AI, what types of technology should recruiters and hiring teams use instead?

Many are opting for structured digital interviews, a quantitative research method where all candidates are asked the same questions in the same way in the same order. The questions (and subsequent ratings) are based on competencies for each particular job.
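
In data terms, “structured” simply means the questions and the rating rubric are fixed before anyone is interviewed. A minimal sketch of that idea (the competencies, prompts, and 1-to-5 scale below are illustrative, not a standard):

```python
# Sketch of a structured interview: every candidate gets the same
# questions, in the same order, scored against the same competencies.
from dataclasses import dataclass

@dataclass(frozen=True)
class Question:
    competency: str  # the job competency this question probes
    prompt: str      # asked verbatim, in a fixed order, to every candidate

INTERVIEW = [
    Question("problem solving", "Describe a difficult problem you diagnosed and fixed."),
    Question("collaboration", "Tell me about a disagreement with a teammate."),
    Question("communication", "Explain a technical decision to a non-expert."),
]

# One 1-5 rating per question; because every candidate answers the
# identical list, scores are directly comparable across candidates.
ratings = {q.prompt: None for q in INTERVIEW}  # filled in during the interview
```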

Research into hiring-practice predictiveness examined 19 different hiring assessment practices, from psychometric tests to reference checks to interviews. The study found that structured interviews have a predictive validity of 0.51. That figure is a correlation coefficient, not a percentage: on a scale where 0 means no relationship and 1.0 means perfect prediction, structured interview scores correlate with future job performance at 0.51.
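
To make the statistic concrete, a validity coefficient is just a correlation between two series of scores. A toy computation with invented numbers (requires Python 3.10+ for statistics.correlation):

```python
# Predictive validity is a correlation coefficient, not "a percentage of
# the time". Scores below are invented purely for illustration.
from statistics import correlation  # Pearson's r, Python 3.10+

interview_scores = [3.1, 4.5, 2.8, 4.0, 3.6, 4.8, 2.5, 3.9]  # hypothetical
job_performance = [3.4, 4.2, 3.0, 3.7, 3.2, 4.6, 2.9, 3.5]   # hypothetical

r = correlation(interview_scores, job_performance)
print(f"validity coefficient r = {r:.2f}")              # ~0.94 for this toy data
print(f"variance in performance explained: {r*r:.0%}")  # r squared, ~88%
```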

When structured interviews are digitized within a video interviewing platform, it’s much easier for interviewers to stay organized and objective. Digital structured interviewing also leaves a data trail that can identify whether someone is consistently making biased hiring choices, which can in turn protect an organization from costly legal claims over unfair hiring practices.
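
As a hypothetical illustration of what that data trail enables, the sketch below groups one interviewer’s ratings by candidate group and surfaces a gap that a human reviewer should investigate (the interviewer, groups, and ratings are all invented, and a real audit would require larger samples, proper statistics, and legal review):

```python
# Sketch of a data-trail audit: compare one interviewer's average
# ratings across candidate groups. All rows are invented.
from statistics import mean

# (interviewer, candidate_group, rating) rows exported from the platform
trail = [
    ("interviewer_7", "group_a", 4), ("interviewer_7", "group_a", 5),
    ("interviewer_7", "group_a", 4), ("interviewer_7", "group_b", 2),
    ("interviewer_7", "group_b", 3), ("interviewer_7", "group_b", 2),
]

by_group: dict[str, list[int]] = {}
for _, group, rating in trail:
    by_group.setdefault(group, []).append(rating)

averages = {g: round(mean(r), 2) for g, r in by_group.items()}
print(averages)  # {'group_a': 4.33, 'group_b': 2.33} -- a gap worth reviewing
```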

Artificial Intelligence Isn’t Smart Enough Yet

It’s no secret that automation helps streamline and shorten recruitment processes. But while technology can certainly make recruiters’ lives easier, AI is not yet sophisticated enough to weigh in on certain elements of the recruitment process, such as fairly assessing job applicants.

Once AI engines have advanced considerably, AI could perhaps be used to analyze recruiters and hiring teams (rather than candidates). After all, if the goal is to minimize hiring bias, it matters more to assess the actions of recruiters than those of job applicants. AI, as it improves, should be used as a preventative measure to protect candidates from unfair hiring practices by creating increased transparency, documentation, and supervision of recruitment efforts.

At this point, though, hiring teams, not computers, should decide who’s best for the job. AI can be a dangerous shortcut that comes at the cost of autonomy. Deciding the fate of applicants’ careers is too important to leave to AI, at least until its benefits outweigh the current risks.
