What to Tell Candidates About AI in the Hiring Process

Apr 4, 2022

The use of AI in hiring is now commonplace. Where job-seekers would once meet with interviewers in person, they now often provide audio/video responses via recorded interviews that are evaluated by an artificial intelligence (AI) application, not a person. Where recruiters used to read and evaluate resumes and cover letters, AI applications have now automated these tasks. 

While AI has helped countless organizations reach large numbers of employment candidates as work has shifted toward remote settings, its widespread use has raised concerns for job-seekers. Often those concerns are unfounded; used well, AI helps rather than hinders the candidate experience.

It’s therefore critical for talent acquisition professionals to help educate candidates about the role of AI in the hiring process.

Transparency Is Key

Start by letting candidates know upfront that AI will be part of the recruitment and selection process. Transparency prevents surprises, so tell candidates applying to large organizations that their resumes will likely be screened by an AI application.

Explain to candidates that a single position can receive hundreds or even thousands of resumes, far more than recruiters could sift through by hand. The fact is, without AI, most of those resumes would never be viewed. In this scenario, AI is a good thing for both employers and candidates.

Candidates should know that the technology helps ensure every resume is reviewed to determine whether minimum qualifications are met, whether previous job experience is relevant, and so on. Advise job-seekers to focus on outlining relevant information rather than wasting time formatting an aesthetically pleasing document with perfect fonts and color schemes. The AI doesn’t care about that; it’s looking only for relevant information.
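To make this concrete, below is a minimal Python sketch of what qualification screening might look like. Everything here is hypothetical: real applicant-tracking systems use far more sophisticated parsing and models, and the qualification terms are invented for illustration. Note that the text is simply lowercased and searched, so fonts and color schemes never enter the picture.

# Minimal, hypothetical sketch of keyword-based resume screening.
# Real applicant-tracking systems are far more sophisticated.

REQUIRED_TERMS = {"bachelor", "python", "sql"}        # hypothetical minimum qualifications
PREFERRED_TERMS = {"machine learning", "tableau"}     # hypothetical nice-to-haves

def screen_resume(text: str) -> dict:
    """Return a simple qualification summary for one resume."""
    lowered = text.lower()                # formatting, fonts, and colors are irrelevant
    required_hits = {t for t in REQUIRED_TERMS if t in lowered}
    preferred_hits = {t for t in PREFERRED_TERMS if t in lowered}
    return {
        "meets_minimum": required_hits == REQUIRED_TERMS,
        "missing_required": REQUIRED_TERMS - required_hits,
        "preferred_matched": preferred_hits,
    }

print(screen_resume("Bachelor of Science in CS; five years of Python and SQL experience."))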

The main point is that candidates should feel confident knowing that everyone who applies for the job has an opportunity to demonstrate their job-related knowledge, skills, abilities, and experience. 

AI Is Objective, Humans Are Biased

Speaking of creating a level playing field, a common concern among job-seekers is that AI might discriminate against them because of the way they look, the color of their skin, their ethnicity, their gender, and so on. Along these lines, the Federal Trade Commission (FTC) recently published guidance warning businesses that using a discriminatory algorithm to make automated decisions may violate federal law.

Such guidance from the FTC is essential to ensuring fairness, and it is incumbent on tech vendors to demonstrate that outcomes for all protected classes of individuals fall within Equal Employment Opportunity Commission (EEOC) guidelines.
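For illustration, below is a minimal Python sketch of the kind of check this implies, using the EEOC’s four-fifths rule of thumb: a group whose selection rate falls below 80% of the highest group’s rate is generally treated as showing evidence of adverse impact. The group names and counts here are made up.

# Hypothetical pass counts per demographic group: (passed, applied).
pass_counts = {"group_a": (48, 100), "group_b": (40, 100), "group_c": (30, 100)}

# Selection rate for each group.
rates = {g: passed / applied for g, (passed, applied) in pass_counts.items()}

# Four-fifths rule: compare each group's rate to the highest group's rate.
top_rate = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / top_rate
    status = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {status}")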

Still, the concern that AI is biased has been bolstered by a recently filed class-action lawsuit alleging that a hiring platform company, which used AI to assess job candidates during video interviews, illegally collected facial data for analysis. The prospective class alleges multiple violations of the Illinois Biometric Information Privacy Act, which places limits on the use of biometric identifiers and biometric information in Illinois.

It’s worth pointing out that in this case, AI was being used to assess facial features. That type of technology is indeed not foolproof and may introduce bias. However, AI that evaluates only the content of speech, as opposed to expressions or facial features, does not warrant the same concern and would not be considered to measure biometric identifiers.

In truth, human evaluators, not machines, tend to have bias and discriminate.

Thus, it’s important to let candidates know that in a recorded interview, the AI is not looking at them. Rather, it’s listening to their words and evaluating the content of their responses. Nothing more, only the content. This is quite unlike what happens when a human conducts an interview.

While people may seem like unbiased evaluators, we all have unconscious bias. One common unconscious human bias is that “people who look like me must be like me.” While we intuitively know that’s just not the case, research has demonstrated that this bias is consistent across evaluators of all ethnicities, genders, and other demographics. 

In addition, we sometimes speak a bit too fast when nervous, or pause a while to think through an appropriate response instead of saying the first thing that comes to mind. Many candidates also speak English with a strong accent. All of these things can feel worrying in an interview, especially in front of a live evaluator. But again, the AI is only interested in what you say, the content of your response, and not how you say it.
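As a rough illustration of “content, not delivery,” below is a Python sketch of a transcript-only scorer. The rubric, terms, and scoring logic are all hypothetical, and real systems use trained language models rather than simple term matching. The point is that only the transcribed words are inputs; pace, pauses, and accent never enter the calculation.

# Hypothetical sketch of content-only interview scoring.
# Only the transcript text is used; delivery (pace, pauses, accent) plays no role.

RUBRIC = {  # hypothetical concepts a strong answer should touch on
    "conflict resolution": {"listen", "compromise", "resolve"},
    "ownership": {"responsibility", "accountable", "my mistake"},
}

def score_response(transcript: str) -> float:
    """Score a transcribed answer by the fraction of rubric concepts it covers."""
    text = transcript.lower()
    covered = sum(
        1 for terms in RUBRIC.values() if any(term in text for term in terms)
    )
    return covered / len(RUBRIC)

answer = "I took responsibility for the error and worked to compromise with my teammate."
print(f"Content score: {score_response(answer):.2f}")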

Leveling the Playing Field

Let’s talk a bit more about the human element. One candidate concern might be that the use of AI instead of human interviewers is impersonal: “If the company doesn’t care enough to have a real person interview me, it probably doesn’t care about me either, so it must not be a good place to work.” Again, the opposite is true.

Only by using AI can an organization hope to review each resume and cover letter and/or interview everyone who applies for a position. Only through AI does everyone, regardless of their demographic, disability, socioeconomic status, geography, etc., get a chance to demonstrate their skills and ability to do the work. Only through AI can we truly level the playing field and create an equal opportunity for all. 

Further, AI is generally used as a screening tool; for most jobs, final interviews are still conducted by real people. At that point, a genuine human connection can be forged, and the candidate can begin to engage with their new position and organization.

That said, it’s possible that candidate concerns about AI are somewhat age- or generation-dependent. After all, technology is much more deeply embedded in our lives than it was just a few years ago. To explore this, I convened an informal panel of young adults and a couple of “screen-agers” and asked what concerns they would have about responding to a recorded, video-based interview evaluated by AI rather than by a human rater. Some of the younger panel members indicated that the use of AI technology would not make them uncomfortable in the least, and that they trust machine evaluation far more than they would ever trust a human. Without any prompting, they discussed the flawed nature of humans and a person’s inability to eliminate bias when evaluating another.

It was enlightening and reassuring, though possibly a bit naive. Most of the older panelists were warier and mentioned concerns about biased algorithms and a lack of transparency. Based on this purely informal research, it does seem there may be age differences in how we relate to these inevitable advancements in technology.

However, based on actual evidence, such as comparable passing rates across demographics, reduced time to hire, and the ability to interview far greater numbers of candidates, organizations relying on AI for their recruitment and selection processes do seem to be on the right track.