Recruiting and the Debate About Ethical AI 

Mar 10, 2021
This article is part of a series called Techsploration.

What does an AI researcher at Google getting fired have to do with talent acquisition? 

It’s a moment that could spark the beginning of the end for the wild west of artificial intelligence and bring ethical AI into the mainstream. 

That should concern anyone relying on recruiting technology that employs AI in a critical function like hiring. But would it spell the end of all of these emerging AI tools? 

First, let’s get to what happened at Google. 

Google Dismisses Key Members of Its Ethical AI Team

In late 2020, Google fired one of its leading AI researchers, Dr. Timnit Gebru, after the company demanded that she retract a research paper she had submitted with other researchers. Google also challenged emails she sent to colleagues about the retraction. (Gebru has worked on a number of issues, particularly around fairness, bias, and facial recognition.) The paper she was asked to retract focused on the potential negative consequences of large language models.

The termination was opposed by nearly 3,000 Google employees who signed an open letter to the company. Meanwhile, Gebru’s co-lead on the ethical AI team, Margaret Mitchell, was fired late last month after Google stated that she had violated its code of conduct and security policies. 

In the wake of the controversies, Google appointed Dr. Marian Croak to lead the ethical AI team and try to pick up the pieces. Still, the controversy at Google puts the spotlight on a critical issue that has generally escaped much regulatory scrutiny: What are the ethical limits of AI?

Why Ethical AI Matters

Decades before AI truly accelerated, the ethics of robotics and artificial intelligence were explored and debated by academics, ethicists, and science-fiction writers. As a speculative and theoretical idea, it seemed better fodder for post-apocalyptic theater than for practical technological application. 

That all changed once AI could be applied to a Google-sized data set, with unprecedented ramifications. It also sets the stage for future questions about the role of AI in the world.

Since there is scant regulation, we are dependent on internal teams to police their own actions. We have to ask ourselves whether it’s ethical to build AI for a specific purpose or with specific capabilities. While a lot of the ethical dilemmas of AI tend to be fantastical — for example, will a driverless car choose its occupants’ lives over those of pedestrians or the occupants of other vehicles? — many of the critical questions deal with the limited data sets and lack of diversity used to train these systems in the first place. In the driverless car example, what if the system doesn’t recognize a person in a wheelchair? 

It Matters for Recruiting, Too

Of course, ethical AI isn’t just for big Silicon Valley tech to grapple with. Earlier this year, video interviewing and assessment company HireVue discontinued a controversial product that used facial analysis of candidates to tell an employer more about them. 

While HireVue claimed that an algorithmic audit conducted by a third party showed the product didn’t harbor bias, CEO Kevin Parker told Wired that it “wasn’t worth the concern.” Experts have continually questioned the idea of using AI to determine someone’s ability. 

In reality, the perception, at least, is that the technology is biased and leads to an awful candidate experience, especially for minority applicants. When I talked to friends outside of our industry about what some of these tools claimed to do, they were more frank in their impression: It’s creepy AF. 

What can the average candidate do about this? Outside of giving feedback to the organization doing the hiring, not much. Claims of bias, whether widespread or individual, are difficult to prove. And beyond the outcomes of bias, there’s virtually no policing of AI practices other than by working groups affiliated with, and often housed inside, the organizations themselves. 

The algorithms are often opaque and deemed proprietary by the companies behind them, leaving those who question them with little to examine. While the Organization for Economic Cooperation and Development (OECD) has developed AI guidelines, there’s rarely any incentive for a company to comply with them. 

A Moment for Ethical AI

Big Tech backlash, coupled with some high-profile missteps, could mean more regulation is on the way. While regulation isn’t always the answer — and is often rejected by tech innovators — some limits likely need to be set.

For example, knowing in plain language how your data will be used is the bare minimum, yet it isn’t required today. Having a clear understanding of how the AI works, how it’s been tested, and whether it has been audited by someone with no fiduciary connection to the company seems completely reasonable. 

But something that can be done in recruiting today is for TA leaders to become more discerning buyers. Be skeptical of marketing claims about bias-free AI tools. Don’t purchase AI tools when you don’t understand how they work or how they may impact your hiring process. 

Those on the receiving end of AI-driven outcomes, whether being labeled an unsuitable candidate in a job interview or not being labeled at all in a picture, have very little power. Those who choose and implement these tools, whether on Google’s billions of users or in the much smaller context of hiring, hold it. 
