Artificial intelligence is arguably the most powerful technology available today for reinventing processes and experiences. But when AI is applied to high-stakes decision-making such as employee selection, not everyone is excited about its potential.
Fears about AI in hiring have been reinforced by highly publicized missteps, such as the widely criticized use of AI-enabled facial recognition during video interviews. The consequences are real: AI can introduce bias into the hiring process at a time when nearly 90% of organizations aim to reduce it.
Wherever an enterprise falls on the AI adoption spectrum, there is a way to maximize bias reduction while minimizing AI risk: developing an ethics policy that guides the use of AI in hiring. Implementing this kind of strategy ensures a company captures the value of AI in hiring while following human-centric best practices. It also strengthens the organization’s position as an innovative market leader, building candidates’ trust and bolstering employer brand credibility.
Most importantly, an AI ethics policy brings candidates into the conversation. So much of our communication around AI is technical or enterprise-specific; ethics grounds the conversation in something common to us all: the human context for AI in hiring.
Starting Point for an Ethical AI Strategy
The hype over new technologies often outpaces the science behind them. That has certainly proven true with AI in hiring. An ethical AI strategy acts as a safeguard, especially for early adopters, demonstrating an organization’s forward-thinking commitment to prioritizing its employees and candidates. It should be a living part of organizational culture, not a written policy that is distributed once and otherwise ignored. You want employees and candidates to experience it.
At our company, for example, our written statement (refined continually as our technology advances) consists of the following six guiding principles:
- AI should benefit candidates and organizations.
- AI must respect the privacy of candidates and organizations.
- AI is accountable to candidates and organizations.
- AI should avoid and reduce bias.
- AI must be transparent in operation.
- AI research and application must uphold scientific standards.
Living by these principles ensures an organization takes a human-centric approach to using AI technologies. Furthermore, by elevating the candidate experience, these principles help protect the employer brand, which is critical for large enterprises that draw much of their best talent from their own consumer base.
From the Perspective of Candidates
Here’s how candidates can and should experience ethical AI in hiring:
- Candidates are made aware that a potential employer is using AI in its hiring process. Our research into AI consent language indicates that providing more information to candidates up front helps build their trust in, and support for, the use of AI in hiring decisions. This notification can be part of a message explaining that the employer is striving for a fair, unbiased hiring process and that AI plays an important role in reducing bias.
- Candidates have the opportunity to consent to, or opt out of, AI-enabled evaluation for hiring (a simple gating flow is sketched after this list). Again, notifying and educating candidates up front is likely to decrease opt-outs.
- Candidates understand that only the information they provide is used in the hiring process, which protects their privacy. This also ensures they retain control over the details they share (or don’t share) with prospective employers.
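For teams that build or integrate hiring software, these commitments can be encoded directly in the evaluation pipeline. The sketch below is a minimal, hypothetical illustration of the notify/consent/opt-out flow described above; every name in it (`Candidate`, `route_evaluation`, `ai_score`, `human_review`) is an assumption chosen for this example, not a standard API or any vendor’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: gate AI-enabled evaluation on notification and
# explicit consent, and limit both paths to candidate-provided data.

@dataclass
class Candidate:
    name: str
    notified_of_ai: bool    # candidate was told AI is used, and why
    consented_to_ai: bool   # explicit opt-in to AI-enabled evaluation
    provided_data: dict     # only information the candidate chose to share

def ai_score(data: dict) -> str:
    # Placeholder: a real system would call a validated scoring model here.
    return f"AI-assisted evaluation using fields: {sorted(data)}"

def human_review(data: dict) -> str:
    # Opt-out path: same candidate-provided data, human-led review.
    return f"Human-led evaluation using fields: {sorted(data)}"

def route_evaluation(candidate: Candidate) -> str:
    """Route a candidate's application under the consent policy.

    No evaluation proceeds until the candidate has been notified; AI
    scoring runs only with consent, and both paths see only the data
    the candidate chose to provide.
    """
    if not candidate.notified_of_ai:
        raise ValueError("Notify the candidate before any evaluation.")
    if candidate.consented_to_ai:
        return ai_score(candidate.provided_data)
    return human_review(candidate.provided_data)

# Example: an opted-out candidate is routed to human review, not rejected.
applicant = Candidate("A. Rivera", notified_of_ai=True,
                      consented_to_ai=False,
                      provided_data={"resume": "...", "work_samples": "..."})
print(route_evaluation(applicant))
```

The key design choice here is that opting out routes a candidate to an equivalent human-led review of the same candidate-provided data, so declining AI evaluation carries no penalty.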
By following these AI ethics principles in hiring and sharing them with candidates, organizations show they want what candidates want: an unbiased, fair, human-centric hiring process that enables everyone involved to make a more informed decision. This is the kind of hiring that gets candidates talking positively (and publicly) about their experience with an organization.