One reason talent acquisition is such a difficult job is the ambiguity involved in hiring. Rarely are there definitive right or wrong answers when choosing a top candidate. As more companies deploy skills-based hiring processes and technology, we’re getting better at matching skills with needs. Still, there are always uncertainties.
And now, in the era of AI and remote work, there’s additional ambiguity for HR and hiring teams to grapple with: How do we know if someone’s work is their own, or even whether they are who they say they are?
Fortunately, there are ways to mitigate your exposure to candidate fraud and cheating. Moreover, some of the perceived problems aren’t really candidate cheating problems. They are problems with the hiring processes — and how companies assess talent, particularly technical skills.
The Difference Between Fraud and Cheating
Many in the hiring industry use the terms “fraud” and “cheating” interchangeably. However, it’s important to differentiate the two terms:
- Fraud is misrepresenting oneself during the application process.
- Cheating covers a range of actions, such as having a second person present to provide solutions or using someone else’s information or answers in a way that violates the rules of a test.
Fraud is unquestionably a problem in the hiring process, and companies need to take every reasonable step to protect themselves from it. Hiring is a fundamental exercise in trust, and if one party is misrepresenting itself, it’s impossible to build a truly solid working relationship. Examples of fraud in hiring today include faking an identity or lying about work experiences.
But cheating is more — here’s that word again — ambiguous.
Today’s technical job candidates — e.g., engineers and developers — are typically asked to take tests or complete sample exercises to help talent teams vet whether they are a good fit for a job. For some of these tests, especially simplistic coding tests, it has historically been considered cheating if a candidate looks for sample code elsewhere. Candidates are expected to write everything from scratch.
However, that’s not really how developers work. The people building today’s digital products are paid to research and implement solutions quickly and efficiently. On a daily basis, they use a wide range of tools, like GitHub and Stack Overflow, to source code samples and plugins so they can focus on higher-level thinking. So why should we view similar behavior during assessments as “cheating”?
The Role of Generative AI
As the challenges developers solve become more complex, many have turned to generative AI to assist them with streamlining their day-to-day workflow. In fact, one recent study found that 95% of developers use AI to write code.
Even our own CTO at Filtered, Oliver Weng, sometimes uses ChatGPT to generate code at the beginning of a project and to debug code at the end of a project. The AI-generated code never goes unedited or unchecked — and that’s no different from other building blocks a developer may use from other sources. Developers and their companies understand that, ultimately, any components or code that they use as part of their solutions are their responsibility.
Over the course of a long project, Weng says, generative AI has saved him several hours of work. In fact, ChatGPT has quickly become a go-to among his favorite technical research hubs, including Stack Overflow, GitHub, DuckDuckGo, and others.
So, if developers are using generative AI as a go-to tool — and clearly, they are — then we should also embrace its use in the hiring process. And if a developer approaches a code test in the same way — and with the same tools — as they would a job assignment, then why is that considered cheating?
Moving From Code Tests to Job Simulations to Verify Skills
Code tests have long been a popular way for talent acquisition teams to vet technical talent in various stages of the hiring process. Over time, as the job of today’s developer has evolved, these simplistic take-home tests have become ineffective — both because it’s so easy for developers to find answers online and because the basic nature of the tests does not reflect the actual work developers will be doing on the job.
And now, with generative AI able to accurately answer the simple questions that often make up these code tests, such tests are useless as a measurement of skill.
Turns out, developers find many of these code tests inefficient, too. A recent survey of technical job candidates found that many feel like code tests are a waste of their time — and they believe those tests fail to measure whether they have the skills to succeed in a particular job.
The path forward for skills-based hiring needs to include more sophisticated skill assessments — tests that better match the job of today’s developers. Instead of simply asking candidates to write code, employers should ask them to demonstrate their ability to solve problems efficiently with the tools they’d actually be using. To find the best candidates, hiring teams should simulate the actual job environment as closely as possible, including the tools developers use on a daily basis.
For instance, on our technical interviewing platform, we’re already seeing rapid adoption of ChatGPT among developers. During March, 13% of candidates used ChatGPT as a tool to complete online skill assessments. That’s up from 3.6% in December. Here’s a more detailed breakdown of ChatGPT use among candidates we’ve encountered:
- Dec: 3.58%
- Jan: 10.63%
- Feb: 6.72%
- Mar: 12.96%
Candidates are going to use tools like ChatGPT to complete skill assessments because developers are already using these tools in their daily work. The key is to understand how a candidate is using generative AI. Is it an efficiency shortcut, or covering for a lack of real skill?
A skills-based hiring approach calls for process-focused assessments. It’s less about whether a candidate solves a problem than about how they solve it. And conflating the use of tools like ChatGPT with cheating, when developers use those same tools in their everyday jobs, is an outdated perspective for hiring managers.
Eliminating (Real) Fraud From Hiring
AI can be a huge aid in fighting candidate fraud at the top of the hiring funnel. It can identify anomalies in screening question data, and then trigger automation to alert hiring teams to take a look. For example, AI can spot when a candidate enters a physical address that doesn’t align with an IP address geolocation.
For open-ended questions designed to assess soft skills, AI can check for plagiarism against published content on the open web.
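One common way such a plagiarism check works is word-shingle (n-gram) overlap. The sketch below is a simplified, self-contained illustration under that assumption: real tools compare against the open web, while the `corpus` argument here stands in for that index, and `looks_copied` and its `threshold` parameter are hypothetical names.

```python
# Simplified plagiarism-style check: flag an answer whose word n-grams
# overlap heavily with a known published passage (Jaccard similarity).

def shingles(text: str, n: int = 3) -> set:
    """Break text into a set of overlapping n-word tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Share of shingles common to both sets (0.0 = disjoint, 1.0 = identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_copied(answer: str, corpus: list, threshold: float = 0.5) -> bool:
    """True if the answer's shingles closely match any document in the corpus."""
    ans = shingles(answer)
    return any(jaccard(ans, shingles(doc)) >= threshold for doc in corpus)
```

As with the geolocation check, a hit is a signal for human review, not an automatic disqualification; short or formulaic answers naturally overlap with published text.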
There’s even hiring tech that can count the number of open tabs on a computer during live interviews — or track (with webcams) if there are multiple people involved in answering questions during a timed assessment.
Meanwhile, to truly assess the skills of your technical job candidates — and remove more ambiguity from the hiring process — give them the tools that they’d use on the job. Including generative AI. Then you’ll see what problems they can solve, rather than what code they can write.