From automating application screening to conducting skills-based assessments, AI makes the hiring process more efficient and scalable. However, the same technology also provides candidates with new ways to cheat. In this article, we’ll explore how candidates are using AI to game the system, the risks it poses for employers, and what organisations can do to safeguard the integrity of their hiring processes.
How Candidates Are Gaming the System
As AI becomes more sophisticated, so do the methods candidates use to exploit it. Here are some common tactics:
1. Copy-Pasting Questions into Generative AI for Instant Answers
The most straightforward and common method involves candidates copying assessment questions—whether numerical, verbal, or inductive reasoning problems—into generative AI tools like ChatGPT or Gemini. These tools can:
- Solve quantitative problems by breaking them down step-by-step.
- Interpret complex verbal reasoning passages and answer comprehension questions.
- Identify patterns in inductive reasoning questions to select the correct answers.
This approach requires minimal effort and delivers results almost instantly, giving candidates an unfair advantage in timed assessments. ChatGPT can score over 90% on all common test types, making every candidate look like a star candidate.
2. AI-Powered Content Generators for Written Responses
Generative AI tools are also widely used to craft polished written responses for assessments, including:
- Essays on industry-specific topics.
- Personal statements or responses to behavioral questions.
- Creative writing tasks.
By entering a prompt into AI tools, candidates can generate detailed and articulate answers that may far exceed their actual writing capabilities.
3. Automated Code Generators for Programming Tests
For technical roles, candidates often face coding challenges. Instead of solving these problems independently, some rely on AI tools like GitHub Copilot to:
- Generate functional code based on a prompt or problem statement.
- Debug existing code in seconds.
- Optimise algorithms or find solutions to programming tasks.
This undermines the intent of coding assessments, which are designed to evaluate a candidate’s problem-solving skills and coding expertise.
The Risks for Employers
As you can see, AI in recruitment is not all good. When candidates use AI to complete their applications, employers face several risks. Here are the main concerns:
- Hiring unqualified candidates can lead to decreased productivity, increased turnover, and reputational damage.
- A compromised hiring process can erode trust in the organisation's fairness and integrity.
- Rehiring and retraining can be costly, both in terms of time and financial resources.
Safeguarding the Hiring Process
There are various ways that employers can prevent cheating in their application process, and fortunately they do not have to implement these advanced measures themselves; choosing a test publisher with AI-proof assessments and safeguards in place is all that is needed. These measures may include:
- Employing AI-powered proctoring software to monitor candidates during remote assessments, detecting suspicious behaviour such as screen sharing or the use of unauthorised devices.
- Utilising AI tools to identify AI-generated content in written submissions or detect anomalies in coding patterns.
- Developing assessments that require critical thinking, problem-solving, and creativity, making it harder for AI tools to provide accurate solutions.
- Combining various assessment methods, such as technical tests, coding challenges, and behavioural interviews, to get a comprehensive view of a candidate's skills.
- Clearly communicating expectations to candidates, emphasising the importance of honesty and integrity.
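To make the behavioural-monitoring idea concrete, here is a minimal illustrative sketch in Python. It assumes the proctoring tool exposes a log of input events; the `InputEvent` record, field names, and threshold below are hypothetical, not any real vendor's API. One simple heuristic flags unusually large paste events, which can suggest an externally generated answer was dropped into the response box:

```python
from dataclasses import dataclass

# Hypothetical event record from a proctoring input log.
# All names and fields here are illustrative assumptions.
@dataclass
class InputEvent:
    timestamp: float  # seconds since the assessment started
    kind: str         # "keystroke" or "paste"
    length: int       # characters inserted by this event

def flag_suspicious_pastes(events, max_paste_chars=80):
    """Return paste events large enough to warrant human review."""
    return [
        e for e in events
        if e.kind == "paste" and e.length > max_paste_chars
    ]

# Example log: normal typing, then a 600-character paste.
log = [
    InputEvent(1.0, "keystroke", 1),
    InputEvent(1.4, "keystroke", 1),
    InputEvent(30.2, "paste", 600),
]
flags = flag_suspicious_pastes(log)  # flags the single large paste
```

In practice, commercial proctoring and AI-detection tools combine many such signals (timing, focus changes, stylometry) rather than a single rule, and flagged events are reviewed by a human rather than treated as proof of cheating.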
By embracing AI responsibly and implementing robust safeguards, organisations can ensure the fairness and effectiveness of their hiring processes. As technology continues to evolve, it's crucial to stay ahead of the curve and adapt to the changing landscape of talent acquisition.