
HR owns AI risk – but isn’t equipped to manage it

By Barb Hyman, CEO of Sapia.ai

Almost daily, headlines reveal stories of talented, highly qualified people struggling to find jobs despite submitting hundreds - and in many cases thousands - of applications. Many of these stories show candidates turning to AI to curate their CVs and present the best possible picture of themselves. Meanwhile, on the other side of the hiring table, HR teams are using AI to screen candidates and fast-track the process of determining who has the experience to fulfil the role.

The unavoidable fact is that AI is now embedded across hiring decisions, from screening and shortlisting to assessment - but who is taking accountability for its use?

In most organisations, responsibility for outcomes shaped by AI has landed with HR, even though the systems driving those outcomes sit outside its control. This has not been a deliberate transition. It has happened gradually, driven by the adoption of tools designed for automation rather than by a considered strategy.

This isn’t to point the finger at HR teams. They have been under sustained pressure to move faster and operate at a greater scale, with expectations to process increasing volumes while maintaining quality and consistency. AI tools have tapped into this need and become the go-to solution, often introduced through vendors offering immediate efficiency gains, with limited explanation of how the AI works.

HR teams ultimately own the outcome of who climbs the career ladder and who slides the snake back to square one. It’s a game of snakes and ladders - and HR teams are blindly rolling the dice.

How HR became the default owner of AI risk

The urgency of navigating huge volumes of applications, whether hiring graduates or those with decades of experience, has led to the widespread adoption of AI tools. These tools have often been introduced to solve specific problems - such as screening volume or improving time-to-hire - and in many cases without a broader AI strategy in place.

Let’s look at what we mean by responsibility in more detail. HR and talent acquisition teams typically own hiring decisions and the experiences that candidates have throughout the process. They are accountable for fairness, compliance, and how the organisation presents itself to candidates. So when AI becomes involved in the process, any resulting changes or outcomes also become the responsibility of HR - right?

There are, of course, other functions with a role to play. IT supports infrastructure and integration, and legal advises on regulatory exposure, but neither is responsible for the final hiring decision. That remains with HR.

The result is a mismatch. HR is accountable for decisions shaped by systems it did not design and does not control.

Why adoption of tools often favours convenience over control

Most AI tools used in hiring are introduced through vendor platforms. Applicant tracking systems, screening tools, and assessment software now include features designed to be easy to implement and quick to deliver results - a busy HR team’s dream.

This convenience has become the industry’s latest shiny object and has accelerated adoption as a result. Remember when the first person in a friendship group got the latest gadget or smartphone, and suddenly everyone had one?

For HR teams, the equivalent “must-have gadget” is also limiting control, with many systems operating with restricted visibility into how decisions are made. The logic behind candidate ranking or rejection is not always accessible, and the underlying data and model assumptions can be difficult to interrogate.

In effect, vendors define how decisions are shaped. HR remains responsible for the outcomes, meaning that judgment - while appearing to sit with HR - is heavily influenced by systems that sit outside the organisation’s direct oversight.

You can’t govern what you can’t see

You can’t govern what you can’t see - let alone what you can’t understand - and that’s where many organisations face challenges. HR teams are expected to oversee AI-driven decisions without full insight into how those decisions are reached.

In practice, this means a limited understanding of data inputs - such as prior experience, years in a career, and qualifications - as well as training models and decision logic. It can be difficult to audit outcomes or explain them clearly to candidates, and this lack of transparency creates risk.

Bias can emerge without clear warning signs, particularly when decision-making processes are not visible. If AI decides not to progress a candidate, the lack of a clear rationale - especially when there is no human involvement - can create compliance issues when challenged. Reputational damage can follow quickly if organisations are unable to explain or justify their hiring decisions.
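To make this concrete: where selection data is visible, even a simple audit can surface warning signs. The sketch below is a minimal illustration of an adverse-impact check using the “four-fifths rule” familiar from US hiring compliance guidance; the group labels and numbers are invented for illustration, not drawn from any real system.

```python
# Minimal sketch: adverse-impact check using the "four-fifths rule".
# All group names and figures below are invented for illustration.

def selection_rates(outcomes):
    """Compute the selection rate (progressed / applied) per group."""
    return {group: hired / applied
            for group, (hired, applied) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below the threshold
    (default 4/5) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical screening outcomes: group -> (progressed, applied)
outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
print(selection_rates(outcomes))       # {'group_a': 0.45, 'group_b': 0.3}
print(adverse_impact_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

The point is not the arithmetic but the visibility: a check like this is only possible when an organisation can see who was screened out at each stage - exactly the data many vendor tools do not expose.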

Bias in, bias out

The issue with ineffective use of AI in hiring isn’t just about being fair to the candidates being processed by the tools. These systems are trained on the data they are fed, so the quality and accuracy of that data is hugely important. Relying purely on CV information reduces a candidate’s suitability for a role to their direct experience, and can therefore overlook applicable or transferable skills that may make them well-suited.

Candidates themselves are also more aware of how AI is used and increasingly expect clarity on how their data is used, how decisions are made, and whether those decisions are fair. A lack of transparency can undermine trust in both the hiring process and the organisation itself.

For hiring teams, this makes accountability more important than ever before. Organisations need to demonstrate how decisions are reached and who is responsible for them to avoid the detrimental impact of unfair or biased hiring decisions. This requires clear traceability and governance.
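In practice, traceability can be as simple as keeping a record, for every automated decision, of what the system saw, what it decided, and who (if anyone) reviewed it. The sketch below is a hypothetical illustration; the field names are assumptions, not any vendor’s actual schema.

```python
# Minimal sketch of a per-decision audit record for traceability.
# Field names are illustrative assumptions, not a real vendor schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    candidate_id: str
    stage: str                  # e.g. "screening", "assessment"
    outcome: str                # e.g. "progressed", "rejected"
    model_version: str          # which model or ruleset produced the outcome
    inputs_used: list[str]      # data points the system considered
    rationale: str              # human-readable reason for the outcome
    human_reviewer: str | None  # who signed off, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    candidate_id="cand-001",
    stage="screening",
    outcome="rejected",
    model_version="screen-v2.3",
    inputs_used=["years_experience", "qualifications"],
    rationale="Scored below threshold on required experience",
    human_reviewer=None,  # no human in the loop - itself a governance flag
)
```

A record like this is what allows an organisation to answer, months later, why a particular candidate was rejected and whether a human was involved.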

Reframing the role of HR in the age of AI

It’s important to note that the use of AI in hiring is not all bad. In fact, it can create an opportunity to redefine the role of HR. Rather than focusing solely on process execution, when implemented correctly, AI tools can enable HR to take a more active role in governing how decisions are made within the hiring process.

This places HR at the centre of decision integrity, where it becomes responsible not only for which candidates are successful but also for ensuring they are identified in a way that is fair, transparent, and aligned with organisational values.

Human hiring means taking responsibility

Hiring is - and always will be - a human-centric process. It’s important for HR to remain involved and accountable, rather than being fully overtaken by AI tools or automation.

The current model places HR in a position where it is accountable for decisions driven by systems but, in many cases, without full understanding or control. Blind trust in tools that have been recommended or adopted simply because others are using them is widening the gap between responsibility and capability - something that isn’t sustainable.

If HR is to remain accountable for AI-driven hiring decisions, it must also be empowered to understand and govern the systems behind them. Without that, risk will continue to grow and confidence in hiring processes will decline.

It’s time to close this gap and stop closing the door on candidates with potential who are being overlooked due to existing process bias. Hiring is human, and it always should be.

From blind spot to informed choice

As AI adoption accelerates across HR, the question of transparency is moving to the forefront. While many tools remain opaque, others are built for accountability and open to interrogation. For HR leaders, the priority should be selecting partners who can clearly explain how their AI operates, what data informs its decisions, and how outcomes are determined. Choosing solutions that withstand scrutiny enables HR teams to actively govern AI-driven processes, rather than simply accepting automated results.