- Censuswide research finds employees in large organisations spend almost as much time checking, verifying or manually redoing AI’s outputs as they do using AI itself, undermining intended productivity gains
- Findings suggest this verification burden is fuelling “AI burnout” – mental fatigue that employees report from the cognitive load of endlessly checking answers
- AI innovator and Amazon Alexa creator urges action to formalise better use of AI in the workplace
AI mistrust could be costing large UK businesses £29 billion per year in unrealised value, according to new Censuswide research commissioned by UnlikelyAI. Despite widespread adoption, AI’s promise to make tasks quicker and better is failing to materialise due to employee time spent checking, verifying or redoing AI-generated work.
The cross-industry study of 1,000 business decision-makers in energy, utilities, finance, insurance, healthcare and the public sector has revealed a clear gap between stated confidence in workplace AI and actual behaviour. Almost all respondents (99%) spend at least some time checking AI outputs each week – citing everything from quick sense checks (18%) to manually redoing some or all of the task to verify it (20%), or even ignoring the output entirely (18%).
In this research, ‘AI tools’ refers to a range of large language models (LLMs) respondents report using, including ChatGPT, Gemini, Claude, LLaMA-based models and DeepSeek, in order of prevalence.
The cost of AI uncertainty
Based on respondents’ estimates, employees in their organisation spend an average of 2 hours 41 minutes using AI every working week – compared to 2 hours 30 minutes going back and verifying, checking or redoing what it’s produced.
Scaled to the UK’s large-business workforce, UnlikelyAI estimates this verification time corresponds to over £29 billion in employee wages every year for organisations with 250+ employees.
Nor is AI clearly making work better: just 57% of respondents see any kind of ROI on their organisation’s current AI investments, while 13% have yet to see a clear positive ROI and don’t expect to – and some even say “it has been actively bad for [their] organisation so far”.
The human cost: “AI burnout” and “AI blindness”
Beyond productivity, the survey points to a new form of cognitive strain: 51% find validating AI outputs frustrating, whilst around a third report experiencing effects such as:
- “AI burnout” – mental fatigue from checking and rechecking outputs (32%)
- “AI blindness” – losing perspective on output quality after repeated prompting and inconsistent answers (30%)
- “AI-dependence” – routine skills slipping away after becoming accustomed to AI use (33%)
- “Analysis paralysis” – inability to decide whether to trust the AI result or their gut (31%)
Relatively few report clear cognitive upsides: just 19% say AI makes them feel energised and empowered, 19% say it frees up headspace, and only 17% believe it makes them better at their job.
Why trust breaks down – and why it slows work down
Whilst the vast majority of those surveyed (87%) claim to trust the outputs of AI at work, over 65% still say they feel anxious or nervous when using AI tools for work that others will see – suggesting persistent uncertainty despite the promise of frictionless productivity.
The research uncovered four recurring issues undermining trust:
- Explainability: 32% can’t understand or explain to stakeholders how outputs are generated
- Security and safety: 32% have no way of knowing where their inputs go and how they could be used
- Consistency: 31% are unsettled by different answers to the same prompts
- Accuracy: 31% see outputs that are factually or logically incorrect, and 28% report hallucinations, where AI confidently presents fabricated information as fact
William Tunstall-Pedoe, CEO and founder of UnlikelyAI, says:
“These findings highlight a critical challenge: there has to be a better way to use AI. LLMs have strengths in specific, limited areas, but there’s a huge lack of understanding about when to use them and when to look to other, less-fallible models. That’s where this trust gap is coming from.”
Tunstall-Pedoe proposes three main solutions for businesses looking to fix the productivity drain of untrustworthy AI:
1. Establish clear AI hygiene
“Set ground rules within teams for when AI is and isn’t appropriate. Clarity helps eliminate anxiety and uncertainty – so people are free to perform better, in the knowledge they’re using a tool responsibly and within guardrails.”
2. Educate teams on different types of AI – and their limits
“Most people don’t realise that not all AI is built the same. Large language models are great for creativity and summarisation, but they are weak at accuracy and explainability. Training staff on the strengths and weaknesses of different systems builds confidence and prevents misuse.
“For example, at UnlikelyAI, we combine neural networks with symbolic reasoning to create models that are fully accurate, consistent and, most importantly, can explain every decision they make – so users can fully trust the output. This is something pure LLM solutions currently can’t do, so it’s important businesses understand these limitations.”
3. Prioritise explainability over novelty
“The most powerful AI is not necessarily the fastest or most complex – it’s the one that gives you certainty. Choose tools that produce consistent, verifiable outputs, that tell you when they can’t find an answer, and that leave a transparent audit trail. When you’re in a high-stakes business context, the long-term ROI on trustworthiness far outweighs the short-term gains of speed alone.”
UnlikelyAI is presenting the embargoed findings of its study and accompanying white paper, The AI Trust Report 2026, at an exclusive industry event on 12 March.