AI agents in audit: more coverage, new blind spots

Automation can widen what auditors can test, but it also raises fresh questions about trust, bias, and over-reliance.

For decades, external audits have relied on a familiar rhythm: plan the audit, test a sample, document the work, conclude, and report. Now that rhythm is being rewritten. Across the major audit firms, AI agents are being embedded into audit platforms to ingest large data sets, reconcile accounts, flag anomalies, and propose what should be investigated next. It sounds like an obvious quality win: more coverage, more signals, fewer manual mistakes.


Yet the moment we move from human-led sampling to machine-led scanning, we introduce a new set of blind spots. The biggest risk is not that auditors will stop working hard. It is that they may start trusting outputs they did not truly interrogate, while juniors lose the apprenticeship that used to build judgement.


CONTEXT AND BACKGROUND

The Big Four have been investing heavily in audit technology for years, and the current wave is about bringing “agentic” capabilities into those platforms. In practice, this means systems that can break work into steps, run checks, suggest follow-ups, and summarise supporting evidence, rather than simply automating a single task.


At the same time, the talent model around audit is being pressured from two sides: cost and capability. Clients are already asking whether AI-driven efficiency should reduce audit fees, which is a direct signal that the market expects productivity gains to show up in pricing, not only in marketing claims. Meanwhile, firms are redesigning training because AI is absorbing a portion of the traditional entry-level workload.


INSIGHT AND ANALYSIS

Let’s start with the genuine upside. AI agents can widen what auditors can test by scanning entire populations of transactions, not just small samples. They can match invoices to purchase orders at scale, flag duplicate payments, highlight unusual journal entries, and focus teams on higher-risk areas earlier. This can improve both efficiency and consistency, especially in environments where clients produce messy, high-volume data.


But coverage is not the same as assurance. When an agent flags anomalies, it is still relying on assumptions: what “normal” looks like, which patterns matter, and which data fields can be trusted. If the underlying client data is incomplete, incorrectly mapped, or biased by process quirks, the AI may confidently prioritise the wrong risks. In other words, we may test more, but understand less, unless human scepticism becomes sharper, not softer.


This is where over-reliance creeps in. If the platform produces a neat summary, a clean dashboard, and persuasive explanations, teams can slip into “rubber-stamping mode”. The audit file looks stronger, but the thinking might be weaker. This is particularly dangerous in edge cases: unusual transactions, novel revenue models, rapidly changing businesses, and complex group structures where context matters more than patterns.


There is also a quieter blind spot: training. Audit has always been an apprenticeship. Juniors learned through repetition: tie-outs, vouching, confirmations, walkthroughs, and relentless documentation. If AI agents do more of that “grunt work”, firms must deliberately redesign learning pathways so juniors still develop professional scepticism and accounting intuition, rather than becoming operators of tools they do not truly understand. This concern is now showing up in how the profession talks about training and talent strategy.


IMPLICATIONS

For audit firms, the path forward is not to slow down adoption. It is to raise the standard of governance around it. AI agents should leave an audit trail of their own: what they checked, what data they used, what thresholds were applied, what exceptions were flagged, and what a human did in response. This turns “AI did it” into “Here is how we used AI, and here is how we verified it”.


For audit quality, the practical goal should be a new balance: machines expand coverage, humans expand challenge. That means training auditors to interrogate models, data provenance, and failure modes, not just to operate software. It also means resisting the temptation to promise “end-to-end automation” as a virtue in itself, because the last mile of assurance is judgement. Even optimistic projections about full AI integration in the audit cycle should be read through that lens.


For new entrants, the real opportunity is a better career, not a shorter one. If designed well, AI can free juniors from mindless ticking and bashing and push them sooner into understanding controls, speaking to clients, and explaining findings. But that only happens if firms invest in structured coaching and assessed practice, rather than assuming the tool will fill the gap. Broader research on AI adoption in professional services suggests the transformation is already underway, but unevenly managed.


CLOSING TAKEAWAY

AI agents are giving auditors something they have long wanted: more visibility into the real shape of a business, not just a sample of it. That can strengthen audits and improve insight, but only if the profession treats AI as an amplifier of scepticism, not a substitute for it. The new credibility test for audit firms will be simple: can they explain how the machines contributed, where the limits are, and how humans remained accountable? More coverage is valuable. But without disciplined challenge, it can create a comforting illusion of certainty at exactly the moment we should be asking harder questions.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
