The AI evidence problem is coming for every courtroom
- Johan Steyn

When images, audio, and transcripts can be generated or altered easily, provenance and verification become the new legal battleground.

Audio summary: https://youtu.be/5GqUlWEr-OU
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
We are entering a period where “digital evidence” will no longer mean what it used to mean. For years, screenshots, WhatsApp voice notes, CCTV clips, and PDF printouts have been treated as everyday proof, often persuasive enough to settle disputes long before a matter reaches trial. AI changes that. It makes fabrication easy, editing invisible, and confidence cheap. The danger is not only that fake evidence will enter the system, but that genuine evidence will be doubted, weaponised, and endlessly contested. The question for law is shifting from “what happened?” to “how do we prove this artefact is authentic?”
CONTEXT AND BACKGROUND
Courts have always dealt with deception, but the scale and accessibility of synthetic media are new. The National Center for State Courts recently warned that AI-generated evidence is becoming a direct threat to public trust, pointing to early cases where fabricated audio or video has been submitted as if it were real.
South Africa is not immune. The legal profession is already alert to how deepfakes and synthetic media can distort elections, reputations, and public discourse, and how quickly they spread once released. A March 2026 analysis in De Rebus discusses the virality of deepfakes in the South African context and the broader risks that flow from that.
In other words, the raw ingredients that often become evidence in civil and criminal matters are getting harder to trust, and the law has not fully caught up with the operational consequences.
INSIGHT AND ANALYSIS
AI breaks the old assumptions in three ways:
First, it collapses the cost of fabrication. A convincing voice note, a realistic image, or a believable transcript can now be produced by someone with a laptop and a modest subscription. The technical barrier has dropped so dramatically that “who would bother?” is no longer a useful question.
Second, it creates the liar’s dividend. Once the public knows fakes exist, anyone caught on genuine audio or video can claim it is synthetic. That makes investigations harder and trials longer, because the burden shifts towards proving authenticity rather than merely presenting content.
Third, it turns evidence into a systems problem, not a document problem. Traditional chain of custody focuses on who handled an artefact and when. In an AI world, we need a chain of authenticity: where the content came from, what edits occurred, which systems touched it, and what metadata or audit trail can support its integrity.
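For the technically inclined, here is a minimal sketch of what such a chain of authenticity could look like. It is an illustration only, written in Python with field names chosen for the example rather than any standard a court or forensic body has adopted: each log entry records who touched an artefact and its digest at that moment, and each entry's hash covers the previous entry, so altering any earlier record breaks every later link.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_event(chain, actor, action, artefact_sha256):
    """Append a tamper-evident entry; each entry's hash covers the one before it."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                      # who or what touched the artefact
        "action": action,                    # e.g. "received", "converted", "redacted"
        "artefact_sha256": artefact_sha256,  # digest of the file at this step
        "prev_hash": prev_hash,              # link to the previous entry
    }
    serialised = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(serialised).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash; an edit to any earlier entry breaks all later links."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

The point is not this particular script. It is that integrity becomes checkable by recomputation rather than resting on trust in whoever happens to hold the file.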
This is why legal commentary is increasingly emphasising defensible forensic methods and admissibility standards. Kennedys, writing in January 2026, points out that AI detection and content authentication are no longer future concerns, and highlights the challenge of ensuring that forensic methods for identifying AI manipulation can themselves stand up in court.
At the same time, practical guidance is emerging for litigators: treat provenance, metadata, and system logs as first-class evidence, not afterthoughts. A March 2026 legal explainer on JD Supra makes the point plainly: AI manipulation may leave fewer obvious traces, so audit trails, server logs, and verifiable provenance will carry increasing weight.
IMPLICATIONS
For law firms and in-house legal teams, the immediate takeaway is procedural. Evidence handling must become more disciplined: preserve originals, capture metadata early, document transfer paths, and avoid unnecessary format conversions that strip information. Legal teams should also build a basic playbook for when AI manipulation is alleged, including when to escalate to specialist forensic expertise.
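As a concrete illustration of the "capture metadata early" step, the sketch below (Python standard library only; the filename and record fields are hypothetical) computes a digest and records basic filesystem metadata at the moment of first receipt, before any conversion or forwarding can alter the artefact.

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def intake_evidence(path):
    """Record a digest and basic metadata at first receipt, before anything else touches the file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):  # stream, so large files are fine
            digest.update(block)
    stat = os.stat(path)
    return {
        "source_path": path,
        "sha256": digest.hexdigest(),  # fixes the exact bytes as received
        "size_bytes": stat.st_size,
        "fs_modified_utc": datetime.fromtimestamp(stat.st_mtime, timezone.utc).isoformat(),
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage: preserve the original and store the record beside it.
print(json.dumps(intake_evidence("voicenote.ogg"), indent=2))
```

A record like this does not prove the file is genuine. It proves the file has not changed since it entered the firm's custody, which is precisely the question opposing counsel will ask.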
For courts and policymakers, the next step is capability. Judges and magistrates will need practical literacy around synthetic media and the limits of detection tools. Rules of court and evidentiary standards may not need a full rewrite overnight, but courts will need clearer expectations about provenance disclosures, expert testimony, and how to manage “authenticity challenges” without allowing bad-faith delay tactics to dominate proceedings.
For technology providers and platforms, the opportunity is to strengthen authenticity infrastructure. The long-term answer cannot be “everyone becomes a forensic expert”. We need better default tooling for content provenance and tamper-evident metadata. The Content Authenticity Initiative has argued that 2026 is a turning point for interoperable provenance and content credentials, signalling a push towards standards that travel with media across workflows and platforms.
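To show the principle behind content credentials, and only the principle, the toy sketch below uses a shared-key HMAC where real schemes such as C2PA use certificate-backed public-key signatures and a standardised manifest format. It binds a set of claims to the exact bytes of a media file so that any later change to either is detectable.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-only"  # toy shared key; real credentials use certificate-backed signatures

def attach_credentials(media_bytes, claims):
    """Bind claims (capture device, edit history, etc.) to the media's exact bytes."""
    manifest = {"content_sha256": hashlib.sha256(media_bytes).hexdigest(), "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes, manifest):
    """Fails if either the media or its claims changed after signing."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    expected = hmac.new(SIGNING_KEY, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())
```

The design point is that the credential travels with the media and can be checked by anyone who receives it, which is what "standards that travel with media across workflows and platforms" means in practice.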
CLOSING TAKEAWAY
The AI evidence problem is not a niche issue for cybercrime specialists. It is a mainstream justice issue that will touch family law, labour disputes, insurance claims, fraud, criminal prosecutions, and reputational harm. The fix is not panic, and it is not blind trust in detection tools. It is a new discipline: provenance by default, verification as habit, and court processes that can handle authenticity disputes without collapsing under them. In the AI age, justice will depend less on what can be shown and more on what can be proven.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net


