Post-truth justice: believing nothing and falling for anything
- Johan Steyn

Deepfakes and AI-driven disinformation are eroding our trust in photos, video and audio, with serious consequences for journalism, courts and democracy.

Audio summary: https://youtu.be/yF1b6jjhapY
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
I write about issues I believe deserve the reader’s attention. While my main work is in artificial intelligence and technology, I also cover politics, education, and the future of our children.
“Seeing is believing” used to be one of those phrases we could say without thinking. Today, it feels almost naïve. In the last few years, we have entered a new phase of post-truth, powered by artificial intelligence. Deepfakes and synthetic audio can place a politician at a rally they never attended, make a CEO say things they never said, or create humiliating images of schoolchildren with a few clicks.
The real danger, however, is not only that people fall for fakes. It is that, over time, we begin to doubt everything. When any damaging video can be dismissed as “AI-generated” and any uncomfortable truth can be waved away as a deepfake, justice itself becomes negotiable. We risk drifting into a world where we believe nothing and fall for anything.
CONTEXT AND BACKGROUND
The term “post-truth” entered mainstream political debate around 2016, in the era of Brexit and Trump, when emotions and identity seemed to matter more than facts. That phase was driven largely by social media, cheap memes and targeted propaganda. The underlying assumption, however, remained: a photo or a video was still powerful evidence. You could argue about interpretation, but there was usually agreement that the recording itself was real.
Generative AI changes that foundation. Tools that once required specialist skills are now available to anyone with a laptop. Deepfake videos, voice clones and realistic synthetic images can be produced quickly and cheaply. Researchers and international bodies have warned that this is not just another wave of “fake news”; it is a crisis of authenticity. When we cannot easily tell whether a recording is genuine, the entire chain of trust that underpins journalism, courts, elections and corporate investigations starts to wobble.
At the same time, the impact is uneven. Wealthy countries may eventually deploy sophisticated detection tools and verification standards. In younger or more fragile democracies – including many in Africa – institutions are already under strain. Layer AI-generated disinformation and deepfakes on top of low trust, inequality and polarisation, and the risk of destabilisation is obvious.
INSIGHT AND ANALYSIS
The core problem is twofold. On one side, convincing deepfakes and AI-generated clips can mislead people. A faked confession or a fabricated scandal can spread across social networks long before any fact-checking catches up. In emotionally charged environments, first impressions matter; retractions rarely travel as far or as fast.
On the other side, and arguably more corrosive, is what some scholars call the “liar’s dividend”. Once people know deepfakes exist, anyone caught on camera doing or saying something compromising can simply claim, “That video is fake.” Even when forensic analysis confirms a recording is authentic, doubt has been planted. Supporters can choose to believe the denial, and opponents may never fully persuade them otherwise. Truth becomes a matter of loyalty rather than evidence.
This dynamic affects more than national politics. Imagine a whistleblower case in a large company, where video evidence of misconduct is challenged as AI-generated. Or a gender-based violence case where intimate images are claimed to be deepfakes. Or a school where a child is targeted with a synthetic image, and parents, teachers and classmates are unsure what to believe. The technology not only creates new lies; it weakens our ability to agree on what counts as proof.
In South Africa and across the continent, this intersects with existing vulnerabilities. We already struggle with low trust in institutions, uneven policing, and intense political rhetoric. Adding AI-driven synthetic media to this mix risks normalising a culture where every uncomfortable fact is dismissed as manipulation, and where citizens retreat into echo chambers that confirm what they already want to believe.
IMPLICATIONS
For journalism, the implications are profound. Newsrooms can no longer treat video or audio as self-evident. Verification must become a visible part of the story: where did this file come from, how was it authenticated, what uncertainties remain? Ironically, as synthetic media proliferates, the value of slow, careful, professional reporting increases. But this requires investment, training and, crucially, public understanding of why speed must sometimes give way to rigour.
For courts and regulators, new standards will be needed. Chain-of-custody procedures, digital signatures, watermarking and content provenance tools will play a bigger role in deciding what counts as admissible evidence. Legal systems will have to grapple with cases where victims are harmed by deepfakes, even if no “real” event occurred, and where powerful actors hide behind the fog of plausible deniability.
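To make the idea of content provenance concrete, here is a minimal sketch in Python of how a digital signature could anchor a media file to its publisher. It assumes a hypothetical workflow in which a newsroom signs the SHA-256 digest of a video file with an Ed25519 key (using the open-source cryptography library); real provenance standards such as C2PA embed far richer metadata, but the underlying principle is the same: any alteration to the file breaks the signature.

```python
# Minimal sketch of hash-plus-signature provenance. The workflow
# (newsroom signs a file digest at publication time) is a simplified
# illustration, not an implementation of the C2PA standard.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def file_digest(path: str) -> bytes:
    """Compute the SHA-256 digest of a media file in streaming chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the file's digest at the moment of publication."""
    return private_key.sign(file_digest(path))


def verify_media(path: str, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Verifier side: check the file still matches the signed digest.

    A single altered byte -- a spliced frame, a swapped audio track --
    changes the digest, and verification fails.
    """
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False


# Example round trip with a freshly generated key pair
# ("press_briefing.mp4" is a placeholder file name):
key = Ed25519PrivateKey.generate()
sig = sign_media("press_briefing.mp4", key)
assert verify_media("press_briefing.mp4", sig, key.public_key())
```

Note what this does and does not prove: a valid signature shows the file is unchanged since the named key signed it, not that its content is true. Provenance shifts the question from “does this look real?” to “who vouched for it, and when?”, which is exactly the kind of evidence courts can reason about.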
For parents and educators, the challenge is deeply personal. Our children are growing up in a world where faces, voices and bodies can be manipulated without consent. They will need a new kind of literacy: not just how to spot obvious fakes, but how to live with the idea that online images are always provisional. That means teaching scepticism without sliding into nihilism, and giving them practical tools to report abuse and seek help.
CLOSING TAKEAWAY
The post-truth era did not begin with AI, but deepfakes and synthetic media have taken it to a new level. We now face a justice problem as much as a technology problem: if nothing can be trusted, power will flow to those who shout the loudest and deny the longest. The solution will not come from detection algorithms alone. It will require stronger institutions, transparent journalism, updated legal frameworks, responsible platform design, and a cultural shift in how we treat “evidence” in the digital age.
Above all, it will require us to teach our children that while images and videos can lie, truth is still worth pursuing – patiently, critically and together. If we can hold onto that, we may yet navigate a post-truth world without surrendering to it.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net