Deepfakes are rewriting the rules of credibility
- Johan Steyn

As synthetic media spreads, we’ll need new social norms for checking identity, intent, and authenticity.

Audio summary: https://youtu.be/PIYz9-MOe7k
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
We grew up in a world where a voice note, a phone call, or a video clip felt like proof. Now that assumption is cracking. Deepfakes have moved from novelty to a practical tool for fraud and manipulation, and recent reporting suggests the abuse is happening at a scale that looks less like opportunistic crime and more like a production line. This does not mean we must live in permanent paranoia. But it does mean we need new habits. In the same way we learned not to trust every email, link, or WhatsApp forward, we now need everyday “verification rituals” for audio and video. Credibility is becoming a process, not a feeling.
CONTEXT AND BACKGROUND
The problem with deepfakes is not only that they can look convincing. It is that they have become cheap, fast, and repeatable. Criminals do not need Hollywood-level perfection when they can run thousands of attempts and profit even if only a small fraction succeeds. That is what “industrial scale” really means: volume, automation, and relentless iteration, powered by readily available tools.
Governments are responding, but they are doing so in a world where the technology moves faster than the policy cycle. The UK, for example, has announced work with Microsoft and others on deepfake detection standards and evaluation, which is an important signal that the issue is now treated as a national-level risk rather than a niche tech concern.
And the credibility crisis is not limited to scams. Courts, legal disputes, and public conversations are all being contaminated by synthetic content, forcing institutions to decide what counts as evidence and what counts as noise.
INSIGHT AND ANALYSIS
A deepfake is not just a fake clip. It is a psychological shortcut. We trust what looks and sounds familiar. We trust urgency. We trust authority. Fraudsters know this. They do not need to win a debate. They need to trigger a reaction: pay now, click now, share now, panic now.
That is why the most dangerous deepfakes are often the simplest ones. A short audio clip that sounds like a colleague asking for an urgent transfer. A “quick video call” with someone who looks like the boss. A message that seems to come from a family member. Even the idea of “proof of life”, something many people assume is the ultimate verification, is being complicated by the reality that convincing synthetic media can be manufactured quickly.
What replaces “seeing is believing” is not one magical detection tool. It is layered verification. Think of it like modern banking security: you do not rely on one password; you use multiple signals. The same principle now applies socially. Who is sending this? Through which channel? Can I verify via a second route? Is there a pre-agreed code word? Does the request make sense? Am I being rushed?
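To make the layered idea concrete, here is a minimal sketch in Python of how several weak signals might be combined into one decision. The signal names and the threshold are my own illustrative assumptions, not a product or a standard; the point is that no single signal, however convincing, should act alone.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """An incoming 'urgent' request, as the recipient reconstructs it."""
    known_channel: bool      # arrived via a channel you already trust?
    callback_verified: bool  # confirmed via a second, independent route?
    code_word_matched: bool  # pre-agreed code word supplied correctly?
    makes_sense: bool        # plausible given what you know of the sender?
    rushed: bool             # is urgency being used to pressure you?

def verify(req: Request) -> bool:
    """Layered check: no single signal decides; several must agree."""
    signals = [
        req.known_channel,
        req.callback_verified,
        req.code_word_matched,
        req.makes_sense,
        not req.rushed,  # urgency counts against the request, not for it
    ]
    # Require a clear majority of independent signals before acting.
    return sum(signals) >= 4

# Example: a convincing voice note from an unknown number, demanding speed.
suspicious = Request(known_channel=False, callback_verified=False,
                     code_word_matched=False, makes_sense=True, rushed=True)
print(verify(suspicious))  # False -- never act on one signal alone
```

The exact weights matter less than the habit: score the request against several routes, and let urgency count against it rather than for it.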
There is also a broader societal issue: trust is an economic asset. When people stop believing what they see, they either become cynical and disengage or they cling harder to whatever confirms their existing beliefs. Both outcomes are corrosive. This is why the question is not “Can we stop deepfakes?” but “Can we build norms and systems that limit harm when deepfakes are inevitable?”
IMPLICATIONS
For families, the most practical step is to pre-agree on verification habits before a crisis happens. A simple code word for urgent requests. A rule that money or sensitive information is never shared based on a single message. A habit of calling back on a known number, not the number in the message. These are boring, human solutions, and that is exactly why they work.
For businesses, credibility needs a process. Update policies so that approvals, onboarding, and account changes require multi-step confirmation. Train staff to recognise emotional manipulation and urgency traps. And treat deepfakes as part of fraud risk, not as a “social media issue”. As the scale of digital scams rises, there is growing pressure to clarify where responsibility sits across platforms, banks, telecoms, and employers.
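For illustration, here is a minimal sketch of what “credibility as a process” could look like inside an internal tool: a sensitive change is held until it has been confirmed through at least two independent channels. The channel names and the two-confirmation threshold are illustrative assumptions on my part, not a prescription for any particular system.

```python
# Channels counted as independent confirmations; names are illustrative.
INDEPENDENT_CHANNELS = {"callback_known_number", "ticket_system", "in_person"}
REQUIRED_CONFIRMATIONS = 2

def approve_change(action: str, confirmations: set[str]) -> bool:
    """Hold a sensitive change until enough independent channels confirm it."""
    verified = confirmations & INDEPENDENT_CHANNELS
    if len(verified) < REQUIRED_CONFIRMATIONS:
        print(f"HOLD {action!r}: {len(verified)} of "
              f"{REQUIRED_CONFIRMATIONS} independent confirmations.")
        return False
    return True

# A convincing "video call from the boss" alone never clears the bar.
approve_change("change supplier bank details", {"video_call"})
```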
For policymakers and platforms, detection standards help, but enforcement and incentives matter. If creating and distributing deceptive content is cheap and low-risk, criminals will keep doing it. The goal should be to raise the cost of fraud and lower the reward, through identity assurance, reporting pathways, and consequences that actually bite.
CLOSING TAKEAWAY
We are entering a world where audio and video are no longer proof, just signals. That sounds bleak, but it does not have to end in distrust of everything. It can end in better habits: slowing down, verifying, using trusted channels, and designing systems that assume deception is possible. The deeper shift is cultural. Credibility will belong to those who can demonstrate provenance, not just performance. In other words, the future of trust is not about sharper instincts. It is about better routines.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net


