When the Public Turns on AI
Johan Steyn · Jan 12 · 3 min read
2026’s backlash isn’t anti-technology — it’s a demand for accountability, fairness, and control.

Audio summary: https://youtu.be/xCuSOFw5la4
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
Something is changing in how people talk about artificial intelligence. Not in boardrooms, where the default response is still “we need an AI strategy”, but in homes, schools, and workplaces, where the question is increasingly “what is this doing to us?” The shift is subtle at first: a parent worried about deepfakes at school, a professional anxious about being replaced, a customer frustrated by an automated service that cannot hear them, a citizen unsure what is real anymore.
Then it becomes more direct: demands for limits, transparency, and consequences when harm occurs. This article is about that turning point, why it matters now, and why organisations that treat AI as a marketing slogan rather than a social contract are heading for reputational, regulatory, and human blowback.
CONTEXT AND BACKGROUND
Every major technology wave has had a honeymoon period. Early adopters marvel at the capability, businesses chase efficiency, and the public tolerates the rough edges because the upside feels exciting. AI has followed that same pattern, but with one crucial difference: it does not just change what we can do; it changes what we can trust.
In South Africa, trust is already fragile. Many institutions are strained, inequality remains severe, and social media has become a pressure cooker for misinformation and anger. Add AI tools that can generate convincing text, images, and video at scale, and the risk is not only technical. It is social. When people feel that technology is amplifying manipulation, reducing dignity, or worsening unemployment, the backlash does not stay online. It becomes political, legal, and deeply personal.
INSIGHT AND ANALYSIS
The public does not “turn on AI” because the technology exists. People turn when the harms become visible and personal. The most dangerous AI failures are not spectacular robot disasters. They are everyday betrayals: a fake image that humiliates a teenager, a voice clone that scams a grandparent, an automated decision that cannot be appealed, a job role quietly redesigned so that one person now does the work of three.
There is also a growing contradiction in the AI narrative. We are told AI is here to augment humans, yet the incentives inside many organisations reward cost-cutting. We are told AI will “free us up for higher-value work”, yet the same organisations often have no credible reskilling plan, no career pathways, and no honest conversation about which roles will shrink. When people sense that “augmentation” is simply a softer word for “replacement”, trust collapses quickly.
Children sit at the centre of this debate, whether leaders admit it or not. Schools are dealing with cheating, synthetic bullying, and content that spreads faster than adults can respond. Parents are expected to manage risks they do not fully understand, while platforms and vendors pass responsibility around in circles. A society that cannot protect children in the digital world will not grant the tech sector unlimited freedom to experiment.
IMPLICATIONS
For business leaders, the message is simple: AI adoption without legitimacy is a risk strategy, not an innovation strategy. If you cannot explain where AI is used, what data it touches, how decisions can be challenged, and who is accountable when something goes wrong, you are building a trust deficit that will eventually show up as customer churn, employee resistance, and regulatory escalation.
For policymakers and regulators, the challenge is not only to write rules, but to make them enforceable and practical. South Africa has privacy obligations through POPIA, but AI brings new questions about consent, biometric data, surveillance, and automated decision-making. The public will not be satisfied with fine print. People want clear protections and real consequences for abuse.
For educators and parents, the path forward is building capacity. We need digital literacy that includes deepfakes, online harms, and the emotional realities of life in a manipulated media environment. Children do not just need warnings; they need skills, support, and adults who are equipped to respond.
CLOSING TAKEAWAY
When the public turns on AI, it is not a rejection of progress. It is a demand for responsibility. The organisations that thrive in this next phase will not be the loudest adopters, but the most trustworthy stewards. They will design for consent, build human appeal routes into automated systems, protect children as a first principle, and treat governance as part of the product, not a compliance afterthought. The next twelve months will reveal who is building a better future and who is simply extracting value. Society is watching, and it is learning to push back.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net