
An "AI doomsdayer" refuses to sleep at the wheel

I still believe in AI’s promise, but I’m more concerned than ever about control, misuse, and unintended consequences.




Over the years, a few people have called me an AI doomsdayer. I understand why. The longer I have worked in this field, the more clearly I see both sides of the story: the genuine benefits that can help businesses and society, and the risks that are becoming harder to ignore. 


I still work with organisations that can get real value from AI, and I still believe it can contribute to better healthcare, better education, and better public services. But I also fear we are losing control of the technology’s trajectory, not because a machine has intentions, but because humans have incentives. If being willing to say that out loud earns me the doomsdayer label, I can live with it. I would rather be criticised for caution than praised for sleepwalking.


CONTEXT AND BACKGROUND

In the early days, AI conversations were mostly about narrow use cases and incremental gains. Today, the conversation is about scale, speed, and the normalisation of delegation. We are rapidly shifting from tools that assist humans to systems that can draft, decide, recommend, predict, and influence at volumes no human team can match. That shift brings real opportunity, but it also changes the risk profile. When a mistake happens at scale, it does not stay a mistake for long. It becomes a pattern, a market distortion, a reputational crisis, or a societal harm.


South Africa and Africa sit in a particularly complex position. We have enormous potential to use AI to extend scarce expertise, improve service delivery, and support small businesses. But we also face deep structural challenges: inequality, under-resourced public institutions, uneven digital infrastructure, and a fragile information environment. That means the downside can hit harder and faster. When misinformation spreads, when fraud is automated, or when public trust erodes, the damage falls on people who already carry too much.


INSIGHT AND ANALYSIS

The doomsdayer debate often gets framed as a personality contest: optimists versus pessimists, builders versus blockers. I think that is the wrong framing. The real tension is between capability and control. Capability is accelerating because money, competition, and prestige reward speed. Control requires patience, governance, human judgement, and sometimes the willingness to say “not yet”. Those forces are not evenly matched.


When I say we are losing control, I am not talking about science fiction. I am talking about predictable, human problems. Systems are becoming too complex for most organisations to audit end-to-end. Tools are deployed widely before norms and safeguards mature. Businesses outsource judgement to models they do not truly understand. And the same capabilities that improve productivity also improve manipulation, impersonation, and cyber abuse.


This is where the conversation must become more honest. AI will not only amplify the good. It will amplify whatever a system is optimised for, including perverse incentives. In a world where attention is currency, AI can produce persuasive content at scale. In a world where criminals look for efficiency, AI can automate scams. In a world where employers chase output metrics, AI can intensify pressure on workers while weakening accountability.


IMPLICATIONS

For business leaders, the message is not “stop”. It is “lead”. If your AI adoption is primarily an IT project, you are underestimating what is at stake. Treat it as an organisational change programme with clear ownership, risk oversight, and measurable accountability. Be explicit about where automation is appropriate, where human judgement must stay in the loop, and what you will not delegate.


For educators and parents, we need to accept that AI is now part of the environment our children will grow up in. The goal cannot be to ban curiosity. The goal must be to build literacy: teaching young people how these systems persuade, how they can be wrong, how deepfakes and synthetic content can distort reality, and why character and critical thinking matter more than ever.


For policymakers and civil society, the priority is practical governance that can actually be implemented. Regulation matters, but so does enforcement capacity and public education. We need clearer standards around high-risk use, transparency where it is feasible, and serious consequences for negligent deployment. We also need to strengthen information integrity and cyber resilience, because the harms do not arrive politely.


CLOSING TAKEAWAY

If you want to call me an AI doomsdayer, I will not argue. I have seen enough to know that complacency is not a strategy. I still believe AI can bring profound benefits, and I still want South Africa and Africa to participate confidently in that future. But participation must not mean surrendering judgement. The right posture is neither panic nor hype. It is a steady responsibility: slowing down where it matters, building guardrails that work, and refusing to outsource our moral and civic duties to machines. I would rather be awake early than sorry later.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net