The AI bubble we should worry about is not financial – it’s expectations
- Johan Steyn

- Dec 17, 2025
The real risk is not that AI valuations might fall, but that we have convinced ourselves algorithms will fix problems rooted in people, politics and leadership.

Audio summary: https://youtu.be/KicrNLEV4y0
I write about a range of issues that I want to bring to the reader’s attention. While my main work is in Artificial Intelligence and technology, I also cover politics, education, and the future of our children.
At almost every conference where I speak, and in weekly conversations with clients, the same question now comes up: “Are we in an AI bubble?” The discussion quickly splits into two camps. On one side are those convinced we are replaying the dot-com crash of the early 2000s, with overinflated valuations and frothy promises. On the other are those who speak of AI as the best thing since sliced bread, an unstoppable force that will transform everything it touches.
My own view sits somewhere in the middle. Yes, there is speculation. Yes, some investments will end badly. But that is not the bubble that worries me most. The bigger danger is an expectations bubble: a deep misunderstanding of what the technology can actually do, and what it can never do for us.
CONTEXT AND BACKGROUND
Financial bubbles are almost a feature of how modern capitalism funds new infrastructure. Railway mania, the dot-com boom, and even parts of the crypto story all combined genuine innovation with wild speculation. When the dust settled, most speculative ventures disappeared, but the underlying networks remained. AI today has many of the same ingredients: huge capital expenditure on chips and data centres, a small group of dominant firms, and a flood of copycat applications chasing the same customers.
In South Africa, we watch these global debates while grappling with painfully familiar crises. Loadshedding, failing water systems, collapsing municipal services, and underperforming schools are not primarily technological problems. They are the result of long-term underinvestment, corruption, weak governance and a breakdown of accountability. It is into this context that AI is increasingly sold as a silver bullet: predictive models to stabilise the grid, chatbots to fix public service delivery, and learning platforms to rescue education. The promise is seductive, especially for leaders looking for quick wins.
INSIGHT AND ANALYSIS
In the decade I have spent working in the AI field, I have not once seen a project fail because the technology was not good enough. Every failure I have witnessed came down to people: misaligned expectations, poor leadership, weak planning, bad data, or a refusal to change how work is actually done. That is why I worry less about a financial AI bubble and more about the stories we are telling ourselves. When executives or politicians talk about AI as if it were magic, they are inflating expectations that no algorithm can ever meet.
The truth is that AI amplifies whatever it is plugged into. Insert it into a well-run system with clear goals, good data and capable people, and it can deliver impressive gains. Drop it into a broken municipality, a mismanaged state-owned enterprise or a school with no basic resources, and it will mainly help you measure the dysfunction more precisely. No model can create additional generation capacity out of thin air, enforce ethical procurement, or compensate for a child who has never been taught to read. AI can be a powerful assistant, but it cannot be a substitute for governance, competence and political will.
There is also a psychological risk. When we believe that AI will somehow leapfrog us over deep structural problems, we delay the hard work of fixing those problems. Leaders may prioritise shiny pilots over unglamorous basics: maintaining infrastructure, supporting teachers, training frontline staff, and building trustworthy institutions. When the AI projects fail to deliver miracles, public trust in both technology and leadership erodes further.
IMPLICATIONS
For policymakers, this means AI strategy must be rooted in realism. Investing in models and platforms without strengthening the institutions they are meant to serve is a recipe for disappointment. Public money should support AI projects that are tightly linked to clear policy goals, with honest assessments of what the technology can and cannot do. Crucially, these projects should go hand in hand with investments in data quality, skills and organisational capacity.
Business leaders face a similar choice. They can treat AI as a branding exercise, buying licences and announcing pilots while little changes on the ground. Or they can do the harder work of redesigning processes, clarifying decision rights and equipping their people to work alongside intelligent tools. The former path feeds the expectations bubble and almost guarantees disillusionment. The latter requires more humility, but it is where real competitive advantage will come from.
For parents and educators, the message to young people must also be balanced. AI will be part of their world and their work; they need to understand and use it. But they must not be told that it will rescue them from broken systems. Foundational literacy, numeracy, critical thinking and ethical judgement remain non-negotiable, with or without algorithms.
CLOSING TAKEAWAY
We may or may not look back on this period as a classic financial AI bubble. Markets will decide that in time. The expectations bubble, however, is already here. If we allow ourselves to believe that AI alone will fix inequality, state failure and failing schools, we will waste precious years and squander public trust. The healthier stance is more demanding: use AI where it genuinely helps, but keep our focus on the very human tasks of building competent institutions, educating our children and holding leaders to account. Technology can be a powerful ally in that work, but it cannot do the work for us.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net