Is your AI strategy financially sane?
- Johan Steyn

- Jan 4
How a simple five-point test can protect your balance sheet, your people and your reputation from bad automation decisions.

Audio summary: https://youtu.be/y75E5tJsKsw
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
I write about issues that interest me and that I want to bring to the reader’s attention. While my main work is in Artificial Intelligence and technology, I also cover areas around politics, education, and the future of our children.
When I work with executives, I often hear proud declarations that they are “rolling out AI across the business”. When I ask how this shows up in the income statement or balance sheet, the room becomes quieter. Too many organisations are engaged in what I call AI theatre: impressive slide decks, multiple chatbots, a few pilots in the call centre, and very little measurable financial impact.
The uncomfortable truth is that automation and AI are, first and foremost, financial initiatives. If they do not make or save money, reduce risk, or create a genuine strategic advantage, then why are we doing them? To cut through the noise, I use a simple five-point test with clients to keep their AI ambitions financially sane and humanly responsible.
CONTEXT AND BACKGROUND
Over the last decade, many organisations have invested heavily in digital transformation. They have implemented workflow tools, robotic process automation, dashboards and customer portals. Much of this is classic automation: using rules and software to remove manual work and errors. In parallel, the rise of generative AI and advanced analytics has created huge excitement. Vendors promise “intelligent platforms” and “self-learning solutions”; boards ask why their organisation is not yet “AI-driven”.
The risk, especially in South Africa and across Africa, is that scarce budgets are spent on technology experiments that never move beyond proof-of-concept. We do not have the luxury of vanity projects. That is why I encourage leaders to apply a five-point sanity check to every automation and AI proposal: treat it as a financial initiative, start with the back office, look for small efficiency gains, consider regulation, and take people with you on the journey.
INSIGHT AND ANALYSIS
First, treat it as a financial initiative. Every AI or automation project should have a clear commercial hypothesis: reduce cost, increase revenue, or reduce risk in a way you can measure. It should tie back to line items on the income statement or to cost drivers you already track. "Automating invoice matching will cut our cost per invoice processed within two quarters" is a testable hypothesis; "AI will transform finance" is not. If a proposal cannot explain itself in that kind of plain language, it is not ready for investment.
Second, start with the back office. The glamorous front-end use cases often grab attention, but functions like finance and HR are usually the ripest for automation and later AI. They are full of repetitive, rules-based tasks and rich historical data: reconciliations, invoice processing, payroll, leave management, and recruitment screening. Improving these areas may not win awards, but it delivers reliable value and frees people for more meaningful work.
Third, look for efficiency gains and start small. Grand, end-to-end AI programmes almost always stumble. Instead, identify narrow processes where you can remove friction quickly: reducing manual capture in one workflow, automating a single reconciliation, or using AI to classify one type of document. Prove the benefit, learn from the experience, then expand. This builds credibility with both executives and frontline staff.
Fourth, consider regulatory frameworks. Even where formal AI regulation is still emerging, existing laws already apply: data protection, labour law, sector-specific rules, and now, in some cases, dedicated AI acts. Leaders cannot treat compliance as an afterthought. Questions about data use, transparency, accountability and bias must be baked into the business case from day one.
Fifth, and perhaps most importantly, take people on the journey. Automation and AI touch jobs. If employees only hear about them through rumours and press releases, they will understandably resist. Engage staff early, ask where technology could help them, be honest about the impact on roles, and invest in reskilling. The projects that succeed are the ones where people feel they are co-authors, not casualties.
IMPLICATIONS
For business leaders, this five-point test is more than a checklist; it is a discipline. It means saying no to projects that are exciting but financially vague. It means resisting vendor pressure to “go big” before you have proved value in the engine room of the organisation. It means involving finance, legal, HR and risk from the start, not as sign-off at the end. And it means viewing employees as partners in change rather than obstacles.
For policymakers and regulators, the same principles apply. Encouraging AI adoption without insisting on clear economic and human outcomes risks widening inequality and wasting scarce resources. Incentives and guidelines should favour projects that improve productivity, protect rights and create new forms of decent work, rather than simply cutting headcount.
CLOSING TAKEAWAY
In the years ahead, every organisation will be tempted to claim an AI strategy. The real question is whether that strategy is financially sane and humanly credible. A focus on hard numbers, back-office foundations, small efficiency wins, regulatory awareness, and genuine engagement with people can turn AI from theatre into substance. If we can hold ourselves to that standard, we will not only avoid expensive mistakes; we will build businesses that use intelligent tools to serve customers better, strengthen our economies and create a more hopeful future for our children.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net