Why agentic AI projects fail: it’s the organisation, not the algorithm

Agentic AI is not failing because the models are weak, but because leaders ignore broken data, messy workflows and weak governance that block real business value.

Agentic AI – systems that do not just respond to prompts but take actions across multiple tools and workflows – is being sold as the next great productivity wave. Vendors are showcasing dazzling demos, boards are demanding “AI strategies”, and executives feel the pressure not to be left behind. Yet behind the glossy slides, there is a sobering reality: many of these projects are quietly being cancelled, written off as experiments that became too expensive, too risky or simply too messy to operate.


Recent forecasts, including Gartner’s prediction that more than 40% of agentic AI projects will be cancelled by 2027, point to a simple truth: it is not that the models are broken, but that organisations are not set up to use them effectively.


CONTEXT AND BACKGROUND

For years, analysts have noted that most AI initiatives never make it into stable production or fail to deliver the promised value once they do. The reasons are depressingly consistent: fragmented and poor-quality data, systems that do not talk to each other, manual processes that live in spreadsheets and people’s heads, and governance frameworks that were never designed for software that can act on its own. Agentic AI simply amplifies these weaknesses.


In recent commentary, I have described how many organisations rush into AI while their basic “plumbing” is in chaos: data scattered like confetti across platforms, technical debt so deep that every integration feels like open-heart surgery, and leadership teams promising transformation while refusing to fund the unglamorous groundwork. When you add immature risk controls, unclear business ownership and a lack of integration into day-to-day workflows, it is no surprise that many agent projects end up as expensive proofs-of-concept rather than sustainable capabilities.


INSIGHT AND ANALYSIS

What the research and forecasts are really telling us is that the primary blockers are organisational, not technical. The models are improving rapidly; what lags behind is operating model design, data discipline and leadership courage. Many cancelled projects started as showcase demos with vague business cases: “let’s build an AI agent for customer service” rather than “let’s reduce average handling time by 15% while improving customer satisfaction, and here is how we will integrate this into our CRM and quality processes”. Without clear objectives, it becomes impossible to judge value, manage scope or defend costs when budgets tighten.


There is also a cultural problem. Too many organisations treat agentic AI as a side experiment run by enthusiasts, rather than a change in how work is organised. Agents that trigger actions in finance, HR or operations cannot live as toys on the edge of the business. They require clear ownership, cross-functional governance, robust controls, and agreement on who is accountable when something goes wrong. When these foundations are missing, risk and compliance teams rightly get nervous, costs spiral, and leadership loses confidence. The project is then cancelled, and “AI did not work for us” becomes the narrative.


IMPLICATIONS

For policymakers and regulators, the message is that governance guidance cannot focus only on the models. We need clearer expectations about organisational controls, auditability, and the human roles that surround autonomous systems. African regulators in particular have an opportunity to shape frameworks that recognise both our infrastructure realities and our need to protect citizens from poorly governed automation.


For business leaders, the implications are immediate. Before commissioning an agentic AI pilot, they should ask uncomfortable questions: Is our data discoverable, clean and governed? Do we understand our critical workflows end-to-end? Have we reduced our reliance on manual workarounds and undocumented spreadsheets? Do we know who owns the process once the agent is live? If the honest answer to most of these is “no”, then the priority is not another AI demo. It is fixing the foundations so that intelligent tools can operate safely and meaningfully.


CLOSING TAKEAWAY

The looming wave of cancelled agentic AI projects should not make us cynical about the technology; it should make us honest about ourselves. AI will not rescue organisations from the hard work of building coherent systems, disciplined data practices and responsible governance. In South Africa and across Africa, where resources are scarce and the stakes for our children's future are high, we cannot afford AI experiments that burn money and trust without changing how work is done.


The real opportunity lies in leaders who are willing to fix the plumbing, align AI with genuine business value, and build organisations that are ready not just to buy intelligent tools, but to use them wisely.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

