
AI Hallucinations Have Moved From the Chatbot to the Cabinet Room — and Nobody Had a Protocol to Stop Them

The citations were fake, the journals did not exist, and the document had been approved by Cabinet.



Sign up for my Substack daily AI newsletter here.


See my AI training course portfolio for corporate business leaders here.




On 10 April 2026, South Africa’s Department of Communications and Digital Technologies gazetted an 86-page Draft National AI Policy for public comment. The document had been approved by Cabinet on 25 March 2026. It carried the minister’s authority. It proposed substantive measures: an AI Insurance Superfund to compensate victims of algorithmic harm, an AI Ombudsperson and Ethics Board, and a risk-tiered governance framework modelled on international best practice. Sixteen days later, the minister withdrew it. News24 had established that at least six of the document’s 67 academic citations were fabrications. The journals did not exist. The articles had never been written. The authors credited with foundational research had never written on the topics attributed to them. The most plausible explanation, in the minister’s own words, was that drafters used a generative AI tool and published the output without verifying a single reference.


The irony is precise and unsparing. A policy designed to govern AI, to establish the principles of human oversight, accountability, and responsible use, had been built on foundations that AI fabricated.


CONTEXT AND BACKGROUND

The hallucination problem in large language models has been documented since these systems became widely available. Generative AI tools are trained to produce fluent, plausible, confident output; they are not trained to be accurate. Asked to generate text that includes academic citations, they produce citations that look exactly like real ones: formatted correctly, attributed to plausible-sounding authors, published in journals whose names carry the right institutional weight. The fabrications are not obviously wrong. By the nature of how these systems work, they are indistinguishable from the real thing to any reader who does not independently verify them.
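
Verification, by contrast, is mechanically simple. As an illustration, here is a minimal sketch in Python, using the public Crossref REST API (api.crossref.org), of the kind of automated first-pass check that flags citations that do not resolve to any indexed publication. The function name and matching logic are illustrative assumptions, not a production tool; a real workflow would still route every flagged reference to a human.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def citation_resolves(title: str, author: str) -> bool:
    """First-pass check: does this citation match any work indexed by Crossref?

    A False result is a red flag demanding manual review, not proof of
    fabrication; Crossref's coverage is broad but not complete.
    """
    params = {"query.bibliographic": f"{title} {author}", "rows": 3}
    resp = requests.get(CROSSREF_API, params=params, timeout=10)
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = " ".join(item.get("title", [])).lower()
        # Accept only a close title match, not mere keyword overlap.
        if candidate and (title.lower() in candidate or candidate in title.lower()):
            return True
    return False

# Illustrative usage: screen every reference before a document is gazetted.
for title, author in [("Attention Is All You Need", "Vaswani")]:
    flag = "resolves" if citation_resolves(title, author) else "NOT FOUND: verify manually"
    print(f"{title} -> {flag}")
```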


This is not a new risk. It is a documented, well-understood, widely published risk. What is new is where it has landed. In 2025, Deloitte Australia was forced to refund part of its fee to the Australian government after an AI-assisted report was found to contain fabricated references and a fabricated quotation from a court judgment. South Africa’s DCDT has now produced a Cabinet-approved sovereign policy document built on fake academic sources. The pattern is consistent: organisations adopt AI tools faster than they develop the institutional protocols to govern what those tools produce, and the failures that result are proportional to the authority of the documents into which the fabrications are introduced.


The political fallout was swift. The ANC condemned what it described as “institutional embarrassment,” calling for Minister Solly Malatsi to appear before the Portfolio Committee to account fully for the circumstances that led to a Cabinet-approved document built on fictitious references. ActionSA called for the minister’s immediate resignation. Opposition parties rejected the suggestion that responsibility could be attributed to a junior official, arguing that the failure of due diligence rested squarely with both the department and the ministry. Malatsi acknowledged an “unacceptable lapse,” promised consequence management, and announced the appointment of an independent expert review panel chaired by Professor Benjamin Rosman, comprising AI researchers, lawyers, and governance specialists, to rebuild the policy from credible foundations.


INSIGHT AND ANALYSIS

The failure is being framed, in some quarters, as a political story about a minister’s competence. It is not. Or rather, it is not only that. It is a governance story about the absence of institutional protocols for AI-assisted work, and that story extends well beyond one department and one document.


Professor Rendani Mbuvha of the Wits University School of Statistics and Actuarial Science described the blunder as underscoring the irony of a human-centred framework being undermined by AI hallucination, and argued that it highlights the urgent need to train policymakers in both the promise and the shortcomings of the technology. What the blunder signals, he noted, is the growing adoption of AI, including in policymaking itself; the human is supposed to sit at the centre of that process, yet in this case the human step was the one that was skipped.


That observation cuts to the heart of the matter. The DCDT’s own AI policy lead acknowledged at GovTech 2025 that the policy’s development was “an act of acknowledging that we don’t know enough.” That acknowledgement of epistemic humility did not translate into a verification process. It translated into reliance on a tool that produces fluent text without epistemic accountability, operated by drafters who either did not know or did not act on the known limitations of that tool. The pressure to produce — to demonstrate governance capability, to gazette a document, to show progress — created the conditions for the failure. AI made it spectacular. The root cause was institutional.


I have previously written about the broader question of what happens when AI participates in the making of law and governance, and the accountability and legitimacy questions that follow from it. The DCDT scandal is a specific and vivid illustration of that broader risk: not AI making autonomous decisions about governance, but AI being used as a production tool in governance processes without the human verification layer that gives those processes their legitimacy. The minister’s own stated justification for the policy — that vigilant human oversight is not just a policy suggestion but a prerequisite for governance — became, in the event, an accurate description of exactly what was missing.


IMPLICATIONS

The most important question this incident raises is not who will face consequence management. It is whether every other government department, state entity, and publicly funded institution in South Africa is now asking itself the same question: do our AI practices distinguish between AI as a drafting assistant and AI as a verified source of fact? The DCDT is not the only organisation using generative AI tools to assist with research, analysis, and document production. It is simply the one whose failure became public, because the document it produced was subject to external scrutiny within a 60-day comment window.


The policy’s substantive proposals deserve to survive the scandal. The AI Insurance Superfund, the AI Ombudsperson, the risk-tiered framework — these are serious governance ideas that were obscured rather than invalidated by the fabricated foundations beneath them. The independent panel chaired by Professor Rosman has an opportunity to preserve and strengthen what was worth preserving, and to produce a document whose credibility is proportional to the rigour of its process. The question is whether the political environment, shaped by resignation calls and parliamentary condemnation, will allow a substantive policy conversation to happen rather than a performance of accountability without its substance.


For South African boards and executives in the private sector, the lesson is direct. If a government department with the specific mandate to lead South Africa’s digital policy environment can publish a Cabinet-approved document riddled with AI-generated fabrications, the question of what your organisation has published, submitted, or acted upon on the basis of unverified AI output is not a theoretical one. AI hallucinations do not announce themselves. They are indistinguishable from accurate output to any reviewer who does not independently verify the underlying claims. The protocol that prevents them is not a sophisticated technical solution. It is a human one: a requirement that every factual claim, every citation, and every source in any document produced with AI assistance be independently verified before the document leaves the organisation.
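
What little tooling that human protocol needs can be almost trivial. As a minimal sketch, with hypothetical names, here is a release gate that refuses to mark an AI-assisted document releasable until a named person has verified every claim and citation in it. The code enforces nothing except that the human step was not skipped; the hard work remains the verification itself.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    """One factual claim or citation in an AI-assisted document."""
    text: str
    verified_by: Optional[str] = None  # name of the human who independently checked it

@dataclass
class Document:
    title: str
    claims: list[Claim] = field(default_factory=list)

    def unverified(self) -> list[Claim]:
        return [c for c in self.claims if not c.verified_by]

    def ready_for_release(self) -> bool:
        """A document leaves the organisation only when every claim
        carries the name of the person who verified it."""
        return not self.unverified()

# Illustrative usage: a single unchecked reference blocks the release.
doc = Document("Draft AI Policy", [
    Claim("Smith (2021), Journal of AI Governance", verified_by="J. Editor"),
    Claim("Jones (2023), Review of Algorithmic Law"),  # generated, never checked
])
if not doc.ready_for_release():
    for claim in doc.unverified():
        print(f"BLOCKED: no human has verified -> {claim.text}")
```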


CLOSING TAKEAWAY

South Africa now finds itself without an AI governance framework at the precise moment AI adoption is accelerating across its financial sector, healthcare system, and public services. The citizens most exposed to that gap are not the policymakers who created it. They are the people at the bottom of the economic ladder who have the least recourse when AI systems operating without governance oversight make consequential decisions about their lives. That is the real cost of this incident — not the reputational damage, not the parliamentary debate, not the minister’s political future, but the months of governance vacuum that the fabrication has produced.


The hallucination problem was always going to reach government. The question that now falls to every institution in South Africa, public and private, is whether it will reach you before or after you have a protocol to stop it.


Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
