The procurement gap: why “trusted AI” fails in the real world
Johan Steyn · Feb 19 · 4 min read
Davos rhetoric won’t help if buyers can’t test claims, verify safety, and enforce accountability in contracts.

Audio summary: https://youtu.be/F09ggnsZa4E
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/

Every year in Davos, leaders reach for the same big words: AI, trust, transparency, accountability. This January was no different. The World Economic Forum framed its 2026 Annual Meeting around a “Spirit of Dialogue”, explicitly linking the theme to rebuilding trust in a fractured world. It sounds right, but it can also be dangerously incomplete. Trust is not a theme or a slogan. Trust is a mechanism: the set of operational controls that determine whether AI systems are safe to buy, safe to deploy, and safe to challenge when they fail. And the place where those controls either exist or don’t is procurement.
CONTEXT AND BACKGROUND
In the AI era, procurement has become a form of governance. When an organisation buys an AI model, a decision-support tool, a chatbot, or an automated screening system, it is not merely purchasing software. It is buying a bundle of assumptions: how outputs are generated, how errors are handled, how bias is tested, how data is secured, and who is responsible when the system harms someone.
Yet the public conversation often treats trust as culture, not process. At Davos, panels again pushed the idea that trust and alignment must be built into AI systems, not bolted on later. That is true, but it misses a key point: you cannot “build trust” if the buyer cannot verify claims or enforce consequences. In practice, trust is created through procurement discipline, because procurement is where requirements become testable and where accountability becomes contractual.
Encouragingly, some governments are starting to treat AI procurement as a specialised capability. The OECD has been explicit that AI can both improve procurement and introduce new procurement risks, and that governance needs to cover the full lifecycle, not just purchase day.
INSIGHT AND ANALYSIS
Here is why “trusted AI” fails so often in the real world: organisations buy narratives, not evidence. A slick demo substitutes for due diligence. A confident vendor substitutes for independent assurance. A pilot substitutes for a production-grade risk assessment. And then, when something breaks, the organisation discovers it has no clear audit trail, no clear liability pathway, and no credible redress for affected people.
Procurement discipline fixes this by forcing practical questions upfront. Can the system be audited in a way that is meaningful, not performative? Do you get logs that allow a forensic reconstruction of decisions? Is there model versioning so you can tell what changed between last month and today? Are there clear performance measures and “kill switches” if drift or harm is detected? Who has authority to override the system, and how quickly?
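To make “auditable” concrete, here is a minimal sketch in Python of the kind of decision record and contractual tripwire those questions imply. The field names and the kill-switch test are illustrative assumptions, not a standard or any vendor’s API; the point is that each element corresponds to a question a buyer can put in a tender.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One auditable entry per automated decision (illustrative schema)."""

    decision_id: str      # unique reference, so an appeal can cite the exact case
    model_version: str    # the exact model that produced the output: what changed since last month?
    inputs_digest: str    # hash of the inputs, so the decision can be forensically reconstructed
    output: str           # what the system decided
    confidence: float     # the system's own score, useful for drift monitoring
    human_override: bool  # whether a person intervened; the contract names who may
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def kill_switch_triggered(recent_error_rate: float, contracted_floor: float) -> bool:
    """Suspend automated decisions when measured performance breaches the contracted threshold."""
    return recent_error_rate > contracted_floor
```

The schema itself is not the point. The point is that every field maps to a procurement question: if a vendor cannot populate a record like this for each decision, the forensic reconstruction promised in the contract is not actually possible.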
The second failure is accountability theatre. Many organisations can describe “oversight” in a policy document, but cannot name the accountable person who signs off on risk, owns monitoring, and carries the consequences when things go wrong. Good procurement turns accountability from a concept into a role, with responsibilities, escalation paths, and measurable obligations written into contracts.
And then there is the most neglected pillar: redress. If an AI system denies a benefit, blocks a transaction, flags a learner, misroutes a patient, or makes an incorrect risk judgement, what happens next? Is there a human appeal? How fast is it processed? Can the decision be reversed? Is compensation possible for harm? These are not moral questions. They are operational ones, and procurement is where they must be designed.
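The same logic applies to redress. As a rough sketch only, with an assumed five-day turnaround and illustrative status names rather than anything drawn from regulation, here is how an appeal obligation can be expressed as something a buyer can monitor:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum


class AppealStatus(Enum):
    RECEIVED = "received"
    UNDER_HUMAN_REVIEW = "under_human_review"
    REVERSED = "reversed"          # the decision can be undone
    UPHELD = "upheld"
    COMPENSATED = "compensated"    # compensation is possible for harm


@dataclass
class Appeal:
    decision_id: str        # links back to the audit record for the contested decision
    filed_at: datetime      # timezone-aware filing time
    status: AppealStatus = AppealStatus.RECEIVED

    def is_overdue(self, sla: timedelta = timedelta(days=5)) -> bool:
        """'How fast is it processed?' becomes measurable once a turnaround is contracted."""
        open_states = (AppealStatus.RECEIVED, AppealStatus.UNDER_HUMAN_REVIEW)
        return self.status in open_states and datetime.now(timezone.utc) - self.filed_at > sla
```

Once the obligation is written this way, the question of speed stops being rhetorical: an overdue appeal is a measurable contract breach, not a complaint.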
This is why procurement guidance is starting to become more specific. Australia’s Digital Transformation Agency has published updated AI policy material, including procurement guidance and an impact assessment approach, explicitly aimed at strengthening assurance rather than relying on promises.
IMPLICATIONS
For South Africa, this matters because we will import many AI systems before we build them. If procurement is weak, we will import risk at scale. The starting point is simple: buyers must demand evidence that claims are true, and must contract for auditability, accountability, and remedy as first-class requirements.
Public sector procurement can lead here, because it sets market expectations.
Practical examples already exist. The District of Columbia’s AI Procurement Handbook shows what it looks like to standardise requirements, notifications, and contract addenda so “trust” becomes enforceable, not aspirational.
For business leaders, the takeaway is equally practical: trust is cheaper upfront than scandal later. If you cannot audit it, you cannot defend it. If you cannot assign accountability, you cannot manage it. If you cannot provide redress, you cannot claim legitimacy.
CLOSING TAKEAWAY
Davos will keep talking about trust, and that is fine. But the real work happens far from the microphones, inside tenders, contracts, testing plans, monitoring dashboards, and incident playbooks. Trust in AI is not a mood. It is a mechanism that makes claims verifiable, makes responsibility unavoidable, and makes redress real when systems fail. South Africa’s next wave of AI adoption will be shaped less by speeches and more by whether we learn to buy AI with discipline. The difference between progress and disappointment may come down to how seriously we treat procurement as governance.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net