The Silence in the Boardroom Is the Loudest AI Risk Nobody Is Talking About

Directors who do not understand AI well enough to ask hard questions will not ask them — and the strategies that go unchallenged today will become the governance failures of tomorrow






Picture the scene. A senior management team presents the organisation’s AI strategy to the board. The slides are polished. The language is confident. Words like models, agents, inference, and governance frameworks are deployed with fluency. The board nods along. Questions are few. Approval is given. And as the meeting closes, at least half the directors in the room could not have told you with any precision what they had just approved, what risks it carried, or what accountability structures existed to catch it when something went wrong. This scene is playing out in boardrooms across South Africa and the world with remarkable regularity. And the silence it produces is not just uncomfortable. It is one of the most consequential governance risks of our era — hiding in plain sight, dressed up as confidence.


CONTEXT AND BACKGROUND

The scale of the AI literacy gap at the board level is not a matter of opinion. It is a documented and growing concern among governance professionals, regulators, and legal scholars. According to PwC’s corporate governance research, only 35% of board members say their boards have integrated AI into their oversight activities — meaning almost two-thirds of boards are approving AI strategies without the governance frameworks to match. At the same time, AI is becoming integral to every aspect of enterprise decision-making, from financial modelling and risk assessment to hiring, customer engagement, and regulatory compliance.


The problem is sharply framed by legal scholars at WilmerHale, who argue that AI governance has rapidly become both a legal and a strategic imperative. Their analysis notes that surveys indicate only a minority of boards have adopted formal AI governance frameworks or established clear metrics for oversight — even as courts and regulators increasingly expect them to. Directors who cannot interrogate AI strategy are not simply behind the curve. They are potentially exposed to fiduciary liability when things go wrong.


For South African directors, this challenge has acquired specific and urgent legal weight. The King V Code on Corporate Governance, published by the Institute of Directors in South Africa and effective for financial years beginning in 2026, now explicitly requires boards to set the strategic direction for AI and approve policies governing its ethical, compliant and effective use. According to ENS Africa, King V introduces clear requirements for AI oversight that include human oversight mechanisms, ethics, security and privacy — making AI governance a formal board responsibility in South Africa for the first time. A board that cannot engage meaningfully with AI strategy is now not just informationally deficient. It is potentially non-compliant.


INSIGHT AND ANALYSIS

The governance challenge here is not that directors lack intelligence or diligence. It is that AI presents a genuinely new category of complexity — one that existing board education frameworks were not designed to address, and one that evolves faster than most governance refresh cycles can follow. The Corporate Governance Institute makes the point plainly: directors cannot and should not leave AI strategy to the technology team and assume their governance obligation stops there. AI is too consequential, too cross-functional, and too fast-moving for that approach to hold. It can affect finance, ethics, public relations, data security, legal exposure, and workforce strategy simultaneously — and it can do so at a pace that makes quarterly board reporting feel dangerously slow.


The specific risk of boardroom silence on AI is that it breaks the accountability chain at precisely the point where it matters most. When an AI system causes harm — a discriminatory output, a flawed risk model, an automated decision that damages a customer — the question of accountability travels upward. If management deployed the system under a board-approved strategy that nobody on the board actually understood, accountability becomes genuinely unclear. The Harvard Law School Forum on Corporate Governance notes that while Deloitte research shows 72% of boards report having at least one risk oversight committee, AI rarely receives the same dedicated treatment as financial and operational risk — even though it demands it. AI security risks can compromise sensitive data, biased outputs can raise compliance problems, and irresponsible deployment of AI systems can have far-reaching consequences for the enterprise, consumers, and society at large.


The deeper problem is cultural. In many boardrooms, the unspoken norm is that admitting ignorance about technology signals weakness. Directors who have spent decades building authority and credibility are reluctant to ask basic questions about AI in front of colleagues. Management teams — who often have a vested interest in smooth approval — rarely design presentations to invite challenge. The result is a room full of intelligent, experienced people collectively pretending to understand something none of them has been properly equipped to evaluate. That collective pretence is the governance failure. It is not malicious. But it is consequential.


IMPLICATIONS

For boards, the solution is not to produce a generation of director-engineers. The Corporate Governance Institute is clear on this: directors need dedicated training that enables them to contribute to AI strategy and — crucially — to question it when it needs questioning. The goal is not technical mastery. It is the ability to ask informed, sceptical questions about assumptions, risks, accountability structures, and failure modes. That is a specific and learnable capability. And it is one that the boards of most South African organisations have not yet invested in seriously.


For chairpersons, the responsibility is particular: creating a boardroom culture in which AI questions are encouraged rather than suppressed. A director who asks a basic question about how an AI model makes decisions is not exposing ignorance. They are exercising exactly the kind of oversight that governance demands. Chairpersons who model that behaviour — who demonstrate that challenging AI strategy is a sign of governance strength, not weakness — will create boards that are genuinely equipped for the era they are operating in.


For regulators and governance bodies in South Africa, the arrival of King V creates a real opportunity. The requirement for board-level AI oversight is now on the books. What is needed next is practical guidance on what AI-literate oversight actually looks like in practice — not just in principle. The Institute of Directors in South Africa has a significant role to play in translating the letter of King V into the lived reality of boardroom capability.


CLOSING TAKEAWAY

The silence in the boardroom on AI is not the silence of considered judgment. It is the silence of unpreparedness dressed up as confidence. The strategies being approved without scrutiny today will generate the governance failures, regulatory interventions, and reputational crises of tomorrow. South Africa has always prided itself on the quality of its corporate governance tradition. King V now extends that tradition explicitly into the age of artificial intelligence. The question for every director sitting on a board that deploys, approves, or benefits from AI is simple and urgent: do I understand what I am responsible for well enough to ask the question that needs asking? If the honest answer is no, then the work of building that capability is not optional. It is the job.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
