Agentic AI will expose weak controls in banks faster than it creates value
- Johan Steyn
Without governance, agents scale errors, bias, and fraud just as easily as productivity.

Audio summary: https://youtu.be/Ar46eroFkAQ
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
South Africa’s banks are clearly leaning into the next wave of AI: systems that do not merely answer questions, but can plan and execute tasks across workflows. This is often called “agentic AI”, meaning software agents that can take actions on behalf of a person or a team. The excitement is understandable, especially when a major institution starts talking about reshaping the whole organisation around it.
But agentic AI has a brutal side effect: it turns weak controls into fast, scalable failures. If your data is messy, your approvals are informal, your audit trails are patchy, or your processes rely on “tribal knowledge”, agents do not fix that. They amplify it at machine speed. The real story is not whether banks will adopt agentic AI, but whether they will govern it.
CONTEXT AND BACKGROUND
In early February 2026, Standard Bank’s group CIO, Jörg Fischer, argued that agentic AI must reshape the whole organisation, framing it as the next layer in the operating model rather than a bolt-on innovation. That framing matters because it implies the bank is not just adding tools, but redesigning how decisions and work are executed.
Standard Bank has also signalled that it understands the human and ethical dimension by making AI risk and ethics training compulsory for staff. Training helps, but training alone cannot substitute for robust controls when software gains the ability to act.
This conversation is also happening against a broader regulatory and market backdrop. A joint Financial Sector Conduct Authority and Prudential Authority report provides an overview of AI adoption in South Africa’s financial sector and the governance questions that come with it.
INSIGHT AND ANALYSIS
The key shift with agentic AI is that it moves from advice to action. In a bank, that action might be initiating a workflow, retrieving documents, updating customer records, generating compliance evidence, raising alerts, or even preparing a payment instruction that a human approves. Each of those steps touches the machinery of trust: identity, permissioning, segregation of duties, and an auditable record of who did what, when, and why.
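To make that machinery concrete, here is a minimal sketch, in Python, of what an auditable record of agent actions might look like. Every name in it is hypothetical; it describes no bank’s actual systems.

    # Minimal, hypothetical sketch of an auditable agent-action record.
    # Names (AgentAction, AuditLog) are illustrative, not any bank's API.
    import hashlib
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentAction:
        agent_id: str      # who acted: the software agent
        on_behalf_of: str  # who delegated: the person or team
        action: str        # what was done, e.g. "update_customer_record"
        target: str        # the record or workflow that was touched
        reason: str        # why: the task context behind the action
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    class AuditLog:
        """Append-only log; each entry is hashed together with the
        previous entry so after-the-fact tampering is detectable."""

        def __init__(self):
            self.entries = []
            self._last_hash = "genesis"

        def record(self, act: AgentAction) -> str:
            payload = json.dumps(asdict(act), sort_keys=True)
            entry_hash = hashlib.sha256(
                (self._last_hash + payload).encode()
            ).hexdigest()
            self.entries.append({"action": asdict(act), "hash": entry_hash})
            self._last_hash = entry_hash
            return entry_hash

A production system would persist such entries in tamper-evident storage; the hash chain above merely illustrates the principle that every action is traceable and any alteration is detectable.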
This is where many organisations are weaker than they realise. Banks may have strong policies on paper, yet still rely on manual workarounds, inconsistent handovers, and exceptions that are “managed” through experience rather than design. Agents will find those gaps immediately. They will follow whatever process exists in the system, not the process leaders imagine exists. If the system’s rules are unclear, the agent’s behaviour will be unclear too, and the institution will struggle to explain outcomes to customers, auditors, regulators and, crucially, the public.
Consumer risk is not theoretical. Business Report recently highlighted how AI can boost fraud detection while introducing new consumer risks that must be managed, especially as AI becomes more embedded across financial services. Agentic AI raises the stakes further because the failure mode is no longer a wrong answer; it is a wrong action.
Payments are a particularly sensitive frontier. FinASA’s discussion of “agentic payments” imagines software agents handling bill payments, shopping decisions and supplier remittances on behalf of people and businesses. Delegation may reduce friction, but it also raises questions about consent, dispute resolution, transaction limits, fraud liability and the ability to pause or reverse automated behaviour.
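As a sketch of what those guardrails might look like in practice, consider the hypothetical pre-authorisation checks below: an explicit consent scope, a per-transaction limit, and a threshold above which a human must release the payment. The payees and figures are invented for illustration.

    # Hypothetical pre-authorisation checks for a payment agent.
    # Payees, limits and thresholds are invented for illustration.
    APPROVAL_THRESHOLD = 5_000        # above this, a human must release it
    PER_TRANSACTION_LIMIT = 10_000    # hard ceiling on delegated authority
    CONSENTED_PAYEES = {"municipal_utility", "office_landlord"}

    def authorise_payment(payee: str, amount: float) -> str:
        if payee not in CONSENTED_PAYEES:
            return "blocked: payee outside the customer's consent scope"
        if amount > PER_TRANSACTION_LIMIT:
            return "blocked: exceeds per-transaction limit"
        if amount > APPROVAL_THRESHOLD:
            return "queued: human approval required before release"
        return "released: within delegated authority"

    print(authorise_payment("municipal_utility", 7_500))
    # -> queued: human approval required before release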
IMPLICATIONS
If South African banks want agentic AI to create value rather than headlines, they need to treat agents as a new class of controlled operator, not a clever add-on. First, that means explicit permissions, strict role-based access, and segregation of duties designed into every workflow an agent can touch. It also means auditability by default: every agent action should be logged, traceable and explainable in plain language.
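As one illustration of what “designed in” could mean, the sketch below enforces role-based permissions and a simple segregation-of-duties rule: the identity that prepared an instruction can never be the one that approves it. The role names and data shapes are assumptions, not a description of any vendor’s product.

    # Hypothetical role-based permissions with segregation of duties:
    # whoever prepared an instruction may never also approve it.
    ROLE_PERMISSIONS = {
        "payments_preparer": {"prepare_payment", "retrieve_documents"},
        "payments_approver": {"approve_payment"},
    }

    def can_perform(roles: set, action: str) -> bool:
        return any(action in ROLE_PERMISSIONS.get(r, set()) for r in roles)

    def approve(instruction: dict, approver_id: str, approver_roles: set) -> dict:
        if not can_perform(approver_roles, "approve_payment"):
            raise PermissionError("missing approve_payment permission")
        if instruction["prepared_by"] == approver_id:
            raise PermissionError("segregation of duties: preparer cannot approve")
        instruction["approved_by"] = approver_id
        return instruction

    payment = {"id": "PAY-001", "prepared_by": "agent-7", "amount": 2_000}
    approve(payment, approver_id="user-42", approver_roles={"payments_approver"})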
Second, banks need a governance model that matches the operating model shift being promised. Fischer’s broader argument about Africa shaping the digital future is compelling, but it depends on building trustworthy rails, not just capability. In a regulated environment, speed without control is not innovation; it is risk.
Third, boards and executive committees should push for measurable indicators. The point is not to ban agents, but to operationalise trust: error rates, override rates, incident response times, customer dispute metrics, and evidence that controls are working under pressure. The global direction is clear, with major industry forecasts already treating agentic payments and autonomous workflows as a near-term reality.
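As a deliberately simple sketch, two of those indicators can be computed straight from logged outcomes, which is the point: trust becomes a number a board can track. The outcome labels below are hypothetical.

    # Hypothetical calculation of two indicators from logged outcomes:
    # error rate and human-override rate. Labels are invented.
    def governance_indicators(outcomes: list) -> dict:
        total = len(outcomes)
        if total == 0:
            return {"error_rate": 0.0, "override_rate": 0.0}
        return {
            "error_rate": sum(o == "error" for o in outcomes) / total,
            "override_rate": sum(o == "human_override" for o in outcomes) / total,
        }

    print(governance_indicators(["ok", "ok", "human_override", "error", "ok"]))
    # -> {'error_rate': 0.2, 'override_rate': 0.2}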
CLOSING TAKEAWAY
Agentic AI is not coming to banking as a neutral productivity upgrade. It is a stress test of institutional discipline. It will surface weak controls, messy data, unclear accountability and fragile processes faster than it delivers the promised efficiencies. The winners will not be the banks with the loudest announcements, but the ones that engineer trust: permissions that are precise, audit trails that are complete, governance that is real, and people who understand both the power and the limits of delegation to machines. In South Africa, where financial inclusion and consumer protection matter deeply, the future of agentic AI must be built on accountability, not excitement.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net


