The digital workforce moment: why AI agents are no longer just software
- Johan Steyn

AI agents are starting to act like junior operators inside core business systems, and that changes how we manage risk, trust, and accountability.

Audio summary: https://youtu.be/GlB8qR_UKQk
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
There is a quiet line being crossed in the modern workplace. For years, enterprise software helped people store information and follow workflows. Then came automation, which followed rules. Now we have AI agents that can interpret instructions, navigate business systems, and take actions on our behalf.
That is a very different category of capability. When an AI agent can open a case in a CRM, update a customer record, create a purchase request, schedule an interview, reconcile a payment exception, or escalate a compliance issue, it starts behaving less like software and more like a new class of employee: a digital operator. If we keep treating these agents as “just tools”, we will be surprised by the governance gaps, security risks, and cultural consequences that follow.
CONTEXT AND BACKGROUND
Enterprise platforms are racing towards what many vendors now describe as systems of action, not just systems of record. The point is simple: it is no longer enough for software to show you data; it must help move the work forward. Microsoft is openly positioning Copilot and agents as part of the future operating model of firms, with a strong emphasis on agents embedded across everyday work. See the framing from Microsoft’s Ignite 2025 announcements.
At the same time, workflow platforms are tying their growth stories to agentic capability. ServiceNow’s latest outlook and commentary have leaned heavily into AI-driven demand, which is now a mainstream investor narrative, not a fringe experiment.
This is not just a US or European story. In South Africa and across Africa, many organisations sit with stretched teams, fragmented data, legacy systems, and rising pressure to improve service levels without ballooning headcount. That is exactly the context in which digital operators become attractive. But it is also the context in which poor controls can create outsized harm.
INSIGHT AND ANALYSIS
The key shift is agency. If a traditional chatbot gives advice, you can ignore it. If an agent takes an action, it changes the state of your business. That action might be correct, or it might be subtly wrong in a way that only becomes visible at month-end, during an audit, or when a customer escalates to the regulator.
This is why “treat agents like employees” is not a metaphor. Employees have job descriptions, role clarity, permission boundaries, supervision, and performance reviews. They also have consequences when they breach policy. Most organisations have not yet translated those ideas into the digital realm. Who “owns” the agent in HR or finance? Who signs off on its scope? Who monitors its decisions? And who is accountable when it causes a loss?
Security makes the point even sharper. When agents are connected to core systems, the identity and access layer becomes the battlefield. A recent disclosure about a critical flaw impacting ServiceNow's AI-related components is a sobering reminder that agentic capability expands the attack surface.
Even vendors and executives are starting to emphasise governance as agents begin interacting with each other and running deeper workflows. A recent interview with ServiceNow’s CTO highlights that this is not only about productivity, but about control, trust, and the human role in oversight.
IMPLICATIONS
If AI agents are a new class of employee, then leaders need a practical operating model. Start with role design: name the role, state its purpose, list the tasks it can perform, define what it must never do, and specify escalation rules. Then treat access like you would for staff: least privilege, segregation of duties, and strong audit trails.
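To make that concrete, here is a minimal sketch, in Python, of what an agent "job description" might look like as a policy object. The AgentRole structure and its field names are my own illustration for this article, not any vendor's schema.

    from dataclasses import dataclass

    # Illustrative only: AgentRole and its fields are assumptions for this
    # article, not any platform's real schema.
    @dataclass
    class AgentRole:
        name: str                    # e.g. "AP exception handler"
        purpose: str                 # why the role exists
        allowed_actions: set[str]    # least privilege: an explicit allow-list
        forbidden_actions: set[str]  # hard stops the agent may never take
        escalate_to: str             # the accountable human for out-of-scope work

        def can_perform(self, action: str) -> bool:
            # Permitted only if explicitly granted and not forbidden.
            return action in self.allowed_actions and action not in self.forbidden_actions

    role = AgentRole(
        name="AP exception handler",
        purpose="Reconcile payment exceptions under R10,000",
        allowed_actions={"read_invoice", "flag_mismatch", "create_case"},
        forbidden_actions={"approve_payment", "edit_vendor_bank_details"},
        escalate_to="finance-controller@example.com",
    )

    assert role.can_perform("create_case")
    assert not role.can_perform("approve_payment")  # segregation of duties holds

The point of the allow-list is that anything not named is denied by default, which is exactly how least privilege works for staff.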
Second, create a “probation” approach. Let agents operate in constrained environments first, with human review, before expanding autonomy. In finance and HR, especially, small errors can become reputational disasters, not just operational annoyances.
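A probation gate can be as simple as a propose-and-approve split. The sketch below, with names of my own invention, shows the shape of it: while the agent is on probation, every state-changing proposal queues for human sign-off instead of touching the system of record, and everything is logged either way.

    from collections import deque

    class ProbationGate:
        def __init__(self, on_probation=True):
            self.on_probation = on_probation
            self.review_queue = deque()  # proposals awaiting a human reviewer
            self.audit_log = []          # every proposal is recorded either way

        def submit(self, agent, action, payload):
            record = {"agent": agent, "action": action, "payload": payload}
            self.audit_log.append(record)
            if self.on_probation:
                self.review_queue.append(record)  # held until a person approves
                return "pending human review"
            return self._execute(record)

        def _execute(self, record):
            # Placeholder for the real side effect: a CRM update, a journal entry.
            return f"executed {record['action']}"

    gate = ProbationGate(on_probation=True)
    print(gate.submit("AP exception handler", "create_case", {"invoice": "INV-142"}))
    # -> "pending human review": nothing has changed until a human signs off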
Third, contract for reality, not hype. Partnerships and platform announcements are accelerating, including deeper model integrations intended to power workflow agents, such as the recent reporting on Anthropic’s models supporting ServiceNow’s agent ambitions. Buyers should demand clarity on logging, monitoring, incident response, and where responsibility sits when things go wrong.
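As a buyer's yardstick, here is a sketch of the kind of structured audit record worth demanding for every agent action. The field names are my own illustration, not a vendor requirement; what matters is that every action is attributable to an agent and to an accountable human.

    import json
    import time
    import uuid

    # Illustrative audit record: the fields are assumptions, but any contract
    # should pin down something equivalent for every agent action.
    def audit_record(agent_id, action, target, outcome):
        return json.dumps({
            "event_id": str(uuid.uuid4()),  # unique reference for incident response
            "timestamp": time.time(),
            "agent_id": agent_id,           # which digital operator acted
            "action": action,               # what it did
            "target": target,               # which record or system it touched
            "outcome": outcome,             # success, rejected, escalated
            "accountable_owner": "finance-controller@example.com",
        })

    print(audit_record("ap-agent-01", "flag_mismatch", "invoice:INV-142", "escalated"))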
CLOSING TAKEAWAY
The biggest mistake organisations can make is to treat AI agents as a shiny feature, rather than a new category of operational actor. Once software can do work, not just describe work, the organisation must respond with the same seriousness it applies to hiring, supervision, and risk management. In South Africa, where skills constraints and service pressure are real, digital operators could unlock meaningful capacity. But the long-term prize is not speed. It is trust: trust in decisions, trust in data, and trust that humans remain accountable for outcomes. That is the line worth defending as this new workforce arrives.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net