AI in public services: faster delivery or faster exclusion?

Automation can improve access to government services, but without design for language, disability, and digital literacy it can lock citizens out.





Governments and NGOs are under pressure to do more with less. Citizens are frustrated, staff are stretched, and service delivery backlogs feel permanent. AI appears to offer an obvious solution: automate the front door, answer questions instantly, route cases faster, reduce fraud, and cut the paperwork. But there is a hard truth hiding inside this promise.


If AI becomes the new interface to public services, then any citizen who cannot use it confidently, in their language, on their device, with their level of digital literacy, risks being pushed further to the margins. The real question is not whether AI will enter public services. It is whether it will shorten queues, or simply move the queue into an invisible digital layer that the most vulnerable cannot access.


CONTEXT AND BACKGROUND

South African public institutions are already acknowledging that AI adoption isn’t just a “new tool” problem; it’s a capability and governance problem. A January 2026 public-sector issue brief from the Department of Communications and Digital Technologies and the Digital Economy Working Group’s AI Task Force stresses that the government needs deliberate capacity building and coordinated approaches to integrate AI and other digital technologies, rather than scattered deployments that are hard to manage, secure, and sustain. That warning matters here because citizen-facing AI, especially in government and NGO environments, must be implemented with clear standards, oversight, and accountability; otherwise, the technology meant to reduce friction can quickly create new failure points and unequal access.


This matters because public service AI doesn’t live in a vacuum. It has to connect to identity systems, case management platforms, records, and policies. If every department, province, municipality, or NGO deploys different tools and different rules, service delivery becomes inconsistent, and accountability gets blurry.


Globally, the policy world is starting to state the risk plainly. The OECD’s Governing with Artificial Intelligence report discusses AI’s potential to automate and tailor public services, but also warns that poor transparency and skewed data can erode accountability and widen digital divides.


INSIGHT AND ANALYSIS

The “faster delivery” argument is real. AI can answer common questions, reduce call-centre pressure, help people complete forms, and route cases to the right team. It can also run 24/7, which matters when citizens can’t take time off work to sit in a queue.


But the exclusion risk is equally real, and it often shows up quietly. AI tools tend to be designed around the ideal user: stable connectivity, a modern smartphone, comfort with written English, and confidence in self-service. That is not the median South African citizen. Even in countries with stronger digital infrastructure, governments are having to rethink how they keep services accessible.


In the UK, the Government Digital Service has been piloting GOV.UK Chat and planning its wider rollout as part of a modern digital government roadmap, where it is explicitly positioned as a user-facing interface to government information and services.


The lesson is not that chat is bad. It is that chat becomes a gate. If the AI is wrong, unclear, biased, or unavailable in the user’s language, the citizen is stuck. If there is no human escalation pathway, frustration turns into abandonment. And abandonment in public services is not a minor inconvenience; it has consequences for grants, housing, health, education, and safety.


This is why the UNDP has emphasised that AI can exacerbate existing digital divides and create new forms of discrimination in public services, systematically disadvantaging vulnerable or marginalised communities if governance and inclusion are not addressed.


IMPLICATIONS

For policymakers, “AI in public services” should be treated as a justice and access issue, not a technology upgrade. Inclusion-by-design must be a requirement: multilingual support, low-bandwidth options, offline alternatives, disability access, and clear routes to human assistance.


For government departments and NGOs, the operating model matters. Central standards for privacy, data sharing, audit logs, and quality assurance reduce fragmentation. Service design should begin with the hardest-to-serve citizen, not the easiest.


For leaders, there is also a trust dimension. The UK’s current debate about digital identity and public service access shows how quickly public scepticism can grow when people fear surveillance or exclusion, even when governments frame the project as convenience and fairness.


CLOSING TAKEAWAY

AI can absolutely improve public service delivery, but it can also create a new kind of bureaucracy: silent, automated, and harder to appeal. If we want faster delivery without faster exclusion, we have to design for the citizen who struggles most, not the citizen who clicks fastest. Done well, AI can shorten queues, reduce friction, and restore trust. Done badly, it will become yet another gatekeeper in a system that already fails too many people. The future of public services will not be judged by how “advanced” the technology is, but by whether the most vulnerable can still get help when they need it.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
