
AI won’t just change work; it will change who controls work

As management tasks become software, organisations will face a politics of power they’re not ready for.





We talk about AI as if it only affects front-line jobs: call centres, admin teams, interns, analysts. But the more disruptive shift may land higher up the chain. AI is steadily moving into management work: allocating tasks, monitoring progress, summarising performance, nudging behaviour, recommending promotions, and even screening and hiring. On the surface, this looks like efficiency.


Underneath, it is a power shift. If the system can tell you what to do, how well you did it, and what should happen next, then it is not merely supporting managers. It is beginning to manage. And once management becomes software, the real conflict is not humans versus machines. It is humans versus the reallocation of authority.


CONTEXT AND BACKGROUND

Management has always been a blend of coordination and legitimacy. The coordination part is the machinery: schedules, workflows, targets, reporting, performance cycles, and compliance. The legitimacy part is human: judgment, context, relationships, trust, and the ability to persuade rather than merely enforce.


For years, technology has tried to optimise the coordination layer. We added dashboards, CRMs, ticketing systems, productivity tools, and remote monitoring. AI goes further by interpreting language and behaviour at scale. It can summarise meetings, detect patterns, flag deviations, and recommend actions. That is seductive to organisations under pressure to do more with less.


But there is a difference between tooling and authority. A reporting dashboard does not decide who gets hired. A task board does not recommend discipline. AI can drift into those spaces because it can convert messy human activity into scores, rankings, and suggested decisions. Once that happens, the centre of gravity moves.


INSIGHT AND ANALYSIS

The first tension is autonomy. Many managers accept being accountable for results, but they resist being told exactly how to manage. If an AI tool begins setting priorities, defining what “good” looks like, and escalating issues, then the manager becomes a conduit rather than a leader. That threatens the core identity of the role.


The second tension is status. Middle management is often the layer where organisational power is exercised: who gets visibility, who gets opportunities, who gets protected, and who gets stretched. If AI standardises decisions, it reduces discretionary influence. Some of that discretion is healthy; some of it is bias and politics. Either way, removing it will trigger resistance, not always because people are malicious, but because power is being redistributed.


The third tension is accountability. If an AI recommends hiring decisions, performance ratings, or disciplinary actions, who is responsible when it goes wrong? In many companies, the temptation will be to treat AI as an objective arbiter: “the system said so”. That is a dangerous moral escape hatch.


Organisations will need to decide, explicitly, whether managers remain accountable for outcomes, or whether they are merely enforcing a machine’s outputs.


The fourth tension is trust. Management relies on psychological safety. People need to believe feedback is fair and contextual. AI can be consistent, but consistency is not the same as fairness. A model can reward the measurable and miss the meaningful. It can penalise people whose work is less visible. It can misunderstand humour, conflict, culture, and nuance. If employees feel they are being managed by a scoring system, they will optimise for the metric and withdraw from genuine contribution.


And then there is the quiet issue: surveillance creep. To “manage” well, AI wants data: response times, meeting behaviour, sentiment signals, activity patterns, and performance proxies. Even if the intent is benign, the effect can be an always-on workplace where people feel watched. That is not just a compliance concern. It is a cultural one.


IMPLICATIONS

For business leaders, the first decision is philosophical: are you buying AI to make managers better, or to make management more controllable? Those are not the same. If you push too far into control, you may gain efficiency and lose legitimacy, which usually costs more in the long run through disengagement and turnover.


For HR and governance teams, the priority is to define what AI may and may not decide. Advisory is not the same as authority. High-impact decisions need clear human accountability, explainability, audit trails, and an appeal process that employees can trust. If people cannot challenge a decision, you are building a system that will eventually face backlash.


For managers, the opportunity is to lean into the human parts of the job: coaching, context, conflict resolution, and building meaning. AI can reduce admin, but it cannot replace credibility. The managers who thrive will be those who can explain decisions, not simply enforce them.


In South Africa and similar markets, where labour relations and social trust are often fragile, the legitimacy issue is even sharper. If AI-driven management is perceived as surveillance or unfair automation, it will become a workplace relations issue very quickly.


CLOSING TAKEAWAY

AI will absolutely change tasks, but the deeper disruption is about control. As management becomes software, organisations will be forced to confront a new politics of power: who decides, who is accountable, and what legitimacy looks like when decisions are mediated by models. The winners will not be the companies that automate management fastest. They will be the companies that redesign management thoughtfully, using AI to remove admin while doubling down on human judgment, transparency, and trust. Efficiency matters, but without legitimacy, it rarely lasts.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
