
The Quiet Unravelling: How AI Is Eroding the Psychological Safety That Holds Organisations Together

When people no longer know whether their skills, their roles, or their contributions still matter, something more dangerous than disengagement begins — and most organisations are not paying attention







Picture a knowledge worker on a Monday morning. The report that used to take three hours to compile, analyse, and format is already complete. An AI assistant did it overnight. There is more time now: for strategic thinking, for creativity, for the deeper work that presumably matters more. But something else is also present in that moment. A quiet uncertainty. A question that does not quite surface in the team meeting but sits just below it: if the machine can do that, what exactly am I here to do?

That question, unspoken in most organisations, unaddressed in most AI strategies, unexamined in most boardrooms, is one of the most consequential second-order effects of AI on the modern workplace. It is not about job losses. It is about something that precedes them and in many ways shapes their impact far more profoundly: the erosion of the psychological safety that allows people to show up, take risks, and contribute the distinctly human capabilities that no algorithm can replicate.


CONTEXT AND BACKGROUND

Psychological safety — the shared belief that it is safe to speak up, take risks, and make mistakes without fear of punishment or humiliation — is not a soft concept. It is one of the most robustly researched predictors of team performance, learning, and innovation in the organisational literature. Harvard Business School professor Amy Edmondson, who has spent decades studying the conditions under which people do their best work, has written directly about how AI is eroding trust in teams — and how leaders who treat this as a technology problem rather than a human effectiveness problem will consistently mismanage it. Writing in Harvard Business Review, Edmondson and her co-author identify predictable patterns of team dysfunction that mirror classic organisational behaviour problems — dysfunction that many leaders are attempting to solve with better AI tools or training, when the actual issue is the erosion of the human conditions that make collaboration possible.


The research confirming this dynamic is accumulating rapidly. A peer-reviewed study published in Nature, based on a three-wave longitudinal survey of employees in organisations actively deploying AI, found a significant direct relationship between AI adoption and a reduction in psychological safety, which in turn increased depression risk among employees. The study identifies the mechanism clearly: AI adoption disrupts established job roles, increases workplace stress and uncertainty, and threatens the sense of competence and belonging that psychological safety depends on. These are not the effects of poorly implemented AI. They are the effects of AI implementations that ignore the human experience of the people they are displacing.


For South African organisations, this conversation has barely begun. Boards are debating AI strategy, AI costs, and AI governance. Almost none of them are asking what AI is doing to the psychological experience of their people — and the cost of that silence is already accumulating in ways that will not show up on any dashboard until the damage is well advanced.


INSIGHT AND ANALYSIS

The specific mechanism by which AI erodes psychological safety is worth understanding precisely, because it is more subtle than the headline narrative of job replacement suggests. Research published by MIT Technology Review found that fewer than 39 per cent of leaders rate their organisation’s current level of psychological safety as very high — and that 22 per cent of employees admit they have hesitated to lead an AI project because they might be blamed if it misfires. That hesitation is the signal. In environments where AI outputs are unpredictable, where professional reputations can be damaged by an AI-generated error that slipped through inadequate review, and where the relative value of human versus machine contribution is constantly being renegotiated, people instinctively retreat from the risk-taking that innovation requires.


The deeper problem is what happens to identity and meaning when AI takes over the cognitive work that made people feel competent. A year-long, peer-reviewed longitudinal study of AI deployment in a real-world clinical setting, cited widely in the future-of-work literature, found that AI assistance introduced what the researchers call "asymptomatic effects": behavioural shifts that escape standard performance metrics and are not immediately visible or alarming, but which over time congeal into chronic harms. Clinicians whose routine cognitive work was taken over by AI found their manual skills atrophying and their professional identity wavering. They feared losing the hands-on expertise that made them unique, and worried about becoming bystanders in their own practice.


The researchers named this the AI-as-Amplifier Paradox: the very strengths AI offers can erode the human expertise it was deployed to support. This is not confined to clinical settings. It is a pattern that plays out wherever AI automates the cognitive tasks that build competence over time, and it is arriving in South African organisations that have no framework for detecting it, let alone addressing it.


Self-determination theory — one of the most robust frameworks in motivation research — holds that human beings need three things to experience genuine engagement at work: competence, autonomy, and relatedness. AI disruption, when handled without deliberate attention to these needs, threatens all three simultaneously. It replaces the tasks that build and demonstrate competence. It reduces the autonomy that comes from being the person who knows how to do something. And it introduces uncertainty into the relational fabric of teams by blurring accountability, ownership, and contribution. The organisations that are not actively rebuilding those foundations as they deploy AI are not just managing a technology transition. They are quietly dismantling the psychological conditions under which their people do their best work.


IMPLICATIONS

For boards and executive teams, the first obligation is to add the psychological dimension of AI deployment to the governance conversation. This is not a human resources afterthought. It is a strategic risk. Research drawing on survey data from 2,257 employees in a global consulting firm, published in a peer-reviewed study on psychological safety and AI transformation, found that psychological safety reliably predicts whether employees adopt AI tools at all — regardless of their experience level, role seniority, or geographic region. An AI strategy that deploys tools into environments of psychological unsafety is an AI strategy that will systematically underperform. The human conditions are not separate from the technological conditions. They are the technological conditions.


For leaders and managers, the practical implication is straightforward but demanding: the pace of AI deployment must be matched by an equivalent investment in the psychological safety infrastructure that allows people to adapt without retreating. That means creating explicit space for questions, fears, and failures. It means modelling vulnerability about uncertainty rather than projecting false confidence about AI’s trajectory. And it means designing the transition so that people experience it as an expansion of their contribution rather than a replacement of it — because the evidence is clear that when people feel their competence is being threatened rather than augmented, they disengage from exactly the creative and relational capacities that AI cannot substitute.


For South African organisations specifically, this challenge carries additional weight. The structural inequalities that characterise the South African workplace mean that the psychological burden of AI-driven uncertainty falls unevenly. Workers with fewer alternative options, less access to reskilling, and greater economic precarity experience the uncertainty of AI displacement not as an inconvenience but as a genuine threat. A governance framework for AI deployment that does not account for this distribution of psychological risk is not just incomplete. It is irresponsible.


CLOSING TAKEAWAY

The first-order effects of AI — the efficiency gains, the productivity improvements, the cost reductions — are visible, measurable, and celebrated. The second-order effects are quieter, slower, and far more consequential for the long-term health of organisations and the people within them. The erosion of psychological safety is not a warning about what AI might eventually do. It is a description of what it is already doing, in organisations that have deployed AI tools without attending to the human experience of the people those tools are displacing. The unravelling is quiet. It does not announce itself in resignation letters or productivity metrics. It announces itself in the hesitation before a meeting, the question that goes unasked, the risk that goes untaken, and the creative capacity that atrophies in an environment of sustained uncertainty.


South African leaders who want to build organisations that are genuinely fit for the AI era must attend to both the visible and the invisible consequences of the technology they are deploying — because the invisible ones are where the real organisational future is being decided.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
