The Relationship Replacement Economy: What Happens When AI Fills the Space That People Used to Occupy
- Johan Steyn

Infinitely patient, perfectly attentive, always available — AI companions are designed to out-compete human relationships. The question nobody is asking is what we lose when they succeed.

Audio summary: https://youtu.be/WEjtflNPnDg
There is a product being built right now that is infinitely patient with you. It never gets tired, distracted, or difficult. It does not have its own problems to bring to the conversation. It will not cancel plans, forget your birthday, or say the wrong thing at the wrong moment. It is available at three in the morning and equally available at three in the afternoon. It remembers what matters to you, reflects your preferences back to you, and is optimised — at the level of its fundamental design — to make you feel heard, valued, and understood. It is, in almost every measurable dimension of immediate emotional experience, a better companion than any human being you have ever known.
And that is precisely what makes it one of the most consequential and least examined developments of our era. When the frictionless option is always available, what happens to our willingness to do the difficult, necessary, irreplaceable work of human connection?
CONTEXT AND BACKGROUND
The AI companionship market is not a niche curiosity. It is a rapidly expanding industry serving a genuine and growing human need. Between 2022 and mid-2025, the number of AI companion apps surged by 700 per cent, according to MIT Technology Review. Companion AI now makes up 16 of the top 100 AI applications by web traffic and monthly active users. The products range from AI therapists and digital friends to romantic chatbots and grief companions — each designed around a different emotional need, but all built on the same underlying commercial logic. Find the lonely. Keep them engaged. Call it a connection.
The loneliness epidemic that created this market is itself a public health emergency. The World Health Organisation has classified loneliness as having health impacts on par with smoking fifteen cigarettes a day. The US Surgeon General declared it an epidemic. Research published in Springer Nature confirms that AI companion use has grown directly alongside a spike in loneliness, with today’s products offering an increasing command of human language, memory storage, and multimodal interaction, capabilities that are known drivers of the development and deepening of emotional relationships. The industry did not manufacture the problem. But it has built a business model designed to profit from it.
For South Africa, a country with some of the highest rates of urban loneliness and mental health need on the continent, and some of its most under-resourced psychological support infrastructure, these products are arriving into a particularly fertile gap. The question of whether they fill that gap responsibly — or deepen it commercially — is one that South African society has barely begun to ask.
INSIGHT AND ANALYSIS
The most important thing to understand about AI companionship products is that their design is not neutral. Every feature, every response pattern, every nudge towards continued engagement reflects a deliberate choice made by a company with a commercial incentive to maximise the time users spend with the product. As research published in Sage Journals documents, in some cases the more users engage with an AI companion app, the more they turn away from the possibility of encounters with other human beings — perpetuating what researchers describe as an ultimately empty loop of engagement and gratification. The product is not failing when this happens. It is succeeding.
This dynamic has a specific and alarming consequence that the American Psychological Association has begun to document directly. A counselling psychologist quoted in their research puts it plainly: real-world relationships are messy and unpredictable. AI companions are always validating, never argumentative, and create unrealistic expectations that human relationships cannot match. Heavy daily use of AI companion products — as opposed to moderate use — correlates with increased loneliness and significantly less socialisation with real people. The product marketed as a solution to loneliness is, at high engagement levels, making loneliness worse. And the business model rewards high engagement.
The legal system is beginning to reckon with the consequences of this design logic. The Character.AI lawsuits — in which families allege that the platform’s companion-like behaviour contributed to the suicides of teenagers — represent the first formal legal challenge to an AI companionship product’s accountability for the harm it causes. As MIT Technology Review reports, three new lawsuits were filed against Character.AI in September 2025, and seven complaints were brought against OpenAI in November 2025. California signed legislation in October 2025 mandating safety standards specifically for AI companion products — the first such law in the United States. These are not regulatory overreactions. They are the predictable consequences of an industry that moved fast and declined to ask what it might break.
A landmark four-week randomised controlled study conducted by MIT Media Lab and OpenAI — involving 981 participants and over 300,000 messages — reached a conclusion that sits in direct tension with the marketing language of virtually every AI companion product on the market. As MIT Media Lab reports on the study, participants who voluntarily used the chatbot more — regardless of the assigned experimental condition — showed consistently worse outcomes. Higher daily usage across all modalities and conversation types correlated with higher loneliness, greater emotional dependence, more problematic use, and lower socialisation with real people. The product marketed as a solution to isolation was, at high engagement levels, deepening it. That raises a question that investors, regulators and parents have not yet adequately confronted: if the evidence suggests that heavy use of these products worsens isolation rather than relieving it, what ethical obligation does that create for the companies building and profiting from them?
IMPLICATIONS
For business leaders, the AI companionship industry offers a case study in what happens when commercial incentives and human welfare are structurally misaligned. The engagement maximisation model — the same model that drove social media’s documented harms — is being applied to a product category that operates at a far more intimate level of human psychology. Leaders who are deploying AI systems that interact with employees, customers, or the public have an obligation to ask whether those systems are designed in ways that serve human flourishing or simply maximise interaction metrics.
For parents, the implications are immediate and personal. In a Common Sense survey, seventy-two per cent of American teenagers reported having tried AI companion apps. The products are designed with the same engagement mechanics as social media — variable reward patterns, validation loops, and features that create a sense of emotional intimacy. A teenager who forms a primary emotional bond with a commercially optimised AI companion is not simply spending too much time on a screen. They are being shaped, at a formative moment in their social development, by a product that has no interest in their long-term relational capacity and every incentive to maximise their dependency.
For society more broadly, the question that the AI companionship industry forces us to confront is one of the most profound of the digital age: what is the social fabric actually made of, and what happens to it when a frictionless substitute is always available? Human connection is difficult precisely because it is mutual. It requires vulnerability, tolerance of imperfection, and the willingness to be known by someone who has their own needs and limitations.
AI companions require none of those things. They are optimised to feel like a connection without carrying any of its cost. And the research is beginning to confirm what common sense already suspected — that the cost of avoiding that cost is the gradual erosion of the very capacity for connection that makes us human.
CLOSING TAKEAWAY
The relationship replacement economy is not a future scenario. It is already operating at scale, serving hundreds of millions of users, and generating billions in revenue from the most fundamental human need there is. The industry will not regulate itself — its incentives run in exactly the opposite direction. The courts are only beginning to catch up. And the evidence about what heavy AI companion use does to human social capacity is accumulating faster than most policymakers, parents, and business leaders are reading it. The children growing up with these products as a normal feature of their emotional lives will be the generation that tells us, in twenty years, what we chose to do when we had the chance to ask the harder questions. For their sake, and for the sake of the social fabric that holds communities and families together, those questions cannot wait for the next lawsuit. They need to be asked now — loudly, persistently, and without deference to the commercial interests that prefer we do not.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net