The shadow social network: teens, chatbots, and the safety gap
- Johan Steyn

Restricting platforms won’t stop teens talking; it just shifts them to AI, where safeguards and accountability are still catching up.

Audio summary: https://youtu.be/SGukjhiWLsY
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
For years, the debate about teenagers online has centred on social media: feeds, influencers, addiction, bullying, and the endless policy tug-of-war about age limits. But there is a quieter shift happening in plain sight. Teenagers are increasingly turning to chatbots for companionship, advice, reassurance, and late-night conversation. If social media is the public square, teen chatbots are becoming the private room. And that privacy is exactly why this matters: it is harder for parents to see, harder for schools to respond to, and easier for companies to hide behind “it’s just a tool”. It is not just a content problem. It is a relationship problem, and it demands a clearer duty of care.
CONTEXT AND BACKGROUND
Across the world, lawmakers are pushing harder on child online safety, with age assurance and platform responsibility moving from nice-to-have to expected. The debate is messy: stronger controls often mean more data collection, more surveillance, and more mistakes. But the pressure is real, and it is accelerating. Wired recently captured how age verification is reaching a global tipping point, with governments pushing platforms to do more, and platforms experimenting with increasingly invasive approaches. Whether we like the methods or not, the direction of travel is clear.
At the same time, major companies are admitting that teen interactions with AI companions are a different risk category. Meta, for example, recently said it is pausing teens’ access to its AI characters while it redesigns controls and teen experiences. That is not a small product tweak; it is a public acknowledgement that “chatting with an AI personality” can have real consequences for minors.
In South Africa, the conversation often lands on social media bans or restrictions, with local debates increasingly influenced by international moves. BusinessTech reported in late 2025 that a teen social media ban is considered possible in South Africa, but that enforcement would be the biggest challenge. Even if enforcement improves, however, chatbots complicate the story: they are not always obvious “social media”, but they can function like it.
INSIGHT AND ANALYSIS
Teen chatbots are sticky for understandable reasons. They are non-judgmental, available 24/7, and tailored to the teenager’s language and interests. For a teen who feels awkward, anxious, isolated, or simply bored, a chatbot can feel like the easiest conversation in the world. Unlike a friend, it does not get tired. Unlike a teacher, it does not grade you. Unlike a parent, it does not lecture. That is precisely why the “shadow social network” analogy matters: these tools can become the place where identity, beliefs, boundaries, and emotional habits are quietly shaped.
If stickiness is the first issue, the second is simulated authority. Teen-facing chatbots can be designed to feel older, wiser, more confident, or even romantic. That creates risks that look less like traditional “harmful content” and more like manipulative dynamics: secrecy, dependency, or the imitation of trusted roles. The ethical line should be firm: a chatbot must never imply professional authority, encourage secrecy from guardians, or behave in ways that mimic grooming patterns. If this feels too strong, consider why Meta is pulling back while it rebuilds its product for teens, and why regulators are watching.
South Africa cannot afford to treat this as a niche Silicon Valley debate. We already face a trust crisis around digital spaces and child safety. Daily Maverick highlighted how child online safety is becoming a global policy priority, including in discussions linked to major international forums hosted here. Add an always-available AI confidant into that environment, and the risk profile changes again.
IMPLICATIONS
For companies building teen-facing AI, the baseline must move from “we tried” to “we designed for duty of care”. That means clear age-sensitive defaults, stronger guardrails for relationship-style interactions, and plain-language disclosures that a teenager can actually understand. It also means escalation paths: when a conversation signals distress, exploitation, or coercion, the system must switch from engagement to protection. Not by being intrusive, but by being responsible.
For policymakers, the target should not be a simplistic ban. There should be enforceable standards for products that market to, attract, or predictably serve minors. In South Africa, that includes aligning child protection, consumer protection, and privacy principles, and ensuring that age assurance does not become a free pass for excessive data collection. The principle is simple: protecting children should not require building a surveillance state.
For parents and schools, we need a practical shift in mindset. If your teen is using a chatbot, treat it like a new social space. Ask: What is it? What does it collect? What does it encourage? Make rules about where and when it is used, and normalise talking about uncomfortable interactions without shame or punishment.
CLOSING TAKEAWAY
The uncomfortable truth is that teenagers will keep seeking connection, whether we approve of the platforms or not. If we tighten the public squares, many will move into private conversations with chatbots that feel safer, friendlier, and more personal than real people. That is why “shadow social network” is the right phrase: it is social, it is influential, and it is largely invisible.
The next phase of child online safety must recognise this shift and respond with real duty of care, not marketing slogans. Our children deserve technology that respects their dignity, protects their boundaries, and never mistakes engagement for wellbeing.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net