The end of the safety toggle for AI: child protection must be the default
By Johan Steyn

If parents have to hunt for an AI product's safety settings, the product is failing, and 2026 policy is starting to treat it that way.

Audio summary: https://youtu.be/O6gme9HLvMI
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
For years, technology companies have treated child safety as an optional add-on: a “kids mode”, a parental control menu, a few toggles buried three screens deep. That approach is starting to collapse under the weight of regulation, public pressure, and the simple reality of how families live.
In 2026, we are entering a world where default settings are no longer a design preference. They are becoming a legal and reputational risk. This shift matters most for family-facing AI: chatbots, tutoring tools, companion apps, and toys that speak, personalise, and learn from interaction. When an AI system is built to engage, the difference between a safe default and an unsafe default is not a minor UX decision. It is a duty of care decision.
CONTEXT AND BACKGROUND
Across multiple jurisdictions, policy is moving from “take down the bad stuff” to “design the product to prevent predictable harm”. In the UK, the government has launched a consultation on children’s use of social media, including possible age limits and tougher enforcement measures, explicitly framing the debate around features that drive compulsive use and exposure to harmful material. The accompanying government announcement also signals a broader push towards safer defaults in schools and stronger expectations of the industry.
In parallel, regulators are codifying what “reasonable” protection looks like.
Ofcom’s December 2025 online safety report points to measures such as safer default settings for children’s accounts, restricted discoverability, and reduced unsolicited contact from adults as baseline expectations, not premium features.
Australia is also pushing harder on minimum age and enforcement expectations. The national eSafety regulator explains that age-restricted platforms must take reasonable steps to prevent under-16s from holding accounts, shifting compliance from policy statements to operational implementation.
And the conversation is widening beyond “social media” in the traditional sense. In late 2025, European lawmakers pushed for a unified minimum age approach that explicitly includes AI chatbots alongside social platforms, reflecting a growing view that conversational AI can function like a social space for minors.
INSIGHT AND ANALYSIS
Default settings are where a company’s real priorities show up. A product can publish a beautiful safety policy, but if a child’s account is public by default, if direct messaging is open by default, if data collection is extensive by default, the policy is theatre. Regulators are increasingly treating “optional safety” as inadequate because it quietly offloads responsibility onto parents and caregivers who are already overwhelmed. Most families do not have the time, knowledge, or emotional bandwidth to configure five layers of settings across multiple apps, devices, and accounts.
AI intensifies this problem because it is not only about content. It is about behaviour. A family-facing chatbot can be designed to be relentlessly engaging: nudging a child to keep chatting, building a sense of intimacy, remembering personal details, and tailoring tone to mood. That is exactly why “safe by default” is becoming the line in the sand. A child should not have to earn safety by navigating settings. Safety should be the starting state: conservative interaction patterns, minimal data retention, limited discoverability, and clear boundaries on what the AI will and will not do.
Boards and executives should pay attention because default safety is turning into a liability question. If harms are foreseeable, and the company could have reduced them through design choices, then it becomes harder to argue that the company did “all it reasonably could”. The same logic that applies to cybersecurity and product safety is now creeping into child online safety: predictable risks must be mitigated upfront and documented.
IMPLICATIONS
For any company building family-facing AI, default settings should be treated as risk controls. That means designing a child experience where high-risk features are off by default: public profiles, open contact, location sharing, aggressive notifications, long-term memory, and broad data collection. It also means building a product that remains safe when used imperfectly, because children will explore, test boundaries, and sometimes act impulsively.
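The "off by default" principle above can be sketched in code. This is a purely illustrative Python sketch, not any real product's API; every name in it is hypothetical. The design choice it shows is that the zero-argument constructor, the state a family gets by doing nothing, is the safest configuration, and every high-risk feature must be explicitly opted into.

```python
from dataclasses import dataclass

# Hypothetical names throughout; no real product or library is implied.
# The zero-argument constructor, i.e. a parent "doing nothing",
# must yield the safest state; risk is opt-in, never opt-out.
@dataclass(frozen=True)
class ChildAccountDefaults:
    public_profile: bool = False        # discoverability off by default
    open_direct_messages: bool = False  # no unsolicited contact from strangers
    location_sharing: bool = False
    push_notifications: bool = False    # no aggressive re-engagement nudges
    long_term_memory: bool = False      # the AI forgets between sessions
    data_retention_days: int = 0        # data minimisation by default

def safe_if_parent_does_nothing(cfg: ChildAccountDefaults) -> bool:
    """The article's test question: is it safe with zero configuration?"""
    risky_flags = [
        cfg.public_profile,
        cfg.open_direct_messages,
        cfg.location_sharing,
        cfg.push_notifications,
        cfg.long_term_memory,
    ]
    return not any(risky_flags) and cfg.data_retention_days == 0
```

Used this way, the default object passes the test (`safe_if_parent_does_nothing(ChildAccountDefaults())` is true), while any account that enables a risky feature, say `ChildAccountDefaults(public_profile=True)`, fails it. That is the inversion regulators are asking for: safety as the starting state, not a destination.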
For policymakers, the goal should not be perfection. It should be enforceable minimum standards that reduce harm without forcing intrusive surveillance. Age assurance is important, but it should not become an excuse for excessive data capture. The most sensible policy direction is to reward privacy-preserving age checks, require data minimisation, and focus enforcement on high-risk defaults and persuasive design.
For parents and educators, the practical takeaway is to change the question. Instead of “Does this app have parental controls?”, ask “Is it safe if I do nothing?” If the answer is no, the product is not designed with families in mind.
CLOSING TAKEAWAY
We are moving into an era where “just turn on the safety settings” will no longer be an acceptable response from AI companies. In 2026, default settings are becoming a proxy for duty of care, and the direction of travel is clear: safer defaults, less data, fewer manipulative engagement tricks, and more accountability for foreseeable harm.
For boards, this belongs on the risk register alongside cyber, privacy, and regulatory exposure. For parents, it should be a buying rule: if safety is hidden behind toggles, it is not truly built in. Children deserve products that protect them by default, not by luck.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net