Nothing about children without children: the missing voice in AI design
- Johan Steyn

Products are built for kids every day, yet children are rarely involved in decisions that shape their privacy, autonomy, and well-being.

Audio summary: https://youtu.be/4QSp4M0JAtM
We are living through a quiet contradiction. The tech industry speaks constantly about building for young people: safer platforms, child-friendly chatbots, educational tools, and “family” products. Yet the most important stakeholder is frequently missing from the room: children. Decisions that shape their privacy, autonomy, well-being, and sense of reality are often made by adults guessing what is best, supported by lawyers and product managers under commercial pressure. I am not suggesting that children should carry the burden of safety.
I am saying the opposite: if we want safer AI, we need to design with children in mind, and that includes listening to them in ethical, age-appropriate ways.
CONTEXT AND BACKGROUND
The policy mood has shifted sharply in the past few months. In Europe, lawmakers are openly discussing minimum ages for access to online platforms and AI chatbots, alongside tighter controls on addictive and manipulative design. Reuters reported on a European Parliament resolution pushing for an EU-wide minimum age approach that explicitly includes AI chatbots, not just traditional social media. Whether or not the resolution becomes binding, it signals something important: regulators now understand that conversational AI can act like a social space.
In the UK, Ofcom’s first major “state of online safety” report under the Online Safety Act emphasises the practical measures that sit beneath high-level commitments: safer default settings, restricted discoverability, and reducing unsolicited contact for children. This is not about one-off content takedowns; it is about design choices that shape day-to-day experience.
Meanwhile, the platforms themselves are reacting. Reuters reported that Meta has halted teens’ access to AI characters globally while it redesigns its teen experiences and safety controls. That is a rare public admission that “AI companions” are not a neutral feature. They are a product category with risks that demand extra care.
INSIGHT AND ANALYSIS
Here is what we keep missing: you cannot “protect children” effectively if you don’t understand how children experience the product. Adults often focus on the obvious harms: explicit content, strangers contacting minors, and dangerous instructions. Children experience a wider set of harms that are easier to overlook: confusion, embarrassment, social pressure, manipulation through flattery, and the quiet erosion of boundaries when an AI feels like a friend. If we do not involve children and child-development experts, we will keep building safeguards that look good on paper but fail in real life.
This is why rights-based design matters. Children are not just “small adults” with less judgment. They are developing humans with different comprehension, different vulnerability, and different needs for control. A disclosure that satisfies a legal team may be meaningless to a 12-year-old. “We use your data to improve the service” is not an explanation; it is an adult euphemism. If a child cannot understand what a tool is doing, the child cannot meaningfully participate in their own protection.
The practical approach is to treat children’s understanding as a testable design requirement. The UN human rights office highlighted a recent initiative centred on children shaping their rights in the digital world, emphasising that young users want clearer protections, transparency, and meaningful ways to participate safely. That should land like a challenge to every product team building “for kids”: can a child explain what your product does, what it collects, and what to do when it feels wrong?
IMPLICATIONS
For companies building family-facing AI, “child participation” should not mean asking children to approve a finished product. It should mean ethically involving children and child-development experts early, with safeguards: parental consent, child assent, trauma-aware methods, minimal data collection, and independent oversight. Then, test comprehension in plain language. If children cannot explain the tool’s boundaries, your disclosures are not working.
Boards should also treat this as governance, not marketing. If your AI is likely to be used by minors, child impact belongs in your risk register alongside privacy and cyber. Reuters recently reported that OpenAI is rolling out age prediction on ChatGPT to identify likely minors and automatically apply additional protections, reflecting how quickly this is becoming an operational necessity, not a theoretical principle. Whether you agree with age prediction methods or not, the direction is clear: child safety requirements are moving closer to the core of product architecture.
For policymakers, the lesson is straightforward: requirements should focus on measurable practices, not vague intentions. Safer defaults, data minimisation, limits on persuasive design aimed at minors, and evidence of child-appropriate transparency should be the baseline. And yes, that includes products that are not officially “social media” but function like it.
CLOSING TAKEAWAY
We keep saying we are building the digital future for children, but too often we build it without them. The result is safety theatre: settings nobody finds, disclosures nobody understands, and policies that look strong until real children use real products. Children’s rights and voice are not a sentimental add-on; they are a practical safeguard. If we want family-facing AI that genuinely protects privacy, autonomy, and well-being, we need a new default: nothing about children without children, supported by experts and backed by accountable design.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net