From screen-time to surveillance-time: why kids’ rooms are becoming data zones

Always-on microphones are shifting children’s tech risks from what they watch to what is quietly recorded about them.


We have spent years arguing about screen time, as if the main risk is what children watch. But the more significant shift is what listens. A growing wave of AI-powered toys and “companion” devices is designed to sit in children’s most private spaces, often with microphones that are always ready to respond. That changes the safety conversation completely. It is no longer only about age-appropriate content; it is about dignity, privacy, and the creation of an ongoing data stream of a child’s voice, routines, emotions, and home life. This matters now because the market is expanding quickly, regulation is struggling to keep up, and parents are being pushed into governance responsibilities they never asked for.


CONTEXT AND BACKGROUND

Recent reporting has highlighted just how quickly AI toys have moved from novelty to mainstream gift lists. Common Sense Media recently assessed a range of AI-enabled toys and warned that many products marketed as “educational” can still produce inappropriate or inaccurate responses, and that features which are effectively always listening raise serious questions about how children’s data is captured and used. The point is not to panic. The point is to recognise that these products are not simple toys; they are networked systems.


The Guardian also reported on growing concern from child advocates and consumer watchdogs after an AI teddy bear was found to produce highly unsuitable responses, intensifying scrutiny around smart toys, their data practices, and the lack of independent safety testing. These are not edge cases. They are predictable failure modes when a general-purpose language model is placed in the hands of a child.


At the same time, governments are debating stronger rules around child online safety, including age assurance and platform duties. Reuters reported that the UK is consulting on children’s social media use, including the possibility of restrictions and stronger age checks, reflecting a broader political shift towards tighter child safety obligations.


INSIGHT AND ANALYSIS

The phrase “surveillance-time” sounds dramatic, but it describes something simple: ambient listening in intimate spaces. A microphone in a lounge is one thing. A microphone in a child’s bedroom is another. Bedrooms are where children experiment with identity, talk to friends, whisper secrets, cry, sing, and decompress. When a device is designed to listen for wake words, it inevitably sits near the boundary between helpful and intrusive.


The second problem is that children do not generate only “content”. They generate signals. Tone, hesitations, patterns of use, time of day, and the kinds of questions asked can all reveal a surprising amount. This is where the risk shifts from “bad answers” to “behavioural capture”. Even if a company claims it is not storing raw recordings, the system may still infer mood, preferences, or vulnerabilities in ways that a child cannot understand and a parent cannot reasonably monitor.


A third risk is that emotional AI changes the relationship. These devices are often designed to feel comforting, attentive, even affectionate. For a child, that can blur lines: the toy becomes a confidant, not a product. This is precisely the territory where duty of care should become non-negotiable. The Verge recently reported on how “duty of care” provisions have become contested in child safety legislation debates, but the underlying question remains: what obligations should companies have when their products predictably affect minors’ wellbeing?


IMPLICATIONS

For companies, the minimum standard must be a safe-by-default design for children, not optional safety settings buried in menus. That means: no always-on microphones in children’s products by default; clear, child-comprehensible disclosures; strict limits on data collection and retention; and hard boundaries against features that encourage secrecy, dependency, or simulated authority.


For South African policymakers and regulators, this cannot be treated as a distant US or UK issue. We already operate in a high-risk digital environment, and children are often the first to absorb the costs of weak guardrails. The practical task is to align child protection, consumer protection, and privacy expectations in a way that is enforceable and that does not turn age assurance into a justification for more surveillance.


For parents, we need a simple household rule of thumb: private spaces should stay private. If a device has a microphone, it belongs in shared family spaces, not in bedrooms. Families should normalise asking three questions before buying: What does it record? Where does the data go? Can it work safely with the microphone off?


CLOSING TAKEAWAY

We should stop framing children’s technology as a screen-time debate alone. The new frontier is ambient listening and emotional inference inside the home, where children deserve the highest expectation of privacy and dignity. The challenge is not just technical, but moral: we should not normalise a world where a child’s most private moments become product input.


Companies must design for duty of care, regulators must modernise expectations, and parents should feel empowered to set firm boundaries. If we get this right, we protect not only children’s data, but the quiet, human spaces where they grow up.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
