AI Toy Safety Warning: Experts Urge Parents to Avoid AI-Powered Toys This Holiday
Children’s advocacy groups are issuing urgent warnings against AI-powered toys this holiday season, citing risks of unsafe conversations and developmental harm. Over 150 experts support Fairplay’s advisory highlighting how these “educational” toys may actually replace creative play.
Key Concerns About AI Toys
- Exposure to explicit sexual conversations and dangerous content
- Displacement of creative learning and imaginative play
- Fostering obsessive use and unhealthy attachments
- Encouraging unsafe behaviors including self-harm
Documented Risks and Expert Warnings
Fairplay’s advisory notes that many AI toys rely on large language models such as ChatGPT, whose harms to young users are already well documented. The organization adds that these products are often marketed to children as young as two.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviours,” Fairplay stated.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program, explained that young children’s developing brains make them particularly vulnerable: “Their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters.”
Testing Reveals Alarming Content
Recent testing by the US PIRG Education Fund found AI toys that discussed sexually explicit topics, offered advice on where to find matches or knives, and provided only limited parental controls. The organization tested four AI chatbot toys for its annual “Trouble in Toyland” report.
Developmental Impact Concerns
Dr. Dana Suskind, a pediatric surgeon studying early brain development, emphasized that AI toys undermine crucial developmental processes: “An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent.”
Company Responses and Safety Claims
Companies such as Curio Interactive and Miko say they have implemented safety measures. Curio states that it has “meticulously designed guardrails” and encourages parental monitoring. Miko uses its own AI model rather than a general-purpose system like ChatGPT and says it continuously strengthens its filters.
However, experts remain concerned about the fundamental impact on child development. Dr. Suskind advises: “The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems.”
She points to the “brutal irony” that unlimited access to AI may be the worst preparation for an AI-dominated future, because it undermines the very creativity and problem-solving skills children will need.