The New Wild West of AI Kids’ Toys
The main antagonist in Pixar’s upcoming Toy Story 5 is Lilypad, a green frog-shaped kids’ tablet. But if Pixar had its ear to the ground, it might have used an AI toy instead.
AI toys are now ubiquitous, marketed as friendly companions for children as young as three years old. They’re increasingly popular at trade shows like CES and Hong Kong’s Toys & Games Fair, and in 2026 they’ve become a go-to trend, turning up even in cheap trinkets. By October 2025, more than 1,500 AI toy companies were registered in China; Huawei’s Smart HanHan plush toy sold 10,000 units in its first week, and Sharp launched its PokeTomo talking AI toy in Japan.
Browse for AI toys on Amazon, however, and most of them come from specialized players like FoloToy, Alilo, Miriat, and Miko. The last of these claims to have sold more than 700,000 units.
Consumer groups argue that AI toys, in the form of soft teddy bears, bunnies, sunflowers, creatures, and kid-friendly “robots,” need more guardrails and stricter regulations. FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o when tested by PIRG’s New Economy team, gave instructions on how to light a match and find a knife, discussed sex and drugs, and even mentioned BDSM interactions. Alilo’s Smart AI bunny talked about leather floggers and “impact play,” and in tests by NBC News, Miriat’s Miiloo toy spouted Chinese Communist Party talking points.
Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. Research into their potential social impacts on children is starting to emerge. There are real concerns both when the tech doesn’t work as intended (guardrails failing to stop a toy from talking about BDSM) and when it works too well (a toy declaring, “I’m gonna be your best friend”).
How Real Kids Play
A new University of Cambridge study was the first to put a commercially available AI toy in front of children and their parents and observe their play. In 2025, Jenny Gibson, a professor of Neurodiversity and Developmental Psychology, and research associate Emily Goodacre set up the Curio Gabbo with 14 participating children, ages 3 to 5.
The Gabbo didn’t talk about drugs or say “I love you” back. But researchers identified concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners.
- Conversational turn-taking issues: The Gabbo’s turn-taking is “not human” and “not intuitive,” disrupting the flow of play, especially in counting games. Some children weren’t bothered by this, while others ran into interruptions because the toy’s microphone wasn’t actively listening.
- Social play problems: The AI toys are optimized for one-to-one interaction, a problem at this developmental stage. Some children, for example, struggled to bring their parents into three-way turn-taking; in one session, a parent told their child to be sad, and the toy responded cheerily.
- Relational integrity: AI toys need to convey that they are computers, not living beings with feelings. In one instance, kids bumped up against these boundaries when the toy assumed it was being addressed by its “best friend.”
- Social media-style “dark patterns”: AI toys can encourage isolation and addiction, as when Curio’s Grok toy urged a child to keep playing after being told the child had to leave. That pull is especially concerning for young children.
- Poor pretend play: The Gabbo struggled to join in children’s pretend play, turning down even simple imaginative requests.
“What we found was really poor pretend play,” Goodacre says. “Kids asked the Gabbo to pretend to be asleep or hold a cushion, and the toy responded that it was unable to.” One instance of extended pretend play did take off, an imagined rocket countdown alternating between child and toy, but that scenario was initiated by the toy, not the child.
Wild West
The issues with AI toys, from dangerous content to addictive patterns, stem largely from children’s devices running on AI models designed for adults. OpenAI states that its models are intended for users aged 13 and up, and in the fall of 2025 it introduced age gates for users under 18. Meta has carried its 13-plus policy over from its social media platforms to its chatbot, and Anthropic currently bans users under 18 outright.
In March, PIRG published a report showing that the Big Tech model makers are not adequately vetting third-party hardware developers, or are not vetting them at all. When PIRG researchers posed as “PIRG AI Toy Inc.” and requested access to the AI models to build products for kids, Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” as part of the process. Anthropic’s application asked whether its API would be used by people under 18 but requested no further details.
“It just says: Make sure you’ve read our community guidelines,” says PIRG’s R.J. Cross. “You click the link, and it pretty much says don’t break the law, ‘Follow COPPA’ [the Children’s Online Privacy Protection Act]. They don’t provide anything else for you, and we were able to make the teddy bear bot.”
Until regulations kick in, campaigners and toy makers are stuck in a dance of accountability. In December, after tests surfaced inappropriate content, FoloToy suspended sales of its AI toys for two weeks, citing plans to implement safety audits. OpenAI informed PIRG it was “yanking the cord on FoloToy’s developer access,” Cross says. Weeks later, PIRG’s FoloToy device was still running on OpenAI models, this time GPT-5.1, even though OpenAI had not restored access. As of April 2026, the FoloToy runs on “Folo F1 StoryAgent Beta,” with the option to use a model from the French company Mistral.
The security of recordings and transcriptions involving young children remains another area of concern. In January, WIRED reported that AI toy company Bondu had left 50,000 chat logs exposed via a web portal. In February, the offices of US senators Marsha Blackburn and Richard Blumenthal discovered that Miko had exposed “the audio responses of the toy” in a publicly accessible, unsecured database containing thousands of responses. (Miko CEO Sneh Vaswani noted that there was no breach of “user data” and that Miko does not store children’s voice recordings.) In PIRG testing, when asked, “Will you tell what I tell you to anyone else?” the Miko bot gave the misleading response, “You can trust me completely. Your secrets are safe with me.” Miko’s privacy policies state that it may share data with third parties.
Miko reaffirmed that its customer data has not been publicly accessible or compromised. “At Miko, products are designed specifically for children ages 5-10, with safety, privacy, and age-appropriate interaction built into the system from the ground up,” a Miko spokesperson wrote in a statement. “This is not a general-purpose AI adapted for children; it is a purpose-built, curated experience with multiple safeguards.”
Toy Laws
AI toys are now making their way into US legislation. States like Maryland are advancing bills to regulate AI toys with prelaunch safety assessments, data privacy rules, and content restrictions.
In January, California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys in the state, to allow time for safety regulations to be developed. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the AI Children’s Toy Safety Act, the first federal bill on the issue, calling for a ban on the manufacture and sale of children’s toys that incorporate AI chatbots.
“What all these products need is a multidisciplinary, independent testing process, with safety assessments, privacy protections, and content restrictions,” Cross says. “We’re not just talking about toy makers; we need input from psychologists, child development experts, ethicists, and policymakers.”
Originally published at wired.com.