A US non-profit research organization has called for AI companies to standardize rules for AI toys, after finding major AI models being used in children’s toys despite age restrictions.
The U.S. PIRG Education Fund, which conducts policy analysis and public education about toys, found that third-party developers had been allowed to use AI models, including those made by OpenAI, Anthropic and Google, in children’s toys.
OpenAI bans children aged under 13 from using its AI models, and Anthropic bans those aged under 18 from using its models. Google only allows children under 13 to use its AI model Gemini if they do so through a parent-managed Google account.

The researchers’ new report is called Not For Kids. Found in Toys: How AI Companies’ Loose Rules For Developers Put Kids At Risk. It follows another recent report from the U.S. PIRG Education Fund, which found that some AI toys marketed at children in the US were capable of chatting about sex and kink.
In the new report, researchers found that, despite those age restrictions, it was easy for third-party developers to put major AI models into toys that could be sold to children.
Google, Meta, OpenAI and xAI all granted the researchers developer access to their AI models without significant checks on how that access would be used. Anthropic was the only company tested that asked whether the developer access would be used in products for minors, and requested clarification on usage.
After getting developer access to the companies’ AI models, the researchers were able to quickly create three AI chatbots simulating a talking teddy bear, each of which could have been placed inside a toy.
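To illustrate how low that barrier is, here is a minimal sketch of the kind of thing a paying developer could build: a “talking teddy bear” persona wrapped around a general-purpose model via the OpenAI Python client. The model name, persona prompt and overall structure are our own illustrative assumptions, not the researchers’ actual code, and nothing here reflects any toy on sale.

```python
# Illustrative sketch only: how little code sits between a paid developer
# account and a child-facing chatbot persona. The model name and prompt
# below are placeholders, not taken from the report or any real product.
from openai import OpenAI

client = OpenAI()  # reads the developer's API key from OPENAI_API_KEY

TEDDY_PERSONA = (
    "You are a friendly talking teddy bear. You chat with a young child "
    "in short, cheerful sentences."
)

def teddy_reply(child_says: str) -> str:
    """Send the child's words to the model and return the bear's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name for illustration
        messages=[
            {"role": "system", "content": TEDDY_PERSONA},
            {"role": "user", "content": child_says},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(teddy_reply("Hi teddy, will you be my friend?"))
```

The specific API matters less than the pattern: each major provider offers an equivalent developer endpoint, and nothing in a flow like the one above asks who will be on the other end of the conversation.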

Researchers also found over 20 AI toys on sale in the US that claimed to use an AI model by OpenAI, the same company whose usage policies nominally ban under-13s from direct access. At least five toys on sale online claimed to use AI models by Google, and toys claiming to use models by Anthropic and xAI, the company behind Grok, were also on sale.
Structural, not incidental
The findings raise the prospect of children using AI chatbots in toys that circumvent age restrictions. The problem is structural, not incidental. And it sits awkwardly alongside the EU AI Act’s notably light-touch treatment of AI in consumer products aimed at children, which, as we’ve noted before, tends to leave the people most affected with the least recourse.
The U.S. PIRG Education Fund said the findings highlight “a market for kids’ AI products where the job of ensuring child safety is largely left up to unvetted third parties.”
The researchers said: “Users under 13 aren’t allowed to use ChatGPT directly – but a third-party developer can pay for access to the same model and put it in a toy. The companies that say their chatbots aren’t for children are the same ones powering the toys the kids in your life may be talking to.”
They added: “AI companies should standardize their rules around child-directed products. If a company makes an AI model that is not safe for children, it should not, as a general rule, allow developers to deploy that model in children’s products.”

Not the first time
In 2025 the U.S. PIRG Education Fund found that Kumma, an AI teddy bear made by the company FoloToy, was capable of discussing kink, sex positions and bondage.
Following that report, FoloToy pulled the bear from sale, but in the new report researchers said the bear could still be set to a mode running GPT-5.1, an OpenAI model, and was still doing so at the time of the report, even though OpenAI maintains that FoloToy is banned from using its models. Make of that what you will before reading OpenAI’s response.
OpenAI told the researchers: “Minors deserve strong protections and we have strict policies that all developers are required to uphold.”
The company added: “We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors.”