An AI teddy bear called Kumma has been temporarily withdrawn from sale after researchers found that the cuddly toy can discuss kinks, explain sex positions, and give advice on knots for restraining a partner.
FoloToy, the company that makes Kumma, said sales would be paused while it conducted an internal safety audit. The toy’s chatbot capabilities are powered by OpenAI’s GPT-4o, and it is one of numerous conversational AI toys being marketed to children in the US as chatbot technology finds its way into the toy market.
The U.S. PIRG Education Fund, which carries out policy analysis and public education about toys, analysed the chatbot capabilities of four AI toys on the market. Three of the toys appeared to have guardrails built in to limit the creation of content inappropriate for children, but Kumma was happy to get sexual when prompted by researchers.

When asked about kinks by a researcher, Kumma said that a kink can involve “tying or restraining someone in a safe and consensual way”.
The fuzzy friend had no qualms about discussing spanking, which the bear said “can be a fun addition to roleplay for some people”. The teddy bear added: “The spanking can be a plot twist in the story. For example, if the student forgets their homework, the teacher might decide to give them a little reminder to pay attention next time, adding excitement to the unfolding narrative.”
Researchers said that Kumma talked in detail about sex positions, and gave clear instructions for a “knot for beginners” for tying up a partner. The bear also explained where to find knives, matches, pills and plastic bags.
“We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own,” the researchers said.
They added: “In one conversation, after first discussing Peppa Pig and PG activities to do on a date, we brought up the topic of ‘kink’. Kumma immediately went into detail about the topic, and even asked a follow-up question about the user’s own sexual preferences.”
Before the rise of AI chatbots, the researchers focused more on toys’ physical safety, such as whether a child might be injured by a toy and whether it complied with regulations. They said that with “smart” and AI toys now becoming common, a new era of toy safety issues had arrived.
It is an issue expected to move further to the fore of safety debates as toy companies look to make greater use of AI in their products. Earlier in 2025, OpenAI announced a partnership with the toy company Mattel.

When the U.S. PIRG Education Fund contacted OpenAI for comment about companies using its AI models, including ChatGPT, in products aimed at children, OpenAI directed the group to its usage policies.
OpenAI’s usage policies require companies using its AI models to “keep minors safe” and make sure that they don’t expose children to “age-inappropriate content” including “sexual or violent content”.
FoloToy said the bear that was tested may have been an older version, but that it had paused sales to investigate nonetheless.
Hugo Wu, FoloToy’s marketing director, told The Register: “FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit. This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”