A “first-of-its-kind in the nation” bill has been signed into law in California. It makes AI chatbot companies legally responsible for implementing safeguards, such as reminding users that the chatbot isn’t human and preventing minors from being exposed to sexual content through chatbots.
The bill, SB 243, was written by Democratic state senator Steve Padilla and introduced in 2025. It was signed into law by California governor Gavin Newsom on October 13 and comes into effect on January 1, 2026.
The bill is likely to have a significant effect on companion and ‘romantic’ AI chatbots in California. Politicians involved in the law cited the 2024 death by suicide of a 14-year-old in Florida; the child’s mother sued AI chatbot company Character.AI in relation to the death, saying her child had used the chatbot almost constantly.
Under the new California law a companion chatbot is defined as an AI language system providing human-like responses capable of meeting a human user’s social needs, including by “exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions”.
Companies making AI companion chatbots for use in California will now have to program their chatbots to remind users that they are artificial, at least once every three hours of usage time. They must also implement protocols for dealing with suicidal ideation and self-harm, such as referring users to support services.
AI companion chatbots will also not be allowed to expose minors to sexual content, or to tell child users that they should engage in sexual activity. They must present disclosures stating that companion chatbots may not be suitable for children.
A statement from Padilla’s office said that the bill was the first of its kind in the US. The statement says the bill would require companion chatbot companies to implement “critical, reasonable, and attainable safeguards” and would “provide families with a private right to pursue legal actions against noncompliant and negligent developers”.
Padilla added: “This technology can be a powerful educational and research tool, but left to their own devices the tech industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships.”
He continued: “These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health. The safeguards in Senate Bill 243 put real protections into place and will become the bedrock for further regulation as this technology develops.”
The bill passed at a time when some major AI chatbot companies are introducing racier versions of their products. Sam Altman, OpenAI’s CEO, recently announced that a new version of ChatGPT will be able to create erotica for verified adult users, where previously it had been programmed not to respond to requests for ‘adult’ content.
Grok, the AI chatbot linked to X, recently launched a series of AI companion characters, including some with sexualized imagery that can engage in racy sexual chat.
The wider picture
California’s SB 243 is part of a growing global trend toward regulating AI and digital platforms, particularly where they intersect with sexual content and vulnerable users.
The European Union adopted the world’s first comprehensive AI law in March 2024. The AI Act, which will be fully enforced from August 2026, requires AI systems to clearly label themselves as artificial and sets data standards for “high-risk” AI applications. However, as sex workers and advocates have pointed out in an open letter coordinated by the Digital Intimacy Coalition, there is a “critical gap” in AI regulation discourse when it comes to sexual content. The coalition argues that sex industry voices need to be heard in these discussions, warning that overly broad censorship could harm legitimate sex workers and sex-positive content creators while trying to tackle nefarious uses of AI.
Meanwhile, the UK has taken a different regulatory approach through its Online Safety Act, which came into full enforcement in July 2025. The law requires websites publishing pornographic content to implement “highly effective” age verification. Major platforms including Pornhub, Reddit, and Bluesky have introduced ID checks, banking verification, and selfie systems for UK users.
However, the UK law has had unintended consequences for smaller sites and for accessibility. One sex blogger deliberately disabled their website’s accessibility features for UK users, removing audio recordings of erotic stories that had been created specifically for blind and visually impaired readers. The bizarre result stems from the law’s definition of “pornography”: text erotica remains legal and freely accessible, but audio versions of the exact same content now require age verification that small independent sites cannot afford to implement.
As California becomes the first US state to regulate AI companion chatbots, questions remain about whether other states and countries will follow suit, and whether they will learn from both the successes and the problems of these early regulatory attempts.