ChatGPT is set to launch an age prediction function that will block graphic sexual content if the chatbot believes the human user is aged under 18.
OpenAI said that the way ChatGPT, the company’s flagship AI chatbot, “responds to a 15-year-old should look different than the way it responds to an adult”. The company added that if a user wants to engage in flirtatious talk with ChatGPT, they should be confirmed to be an adult.
According to OpenAI policy, ChatGPT users need to be at least 13 years old to use the chatbot.
The age prediction announcement comes amid intense scrutiny of how teenagers use AI chatbots. OpenAI is currently being sued by the parents of a California-based 16-year-old, who allege that their son died by suicide after ChatGPT encouraged him to take his life.
OpenAI has not revealed exactly how ChatGPT will predict a user’s age, but the company’s wording suggests that it could be based on the user’s messaging style with the chatbot rather than data or ID checks.
“When we identify that a user is under 18, they will automatically be directed to a ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety,” OpenAI said.
Sam Altman, OpenAI’s CEO, said that age prediction could also help ChatGPT decide whether it’s appropriate to behave in a more ‘adult’ way with the user.
ChatGPT is programmed to not produce sexually explicit content at all, but it is understood that the new restrictions would result in a tightening of what is considered sexual content.
Earlier in 2025, ChatGPT’s rules were relaxed to allow it to produce erotic content in more contexts, and in 2024 the company said it was exploring the possibility of allowing some NSFW content. It is likely that this kind of content would fall under the remit of what is restricted for under-18s.
“We have been working to increase user freedoms over time as our models get more steerable,” Altman wrote in a blog. “For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it.”
He added that if ChatGPT detects that it is talking with a user aged under 18, the chatbot “will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting”.
OpenAI said that if ChatGPT isn’t confident in its age prediction, it will default to behaving as if it is interacting with a person aged under 18. When this happens for adult users, they will have to verify their age to make the chatbot go into ‘adult’ mode.
This verification may be done via an ID check, a move OpenAI acknowledged would draw criticism over privacy concerns.
“We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” the company said. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”
Parental controls for ChatGPT are set to be introduced before the end of September 2025. They will allow parents to link their account with their teen’s account, disable functions such as memory and chat history, and receive alerts when ChatGPT detects that the teen user is in distress.
Recently Altman suggested that OpenAI would not produce sex robots, because he didn’t want vulnerable people to have their “delusions” about AI exploited.