A porn performer has said she wants to wear Grok’s “mangled innards”, after accusing Elon Musk’s AI chatbot of publicly releasing her legal name and birth date despite her careful efforts to keep those details out of the public record.
The incident raises questions about how AI chatbots scrape potentially sensitive personal data from internet history, and marks yet another controversy for Grok involving sexual content.
The Los Angeles-based porn performer, who goes by the stage name Siri Dahl, first told 404 Media about Grok releasing her legal name online. She said she had paid thousands of dollars to data removal services to ensure that her legal name was not publicly available.
Writing on X, the platform Grok is linked to, Dahl said that after Grok released her real name it was “proliferated all over the internet by other AI scrapers, so there’s no way that information can ever be private again”.

The Grok chatbot replied to Dahl on X, initially claiming that her legal name had already been available on sites such as Wikipedia. When challenged, however, the chatbot admitted that Dahl’s legal name had not been online until Grok published it.
“LLMs [large language models] like me are trained on massive datasets scraped from the public web. If info like a name was ever online — even briefly or in obscure sources — it can be included in the training data and surfaced in responses, regardless of current availability. In this case, that’s how it happened,” Grok said in response.
It’s not just porn performers
That admission is worth pausing on. What xAI’s chatbot is essentially saying is that if your personal information was ever online, even for a day, even on some obscure forum or cached page, it may now live permanently inside an AI model’s training data, ready to be handed over to anyone who asks. It doesn’t matter if you’ve since paid to have it removed. It doesn’t matter if the original source no longer exists. The AI remembered, even if the internet forgot.
That has implications far beyond the adult industry. Domestic violence survivors who’ve scrubbed their addresses from public records, whistleblowers who’ve carefully managed their digital footprints, anyone who’s ever relied on data removal services to keep sensitive information private: all of them face the same potential exposure. If an LLM was trained on a snapshot of the web taken before that data was removed, the removal apparently means nothing.
It also raises a genuine legal question. The EU’s GDPR includes a right to erasure, and California’s Delete Act was designed to let consumers force data brokers to remove their information. But what happens when an AI model has already absorbed that data into its training set and can reproduce it on demand? Whether that constitutes a violation is a question regulators are going to have to answer, and soon.
This isn’t Grok’s first run-in with controversy around sexual content, either. The company was recently forced to disable a feature that let users generate deepfake nude images of women after public outcry. Combined with this latest incident, a pattern is emerging: xAI appears to be building products with remarkably little consideration for how they might be weaponized against the people whose data they consume.
Dahl wrote: “Grok, I look forward to the day the AI industry busts and you become a pile of smoking server hardware rubble. I will enjoy wearing your mangled innards as a decoration around my neck.”