A new and somewhat petrifying report has highlighted serious Android security failings in many hugely popular companion and adult AI chatbot apps, including flaws that could potentially give hackers access to private erotic messages.
The companion chatbot sector has grown fast and remained largely unregulated, a combination that the new findings suggest has had predictable consequences. The report, by app security firm Oversecured, shows how apps with millions of downloads potentially put users’ most private erotic messages at risk of exposure.

Oversecured analyzed 17 popular AI companion apps available on Google Play, and found what the company called 14 critical security flaws across them. According to Oversecured, ten of the apps offered hackers a “direct path” to users’ conversation histories.
The company said that “basic security” had never been part of the apps’ design, and the report arrives at a time of growing pressure to properly regulate romantic AI chatbots.
Chattin’ in a hackers’ paradise
Oversecured has not named the apps with the security flaws in its report, or published technical details of the flaws, because the vulnerabilities remain unpatched and could be exploited by bad actors. The app names are findable with minimal effort, but SEXTECHGUIDE isn’t naming them while the flaws go unfixed.
The company did say that some of the chatbots examined had tens of millions of downloads. The 17 AI chatbots, including those found to have vulnerabilities, had been collectively downloaded over 150 million times.
Some of these apps allow erotic content, attracting users looking for personal sex chat. For at least six of the AI ‘girlfriend’ apps, hackers could potentially access user conversations linked to real identities, conversations that might contain users’ explicit fantasies and desires: prime extortion material.
One app, with more than ten million downloads, was found to ship with hardcoded credentials, meaning that a hacker who reverse-engineered it could potentially reach the app’s backend and billing infrastructure.
Another app, also with over ten million downloads, had shipped hardcoded cloud credentials in its public code, potentially unlocking the app’s full chat database, as well as all of its users’ app-related financial records, for anyone who extracted them.
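Oversecured hasn’t published the offending code, so the sketch below is an invented illustration of this class of mistake, not the actual flaw: every name, endpoint, and key in it is hypothetical. The point is that hardcoded secrets survive compilation, so anyone who decompiles the APK with a free tool such as jadx can read them and inherit whatever access they grant.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical reconstruction of the anti-pattern; not code from any app in the report.
// String constants like these survive compilation and are readable by anyone who
// decompiles the APK with a free tool such as jadx.
object LeakyCloudConfig {
    const val ACCESS_KEY_ID = "AKIAEXAMPLEEXAMPLE"          // invented placeholder
    const val SECRET_ACCESS_KEY = "wJalrEXAMPLE/SECRET/KEY" // invented placeholder
    const val CHAT_BUCKET = "example-companion-chats"       // invented placeholder
}

// The safer pattern: ship no long-lived secrets in the client at all. Instead, have
// the backend exchange the user's session for a short-lived, narrowly scoped token.
// The endpoint below is invented for illustration.
fun fetchScopedStorageToken(sessionToken: String): String {
    val conn = URL("https://api.example.com/v1/storage-token").openConnection() as HttpURLConnection
    conn.setRequestProperty("Authorization", "Bearer $sessionToken")
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```

A key extracted from the first pattern gives an attacker the same reach as the app itself; a token issued by the second expires quickly and only touches one user’s data.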
One app with over 50 million downloads had a weakness that could allow a malicious advert to directly query databases storing users’ conversations.
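Again, the report withholds specifics, but one well-documented way an advert ends up able to query an app’s databases is an over-privileged JavaScript bridge handed to the WebView that renders ad content. The sketch below is an invented illustration of that class of flaw under that assumption; it is not the vulnerability Oversecured found, and every name in it is hypothetical.

```kotlin
import android.content.Context
import android.webkit.JavascriptInterface
import android.webkit.WebView

// Invented illustration of the flaw class, not the actual vulnerability in the report.
// The app renders third-party ad HTML in a WebView and exposes a bridge object to it.
class ChatBridge(private val context: Context) {
    // Any JavaScript in the WebView - including a malicious ad creative - can call
    // this method and run arbitrary queries against the on-device chat database.
    @JavascriptInterface
    fun query(sql: String): String {
        val db = context.openOrCreateDatabase("chats.db", Context.MODE_PRIVATE, null)
        db.rawQuery(sql, null).use { cursor ->
            val rows = StringBuilder()
            while (cursor.moveToNext()) rows.appendLine(cursor.getString(0))
            return rows.toString()
        }
    }
}

fun attachAd(webView: WebView, context: Context) {
    webView.settings.javaScriptEnabled = true
    // The fatal step: handing untrusted ad content a direct line to user data.
    webView.addJavascriptInterface(ChatBridge(context), "ChatBridge")
    webView.loadUrl("https://ads.example.com/creative.html") // untrusted third-party HTML
}
```

A bridge like this gives any script in the ad creative the same reach as the app’s own code; scoping the bridge to narrow, parameterised operations, or removing it from ad views entirely, closes that path.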
The list goes on, and it’s unpleasant reading.
The regulation question
Medical apps are heavily regulated in many countries, with strict rules about user data management and security. Companion and romantic AI chatbot apps are not, despite many being specifically designed to extract deeply personal information and conversation from users.
Sergey Toshin, Oversecured’s founder, said: “The AI companion category handles a different but equally sensitive type of data as therapy apps — personal confessions, relationship details, sexual content. These apps grew so fast that basic security was never part of the process.”
Pressure on AI companies to make companion chatbots safer and less addictive has been rising. In California, a “first-of-its-kind” bill was recently passed that will make chatbot companies legally responsible for building safeguards into their products, such as having chatbots regularly remind users that they are not human.
The pressure has risen partly due to the case of a Florida-based 14-year-old who died by suicide in 2024, after allegedly using the Character.ai chatbot almost constantly. The child’s mother has sued the company in relation to the death. Character.ai later introduced a “teen mode”, which she described as “too late”.
Scrutiny has tended to focus on potential behavioural harm: addiction, emotional manipulation, and things like the risk of crisis escalation without human intervention. Data security has been treated as a secondary concern, despite a track record of companion app leaks that suggests it shouldn’t be. In 2025, tens of millions of intimate messages and hundreds of thousands of photos were leaked from the AI girlfriend apps Chattee Chat and GiMe Chat.
It wasn’t the first time an adult platform’s failure to secure a database turned users’ private lives into a liability — the alleged BangBros leak, which exposed 12 million records including geolocations, is one precedent, and scam loan apps have weaponized users’ intimate images directly against them.
Hopefully it will be reports such as Oversecured’s, rather than more leaks, that highlight the need for full and proper chatbot app security, perhaps driven by far stricter regulation.