EU AI Act Compliance

The AI Act mostly ignores adult content. That’s not entirely good news



We humans like to imagine explicit things, and we especially like to use any technology available to illustrate and communicate said explicit things.

We’ve used rock painting for explicit imagery, photography to create spicy tokens you could secretly collect, and legend says porn was one of the original drivers behind the spread of VHS. Every time we create new technology, it’s usually a given that we use it for our smutty inclinations.

And this is no different with AI.

My work often sits at the intersection of sexuality and digital rights, and whenever I find myself at a table with important people and mention that adult content is one of the biggest use cases for AI, I can sense both the lightbulbs going on and the general discomfort that comes up when sex is mentioned.

But even though we might be uncomfortable discussing it, we’re very comfortable turning to (non-judgmental) AI to help generate all of our wildest fantasies.

Proof of this is how the Internet has been populated with spicy chatbots, AI-generated boyfriends and girlfriends, and ‘nudify’ apps.

OnlyFans creators are promised the ultimate solution to skyrocket profits with AI chatters, while the mainstream is up in arms about deepfakes – but what does the legislation say about the intersection of adult content and AI, and what does it mean for your business?

Well, we will break it all down – and you might not be surprised to find there are a lot of gaps, especially when it comes to privacy protection.

Legal Disclaimer

This article is for informational purposes only and does not constitute legal advice. The AI Act is complex, still being implemented, and how it applies to your specific business will depend on factors we can’t assess from here.

If you’re unsure about your compliance obligations — particularly around AI-generated content, chatbots, or content moderation — consult a lawyer familiar with EU digital regulation and the adult industry.

We’re here to help you understand the landscape, not to replace proper legal counsel.


The AI Act

The AI Act is a piece of legislation proposed by the European Commission in 2021; it entered into force in August 2024 and is being rolled out in stages through 2026. It’s the first comprehensive piece of legislation concerned specifically with AI, and it aims to regulate the technology in a way that keeps it “human and fair,” though specialists believe it falls a bit short.

If you’ve read our article on the DSA, you’ve seen that it works like a framework, rather than super-specific laws, as it’s trying to deal with something that is constantly (and rapidly) changing – the Internet. The AI Act functions similarly, but it is also concerned with defining what we mean by AI in the first place.

By definition, the AI Act states that:

‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

So, in short, the AI Act defines AI broadly as, essentially, any machine-based system that infers outputs from inputs, whether that’s predictions, content, recommendations, or decisions. It’s deliberately vague, reflecting the fact that we’re still figuring out what AI even means.

Who are the players in the AI Act?

Because the definition is kept that broad, the Act doesn’t try to regulate specific tools. Being a framework law, it understands the technology from an actor-based perspective, assigning obligations to the different roles in the AI supply chain and anticipating the changes that will come with the years.

EU AI Act Stakeholders: key roles and responsibilities in AI compliance

  • Providers (e.g. OpenAI, Google, Clearview AI) – build and develop AI systems. They must ensure compliance with EU rules, risk assessments, documentation, and CE marking before release.
  • Deployers (e.g. OnlyFans, KaufMich, platforms using AI moderation or age checks) – use AI systems in their operations. They must apply them responsibly, monitor outcomes, and ensure human oversight.
  • Importers (EU companies bringing in AI tools from abroad) – verify that non-EU systems meet EU standards before they enter the market.
  • Distributors (vendors or resellers offering AI products in the EU) – check that systems have correct documentation and CE marking, and stop selling non-compliant systems.
  • Notified Bodies & Authorities (the EU AI Office, national regulators) – supervise, audit, and enforce compliance, especially for high-risk AI systems.

Similar to the DSA, the AI Act also operates under the idea of risk, meaning that the higher the risk, the higher the responsibility an actor has. However, the approach to risk is different.

The DSA is more concerned with the systemic risks involved in using different platforms and how to mitigate them. The AI Act takes a tiered approach to risk, focusing on the AI system itself: what it does, how it was built, and whether its use is inherently dangerous.

Ranking risks

According to the AI Act, we have the following taxonomy of risks:

  • Unacceptable risk: systems that are fully banned within the EU, such as social scoring and real-time biometric surveillance.
  • High risk: systems involved in policing, hiring, or health. Their providers and deployers face strict obligations to remain compliant.
  • Limited risk: systems such as chatbots, deepfakes, and AI-generated intimate imagery (including intimate image abuse). For most of these, the AI Act requires transparency, meaning content should be clearly labelled as AI. Deepfakes are not banned under the Act.
  • Minimal risk: the most lenient classification, covering everyday “general-use AI” – systems such as recommendation algorithms and content moderation.

To put this in practical terms: your AI girlfriend chatbot is “limited risk.” Instagram’s algorithm deciding to shadowban your account might be “minimal risk.” Real-time facial recognition at a protest would be “unacceptable risk” — unless the police claim terrorism, in which case exceptions apply.
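
If it helps to see the tiers side by side, here’s a toy Python sketch that maps the use cases discussed in this article onto the four tiers and summarises what each tier roughly implies. The use-case names and the tier assignments reflect this article’s reading of the Act, not an official classification.

```python
# Illustrative only: a toy mapping of the use cases discussed above to the
# AI Act's four risk tiers. Tier assignments mirror this article's reading
# of the Act, not an official or legally binding classification.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "realtime_biometric_surveillance": "unacceptable",
    "hiring_screening": "high",
    "companion_chatbot": "limited",        # transparency obligations apply
    "deepfake_generation": "limited",      # must be labelled as AI-generated
    "recommendation_algorithm": "minimal",
    "content_moderation": "minimal",
}

TIER_SUMMARIES = {
    "unacceptable": "banned in the EU (narrow law-enforcement exceptions aside)",
    "high": "strict pre-market and deployment obligations",
    "limited": "transparency: users must be told they are dealing with AI",
    "minimal": "no specific obligations beyond existing law",
    "unknown": "not covered by this toy mapping; assess case by case",
}

def obligations_for(use_case: str) -> str:
    """Return a one-line summary of what the tier roughly implies."""
    tier = RISK_TIERS.get(use_case, "unknown")
    return f"{use_case}: {tier} risk -> {TIER_SUMMARIES[tier]}"

if __name__ == "__main__":
    for case in ("companion_chatbot", "content_moderation", "social_scoring"):
        print(obligations_for(case))
```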

You might be raising your eyebrows right now: this taxonomy can look a bit insufficient, especially when large models like ChatGPT operate across so many different sectors. And of course, it raises the question: what does it all mean for the adult industry?

The adult world of AI

Key Dates

  • April 2021 – AI Act proposed by the European Commission
  • March 2024 – European Parliament approves the AI Act
  • August 2024 – AI Act enters into force
  • February 2025 – Prohibitions on unacceptable-risk AI apply
  • August 2025 – Obligations for general-purpose AI models apply
  • August 2026 – Full enforcement for high-risk AI systems

A lot of the use cases for AI, especially generative AI in the adult industry, are considered limited or minimal risk: chatbots, romantic companions, deepfakes, and content moderation. This has its pros and cons.

On the one hand, it means fewer headaches when it comes to compliance. However, it also means a lot of ground left uncovered.

For instance, it doesn’t concern itself with how deepfake porn is created. That’s a serious privacy gap, because these images are routinely used for extortion and coercion. It also does little to protect performers, who often have their content stolen and “decapitated” – industry shorthand for videos where a performer’s face is replaced with an AI-generated one, often without consent or compensation.

This matters because the AI Act’s deepfake provisions only apply when the generated content ‘resembles a real person’ — language that may not cover cases where a performer’s body is used but their face is swapped out. Performers are left in a legal grey zone.

Here’s a concrete implication: limited-risk systems must disclose that they’re AI. That means the booming market of AI chatters for OnlyFans creators is now operating in legally uncertain territory: if you’re using AI to respond to paying subscribers without disclosure, you may not be compliant.
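
To make that concrete, here’s a minimal Python sketch of one way a platform could attach a disclosure to bot-written replies before they go out. The wording, field names, and flow are our own assumptions for illustration; the Act requires transparency, but it doesn’t prescribe a specific format or implementation.

```python
# A minimal sketch of what "disclose that it's AI" could look like in practice
# for a creator platform using automated chatters. The disclosure wording,
# class names, and flow are hypothetical assumptions, not anything the Act
# or any specific platform prescribes.

from dataclasses import dataclass

AI_DISCLOSURE = "[Automated reply] This message was generated with AI assistance."

@dataclass
class ChatMessage:
    sender: str
    body: str
    ai_generated: bool

def prepare_outgoing(message: ChatMessage) -> ChatMessage:
    """Attach a visible disclosure to any AI-generated reply before it is sent."""
    if message.ai_generated and not message.body.startswith(AI_DISCLOSURE):
        return ChatMessage(
            sender=message.sender,
            body=f"{AI_DISCLOSURE}\n{message.body}",
            ai_generated=True,
        )
    return message

# Example: an AI chatter drafts a reply to a paying subscriber
draft = ChatMessage(sender="creator_account",
                    body="Hey, thanks for subscribing!",
                    ai_generated=True)
print(prepare_outgoing(draft).body)
```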

Content moderation, one of the most pervasive uses of AI in the adult industry, has been classified as minimal risk, which means that, in many cases, platforms might still get away with discriminating against sex-positive content.

There’s one important thing to note, however: AI-enabled social scoring is banned by the AI Act. That means that if accounts are being blocked on Instagram based on past behaviour, there could be a case to be argued that this counts as social scoring.

One emerging area to watch, though it’s not yet widespread, is emotion recognition technology. While still in its infancy, it could develop to scan people’s faces in videos and make diagnostic analyses about them.

For instance, pornographic videos could have their performers assessed by AI for age assurance or even to check for consent or emotional distress, which, considering how inaccurate this biometric technology is, could be disastrous for the industry.

Even if it seems like a money saver at first, sex tech companies are better off being sceptical of these solutions and always pushing for human oversight.
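
As a rough illustration of what “human oversight” can mean in practice, here’s a short Python sketch in which automated assessments (moderation flags, age or consent checks) below a confidence threshold are queued for a human reviewer instead of being acted on automatically. The threshold, labels, and field names are illustrative assumptions, not anything the Act prescribes.

```python
# A rough sketch of the "human oversight" point above: automated assessments
# below a confidence threshold go to a human reviewer rather than being
# auto-actioned. The threshold, labels, and field names are illustrative
# assumptions only.

from dataclasses import dataclass, field
from typing import List

HUMAN_REVIEW_THRESHOLD = 0.95  # assumption: anything less certain goes to a person

@dataclass
class AutomatedAssessment:
    content_id: str
    label: str        # e.g. "consent_unclear", "underage_suspected"
    confidence: float

@dataclass
class ReviewQueue:
    pending: List[AutomatedAssessment] = field(default_factory=list)

    def route(self, assessment: AutomatedAssessment) -> str:
        """Queue low-confidence assessments for a human; auto-action the rest."""
        if assessment.confidence < HUMAN_REVIEW_THRESHOLD:
            self.pending.append(assessment)
            return "sent to human review"
        return "auto-actioned (still logged and appealable)"

queue = ReviewQueue()
print(queue.route(AutomatedAssessment("vid_123", "consent_unclear", 0.62)))
print(queue.route(AutomatedAssessment("vid_456", "age_verified", 0.99)))
```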

Finally, one big red flag that stands out is that systems in the banned category can still be used where serious crime, human trafficking, or terrorism is suspected. This poses a serious risk to criminalised communities such as sex workers, who might have their real-time biometric data scraped and their location disclosed.

And given the current political climate and increasing criminalisation of adult content in some jurisdictions, these exceptions could expand.

The bottom line

The AI Act is a framework built for general AI concerns, not the specific realities of the adult industry. For now, most sex tech and adult businesses fall into low-risk categories with lighter compliance obligations. But the gaps are real: performers are underprotected, surveillance exceptions are concerning, and the rules around AI-generated content are still being tested.

The practical takeaway? If you’re using AI bots to respond to humans on platforms like OnlyFans, you probably need to disclose that. If you’re a performer, the Act won’t protect your likeness the way you might hope. And if you’re in a criminalised or semi-criminalised corner of the industry, the surveillance exceptions should worry you.

This is a space worth watching — and worth organizing around. The EU’s deregulation agenda may keep pressure light for now, but that can change quickly.

The Short Version
  • Using AI chatters to talk to subscribers? You likely need to disclose that.
  • Creating or hosting deepfake content? The Act won’t stop you, but it won’t protect performers either.
  • Relying on AI for content moderation? You’re in “minimal risk” territory – which means limited oversight, but also limited protection from discriminatory enforcement.
  • Working in a criminalised or semi-criminalised part of the industry? Watch the surveillance exceptions closely.