EU bans AI nudify apps under amended AI Act, with December deadline



AI “nudification” apps that non-consensually generate explicit content are set to become illegal in the European Union (EU), following a political agreement between the EU’s main lawmaking bodies.

The European Parliament and the Council of the EU reached an agreement on Wednesday (May 6, 2026) on amendments to the EU’s AI Act, after several rounds of talks and delays.

The amendments mainly concerned implementation timelines, but a new ban on “AI systems that generate non-consensual sexually explicit and intimate content or child sexual abuse material, such as AI ‘nudification’ apps” was also agreed. The ban will come into force on December 2, 2026.

Many countries and territories, including the US, UK and Australia, have recently moved to criminalize non-consensual explicit deepfake AI content. Most ‘nudification’ apps would be categorized as creating such material, but the EU ban via the AI Act will cover the software itself rather than focusing on the deepfake output, potentially giving regulators strong powers to address the issue at the source.

The AI Act is being touted as the world’s first comprehensive, large-scale AI law. It’s designed to regulate AI use across the EU and mitigate risks, with measures such as banning some manipulative AI and acts like governments socially scoring citizens. The AI Act will also enshrine AI transparency obligations, including disclosure requirements for deepfakes and chatbots. However, as we’ve noted previously, the Act’s treatment of adult content has been conspicuously thin. The nudify ban is a meaningful but narrow carve-out rather than a systematic approach. It addresses one of the most visible harms without touching the broader regulatory gaps that leave performers, adult creators, and sex workers exposed.

Responding to the new amendment incorporating the nudify app ban, Henna Virkkunen, the European Commission’s executive vice president for tech sovereignty, security and democracy, said: “Our businesses and citizens want two things from AI rules. They want to be able to innovate and feel safe. Today’s agreement does both.”

Big tech under nudify scrutiny

Nudify apps have become a growing cause for concern across multiple contexts. There have been reports of them being used by schoolchildren to create deepfake nude images of classmates. Deepfakes like these are already being used for sextortion. Most of the apps that generate them are free or close to it, which is part of what makes the supply-side approach (banning the software) potentially more effective than trying to prosecute individual outputs.

Recent research has shown how easy nudify apps are to obtain through mainstream app stores. In April it was revealed that Apple and Google had removed some AI ‘nudify’ apps from their app stores, after an investigation found that both stores were actively pushing users to apps that could create deepfake nude images from photos of clothed individuals.

Earlier in 2026, the tech research watchdog Tech Transparency Project (TTP) said that neither Apple nor Google was “effectively policing their platforms or enforcing their own policies” with regard to ‘nudify’ apps, after the TTP found large numbers of such apps on the Apple App Store and Google Play.

The TTP’s investigation identified 55 apps in the Google Play Store and 47 in the Apple App Store that could make sexual deepfakes in the style of nudify apps.

Both Apple and Google ostensibly have strict rules against hosting apps like these. Although they have removed some when investigations like these have surfaced, the new AI Act ‘nudify’ app ban will place far more scrutiny on the tech giants’ role in keeping such apps off their platforms.

The EU hasn’t announced specific penalties for producing nudify apps once the ban kicks in, but operating any AI system prohibited under the Act can result in fines of up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher.