
Deepfake porn fears: AI-generated faces as realistic as real faces, research finds

Jamie F
Updated February 28, 2022
Published February 23, 2022

Researchers studying images of faces created by artificial intelligence (AI) have called for new guidelines, warning that the technology could be misused to create super-realistic deepfake revenge porn.

A new report into recognition of AI-generated faces shows that participants couldn’t distinguish between real and fake photographs of faces.

Sophie J Nightingale, psychology lecturer at the UK’s Lancaster University, and Hany Farid, computer science professor at the University of California, called for guidelines in their research paper: AI-Synthesized Faces are Indistinguishable From Real Faces and More Trustworthy.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors,” they wrote, with the aim of cutting off misuse of the technology for deepfake revenge porn.

Faces participants viewed in the research, to judge whether they were AI-generated or real

The researchers asked participants to judge whether images of faces they viewed were real human faces, or had been created by an AI system that generated fake face photos. Participants achieved 48.2 percent accuracy – slightly below chance – suggesting they were effectively guessing when trying to identify AI-generated faces.
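To see why near-chance accuracy amounts to guessing, the quick calculation below runs a simple normal-approximation test of 48.2 percent against the 50 percent chance level. The sample size `n` is a hypothetical figure chosen purely for illustration, not a number taken from the study.

```python
import math

# Back-of-envelope check: is 48.2% accuracy distinguishable from coin-flip
# guessing? n is a hypothetical number of judgments, not the study's own.
def z_vs_chance(accuracy, n, chance=0.5):
    se = math.sqrt(chance * (1 - chance) / n)  # standard error under guessing
    return (accuracy - chance) / se

z = z_vs_chance(0.482, 1000)
# |z| below 1.96 means the result is statistically indistinguishable
# from chance at the usual 5% significance level.
print(round(z, 2), abs(z) < 1.96)
```

Even with 1,000 judgments assumed, 48.2 percent falls comfortably inside the range expected from pure guessing.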

To create the fake face images, the researchers used generative adversarial networks (GANs): AI systems in which one network generates candidate images while a second network, trained on photographs of real faces, tries to tell the fakes apart from the real thing. The two networks improve against each other until the generated faces become highly realistic.

The researchers also found that participants rated the AI-generated faces as more trustworthy than the real faces they viewed. “Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable – and more trustworthy – than real faces,” they wrote.

Nightingale told Screenshot that she was concerned that hyper-realistic AI-generated face images could be used to create believable revenge porn content.

“Anyone can create synthetic content without specialized knowledge of Photoshop or CGI… we should be concerned because these synthetic faces are incredibly effective for nefarious purposes, for things like revenge porn or fraud,” she said.


The recent rise of deepfake videos – which let users superimpose images of individuals onto existing footage so it appears as if they were in the original video – has fuelled concerns such as Nightingale’s. Apps claiming to let users create deepfake porn videos with ‘one click’ have already been released.

In the paper, the researchers said that the people and companies developing deepfake technology need to take responsibility for how it could be misused.

They wrote that they should “consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media.”

Safety measures suggested by the researchers included building watermarks into image and video synthesis networks, and restricting the public release of source code.
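As a toy illustration of the watermarking idea, the sketch below hides a short identifying tag in the least-significant bits of an image’s pixels. This is a deliberately simplified example: the watermarks the researchers propose would be embedded by the synthesis network itself and designed to survive compression and editing, which a naive LSB scheme does not.

```python
# Illustrative only: a least-significant-bit (LSB) watermark on a grayscale
# image represented as a flat list of 0-255 pixel values. The tag and image
# below are made-up example data.

def embed_watermark(pixels, tag_bits):
    """Overwrite the least-significant bit of each leading pixel with a tag bit."""
    out = list(pixels)
    for i, bit in enumerate(tag_bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the tag bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the tag back out of the leading pixels' LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

tag = [1, 0, 1, 1, 0, 0, 1, 0]               # hypothetical "synthetic media" tag
image = [200, 17, 54, 89, 120, 33, 7, 250, 64]
marked = embed_watermark(image, tag)
```

Each pixel changes by at most one intensity level, so the mark is invisible to a viewer but trivially machine-readable – the same trade-off, in miniature, that real provenance watermarks aim for with far more robust encodings.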

They wrote that because it is “the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.”

Read Next: Taiwan’s considering prison sentences of up to 7 years for deepfake porn crimes

If you’ve experienced image-based sexual abuse or non-consensual sharing of intimate images, help is available worldwide.

For a complete international directory of support services, visit the Cyber Civil Rights Initiative’s international resources.

Explore the topics in this article
  • Deepfake
  • Regulations
  • Revenge Porn
Article by
Jamie F is a freelance writer, contributing to outlets such as The Guardian, The Times, The Telegraph, CNN and Vice, among others. He is also the creative force behind the Audible podcast Beast Master.
By the same author…
  • Planet Taboo 2026: Australia’s hybrid sextech conference wants to connect the global industry for $7
    Jamie F, February 4, 2026
  • Ofcom promises more fines as Pornhub blocks unverified UK users
    Jamie F, February 4, 2026
  • Apple and Google ignored own policies while nudify apps hit 700 million downloads
    Jamie F, January 30, 2026
