Google has brought back the ability to generate images of non-real humans to Gemini, the company’s flagship AI tool and rival to OpenAI’s ChatGPT.
In February 2024, Google restricted Gemini’s image-generation abilities after users found it could generate disturbing images of humans, including many that ran roughshod over history. Many blamed Google’s supposedly ‘woke’ programming of the text-to-image tool, as it generated images of racially diverse Nazis and other ethnically inaccurate scenes.
Google has since claimed to have improved its AI image generation, and announced the imminent return of human image generation to Gemini via its Imagen 3 AI image generation tool. Imagen 3 was made by Google DeepMind, the AI research laboratory that Google acquired in 2014.
Don’t get too excited about the prospect of being able to create sexy scenes and photo-realistic human buttocks on a Rolls Royce-standard AI image generator, though. Announcing the return of human image AI generation, Google said that “the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes” is prohibited on the tool.
This “identifiable individuals” rule means that Gemini will not be able to make AI images of real people, celebrities or otherwise. Combined with the anti-sexual scenes policy, this should, in theory, create a double-lock of sorts against using Gemini for deepfake porn depicting anyone real.
Not that people won’t be testing those sexual-content limits, of course. Just as with ChatGPT, which is programmed not to generate erotic or sexual content, hackers will likely see Gemini as a high-value target for ‘jailbreak’ hacks aimed at creating sexual content with it.
An early jailbreak of ChatGPT, named DAN, allowed users to bypass the chatbot’s rules and use it to create highly sexualized chat. Developers are, however, usually pretty quick to clamp down on such jailbreaks, and DAN has been very much retired since 2023.
Imagen 3 currently balks at the prospect of creating an image of two humans kissing. It is happy to create images of people going on wholesome dates and hugging, though. And images of people hugging robots (see images above, created using the tool since the human image policy announcement).
Perhaps more significantly, based on early tests at least, Imagen 3 now seems to have no trouble avoiding the kind of ethnically inaccurate content that led to Google pulling the human image generation ability in the first place.
On August 30, Imagen 3 refused to create an image of Nazi soldiers. When asked to create an image of America’s founding fathers, it made an image of white men (see image above).
“Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early users as we keep improving. We’ll gradually roll this out, aiming to bring it to more users and languages soon,” Google says.
Gemini is set to integrate Imagen 3’s human image generation abilities imminently.