It’s generally accepted that AI is the new frontier in tech, and that new AI systems such as the GPT-4-powered ChatGPT represent a fundamental shift in how the internet is used. A development as significant as the rise of social media, perhaps, but one with the potential for even more dire consequences.
Recently, AI-assisted deepfake videos and photos have gone viral, artists have complained that AI artwork is being created from their work without credit or payment, and users have objected to image generators producing sexualized AI selfies. Society is scrambling to get a handle on ethical AI issues both minor and major.
Little surprise, then, that an AI backlash is brewing, with notable tech figures adding their voices to the chorus of concern. On March 22, 2023, the Future of Life Institute, a US non-profit research institute, published an open letter calling for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”, to allow time to develop and implement AI safety protocols.
Tesla, Twitter and SpaceX mogul Elon Musk added his name to the letter, along with Apple co-founder Steve Wozniak. As of April 5, the letter had over 10,100 verified signatures, with Future of Life saying that more than 50,000 further signatures had been submitted but were yet to be verified.
While those numbers aren’t earth-shattering, the prominence of many of the signatories helped the letter gain global traction. Other signatories include Pinterest co-founder Evan Sharp, Getty Images CEO Craig Peters, and multiple staff at influential AI research company DeepMind.
In the adult content and sextech sphere, AI has devastating potential. Deepfake revenge porn videos have already been used nefariously online. AI porn image generators raise concerns about the rights of the models and porn producers whose work the tools may have been trained on. And people have been using AI chatbots to generate messages for matches on dating apps in order to secure in-person meetings.
Some skeptics have gone further than the Future of Life Institute: Eliezer Yudkowsky, decision theorist at the Machine Intelligence Research Institute, has called for AI development to be shut down completely.
AI at the speed of… regulation
Like most fast-moving tech fields, AI has developed at a pace far beyond any corresponding change in legislation and safety protocols. There have been notable reactions, such as the toughening of UK laws around deepfake video sharing, but it’s still the Wild West.
It makes sense, then, to pause and let the ethical framework around AI catch up with its development and rollout. Future of Life is calling for the development and implementation of a set of “shared safety protocols for advanced AI design and development”, to be “rigorously audited and overseen” by independent experts.
Practical suggestions for what those protocols might entail include watermarking AI-generated content so it can be easily distinguished from human-made content, which could aid deepfake detection. The letter also calls for more funding for AI safety research, along with greater liability for AI-caused harm.
The institute asked: “Should we let machines flood our information channels with propaganda and untruth?”
The answer, of course, is no. And with the open letter steadily gaining traction, it feels like the nascent AI backlash could soon spill over into genuine change.