The US and UK governments are launching a joint online children’s safety working group, with the aims of protecting minors from, and cracking down on, internet platforms that potentially expose children to a raft of online harms.
Details about how the joint working group will operate have not been revealed, but the UK government announced that both UK and US authorities will “work with our national institutions and organisations [sic]” to support its goals.
Those goals are largely focused on children’s use of social media, plus protections from online harassment, cyberbullying, abusive content and sexual abuse. The UK government’s announcement also mentioned AI-derived image-based sexual abuse, suggesting that AI deepfake abuse and porn content will be a priority too.
The joint working group announcement comes amid crackdowns on online porn and adult content, plus illegal sexual content, in both the US and UK.
The British government’s Online Safety Act is set to come into effect before the end of 2024, and is designed to crack down on social media companies and other online platforms that allow children to access adult content. Under the act, platforms will have to enforce effective age verification for accessing adult content and face fines of up to $22 million, or ten percent of their annual global turnover, if they don’t.
Within the UK, England and Wales are also making it illegal to create deepfake porn without the consent of those depicted in it.
In the US, many states have introduced more stringent age verification laws for accessing online porn. This has led Aylo, the owner of Pornhub, the world’s biggest porn site, to block access to its porn sites in many of those states.
The announcement of the US-UK working group mentioned the US government’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. The executive order, issued in 2023, directs the US secretary of commerce to produce a report identifying tools and methods used for preventing generative AI from producing child sexual abuse content. The executive order will also lead to guidance being issued on labeling content that has been generated by AI.
One major concern about AI-derived sexual content, even when it breaks no laws, is that it is not labeled as such and so could be mistaken for ‘real’ content.
The White House said that “deepfake image-based sexual abuse is an urgent threat that demands global action”. The US government called on internet and payment platforms, app stores and developers, cloud providers and other stakeholders to actively curb the spread of image-based sexual abuse.
Many tech companies, including Microsoft and Google, have already changed their tools and processes in an attempt to curb the spread of such material.
The UK government said that “online platforms, including social media companies, have a moral responsibility to respect human rights and put in place additional protections for children’s safety and privacy. Age-appropriate safeguards, including protections from content and interactions that harm children’s health and safety, are vital to achieve this goal.”
The US-UK working group could potentially make it easier for authorities in both countries to enforce new rules around AI and online porn. It has traditionally been tough to pin down globally used porn websites with country-specific laws, so greater communication between the US and UK on similar online content laws could make enforcement less siloed.
The move also helps to present a unified front as the problem of AI-generated nonconsensual deepfake porn and abuse content spirals. In South Korea, the government has gone as far as criminalizing watching deepfake porn, following what has been dubbed a “digital sex crime epidemic” in the country.
While many people welcome new efforts to crack down on platforms providing potentially harmful sexual content, some have raised concerns about censorship and free speech.
Recently, a coalition of porn and sex-adjacent industry workers and advocates released an open letter to the EU Commission regarding regulation of AI, raising concerns that sweeping anti-abuse measures could make legitimate uses of AI and sexual content inaccessible.
Announcing the new US-UK working group, the UK government’s Department for Science, Innovation and Technology said: “We encourage online platforms to go further and faster in their efforts to protect children by taking immediate action and continually using the resources available to them to develop innovative solutions, while ensuring there are appropriate safeguards for user privacy and freedom of expression.”
Meanwhile, on Thursday (October 17, 2024) Meta announced that Instagram will stop people from screenshotting content intended to be viewed only once, a move designed to help prevent nonconsensual sharing of intimate images and ‘sextortion’.
Critics have said that Meta should roll out functions like these to its other products, including WhatsApp.