Embattled 15-seconds-of-fame portal TikTok has become the latest social platform to ban the use of deepfakes by its users.
“We’re adding a policy which prohibits synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm. Our intent is to protect users from things like shallow or deep fakes, so while this kind of content was broadly covered by our guidelines already, this update makes the policy clearer for our users,” TikTok said in a blog post.
Although the primary motivation for the change is likely to be appeasement on Capitol Hill in an election year, most people will have heard of deepfakes in the context of pornography, where the technology is often used to create ‘celebrity’ sex tapes.
Many porn platforms have already banned the practice, with Pornhub citing the ‘non-consensual nature’ of these encounters as the reason. However, because they can be incredibly difficult to detect, investigations have found that a significant quantity of deepfake videos are still online.
Other platforms that have already banned the posting of deepfakes include Twitter, Reddit and Google.
The Google ban is perhaps the most significant. Any results that include deepfakes, which it describes as “involuntary synthetic pornographic imagery”, are excluded from searches, and that is a fate TikTok will want to avoid.
South Korea has already gone one step further, banning deepfakes altogether and declaring them a ‘sex crime’, while for the Chinese government, deepfakes already represent everything it doesn’t allow. The UK government has committed to reviewing its own laws around deepfake content, including so-called ‘x-ray apps’.
The company announced the new policies on the same day it was ‘banned’ in the US as the result of an Executive Order by Donald Trump. The ban will be lifted only if TikTok is sold to a US buyer (Microsoft is currently in pole position) by mid-September.