Non-consensual pornographic deepfakes almost exclusively target women
Hundreds of explicit deepfake videos featuring female celebrities, actresses and musicians are being uploaded to the world’s biggest pornography websites every month, new analysis shows. The non-consensual videos rack up millions of views and porn companies are still failing to remove them from their websites.
Up to 1,000 deepfake videos were uploaded to porn sites every month as they grew increasingly popular during 2020, figures from deepfake detection company Sensity show. The videos continue to break away from dedicated deepfake pornography communities and into the mainstream.
Deepfake videos hosted on three of the biggest porn websites, XVideos, Xnxx, and xHamster, have been viewed millions of times. The videos are surrounded by adverts, helping to make money for the sites. XVideos and Xnxx, which are both owned by the same Czech holding company, are the first and third biggest porn websites in the world and rank among the ten biggest sites across the entire web. Each attracts as many visitors as Wikipedia, Amazon or Reddit, or more.
One 30-second video, which appears on all three of the above sites and uses actress Emma Watson’s face, has been viewed more than 23 million times, including 13 million views on Xnxx. Other deepfake videos with hundreds of thousands or millions of views feature celebrities such as Natalie Portman, Billie Eilish, Taylor Swift and Indian actress Anushka Shetty. Many of these celebrities have been targeted repeatedly since deepfakes first emerged in 2018.
“The attitude of these websites is that they don’t really consider this a problem,” says Giorgio Patrini, CEO and chief scientist at Sensity, which was until recently called DeepTrace. Deepfake pornography videos are widely considered to target, harm, and humiliate the women placed at their core. Patrini adds that Sensity has increasingly seen deepfakes made of other public figures, such as Instagram, Twitch and YouTube influencers, and worries that the advancement of deepfake tech will inevitably see members of the public targeted.
“Until there is a strong reason for them [porn websites] to try to take them down and to filter them, I strongly believe nothing is going to happen,” Patrini says. “People will still be free to upload this type of material without any consequences to these websites that are viewed by hundreds of millions of people.”
Many of the videos are hiding in plain sight – they’re uploaded to be watched, after all. Some videos include “fake” or “deepfake” in their titles and are tagged as being a deepfake. For instance, tag pages on XVideos and Xnxx list hundreds of the videos.
However, the full scale of the problem on porn websites is unknown. There will probably never be a true picture of how many of these videos are created without people’s permission.
Representatives of XVideos and Xnxx did not respond to repeated requests for comment on their attitudes and policies towards deepfakes.
Alex Hawkins, VP of xHamster, says the company doesn’t have a specific policy for deepfakes but treats them “like any other non-consensual content”. Hawkins says the company’s moderation process involves multiple steps and that it will remove videos if people’s images are used without permission.
“We absolutely understand the concern around deepfakes, so we make it easy for it to be removed,” Hawkins says. “Content uploaded without necessary permission being obtained is in violation of our Terms of Use and will be removed once identified.” Hawkins adds that the dozens of apparent deepfake videos on xHamster highlighted by WIRED have been passed on to its moderation team for review.
The deepfake upload figures seen by WIRED did not include Pornhub, the second biggest porn website, which banned deepfakes in 2018 but still has problems with the videos.
“There has to be some kind of thinking about what we do about this when women are embarrassed and humiliated and demeaned in this way on the internet, and it really is like a question about privacy and security,” says Nina Schick, a political broadcaster and the author of Deepfakes and the Infocalypse.
Since the first deepfakes emerged from Reddit in early 2018, the underlying artificial intelligence technology needed to make them has advanced. It’s getting cheaper and easier for people to make deepfake videos: in one recent example, a security researcher created video and audio of Tom Hanks using open-source software and less than $100.
The tech advancements have led to increased fears around deepfakes being used to manipulate political conversations. While there have been some early examples of this happening, the threat has largely failed to materialise. However, deepfake porn, where the technology was first used, has flourished. Hollywood actress Kristen Bell said she was “shocked” when she first found out deepfakes were made using her image. “Even if it’s labelled as, ‘Oh, this is not actually her,’ it’s hard to think about that. I’m being exploited,” she told Vox in June.
The number of deepfakes online is growing exponentially. A report from Sensity released last year found 14,678 deepfake videos online in July 2019 – 96 per cent of them were porn and almost all focussed on women. By June this year the number had climbed to 49,081.
The majority of deepfake porn is found on, and created by, dedicated communities. The top four deepfake porn websites received more than 134 million views last year, Sensity’s 2019 analysis shows. One deepfake porn website is full of videos featuring celebrities and contains videos of Indian actresses that have been watched millions of times. Some videos state they were made to order, and their creators say they can be paid in Bitcoin.
“Some of this technology is improving so fast, because there’s so much energy and drive, unfortunately, from the creators’ side,” Patrini says. “I think we’re going to be seeing it applied very soon with much larger intent to private individuals.” He believes that once the technology is easy for anyone to use, there will be a “tipping point” at which lawmakers become aware of the problems.
Clare McGlynn, a professor at Durham Law School who specialises in pornography regulations and sexual abuse images, agrees. “What this shows is the looming problem that is going to come for non-celebrities,” she says. “This is a serious issue for celebrities and others in the public eye. But my long-standing concern, speaking to individual survivors who are non-celebrities, is the risk of this coming down the line.”
At the moment, the legal options for people featured in deepfake videos have not kept up with the technology. In fact, the law was never prepared for the impact of AI-generated porn. “If a pornographic picture or video of you goes up online, your legal options for taking it down vary wildly,” says Aislinn O’Connell, a law lecturer at Royal Holloway, University of London.
People can pursue non-consensual uploads through defamation claims, human rights laws, copyright complaints and other routes. However, most of these processes are onerous and resource-intensive, and often don’t apply to deepfakes at all. “We need more and better solutions now,” O’Connell says.
Some deepfake laws have been passed in US states, but these largely focus on politics and ignore the impact that deepfakes are already having on people’s lives. In the UK the Law Commission is conducting a review into the sharing of intimate images online, which includes deepfakes, but it is expected to take years before any changes can be made. O’Connell proposes that England adopt image rights laws so people can properly protect themselves.
However, while lawmakers fail to deal with the problem, the technology is set to become cheaper and easier for all to use. “I see the evolution of deepfakes in the pornographic space as actually the harbinger of the bigger civil liberties issues that are going to emerge,” Schick says.
“This technology is out there and it is evolving at a rate that is much faster than society can keep up with,” she adds. “We are not ready for the age of synthetic media, where even video becomes something that almost anybody can corrupt.” To fight this, Schick says, multiple groups need to be involved: technologists, the public, domain-specific experts, policy officials, and lawmakers. Right now, however, that’s not happening.
Matt Burgess is WIRED’s deputy digital editor. He tweets from @mattburgess1