Deepfake porn: It’s not just about Taylor Swift

Published 5:00 am Saturday, February 3, 2024

Last week, fake sexually explicit images of Taylor Swift went viral on X, prompting a swift backlash from her fanbase and drawing attention to an issue that has plagued women and girls for years: deepfake porn. 

The images were created using Microsoft’s Designer artificial intelligence image generator, according to 404 Media. 

Following 404’s report, Microsoft (MSFT) officials said they strengthened safety systems and closed the loopholes that enabled users to generate the images in the first place.  

Related: Taylor Swift is the latest victim of ‘disgusting’ AI trend

The incident is a symptom of a much wider, much older problem. 

More than six years ago, Samantha Cole reported for Motherboard on the early days of the proliferation of AI-assisted deepfake celebrity porn. At the time, creating such material — which then targeted Gal Gadot, Scarlett Johansson, Aubrey Plaza and Swift, among others — took anywhere from a few hours to a few days. 

“This new type of fake porn shows that we’re on the verge of living in a world where it’s trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex,” Cole wrote. 

That world is here. 

Recent investigations by 404 Media have highlighted the nonconsensual deepfake porn supply chain, one in which users can easily — and very quickly — use AI image generators to create believable, explicit images of almost anyone. 

A recent Stanford investigation found hundreds of examples of child sexual abuse material (CSAM) in an open dataset that was used to train popular AI image generators, including Stable Diffusion. 

And last year, students at a New Jersey high school used AI image generators to create and spread pornographic images of more than 30 female classmates. 

Legal protections against such abuse are not widespread; no federal law prohibits either the creation or the dissemination of nonconsensual deepfake porn. 

Several states — including Texas, New York, Minnesota, Virginia, Georgia and Hawaii — have passed legislation to combat the issue, but each state’s legislation varies in strength, with some addressing only the dissemination of such material, rather than the creation of it. 

And though most states have laws against revenge pornography, Syracuse University Professor Nina Brown, who studies the intersection of media law and technology, said that the majority of such laws would not cover porn created by deepfake technology. 

“There is clearly an opportunity for the legislature to address this, and maybe we owe Taylor Swift thanks because just five days after she was the victim of a nonconsensual pornographic deepfake, the bipartisan DEFIANCE Act was introduced in Congress,” Brown told TheStreet. 

The act, introduced Wednesday, would provide a federal civil remedy for victims identifiable in deepfaked pornographic images, enforceable against individuals who created, distributed, received, or possessed such images with intent to distribute them. 

Law enforcement officials, according to the New York Times, are meanwhile bracing for a significant upsurge in AI-generated child sexual abuse material. 

“Perhaps not surprisingly, the vast majority of deepfakes online are pornographic images of women. Women who by and large lack the star power it takes to get X — or anyone — to spring into action in their defense,” Brown said. “This is why we need the federal government to put meaningful laws in place to deter and punish those who are involved in the creation and dissemination of nonconsensual deepfake pornography.”

Related: Facebook whistleblower explains why Mark Zuckerberg’s latest hearing is different than the others

Cybersecurity expert: A technical solution may not be possible

With regulation lagging behind technological progress, Lisa Plaggemier, executive director of the National Cybersecurity Alliance, said the bigger problem is a simple one: AI, like the internet, is fundamentally flawed when it comes to security. 

If the internet had been built with security in mind, Plaggemier told TheStreet, the dark web, for one, wouldn’t exist. If AI models had been built with security in mind, the current scenario of deepfake porn could potentially have been avoided, but, she said, the “genie’s out of the bottle.” 

“I just wish that we would learn from that situation. If you don’t design these things with security in mind, and you don’t think about how they can be abused, they’re going to be abused,” Plaggemier said. “It’s human nature.”

Guardrails preventing the creation of deepfake porn are a possible solution, she said, but tech companies have no financial incentive to build them, and without regulation, no requirement to do so. 

And with lobbying efforts ingrained in U.S. politics — Apple (AAPL), Microsoft (MSFT), Google (GOOGL), Meta (META) and Amazon (AMZN) spent a combined $69 million lobbying Congress in 2022 — she doesn’t expect useful regulation to make its way through anytime soon. 

And with the government still attempting to get a handle on regulating social media companies, Plaggemier isn’t optimistic that Congress will be able to move quickly on AI. Regulating the technology presents a massive gray area, she said, particularly in where legislation would draw the line between common decency and censorship, which further complicates the idea that the government can simply regulate the problem away. 

“I don’t even know if a technical solution is possible,” she said. “Look how we struggled with this with social media. It’s going to be the same sort of problem with AI.”

Related: New platform seeks to prevent Big Tech from stealing art

A new relationship with technology

The only feasible response to the current situation, according to Plaggemier, is one of individual and parental responsibility, something that needs to be bolstered by better efforts to educate the public about the dangers of AI. 

“I worry more about our children and what teens will do to each other, not understanding the implications, not understanding the damage that can be done,” she said. “I worry more about that than I do about Big Tech in this situation, because I don’t think there’s a quick easy solution with Big Tech; I think parents, teachers and students need to be aware.”

Plaggemier has noticed a rising trend of parents waiting longer before giving their kids internet-connected devices, and of kids treating such technology with more suspicion than they have in the past. 

Recent polling by the Artificial Intelligence Policy Institute (AIPI) found that 93% of U.S. voters surveyed are concerned about the ability of AI to create deepfake child pornography; 63% said that the creators of the models used to generate such material should be held liable. 

Parents, Plaggemier said, should consider monitoring more closely what their children are doing when they’re using a device, and should get more comfortable with the parental monitoring settings on the apps their kids use. 

“Until we have regulation or until the technology providers are willing to police themselves — neither of which I see happening anytime soon — we will lose people if we don’t do more education around these topics and if parents aren’t paying more attention and getting more involved in the technology that their children are using,” she said. 

The balance between trust and privacy, and the wisdom of letting kids essentially “play in traffic on the internet,” is something Plaggemier said parents and teachers should revisit. 

“This has the capability to do the most harm to our children. Taylor Swift can get some mental health help and manage her way through this situation probably a lot better than your average 13-year-old in middle school,” she said. “And so that’s my bigger concern: Are we arming teachers and parents with enough information about this?”

Related: Human creativity persists in the era of generative AI

Taylor Swift’s legal options 

Though there are a few options open to Swift if she were interested in pursuing legal recourse, the key practical problem with such a measure, according to Syracuse University law professor Shubha Ghosh, is “identifying the right set of defendants to bring the possible claims against.”

In theory, Ghosh told TheStreet, “she could raise defamation suits against the company and people who shared the image.”

Elizabeth Moody, chair of the New Media practice at law firm Granderson Des Rochers, agreed, saying that, since the images were fake and made to look real, they might constitute defamation. 

A stronger argument, according to Moody, would fall under the right of publicity, which protects against the misappropriation of a person’s name or likeness for commercial gain and is recognized by most states, though not at the federal level. 

Moody said that Swift could also argue an invasion of privacy. 

Reports have said that Swift is furious about the situation and is considering legal action. Her publicist did not respond to TheStreet’s request for comment immediately following the incident. 

As to the issue of defendants, Moody said that there’s almost no point in attempting to find and go after the people who created the images. Instead, much as the New York Times, the Authors Guild and other organizations have sued AI companies such as OpenAI and Microsoft for copyright infringement, she said Swift’s best option would be to go after the companies behind the models, though that question of liability has yet to be settled in court. 

The courts, for instance, have not yet decided whether or how Section 230 — which protects internet companies from liability for content published by someone else on their platform — applies to generative AI. 

Moody is somewhat optimistic that, because this happened to a figure as beloved and renowned as Swift, it might move the needle on legislation and shift the incentives at play, creating a “good business reason for some of Big Tech” to establish better guardrails against the creation of deepfake porn. 

“Because it happened to Taylor Swift — it sounds terrible to say — it may actually be a good thing that it happened to somebody who has such a massive fan base and such wide appeal and also tends to really advocate for herself and people in her position,” Moody said. “I think she’s going to do something.”

“We have to change the law,” she added, “but I think even if she were to win a state lawsuit, it’s going to be a lot easier to change the law.”

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: The ethics of artificial intelligence: A path toward responsible AI
