Taylor Swift Deepfake Scandal Shows A Larger Problem

While the concern around generative AI has so far mainly focused on the potential for misinformation as we head into the U.S. general election, the possible displacement of workers, and the disruption of the U.S. education system, there is another real and present danger — the use of AI to create deepfake, non-consensual pornography.

Last month, fake, sexually explicit photos of Taylor Swift circulated on X, the platform formerly known as Twitter, where they remained for several hours before they were taken down. One post garnered over 45 million views, according to The Verge. X later blocked search results for Swift’s name altogether in what the company’s head of business operations described as a “temporary action” for safety reasons.

Swift is far from the only person to be targeted, but her case is yet another reminder of how easy and cheap it has become for bad actors to take advantage of the advances in generative AI technology to create fake pornographic content without consent, while victims have few legal options.

Even the White House weighed in on the incident, calling on Congress to legislate, and urging social media companies to do more to prevent people from taking advantage of their platforms.

The term “deepfakes” refers to synthetic media, including photos, video and audio, that have been manipulated through the use of AI tools to show someone doing something they never actually did.

The word itself was coined in 2017 by a Reddit user whose profile name was “Deepfake” and who posted fake pornography clips on the platform using face-swapping technology.

A 2019 report by Sensity AI, a company formerly known as Deeptrace, found that pornographic content accounted for 96% of deepfakes online.

Meanwhile, websites run by 34 providers of synthetic non-consensual intimate imagery drew a combined 24 million unique visitors in September, according to Similarweb traffic data cited by Graphika.

The FBI issued a public service announcement in June, saying it has noticed “an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats.”

“We are angry on behalf of Taylor Swift, and angrier still for the millions of people who do not have the resources to reclaim autonomy over their images.”

– Stefan Turkheimer, vice president of public policy at the Rape, Abuse & Incest National Network (RAINN)

Federal agencies also recently warned businesses about the danger deepfakes could pose for them.

One of the many worrying aspects of deepfake porn is how easy and inexpensive it has become to create, thanks to the wide array of available tools that have democratized the practice.

Hany Farid, a professor at the University of California, Berkeley, told the MIT Technology Review that perpetrators once needed hundreds of pictures to create a deepfake, including deepfake porn; today’s tools are sophisticated enough that a single image suffices.

“We’ve just given high school boys the mother of all nuclear weapons for them,” Farid added.

While the circulation of the Swift deepfakes brought much-needed attention to the topic, most victims lack her platform and resources.

“If this can happen to the most powerful woman on the planet, who has, you could argue, many protections, this could also happen to high schoolers, teenagers, and it actually is happening,” Laurie Segall, a veteran tech journalist and founder and CEO of Mostly Human Media, a company exploring the intersection of technology and humanity, told HuffPost.

Indeed, many women, including lawmakers and young girls, have spoken out about appearing in deepfakes without their consent.

“We are angry on behalf of Taylor Swift, and angrier still for the millions of people who do not have the resources to reclaim autonomy over their images,” Stefan Turkheimer, the vice president of public policy at the Rape, Abuse & Incest National Network (RAINN), said in a statement.

Florida Senate Minority Leader Lauren Book, a survivor of child sexual abuse, has previously revealed that sexually explicit deepfakes of her and her husband have been circulated and sold online since 2020. But Book told People she only learned of them more than a year later, when she contacted the Florida Department of Law Enforcement about threatening texts from a man who claimed to have topless images of her.

The 20-year-old man was later arrested and charged with extortion and cyberstalking. In the wake of the incident, Book sponsored SB 1798, which, among other things, makes it illegal to “willfully and maliciously” distribute a sexually explicit deepfake. Florida Gov. Ron DeSantis (R) signed the bill into law in June 2022.

Book told HuffPost she still has to confront the existence of the deepfake images to this day.

“It’s very difficult even today, we know that if there’s a contentious bill or an issue that the right doesn’t like, for example, we know that we have to search online, or keep our eye on Twitter, because they’re going to start recirculating those images,” Book told HuffPost.

Florida state Sen. Lauren Book speaks to the media on Feb. 6, 2023, in the Senate Office Building at the Capitol in Tallahassee. (Phil Sears via Associated Press)

Francesca Mani, a New Jersey teenager, was among about 30 girls at her high school who were notified in October that their likenesses appeared in deepfake pornography, allegedly created at school by classmates using AI tools and then shared with others on Snapchat.

Mani never saw the images herself, but her mother, Dorota Mani, said the school’s principal told her that four other students had identified Francesca in them, according to NBC News.

Francesca Mani, who has created a website to raise awareness of the issue, visited Washington with her mother in December to press lawmakers to act.

“This incident offers a tremendous opportunity for Congress to demonstrate that it can act and act quickly, in a nonpartisan manner, to protect students and young people from unnecessary exploitation,” Dorota Mani said.

While a small number of states, including California, Texas and New York, already have laws targeting deepfakes, they vary in scope. Meanwhile, there is no federal law directly targeting deepfakes — at least for now.

A bipartisan group of senators on the upper chamber’s Judiciary Committee introduced the DEFIANCE Act last month, which would allow victims “who are identifiable in a ‘digital forgery’” to seek civil penalties. The bill defines the term as “a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic.”

“Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real,” Chair Dick Durbin (D-Ill.) said. “By introducing this legislation, we’re giving power back to the victims, cracking down on the distribution of deepfake images, and holding those responsible for the images accountable.”

However, Segall pointed out that research has shown perpetrators are “likely to be deterred by criminal penalties, not just civil ones,” which could limit the effectiveness of the Senate bill.

In the House, Rep. Joe Morelle (D-N.Y.) has introduced the Preventing Deepfakes of Intimate Images Act, a bill to “prohibit the disclosure of intimate digital depictions.” The legislation is co-sponsored by Rep. Tom Kean (R-N.J.), offering hope that it could garner bipartisan support.

Rep. Yvette Clarke (D-N.Y.) has introduced the DEEPFAKES Accountability Act, which would require digital watermarks on AI-generated content, both to protect national security and to give victims a legal avenue to fight back.

Past efforts by Morelle and Clarke to pass similar legislation failed to gather enough support.

“Look, I’ve had to come to terms with the fact that those images of me, my husband, they’re online, I’m never gonna get them back.”

– Florida Senate Minority Leader Lauren Book

Mary Anne Franks is the president and legislative and tech policy director of the Cyber Civil Rights Initiative, a nonprofit focused on fighting online abuse that was asked to provide feedback on Morelle’s bill. She said a legislative fix would need to deter would-be perpetrators from creating non-consensual deepfakes in the first place.

“The point is to have it be a criminal prohibition that puts people on notice how serious this is, because not only will it have negative consequences for them, but one would hope that it would communicate that the incredibly negative consequences for their victim will never end,” Franks told the “Your Undivided Attention” podcast in an episode published earlier this month.

Book spoke to HuffPost about having to accept that it is impossible to fully make those images disappear from the internet.

“Look, I’ve had to come to terms with the fact that those images of me, my husband, they’re online, I’m never gonna get them back,” Book said. “At some point, I’m gonna have to talk to my children about how they are out there, they exist. And it’s something that’s gonna follow me for the rest of my life.”

She continued: “And that’s a really, really difficult thing, to be handed down a life sentence with something that you had no part in.”

Tech companies, whose AI tools can fall into the hands of bad actors and be used to create deepfakes, can also be part of the solution.

Meta, the parent company of Facebook and Instagram, last week announced it would start labeling some AI-generated content posted on its platforms “in the coming months.” However, one shortcoming of the policy is that its initial rollout will apply only to still images.

Some of the fake, sexually explicit images of Swift were allegedly created using Microsoft’s Designer tool. While the tech giant has not confirmed whether its tool was used to create those images, Microsoft has since added guardrails to prevent users from misusing its services.

Microsoft chairman and CEO Satya Nadella told NBC’s “Nightly News” the Swift incident was “alarming,” adding that companies like his have a role to play in limiting perpetrators.

“Especially when you have law and law enforcement and tech platforms that can come together, I think we can govern a lot more than we give ourselves credit for,” Nadella added.

Segall warned that if we don’t get ahead of this technology “we’re going to create a whole new generation of victims, but we’re also going to create a whole new generation of abusers.”

“We have a lot of data on the fact that what we do in the digital world can oftentimes be turned into harm in the real world,” she added.
