The uproar caused by last week’s sexually explicit AI-generated images of music star Taylor Swift is still ongoing.

This Monday, social network X (formerly Twitter) blocked searches related to Taylor Swift, which highlights the growing concern among social networks about tackling the problem of deepfakes. Many users share realistic AI-generated images and audio that portray famous individuals in compromising or deceptive situations without their consent.

The outlet 404 Media revealed that the fake images of Swift that became popular on X originated in a Telegram group.

Since last week, pop singer Taylor Swift’s fans, US politicians and even the White House have expressed outrage over fake pornographic images of the star that went viral on social network X and were still available on other platforms.

One of those images was viewed 47 million times on X, formerly Twitter, before it was removed on Thursday. According to American media, the post was visible on the platform for around 17 hours.

‘Deepfake’

“Deepfake” pornographic images – fake but extremely realistic – of celebrities are not new. But activists and authorities are concerned that easy-to-use tools that use generative artificial intelligence (AI) will create an uncontrolled flood of toxic or harmful content.

The attack on Swift, the second most listened to artist in the world on the Spotify platform after Canadian rapper Drake, could shed new light on this phenomenon.

“The only good thing about what’s happening to Taylor Swift is that she probably has enough power to get a law passed to eliminate it. You people are sick,” wrote influencer Danisha Carter on X.

X is one of the biggest platforms for pornographic content in the world, some analysts say, because its nudity rules are more flexible than those of Meta-owned Facebook or Instagram.

‘Zero tolerance’

Apple and Google have the power to intervene to control the content circulating in apps through rules imposed by their mobile operating systems, but so far both have tolerated the situation on X.

In a statement, X clarified that “the publication of non-consensual nudity (NCN) images is strictly prohibited” on its platform. “We have a zero-tolerance policy for that content.”

The social network, owned by tycoon Elon Musk, stated that it is “actively removing all identified images and taking appropriate action against the accounts responsible for posting them.”

In addition, it noted that it is “closely monitoring the situation to ensure that any further violations are promptly addressed and the content removed.”

A 2019 study estimated that 96% of deepfake videos are pornographic.

According to research cited by Wired magazine, in the first nine months of 2023, 113,000 “deepfake” videos were uploaded to the most popular pornographic websites.