Outrage sparked by AI-generated deepfake images of Taylor Swift

AI-generated explicit images of pop star Taylor Swift were recently shared across social media platforms, causing widespread uproar. The images depicted Swift in sexually explicit poses and circulated without her consent for 19 hours, gathering around 27 million views and 260,000 likes before the account that posted them was suspended. The incident raises serious concerns about the efficacy of social media policies against deepfake content.

The explicit images were shared primarily on the social media platform X but also spread to others, including Facebook, showing how quickly controversial AI-generated content can propagate. In response, a Meta spokesperson said the content violates the company’s policies, that it was being removed from Meta’s platforms, and that accounts posting such material would be penalized.

AI detection services and ‘Community Notes’ attached to posts could potentially help combat such content, though both approaches have their limitations. The ramifications of this incident, along with other recent viral deepfake cases, were widely discussed on Reddit, including mention of a fabricated image circulated on TikTok that depicted the Eiffel Tower on fire.

One Reddit user expressed concern about widespread ignorance of what deepfakes are, pointing to low public awareness of the issue. Tech companies have also been criticised for their largely unregulated use of generative AI, which enables incidents like this one.

Ben Decker of Memetica, a digital investigations agency, highlighted the lack of control over AI’s impacts and the shortcomings of social media companies’ content-monitoring strategies. There is also talk of Taylor Swift considering legal action against the deepfake porn site that hosted the images.

There have been many similar incidents involving explicit deepfakes, mostly targeting women and children. Some have been used for extortion, a practice known as “sextortion”. Concern is rife about the darkest aspect of this technology: the creation of AI-generated child sexual abuse imagery. A US man was recently sentenced to 40 years in prison for possessing such images.

Calls for legislation addressing the creation and distribution of deepfake images have intensified. US Representative Joe Morelle described the Swift deepfake incident as “appalling” and underscored the need for immediate legal action.

Deepfake technology poses a threat that extends beyond celebrities to politics and global security. With AI algorithms capable of generating convincing forgeries, manipulated content can influence stock markets, elections, and public opinion.

Efforts to curb deepfakes are in progress, with tech giants developing detection technologies such as Intel’s FakeCatcher, which reportedly detects fake videos with 96% accuracy. Even so, the challenge remains daunting given how rapidly the technology evolves and spreads across the internet.
