DeepMind research reveals that deepfakes are the most prevalent form of AI misuse.

AI holds vast potential for societal advancement, but it also carries significant risks. One of these is the rapidly growing practice of deepfake creation, which uses AI technologies to fabricate highly realistic but false media. A new study from Google DeepMind and Jigsaw, a Google unit focused on addressing societal threats, has found that the production and dissemination of deceptive deepfakes is the most pervasive form of AI misuse.

The study analyzed 200 real-world incidents of AI misuse reported between January 2023 and March 2024, drawing on sources including social media platforms, online blogs, and media reports. The aim was to identify which AI technologies were being misused, the intent behind the misuse, and the level of technical expertise required to carry it out.

Deepfakes proved to be the most common form of AI misuse, accounting for nearly twice as many incidents as the next most common category. These deepfakes often target politicians and other public figures, spreading misinformation and potentially distorting public opinion.

AI misuse is not limited to deepfake production, however. The study also found a noticeable trend of bad actors using AI language models and chatbots to generate and disseminate disinformation online. Through automation, they can produce misleading content on a previously unheard-of scale.

The researchers found that the main motivation behind over a quarter (27%) of the misuse cases was to shape public opinion and political narratives. Financial gain was identified as the second most common motivator, with the study highlighting cases of deepfake production services being sold for profit.

Strikingly, the majority of cases involved readily available tools that require minimal technical expertise, allowing virtually anyone to engage in AI-powered deception and manipulation.

The study underscores the urgency for policymakers, tech companies, and researchers to collaborate on comprehensive strategies for detecting and countering deepfakes, AI-generated misinformation, and other forms of AI misuse. Detection, however, remains a challenge: even fact-checkers tasked with identifying AI-generated impersonations struggle to do so reliably.

The growing sophistication of deepfakes has widened the societal harm, with more children becoming targets of deepfake incidents. The advent of new AI tools, such as OpenAI’s text-to-video system Sora, will only add to the complexity of tackling the problem.

In conclusion, the DeepMind and Jigsaw study emphasizes the urgent need to address the misuse of AI technologies as the ability to fabricate convincing deepfakes becomes ever more accessible. Policymakers and technology companies must work together to tackle the issue and minimize the societal harm caused by deceptive media. New detection and mitigation strategies must be developed and deployed to safeguard the integrity of information and preserve democratic processes worldwide.
