Meta has committed to increasing transparency around AI-generated content on its platforms by labelling such images so that human-created and synthetic content can be told apart. Nick Clegg, Meta's President of Global Affairs, highlighted this in a blog post, stating that as human and synthetic content become increasingly indistinguishable, it becomes crucial to indicate when content is AI-generated.
Meta acknowledges the expanding presence of photorealistic AI-generated content, known as deepfakes, which has already caused controversies such as non-consensual explicit images of celebrities and a deepfake video scam that cost a multinational company $25.6 million. As a large social media company, Meta shares in the responsibility of preventing the spread of such images.
Their current strategy includes labelling photorealistic images created with the platform's own AI feature as "Imagined with AI". They intend to extend this labelling to content generated with other companies' tools, and they are developing tools that can identify invisible markers or metadata within AI-generated images. This would allow them to label content produced by tools from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
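To give a rough sense of what metadata-based detection can look like, the sketch below scans an image file's raw bytes for the IPTC "trainedAlgorithmicMedia" digital-source-type URI, one of the markers some generators embed in an image's XMP metadata. The function name and the byte-scan approach are illustrative assumptions, not Meta's actual pipeline; a real detector would parse the C2PA/XMP structures properly rather than searching raw bytes.

```python
# Hedged illustration: look for the IPTC "trainedAlgorithmicMedia" digital-source-type
# URI in an image file's raw bytes. This is a crude stand-in for real metadata parsing
# and will miss images whose metadata has been stripped or re-encoded.
AI_SOURCE_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the file contains the IPTC AI-source URI anywhere in its bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:  # hypothetical usage: python check_marker.py photo.jpg
        verdict = "possible AI-generated image" if has_ai_metadata_marker(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

Because such markers live in metadata, they disappear as soon as an image is screenshotted or re-saved without them, which is part of why Meta is also pursuing the watermarking research described below.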
These labels will be applied across all languages supported on the platform. Clegg emphasized the importance of the tech industry banding together to tackle these issues. Meta has been collaborating with other companies to develop common standards for identifying AI-generated content.
However, Clegg admits that despite their best efforts, detecting AI-generated audio and video remains difficult because there are fewer discernible signals to work with. Even with images, harmful content can gather millions of views before being removed, as seen in the recent Taylor Swift incident.
As an interim measure, Meta plans to roll out a feature that lets users voluntarily disclose when they post AI-generated video or audio content, though it remains questionable whether this will be used effectively.
While labelling is part of the strategy, Meta is also studying technologies that can detect AI-generated content even when no invisible markers are present, alongside ways to make those markers harder to remove. The latter includes invisible watermarking such as Stable Signature from Meta's AI Research lab, which integrates the watermark directly into the image generation process so it cannot simply be stripped out, potentially improving detection.
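The basic embed-and-detect idea behind invisible watermarking can be shown with a toy example. The sketch below hides a short bit string in the least-significant bits of an image's red channel and reads it back; this is deliberately simplistic and is not Stable Signature, which instead fine-tunes the image generator's decoder so that every generated image carries a learned, far more robust watermark. The payload and file names are hypothetical.

```python
# Toy invisible watermark: hide a bit string in the least-significant bits of the
# red channel, then recover it. Purely illustrative; NOT Meta's Stable Signature,
# which bakes a learned watermark into the generator itself.
import numpy as np
from PIL import Image

WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # example payload

def embed(in_path: str, out_path: str, bits=WATERMARK_BITS) -> None:
    pixels = np.array(Image.open(in_path).convert("RGB"))
    red = pixels[..., 0].reshape(-1)
    for i, bit in enumerate(bits):
        # Overwrite the lowest bit of one red-channel value per payload bit.
        red[i] = (red[i] & 0xFE) | bit
    pixels[..., 0] = red.reshape(pixels[..., 0].shape)
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless, so the bits survive

def detect(path: str, n_bits: int = len(WATERMARK_BITS)) -> list:
    pixels = np.array(Image.open(path).convert("RGB"))
    red = pixels[..., 0].reshape(-1)
    return [int(red[i] & 1) for i in range(n_bits)]

if __name__ == "__main__":
    embed("original.png", "watermarked.png")
    print(detect("watermarked.png") == WATERMARK_BITS)  # True if the payload was recovered
```

A naive scheme like this breaks under even mild JPEG re-encoding or resizing, which is exactly the weakness research such as Stable Signature targets by spreading a learned signal throughout the generated image rather than in individual pixel bits.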
However, with deepfakes increasingly used for political misinformation and with elections approaching in countries such as the US and UK, the need for robust detection and transparency becomes more urgent. Despite these measures, a definitive solution remains elusive, which only intensifies the need for continued work on content authenticity and transparency.