Meta's Oversight Board to evaluate how the company handles explicit deepfakes.

The Oversight Board, a body of experts independent from Meta (formerly Facebook), was founded in 2020 to review challenging content moderation decisions on Facebook, Instagram, and Threads. The Board recently announced that it will review two cases involving sexually explicit, AI-generated images of female public figures, to scrutinize how Meta responded to each.

The Board comprises 20 members from around the world, including legal professionals, human rights advocates, journalists, and academics. With its own staff and budget, the Board operates autonomously from Meta and can issue binding decisions on individual pieces of content, which Meta must implement provided they do not violate the law. The Board can also issue non-binding policy recommendations to Meta.

The first case under review concerns an AI-generated nude image resembling a public figure from India, posted on Instagram. The platform initially left the image up, but removed it for violating Meta's Bullying and Harassment Community Standard after the Board selected the case for review.

The second case concerns an image posted to a Facebook group focused on AI creations. The AI-generated image depicted a nude woman, made to resemble an American public figure, being groped by a man. Meta removed it for violating a policy clause prohibiting derogatory sexualized photoshop or drawings.

The Board aims to evaluate the effectiveness of Meta's policies and enforcement practices for sexually explicit AI-generated imagery. It is inviting public comments on several aspects of the cases, including the harms caused by deepfake pornography, its prevalence and use around the world, and ways Meta could address deepfake pornography on its platforms.

The public has 14 days to submit comments, after which the Board will deliberate and issue binding decisions on both cases. Meta must respond to any non-binding policy recommendations from the Board within 60 days.

The review comes amid mounting concern over the spread of non-consensual deepfake pornography targeting women, particularly female celebrities. One of the most prominent targets has been singer Taylor Swift: AI-generated explicit images of her spread widely online, prompting an online effort to track down their creator.

In January, the DEFIANCE Act was introduced in Congress; it would allow victims of non-consensual deepfake pornography to sue if they can demonstrate that the content was created without their consent. Congresswoman Alexandria Ocasio-Cortez, a sponsor of the bill and herself a target of deepfake pornography, emphasized the need for Congress to support victims as deepfakes become increasingly prevalent.
