Scholars sign an open letter calling for independent assessments of artificial intelligence.

More than 100 prominent AI experts have signed an open letter calling on leading AI companies, including OpenAI, Meta, and Google, to permit independent testing of their generative AI systems. The letter asserts that these companies' restrictive terms and conditions are hampering independent research aimed at ensuring the safety of AI tools.

Signatories to the open letter include renowned experts such as Stanford's Percy Liang, Pulitzer Prize winner Julia Angwin, Renée DiResta of the Stanford Internet Observatory, Mozilla Fellow Deb Raji, former Member of the European Parliament Marietje Schaake, and Brown University's Suresh Venkatasubramanian.

The researchers firmly believe that the mistakes of the social media era, when independent research was often sidelined, should not be repeated. They call for the establishment of legal and technical 'safe harbors' in which researchers can examine AI products without fear of legal consequences or account suspension.

AI tools are designed with stringent usage policies to deter malicious use and maintain safety guardrails. However, these same policies complicate the work of researchers attempting to probe and understand AI models.

The open letter, published on MIT's website, makes two requests of companies. First, it calls for a 'safe harbor' that would protect good-faith independent research into AI safety, security, and trustworthiness from legal action, provided the research complies with established vulnerability-disclosure practices. Second, it asks companies to commit to more equitable access by letting independent reviewers moderate researchers' evaluation applications, shielding bona fide safety research from counterproductive account suspensions.

The open letter is accompanied by a policy proposal, co-authored by some of the signatories, that recommends changes to companies' terms of service to accommodate academic and safety research. This is an important step in acknowledging the growing consensus on the risks associated with generative AI, such as the creation of nonconsensual intimate imagery, bias, and copyright infringement.

By pushing for a 'safe harbor' for independent evaluation, the group of experts is promoting the greater public good, hoping to create an ecosystem in which AI technologies are developed and deployed with social welfare and responsibility at the forefront.
