Artificial intelligence (AI) software company OpenAI is under investigation by a Senate committee following allegations that it compromised safety protocols in the rush to launch its latest AI model, GPT-4 Omni, and imposed restrictive employee non-disclosure agreements (NDAs). The allegations were raised in a Washington Post report and by whistleblowers, including experts from OpenAI’s now-disbanded “superalignment team”. Five US senators, led by Senator Brian Schatz (D-Hawaii), have demanded that the company provide detailed explanations of its safety procedures and treatment of employees.
The senators’ letter to OpenAI CEO Sam Altman expressed unease about the company’s dedication to responsible AI development and its internal governance policies. The senators requested that the company clarify its stance on various issues, including the allocation of computing resources for safety research, whether independent experts are allowed to test systems pre-launch, and the company’s use of NDAs. They also asked whether OpenAI would commit to not enforcing permanent non-disparagement agreements against current or former employees.
In response to the Senate probe, OpenAI took steps to reassure the public of its commitment to safety, pointing to its Preparedness Framework, a mechanism designed to assess and protect against potential risks associated with powerful AI models. The company also addressed the allegations regarding restrictive employee agreements: OpenAI said it believes that open debate about its technology is crucial and that it has altered its departure process to remove non-disparagement clauses.
OpenAI’s Board of Directors launched a Safety and Security Committee in May, seeking to fortify safety measures. The committee includes retired US Army General Paul Nakasone, a renowned cybersecurity expert. OpenAI maintains that although AI systems offer substantial societal benefits, rigorous safety measures and continuous vigilance remain essential.
However, the prospect of comprehensive AI legislation this year is slim, as attention pivots towards the 2024 elections. With Congress having enacted no new laws to regulate the industry, the US government appears to be relying mainly on AI companies’ voluntary commitments to develop safe and trustworthy AI systems.