OpenAI recently announced the creation of a Safety and Security Committee, tasked with advising the board on critical safety and security decisions across all OpenAI projects. The committee is led by directors Bret Taylor (Chair), Sam Altman (OpenAI's CEO), Adam D'Angelo, and Nicole Seligman. They are joined by Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist).
The committee was formed in response to both internal and external criticism of OpenAI's handling of AI safety. Sam Altman's brief dismissal as CEO the previous year, backed by then-board member Ilya Sutskever among others, was reportedly driven in part by safety concerns. More recently, Sutskever and Jan Leike, who co-led OpenAI's superalignment team, both left the company. Leike explicitly cited safety concerns as his reason for departing, claiming that safety had taken a backseat to "shiny products" at the company.
The committee will spend the next 90 days evaluating and strengthening OpenAI's safety processes and safeguards. Its recommendations will then be submitted to the full board for review, and OpenAI has pledged to publicly share the recommendations it adopts.
This push for stronger safety frameworks coincides with OpenAI's announcement that it has begun training its next frontier model, which the company expects to advance its capabilities on the path to AGI. There is no confirmed release date for this model, however, and training alone is expected to take weeks, if not months.
Following the AI Seoul Summit, OpenAI released an update on its approach to safety. The organization stated it would not release a new model whose risk exceeded the "Medium" level defined in its Preparedness Framework, and noted that more than 70 external specialists were involved in red teaming GPT-4o before its launch. Taken together, the committee's 90-day review window and the newly started training run suggest GPT-5 will not be released anytime soon.
Altman's return not only as the company's CEO but also as a member of the newly formed safety committee raises questions about transparency. This is especially true considering that, according to former board member Helen Toner, the OpenAI board first learned about the release of ChatGPT through Twitter. Toner recently broke her silence, revealing surprising new details about why Altman was fired. Readers are left to wonder: is all of this laying the groundwork for the training of GPT-6?