
Sam Altman suggests an international agency should oversee powerful AI systems.

OpenAI CEO Sam Altman has said that an international agency is needed to oversee powerful future AI models and ensure their safety. Altman argued that these frontier AI systems could cause significant global harm and should therefore be handled with the same level of care as nuclear weapons or biohazardous material.

Altman expressed dissatisfaction with current attempts to regulate AI. He criticized US and EU legislative efforts, arguing that relatively rigid laws cannot keep pace with how quickly the technology is progressing, and he also objected to individual American states making independent attempts to regulate AI.

Altman proposed an international agency to monitor the most powerful AI systems and ensure they undergo adequate safety testing. He considers such an agency necessary to prevent a superintelligent AI from gaining autonomy and recursively self-improving. Despite advocating this level of oversight, Altman cautioned against overregulation, which he suggested could hinder the progress of AI.

Drawing a parallel to international nuclear regulation, Altman pointed to the International Atomic Energy Agency's oversight of states with significant access to nuclear materials. He suggested that the regulatory line be drawn so that only the most powerful systems face such oversight, leaving startups neither harmed nor overburdened by regulation.

Altman explained that an agency-based approach would be more effective than legislation because of how rapidly the technology evolves. Laws written to govern AI, he argued, would likely be premature or irrelevant within a year or two.

Regarding the future of GPT-5, Altman hinted that AI technology may not evolve in the ways people expect. He was circumspect about a potential release date, saying that OpenAI takes its time with major model releases, and he cast doubt on whether the next big model would even be called "GPT-5". Instead, Altman indicated that iterative improvements on GPT-4 are the more likely path for technological progression.

Altman also teased upcoming updates from OpenAI without revealing any specifics. These updates could shed more light on the future of the company's AI models and on any further regulations proposed for their safe and effective use.
