
OpenAI co-founder Ilya Sutskever launches a new venture, Safe Superintelligence Inc.

Ilya Sutskever, a co-founder and former chief scientist at OpenAI, has announced the launch of his new venture, Safe Superintelligence Inc. (SSI), alongside co-founders Daniel Gross from Y Combinator and Daniel Levy, an ex-OpenAI engineer. The new venture’s mission is the development of safe and reliable superintelligent AI: systems that match or exceed human intelligence, a milestone which Sutskever believes will be achievable within ten years.

The recently released statement positions the creation of safe superintelligence (from which the company draws its name) as “the most important technical problem of our time.” The founders believe that such an advancement is “within reach,” and have dedicated their new venture to this mission, creating an ecosystem where the team, investors, and business model all focus on delivering this singular goal.

Significant speculation around Sutskever’s departure from OpenAI and his subsequent move to establish SSI hints at possible disagreements over OpenAI’s strategic priorities. As reported, these disagreements included a failed attempt by Sutskever to oust OpenAI CEO Sam Altman, an action he later publicly regretted. Furthermore, the departure of several key researchers from OpenAI amid safety concerns, along with the dissolution of the “superalignment” team tasked with aligning AI to human benefits and values, has raised further questions about OpenAI’s current trajectory and its alignment with its founding principles.

The formation of SSI stems from a desire among its founders to pursue a safety-first approach to AI. While views on the risks posed by AI vary greatly, SSI’s standpoint is clear: the company is committed to simultaneously advancing AI capabilities and safety measures whilst forging “revolutionary engineering and scientific breakthroughs.”

SSI’s operating structure is designed to focus exclusively on safety, free from traditional distractions such as management overhead, product cycles, and short-term commercial pressures. This singular focus is intended to insulate the company’s safety, security, and progress from external pressures.

To achieve these goals, SSI is recruiting a team of top engineers and researchers who will be dedicated exclusively to developing safe superintelligence. The company, with offices in Palo Alto and Tel Aviv, extends an invitation to those who are up for the challenge to join it in solving what it calls the most important technical problem of our time.

In conclusion, SSI marks the addition of another key player in the rapidly expanding field of AI. The company’s establishment could trigger a significant movement of talent, potentially from OpenAI, as it forges ahead on its mission to create safe superintelligent AI.
