In a significant development in the AI industry, Ilya Sutskever, co-founder of OpenAI, has taken the helm as CEO of Safe Superintelligence (SSI), a startup focused on developing safe artificial superintelligence. This leadership change comes after the departure of former CEO Daniel Gross, who has moved to Meta Platforms to lead AI initiatives.
Sutskever, previously OpenAI’s chief scientist, founded SSI in June 2024 with a mission to prioritize safety in the development of superintelligent systems. His move into the CEO role marks a pivotal moment for the company, which has already raised substantial funding, including a recent $2 billion round that valued SSI at $32 billion.
The AI landscape is witnessing intense competition and talent wars, with major tech giants like Meta aggressively expanding their AI capabilities. Gross’s exit to Meta underscores this trend, while Sutskever’s leadership is expected to steer SSI toward groundbreaking advancements in safe AI technologies.
SSI, which Sutskever co-founded with Daniel Levy and Gross, maintains a singular focus: building superintelligent systems that surpass human capabilities while keeping safety paramount. This mission aligns with Sutskever’s long-standing concerns about AI risks, which were evident during his tenure at OpenAI.
Despite not yet having a product, SSI has attracted substantial backing from industry leaders such as Alphabet and Nvidia, as well as top venture capital firms. This financial support reflects investors’ confidence in Sutskever’s vision and in the potential impact of safe superintelligence on the future of technology.
As Sutskever steps into this role, the industry watches closely to see how SSI will differentiate itself in a crowded AI market. His leadership could redefine standards for safety and innovation, shaping the ethical development of AI for years to come.