Ilya Sutskever, co-founder of OpenAI, has embarked on a new venture after stepping down as the company's chief scientist in May 2024. Together with Daniel Levy, a former OpenAI colleague, and Daniel Gross, previously of Cue and Apple, Sutskever announced the formation of Safe Superintelligence Inc. (SSI). Their mission: to build safe superintelligence, an AI system with intellectual capabilities far surpassing those of humans.
Superintelligence, in this context, refers to a hypothetical agent with intelligence vastly superior to even the smartest human. At OpenAI, Sutskever co-led the superalignment team, which aimed to develop methods for controlling powerful AI systems. Following his departure, that team was disbanded, drawing criticism from its other lead, Jan Leike. Now, SSI is taking a focused approach, pursuing safe superintelligence as its sole goal, to be achieved through revolutionary engineering and scientific breakthroughs.
The founders emphasize advancing capabilities while maintaining safety as their top priority. As they embark on this critical endeavor, Sutskever and his team are poised to address one of the most pressing technical challenges of our time. Stay tuned for more updates as SSI charts its course toward a future where AI transcends human intelligence.