Ilya Sutskever, one of OpenAI's co-founders, has launched a new company, Safe Superintelligence Inc. (SSI), just a month after leaving OpenAI.
Sutskever, who was OpenAI's longtime chief scientist, founded SSI with former YC partner Daniel Gross and ex-OpenAI engineer Daniel Levy.
At OpenAI, Sutskever was integral to the company's efforts to improve AI safety with the rise of "superintelligent" AI systems, an area he worked on alongside Jan Leike. But both Sutskever and Leike left the company dramatically in May after falling out with leadership at OpenAI over how to approach AI safety. Leike now heads a team at Anthropic.
Sutskever has been focused on the thornier aspects of AI safety for a long time now. In a blog post published in 2023, he (writing with Leike) predicted that AI with intelligence exceeding that of humans could arrive within the decade, and that when it does, it won't necessarily be benevolent, necessitating research into ways to control and restrict it.
"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs," the tweet reads.
"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
SSI has offices in Palo Alto and Tel Aviv, where it is currently recruiting technical talent.