Key Points:
- Safe Superintelligence (SSI), founded by OpenAI co-founder Ilya Sutskever, has secured $1 billion in funding.
- SSI is dedicated to the safe development of artificial intelligence.
- Backers include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
Ilya Sutskever, co-founder of OpenAI, has raised $1 billion for his new AI venture, Safe Superintelligence (SSI), which is dedicated to ensuring the safe development of artificial intelligence. After stepping down from his role at OpenAI in May, Sutskever launched SSI with a mission to focus solely on safety-first AI development.
Sutskever, a highly regarded computer scientist, believes that AI will surpass human intelligence within the next decade, making the development of safe and secure AI systems crucial. This belief drives SSI’s mission, which is reflected in its name and long-term goals.
As stated in a company post on X, “SSI is our mission, our name, and our entire product roadmap because it is our sole focus.”
Backing from Leading Investors
SSI has attracted the attention of top investors, including Andreessen Horowitz (a16z), Sequoia Capital, DST Global, and SV Angel, among others. Daniel Gross, a former Apple executive and an SSI leader, also contributed through NFDG, a partnership he co-runs. These contributions provide a strong financial foundation for SSI as it seeks to scale up its operations.
Though the company’s valuation has not been publicly disclosed, sources familiar with the project told Reuters that SSI is valued at around $5 billion. Despite short-term uncertainty in the market, this significant investment underscores institutional investors’ confidence in Sutskever and his team, given their proven expertise in AI.
Expanding Resources and Team Growth
Sutskever’s expertise in AI is well established: he served as chief scientist at OpenAI and co-led its Superalignment team. His departure, along with that of co-leader Jan Leike—who joined competitor Anthropic—prompted OpenAI to disband the team. Leike had previously criticized OpenAI, stating that its focus on safety had waned in favor of product development.
SSI now plans to utilize its new funding to expand its workforce and computing capabilities. Starting with a small team of 10 employees, the company is looking to build a specialized team of engineers and researchers, based in both Palo Alto, California, and Tel Aviv, Israel. This expansion aligns with SSI’s commitment to its mission of creating safe AI systems.
With key executives such as Daniel Gross and Daniel Levy, who bring extensive AI backgrounds from Apple and OpenAI, respectively, SSI is poised to become a major player in the AI landscape focused on ethical development and safety protocols.