Just three months after its inception, Safe Superintelligence (SSI), a new AI startup founded by OpenAI co-founder Ilya Sutskever, has raised $1 billion in funding. Led by venture capital firms Sequoia and Andreessen Horowitz, the latest investment round values the company at approximately $5 billion, according to a Financial Times report.
Sutskever, who left OpenAI in May this year following a failed attempt to oust CEO Sam Altman, established SSI to develop ‘safe’ AI models. The company’s mission is to create AI systems that are both highly capable and aligned with human interests.
‘We’ve identified a new mountain to climb that is slightly different from what I was working on previously. We’re not trying to go down the same path faster. If you do something different, it becomes possible for you to do something special,’ Sutskever told the Financial Times.
The substantial funding will be used to acquire computing resources for AI model development and to expand SSI’s current team of 10 employees. The company is actively recruiting, with positions in Palo Alto, California, and Tel Aviv, Israel.
With its focus on safety and alignment, SSI’s approach differs from that of other AI companies. Take firms like OpenAI, Anthropic, and Elon Musk’s xAI, which are all developing AI models for various consumer and business applications. SSI, on the other hand, is focusing solely on creating what it calls a ‘straight shot to safe superintelligence’.
Daniel Gross, SSI’s chief executive, emphasised the importance of this focused approach in a statement to Reuters: “It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market.”
Notably, despite not yet having a product, SSI has attracted a significant valuation and funding, highlighting the intense investor interest in safe AI research amid growing concerns about the potential risks of increasingly powerful AI systems.
Sutskever’s departure from OpenAI was reportedly driven by disagreements over the company’s direction and the pace of AI development. At OpenAI, he co-led the ‘Superalignment’ team, which focused on ensuring that advanced AI systems would act in humanity’s best interests.
What is clear, however, is that the formation of SSI and its rapid funding success reflect a broader trend in the AI industry towards addressing safety concerns alongside capability advancements. This approach aligns with calls from AI researchers and ethicists for more responsible development of artificial intelligence.
Today, SSI joins a competitive field of well-funded AI companies. OpenAI is reportedly in talks to raise funds at a valuation exceeding $100 billion, while Anthropic and xAI were recently valued at around $20 billion.
The crowded market, however, has not diminished investor enthusiasm: SSI’s singular focus on safety and its high-profile founding team have clearly resonated.
“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else. We offer an opportunity to do your life’s work and help solve our age’s most important technical challenge,” the company’s website states.
For now, the company’s progress will be closely watched by both the tech industry and those concerned with the ethical implications of AI development.