
Ilya Sutskever’s Safe Superintelligence (SSI) raises $1 billion at $5 billion valuation


Safe Superintelligence (SSI), a startup co-founded by former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in funding just three months after it was founded. 

The company, which aims to develop “safe” artificial general intelligence (AGI) systems that surpass human intelligence, has achieved a valuation of approximately $5 billion despite having no product and only ten employees.

The funding round, led by top-tier venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, shows that AI ventures are still attracting massive cash, defying skepticism surrounding investment in the sector.

When generative AI went ‘mainstream’ in late 2022, people thought it would change the world at the flick of a switch.

It has, to some extent, but it’s still early days for investors. AI companies founded just months ago have attracted billions of dollars, but returns may take longer than expected as those companies attempt to monetize their expensive products.

Brushing all of that aside, there’s still a cool $1 billion for SSI. Sutskever himself said, when he founded the company, that it would have no trouble raising cash.

Sutskever co-founded SSI in June alongside serial AI investor Daniel Gross and former OpenAI researcher Daniel Levy; his departure was one of a series of high-profile exits from OpenAI.

The company plans to use the newly acquired funds to secure computing resources and expand its team, with offices in Palo Alto, California, and Tel Aviv, Israel.

“We’ve identified a new mountain to climb that’s a bit different from what I was working on previously,” Sutskever told the Financial Times.

“We’re not trying to go down the same path faster. If you do something different, then it becomes possible for you to do something special.”

SSI: A unique player in the AI sector

SSI’s mission contrasts with that of other major AI players, such as OpenAI, Anthropic, and Elon Musk’s xAI, which are developing models with broad consumer and business applications. 

Instead, SSI is laser-focused on what it calls a “straight shot to safe superintelligence.”

When it was founded, SSI stated, “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

SSI plans to spend several years on research and development before bringing a product to market. 

“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross, SSI’s CEO, told Reuters.

This allows them to “scale in peace,” free from the pressures of management overhead, product cycles, and short-term commercial demands.

Some question whether ‘safe superintelligence’ is even conceptually workable, but the approach is a welcome change in an AI sector dominated by language models.

Of course, the true test will lie in SSI’s ability to deliver on its lofty goals and navigate the complex challenges of developing safe, superintelligent AI systems. 

If it succeeds, this small startup could have far-reaching implications for the future of AI and its impact on society.
