Ilya Sutskever on how AI will change and his new startup Safe Superintelligence

SAN FRANCISCO/NEW YORK : Ilya Sutskever, OpenAI’s former chief scientist, has launched a new company called Safe Superintelligence (SSI), aiming to develop safe artificial intelligence systems that far surpass human capabilities.
He and his co-founders outlined their plans for the startup in an exclusive interview with Reuters this week.
Sutskever, 37, is one of the most influential technologists in AI and trained under Geoffrey Hinton, known as the “Godfather of AI”. Sutskever was an early advocate of scaling – the idea that AI performance improves with vast amounts of computing power – which laid the groundwork for generative AI advances like ChatGPT. SSI will approach scaling differently from OpenAI, he said.
Following are highlights from the interview.
THE RATIONALE FOR FOUNDING SSI
“We’ve identified a mountain that’s a bit different from what I was working [on]…once you climb to the top of this mountain, the paradigm will change… Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place.”
“Our first product will be the safe superintelligence.”
WOULD YOU RELEASE AI THAT IS AS SMART AS HUMANS AHEAD OF SUPERINTELLIGENCE?
“I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we’ll do is quite difficult.
I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what’s happening in AI will be very different in ways that are difficult to comprehend. It’s going to be a much more intense conversation. It may not just be up to what we decide, also.”
HOW WILL SSI DECIDE WHAT CONSTITUTES SAFE AI?
“A big part of the answer to your question will require that we do some significant research. And especially if you have the view as we do, that things will change quite a bit… There are many big ideas that are being discovered.
Many people are thinking about how as an AI becomes more powerful, what are the steps and the tests to do? It’s getting a little tricky. There’s a lot of research to be done. I don’t want to say that there are definitive answers just yet. But this is one of the things we’ll figure out.”
ON THE SCALING HYPOTHESIS AND AI SAFETY
“Everyone just says ‘scaling hypothesis’. Everyone neglects to ask, what are we scaling? The great breakthrough of deep learning of the past decade is a particular formula for the scaling hypothesis. But it will change… And as it changes, the capabilities of the system will increase. The safety question will become the most intense, and that’s what we’ll need to address.”
ON OPEN-SOURCING SSI'S RESEARCH
“At this point, all AI companies are not open-sourcing their primary work. The same holds true for us. But I think that hopefully, depending on certain factors, there will be many opportunities to open-source relevant superintelligence safety work. Perhaps not all of it, but certainly some.”
ON OTHER AI COMPANIES’ SAFETY RESEARCH EFFORTS
“I actually have a very high opinion about the industry. I think that as people continue to make progress, all the different companies will realize — maybe at slightly different times — the nature of the challenge that they’re facing. So rather than say that we think that no one else can do it, we say that we think we can make a contribution.”