More than a month ago, Ilya Sutskever, OpenAI's co-founder and former chief scientist, announced his new venture, Safe Superintelligence (SSI), after leaving the company under mysterious circumstances.
The startup aims to tackle “the most important technical problem of our time” — risks posed by AI, a long-standing concern for Sutskever. In 2023, he predicted that human-level intelligence could emerge within the next decade and might not be “inherently benevolent.”
Details about Sutskever’s new initiative are scarce. According to the sole Twitter post about the launch, the company is currently scouting for top tech talent in Palo Alto and Tel Aviv, where Ilya has “deep roots.”
From Jerusalem to Toronto
Ilya Sutskever was born in Nizhny Novgorod, the sixth-largest city in Russia, and grew up in Jerusalem, Israel. “My parents say I was interested in AI from an early age. I was also very motivated by consciousness,” he said in an interview. “I was very disturbed by it, and I was curious about things that could help me understand it better.”
Ilya moved to Canada with his family when he was a teenager. This transition was pivotal. Sutskever pursued his undergraduate studies at the University of Toronto, where he worked with Geoffrey Hinton, a pioneer in the field of neural networks.
Hinton was already training models that could produce short strings of text. “It was the beginning of generative AI right there,” said Sutskever. “It was really cool — it just wasn’t very good.”
This collaboration proved to be a turning point in Sutskever’s career, leading to research that laid the foundation for modern deep learning.
The spirit of collaboration
After completing his Ph.D., Sutskever moved to the U.S. to join the Google Brain team. The move provided Ilya with the resources to advance his research and contribute to significant breakthroughs in AI.
During his time at Google, Sutskever co-authored several research papers on neural networks and deep learning, contributing to the development of algorithms that are now standard in the AI community.
Sutskever often highlights the importance of the collaborative environment. “The U.S. has been a hub of innovation in AI,” he said in a podcast. “Being here allowed me to work with some of the brightest minds and access the tools needed to push the field forward.”
Shaping AI vision
In numerous podcasts and interviews, Sutskever has emphasized the importance of creating AI that is safe and aligned with human values.
One of his comments from an interview with Lex Fridman highlights this vision: “The goal of AI should be to amplify human potential, not replace it. We need to build systems that can collaborate with humans and help solve some of the most pressing challenges we face.”
Sutskever is also a strong advocate for transparency and collaboration in AI research. He believes that sharing knowledge and resources openly can accelerate progress and ensure that AI technologies are developed responsibly.
“Collaboration and openness are key to advancing AI in a way that is beneficial for everyone,” he said during an episode of the Eye on AI podcast.
Future of superintelligence
Sutskever acknowledges the challenges that lie ahead in AI development. One of the primary concerns he has raised is the potential for AI to be misused. He frequently discusses the need for safety measures and ethical guidelines to prevent the misuse of AI technologies.
Even as OpenAI launches GPT-5, kicking off a new wave of increasingly sophisticated AI systems, we likely won't see any updates from SSI in the near future.
In an interview with Bloomberg, Sutskever mentioned that SSI’s first product will be safe superintelligence, and the company “will not do anything else” until that is achieved.