Meet the AI scientist who ousted, then helped reinstate, OpenAI’s CEO. Ilya Sutskever has deep concerns about AI safety – find out what drives his scepticism.
Where Sam Altman is known for a risk-taking approach to artificial intelligence development that sometimes comes at the cost of safety, Ilya Sutskever plays it safer. Sutskever harbours deep concerns about the dangers of AI and reportedly played a key role in persuading fellow board members at OpenAI that Altman’s fast-paced approach to AI deployment was not the way forward. This sparked the series of events that led to one of the most dramatic CEO reshuffles tech has seen in years.
Altman ultimately emerged as the winner, though, and was reinstated as CEO of OpenAI mere days after his firing – with Sutskever doing a 180 and saying that he deeply regretted his participation in the board’s actions.
Though Sutskever is no longer on the board of the company he co-founded, his fears are not unfounded. The rapid development and deployment of powerful AI models like ChatGPT have been flagged by researchers and regulators alike, who have questioned the safety of such technologies. In fact, Sutskever himself admitted in an MIT Technology Review interview that he didn’t think ChatGPT was good enough before its record-breaking launch.
But who exactly is Ilya Sutskever, how did he get where he is today, and what exactly fuels his immense scepticism about AI? Let’s take a look.
Who is Ilya Sutskever?
Ilya Sutskever is regarded as a visionary in the field of AI. Born in Soviet Russia in 1986 but raised in Jerusalem from the age of five, he studied at the Open University of Israel before moving to the University of Toronto, where he received a Bachelor of Science in mathematics in 2005, an MSc in computer science in 2007, and a PhD in computer science in 2013.
His early work at the University of Toronto was highly experimental – the university magazine described software he developed as producing nonsensical Wikipedia-like entries. His big break came in 2012, when he co-authored a paper with Alex Krizhevsky and his doctoral supervisor Geoffrey Hinton (often credited as the ‘godfather of AI’) that demonstrated the remarkable abilities of the deep learning algorithms he had been exploring. The project, dubbed AlexNet, could solve pattern recognition problems at an unprecedented level.
His role at Google, then at OpenAI
Impressed with their breakthrough, Google hired the three researchers just a year later. There, Sutskever demonstrated that AlexNet’s pattern recognition abilities with images could also work for words and sentences. He also worked on TensorFlow, Google’s end-to-end open-source machine learning platform, during his time at the company.
But after less than three years at Google, Sutskever was persuaded by Tesla CEO Elon Musk to become a co-founder and chief scientist of OpenAI, the non-profit AI company Musk had co-founded alongside Sam Altman. Like Sutskever, Musk has long been wary of AI and has warned about its potential dangers for years. But Musk left OpenAI in 2018, citing a conflict of interest with Tesla.
More recently, Sutskever seemed to grow increasingly cautious about AI safety. At OpenAI, he pushed hard internally for more resources to be allocated to work aimed at ensuring AI systems remain safe. In fact, he headed the company’s Superalignment team, which set aside 20% of the company’s computing power for managing the risks posed by AI.
And this brings us back to his conflict with Sam Altman. His careful stance reportedly put him at odds with Altman, who seemingly favoured moving faster in developing powerful AI capabilities. Matters came to a head when Sutskever and allies on OpenAI’s board engineered Altman’s ousting, replacing him with Emmett Shear – someone more aligned with a cautious approach. But just a few days later, Sutskever was on the back foot: “I never intended to harm OpenAI,” he said in a post on X. “I love everything we’ve built together and I will do everything I can to reunite the company.”
Why so cautious?
What better way to understand why Sutskever is so cautious about artificial intelligence than through the words of the man himself? “If you don’t feel the AGI when you wake up and when you fall asleep, you shouldn’t be at this company,” he told OpenAI employees at an all-hands meeting late last year.
Current and former employees describe him as someone who tackles the challenges of AI with a passion that borders on the spiritual – Sutskever’s dedication is real.
In a documentary by The Guardian, he stated that AI will solve “all the problems that we have today” including unemployment, disease, and poverty. However, it will also create new ones: “The problem of fake news is going to be a million times worse; cyber attacks will become much more extreme; we will have totally automated AI weapons,” he said, adding that AI has the potential to create “infinitely stable dictatorships.”
Fair to say, Sutskever’s view of AI is uniquely balanced, and that combination of optimism and cynicism extends to artificial general intelligence (AGI) – a computer system that can do any job or task a human can, only better. In the same documentary, he warned that if AGI is not programmed correctly, “then the nature of the evolution of natural selection favours those systems that prioritise their own survival above all else.”
However, unlike some peers whose more extreme predictions go as far as the destruction of humanity, Sutskever holds more moderate views:
“It’s not that we hate animals; I think humans love animals and have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it’s important for us. And I think, by default, that’s the kind of relationship that’s going to be between us and AGIs,” he said.
Source: indianexpress.com