Why was Sam Altman fired from OpenAI?
tl;dr AGI (Artificial General Intelligence) may be closer than we think.
If you’ve visited X in the last 24 hours, seen OpenAI’s blog, or simply opened up Google, you’ve probably heard that the board of OpenAI has fired its CEO, Sam Altman. There are already dozens of articles online covering the topic, including pieces from Ars Technica, The Information, Wired, The Verge, CNN, and many others.
I got a flurry of messages yesterday afternoon simply letting me know of this fact (my favourite one below). After a day of more information coming in, I figured I’d share my thoughts, because I believe it may be a signal that AGI is closer than we think. I’m simultaneously excited and unsure of what to expect.
I’ve been following Sam since he was first appointed as President of YC in 2014, and he is someone who has both inspired me and taught me a lot (from afar) over the years, so I was shocked to hear the news just like many others.
I won’t reiterate exactly what went down because Greg Brockman’s (@gdb) tweet captures the gist.
OpenAI’s Announcement of the Leadership Transition mentioned that this was a decision of the board, consisting of:
Ilya Sutskever - OpenAI Chief Scientist & Co-Founder
Adam D’Angelo - Quora CEO
Tasha McCauley - Technology entrepreneur
Helen Toner - Director of Strategy at Georgetown’s Center for Security and Emerging Technology
[Stepped Down] Greg Brockman - OpenAI President & Co-Founder
I decided to cross-reference this list with the list of co-founders (according to Wikipedia):
The organization was founded in December 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.
Aside from the fact that there are almost a dozen co-founders, it was interesting that, with the exception of Greg (who stepped down), Ilya is the only person who appears on both lists.
For those who don’t know, Ilya Sutskever studied under Geoffrey Hinton, the “Godfather of Deep Learning,” at the University of Toronto and did a postdoc under Andrew Ng, “The Godfather of AI Education,” at Stanford. In my opinion, if we had to point to a single individual leading the AI Revolution, it would be Ilya.
In Walter Isaacson’s recent biography of Elon Musk, he mentions how much effort Elon put into persuading Ilya to move from Google to OpenAI and lead the charge on AGI safety. Recruiting Ilya was the reason Elon and Larry Page stopped being friends, and Elon talks about it again in a recent podcast he did with Lex Fridman (timestamped below).
The point I’m getting to is that AI safety is at the top of Ilya’s mind, and has been since day one. It’s the reason he introduced the Superalignment effort last July, dedicating 20% of OpenAI’s compute to solving the alignment problem within four years. He discusses all the challenges related to this in an interview from last July as well.
According to The Information, one of the statements he made to the team after the news was made public is:
This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.
Ilya Sutskever
Wojciech, one of the other co-founders, publicly reinforced that the mission of building a safe AGI that benefits humanity has not changed, though he is also sad to see both Sam and Greg go.
Having worked within teams driven by AI and ML through the 2010s, I was an AGI skeptic. I knew it would happen one day, but did not expect to see it in my lifetime. When ChatGPT came out in late 2022, I did a complete 180 and realized how wrong I was after publishing 24 Hours of ChatGPT.
Over the last week, there has been news that GPT-5 is underway. If the jump from GPT-4 to GPT-5 resembles anything we’ve seen from GPT-3 to GPT-4, I genuinely don’t know what to expect. My guess is that there was a disagreement between Sam and Ilya over how to balance building the business and shipping product with maintaining AGI safety and staying true to the core of OpenAI’s vision.
It’s hard to say what kind of innovation is going on behind the scenes at OpenAI, but they have some of the most talented people on the planet. It’s hard to say what kind of conversations they have, but the last few months are reinforcing that safety is becoming a top priority. It’s hard to say when AGI will come into existence, but I anticipate it’ll be sooner than most expect. It’s hard to say whether letting Sam go was the right decision, but only time will tell.