Musings on AGI and OpenAI Drama
In which I explore how the recent drama at OpenAI is bigger than you think, and why it has an impact on you (even if you don't know it yet)
✨ Hi, I’m Sara! In this newsletter, I share my musings at the intersection of tech, product, and human tinkering, with the aim of navigating business and life in the Technology Era with purpose.
Subscribe to join me on this journey and check out the Polyweb podcast.
I’m not gonna lie, this wasn’t the article I had in mind to write. But since this was perhaps one of the most eventful weeks in tech history, let’s talk about it 👇
A brief timeline of the OpenAI drama
Let's unpack the recent events, and yes, there are quite a few names, so hold tight as I walk you through this crazy story:
Friday, November 17th: In a stunning turn of events that no one saw coming, the board of OpenAI, the fastest-growing company in history, fires CEO Sam Altman.
In a hastily arranged Google Meet, Chief Scientist and board member Ilya Sutskever, along with the other board members (excluding Greg Brockman), delivers the news to Altman. This group includes Adam D’Angelo, CEO of Quora; Helen Toner of the Center for Security and Emerging Technology; and Tasha McCauley, CEO of GeoSim, a city mapping startup.
Brockman is told he is being removed from the board as well and decides to resign.
This move reportedly blindsides OpenAI’s investors, including Microsoft, which has agreed to invest more than $13 billion in the company.
Meanwhile, CTO Mira Murati is appointed interim CEO.
The board gives no details about why it took such drastic action, except to say that “he (Altman) was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.
OpenAI's unique structure as a nonprofit, later integrated with a for-profit arm, plays a crucial role here. The nonprofit board's mandate is to prioritize the safe and beneficial development of AGI for humanity; it is not bound to prioritize profitability.
Saturday and Sunday, November 18th-19th: Negotiations are attempted between the board and Altman. Meanwhile, employees show their support for Sam en masse on X. On Sunday, OpenAI names Twitch co-founder and former CEO Emmett Shear as its next interim CEO.
Monday, November 20th: Microsoft hires Altman and Brockman to lead a new AI research lab within the company. Meanwhile, around 95% of OpenAI’s 770 employees, including Sutskever, who tweets that he regrets his decision, sign a letter threatening to quit and join Microsoft unless Altman and Brockman are reinstated and the board resigns.
Tuesday, November 21st: Altman is back as OpenAI CEO, with a new board. This new board includes Adam D’Angelo, the sole survivor from the previous board; Bret Taylor, former CTO of Facebook and co-CEO of Salesforce; and Larry Summers, the former US Treasury Secretary. Rumors suggest the board will expand, potentially to as many as nine members.
This shake-up seems to solidify Altman's authority and foster a stronger sense of unity among OpenAI employees.
What this means for the destiny of the nonprofit, however, is unclear. It’s almost certain that Microsoft and other investors will want a seat at the table to avoid more costly surprises, but that would make OpenAI’s governance no different from that of any other profit-seeking tech company.
There is a risk that the mission of “building an AGI that benefits humanity” will turn into “building an AGI that benefits the investors’ pockets” if safeguards are not put into place.
I guess the question is: can responsible research and economic gains coexist?
What caused the OpenAI coup?
While we still don’t know exactly what happened, it seems the board had been divided for a while. A research paper by Toner has surfaced, lauding Anthropic (OpenAI's key rival and the creator of Claude) for prioritizing AI safety over expansion, a stance seemingly at odds with OpenAI's approach.
It also seems that, ahead of Sam's ouster, OpenAI researchers warned the board about an AI breakthrough that raised safety concerns: a model known as Q*. According to Reuters, the new model was able to solve math problems at the level of grade-school students. This is not AGI (more on this below), but a significant step forward.
At present, generative AI excels in writing and translating languages by predicting the next word in a sequence, leading to varied answers for the same question. However, mastering math, which has definitive right answers, would indicate that AI has developed greater reasoning skills.
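To make that contrast concrete, here is a minimal, purely illustrative Python sketch (the prompt and token probabilities are invented, not taken from any real model): a generative model samples its next word from a probability distribution, so the same prompt can produce different answers, while a grade-school math problem has exactly one correct answer.

```python
import random

# Toy illustration (invented probabilities, not from any real model):
# a language model assigns a probability to each candidate next token
# and then samples from that distribution.
next_token_probs = {
    "blue": 0.55,
    "clear": 0.25,
    "falling": 0.15,
    "purple": 0.05,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" can yield different continuations on different runs.
for _ in range(3):
    print("The sky is", sample_next_token(next_token_probs))

# A grade-school math question, by contrast, has exactly one right answer.
assert 17 + 25 == 42
```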
As significant as this advancement in AI is, it might have raised an even more fundamental question: what do you do when the model behaves in ways you didn't anticipate?
This might be where the real tension lies. Do you pause and carefully assess these unexpected outcomes, or do you race to market, driven by the worry that if you don't, your competitors will beat you to it?
Don’t get distracted by the smoke, look for the fire
Most people are missing the key point regarding OpenAI. This isn't your standard Silicon Valley startup drama.
This is much bigger and affects all of us, given OpenAI's original mission:
building an AGI (Artificial General Intelligence) that benefits all humanity.
But to grasp the full weight of this mission, we need to unpack what AGI really is.
AGI is a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide range of problems, much like a human. Unlike specialized AI, designed for specific tasks, AGI can adapt to various situations and perform diverse tasks without being programmed for each one.
Though still largely theoretical, AGI is a hot topic in ongoing AI research.
OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. Google DeepMind suggests AGI should possess qualities like versatility and metacognition. However, this doesn't necessarily mean it will think like a human or possess consciousness.
When discussing AGI, I find it particularly interesting to distinguish between the concept of Strong AI vs. Weak or Narrow AI.
A Strong AI can perform a variety of functions and eventually teach itself to solve new problems. Weak AI, which is what we have today, relies on human intervention to define the parameters of its learning algorithms and to provide the relevant training data to ensure accuracy. Both can perform complex tasks. The key difference is that while human input can accelerate the growth phase of a Strong AI, it is not required; over time, a Strong AI would develop something like human consciousness rather than merely simulating it, as Weak AI does.
Ilya Sutskever once used a compelling analogy to describe this: Isaac Newton couldn’t explain the movement and elliptical orbits of the planets. He consulted every professor and read every book he could find but couldn’t come up with an answer. He had to look beyond existing knowledge and invent calculus to solve the problem. This is the difference between Weak AI (what we have today, a system poring over all the knowledge at its disposal) and a Strong AI.
But even in a Weak AI scenario, if the simulation is convincing enough and enough people trust it for decision-making, the impact on our lives and society would be monumental.
Imagine an AGI capable of handling all your tasks, simple or complex, autonomously improving over time. Such an AGI could hold engaging conversations, innovate in technology, and understand human motivations with uncanny accuracy. We might even trust these systems to make strategic decisions for companies.
Is AGI a good or a bad thing?
In the book What We Owe The Future, author William MacAskill argues that the development of AGI would likely be a moment of monumental importance, for two reasons.
First, it could massively speed up the rate of technological progress. MacAskill notes that once AI becomes fully general, it could replace a vast array of jobs, removing the constraint that the labor force can only grow as fast as humans can be born and raised. Consequently, the world's economy might experience unprecedented growth, potentially doubling every five years (see the quick arithmetic check after the next point).
Secondly, the potential longevity of an AGI system could significantly alter global power dynamics. For example, if one major power harnessed AGI-fueled growth before others, it might rapidly overshadow all other powers combined.
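As a quick back-of-the-envelope check of what “doubling every five years” would imply (my arithmetic, not a figure from the book):

```python
# What constant annual growth rate doubles an economy in five years?
annual_growth = 2 ** (1 / 5) - 1
print(f"{annual_growth:.1%}")  # ~14.9% per year, versus roughly 3% for the world economy today
```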
These scenarios both assume that humans can control AGI and use it to enforce their values.
However, human control over AGI is far from guaranteed. Building AGI could pave the way for AI systems that eclipse human abilities across all domains, much as current AI already surpasses human capabilities in games like chess and Go. In a scenario with Strong AI, the AGI could become autonomous, initiating actions independently.
In Sam Altman's own words:
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.
Why you should care: this will change your life, even if you don’t know it yet
By now, it should be clear that the individuals training AI models, potentially paving the way for AGI and figuring out “how to get it right”, are not just your average tech professionals. They are, in many ways, distinct from you and me.
The everyday decisions made in the process of training these models could one day lead to an AGI that either benefits or harms humanity.
AGI might be our best friend or our biggest foe. Alternatively, AGI might view us with the same detachment we have towards an ant - appreciative from a distance, but indifferent in the grand scheme of things.
Consider this:
Individuals within organizations like OpenAI potentially hold more influence over the future of humanity than any politician or world leader.
The leadership and team at OpenAI, and similar entities, wield immense power over our collective future, power that comes with significant responsibility but limited accountability, guided primarily by their moral compass.
What we currently lack is a universally accepted definition or a broad consensus on what constitutes responsible AI and AGI development.
Not just in the future, but right now.
Key questions remain unanswered:
If things start going awry with AI, when do we decide to 'pull the plug'? How do we determine the right time to alter the direction of AI training models?
These critical decisions are currently in the hands of OpenAI's 770 remarkable employees and their counterparts in other leading AI firms.
Yet, the conversation needs to be more inclusive and far-reaching, considering the widespread implications AGI will have on all of us.
It's not just politicians and regulators who need to get involved, but everyday people as well.
We should learn from past technologies like social media that engineering without ethical foresight carries unintended risks. It would go a long way if OpenAI released detailed research roadmaps and were proactive with the public and regulators about objectives, milestones, and controls. Independent oversight committees staffed by diverse experts could also audit priorities periodically.
The stakes are exponentially higher than for most technologies, and the responsible development of something as critical as AGI cannot happen in the dark.
✨ If you like this newsletter and found the content useful, please consider sharing it 🙏