The Wild Claim at the Heart of Elon Musk’s OpenAI Lawsuit
Elon Musk started the week by posting testily on X about his struggles to set up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.
Musk’s lawsuit is filed against OpenAI and two of its executives, CEO Sam Altman and president Greg Brockman, both of whom worked with the rocket and car entrepreneur to found the company in 2015. It claims that the pair have breached the original “Founding Agreement” worked out with Musk, which it says pledged the company to develop AGI openly and “for the benefit of humanity.”
Musk’s suit alleges that the for-profit arm of the company, established in 2019 after he parted ways with OpenAI, has created AGI without proper transparency and licensed it to Microsoft, which has invested billions into the company. It demands that OpenAI be forced to release its technology openly and that it be barred from using it to financially benefit Microsoft, Altman, or Brockman.
A large part of the case pivots on a bold and questionable technical claim: that OpenAI has developed so-called artificial general intelligence, or AGI, a term generally used to refer to machines that can comprehensively match or outsmart humans.
“On information and belief, GPT-4 is an AGI algorithm,” the lawsuit states, referring to the large language model that sits behind OpenAI’s ChatGPT. It cites studies that found the system can get a passing grade on the Uniform Bar Exam and other standard tests as proof that it has surpassed some fundamental human abilities. “GPT-4 is not just capable of reasoning. It is better at reasoning than average humans,” the suit claims.
Although GPT-4 was heralded as a major breakthrough when it was launched in March 2023, most AI experts do not see it as proof that AGI has been achieved. “GPT-4 is general, but it’s obviously not AGI in the way that people typically use the term,” says Oren Etzioni, a professor emeritus at the University of Washington and an expert on AI.
“It will be viewed as a wild claim,” says Christopher Manning, a professor at Stanford University who specializes in AI and language, of the AGI assertion in Musk’s suit. Manning says there are divergent views of what constitutes AGI within the AI community. Some experts might set the bar lower, arguing that GPT-4’s ability to perform a wide range of functions would justify calling it AGI, while others prefer to reserve the term for algorithms that can outsmart most or all humans at anything. “Under this definition, I think we very clearly don’t have AGI and are indeed still quite far from it,” he says.
Limited Breakthrough
GPT-4 won notice—and new customers for OpenAI—because it can answer a wide range of questions, while older AI programs were generally dedicated to specific tasks like playing chess or tagging images. Musk’s lawsuit refers to assertions from Microsoft researchers, in a paper from March 2023, that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Despite its impressive abilities, GPT-4 still makes mistakes, and its ability to correctly parse complex questions remains significantly limited.
“I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” adds Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.