Published: 04/06/2025
Tech CEOs, futurists, and venture capitalists often describe Artificial General Intelligence (AGI) as an inevitable and ultimate goal for technology development. The term, however, is vague and lacks a precise definition; definitions of AGI vary widely and primarily serve the economic interests of the individuals and organizations promoting them. OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work,” while Mark Zuckerberg admits to not having a clear, concise definition. Ilya Sutskever, OpenAI’s former chief scientist, even led chants of “Feel the AGI!” around the office, embodying the mysticism surrounding the term.
In a leaked agreement, Microsoft and OpenAI crafted a more concrete metric: whether such a system could generate $100 billion in profit. This bid for awe in the use of “AGI” echoes discourse from the field’s origins. In 1956, computer scientist Marvin Minsky, one of the founders of the academic discipline of artificial intelligence, remarked that human beings are instances of very complicated machines: if we could replicate key parts of the human brain, we would achieve “AI.” The same framing animates modern invocations of “AGI.”
Giving credence to the idea of AGI has real-world consequences. First, it lends weight to the claim that a computer program proficient at one task, such as predicting words from other words (as ChatGPT and other chatbots do), can take on significant social and economic work. Proposals along these lines include filling gaps in major social services, doing science autonomously, and solving climate change. California Governor Gavin Newsom has suggested using “AI” to solve traffic issues and homelessness, while Google DeepMind CEO Demis Hassabis believes autonomous AI scientists will cure cancer and eliminate all disease within five to ten years. Former Google CEO Eric Schmidt claims that “AGI” will solve climate change.
The second issue is closely related to the first: claims of “AGI” serve as cover for abandoning the current social contract. Instead of focusing on immediate needs, many AGI proponents argue that the best, and only, thing humans can do now is work on developing superintelligence. Venture capitalist Marc Andreessen has stated that “AI” will crash wages and deliver us into a “consumer cornucopia” where the marginal cost of consumer goods approaches zero. OpenAI CEO Sam Altman envisions a future in which everyone receives a small allotment of access to AGI, apportioned as “universal basic compute” on the model of universal basic income.
If these ideas sound bizarre and god-like, that is because they are. The discourse around AGI often invokes a benevolent robot god that will rescue humans from themselves, or a malevolent one that will wipe us out. Futurist Ray Kurzweil, a Google fellow, believes in the technological singularity, in which an AGI trained with the proper values leads to a world of limitless abundance. Conversely, Eliezer Yudkowsky, a blogger and internet personality, fears that a machine superintelligence could go rogue and eliminate humanity; to prevent this, he has suggested extreme measures, including bombing data centers.
Despite its fuzzy definition, the concept of AGI carries significant weight in policy circles. It motivates projects like the Stargate initiative and helps justify limits on AI oversight, such as the 10-year moratorium on state-level AI regulation passed by the House. The next time you hear someone discuss the promise or threat of AGI, ask what social or political problems they are trying to paper over, and how they are implicated in creating or exacerbating those problems.
Q: What is Artificial General Intelligence (AGI)?
A: AGI refers to a hypothetical form of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond human capabilities. However, the term lacks a precise definition and is often used vaguely.
Q: Why is AGI considered a myth?
A: AGI is considered a myth because it is a vague concept without a clear definition. It is often used to justify massive investments and policy decisions, despite the lack of concrete evidence or technology to support its imminent arrival.
Q: What are the potential impacts of AGI on society?
A: Proponents of AGI claim it could solve major social and economic issues, such as climate change and disease. However, critics argue that the focus on AGI diverts attention from immediate social and political problems and could lead to negative outcomes like job displacement and ethical concerns.
Q: Who are the key figures in the AGI discourse?
A: Key figures in the AGI discourse include tech CEOs like Sam Altman and Mark Zuckerberg, futurists like Ray Kurzweil, and thinkers like Eliezer Yudkowsky. They have varying views on the potential benefits and risks of AGI.
Q: What are the ethical concerns surrounding AGI?
A: Ethical concerns surrounding AGI include the potential for job displacement, the concentration of power in the hands of a few tech companies, and the risk of creating a superintelligence that could go rogue and pose a threat to humanity.