Published: September 26, 2025
In early August, one day before releasing GPT-5, OpenAI CEO Sam Altman posted an image of the Death Star on social media. It was just the latest declaration by Altman that his new AI model would change the world forever. “We have discovered, invented, whatever you want to call it, something extraordinary that is going to reshape the course of human history,” Altman said in a July interview. He compared his company’s research to the Manhattan Project and said that he felt “useless” compared with OpenAI’s newest invention. Altman, in other words, suggested that GPT-5 would bring society closer to what computer scientists call artificial general intelligence: an AI system that can match or exceed human cognition, including the ability to learn new things.
For years, creating AGI has been the holy grail of many leading AI researchers. Altman and other top technologists, including Anthropic CEO Dario Amodei and computer science professors Yoshua Bengio and Stuart Russell, have been dreaming of constructing superintelligent systems for decades—as well as fearing them. Recently, many of these voices have declared that the day of reckoning is near, telling government officials that whichever country invents AGI first will gain enormous geopolitical advantages. Days before U.S. President Donald Trump’s second inauguration, for example, Altman told Trump that AGI would be achieved within his term—and that Washington needed to prepare.
These declarations have clearly had an effect. Over the last two years, Democratic and Republican politicians alike have been discussing AGI more frequently and exploring policies that could unleash its potential or limit its harms. It is easy to see why. AI is already at the heart of a range of emerging technologies, including robotics, biotechnology, and quantum computing. It is also a central element of U.S.-China competition. AGI could theoretically unlock more (and more impressive) scientific advancements, including the ability to stop others from making similar breakthroughs. In this view, if the United States makes it first, American economic growth might skyrocket and the country could attain an unassailable military advantage.
There is no doubt that AI is a very powerful invention. But when it comes to AGI, the hype has grown out of proportion. Given the limitations of existing systems, it is unlikely that superintelligence is actually imminent, even though AI systems continue to improve. Some prominent computer scientists, such as Andrew Ng, have questioned whether artificial general intelligence will ever be created. For now, and possibly forever, advances in AI are more likely to be iterative, like other general-purpose technologies.
The United States should therefore treat the AI race with China like a marathon, not a sprint. This is especially important given the centrality of AI to Washington’s competition with Beijing. Today, both the country’s new tech firms, like DeepSeek, and existing powerhouses, like Huawei, are increasingly keeping pace with their American counterparts. By emphasizing steady advancements and economic integration, China may now even be ahead of the United States in terms of adopting and using robotics. To win the AI race, Washington thus needs to emphasize practical investments in the development and rapid adoption of AI. It cannot distort U.S. policy by dashing for something that might not exist.
In Washington, AGI is a hot topic. In a September 2024 hearing on AI oversight, Connecticut Senator Richard Blumenthal declared that AGI is “here and now—one to three years has been the latest prediction.” In July, South Dakota Senator Mike Rounds introduced a bill requiring the Pentagon to establish an AGI steering committee. The bipartisan U.S.-China Economic and Security Review Commission’s 2024 report argued that AGI demanded a Manhattan Project–level effort to ensure the United States achieved it first. Some officials even believe AGI is about to jeopardize human existence. In June 2025, for instance, Representative Jill Tokuda of Hawaii said that “artificial superintelligence, ASI, is one of the largest existential threats that we face.”
The fixation on AGI goes beyond rhetoric. Former Biden administration officials issued executive orders that regulated AI in part based on concerns that AGI is on the horizon. Trump’s AI Action Plan, released in July, may avoid explicit mentions of AGI. But it emphasizes frontier AI, infrastructure expansions, and an innovation-centric race for technological dominance. It would, in the words of Time magazine, fulfill “many of the greatest policy wishes of the top AI companies—which are all now more certain than ever that AGI is around the corner.”
The argument for dashing toward AGI is simple. An AGI system, the thinking goes, might be able to self-improve simultaneously along multiple dimensions. In doing so, it could quickly surpass what humans are capable of and solve problems that have vexed society for millennia. The company and country that reaches that point first will thus not only achieve enormous financial returns, scientific breakthroughs, and military advancements but also lock out competitors by monopolizing the benefits in ways that restrict the development of others and establish the rules of the game. The AI race, then, is really a race to a predetermined AGI finish line in which the winner not only bursts triumphantly through the ribbon but also picks up every trophy and goes home, leaving nothing for even the second- and third-place competitors.
Yet there is reason to be skeptical of this framing. For starters, AI researchers can’t even agree on how to define AGI and its capabilities; in other words, no one agrees on where the finish line is. That makes any policy based around achieving it inherently dubious. Instead of a singular creation, AI is more of a broad category of technologies, with many different types of innovations. That means progress is likely to be a complex and ever-changing wave, rather than a straight-line trip.
This is evident in the technology’s most recent developments. Today’s models are making strides in usability. The most advanced large language models, however, still face many of the same challenges they faced in 2022, including shallow reasoning, brittle generalization, a lack of long-term memory, and a lack of genuine metacognition or continual learning—as well, of course, as hallucinations. Since its release, for instance, GPT-5 has looked more like a normal advancement than a transformative breakthrough. As a result, some of AGI’s biggest proponents have started tempering their enthusiasm. At the start of the summer, former Google CEO Eric Schmidt said that AI wasn’t hyped enough; now, he argues that people have become too obsessed with AGI.
Q: What is AGI?
A: AGI stands for Artificial General Intelligence, which refers to an AI system that can match or exceed human cognitive abilities, including the ability to learn and adapt to new tasks.
Q: Why is AGI considered the holy grail of AI research?
A: AGI is considered the holy grail because it promises to revolutionize various fields by solving complex problems that are currently beyond human capabilities, potentially leading to significant scientific, economic, and military advancements.
Q: What are the risks of focusing too much on AGI?
A: Focusing too much on AGI can divert resources and attention from practical, incremental advancements in AI that could deliver immediate benefits. It also risks setting unrealistic expectations and distorting policy around a goal that may never be reached.
Q: How is China approaching the AI race?
A: China is emphasizing the rapid adoption and integration of current and near-term AI capabilities across various industries. While it is also investing in AGI, the focus is on scaling and applying AI to achieve widespread benefits.
Q: What steps can the U.S. take to improve AI adoption?
A: The U.S. can launch large-scale AI literacy initiatives, modernize its infrastructure and data practices, and invest in the procurement of advanced AI systems to ensure practical and widespread adoption of AI technologies.