Published Date: 12/08/2025
Google DeepMind CEO Demis Hassabis has pointed to inconsistency as a defining flaw of today’s artificial intelligence (AI): models can excel at complex tasks yet falter on simpler ones. In a recent interview on the “Google for Developers” podcast, Hassabis explained that advanced AI models like Google’s Gemini can win gold medals at the International Mathematical Olympiad but often struggle with basic high school math problems. “The lack of consistency in AI is a major barrier to achieving AGI,” he said, referring to Artificial General Intelligence, the stage where AI can reason like humans.
During the podcast, Hassabis also referenced Google CEO Sundar Pichai’s description of the current state of AI as “AJI” — artificial jagged intelligence. This term is used to describe systems that excel in certain tasks but fail in others. Hassabis emphasized that solving AI’s inconsistency problem will require more than just increasing data and computing power. “We need better testing and new, more challenging benchmarks to determine precisely what the models excel at and what they don’t,” he stated.
The debate over achieving AGI continues to divide the tech industry. Hassabis has previously taken a more cautious approach compared to Google co-founder Sergey Brin, advocating for higher standards before declaring that AI has reached the level of AGI. He believes that true AGI should be able to reason and perform a wide range of tasks as effectively as humans.
OpenAI CEO Sam Altman, who initially suggested that AGI was “just around the corner,” has since shifted his position. In an interview on CNBC’s “Squawk Box,” Altman was asked whether the company’s latest GPT-5 model brings the world closer to achieving AGI. Altman responded that the term AGI is not particularly useful, noting that different companies and individuals define it in varying ways. One common definition is an AI that can perform a significant share of the world’s work, but this definition has its own problems, as the nature of work is constantly evolving.
Altman emphasized that the focus should be on the continuous improvement and exponential growth in model capabilities, which will increasingly be relied upon for a wide range of tasks. “I think the point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things,” he added.
As the tech industry continues to grapple with the challenges and potential of AI, the insights from leaders like Hassabis and Altman provide valuable perspectives on the path forward. Achieving true AGI remains a significant goal, but the journey is marked by ongoing research, rigorous testing, and a deep understanding of the capabilities and limitations of current AI models.
Q: What is the main issue with current AI models according to Demis Hassabis?
A: The main issue with current AI models, according to Demis Hassabis, is the inconsistency in performance. AI can excel in complex tasks but fail in simpler ones, which is a major barrier to achieving Artificial General Intelligence (AGI).
Q: What does the term 'AJI' stand for, and who coined it?
A: The term 'AJI' stands for 'artificial jagged intelligence,' and it was coined by Google CEO Sundar Pichai. It describes AI systems that excel in certain tasks but fail in others.
Q: What is Demis Hassabis's stance on achieving AGI compared to Sergey Brin?
A: Demis Hassabis takes a more cautious approach to achieving AGI, advocating for higher standards before declaring that AI has reached that level. In contrast, Google co-founder Sergey Brin is more optimistic about the arrival of AGI.
Q: What did Sam Altman say about the term 'AGI' in his recent interview?
A: Sam Altman, CEO of OpenAI, stated that the term 'AGI' is not particularly useful. He believes the focus should be on the continuous improvement and exponential growth in model capabilities rather than the term itself.
Q: What does Demis Hassabis suggest is needed to solve AI's inconsistency problem?
A: Demis Hassabis suggests that solving AI's inconsistency problem will require more than just increasing data and computing power. He emphasizes the need for better testing and new, more challenging benchmarks to determine what the models excel at and what they don’t.