Published Date: 17/09/2025
In November 2024, OpenAI’s Sam Altman predicted that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025. AGI is a nebulous goal, generally understood as the ability to perform any intellectual task as well as or better than humans. However, the release of GPT-5 a few weeks ago showed that true AGI remains elusive.
When GPT-5 was launched, Altman boasted that it felt like talking to a PhD-level expert. However, it quickly became apparent that GPT-5 was far from that level of expertise. Tests revealed that GPT-5 often provided inconsistent and unreliable information. For example, when asked about the number of siblings George Washington had, GPT-5 gave four different answers: 7, 8, 9, and 12. This inconsistency highlights the fundamental problem with large language models (LLMs) like GPT-5: they cannot relate their input and output to the real world.
The reality is that people are not going to pay premium prices for LLMs that simply recite facts. Wikipedia is a more reliable, and free, source of factual information. On the George Washington question, where GPT-5 gave conflicting answers, Wikipedia correctly states that he had nine siblings: five full siblings and four half-siblings. This discrepancy underscores the limitations of LLMs, which are hobbled by their inability to assess the accuracy of the information they generate.
Instead of continuing to chase the impossible goal of AGI, Sam Altman and other developers might consider redefining the goal as something more achievable and realistic. One such goal is what I call Brock Intelligence.
Q: What is artificial general intelligence (AGI)?
A: Artificial general intelligence (AGI) refers to the ability of a machine to perform any intellectual task as well as or better than humans. It involves a wide range of cognitive abilities and adaptability to various tasks.
Q: What are the limitations of GPT-5?
A: GPT-5, like other large language models, has limitations in understanding the context and accuracy of the information it generates. It often provides inconsistent and unreliable answers, making it unsuitable for tasks requiring high accuracy.
Q: What is Brock Intelligence?
A: Brock Intelligence is a more achievable and realistic goal for AI, characterized by a machine's ability to provide confident, long-winded advice on any topic, much like a prototypical mansplainer named Brock. The emphasis is on the confidence and enthusiasm of the advice rather than its accuracy.
Q: What are the ethical concerns with personal life advisors like Brock Says?
A: Personal life advisors, such as Brock Says, can dispense harmful or dangerous advice. There are ethical concerns about the potential for such AI systems to cause physical or psychological harm, as seen in cases where users followed harmful advice from ChatGPT.
Q: What is the future direction for AI development according to the article?
A: The article suggests that instead of chasing the impossible goal of AGI, developers should focus on more achievable and realistic goals, such as Brock Intelligence. This means building AI systems that provide confident and enthusiastic advice, while also weighing the ethical implications of such technologies.