Published Date: 10/09/2025
One of science’s quiet strengths is its ability to chart the boundaries of the knowable. The laws of thermodynamics reveal entropy’s inevitability. Einstein showed that nothing outruns light. Quantum mechanics gave us the uncertainty principle; chaos and complexity theory uncovered inherent unpredictability; mathematical logic proved that some truths can never be decided. Good science doesn’t just tell us what we can do — it tells us where we must stop.
AI belongs in this lineage of limits. Its successes depend on narrowing the problem: making it specific, discarding most possibilities, sealing it inside a representation and a specification. Deep Blue mastered chess, AlphaGo conquered Go, and LLMs like GPT-4 and GPT-5 dazzle with conversation — yet all are automation by definition: mechanical simulations of tasks that once required human intelligence. In LLMs, the representation is tokens; the specification is the transformer’s attention-and-prediction loop. However wide the surface domain is, the machinery is fixed.
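The fixed machinery described above can be sketched with a toy stand-in: here a bigram lookup table replaces the transformer, but the loop — tokenize, predict the next token, append, repeat — has the same shape regardless of how wide the surface domain grows. Every name and function below is illustrative, not any real library's API.

```python
from collections import Counter, defaultdict

# Toy illustration of the fixed "predict-the-next-token" loop that LLM
# generation reduces to. A bigram count table stands in for the model;
# the loop structure (tokenize -> predict -> append -> repeat) is the point.

def train_bigrams(corpus):
    """Count, for each token, which tokens follow it in the corpus."""
    table = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, prompt, max_new_tokens=5):
    """The fixed machinery: repeatedly predict and append the next token."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        followers = table.get(tokens[-1])
        if not followers:
            break  # nothing ever followed this token; stop generating
        tokens.append(followers.most_common(1)[0][0])
    return " ".join(tokens)

corpus = "the cat sat on the mat and the cat slept"
table = train_bigrams(corpus)
print(generate(table, "the cat", max_new_tokens=3))
```

However sophisticated the predictor inside the loop becomes, the representation (tokens) and the specification (predict the next one) stay fixed — which is the essay's point about engineered reduction.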
Here’s the rub: asking how to make these systems “general” is asking how to remove the very constraints that made them work in the first place. That’s the futurist’s dilemma. Futurists point to Narrow AI triumphs as if they foreshadow AGI, but those triumphs — from chess engines to self-driving cars to LLMs — are demonstrations of boundaries, not breakthroughs toward mind.
Seen this way, all usable AI today is a form of engineered reduction: problems stripped down to what a machine can represent and compute. And the real question for AGI isn’t how much bigger we can make those reductions, but whether there are domains of intelligence that resist reduction altogether.
This, in the end, is why AI is still automation — and why the dream of machines that think like us remains just that: a dream. The real question we face isn’t whether AI will “wake up,” but how much of our human world we’re willing to hand over to machines.
The journey of AI has been marked by significant milestones, but each step forward often reveals new limitations. In 2016, I wrote that AI was still just automation, and that remains true today. Futurists project an AI future in which organic life forms, or “Orga,” coexist with mechanical life forms, or “Mecha,” that steadily improve and eventually take over. That vision remains speculative.
Wide AI, while still just automation, represents a genuine advance. Truly general intelligence remains a mystery, perhaps even more so than it was in 2016. Each quantum leap forward tends to reveal that it, too, is a dead end for the bolder ambitions of true AGI. The quest continues, but the path is fraught with challenges and unknowns.
Q: What is the main argument of the article?
A: The main argument is that AI, despite its advancements, is still fundamentally automation. It operates within specific, predefined boundaries and is not capable of true general intelligence.
Q: How does the article define 'Narrow AI'?
A: Narrow AI refers to AI systems that are designed to perform specific tasks, such as playing chess or generating text, within a fixed set of parameters and constraints.
Q: What is the difference between Narrow AI and AGI?
A: Narrow AI is specialized for specific tasks, while AGI (Artificial General Intelligence) aims to possess the broad, flexible intelligence that humans have, capable of understanding and learning any intellectual task.
Q: Why is achieving AGI considered a significant challenge?
A: Achieving AGI is challenging because it requires creating a system that can understand and adapt to a wide range of tasks and environments, something that current AI systems, which rely on specific constraints, cannot do.
Q: What does the article suggest about the future of AI?
A: The article suggests that while AI will continue to advance and become more sophisticated, the dream of machines that think like humans remains a distant and uncertain goal.