Published Date: 5/9/2025
When I first wrote about this almost a decade ago, “AI” was already a cultural Rorschach test. To some, it was exciting and futuristic. To others, it was ominous, Orwellian, or just marketing spin. Automation, by contrast, was the unglamorous cousin that conjured images of soulless machines taking over the last shreds of human purpose.
But from the start, my view was simple: what we call “AI” today is still just automation. And automation is not a mind. That argument has aged better than I expected. In the years since, we’ve seen an explosion of so-called AI — from self-driving cars to ChatGPT — yet the distinction between AI and automation remains almost universally misunderstood.
Recently, computational linguist Emily Bender and sociologist Alex Hanna, in The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025), repeated the mantra: “AI is automation technology.” In 2016, that was a deflating — even scandalous — claim. In 2025, it’s part of the vanguard of a New Resistance.
To many, “artificial intelligence” is an exciting concept — sexy, scary, futuristic. Automation, by contrast, sounds dull and mechanistic. It conjures Orwellian images of mindless machines stripping away what soul we have left on Earth. There’s clearly an emotional gap between the two terms. But is there a real one? What, exactly, is AI — and how is it different from automation?
I think this confusion is at the heart of a broader cultural and intellectual muddle. Futurists like Ray Kurzweil, Elon Musk, Kevin Kelly, and Oxford’s Nick Bostrom (among many others) imbue AI with near-magical powers. In their vision, AI will “come alive,” develop minds, and plot to overtake us. We’re told we’re witnessing the evolution of a new species: we are “Orga” — organic life forms — and they are “Mecha,” mechanical life forms. The Mecha will keep getting smarter, the story goes, until the distinction between mindful humans and mechanical minds disappears. In the AI futurist’s imagination, the future belongs to the machines.
The AIs-are-coming meme is everywhere — from clickbait headlines to Hollywood thrillers. In the 2014 hit Ex Machina, the reclusive tech genius Nathan captures the futurist mood when he says, “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa.” Then he downs a vodka, already musing on his soon-to-be-forgotten life.
This kind of futurist AI is seductive in the way good fiction is: it hooks the imagination. (Storytelling sells.) But the scientific question is different. Strip away the sci-fi drama about sentient machines “coming alive” and you’re left with a much less glamorous, more Orwellian reality: automation. And here’s the spoiler — today’s “AI” is just automation. Nothing more. The real question is whether automation can ever become mind-like.
Futurist-AI enthusiasts are quick to concede that today’s systems are “narrow” — they only work within a specific domain. A chess program plays chess. A Jeopardy! system plays Jeopardy! A self-driving car drives. That’s it. The leap from these specialized systems to the Ex Machina version — with motives, personalities, and an inner life — is enormous. In the old days, this hypothetical leap was called “Strong AI.” Today, the preferred term is Artificial General Intelligence, or AGI.
So how does narrow AI sprout a mind? If you believe the rapture stories told by futurists like Ray Kurzweil or Wired co-founder Kevin Kelly (see his 2010 book What Technology Wants), the shift happens “gradually, then suddenly,” like falling in love. Narrow AI just keeps getting better until it crosses a threshold, starts improving itself, and snowballs into “ultra-intelligent” systems — a term coined in the 1960s by mathematician I.J. Good. Oxford philosopher Nick Bostrom, in his 2014 bestseller Superintelligence, calls the endgame “superintelligence.” In this narrative, narrow AI doesn’t just grow into AGI — AGI hatches into something vastly smarter than us. At which point, yes, we’re the fossil skeletons on the plains of Africa.
The rise of large language models built on Transformer architectures has changed the landscape — but not in the way the hype would have you believe. If there’s a “third way” between narrow AI and AGI, it’s what I (following others) call Wide AI: systems with broad conversational range, trained on massive deep neural networks using attention, as first described in Google’s landmark 2017 paper. (Ironically, Google itself missed the boat.)
Wide AI, while still just automation, is a genuine advance.
Q: What is the main argument of the article?
A: The main argument is that what we call AI today is essentially just advanced automation and does not possess the characteristics of a mind or consciousness.
Q: Why is the distinction between AI and automation misunderstood?
A: The distinction is misunderstood because AI is often portrayed in a glamorous and futuristic light, while automation is seen as dull and mechanistic. This emotional gap leads to confusion about the actual capabilities of AI.
Q: What do futurists believe about the future of AI?
A: Futurists like Ray Kurzweil and Elon Musk believe that AI will develop minds, plot to overtake humans, and eventually become a new species, leading to a future where machines are smarter than humans.
Q: What is the difference between narrow AI and AGI?
A: Narrow AI is specialized and works within a specific domain, such as playing chess or driving a car. AGI, or Artificial General Intelligence, is the hypothetical leap where AI develops a broad range of capabilities and a mind-like intelligence.
Q: What is Wide AI and how is it different from narrow AI and AGI?
A: Wide AI refers to systems with broad conversational range, trained on massive deep neural networks. While it is a significant advance, it is still just automation and does not possess the broad capabilities or consciousness of AGI.