Published: August 21, 2025
One way to measure the scope of the generative AI boom is financially; another is in terms of public awareness. Both are nearly unprecedented, even in the realm of high tech. Data center build-outs in support of AI expansion are expected to reach roughly $364 billion in 2025, an amount that makes the cloud “revolution” look more like an aperitif.
Beyond the charm and utility of chatbots, the AGI (artificial general intelligence) concept is the main driver of public attention. Many believe we are on the cusp of an explosion of compute power that will change history.
If the promise of AGI is one pole of public perception, the opposite pole holds that generative AI is largely a stochastic hat trick. Some rather well-informed parties have pointed out the inherent limitations in the way large language models are built. In short, LLMs have certain drawbacks that cannot be mitigated simply by increasing the size of the models.
The truth lies somewhere between the two poles. Because we already use these tools in our daily work, we software developers know better than most where they shine and where they fall apart. Now is a great moment to reflect on the state of the AI boom from our position on its leading edge.
### Rhymes with Dotcom
In 2001, I was a junior engineer at a dotcom startup. I was walking through the familiar maze of cubicles one day when a passing thought froze me: “Is this a bubble?” Now, I wasn’t especially prescient. I didn’t have a particular grasp of the economics or even the overall technology landscape. I was just glad to be programming for money. But there was something about the weird blend of college dorm and high tech, of carefree confidence and easy history-making, that caught my attention.
There was a palpable sense that “everything was different now.” We were in this bright new era where the limits on expansion had been overcome. All we had to do was continue exploring new possibilities as they opened before us. The ultimate apotheosis of tech married to finance was guaranteed, so long as we maintained enthusiasm. Does any of this sound familiar to you?
Early this year, Goldman Sachs released a study comparing the dotcom tech boom to AI today. The study notes fundamental ways the current moment differs from the dotcom era, especially in the profits reaped by big tech. In essence, the “Magnificent 7” tech companies are pulling in AI-generated revenue that, according to Goldman, justifies the mental and financial extravagance of the moment.
“We continue to believe that the technology sector is not in a bubble,” the report says, because “while enthusiasm for technology stocks has risen sharply in recent years, this has not represented a bubble because the price appreciation has been justified by strong profit fundamentals.” So, that’s the bullish case from the investment point of view. (For a different take, The New Yorker recently published its own comparison of the current AI boom with the dotcom era.)
### The AI Money Trap
We don’t have to look far for a more contrarian perspective. Economist Paul Kedrosky, for one, notes that capital expenditure on data centers now amounts to roughly 1.2% of US GDP, acting as a kind of stimulus program. Without that investment, he writes, the US economy would be in contraction.
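As a rough sanity check (and only that), the $364 billion build-out figure cited earlier lands in that neighborhood if we assume a US nominal GDP of roughly $30 trillion for 2025; the GDP figure is an illustrative assumption, not a number taken from Kedrosky's analysis.

```python
# Back-of-the-envelope check of data center capex as a share of US GDP.
# The $364B figure is the 2025 build-out estimate cited above; the ~$30T GDP
# figure is an assumption for illustration, not a number from Kedrosky's piece.
datacenter_capex = 364e9   # projected 2025 AI data center spending, USD
us_gdp = 30e12             # assumed 2025 US nominal GDP, USD (approximate)

print(f"Capex as a share of GDP: {datacenter_capex / us_gdp:.1%}")  # ~1.2%
```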
Kedrosky describes an “AI datacenter spending program” that is “already larger than peak telecom spending (as a percentage of GDP) during the dot-com era, and within shouting distance of peak 19th century railroad infrastructure spending.” Virtually all AI spend flows into Nvidia in one way or another, a fact reflected in its recent valuation as the first publicly traded company to break $4 trillion in market capitalization (Microsoft was the second to cross that line).
To put that number in context, outlets such as Forbes noted that it exceeds the GDP of Canada and the annual global spending on defense. Nvidia alone accounts for more than 7% of the value of the S&P 500. AI gadfly Ed Zitron calls this dynamic the “AI money trap.”
### What Developers Know
Here’s where I believe programmers and others who use AI tools have an advantage. We are the early adopters par excellence. We are also, like most coders, quick to call it like it is. We won’t fudge if reality doesn’t live up to the promise. Code generation is currently the killer app of AI. But for developers, that flavor of AI assistance is already turning stale. We’re already looking for the next frontier, be it agentic AI, infrastructure, or process automation.
The reality is that AI is useful for development, but it continues to exhibit many shortcomings. If you’re using it for work, you get a visceral sense of that balance. AI sometimes delivers incredible, time-saving insights and content. But then it will just as confidently introduce an inaccuracy or regression that eats up all the time it just saved you. I think most developers have realized quickly that modern AI is more of a useful tool than a world-changing revelation. It won’t be tearing the roof off of traditional development practice anytime soon.
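To make that concrete, here is a hypothetical sketch of the kind of quiet regression an assistant can introduce: a confident-looking “simplification” that swaps a correct Python function for one with a classic mutable-default bug. The functions and the scenario are invented for illustration.

```python
# Hypothetical example: an AI "refactor" that looks cleaner but quietly changes behavior.
# Both functions and the scenario are invented for illustration.

def collect_tags(item, tags=None):
    """Original version: a fresh list is created on every call."""
    if tags is None:
        tags = []
    tags.append(item)
    return tags

def collect_tags_refactored(item, tags=[]):
    """Confident-looking 'simplification': the mutable default is shared across calls."""
    tags.append(item)
    return tags

print(collect_tags("a"))             # ['a']
print(collect_tags("b"))             # ['b']
print(collect_tags_refactored("a"))  # ['a']
print(collect_tags_refactored("b"))  # ['a', 'b'] -- state leaks between calls
```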
### The Limitations of LLMs
There is a growing ripple effect as developers realize we are already doing most of what can be done with today's AI, while the industry presses forward as if there were an almost insatiable demand for more. As an example, consider the sobering rumination from Gary Marcus, a longtime observer of AI, in a recent post dissecting the rather lackluster launch of GPT-5. Looking into the heart of modern AI design, he identifies inherent shortcomings that cannot be addressed by increasing the availability of compute power and data. (The alternative he advocates, neurosymbolic AI, is worth a look.)
Marcus also references a recent report from Arizona State University, which delves into chain-of-thought (CoT) reasoning and the limits of LLMs' ability to perform genuine inference. This is a structural limitation also highlighted in the June 2025 paper from Apple, The Illusion of Thinking. The basic message is that LLMs, when they appear to be reasoning, are actually just reflecting the patterns in their training data, without the ability to generalize. According to this line of thought, what we see is what we get with LLMs; how they have worked in the last few years is what they are capable of, at least without a thoroughgoing re-architecture.
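One way to see what that claim means in practice is a toy probe in the spirit of those papers: pose chain-of-thought arithmetic problems at a length the model handles comfortably, then stretch the length and watch whether accuracy holds. The sketch below is only a harness for that experiment; ask_model is a placeholder for whatever LLM API you happen to use, and no results are implied.

```python
import random

def make_problem(n_digits: int) -> tuple[str, int]:
    """Generate a multi-digit addition problem and its correct answer."""
    a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    prompt = f"Think step by step, then answer with only the final number: {a} + {b} = ?"
    return prompt, a + b

def ask_model(prompt: str) -> str:
    """Placeholder: wire this up to whatever model you want to probe."""
    raise NotImplementedError

def accuracy(n_digits: int, trials: int = 20) -> float:
    """Fraction of problems of a given length the model answers exactly."""
    correct = 0
    for _ in range(trials):
        prompt, answer = make_problem(n_digits)
        if ask_model(prompt).strip() == str(answer):
            correct += 1
    return correct / trials

# A high accuracy(3) alongside a collapsing accuracy(12) would suggest the model
# is pattern-matching familiar problem shapes rather than generalizing the procedure.
```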
If that is true, then we can expect a continuation of the incremental gains we are seeing now, even after throwing trillions of dollars' worth of data center infrastructure at AI. Unforeseen breakthroughs may occur, but they remain in the realm of the possible rather than the predictable, given the facts on the ground.
### What if AI is a Bubble?
Artificial intelligence is a legitimately exciting new sphere of technology, and it is producing a massive build-out. But if the hype extends too far beyond what reality can support, it will contract in the other direction. The organizations and ideas that survive that round of culling will be the ones capable of supporting enduring growth. This happened with blockchain. It went big, and some of those riding its expansion blew up spectacularly. Consider FTX, which lost some $8 billion in customer funds, or the Terra collapse, which is tough to fully quantify but included at least $35 billion lost in a single day. And these are just two examples among many.
However, many of the companies and projects that survived the crypto winter are now pillars of the crypto ecosystem, which is becoming ever more integrated directly into mainstream finance. The same thing may be true of the current AI trend: Even if the bubble pops spectacularly, there will be survivors. And those survivors will promote lasting AI-driven changes. Some of the big tech companies at the forefront of AI are survivors of the dotcom collapse, after all.
In a recent interview with The Verge, OpenAI’s Sam Altman put it this way: “When bubbles happen, smart people get overexcited about a kernel of truth. Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes.”
What do you think? As a software developer using AI in your work, are we in a bubble? If so, how big is it, and how long before it is corrected?
### FAQ

Q: What is generative AI?
A: Generative AI refers to artificial intelligence systems that can create new content, such as text, images, and music, based on patterns learned from existing data.
Q: What are the main drivers of the AI boom?
A: The main drivers of the AI boom include the public's fascination with AGI (artificial general intelligence), significant financial investments, and the practical applications of AI in various industries, particularly in software development.
Q: What are the limitations of large language models (LLMs)?
A: LLMs have limitations in their ability to generalize and perform complex reasoning. They often reflect patterns in their training data without truly understanding the context, leading to inaccuracies and limitations in their outputs.
Q: Is the current AI boom a bubble?
A: Opinions vary, but some economists and industry experts believe that the AI boom could be a bubble due to overhyped expectations and significant financial investments that may not be sustainable in the long term.
Q: What are the potential consequences if the AI bubble bursts?
A: If the AI bubble bursts, it could lead to a correction in the market, with some companies and projects failing. However, the survivors are likely to drive lasting AI-driven changes and innovations in the tech industry.