Published Date: 21/08/2025
Back in March, I warned that, despite the blooming AI Spring, an AI winter might soon be upon us, adding that a global reset of expectations was both necessary and overdue. There are signs this month that it might be beginning: the Financial Times warns this week that AI-related tech shares are taking a battering, including NVIDIA (down 3.5%), Palantir (down 9.4%), and Arm (down 5%), with other stocks following them downwards.
The cause? In part, a critical MIT report revealing that 95% of enterprise gen AI programs are failing or returning no measurable value. More on that in a moment. But the absurdity of today's tech-fueled stock market was revealed by one Financial Times comment: 'The tech-heavy Nasdaq Composite closed down 1.4 percent, the biggest one-day drop for the index since 1 August.' That a single-digit one-day drop three weeks after another single-digit drop might constitute a global crisis indicates just how jittery and short-termist investor expectations have become.
The implication is that, behind the hype and Chief Executive Officers' (CEOs) nonsensical claims of Large Language Models' (LLMs) superintelligence, are thousands of nervous investors biting their lips, knowing that the bubble might burst at any time. As the Economist noted recently, some AI valuations are 'verging on the unhinged', backed by hype, media hysteria, social media's e/acc cultists, and the unquestioning support of politicians desperate for an instant source of growth – a future of 'infinite productivity', no less, in the words of UK AI and digital government minister Feryal Clark in the Spring.
The tell came last week from the industry's leader and by far its biggest problem: OpenAI CEO Sam Altman, a man whose every interview with client podcasters should be watched with the sound off, so you can focus on his body language and smirk, which scream 'Everything I'm telling you is probably BS'. In a moment of atypical – yet cynical – candor, Altman said: 'Are investors over excited? My opinion is yes. […] I do think some investors are likely to lose a lot of money, and I don't want to minimize that, that sucks. There will be periods of irrational exuberance. But, on the whole, the value for society will be huge.'
Nothing to see here: just the man who supported the inflated bubble finally acknowledging that the bubble exists, largely because AI has industrialized laziness rather than made us smarter. And this comes only days after the fumbled launch of GPT-5, a product so far removed from artificial general intelligence (AGI) as to be a remedial student. (And remember, AGI – the founding purpose of OpenAI – is no longer 'a super-useful term', according to Altman.)
The sector's problems are obvious and beg for a global reset. A non-exhaustive list includes: First, vendors scraped the Web for training data – information that may have been publicly accessible (on the internet), but which was not always public domain in terms of rights. As a result, they snapped up copyrighted content of every kind: books, reports, movies, images, music, and more, including entire archives of material and pirate libraries of millions of books. Soon, the Web was awash with 'me too' AIs that, far from curing cancer or solving the world's most urgent problems, offered humans the effort-free illusion of talent, skill, knowledge, and expertise – largely based on the unauthorized use of intellectual property (IP) – for the price of a monthly subscription.
Suddenly AIs composed songs, wrote books, made videos, and more. This exploitative bilge devalued human skill and talent, and certainly its billable potential, spurring an outcry from the world's creative sectors. After all, the training data's authors and rightsholders receive nothing from the deal. Second, the legal repercussions of all this are just beginning for vendors. Anthropic is facing an existential crisis in the form of a class action by (potentially) every US author whose work was scraped, the plaintiffs allege, from the LibGen pirate library. Meta is known to have exploited the same resource rather than license that content – according to a federal judge in June – while a report from Denmark's Rights Alliance in the Spring revealed that other vendors had used pirated data to train their systems.
But word reaches me this week that the legal fallout does not just concern copyright: the first lawsuits are beginning against vendors' cloud-based tools disclosing private conversations to AIs without the consent of participants. The first player in the spotlight? Our old friend, Otter AI. Last year I reported how this once-useful transcription tool had become so 'infected' with AI functions that it had begun rewriting history and putting words in people's mouths, flying in data from unknown sources and crediting it to named speakers. As a result, it had become too dangerous to use. In the US, Otter is now being sued by the plaintiff Justin Brewer ('individually and on behalf of others similarly situated' – a class action) for its Notetaker service disclosing the words of meetings – including those of participants who are not Otter subscribers – to its GPT-based AI. Brewer's conversations were 'intercepted' by Otter, the suit alleges.
Clause Three of the action says: 'Otter does not obtain prior consent, express or otherwise, of persons who attend meetings where the Otter Notetaker is enabled, prior to Otter recording, accessing, reading, and learning the contents of conversations between Otter account holders and other meeting participants. Moreover, Otter completely fails to disclose to those who do set up Otter to run on virtual meetings, but who are recorded by the Otter Notetaker, that their conversations are being used to train Otter Notetaker's automatic speech recognition (ASR) and Machine Learning (ML) models, and in turn, to financially benefit Otter's business.' Brewer believes that this breaches both federal and California law – namely, the Electronic Communications Privacy Act of 1986; the Computer Fraud and Abuse Act; the California Invasion of Privacy Act; California's Comprehensive Computer Data and Fraud Access Act; the California common law torts of intrusion upon seclusion and conversion; and the California Unfair Competition Law. That's quite a list.
Speaking as a journalist, I can see that these same problems risk breaching confidentiality when tools like Otter record and transcribe interviews with, for example, CEOs, academics, analysts, spokespeople, and corporate insiders and whistleblowers. Who would consent to an interview if the views of named speakers, expressed in an environment of trust, might be disclosed to AI systems, third-party vendors, and unknown corporate partners, without the journalist's knowledge – let alone the interviewee's? Thanks for speaking to me, Whistleblower X. Do you consent to your revelations being used to train ChatGPT?
Third, AI companies walled off all that scraped data and creatives' IP and began renting it back to us, causing other forms of Web traffic to fall away. As I noted last week, 60% of all Google Web searches are already 'zero click', meaning that users never click out to external sources. More and more data is consumed solely within AI search, a trend that can only deepen as Google transitions to AI Mode. My report added: 'Inevitably, millions of websites and trusted information sources will wither and die in the AI onslaught. And we will inch ever closer to the technology becoming what I have long described as a coup on the world's digitized content – or what Canadian non-profit media organization The Walrus called "a heist" this month.'
Thus, we are becoming unmoored from authoritative, verifiable data sources and cast adrift on an ocean of unreliable information, including hallucinations. As I noted in an earlier report this month, there have been at least 150 cases of experienced US lawyers presenting fake, AI-generated precedents in court. What other industries are being similarly undermined? Fourth, as previously documented, some AI vendors' data center capex is at least an order of magnitude higher than the value of the entire AI software market. As a result, they need others to pay their hardware costs – including, perhaps, nations within new strategic partnerships. Will those sums ever add up?
Then, fifth, there are the energy and water costs associated with training and using data-hungry AI models. As I have previously noted, cloud data centers already use more energy than the whole of Japan, and AI will force that figure much higher. Meanwhile, in a moment of absurdist comedy, the British Government last week advised citizens to delete old emails and photos because data centers use 'vast amounts of water' and risk triggering a drought. A ludicrous announcement from a government that wants to plough billions into new AI facilities. Sixth, report after report reveals that most employees use AI tactically to save money and time, not strategically to make smarter decisions. So bogus claims of AGI and superintelligence miss the point of enterprises' real-world usage – much of which is shadow IT, as my report on accountability demonstrated this week.
But seventh, even if some people are using AI to make smarter decisions rather than to cut corners and save time, several academic reports have revealed that it is having the opposite effect: in many cases, AI is making us dumber, lazier, and less likely to think critically or even retain information. This 206-page study on arXiv examining the cognitive impacts of AI assistance is merely one example of many. And eighth in our ever-expanding list of problems, AI just isn't demonstrating a hard Return on Investment (ROI) – either to most users, or to those nervous investors who are sitting on their hands in the bus of survivors (to quote the late David Bowie).
Which brings us back to that MIT Nanda Initiative report, which has so alarmed the stock market – not to mention millions of potential customers.
Q: What is the AI Winter?
A: The AI Winter refers to a period of reduced funding and interest in artificial intelligence research, often following a period of hype and high expectations.
Q: Why are AI-related stocks falling?
A: AI-related stocks are falling due to a combination of factors, including a critical MIT report revealing the failure of most enterprise AI programs and a general cooling of investor sentiment.
Q: What are the legal issues facing AI companies?
A: AI companies are facing legal issues related to copyright infringement, unauthorized use of intellectual property, and privacy concerns, such as the unauthorized recording and use of private conversations.
Q: How is AI affecting the web and search behavior?
A: AI is changing web traffic patterns, with a significant increase in 'zero click' searches, where users do not click through to external sources, leading to a decline in traffic to many websites.
Q: What are the environmental concerns associated with AI?
A: AI training and usage require significant energy and water resources, leading to concerns about the environmental impact, including high energy consumption and water usage in data centers.