Published Date: 4/08/2024
The AI bubble has been growing for quite some time now, with many experts predicting its inevitable burst. Recently, AI expert Gary Marcus shared his thoughts on why this bubble is about to pop. According to Marcus, the current approaches to machine learning, which underlie most AI systems today, are terrible at handling outliers.
When these systems encounter unusual circumstances, they often produce absurd results. Marcus calls these 'discomprehensions.' For instance, remember the claims about bears in space and jackrabbits everywhere? These types of errors occur because AI systems are not capable of thinking creatively or of handling situations that fall far outside the space of their training examples.
As Robert J. Marks often points out, AI is not creative. It can find solutions if they already exist online, but if they don't, the system will fail. This is when the hallucinations and crazy reasoning begin. If a chatbot's answers are crazy enough, users will notice. But what if the answers are wrong without being obviously crazy?
Marcus warns that once you understand this issue, predictions about AGI being nigh seem like sheer fantasy. He notes that he and Steven Pinker had warned of this problem a while back, but maybe the big guys didn't listen. If so, maybe their shareholders will pay later.
Marcus enlarged on this theme, offering four reasons for thinking the bubble will soon burst. He pointed out a series of problems with deep learning, including troubles with reasoning and abstraction that were often ignored or denied for years. These problems continue to plague deep learning to this day and have come to be widely recognized.
In December 2022, at the height of ChatGPT's popularity, Marcus made a series of seven predictions about GPT-4 and its limits, such as hallucinations and making stupid errors. Essentially all have proven correct and have held true for every other LLM that has come since.
We should remember this when we hear Ray Kurzweil tell us that AI will think like humans by 2029, or when Sam Altman forecasts a super-competent AI colleague. Fictional artificial intelligences like HAL 9000 and David are murderous and otherwise unlikeable, but their craziness is sociopathic, not demented.
Q: What is the main reason why the AI bubble is about to burst?
A: The main reason is that current approaches to machine learning are terrible at handling outliers and thinking creatively.
Q: Who predicted the AI bubble would burst and why?
A: Gary Marcus predicted the AI bubble would burst because of AI's inability to handle outliers and think creatively.
Q: What are some examples of AI errors caused by its inability to handle outliers?
A: Examples include claims about bears in space and jackrabbits everywhere, which are absurd results produced by AI systems.
Q: Is AI capable of thinking creatively?
A: No, AI is not capable of thinking creatively. It can find solutions if they already exist online, but if they don't, the system will fail.
Q: What is the consequence of the AI bubble bursting?
A: The consequence of the AI bubble bursting is that shareholders of big tech companies may pay later for not listening to warnings about the limitations of AI.