Meta's AI Chief Critiques Path to Human-Level Intelligence

Published: 10/01/2025

Meta's Chief AI Scientist, Yann LeCun, argues against the belief that large language models (LLMs) can achieve human-level intelligence. He emphasizes the limitations of current AI and the need for new approaches in AI development. 

Meta’s Chief AI Scientist, Yann LeCun, spoke out recently against prevailing views on how to achieve artificial general intelligence (AGI), i.e. human-level AI.

In a fireside chat at CES in Las Vegas, LeCun, a Turing Award winner, expressed his skepticism about the current approach of scaling large language models (LLMs) to reach human-level intelligence.



LeCun strongly disagreed with OpenAI CEO Sam Altman’s statement that his teams already know how to build AGI and are looking beyond it to superintelligence.

“There’s absolutely no way … that autoregressive LLMs, the type that we know today, will reach human intelligence,” he said. “It’s just not going to happen.”


LLMs are trained to predict the next token in a sequence: at each step, the model assigns a probability to every token in its vocabulary and then emits the most likely one (or samples from the distribution).

However, human brains handle information in a much more complex manner, processing multiple modalities such as text, images, and sounds simultaneously.
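The next-token objective LeCun is referring to can be sketched in a few lines. This is a deliberately tiny illustration with a made-up four-word vocabulary and invented scores, not any real model's internals:

```python
import math

# Toy vocabulary and hypothetical scores (logits) a model might
# produce after seeing a prefix like "the cat sat on the".
# Both the vocabulary and the numbers are invented for illustration.
vocab = ["mat", "dog", "moon", "chair"]
logits = [4.0, 1.0, 0.5, 2.5]

def softmax(xs):
    """Turn raw scores into a probability distribution over the vocabulary."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: emit the single most probable next token.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_token)  # "mat"
```

Real models repeat this step autoregressively, feeding each emitted token back in as input, which is exactly the loop LeCun argues cannot by itself scale to human-level intelligence.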

AI systems today are mostly “narrow AI,” excelling in specific tasks like playing chess or diagnosing medical conditions.

But they struggle outside their trained areas.

“People in AI have been making that mistake all the time, saying, ‘OK, we have systems now that can beat us at chess, so pretty soon, they’ll be as smart as we are,’” LeCun explained.

“We have systems now that can drive a car through the desert.

Pretty soon, we’ll have self-driving cars at Level 5 [full autonomy].

We still don’t have that, 13 years later.”


LeCun also highlighted that even if AI systems perform well in cognitive tasks, they are far from handling physical tasks like plumbing.

“We’re not going to have an automated plumber anytime soon,” he said.

“It’s incredibly complicated.

It requires a very deep understanding of the physical world and manipulation [of objects].” The challenge lies not in building the physical robots but in making them smart enough.

“In fact, we’re not even close to matching the understanding of the physical world of any animal, cat or dog.”


Another issue with LLMs is the diminishing returns from scaling.

LeCun noted, “Scaling is saturating”: each further improvement demands disproportionately more data and compute.

This is one reason OpenAI reportedly loses money on ChatGPT Pro despite charging $200 a month for it.

(Altman mentioned this in a recent post on X.)


However, LeCun sees potential in the rise of generative world models, which create virtual worlds for robots to train in.

This approach is less costly and less risky than physical training.
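The core idea of training in a generated world can be illustrated with a toy example. Here, a trivial simulated 1-D corridor stands in for a virtual world (this is a generic reinforcement-learning sketch, not the API of Cosmos or any real platform): the "robot" learns which action reaches a goal entirely in simulation, with no physical hardware at risk.

```python
import random

random.seed(0)

GOAL = 5  # goal cell in a corridor of cells 0..10

def step(position, action):
    """Simulated world: action -1 or +1 moves the robot one cell,
    clamped to the corridor; reward is 1 on reaching the goal."""
    position = max(0, min(10, position + action))
    return position, (1 if position == GOAL else 0)

# Tabular value for each (position, action) pair, learned purely in simulation.
values = {(p, a): 0.0 for p in range(11) for a in (-1, 1)}

for episode in range(500):
    pos = random.randint(0, 10)
    for _ in range(20):
        # Mostly act greedily, sometimes explore a random action.
        if random.random() < 0.2:
            action = random.choice([-1, 1])
        else:
            action = max((-1, 1), key=lambda a: values[(pos, a)])
        new_pos, reward = step(pos, action)
        best_next = max(values[(new_pos, a)] for a in (-1, 1))
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        values[(pos, action)] += 0.5 * (reward + 0.9 * best_next - values[(pos, action)])
        pos = new_pos
        if reward:
            break

# After training only in the simulated world, the learned policy
# points toward the goal from either side.
print(max((-1, 1), key=lambda a: values[(2, a)]))   # 1  (move right)
print(max((-1, 1), key=lambda a: values[(8, a)]))   # -1 (move left)
```

Generative world models apply the same principle at scale: the simulated environment is itself produced by a model rich enough that skills learned inside it transfer to physical robots.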

Nvidia CEO Jensen Huang recently unveiled Cosmos, a platform for creating virtual worlds for robotics training.

Using text, image, or video prompts, developers can generate synthetic data to train their “physical AI” systems, including robots and autonomous vehicles.



Google DeepMind is also investing in generative world models, hiring a new team to focus on this area.

Additionally, AI pioneer Fei-Fei Li’s World Labs has launched with $230 million in funding from prominent Silicon Valley figures, including Geoffrey Hinton, Marc Benioff, Reid Hoffman, and Eric Schmidt.



When asked about the timeline for a “ChatGPT moment” in robotics, LeCun suggested it could be three to five years away with the advent of world models.

In the nearer term, he emphasized, AI agents will become common in the workplace, assisting with specific, well-defined tasks rather than performing activities they were never trained for.



In summary, while LLMs have made significant strides, they are not the pathway to human-level intelligence.

The future of AI lies in new approaches, such as generative world models, which promise to make robots smarter and more capable in the physical world. 

Frequently Asked Questions (FAQs):

Q: What is Yann LeCun's current position?

A: Yann LeCun is the Chief AI Scientist at Meta and a Turing Award winner.


Q: Why does Yann LeCun believe large language models (LLMs) will not achieve human-level intelligence?

A: LeCun argues that LLMs are limited to text-based tasks and cannot handle the complex, multi-modal processing that human brains can. They also struggle with physical tasks and understanding the physical world.


Q: What are generative world models, and why are they important?

A: Generative world models create virtual environments for robots to train in, which is less costly and risky than physical training. This approach can help improve the capabilities of AI-powered robots.


Q: What is the current challenge with scaling large language models?

A: The performance gains from scaling LLMs are diminishing, and it is becoming increasingly expensive. This is why OpenAI is not making a profit from ChatGPT Pro despite its high subscription cost.


Q: When does LeCun predict a significant breakthrough in robotics due to generative world models?

A: LeCun believes that a "ChatGPT moment" for robotics could happen in the next three to five years, driven by the advent of world models.
