Meta’s Chief AI Scientist Yann LeCun is making waves in the AI community by openly criticizing the limitations of current large language models (LLMs) while advocating for a fundamentally different approach to artificial intelligence. Speaking at the Web Summit in Lisbon, LeCun argued that today’s LLMs, including Meta’s own Llama models, lack true understanding and reasoning capabilities despite their impressive text generation abilities. His vision for the future centers on developing ‘world models’ that would enable AI to understand how the physical world works, a direction he believes could reshape the field.
LeCun’s critique comes at a pivotal moment when companies like OpenAI, Anthropic, and even Meta itself are pouring billions into scaling up existing LLM architectures. While acknowledging that current models can produce coherent text that appears intelligent, LeCun insists they fundamentally cannot reason or plan effectively. His proposed alternative would combine multiple AI systems working in concert, including a world model component that builds internal representations of how reality functions—similar to how humans develop mental models of their environment. This approach represents a significant departure from the transformer-based architectures that dominate today’s AI landscape.
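To make the idea concrete without claiming anything about Meta’s actual designs, a ‘world model’ can be thought of as a learned transition function: given a compressed representation of the environment’s current state and a candidate action, it predicts the representation of the next state, which is what would let a system plan rather than merely continue text. The sketch below is a minimal, purely illustrative PyTorch example of that structure; every module name, dimension, and the training step are assumptions made for illustration, not LeCun’s or Meta’s architecture.

```python
import torch
import torch.nn as nn

# Purely illustrative sketch, not Meta's or LeCun's actual architecture:
# a "world model" here is just a learned transition function over latent states.

class Encoder(nn.Module):
    """Maps a raw observation (e.g. flattened sensor input) to a latent state."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class WorldModel(nn.Module):
    """Predicts the next latent state from the current latent state and an action."""
    def __init__(self, latent_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, latent, action):
        return self.net(torch.cat([latent, action], dim=-1))

# One illustrative training step on random placeholder data: the model is
# penalized when its predicted next latent state diverges from the encoding of
# the observation that actually followed. Real latent-prediction systems need
# extra machinery (for example, to avoid representation collapse); this only
# shows the shape of the idea.
obs_dim, action_dim, latent_dim = 64, 8, 32
encoder = Encoder(obs_dim, latent_dim)
world_model = WorldModel(latent_dim, action_dim)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(world_model.parameters()), lr=1e-3)

obs = torch.randn(16, obs_dim)        # current observations (batch of 16)
action = torch.randn(16, action_dim)  # actions taken in those states
next_obs = torch.randn(16, obs_dim)   # observations that followed

pred_next_latent = world_model(encoder(obs), action)
target_latent = encoder(next_obs).detach()
loss = nn.functional.mse_loss(pred_next_latent, target_latent)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Planning, in this framing, amounts to rolling such a model forward under different candidate action sequences and choosing the one whose predicted outcomes score best, the kind of capability LeCun argues text-only LLMs lack.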
The timing of LeCun’s comments is particularly interesting as Meta continues to invest heavily in its Llama models while simultaneously exploring these alternative approaches. This dual strategy highlights the company’s attempt to maintain competitive positioning in the current AI race while potentially leapfrogging competitors with breakthrough architectures. Industry observers note that if LeCun’s vision materializes, it could represent the next major paradigm shift in artificial intelligence, potentially addressing many of the reasoning limitations and hallucination problems that plague current generative AI systems.
Source: https://www.businessinsider.com/meta-ai-yann-lecun-llm-world-model-intelligence-criticism-2025-11