
Yann LeCun

AI Skepticism · World Models · Beyond Transformers

The Yann LeCun reading vault — Chief AI Scientist at Meta, Turing Award winner, and the most prominent AI skeptic who works inside a major lab. Curated vault on why current LLMs aren't close to AGI, self-supervised learning, JEPA world models, and the architectural changes needed before we get to human-level AI.

3 articles · Updated 5/12/2026
Curated by @hawking520

About this vault

Essential reading vault for Yann LeCun — Chief AI Scientist at Meta, co-recipient of the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio, and the most articulate critic of the 'LLMs are close to AGI' narrative working inside a major AI lab. LeCun's position is distinctive: he doesn't dismiss LLMs as useless (they're clearly impressive), but he argues they are architecturally limited in ways that prevent them from reaching human-level intelligence. His core claims:

1. LLMs learn next-token prediction from text — a thin, bandwidth-limited, sample-inefficient proxy for how humans learn.
2. True intelligence requires learning world models — internal representations of how the physical and social world works — which means learning from video, sensorimotor data, and action, not just text.
3. The next breakthrough will come from JEPA (Joint Embedding Predictive Architecture), not from scaling transformers further.

Reading LeCun alongside Sam Altman (who believes scaling transformers plus RLHF is the path) and François Chollet (who believes ARC-AGI-style reasoning is the missing piece) gives you the three most credible competing frameworks for what AI needs next.

This vault is particularly timely in 2026 because LeCun's skeptical predictions about LLM limitations have been repeatedly validated: LLMs still confabulate, still fail at simple physical reasoning, and still can't plan multi-step tasks reliably. Meanwhile, JEPA-based models from Meta are producing real research advances. This vault collects his most important papers, talks, and essays on self-supervised learning, world models, and why he thinks the current AI moment is impressive but not close to AGI.

3 articles

Yann LeCun's "Cake" Problem (Joint Embedding Predictive Architecture)

LeCun's 'cake' analogy reframes the entire AI research agenda: the bulk of learning is self-supervised learning of world models (the cake), supervised learning of behavior is the icing, and reinforcement learning is just the cherry on top. This talk from 2022 lays out JEPA — Joint Embedding Predictive Architecture — as the alternative to generative models for learning world models. JEPA learns abstract representations of the world by predicting representations rather than predicting raw pixels or tokens. The implication: JEPA-based models can learn from unlabeled video in ways that generative models can't, because predicting compressed representations is easier than predicting every detail. LeCun uses this framework to argue that LLMs — which are essentially generative next-token predictors — are hitting a ceiling: they model text well but have no grounded world model. The cake problem is the most concise summary of why Meta invests in JEPA and why LeCun is skeptical that scaling LLMs alone will reach human-level AI.
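The mechanical difference is easy to sketch: a JEPA-style loss compares predicted and actual representations, not predicted and actual pixels. Everything below — the dimensions, the random linear "encoders", the predictor — is an illustrative toy stand-in, not Meta's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoders and predictor: random linear maps standing in for
# deep networks. All shapes and names here are illustrative only.
D_IN, D_REP = 32, 8
enc_x = rng.normal(size=(D_IN, D_REP))   # context encoder
enc_y = rng.normal(size=(D_IN, D_REP))   # target encoder
pred = rng.normal(size=(D_REP, D_REP))   # predictor in representation space

def jepa_loss(x, y):
    """Predict the *representation* of y from the representation of x.

    A generative model would instead try to reconstruct every
    pixel/token of y itself, which forces it to model irrelevant detail.
    """
    s_x = x @ enc_x          # abstract representation of the context
    s_y = y @ enc_y          # abstract representation of the target
    s_y_hat = s_x @ pred     # prediction made in representation space
    return float(np.mean((s_y_hat - s_y) ** 2))

x = rng.normal(size=D_IN)    # e.g. past video frames, flattened
y = rng.normal(size=D_IN)    # e.g. a future frame
loss = jepa_loss(x, y)
```

Because the target representation is compressed, the predictor can ignore unpredictable detail (leaf textures, sensor noise) that a pixel-level generative loss would penalize.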

A Path Towards Autonomous Machine Intelligence

LeCun's 2022 manifesto laying out a full research roadmap for achieving autonomous machine intelligence (AMI) — what he calls AGI without the science-fiction connotations. The paper is notable because it comes from inside Meta, a major AI lab, and acknowledges openly that current approaches (LLMs, RLHF, generative models) are insufficient. The core architecture LeCun proposes has six modules: (1) a world model that predicts consequences of actions; (2) an actor that proposes actions; (3) an intrinsic cost that encodes objectives; (4) a short-term memory module (similar to a key-value store); (5) a perception module; (6) a configurator that modulates all other modules. The key insight: the world model is the core unsolved problem. Everything else in modern ML (RLHF, next-token prediction) occupies the 'actor' and 'cost' modules, which are relatively tractable compared to learning an accurate world model. This paper is the intellectual foundation for Meta's JEPA research program and explains why LeCun has been publicly skeptical of LLM-based AGI timelines.
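As a rough caricature of how those modules fit together, planning means searching for the action whose predicted outcome the intrinsic cost scores best. Module internals below are placeholders, not LeCun's proposed implementations, and the configurator is omitted for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Short-term memory module: a simple key-value store placeholder.
    memory: dict = field(default_factory=dict)

    def perceive(self, observation):
        # Perception module: observation -> state estimate.
        return {"state": observation}

    def world_model(self, state, action):
        # World model: predicts the consequence of an action.
        # LeCun's point: this is the core unsolved problem; here it is
        # a trivial placeholder dynamics function.
        return {"state": state["state"] + action}

    def cost(self, state):
        # Intrinsic cost: encodes objectives (here: stay near zero).
        return abs(state["state"])

    def act(self, state, candidates):
        # Actor: pick the action whose *predicted* outcome is cheapest,
        # i.e. plan through the world model rather than react.
        return min(candidates,
                   key=lambda a: self.cost(self.world_model(state, a)))

agent = Agent()
state = agent.perceive(3)
action = agent.act(state, candidates=[-2, -1, 0, 1])  # picks -2
```

Note how RLHF-style training would only shape `cost` and `act`; the hard part, on LeCun's account, is making `world_model` accurate enough to plan through.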

Yann LeCun on Why LLMs Can't Reason (and What Would Fix It)

LeCun's most-cited social media thread on LLM limitations: LLMs confabulate because they have no world model. They predict the next token based on pattern-matching in text, but they don't have an internal model of how the world works — so they generate plausible-sounding but factually wrong answers. His argument: humans learn world models from observing and interacting with the physical world from birth, which grounds their reasoning in causality. LLMs learn from text, which is a highly filtered, bandwidth-limited representation of human experience. You can't learn physics from reading a physics textbook — you have to interact with the world. The fix, in LeCun's view: multimodal learning (learning from video, audio, sensorimotor data) combined with architectures like JEPA that learn abstract representations rather than predicting raw pixels. This thread was widely shared in 2025 after a series of high-profile LLM failures and is the best accessible entry point for LeCun's core argument.

Start reading, not hoarding.

Import this vault to Burn 451 and actually read what matters.

Frequently asked questions

Who is Yann LeCun?

Yann LeCun is Chief AI Scientist at Meta and a co-recipient of the 2018 Turing Award, alongside Geoffrey Hinton and Yoshua Bengio. He is the most prominent AI skeptic working inside a major lab, arguing that current LLMs aren't close to AGI and that self-supervised learning, JEPA world models, and new architectures beyond transformers are needed to reach human-level AI. This vault collects his key papers, talks, and essays on those themes.

How was the Yann LeCun vault curated?

The Yann LeCun vault was hand-curated by the Burn 451 editorial team from publicly available essays, blog posts, podcast transcripts, and social threads. Each piece includes an AI-generated summary so readers can triage in seconds. The vault auto-syncs as new content from Yann LeCun is published.

How many articles are in the Yann LeCun vault?

The Yann LeCun vault currently contains 3 curated pieces organized by topic, not chronology. Each article has an AI summary and a direct link to the original source. Items are refreshed hourly through Burn 451's ISR pipeline, so new publications appear within a day.

How do I use this vault with Claude or Cursor?

Install the burn-mcp-server package from npm and connect it to Claude, Cursor, or any MCP-compatible AI tool. The vault becomes queryable as live context — your AI can search, summarize, and cite articles from Yann LeCun directly in conversation without manual copy-paste or re-uploading files.
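A minimal setup sketch, assuming the package name given above; the global-install flag and the Claude Desktop config keys shown in comments are assumptions to verify against Burn 451's own documentation.

```shell
# Install the MCP server (package name taken from the answer above;
# installing globally is an assumption, not documented here).
npm install -g burn-mcp-server

# Claude Desktop discovers MCP servers via claude_desktop_config.json.
# The server name and command below are hypothetical:
#
#   "mcpServers": {
#     "burn451": { "command": "burn-mcp-server" }
#   }
```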

What is Burn 451?

Burn 451 is a read-later app built around a 24-hour burn timer that forces daily triage. Articles you save must be read, vaulted, or released within 24 hours. The Vault layer — including this Yann LeCun collection — holds permanent curated reading lists for AI thought leaders, founders, and researchers.

Content attributed to original authors. Burn 451 curates publicly available writing as a reading index. For removal requests, contact @hawking520.