When it Comes to AI: Think Inside the Box
AI Summary
Cal Newport contrasts two recent ways of talking about AI to argue for what he calls modern thinking — opening the box and looking at how the model actually works — over pre-modern thinking that observes behavior and spins narratives. James Somers, in The New Yorker's "The Case That A.I. Is Thinking," applies a definition from Eric Baum's What is Thought? — thinking as deploying a compressed model of the world to make predictions — and finds that LLM next-token prediction resembles this. Somers concludes carefully: "I do not believe ChatGPT has an inner life, and yet it seems to know what it's talking about."
Compare that to biologist Bret Weinstein on Joe Rogan's podcast, who claimed LLMs are "running little experiments" and "discovering what they should say if they want certain things to happen," then asserted that consciousness is inevitable by analogy to a developing baby. Newport calls this confused. Once trained, a language model is static — a fixed sequence of transformers and feed-forward networks. Every word ChatGPT generates comes from the same unchanging network. A deployed LLM cannot run experiments, want anything, plot, plan, or engage in spontaneous computation. So it cannot be conscious in the implied sense.
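The static-network point can be made concrete with a toy sketch. The "weights" below are a hypothetical stand-in for a trained model, not a real LLM; the point is only that generation is the repeated application of a frozen function to a growing context, with nothing updated along the way:

```python
# Toy illustration: a deployed model is a frozen lookup applied repeatedly.
# WEIGHTS stands in for a trained network; generation reads it, never writes it.
WEIGHTS = {
    "the": {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_token(tokens):
    """Pure function of (frozen weights, context): no learning, no hidden goals."""
    scores = WEIGHTS.get(tokens[-1], {})
    return max(scores, key=scores.get) if scores else None

def generate(prompt, max_steps=5):
    tokens = list(prompt)
    for _ in range(max_steps):
        tok = next_token(tokens)
        if tok is None:
            break
        tokens.append(tok)
    return tokens

snapshot = repr(WEIGHTS)
result = generate(["the"])          # ["the", "cat", "sat", "down"]
assert repr(WEIGHTS) == snapshot    # the network is identical after generating
```

Every call produces output from the same unchanging parameters, which is the mechanical fact Newport says rules out "running little experiments."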
Newport's frame: Somers does modern thinking (mechanism in, conclusions out). Weinstein does pre-modern thinking (observe behavior, spin story, extrapolate) — the same epistemic move as "lightning comes from the gods." AI risks are real and require cool-headed responses, which is why our most serious AI conversations must start by thinking inside the box.
Highlights
- ▸Once trained, an LLM is static — fixed sequence of transformers and feed-forward networks — so it cannot run experiments, plot, plan, or want anything; conscious-LLM claims that ignore the mechanism are pre-modern storytelling
- ▸Two epistemic styles: modern (open the box, look at the mechanism, then conclude — Somers) versus pre-modern (observe behavior, spin a story, extrapolate — Weinstein, lightning-from-the-gods reasoning)
- ▸AI debates are too important for pre-modern thinking; serious conversations about AI risk must start by understanding how the actual computation works, not by analogy to babies or human minds
Original excerpt
James Somers recently published an interesting essay in _The New Yorker_ titled “The Case That A.I. Is Thinking.” He starts by presenting a specific definition of thinking, attributed in part to Eric B. Baum’s 2003 book _What is Thought?_, that describes this act as deploying a “compressed model of the world” to make predictions about what you expect to happen. (Jeff Hawkins’s 2004 exercise in amateur neuroscience, _On Intelligence_, makes a similar case.)
Somers then talks to experts who study how modern large language models operate, and notes that the mechanics of LLMs’ next-token prediction resemble this existing definition of thinking. Somers is careful to constrain his conclusions, but…
Content attributed to the original author (Cal Newport). Burn 451 curates publicly available writing as a reading index. For removal requests, contact @hawking520.