I think 'agent' may finally have a widely enough agreed upon definition to be useful jargon now
AI Summary
Willison's September 2025 concession that "agent" finally means something specific: an LLM that runs tools in a loop to achieve a goal. The piece then contrasts Claude's opt-in memory tool (explicit tool calls, visible in the conversation, so users can see what the agent reads and writes) with ChatGPT's automatic injection model, and argues that this visibility difference matters for trust, debugging, and safety. He also flags the lethal trifecta (private data plus untrusted content plus external communication) as the attack surface any memory-enabled agent has to defend. Short, skeptical, and useful, because Willison is usually the last person to adopt new terminology; when he does, the field has actually settled something.
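The definition the summary lands on, "an LLM that runs tools in a loop to achieve a goal," can be sketched in a few lines. This is a minimal illustration only, not Willison's code or any real vendor API: the `llm()` stub, the `TOOLS` registry, and the message shapes are all hypothetical stand-ins for a real model call and tool set.

```python
# Hypothetical stub standing in for a real LLM call: it requests one
# tool call, then answers once it sees a tool result in the history.
def llm(messages):
    for m in messages:
        if m["role"] == "tool":
            return {"answer": m["content"]}
    return {"tool": "add", "args": {"a": 2, "b": 3}}

# Illustrative tool registry (names and shapes are assumptions).
TOOLS = {"add": lambda a, b: a + b}

def run_agent(goal, max_steps=5):
    """Run tools in a loop until the model reports an answer."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = llm(messages)
        if "answer" in reply:  # model declares the goal achieved
            return reply["answer"]
        # Execute the requested tool and feed the result back in,
        # visibly, as part of the conversation history.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("goal not reached within max_steps")

print(run_agent("What is 2 + 3?"))  # prints 5
```

The loop also makes the summary's visibility point concrete: because every tool call and result passes through `messages`, an opt-in memory tool modeled this way leaves a readable trace, whereas automatic injection would bypass the history entirely.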
Original excerpt
I think "agent" may finally have a widely enough agreed upon definition to be useful jargon now
I've noticed something interesting over the past few weeks: I've started using the term "agent" in conversations where I don't feel the need to then define it, roll my eyes or wrap it in scare quotes.
I've been _very_ hesitant to use the term "agent" for meaningful communication over the last couple of years. It felt to me like the ultimate in buzzword bingo - everyone was talking about agents, but if you quizzed them everyone seemed to hold a different mental model of what they actually were.
I even started collecting definitions in my agent-definitions tag, including crowdsourcing 211 definitions…
Content attributed to the original author (Simon Willison). Burn 451 curates publicly available writing as a reading index. For removal requests, contact @hawking520.