Context Engineering Our Way to Long-Horizon Agents

Blog · Harrison Chase, Sonya Huang, Pat Grady · Jun 14, 2026

AI Summary

Sequoia's January 2026 interview with Chase, in which he lays out the LangChain thesis: agents that work over long horizons need context engineering, not bigger models. He names memory as the primary moat — email agents that learn your writing style over months, coding agents that remember which approaches already failed, support agents that don't ask the same clarifying question twice — and endorses sleep-time compute as the right way to compound that learning across sessions. He also argues that traces, not unit tests, are the new source of truth for evaluating agents. Pair with 'Your harness, your memory' to get both the vision and the political claim. Long, worth the full listen.

Original excerpt

Harrison Chase, cofounder of LangChain and pioneer of AI agent frameworks, discusses the emergence of long-horizon agents that can work autonomously for extended periods. Harrison breaks down the evolution from early scaffolding approaches to today’s harness-based architectures, explaining why context engineering – not just better models – has become fundamental to agent development. He shares insights on why coding agents are leading the way and how building agents differs from traditional software development.

_Insights from this episode:_ Long-horizon agents are finally working, especially in coding and research: Harrison explains that agents running LLMs in a loop—once mostly a demo—are…

18 more articles in this vault.

Import the full Agent Memory Patterns vault to Burn 451 and build your own knowledge base.

Content attributed to the original author (Harrison Chase, Sonya Huang, Pat Grady). Burn 451 curates publicly available writing as a reading index. For removal requests, contact @hawking520.