Overly structured, workflow-based systems that work with today's models will become bottlenecks tomorrow. Engineers must be prepared to shed abstractions and rebuild simpler, more general systems to capture the gains from exponentially improving models.
While prompt engineering focuses on crafting the human message, context engineering is a broader discipline that also manages the flow of information from a potentially large number of tool calls, a key challenge in building effective agents.
Don't pass the full, token-heavy output of every tool call back into an agent's message history. Instead, save the raw data to an external system (like a file system or agent state) and only provide the agent with a summary or pointer.
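A minimal sketch of that offloading pattern, assuming a local file system as the external store; the `offload_tool_output` helper and the `artifacts/` directory are illustrative, and a real agent might write to agent state or object storage instead:

```python
import json
import uuid
from pathlib import Path

ARTIFACT_DIR = Path("artifacts")  # hypothetical external store for raw tool output


def offload_tool_output(tool_name: str, raw_output: str, preview_chars: int = 500) -> dict:
    """Write the full tool output to disk; return only a pointer and a short preview."""
    ARTIFACT_DIR.mkdir(exist_ok=True)
    artifact_path = ARTIFACT_DIR / f"{tool_name}-{uuid.uuid4().hex}.txt"
    artifact_path.write_text(raw_output)
    return {
        "tool": tool_name,
        "artifact": str(artifact_path),         # pointer the agent can hand to a file-reading tool later
        "preview": raw_output[:preview_chars],  # small excerpt that stays in context
        "total_chars": len(raw_output),
    }


# A scrape result like this would otherwise dump thousands of tokens into the history.
message_for_agent = offload_tool_output("web_scrape", "<html>... very long page ...</html>")
print(json.dumps(message_for_agent, indent=2))
```

The agent keeps only the pointer and preview in its history and can pass the artifact path to a file-reading tool later if it actually needs the raw data.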
Multi-agent systems work well for easily parallelizable, "read-only" tasks like research, where sub-agents gather context independently. They are much trickier for "write" tasks like coding, where conflicting decisions between agents create integration problems.
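The read-only fan-out is easy to sketch precisely because sub-agents never write to shared artifacts; the `run_subagent` stub below stands in for a real sub-agent or LLM call:

```python
from concurrent.futures import ThreadPoolExecutor


def run_subagent(question: str) -> str:
    # Placeholder for a real sub-agent: each one gathers context independently
    # and returns text, so there is nothing to merge except its findings.
    return f"findings for: {question}"


def research(topic: str) -> str:
    sub_questions = [
        f"What is the current state of {topic}?",
        f"Who are the main players in {topic}?",
        f"What are the open problems in {topic}?",
    ]
    # Read-only fan-out: sub-questions run in parallel without conflicts.
    with ThreadPoolExecutor(max_workers=len(sub_questions)) as pool:
        findings = list(pool.map(run_subagent, sub_questions))
    # A lead agent (or a final LLM call) would synthesize the findings here.
    return "\n\n".join(findings)


if __name__ == "__main__":
    print(research("context engineering"))
```

The same fan-out applied to a "write" task like coding breaks down because each sub-agent's output constrains the others, which is exactly the integration problem described above.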
The simple "tool calling in a loop" model for agents is deceptive. Without managing context, token-heavy tool calls quickly accumulate, leading to high costs ($1-2 per run), hitting context limits, and performance degradation known as "context rot."
Criticism of AI frameworks deserves nuance. High-level abstractions like `import agent` can hide complexity and make systems hard to adapt. Low-level orchestration frameworks that provide building blocks like nodes and edges, however, offer real utility (e.g., checkpointing) without sacrificing transparency.
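A toy orchestrator shows what such building blocks buy you; the `Graph`, `add_node`, and `add_edge` names are illustrative rather than any particular framework's API, and checkpointing here is just a JSON file:

```python
import json
from pathlib import Path


class Graph:
    """Tiny node/edge runner with resume-from-checkpoint."""

    def __init__(self, checkpoint_path="checkpoint.json"):
        self.nodes, self.edges = {}, {}
        self.checkpoint_path = Path(checkpoint_path)

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def run(self, start, state):
        # Resume from the last checkpoint if a previous run was interrupted.
        if self.checkpoint_path.exists():
            saved = json.loads(self.checkpoint_path.read_text())
            start, state = saved["node"], saved["state"]
        node = start
        while node is not None:
            self.checkpoint_path.write_text(json.dumps({"node": node, "state": state}))
            state = self.nodes[node](state)   # each node is a plain, inspectable function
            node = self.edges.get(node)       # explicit edge; no hidden control flow
        self.checkpoint_path.unlink(missing_ok=True)
        return state


graph = Graph()
graph.add_node("draft", lambda s: {**s, "draft": f"draft about {s['topic']}"})
graph.add_node("review", lambda s: {**s, "review": "looks good"})
graph.add_edge("draft", "review")
print(graph.run("draft", {"topic": "context engineering"}))
```

Because every node is an ordinary function and every edge is explicit, the system stays transparent while still getting durability features like checkpointing essentially for free.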
There's a tension in agent design: should you prune failures from the message history? Pruning prevents a "poisoned" context where hallucinations persist, but keeping failures allows the agent to see the error and correct its approach. For tool call errors, the speaker prefers keeping them in.
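One way to implement that preference is to catch tool failures and append the error text as a tool message instead of discarding the turn; the tool and its guard below are made up for illustration:

```python
def execute_tool(name, args):
    # Illustrative tool that rejects certain inputs.
    if name == "read_file" and not args.get("path", "").endswith(".md"):
        raise ValueError("only .md files are readable")
    return f"contents of {args['path']}"


def run_tool_call(messages, call):
    try:
        result = execute_tool(call["name"], call["args"])
    except Exception as exc:
        # Keep the failure visible: the model's next turn can read the error and
        # retry with corrected arguments, at the cost of a few extra tokens in context.
        result = f"ERROR: {call['name']} failed: {exc}"
    messages.append({"role": "tool", "name": call["name"], "content": result})
    return messages


history = [{"role": "user", "content": "Summarize notes.txt"}]
run_tool_call(history, {"name": "read_file", "args": {"path": "notes.txt"}})
print(history[-1]["content"])  # -> ERROR: read_file failed: only .md files are readable
```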
While complex RAG pipelines with vector stores are popular, leading code agents like Anthropic's Claude Code demonstrate that simple "agentic retrieval" using basic file tools can be superior. Providing an agent with a manifest file (like `llms.txt`) and a tool to fetch files can outperform pre-indexed semantic search.
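A sketch of that agentic-retrieval setup, assuming a `docs/` directory and an `llms.txt` manifest (both illustrative): the agent gets exactly two tools, one to read the manifest and one to fetch a named file, and decides for itself what to pull into context:

```python
from pathlib import Path

DOCS_DIR = Path("docs")  # hypothetical documentation tree the agent is allowed to read


def read_manifest() -> str:
    """Tool 1: return the llms.txt manifest so the model can decide what to fetch."""
    return (DOCS_DIR / "llms.txt").read_text()


def fetch_file(relative_path: str) -> str:
    """Tool 2: return the contents of one file the model asked for."""
    target = (DOCS_DIR / relative_path).resolve()
    if DOCS_DIR.resolve() not in target.parents:  # keep the agent inside the docs tree
        raise ValueError("path escapes the docs directory")
    return target.read_text()


# Tiny fixture so the sketch runs end to end.
DOCS_DIR.mkdir(exist_ok=True)
(DOCS_DIR / "llms.txt").write_text("quickstart.md: how to install and run\napi.md: endpoint reference\n")
(DOCS_DIR / "api.md").write_text("# API\nGET /v1/agents ...\n")

print(read_manifest())       # the agent reads this first...
print(fetch_file("api.md"))  # ...then fetches only the file it judges relevant
```

No embeddings, chunking, or index maintenance are involved; the model's own judgment over the manifest replaces the retrieval pipeline.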
