Even with large advertised context windows, LLMs show performance degradation and strange behaviors when overloaded. In a pattern described as "context anxiety," they may prematurely give up on complex tasks, invent nonexistent time constraints, or oversimplify the problem, highlighting the gap between advertised and effective context sizes.
AI models like Claude Code can experience a decline in output quality as their context window fills. It is recommended to start a new session once the context usage exceeds 50% to avoid this degradation, which can manifest as the model 'forgetting' earlier instructions.
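A minimal sketch of that rule of thumb, assuming a 200K-token window and a rough 4-characters-per-token estimate (both assumptions, not Claude Code's actual accounting):

```python
# Flag when estimated context usage crosses a restart threshold.
THRESHOLD = 0.5            # restart once ~50% of the window is used
CONTEXT_WINDOW = 200_000   # assumed window size in tokens

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return len(text) // 4

def should_restart(conversation: list[str]) -> bool:
    used = sum(estimate_tokens(m) for m in conversation)
    return used / CONTEXT_WINDOW > THRESHOLD

history = ["...messages exchanged so far..."]
if should_restart(history):
    print("Context above 50% - consider summarizing and starting a new session.")
```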
Current LLMs are intelligent enough for many tasks but fail because they lack access to complete context—emails, Slack messages, past data. The next step is building products that ingest this real-world context, making it available for the model to act upon.
When using AI for complex analysis like a medical case, providing a detailed, unabridged history is crucial. The host found that when he summarized his son's case history to start a new chat, the model's performance noticeably worsened because it lacked the fine-grained, day-to-day data points for accurate trend analysis.
Even models with million-token context windows suffer from "context rot" when overloaded with information. Performance degrades as the model struggles to find the signal in the noise. Effective context engineering requires precision, packing the window with only the exact data needed.
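One hedged illustration of that precision: rank candidate snippets with a toy relevance score and admit only what fits a small token budget. The scoring heuristic and budget are placeholders, not any particular product's retrieval logic.

```python
def score(snippet: str, query: str) -> int:
    """Toy relevance score: count query terms that appear in the snippet."""
    terms = set(query.lower().split())
    return sum(1 for word in snippet.lower().split() if word in terms)

def pack_context(snippets: list[str], query: str, budget_tokens: int = 2_000) -> list[str]:
    """Keep only the highest-scoring snippets that fit the budget."""
    ranked = sorted(snippets, key=lambda s: score(s, query), reverse=True)
    packed, used = [], 0
    for s in ranked:
        cost = len(s) // 4  # rough token estimate
        if used + cost > budget_tokens:
            break
        packed.append(s)
        used += cost
    return packed
```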
AI struggles with tasks requiring long and wide context, like software engineering. Because the compute cost of attention grows quadratically with context length (doubling the context roughly quadruples the cost), it cannot effectively manage the complex interdependencies of large projects.
Long conversations degrade LLM performance as attention gets clogged with irrelevant details. An expert workflow is to stop, ask the model to summarize the key points of the discussion, and then start a fresh chat with that summary as the initial prompt. This keeps the context clean and the model on track.
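A minimal sketch of that summarize-and-restart workflow using the OpenAI Python client; the model name and summary prompt are illustrative choices, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def restart_with_summary(history: list[dict], model: str = "gpt-4o") -> list[dict]:
    """Ask the model to compress the conversation, then seed a fresh one."""
    summary = client.chat.completions.create(
        model=model,
        messages=history + [{
            "role": "user",
            "content": "Summarize the key decisions, constraints, and open questions "
                       "from this conversation so I can paste them into a new chat.",
        }],
    ).choices[0].message.content

    # Fresh context: only the distilled summary carries over.
    return [{"role": "user", "content": f"Context from a previous session:\n{summary}"}]
```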
Simply having a large context window is insufficient. Models may fail to "see" or recall specific facts embedded deep within the context, a phenomenon exposed by "needle in the haystack" evaluations. Effective reasoning capability across the entire window is a separate, critical factor.
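A toy version of such a probe: bury a single fact at different depths in filler text and check whether the model can retrieve it. `ask_model` stands in for whatever completion call is being evaluated; the needle and filler are made up.

```python
NEEDLE = "The secret launch code is 7-4-2-9."
FILLER = "The committee reviewed the quarterly figures and adjourned without comment. "

def build_haystack(total_sentences: int, depth: float) -> str:
    """Place the needle at a relative depth (0.0 = start, 1.0 = end) of the filler."""
    sentences = [FILLER] * total_sentences
    sentences.insert(int(depth * total_sentences), NEEDLE + " ")
    return "".join(sentences)

def probe(ask_model, total_sentences: int = 5_000) -> dict[float, bool]:
    """Check recall of the needle at several depths within the context."""
    results = {}
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        prompt = build_haystack(total_sentences, depth) + "\nWhat is the secret launch code?"
        results[depth] = "7-4-2-9" in ask_model(prompt)
    return results
```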
Overloading LLMs with excessive context degrades performance, a phenomenon known as 'context rot'. Claude Skills address this by loading context only when relevant to a specific task. This laser-focused approach improves accuracy and avoids the performance degradation seen in broader project-level contexts.
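An illustration of that load-on-demand idea (not Anthropic's actual Skills implementation): skill instructions stay on disk until a task matches their triggers, so unrelated documents never enter the prompt.

```python
from pathlib import Path

# Hypothetical skill registry; the trigger words and file paths are assumptions.
SKILLS = {
    "pdf-report":  {"triggers": ("pdf", "report"),  "doc": Path("skills/pdf_report.md")},
    "spreadsheet": {"triggers": ("xlsx", "budget"), "doc": Path("skills/spreadsheet.md")},
}

def build_prompt(task: str) -> str:
    """Load only the skill docs whose triggers match the task, then append the task."""
    relevant = [
        spec["doc"].read_text()            # read lazily, only on a match
        for spec in SKILLS.values()
        if any(t in task.lower() for t in spec["triggers"])
    ]
    return "\n\n".join(relevant + [task])
```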
The simple "tool calling in a loop" model for agents looks easier than it is. Without context management, token-heavy tool calls accumulate quickly, driving up costs ($1-2 per run), hitting context limits, and causing the performance degradation known as "context rot."
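A sketch of such a loop with one common mitigation, truncating older tool outputs so they stop piling up; the `llm` and `tools` callables are placeholders, not a specific framework's API.

```python
MAX_TOOL_CHARS = 500

def compact(history: list[dict]) -> list[dict]:
    """Keep recent messages intact; clip earlier tool results to a stub."""
    compacted = []
    for i, msg in enumerate(history):
        if msg["role"] == "tool" and i < len(history) - 4:
            msg = {**msg, "content": msg["content"][:MAX_TOOL_CHARS] + " ...[truncated]"}
        compacted.append(msg)
    return compacted

def run_agent(llm, tools, task: str, max_steps: int = 10) -> str:
    """Naive tool-calling loop that prunes the context before every model call."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(compact(history))                 # model call on the pruned context
        if reply.get("tool") is None:                 # no tool requested: we are done
            return reply["content"]
        result = tools[reply["tool"]](reply["args"])  # execute the requested tool
        history.append({"role": "assistant", "content": reply["content"]})
        history.append({"role": "tool", "content": result})
    return "Stopped after max_steps."
```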
Exposing a full API via the Model Context Protocol (MCP) overwhelms an LLM's context window and reasoning. This forces developers to abandon exposing their entire service and instead manually craft a few highly specific tools, limiting the AI's capabilities and defeating the "do anything" vision of agents.
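A hypothetical contrast, with made-up schemas rather than a real MCP server: auto-exposing every endpoint as a tool versus hand-crafting a couple of task-shaped ones.

```python
# Auto-generated "full API" surface: dozens of generic tools competing for attention.
FULL_API_TOOLS = [
    f"crm.{verb}_{obj}"
    for verb in ("get", "list", "create", "update", "delete")
    for obj in ("contact", "deal", "note", "task", "ticket")
]  # 25 tool names the model must reason over

# Hand-curated alternative: two task-shaped tools with tight descriptions.
CURATED_TOOLS = [
    {
        "name": "find_customer",
        "description": "Look up a customer by name or email and return their open deals.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]},
    },
    {
        "name": "log_call_note",
        "description": "Attach a call summary to an existing deal.",
        "parameters": {"type": "object",
                       "properties": {"deal_id": {"type": "string"},
                                      "note": {"type": "string"}},
                       "required": ["deal_id", "note"]},
    },
]
```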