When a coding agent loses context, don't just start over. A power-user technique is to begin a new session and instruct the agent to read the locally stored conversation logs from the previous, failed session to regain context and continue the task.
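A sketch of the kickoff prompt, assuming a CLI agent such as Claude Code that writes session transcripts to a local directory (the path below is an assumption; check where your tool actually keeps its logs):

```
Read the transcript from my previous session at
~/.claude/projects/<project>/<session-id>.jsonl. Reconstruct what we were
doing, note where it went wrong, and continue the task from there.
```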
To prevent an AI agent from repeating mistakes across coding sessions, create `AGENTS.md` files in your codebase. These act as persistent memory, providing context and instructions specific to a folder or to the entire repo. The agent reads these files before working, allowing it to learn from past iterations and improve over time.
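A minimal sketch of such a file; every name and rule here is hypothetical:

```markdown
# AGENTS.md (api/ folder)

- Run `make test-api` before declaring a change done; plain `pytest` skips the integration suite.
- Database migrations live in `db/migrations/`, never alongside models.
- Known trap: the double retry in `client.py` is intentional. Do not "fix" it.
```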
When an AI coding assistant gets off track, Tim McLear asks it to generate a summary prompt for another AI to take over. This "resume work" prompt forces the AI to consolidate its context and goal. The summary often reveals where the AI misunderstood the request, letting him correct course and restart with a cleaner prompt.
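A hedged sketch of that handoff request:

```
Write a prompt for another AI to resume this work. Include the original
goal, what has been tried so far, what is currently broken, and the next
step you were about to take. Reference files by path rather than pasting code.
```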
When an AI's context window is nearly full, don't rely on its automatic compaction feature. Instead, proactively instruct the AI to summarize the current project state into a "process notes" file, then clear the context and have it read the summary to avoid losing key details.
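One way this might look in practice; the filename is illustrative:

```
Before we run low on context: write the current project state to
process-notes.md, covering the goal, decisions made, files touched, and
remaining work.

(clear the context, then)

Read process-notes.md and continue from where we left off.
```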
When an AI model gives nonsensical responses after a long conversation, its context window is likely full. Instead of trying to correct it, reset the context. For prototypes, fork the design to start a new session. For chats, ask the AI to summarize the conversation, then start a new chat with that summary.
Many AI tools expose the model's reasoning before generating an answer. Reading this internal monologue is a powerful debugging technique. It reveals how the AI is interpreting your instructions, allowing you to quickly identify misunderstandings and improve the clarity of your prompts for better results.
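For API users, the same inspection is available programmatically. A minimal sketch using Anthropic's extended-thinking feature; the model id and token budgets are illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Refactor utils.py to remove the circular import."}],
)

# The response interleaves "thinking" blocks (the model's reasoning) with
# "text" blocks (the answer). Reading the reasoning first shows how the
# instruction was interpreted before you trust the answer.
for block in response.content:
    if block.type == "thinking":
        print("[reasoning]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```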
Long, continuous AI chat threads degrade output quality as the context window fills up, making it harder for the model to recall early details. To maintain high-quality results, treat each discrete feature or task as a new chat, ensuring the agent has a clean, focused context for each job.
Instead of manually rereading notes to regain context after a break, instruct a context-aware AI to summarize your own recent progress. This acts as a personalized briefing, dramatically reducing the friction of re-engaging with complex, multi-day projects like coding or writing.
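The briefing request can be something as simple as:

```
I'm back on this project after a few days away. In under 200 words, summarize
what we accomplished last session, any open questions, and the next step we
had planned.
```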
Long-running AI agent conversations degrade in quality as the context window fills. The best engineers combat this with "intentional compaction": they direct the agent to summarize its progress into a clean markdown file, then start a fresh session using that summary as the new, clean input. This is like rebooting the agent's short-term memory.
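A skeleton of what that compaction file might look like; the feature and file names are hypothetical:

```markdown
# Compaction: payment-retry feature

## Goal
Add exponential backoff to the payment client.

## Done
- Backoff implemented in `payments/client.py`; unit tests pass.

## In progress
- Integration test flakes on timeout; the mock server is the prime suspect.

## Next
- Raise the mock server timeout and rerun `make integration`.
```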
Long conversations degrade LLM performance as attention gets clogged with irrelevant details. An expert workflow is to stop, ask the model to summarize the key points of the discussion, and then start a fresh chat with that summary as the initial prompt. This keeps the context clean and the model on track.
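A minimal sketch of that loop against Anthropic's Messages API; the model id and the summarization prompt are assumptions, and any SDK with a chat-completion call would work the same way:

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # illustrative model id

def compact(history: list[dict]) -> list[dict]:
    """Summarize a long conversation and return a fresh, seeded history."""
    summary = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=history + [{
            "role": "user",
            "content": "Summarize this conversation: goal, key decisions, "
                       "constraints, and current status. Be concise.",
        }],
    ).content[0].text

    # New conversation whose only context is the distilled summary.
    return [{"role": "user", "content": f"Context from a previous session:\n{summary}"}]
```

Append new turns to the returned list as usual; the fresh history carries only the distilled summary instead of the full transcript.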
When an agent fails, treat it like an intern. Scrutinize its log of actions to find the specific step where it went wrong (e.g., used the wrong link), then provide a targeted correction. This is far more effective than giving a generic, frustrated re-prompt.
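The correction itself can be short and surgical; the step numbers and files here are hypothetical:

```
Step 3 is where you went wrong: you followed the link in README.md instead of
the one in CONTRIBUTING.md. Steps 1-2 were fine. Redo from step 3 using the
CONTRIBUTING.md link.
```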