When an AI model gives nonsensical responses after a long conversation, its context window is likely full. Instead of trying to correct it, reset the context. For prototypes, fork the design to start a new session. For chats, ask the AI to summarize the conversation, then start a new chat with that summary.
People struggle with AI prompts because the model lacks background on their goals and progress. The solution is "Context Engineering": creating an environment where the AI continuously accumulates user-specific information, materials, and intent, reducing the need for constant prompt tweaking.
When an AI coding assistant gets off track, Tim McLear asks it to generate a summary prompt for another AI to take over. This "resume work" prompt forces the AI to consolidate the context and goal. The summary often reveals where the AI misunderstood the request, letting him course-correct and restart with a cleaner prompt.
When building Spiral, a single large language model trying to both interview the user and write content failed due to "context rot." The solution was a multi-agent system where an "interviewer" agent hands off the full context to a separate "writer" agent, improving performance and reliability.
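The handoff pattern above can be sketched as two agents sharing one transcript. This is a minimal sketch, not Spiral's actual implementation: the `call_llm` function is a hypothetical stand-in for a real chat-API call, stubbed here so the orchestration logic runs standalone.

```python
# Interviewer -> writer handoff. call_llm is a hypothetical stub standing
# in for a real chat-completion API call.

def call_llm(system: str, messages: list) -> str:
    # Stub: a real implementation would send system + messages to a model.
    return "[%s output from %d messages]" % (system.split(":")[0], len(messages))

def interview(user_answers: list) -> list:
    """Interviewer agent: collect questions and answers into one transcript."""
    transcript = []
    for answer in user_answers:
        question = call_llm("Interviewer: ask one follow-up question", transcript)
        transcript.append({"role": "assistant", "content": question})
        transcript.append({"role": "user", "content": answer})
    return transcript

def write_draft(transcript: list) -> str:
    """Writer agent: receives the full transcript and nothing else."""
    return call_llm("Writer: draft the piece from this interview", transcript)

draft = write_draft(interview(["I want a blog post", "Audience: engineers"]))
```

The key design choice is that the writer starts with a clean context containing only the finished interview, so it never inherits the interviewer's accumulated conversational clutter.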
Many AI tools expose the model's reasoning before generating an answer. Reading this internal monologue is a powerful debugging technique. It reveals how the AI is interpreting your instructions, allowing you to quickly identify misunderstandings and improve the clarity of your prompts for better results.
Long, continuous AI chat threads degrade output quality as the context window fills up, making it harder for the model to recall early details. To maintain high-quality results, treat each discrete feature or task as a new chat, ensuring the agent has a clean, focused context for each job.
Instead of manually rereading notes to regain context after a break, instruct a context-aware AI to summarize your own recent progress. This acts as a personalized briefing, dramatically reducing the friction of re-engaging with complex, multi-day projects like coding or writing.
When a conversation with Codex approaches its context window limit, using `/new` erases all history. The `/compact` command is a better alternative. It instructs the LLM to summarize the current conversation into a shorter form, freeing up tokens while retaining essential context for continued work.
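The idea behind `/compact` can be sketched as a function that, when the history nears a token budget, replaces older messages with a model-written summary while keeping the most recent turns verbatim. Everything here is an assumption for illustration: `summarize` is a hypothetical stub for a real LLM call, and tokens are crudely estimated at about four characters each.

```python
# Sketch of /compact-style history compaction. summarize() is a hypothetical
# stub; a real version would ask the model to condense the old messages.

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stub for an LLM summarization call.
    return "Summary of %d earlier messages." % len(messages)

def compact(messages, limit=50, keep_recent=2):
    """If the history nears the limit, fold older messages into a summary."""
    if estimate_tokens(messages) < limit:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

history = [{"role": "user", "content": "x" * 40} for _ in range(6)]
compacted = compact(history)
```

Here six long messages collapse to a one-message summary plus the last two turns, shrinking the token count while preserving recent context.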
Long conversations degrade LLM performance as attention gets clogged with irrelevant details. An expert workflow is to stop, ask the model to summarize the key points of the discussion, and then start a fresh chat with that summary as the initial prompt. This keeps the context clean and the model on track.
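This summarize-and-restart workflow can be sketched as a small helper. The `chat` function and the summarization prompt wording are hypothetical placeholders, with `chat` stubbed so the flow runs without a model.

```python
# Sketch of the summarize-then-restart workflow. chat() is a hypothetical
# stub standing in for a real chat-API call.

SUMMARIZE_PROMPT = (
    "Summarize this conversation: the goal, key decisions, "
    "open questions, and current state. Be concise."
)

def chat(messages):
    # Stub: a real version would send the messages to a model.
    return "Goal, decisions, and state distilled from %d messages." % len(messages)

def restart_with_summary(history):
    """Ask the model to summarize the thread, then seed a fresh one with it."""
    summary = chat(history + [{"role": "user", "content": SUMMARIZE_PROMPT}])
    return [{"role": "user", "content": "Context from a previous session:\n" + summary}]

fresh_history = restart_with_summary([{"role": "user", "content": "long thread..."}])
```

The new thread starts with one dense message instead of hundreds of stale ones, so the model's attention stays on what actually matters.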
When an AI tool fails, a common user mistake is to get stuck in a "doom loop" by repeatedly using negative, low-context prompts like "it's not working." This is counterproductive. A better approach is to use a specific command or prompt that forces the AI to reflect and reset its approach.
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
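The meta-prompting technique can be captured as a simple template that packages the failing prompt, the bad output, and the desired outcome into one repair request. The template wording below is illustrative, not a fixed recipe.

```python
# Build a meta-prompt that asks the model to debug a failing prompt.
# The template text is an illustrative assumption, not a standard formula.

META_TEMPLATE = """This prompt is not producing the output I want.

PROMPT:
{prompt}

WHAT IT PRODUCED:
{actual}

WHAT I WANTED:
{desired}

You have permission to rewrite, add, or delete anything in the prompt.
Return an improved version and briefly explain each change."""

def build_meta_prompt(prompt: str, actual: str, desired: str) -> str:
    return META_TEMPLATE.format(prompt=prompt, actual=actual, desired=desired)

repair_request = build_meta_prompt(
    prompt="Write release notes.",
    actual="A vague one-line summary.",
    desired="Grouped, user-facing bullet points, one section per feature.",
)
```

The explicit grant of permission matters: without it, models tend to make timid, surface-level edits instead of restructuring the instructions.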