Models like Gemini Flash sometimes create temporary utility files (e.g., code analyzers) and then delete them on the assumption they're no longer needed, forcing costly regeneration the next time the same job comes up. To prevent this, explicitly instruct the LLM to save these scripts in a specific directory for future use.
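For example, a standing instruction along these lines can go in the prompt or project rules file (the directory name is just an illustration):

```
When you create a one-off utility script (analyzers, migration helpers,
etc.), save it to tools/scratch/ instead of deleting it, and check that
directory for an existing script before writing a new one.
```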
The all-caps `CLAUDE.md` file, created via the `/init` command, stores project structure and user-defined rules. Unlike temporary in-chat instructions, which get lost or degraded as the conversation continues, this file is referenced in every session, ensuring consistent behavior and enforcing project-wide guardrails.
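For illustration, a few guardrails such a file might hold; the specific rules here are hypothetical, echoing other tips in this list:

```
## Guardrails
- Keep source files under 400 lines; split modules that grow past that.
- Make a Git commit before and after each phase of a task.
- Declare every package you use in package.json.
```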
A practical hack to improve AI agent reliability is to avoid built-in tool-calling functions. LLMs have more training data on writing code than on specific tool-use APIs. Prompting the agent to write and execute the code that calls a tool leverages its core strength and produces better outcomes.
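A minimal sketch of the idea, assuming the OpenAI Python client; the model name and the `get_weather` tool are placeholders, and a real setup would sandbox the generated code rather than `eval` it:

```python
from openai import OpenAI

def get_weather(city: str) -> str:      # stand-in for any local tool
    return f"Sunny in {city}"

client = OpenAI()
prompt = (
    "You can call this Python function: get_weather(city: str) -> str.\n"
    "Reply with ONLY a Python expression that answers the question.\n"
    "Question: What's the weather in Oslo?"
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
code = resp.choices[0].message.content.strip()
# Expose only the tool to the generated code; use a real sandbox in practice.
print(eval(code, {"__builtins__": {}}, {"get_weather": get_weather}))
```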
Human developers may prefer longer files, but AI coding assistants process code in smaller chunks. App developer Terry Lynn intentionally keeps his files small (under 400 lines) to reduce the AI's context window usage, prevent it from getting lost, and improve the speed and accuracy of its code generation.
When an AI coding assistant goes off track, it can be hard to undo the damage. Developer Terry Lynn mitigates this risk by programming his AI workflow to make a Git commit before and after each small phase of a task. This creates a trail of "breadcrumbs," allowing him to easily revert to a stable state if the AI makes a mistake.
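One way to script this breadcrumb pattern; a sketch, not Lynn's actual setup, with `run_phase` and the commit messages as illustrative names:

```python
import subprocess

def checkpoint(message: str) -> None:
    # Stage everything and record a commit, even if nothing changed.
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "--allow-empty", "-m", message], check=True)

def run_phase(name: str, task) -> None:
    # Bracket each phase with commits so a bad run can be rolled back
    # with a single git reset or revert.
    checkpoint(f"checkpoint: before {name}")
    task()
    checkpoint(f"checkpoint: after {name}")
```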
When an AI's context window is nearly full, don't rely on its automatic compaction feature. Instead, proactively instruct the AI to summarize the current project state into a "process notes" file, then clear the context and have it read the summary to avoid losing key details.
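A sketch of the handoff, with `agent.ask()` and `agent.reset()` as hypothetical stand-ins for whatever your tool offers (a summarization prompt followed by a manual context clear works the same way):

```python
from pathlib import Path

NOTES = Path("process_notes.md")

def handoff(agent) -> None:
    # Ask for a durable summary before the window fills up...
    summary = agent.ask(
        "Summarize the current state of this task: goals, decisions made, "
        "files touched, open problems, and immediate next steps."
    )
    NOTES.write_text(summary)   # ...persist it outside the context window...
    agent.reset()               # ...clear the context...
    agent.ask(f"Read these process notes and continue:\n{NOTES.read_text()}")
```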
Don't pass the full, token-heavy output of every tool call back into an agent's message history. Instead, save the raw data to an external system (like a file system or agent state) and only provide the agent with a summary or pointer.
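A minimal sketch of the pattern; the file layout and summary format are arbitrary choices, not a specific framework's API:

```python
import json
import uuid
from pathlib import Path

ARTIFACTS = Path("artifacts")
ARTIFACTS.mkdir(exist_ok=True)

def record_tool_result(raw: dict) -> dict:
    # Persist the full payload outside the conversation...
    ref = ARTIFACTS / f"{uuid.uuid4().hex}.json"
    ref.write_text(json.dumps(raw))
    # ...and hand the model only a pointer plus a cheap summary.
    return {
        "artifact": str(ref),
        "summary": f"{len(json.dumps(raw))} bytes; top-level keys: {sorted(raw)}",
    }
```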
To get consistent, high-quality results from AI coding assistants, define reusable instructions in dedicated files (e.g., `prd.md`) within your repository. This "agent briefing" file can be referenced in prompts, ensuring all generated assets adhere to a predefined structure and style.
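An illustrative excerpt of such a briefing file (the structure shown is hypothetical):

```
# prd.md (agent briefing)
Every feature spec you generate must contain these sections, in order:
Problem, Users, Requirements, Out of Scope.
Write requirements as short, testable statements.
```

Then reference it in prompts: "Draft the spec for X, following prd.md."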
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
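An illustrative template for the meta-prompt; the angle-bracket placeholders are yours to fill in:

```
Here is a prompt I used:
<the failing prompt>

It produced: <what was wrong with the output>
I wanted: <the desired outcome>

Rewrite the prompt so it produces what I want. You have permission to
rewrite, add, or delete anything in it.
```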
LLMs may use packages that happen to be available in a project's environment without declaring them in configuration files like `package.json`. This produces fragile builds that work locally but break on a fresh install. Developers must verify the manifest themselves and instruct the LLM to declare every dependency it introduces.
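A naive check is easy to script: the sketch below compares bare imports under `src/` against `package.json` (dedicated tools such as depcheck do this far more robustly):

```python
import json
import pathlib
import re

manifest = json.loads(pathlib.Path("package.json").read_text())
declared = set(manifest.get("dependencies", {})) | set(
    manifest.get("devDependencies", {})
)
# Match `from 'pkg'` and `require('pkg')`, skipping relative paths.
# Naive: Node built-ins (fs, path, ...) would need an allow-list.
pattern = re.compile(r"""(?:from\s+|require\()\s*['"]([^'"./][^'"]*)['"]""")
for path in pathlib.Path("src").rglob("*.js"):
    for name in pattern.findall(path.read_text()):
        # Reduce deep imports to the package name (handles @scope/name).
        pkg = "/".join(name.split("/")[:2]) if name.startswith("@") else name.split("/")[0]
        if pkg not in declared:
            print(f"{path}: imports '{pkg}' but it is not declared")
```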
For complex, one-time tasks like a code migration, don't just ask AI to write a script. Instead, have it build a disposable tool—a "jig" or "command center"—that visualizes the process and guides you through each step. This provides more control and understanding than a black-box script.
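A toy version of such a jig; the steps and their outputs are invented for illustration:

```python
# Disposable migration "command center": show each step, then wait.
STEPS = [
    ("Inventory call sites", lambda: print("(would list files using the old API)")),
    ("Rewrite imports",      lambda: print("(would show the planned rewrites)")),
    ("Run the test suite",   lambda: print("(would report pass/fail)")),
]

def run_jig() -> None:
    for i, (name, action) in enumerate(STEPS, 1):
        print(f"\n[{i}/{len(STEPS)}] {name}")
        action()  # visualize the step's effect before committing to it
        if input("Proceed? [y/N] ").strip().lower() != "y":
            print("Stopped; inspect the state above and resume when ready.")
            break

if __name__ == "__main__":
    run_jig()
```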