When using AI development tools, first leverage their "planning" mode. The AI may correctly identify code to change but misinterpret the strategic goal. Correct the AI's plan (e.g., from a global change to a user-specific one) before implementation to avoid rework.
To get superior results from AI coding agents, treat them like human developers by providing a detailed plan. Creating a Product Requirements Document (PRD) upfront leads to a more focused and accurate MVP, saving significant time on debugging and revisions later on.
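As a rough illustration, the sketch below shows what such a PRD might look like for an invented to-do-list project, kept as a Python string so it can ride along as the first prompt; every detail is a placeholder, and the structure matters more than the wording.

```python
# Minimal sketch: a PRD for a hypothetical to-do app, kept as a Python string so it
# can be prepended to the first prompt sent to a coding agent. Every detail below
# is an invented placeholder; the point is the structure, not the content.
PRD = """\
# Daily To-Do (personal project)
Goal: a single-page web app for tracking my daily tasks.
Users: just me, on desktop and phone.
Must have:
- add, edit, delete, and complete tasks
- due dates with overdue highlighting
- local persistence (no backend)
Nice to have: dark mode.
Out of scope: accounts, sync, notifications.
"""

# The PRD rides along with the actual request, so the agent builds toward the MVP
# instead of guessing at scope.
first_prompt = PRD + "\nUsing this PRD, propose a file structure and build the MVP."
print(first_prompt)
```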
AI development tools can be "resistant," ignoring change requests. A powerful technique is to prompt the AI to consider multiple options and ask for your choice before building. This prevents it from making incorrect unilateral decisions, such as applying a navigation change to the entire site by mistake.
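A hedged example of such an options-first prompt, reusing the navigation scenario above; the exact wording is illustrative, not a fixed formula.

```python
# Illustrative "options first" prompt: the agent must lay out alternatives and stop
# for a decision instead of unilaterally rewriting the whole site's navigation.
OPTIONS_FIRST_PROMPT = """\
I want to change how navigation works on the dashboard page only.
Before writing any code:
1. List 2-3 ways this could be implemented.
2. For each, note the trade-offs and exactly which files or pages it would touch.
3. Wait for me to pick one. Do not start building until I choose.
"""
```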
LLMs often get stuck or pursue incorrect paths on complex tasks. "Plan mode" forces Claude Code to present its step-by-step checklist for your approval before it starts editing files. This allows you to correct its logic and assumptions upfront, ensuring the final output aligns with your intent and saving time.
When using "vibe-coding" tools, feed changes one at a time: first typography, then a header image, then a specific feature. A single, long list of desired changes can confuse the AI and lead to poor results. This step-by-step process of iteration and refinement yields a better final product.
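A minimal sketch of that loop, assuming the Anthropic Python SDK, an example model id, and an invented set of landing-page changes; in a chat-based tool the same technique is simply sending each change as its own message and reviewing the result before the next.

```python
# Minimal sketch of one-change-at-a-time iteration via the Anthropic Python SDK.
# The change list, file path, and model id are invented examples.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # example id; substitute the model you actually use

page = open("index.html").read()  # assumes the page being iterated on lives here
changes = [
    f"Here is my landing page:\n{page}\n\nFirst change: switch to a serif heading font.",
    "Next change: add a full-width header image above the hero section.",
    "Next change: add an email signup form to the footer.",
]

history = []
for change in changes:
    history.append({"role": "user", "content": change})
    reply = client.messages.create(model=MODEL, max_tokens=4000, messages=history)
    answer = reply.content[0].text
    history.append({"role": "assistant", "content": answer})
    print(f"--- {change[:60]}...\n{answer[:300]}\n")  # review before sending the next change
```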
Achieve higher-quality results by using an AI to first generate an outline or plan, then refining that plan with follow-up prompts before asking for the final execution. This course-corrects early and avoids wasting effort on flawed one-shot outputs.
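A minimal sketch of outline-refine-execute with a human correction step in the middle, again assuming the Anthropic Python SDK; the task, prompts, and model id are invented, and the same back-and-forth works in any chat interface.

```python
# Minimal sketch of outline -> refine -> execute with a human correction in the middle.
# Prompts, task, and model id are illustrative placeholders.
from anthropic import Anthropic

client = Anthropic()
MODEL = "claude-sonnet-4-20250514"  # example id

def ask(history, prompt):
    """Append a user turn, get the assistant's reply, and keep both in the history."""
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(model=MODEL, max_tokens=4000, messages=history)
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

history = []
outline = ask(history, "Outline a script that backs up my ~/notes folder to S3. "
                       "Give the plan only, no code yet.")
print(outline)

feedback = input("Corrections to the outline (blank to accept): ")  # course-correct here
if feedback:
    print(ask(history, f"Revise the outline with these corrections: {feedback}"))

print(ask(history, "The outline is approved. Now write the full script."))
```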
Even for a simple personal project, starting with a Product Requirements Document (PRD) dramatically improves the output from AI code generation tools. Taking a few minutes to outline goals and features provides the necessary context for the AI to produce more accurate and relevant code, saving time on rework.
As AI writes most of the code, the highest-leverage human activity will shift from reviewing pull requests to reviewing the AI's research and implementation plans. Collaborating on the plan gives a narrative walkthrough of the upcoming changes, allowing high-level course correction before hundreds of lines of bad code are ever generated.
Borrowing from classic management theory, the most effective way to use AI agents is to fix problems at the earliest, lowest-value stage of the process. This means rigorously reviewing the agent's proposed plan *before* it writes any code, preventing costly rework later on.
A powerful but counterintuitive AI development pattern is to give a model a vague goal and let it attempt a full implementation. This "throwaway" draft, with its mistakes and unexpected choices, provides crucial insights for writing a much more accurate plan for the final version.
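A sketch of that two-pass pattern, assuming the Anthropic Python SDK; the goal and model id are invented, and the first draft is never shipped, it only informs the plan.

```python
# Sketch of the "throwaway draft" pattern: one vague-goal attempt, then a plan written
# with the benefit of seeing where that attempt went wrong. Task and model id are invented.
from anthropic import Anthropic

client = Anthropic()
MODEL = "claude-sonnet-4-20250514"  # example id

vague_goal = "Build a CLI tool that deduplicates photos in a folder."

# Pass 1: let the model run with the vague goal; this output is disposable.
draft = client.messages.create(
    model=MODEL, max_tokens=4000,
    messages=[{"role": "user", "content": vague_goal}],
).content[0].text

# Pass 2: mine the draft's mistakes and surprising choices to write the real plan.
plan = client.messages.create(
    model=MODEL, max_tokens=2000,
    messages=[{"role": "user", "content":
        f"Goal: {vague_goal}\n\nHere is a first-draft implementation:\n{draft}\n\n"
        "List what this draft gets wrong or assumes without asking, then write a "
        "detailed implementation plan that fixes those issues. Plan only, no code."}],
).content[0].text
print(plan)
```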
To get AI agents to perform complex tasks in existing code, a three-stage workflow is key. First, have the agent research and objectively document how the codebase works. Second, use that research to create a step-by-step implementation plan. Finally, execute the plan. This structured approach prevents the agent from wasting context on discovery during implementation.
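One way to sketch those three stages is as separate API calls, each starting a fresh context and consuming only the previous stage's output; the example below assumes the Anthropic Python SDK, an invented feature request, and a codebase small enough to inline a handful of files directly.

```python
# Sketch of the research -> plan -> execute pipeline as three separate calls.
# Each stage consumes only the previous stage's document, so implementation is not
# spent rediscovering the codebase. Paths, the feature, and the model id are invented.
from pathlib import Path
from anthropic import Anthropic

client = Anthropic()
MODEL = "claude-sonnet-4-20250514"  # example id

def run(prompt: str) -> str:
    reply = client.messages.create(model=MODEL, max_tokens=8000,
                                   messages=[{"role": "user", "content": prompt}])
    return reply.content[0].text

# Stage 1: research. Document how the relevant code works today, without proposing changes.
code = "\n\n".join(f"## {p}\n{p.read_text()}" for p in Path("src").glob("*.py"))
research = run("Describe, factually and without suggesting changes, how authentication "
               f"works in this codebase:\n{code}")

# Stage 2: plan. Turn the research into concrete, ordered steps.
feature = "Add optional two-factor authentication at login."
plan = run(f"Research notes:\n{research}\n\nWrite a step-by-step implementation plan "
           f"for this change: {feature}. List the exact files and functions to modify.")

# Stage 3: execute. Implement strictly from the plan.
patch = run(f"Plan:\n{plan}\n\nRelevant code:\n{code}\n\nImplement the plan. "
            "Output the full contents of every file you change.")
print(patch)
```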