Before writing any code for a complex feature or bug fix, delegate the initial discovery phase to an AI. Task it with researching the current state of the codebase to understand existing logic and potential challenges. This front-loads research and leads to a more informed, efficient approach.
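A discovery prompt in that spirit might look like the following (the feature and file details are hypothetical):

```
Research how checkout discounts are currently applied in this codebase.
Do not write or change any code. Summarize: which modules own the logic,
how data flows between them, any edge cases or tech debt you notice, and
what would make a "stacked discounts" feature hard to add.
```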
To get superior results from AI coding agents, treat them like human developers by providing a detailed plan. Creating a Product Requirements Document (PRD) upfront leads to a more focused and accurate MVP, saving significant time on debugging and revisions later on.
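The PRD doesn't need to be elaborate; one possible skeleton (these headings are just a starting point, not a prescribed format):

```
# PRD: <feature name>
## Problem: who hurts today, and how
## Goals / non-goals: what the MVP must do, and what it explicitly won't
## User flows: step-by-step paths through the feature
## Data & constraints: schemas, APIs, performance limits
## Acceptance criteria: testable statements that define "done"
```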
Even though modern AI coding assistants can handle complex, single-shot requests, it's more reliable to build an application in stages. First build the core functionality, then add secondary features, and finally add tertiary elements like download buttons. This iterative approach keeps each request small enough that the AI doesn't get confused and muddle requirements together.
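In practice that can be as simple as three separate prompts instead of one (the feature details here are illustrative):

```
1. "Build a markdown note-taking app: create, edit, and list notes."
2. "Add search and tag filtering to the notes list."
3. "Add polish: a download-as-.md button and keyboard shortcuts."
```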
When using AI development tools, first leverage their "planning" mode. The AI may correctly identify code to change but misinterpret the strategic goal. Correct the AI's plan (e.g., from a global change to a user-specific one) before implementation to avoid rework.
Instead of asking an AI to directly build something, the more effective approach is to instruct it on *how* to solve the problem: gather references, identify best-in-class libraries, and create a framework before implementation. This means working one level of abstraction higher than the code itself.
Use the Claude chat application for deep research on technical architecture and best practices *before* coding. It can research topics for over 10 minutes, providing a well-summarized plan that you can then feed into a dedicated coding tool like Cursor or Claude Code for implementation.
When using AI for complex but solved problems (like user permissions), don't jump straight to code generation. First, use the AI as a research assistant to find the established architectural patterns used by major companies. This ensures you're building on a proven foundation rather than a novel, flawed solution.
To ensure comprehension of AI-generated code, developer Terry Lynn created a "rubber duck" rule in his AI tool. The rule prompts the AI to explain code sections and even create pop quizzes about specific functions, turning development into an active learning process and ensuring he deeply understands the code he ships.
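A rules-file entry in this spirit, e.g. in a `.cursorrules` or `CLAUDE.md` file, might read as follows (this wording is illustrative, not Lynn's actual rule):

```
## Rubber duck rule
After writing or modifying any non-trivial code:
1. Explain what each changed section does and why, in plain language.
2. Ask me 2-3 quiz questions about the functions you just wrote.
3. Wait for my answers and correct any misunderstandings before moving on.
```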
An emerging power-user pattern, especially among new grads, is to trust AI coding assistants like Codex with entire features, not just small snippets. This "full YOLO mode" approach sometimes fails, but it often "one-shots" complex tasks, forcing a recalibration of how much work developers should delegate to AI for maximum effectiveness.
To get AI agents to perform complex tasks in existing code, a three-stage workflow is key. First, have the agent research and objectively document how the codebase works. Second, use that research to create a step-by-step implementation plan. Finally, execute the plan. This structured approach prevents the agent from wasting context on discovery during implementation.
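A minimal sketch of that pipeline, assuming the Claude Code CLI is installed and using its headless `-p`/`--print` mode (the prompts and file names are illustrative):

```python
import subprocess

def run_agent(prompt: str) -> None:
    # Invoke Claude Code headlessly; assumes the `claude` CLI is on PATH.
    subprocess.run(["claude", "-p", prompt], check=True)

# Stage 1: research only -- document how the code works, change nothing.
run_agent(
    "Research how authentication works in this codebase. Do not modify any "
    "code. Write an objective summary of the relevant files, data flow, and "
    "invariants to research.md."
)

# Stage 2: plan from the research, still no implementation.
run_agent(
    "Read research.md. Produce a step-by-step implementation plan for adding "
    "OAuth login, with file-level changes and risks, and save it to plan.md."
)

# Stage 3: execute the plan with a fresh context.
run_agent("Read plan.md and implement it step by step, running tests as you go.")
```

Because each stage is a separate invocation, the implementation run starts with a clean context and reads only the distilled plan rather than re-deriving everything from scratch.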
For complex, one-time tasks like a code migration, don't just ask AI to write a script. Instead, have it build a disposable tool (a "jig" or "command center") that visualizes the process and guides you through each step. This provides more control and understanding than a black-box script.
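For instance, for a rename migration, a throwaway jig might just walk the affected files, show each proposed diff, and wait for a keystroke. Here is a minimal sketch; the rename is a stand-in for whatever the real migration does:

```python
import difflib
import re
from pathlib import Path

OLD, NEW = "fetch_user", "get_user"  # hypothetical rename migration

for path in sorted(Path("src").rglob("*.py")):
    before = path.read_text()
    after = re.sub(rf"\b{OLD}\b", NEW, before)
    if before == after:
        continue  # nothing to migrate in this file

    # Show exactly what would change before touching anything.
    diff = difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile=str(path), tofile=f"{path} (migrated)",
    )
    print("".join(diff))

    answer = input(f"Apply to {path}? [y/N/q] ").strip().lower()
    if answer == "q":
        break  # abort the rest of the migration
    if answer == "y":
        path.write_text(after)
```

Unlike a one-shot script, you see and approve every change, and the tool can be thrown away when the migration is done.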