Structure Your Codebase for AI Assistants by Keeping Files Under 400 Lines

Human developers may prefer longer files, but AI coding assistants work within limited context windows and process code in smaller chunks. App developer Terry Lynn intentionally keeps his files under 400 lines to reduce the AI's context-window usage, prevent it from getting lost, and improve the speed and accuracy of its code generation.
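The 400-line cap is Lynn's personal threshold rather than a standard, but it is easy to enforce mechanically. Here is a minimal sketch of a CI guard (the script, its name, and the `*.py` filter are illustrative assumptions, not Lynn's actual tooling):

```python
# check_file_length.py -- hypothetical CI guard that fails the build
# when any source file grows past the 400-line threshold above.
import sys
from pathlib import Path

MAX_LINES = 400

def oversized_files(root: str = ".") -> list[tuple[Path, int]]:
    """Return (path, line_count) for files exceeding MAX_LINES."""
    results = []
    for path in Path(root).rglob("*.py"):  # scan only Python files for brevity
        with path.open(encoding="utf-8", errors="ignore") as fh:
            count = sum(1 for _ in fh)
        if count > MAX_LINES:
            results.append((path, count))
    return results

if __name__ == "__main__":
    offenders = oversized_files()
    for path, count in offenders:
        print(f"{path}: {count} lines (limit {MAX_LINES})")
    sys.exit(1 if offenders else 0)  # non-zero exit fails the CI step
```

Many linters can enforce the same limit natively (ESLint's `max-lines` rule, for example), which avoids a custom script entirely.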

Related Insights

When an AI coding assistant goes off track, it can be hard to undo the damage. Developer Terry Lynn mitigates this risk by instructing his AI workflow to create a Git commit before and after each small phase of a task. This leaves a trail of "breadcrumbs," letting him easily revert to a stable state if the AI makes a mistake.
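As a rough sketch of that breadcrumb pattern (the wrapper, its function names, and the commit-message format are assumptions for illustration, not Lynn's actual setup):

```python
# checkpoint.py -- hypothetical wrapper that brackets each phase of AI
# work with Git commits so any single phase can be reverted cleanly.
import subprocess

def commit_all(message: str) -> None:
    """Stage everything and commit with the given message."""
    subprocess.run(["git", "add", "-A"], check=True)
    # `git commit` exits non-zero when the tree is clean, so don't raise.
    subprocess.run(["git", "commit", "-m", message], check=False)

def run_phase(name: str, phase) -> None:
    """Snapshot repository state before and after one phase of work."""
    commit_all(f"checkpoint: before {name}")
    phase()  # e.g. hand this sub-task to the coding assistant
    commit_all(f"checkpoint: after {name}")
```

If a phase goes wrong, resetting to the nearest "before" checkpoint discards only that phase's changes.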

High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids sinking time into forcing AI onto unsuitable tasks.

Instead of using siloed note-taking apps, structure all your knowledge—code, writing, proposals, notes—into a single GitHub monorepo. This creates a unified, context-rich environment that any AI coding assistant can access. This approach avoids vendor lock-in and provides the AI with a comprehensive "second brain" to work from.

Moving PRDs and other product artifacts from Confluence or Notion directly into the codebase's repository gives AI coding assistants persistent, local context. This adjacency means the AI doesn't need external tool access (such as an MCP server) to understand the 'why' behind the code, leading to better suggestions and iterations.
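One possible layout (the paths below are illustrative, not a prescribed structure):

```
repo/
├── src/                       # application code
└── docs/
    └── prd/
        └── checkout-flow.md   # PRD lives beside the code it describes
```

The assistant can then read the PRD with the same file access it already has to the code.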

Long, continuous AI chat threads degrade output quality as the context window fills up, making it harder for the model to recall early details. To maintain high-quality results, treat each discrete feature or task as a new chat, ensuring the agent has a clean, focused context for each job.

To maximize an AI agent's effectiveness, establish foundational software engineering practices like typed languages, linters, and tests. These tools provide the context and feedback loops the AI needs to identify, understand, and correct its own mistakes, making the whole workflow more resilient.
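As a toy illustration of that feedback loop (the function and test below are invented for this sketch, not from the source): a type-annotated function plus a test gives an agent something concrete to run, and a failing check is a signal it can act on.

```python
# pricing.py -- the type hints and ValueError define a contract that
# a type checker (e.g. mypy) can verify on every AI-generated change.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100), rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# test_pricing.py -- run with pytest; a failing assertion is feedback
# the assistant can read and use to correct its own mistake.
def test_apply_discount() -> None:
    assert apply_discount(100.0, 25.0) == 75.0
```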

To get consistent, high-quality results from AI coding assistants, define reusable instructions in dedicated files (e.g., `prd.md`) within your repository. This "agent briefing" file can be referenced in prompts, ensuring all generated assets adhere to a predefined structure and style.
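A briefing file might look like the hypothetical sketch below (the sections and wording are assumptions, not a prescribed format); a prompt can then simply say "follow the conventions in `prd.md`":

```markdown
<!-- prd.md -- hypothetical agent briefing; adapt sections to taste -->
# PRD Conventions
- Open every PRD with Problem, Goals, and Non-Goals sections.
- Write acceptance criteria as testable bullet points.
- Keep each section under ~200 words; link to code rather than pasting it.
```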

To ensure comprehension of AI-generated code, developer Terry Lynn created a "rubber duck" rule in his AI tool. This prompts the AI to explain code sections and even create pop quizzes about specific functions. This turns the development process into an active learning tool, ensuring he deeply understands the code he's shipping.

Instead of fighting for perfect code upfront, accept that AI assistants can generate verbose code. Build a dedicated "refactoring" phase into your process, using AI with specific rules to clean up and restructure the initial output. This allows you to actively manage technical debt created by AI-powered speed.

An emerging power-user pattern, especially among new grads, is to trust AI coding assistants like Codex with entire features, not just small snippets. This "full YOLO mode" approach, while sometimes failing, often "one-shots" complex tasks, forcing a recalibration of how developers should leverage AI for maximum effectiveness.
