When asked to modify or rewrite functionality, LLMs often try to preserve compatibility with previous versions, even on greenfield projects where no previous version exists. This defensive behavior leads to needlessly complex code and technical debt. Developers must state explicitly that backward compatibility is not a requirement.
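A minimal TypeScript sketch of the pattern; every name here is hypothetical, invented only to illustrate the kind of shim an LLM may emit unprompted:

```typescript
// Illustrative only: the unprompted compatibility shim an LLM may add
// on a greenfield rewrite. All names here are hypothetical.
interface Config {
  port: number;
}

// The clean implementation that was actually requested.
export function parseConfig(source: string): Config {
  return { port: Number(JSON.parse(source).port) };
}

// Dead weight: a legacy entry point preserved "for compatibility"
// with a version that never shipped.
/** @deprecated use parseConfig instead */
export function parseConfigLegacy(source: string, _oldFormat = true): Config {
  return parseConfig(source);
}
```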
When given ambiguous instructions, LLMs default to the most common technology stack in their training data (e.g., React with Tailwind), even when it conflicts with the project's goals. Developers must provide explicit constraints to override this default.
Prompting a different LLM to review code generated by the first provides a powerful, non-defensive critique. This "second opinion" can rapidly surface architectural issues, bugs, and alternative approaches without the human ego involved in traditional code reviews.
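As a sketch of how such a review step might be wired up with the OpenAI Node SDK (the model name, prompt wording, and file path below are assumptions, not a prescribed setup):

```typescript
// A minimal cross-model review step: feed code written by one model
// to a different model for critique.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function secondOpinion(filePath: string): Promise<string> {
  const code = readFileSync(filePath, "utf8");
  const res = await client.chat.completions.create({
    model: "gpt-4o", // any model other than the one that wrote the code
    messages: [
      {
        role: "system",
        content:
          "You are reviewing code written by another model. Identify " +
          "architectural issues, bugs, and simpler alternatives. Be direct.",
      },
      { role: "user", content: code },
    ],
  });
  return res.choices[0].message.content ?? "";
}

secondOpinion("src/generated.ts").then(console.log); // hypothetical path
```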
LLMs may use packages that happen to be available in a project's environment without declaring them in manifests like `package.json`. The result is fragile builds that work locally but break on fresh installations. Developers must verify the manifest themselves and instruct the LLM to declare every dependency it uses.
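A rough sanity check along these lines compares bare import specifiers against the manifest; this is a sketch only (top-level files, regex matching), and dedicated tools such as depcheck cover far more cases:

```typescript
// Flag imports that are used in src/ but not declared in package.json.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const declared = new Set([
  ...Object.keys(pkg.dependencies ?? {}),
  ...Object.keys(pkg.devDependencies ?? {}),
]);

// Bare specifiers only: skips relative ("./x") and absolute ("/x") paths.
const importRe = /from\s+["']([^./][^"']*)["']/g;

for (const file of readdirSync("src")) {
  if (!file.endsWith(".ts")) continue;
  const source = readFileSync(join("src", file), "utf8");
  for (const match of source.matchAll(importRe)) {
    // Map "@scope/pkg/sub" and "pkg/sub" back to the package name.
    const parts = match[1].split("/");
    const name = match[1].startsWith("@") ? parts.slice(0, 2).join("/") : parts[0];
    if (!declared.has(name) && !name.startsWith("node:")) {
      console.warn(`${file}: "${name}" is imported but not declared`);
    }
  }
}
```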
LLMs can both generate code analysis tools (measuring metrics like cognitive complexity) and act on the results. This creates an objective feedback loop: instruct an LLM to refactor code specifically to improve a quantifiable metric, then rerun the tool to validate the improvement.
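For instance, an LLM can be asked to produce a scorer like the toy one below (a nesting-weighted branch count in the spirit of cognitive complexity, not the real SonarSource definition) and then to refactor a file until the score drops:

```typescript
// Toy metric: each branch point costs 1 plus the current nesting depth,
// so deeply nested logic scores disproportionately higher.
import { readFileSync } from "node:fs";

function complexityScore(source: string): number {
  const branchRe = /\b(if|for|while|case|catch)\b|&&|\|\|/g;
  let depth = 0;
  let score = 0;
  for (const line of source.split("\n")) {
    score += (line.match(branchRe)?.length ?? 0) * (1 + depth);
    // Crude nesting estimate via brace counting.
    depth += (line.match(/\{/g)?.length ?? 0) - (line.match(/\}/g)?.length ?? 0);
    if (depth < 0) depth = 0;
  }
  return score;
}

const file = process.argv[2];
console.log(`${file}: complexity ${complexityScore(readFileSync(file, "utf8"))}`);
```

Run it before and after a refactor (e.g., `npx tsx score.ts src/foo.ts`, file names hypothetical) to make the improvement measurable rather than a matter of taste.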
Models such as Gemini Flash sometimes create temporary utility files (e.g., code analyzers) and then delete them on the assumption they are no longer needed, which forces costly regeneration the next time they are wanted. To prevent this, explicitly instruct the LLM to save such scripts in a dedicated directory for reuse.
