AI tools can generate vast amounts of verbose code on command, making metrics like 'lines of code' easily gameable and meaningless as a measure of real engineering productivity. Rewarding volume introduces complexity and technical debt rather than indicating progress.
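A toy sketch (not from the source) of why raw line counts mislead: both functions below deliver identical behavior, but one "produces" several times as many lines.

```python
# Two equivalent implementations: same behavior, very different line counts.

def total_even_verbose(numbers):
    """Sum the even numbers, padded out across many unnecessary lines."""
    result = 0
    for number in numbers:
        remainder = number % 2
        is_even = remainder == 0
        if is_even:
            current = result
            current = current + number
            result = current
    return result


def total_even(numbers):
    """The same sum in one expression; fewer lines, identical value."""
    return sum(n for n in numbers if n % 2 == 0)


assert total_even_verbose([1, 2, 3, 4]) == total_even([1, 2, 3, 4]) == 6
```

A metric that rewards the first version over the second is measuring keystrokes, not progress.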
Once AI coding agents reach a high performance level, objective benchmarks become less important than a developer's subjective experience. Like a warrior choosing a sword, the best tool is often the one that has the right "feel," writes code in a preferred style, and integrates seamlessly into a human workflow.
The trend of 'vibe coding', casually prompting an AI to generate code without rigor, is producing low-quality, unmaintainable software. The AI engineering community is hitting the limits of this approach and actively searching for a new development paradigm that marries AI's speed with traditional engineering's craft and reliability.
The most significant productivity gains come from applying AI to every stage of development, including research, planning, product marketing, and status updates. Limiting AI to just code generation misses the larger opportunity to automate the entire engineering process.
High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids the sunk cost of forcing AI on unsuitable tasks.
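As a concrete sketch of that split (the functions here are hypothetical stand-ins, not from the source): let the LLM draft the boilerplate and the tests that pin down behavior, and reserve human attention for the one function where the real difficulty lives.

```python
# Hypothetical split: the LLM drafts scaffolding and tests,
# the human writes the high-focus core.

def parse_config(text: str) -> dict:
    """LLM-drafted boilerplate: trivial, easy to review at a glance."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {key.strip(): value.strip() for key, value in pairs}


def reconcile(ours: dict, theirs: dict) -> dict:
    """Human-owned core: the subtle merge logic that deserves full attention."""
    merged = dict(theirs)
    merged.update(ours)  # placeholder policy: our values win on conflict
    return merged


# LLM-drafted tests pin down behavior before the core is finalized.
def test_parse_config():
    assert parse_config("a = 1\nb=2\n# comment") == {"a": "1", "b": "2"}


def test_reconcile_prefers_ours():
    assert reconcile({"a": 1}, {"a": 2, "b": 3}) == {"a": 1, "b": 3}


if __name__ == "__main__":
    test_parse_config()
    test_reconcile_prefers_ours()
    print("scaffold tests pass")
```

The tests are cheap for the model to draft and cheap for a human to verify; the merge policy is where the human's limited focus actually pays off.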
Human intuition is a poor gauge of AI's actual productivity benefits. A study found developers felt significantly sped up by AI coding tools even when objective measurements showed no speed increase. The real value may come from enabling tasks that otherwise wouldn't be attempted, rather than simply accelerating existing workflows.
According to Goodhart's Law, when a measure becomes a target, it ceases to be a good measure. If you incentivize employees on AI-driven metrics like 'emails sent,' they will optimize for the number, not quality, corrupting the data and giving false signals of productivity.
Research highlights 'workslop': AI output that appears polished but lacks human context. It forces coworkers to spend significant time fixing it, effectively offloading cognitive labor onto them and damaging perceptions of the sender's capability and trustworthiness.
Instead of fighting for perfect code upfront, accept that AI assistants can generate verbose code. Build a dedicated "refactoring" phase into your process, using AI with specific rules to clean up and restructure the initial output. This allows you to actively manage technical debt created by AI-powered speed.
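One way to make that phase repeatable is to encode the cleanup rules once and run them as a dedicated pass. A minimal sketch, assuming a generic `llm_complete(prompt)` client; that name is a hypothetical stand-in, not a real library call:

```python
# A rules-driven AI refactoring pass; `llm_complete` is a hypothetical
# stand-in for whatever model client your tooling provides.

REFACTOR_RULES = """\
Refactor the code below without changing behavior:
- collapse duplicated logic into helper functions
- replace manual loops with comprehensions where clearer
- delete dead code and redundant comments
- keep every public function signature unchanged
"""


def refactor_pass(source: str, llm_complete) -> str:
    """Run one cleanup pass over generated code; a human reviews the diff."""
    return llm_complete(prompt=REFACTOR_RULES + "\n" + source)
```

Because the rules live in version control like any other engineering standard, the cleanup pass is reviewable and repeatable rather than ad hoc.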
It's infeasible for humans to manually review thousands of lines of AI-generated code, so review is moving up the abstraction stack: instead of checking syntax, developers will validate high-level plans, two-sentence summaries, and behavioral outcomes in a testing environment.
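A sketch of what validating behavioral outcomes can look like in practice: rather than reading an AI-written function line by line, pin down the outcomes it must satisfy. Here `slugify` is a hypothetical stand-in for a much larger generated module.

```python
def slugify(title: str) -> str:
    # Stand-in for a large body of AI-generated code under review.
    return "-".join(title.lower().split())


# Behavioral checks a reviewer can read in seconds, regardless of how
# many lines the implementation behind them runs to.
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out ") == "spaced-out"
assert slugify("MiXeD Case") == slugify("mixed case")
```

The reviewer signs off on the behavior the checks encode, not on every line that produces it.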
As AI generates more code, the core engineering task evolves from writing to reviewing. Developers will spend significantly more time evaluating AI-generated code for correctness, style, and reliability, fundamentally changing daily workflows and skill requirements.