
Some engineering teams use AI in a way that produces a high volume of code riddled with mistakes. This forces them to rewrite large portions, sometimes without AI assistance, ultimately slowing them down. The issue is not the tool, but the lack of best practices for its application.

Related Insights

While AI accelerates code generation, it creates significant new chokepoints. The high volume of AI-generated code leads to "pull request fatigue," requiring more human reviewers per change. It also overwhelms automated testing systems, which must run full cycles for every minor AI-driven adjustment, offsetting initial productivity gains.

Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework and confusion, can damage professional relationships, and helps explain the low ROI seen in many AI initiatives.

The primary issue with low-effort AI-generated work is not its poor quality, but how it transfers the cognitive burden of correction and completion to the recipient. Such output masquerades as finished work while creating interpersonal friction and hidden rework, fundamentally shifting responsibility for the task's success.

Simply deploying AI to write code faster doesn't increase end-to-end velocity. It creates a new bottleneck where human engineers are overwhelmed with reviewing a flood of AI-generated code. To truly benefit, companies must also automate verification and validation processes.

A Workday study reveals a critical blind spot in AI productivity metrics. While tools save time, roughly 37% of that saved time is offset by the need for rework—verifying information, correcting errors, and rewriting content. This dramatically reduces the net value and ROI of the technology.
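The arithmetic behind that 37% figure can be made concrete with a minimal back-of-the-envelope sketch. The function name and the input hours below are hypothetical illustrations, not from the study; only the 0.37 rework fraction comes from the text above.

```python
def net_time_saved(gross_saved_hours: float, rework_fraction: float = 0.37) -> float:
    """Net time actually saved after subtracting the rework 'tax'.

    gross_saved_hours: time the AI tool nominally saved (hypothetical input).
    rework_fraction: share of saved time spent verifying, correcting,
    and rewriting AI output (0.37 per the Workday study cited above).
    """
    return gross_saved_hours * (1 - rework_fraction)


# If a tool nominally saves 10 hours a week, only about 6.3 hours remain
# after rework, which is what shrinks the net ROI.
print(net_time_saved(10))
```

The point of the sketch is that ROI calculations based on gross time saved overstate the benefit; any honest metric has to net out the verification and correction work.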

While AI dramatically increases development speed, it's a double-edged sword. Without a solid product foundation, user understanding, and clear principles, teams will simply accelerate the shipment of low-value features. AI amplifies both good and bad practices.

AI coding tools dramatically accelerate development, but this speed amplifies technical debt creation exponentially. A small team can now generate a massive, fragile codebase with inconsistent patterns and sparse documentation, creating maintenance burdens previously seen only in large, legacy organizations.

While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
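The "rework tax" claim above is easy to check with simple arithmetic. The baseline of 100 units below is a hypothetical normalization; the 40% volume increase and the "half of the increase is rework" assumption come from the paragraph above.

```python
# Hypothetical baseline output per sprint, normalized to 100 units.
baseline = 100.0

# Team ships 40% more code with AI assistance.
with_ai = baseline * 1.40  # 140 units

# Half of the *increase* is spent fixing last week's AI-generated output,
# so it is rework rather than net-new value.
rework = (with_ai - baseline) * 0.5  # 20 units

# Effective new output after discounting rework.
effective = with_ai - rework  # 120 units

# Real productivity gain: 20%, not the headline 40%.
real_gain = effective / baseline - 1
print(f"{real_gain:.0%}")
```

Under these assumptions the apparent 40% boost collapses to 20%, which is the gap between headline throughput metrics and delivered value that the paragraph describes.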

A new risk for engineering leaders is becoming a 'vibe coding boss': using AI to set direction but misjudging its output as 95% complete when it's only 5%. This burdens the team with cleaning up a 'big mess of slop' rather than accelerating development.

After achieving broad adoption of agentic coding, the new challenge becomes managing the downsides. Increased code generation leads to lower quality, rushed reviews, and a knowledge gap as team members struggle to keep up with the rapidly changing codebase.