A recent study found that AI assistants actually slowed down programmers working on complex codebases. More importantly, the programmers mistakenly believed the AI was speeding them up. This suggests a general human bias towards overestimating AI's current effectiveness, which could lead to flawed projections about future progress.
While AI accelerates code generation, it creates significant new chokepoints. The high volume of AI-generated code leads to "pull request fatigue," requiring more human reviewers per change. It also overwhelms automated testing systems, which must run full cycles for every minor AI-driven adjustment, offsetting initial productivity gains.
There's a significant gap between AI performance on structured benchmarks and its real-world utility. A randomized controlled trial (RCT) found that open-source software developers were actually slowed down by about 20% when using AI assistants, even as they remained convinced the tools were helping them. This highlights the limitations of current evaluation methods.
Human intuition is a poor gauge of AI's actual productivity benefits. A study found developers felt significantly sped up by AI coding tools even when objective measurements showed no speed increase. The real value may come from enabling tasks that otherwise wouldn't be attempted, rather than simply accelerating existing workflows.
Simply deploying AI to write code faster doesn't increase end-to-end velocity. It creates a new bottleneck where human engineers are overwhelmed with reviewing a flood of AI-generated code. To truly benefit, companies must also automate verification and validation processes.
Research highlights "workslop": AI output that appears polished but lacks human context. It forces coworkers to spend significant time fixing it, effectively offloading cognitive labor onto them and damaging perceptions of the sender's capability and trustworthiness.
While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
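A rough back-of-the-envelope sketch of that arithmetic, using the hypothetical 40%-more-code and half-is-rework figures above (illustrative numbers, not results from the Stanford study):

```python
# "Rework tax" arithmetic with the hypothetical figures above:
# 40% more code shipped, half of the increase spent fixing earlier AI output.
baseline = 100                               # units of code shipped per week before AI
shipped_with_ai = baseline * 1.4             # headline claim: 40% more code shipped
rework = (shipped_with_ai - baseline) / 2    # half the increase is fixing last week's "slop"

genuinely_new = shipped_with_ai - rework     # 120 units of genuinely new work
real_gain = (genuinely_new - baseline) / baseline

print(f"Headline gain: 40%, real gain: {real_gain:.0%}")  # real gain: 20%
```

Under those assumptions, the headline 40% increase in shipped code translates into only a 20% gain in genuinely new work.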
AI coding tools disproportionately amplify the productivity of senior, sophisticated engineers who can effectively guide them and validate their output. For junior developers, these tools can be a liability, producing code they don't understand, which can introduce security bugs or fail code reviews. Success requires experience.
The perceived time-saving benefits of using AI for lesson planning may be misleading. Similar to coders who must fix AI-generated mistakes, educators may spend so much time correcting flawed outputs that the net efficiency gain is zero or even negative, a factor often overlooked in the rush to adopt new tools.
A METR study found expert programmers were less productive with AI tools. The speaker suggests this is because users felt faster even as they got distracted (e.g., by social media) during AI wait times, highlighting a dangerous gap between perceived and actual productivity.
AI tools can generate vast amounts of verbose code on command, making metrics like 'lines of code' easily gameable and meaningless for measuring true engineering productivity. This practice introduces complexity and technical debt rather than indicating progress.
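A toy illustration of why the metric is so easy to game (not drawn from any study): two functionally identical Python functions whose line counts differ several-fold, with the padded version adding nothing but noise.

```python
# Two functionally identical implementations; the verbose one inflates
# "lines of code" several-fold without adding any engineering value.

def sum_evens_concise(numbers):
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_verbose(numbers):
    # Same behavior, padded out the way a code generator (or anyone
    # optimizing for a lines-of-code metric) easily could.
    result = 0
    for index in range(len(numbers)):
        current_value = numbers[index]
        remainder = current_value % 2
        if remainder == 0:
            is_even = True
        else:
            is_even = False
        if is_even:
            result = result + current_value
    return result

assert sum_evens_concise([1, 2, 3, 4]) == sum_evens_verbose([1, 2, 3, 4]) == 6
```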