While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found that AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase goes to fixing last week's AI-generated "slop," the net gain is closer to 20%, far lower than the headlines suggest.
The problem with bad AI-generated work ('slop') isn't just poor writing. Subtle inaccuracies or missing context can derail meetings and spark long, energy-draining debates. The resulting cognitive overload makes it hard for teams to make sense of the work, and ultimately costs more human time than it saves.
Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework, confusion, and can damage professional relationships, explaining the low ROI seen in many AI initiatives.
Human intuition is a poor gauge of AI's actual productivity benefits. A study found developers felt significantly sped up by AI coding tools even when objective measurements showed no speed increase. The real value may come from enabling tasks that otherwise wouldn't be attempted, rather than simply accelerating existing workflows.
Research highlights "work slop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor and damaging perceptions of the sender's capability and trustworthiness.
While AI development tools can improve backend efficiency by up to 90%, they often create user-interface challenges. AI tends to generate verbose text that takes up too much space and breaks the UI layout, requiring significant time and manual effort to get right.
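One practical mitigation (our sketch, not a technique named in the source) is to enforce a character budget on generated copy before it reaches the layout. The function name and limit below are illustrative:

```python
MAX_LABEL_CHARS = 40  # illustrative budget for a button or menu label

def fit_copy(text: str, limit: int = MAX_LABEL_CHARS) -> str:
    """Normalize AI-generated UI copy and reject anything that would overflow."""
    text = " ".join(text.split())  # collapse stray whitespace and newlines
    if len(text) <= limit:
        return text
    # Trim at a word boundary and surface the overflow for human review
    # instead of silently shipping clipped copy.
    suggestion = text[:limit].rsplit(" ", 1)[0].rstrip()
    raise ValueError(
        f"UI copy is {len(text)} chars (limit {limit}); suggested trim: {suggestion!r}"
    )
```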
Instead of fighting for perfect code upfront, accept that AI assistants can generate verbose code. Build a dedicated "refactoring" phase into your process, using AI with specific rules to clean up and restructure the initial output. This allows you to actively manage technical debt created by AI-powered speed.
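As a contrived illustration of what that cleanup pass targets (this before/after is ours, not from the source), a rule like "prefer comprehensions" can collapse a verbose AI draft without changing behavior:

```python
# Typical verbose first draft from an assistant:
def active_user_names(users):
    names = []
    for user in users:
        if user.get("active") is True:
            name = user.get("name")
            if name is not None:
                names.append(name)
    return names

# The same behavior after the dedicated refactoring pass:
def active_user_names_refactored(users):
    return [u["name"] for u in users
            if u.get("active") is True and u.get("name") is not None]
```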
AI tools can generate vast amounts of verbose code on command, making metrics like 'lines of code' easily gameable and meaningless for measuring true engineering productivity. This practice introduces complexity and technical debt rather than indicating progress.
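A contrived example of how easily the metric is gamed (ours, not from the source): both functions below do the same thing, but a lines-of-code counter scores the padded version several times higher.

```python
def clamp(value, low, high):
    return max(low, min(value, high))

def clamp_padded(value, low, high):
    # Behaviorally identical to clamp(), but inflates a lines-of-code metric.
    result = value
    if result < low:
        result = low
    if result > high:
        result = high
    return result
```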
As AI generates more code, the core engineering task evolves from writing to reviewing. Developers will spend significantly more time evaluating AI-generated code for correctness, style, and reliability, fundamentally changing daily workflows and skill requirements.
Historically, developer tools adapted to a company's codebase. The productivity gains from AI agents are so significant that the dynamic has flipped: for the first time, companies are proactively changing their code, logging, and tooling to be more 'agent-friendly,' rather than the other way around.
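One concrete form this can take (a sketch under our own assumptions; the source doesn't name a specific pattern) is replacing free-form log lines with structured JSON that an agent can parse without guessing at formats:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so an agent can parse logs reliably."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "event": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")  # hypothetical service name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment_declined order_id=A123")
# -> {"level": "INFO", "logger": "checkout", "event": "payment_declined order_id=A123"}
```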
The ease of generating AI summaries is creating low-quality 'slop.' This imposes a hidden productivity cost, as collaborators must waste time clarifying ambiguous or incorrect AI-generated points, derailing work and leading to lengthy, unnecessary corrections.