The primary issue with low-effort AI-generated work is not its poor quality but the way it transfers the cognitive burden of correction and completion to the recipient. The output masquerades as finished work while creating interpersonal friction and hidden rework, fundamentally shifting responsibility for the task's success.
The problem with bad AI-generated work ('slop') isn't just poor writing. Subtle inaccuracies and lost context can derail meetings and spark long, energy-draining debates. The resulting cognitive overload makes it hard for teams to make sense of the material, and ultimately costs more in human time than it saves.
While the time spent fixing AI-generated junk is costly ($9M/year for a 10k-employee firm), the more toxic damage is emotional and interpersonal. Recipients of 'work slop' come to judge the sender as less competent and trustworthy, directly harming collaboration, engagement, and psychological safety.
Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework, confusion, and can damage professional relationships, explaining the low ROI seen in many AI initiatives.
A Workday study reveals a critical blind spot in AI productivity metrics. While tools save time, roughly 37% of that saved time is offset by the need for rework—verifying information, correcting errors, and rewriting content. This dramatically reduces the net value and ROI of the technology.
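The arithmetic behind that finding can be sketched briefly. The 37% figure is the Workday result cited above; the function name and the 10-hour example are illustrative assumptions, not from the study:

```python
def net_time_saved(gross_hours_saved: float, rework_fraction: float = 0.37) -> float:
    """Net time saved after subtracting rework (verifying, correcting, rewriting).

    rework_fraction is the share of gross saved time consumed by rework;
    the default 0.37 reflects the Workday finding cited above.
    """
    return round(gross_hours_saved * (1 - rework_fraction), 2)

# An employee who "saves" 10 hours a week with AI tools nets only:
print(net_time_saved(10))  # → 6.3 hours
```

The point of the sketch is that headline time savings overstate net value: rework scales with usage, so it quietly erodes ROI rather than showing up as a separate line item.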
When AI empowers non-specialists to perform complex tasks (e.g., marketers writing code), it creates a new, hidden workload for experts. These specialists must then spend significant time reviewing, correcting, and guiding the AI-assisted work from their colleagues, creating a new form of operational drag.
AI is increasingly used to produce low-quality outputs like emails and reports, termed "work slop." While quick to create, this content is often so vague or useless that it makes colleagues' jobs harder, increasing overall administrative burden and hindering real progress.
Research highlights "work slop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor and damaging perceptions of the sender's capability and trustworthiness.
While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
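Making that example's arithmetic explicit (the 100-unit baseline and the function name are hypothetical; the 40% and one-half figures are the paragraph's own illustrative numbers, not study data):

```python
def real_gain(baseline_output: float, headline_gain: float, rework_share: float) -> float:
    """Real productivity gain once the 'rework tax' is discounted.

    headline_gain: fractional increase in shipped code (e.g. 0.40 for 40% more).
    rework_share: fraction of that increase that is just fixing earlier AI output.
    """
    increase = baseline_output * headline_gain
    genuine_increase = increase * (1 - rework_share)
    return genuine_increase / baseline_output

# Shipping 40% more code, where half the increase is fixing last week's slop:
print(real_gain(100, 0.40, 0.5))  # → 0.2, i.e. a 20% real gain
```

In other words, a 40% jump in shipped code collapses to a 20% genuine gain under these assumptions, which is the gap between headline and real productivity the paragraph describes.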
A new risk for engineering leaders is becoming a 'vibe coding boss': using AI to set direction while misjudging its output as 95% complete when it is really only 5% done. This burdens the team with cleaning up a 'big mess of slop' rather than accelerating development.
The ease of generating AI summaries is creating low-quality 'slop.' This imposes a hidden productivity cost, as collaborators must waste time clarifying ambiguous or incorrect AI-generated points, derailing work and leading to lengthy, unnecessary corrections.