Employees producing low-quality AI work is not a sign of laziness but a symptom of a leadership problem. The combination of blanket mandates to use AI and rising workload expectations creates a perfect storm in which 'work slop' becomes a survival mechanism rather than a productivity tool.

Related Insights

The problem with bad AI-generated work ('slop') isn't just poor writing. It's that subtle inaccuracies or context loss can derail meetings and create long, energy-wasting debates. This cognitive overload makes it difficult for teams to sense-make and ultimately costs more in human time than it saves.

Business leaders often assume their teams are independently adopting AI. In reality, employees are hesitant to admit they don't know how to use it effectively and are waiting for formal training and a clear strategy. The responsibility falls on leadership to initiate AI education.

To avoid "AI slop"—the proliferation of low-quality AI outputs—Dell's CTO advocates for a disciplined, top-down strategy. Instead of letting tools run wild, they focus on a small number of high-impact use cases with clear business outcomes, ensuring quality and preventing chaos.

The primary issue with low-effort AI-generated work is not its poor quality, but how it transfers the cognitive burden of correction and completion to the recipient. Such work 'masquerades' as finished but creates interpersonal friction and hidden rework, fundamentally shifting responsibility for the task's success onto others.

Companies like Accenture are forcing AI tool adoption through promotion mandates not because the tools lack value, but because employees are caught in a 'time poverty' trap. They lack the dedicated time to learn new technologies that would ultimately save them time, creating a need for top-down corporate pressure to break the cycle.

AI is increasingly used to produce low-quality outputs like emails and reports, termed "work slop." While quick to create, this content is often so vague or useless that it makes colleagues' jobs harder, increasing overall administrative burden and hindering real progress.

Research highlights "work slop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor and damaging perceptions of the sender's capability and trustworthiness.

A new risk for engineering leaders is becoming a 'vibe coding boss': using AI to set direction but misjudging its output as 95% complete when it is really only 5% there. This burdens the team with cleaning up a 'big mess of slop' rather than accelerating development.

Before surveying employees or analyzing output, leaders can diagnose a high risk of 'AI work slop' with a simple test: is AI use mandated? If the organizational strategy is one of mandates, it creates pressure that makes employees far more likely to produce low-quality, box-ticking AI work.

According to Dropbox's VP of Engineering, the flood of low-quality, AI-generated "work slop" isn't a technology problem, but a strategy problem. When leaders push for AI adoption without defining crisp use cases and goals, employees are left to generate generic content that fails to add real value.