Employees produce low-quality AI work not because they are lazy but because of a leadership problem. Blanket mandates to use AI, combined with rising workload expectations, create the conditions for 'work slop': AI becomes a survival mechanism rather than a productivity tool.
The most effective way to integrate AI is not through individual training but by empowering teams to redesign their own work processes. This team-level approach fosters agency and ensures AI is used to solve real, shared problems, which is more powerful than simply making individuals 'AI literate'.
The primary issue with low-effort AI-generated work is not its poor quality but that it transfers the cognitive burden of correction and completion to the recipient. Such output masquerades as finished work, yet it creates interpersonal friction and hidden rework, quietly shifting responsibility for the task's success onto someone else.
Effective AI adoption requires more than technical skill; it requires a 'pilot mindset'. This involves cultivating high agency (a sense of ownership and control) and high optimism about the technology's potential. Organizations should offer mindset training alongside tool training to foster curiosity and confident experimentation.
While the time spent fixing AI-generated junk is costly ($9M/year for a 10k-employee firm), the more toxic damage is emotional and interpersonal. Colleagues who send 'work slop' are judged as less competent and trustworthy, directly harming collaboration, engagement, and psychological safety.
Before surveying employees or analyzing output, leaders can diagnose a high risk of 'work slop' with a simple test: is AI use mandated? A mandate-driven strategy creates pressure that makes employees far more likely to produce low-quality, box-ticking AI work.
