A key critique suggests AI is a "fake tool" because it automates tasks that produce little real value, like pointless memos. This criticism inadvertently highlights that much of current corporate knowledge work is itself performative and lacks substance, a problem that predates AI.
The problem with bad AI-generated work ("slop") isn't just poor writing. Subtle inaccuracies and lost context can derail meetings and spark long, energy-draining debates. The resulting cognitive overload makes it harder for teams to make sense of the work, and it ultimately costs more human time than it saves.
While AI tools once gave creators an edge, their widespread availability now means AI output is largely undifferentiated. IBM's AI VP, who grew an audience to 200k followers, now uses AI less. The new edge is spending more time on unique human thinking and using AI only for initial ideation, not final writing.
Using AI to generate content without adding human context simply transfers the intellectual effort to the recipient. This creates rework, confusion, and can damage professional relationships, explaining the low ROI seen in many AI initiatives.
The "generative" label on AI is misleading. Its true power for daily knowledge work lies not in creating artifacts, but in its superhuman ability to read, comprehend, and synthesize vast amounts of information—a far more frequent and fundamental task than writing.
A McKinsey report suggests that if applying AI to a task doesn't improve key metrics like revenue or cost, the work itself may be valueless. This reframes the failure of an AI tool as a successful diagnosis of organizational inefficiency, highlighting "BS work" that should be eliminated.
A critique from a SaaS entrepreneur outside the AI hype bubble suggests that current tools often just accelerate the creation of corporate fluff, like generating a 50-slide deck for a five-minute meeting. This raises questions about whether AI is creating true productivity gains or just more unnecessary work.
Historically, well-structured writing served as a reliable signal that the author had invested time in research and deep thinking. Writer Byrne Hobart notes that because AI can generate coherent text without underlying comprehension, this signal is lost, forcing us to find new, more reliable ways to assess a person's actual knowledge and wisdom.
Research highlights "workslop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading cognitive labor onto them and damaging perceptions of the sender's capability and trustworthiness.
Professionals are using AI to write detailed reports, while their managers use AI to summarize them. This creates a feedback loop where AI generates content for other AIs to consume, with humans acting merely as conduits. This "AI slop" replaces deep thought with inefficient, automated communication.
According to Dropbox's VP of Engineering, the flood of low-quality, AI-generated "workslop" isn't a technology problem but a strategy problem. When leaders push for AI adoption without defining crisp use cases and goals, employees are left to generate generic content that fails to add real value.