Historically, well-structured writing served as a reliable signal that the author had invested time in research and deep thinking. Writer Byrne Hobart notes that because AI can generate coherent text without underlying comprehension, this signal is lost. This forces us to find new, more reliable ways to assess a person's actual knowledge and wisdom.
As AI makes answers available with little effort, society may split into two groups. Hobart suggests a "cognitive underclass" will opt for the ease of AI-generated solutions, while a "cognitive overclass" will deliberately engage in the now-optional hard work of critical thinking, creating a new societal divide.
The "generative" label on AI is misleading. Its true power for daily knowledge work lies not in creating artifacts, but in its superhuman ability to read, comprehend, and synthesize vast amounts of information—a far more frequent and fundamental task than writing.
Using generative AI to produce work bypasses the reflection and effort required to build strong knowledge networks. This outsourcing of thinking leads to poor retention and a diminished ability to evaluate the quality of AI-generated output, echoing long-standing findings on how calculator use eroded mental-arithmetic skills.
AI enables rapid book creation by generating chapters and citing sources. This creates a new problem: authors can produce works on complex topics without ever reading the source material or developing deep understanding. This "AI slop" projects a veneer of expertise while its human creator never ingests or internalizes the underlying knowledge.
In the age of AI, the new standard for value is the "GPT Test." If a person's public statements, writing, or ideas could have been generated by a large language model, they will fail to stand out. This places an immense premium on true originality, deep insight, and an authentic voice—the very things AI struggles to replicate.
The true danger of LLMs in the workplace isn't just sloppy output, but the erosion of deep thinking. The arduous process of writing forces structured, first-principles reasoning. By making it easy to generate plausible text from bullet points, LLMs allow users to bypass this critical thinking process, leading to shallower insights.
While AI can accelerate tasks like writing, the real learning happens during the creative process itself. By outsourcing the "doing" to AI, we risk losing the ability to think critically and synthesize information. Emerging research suggests our brains physically remap around this offloading, dulling our ability to think on our feet.
Research highlights "workslop": AI output that appears polished but lacks human context. This forces coworkers to spend significant time fixing it, effectively offloading the cognitive labor onto recipients and damaging their perception of the sender's capability and trustworthiness.
Professionals are using AI to write detailed reports, while their managers use AI to summarize them. This creates a feedback loop where AI generates content for other AIs to consume, with humans acting merely as conduits. This "AI slop" replaces deep thought with inefficient, automated communication.
Writing is not just the documentation of pre-formed thoughts; it is the process of forming them. By wrestling with arguments on the page, you clarify your own thinking. Outsourcing this "hard part" to AI means you skip the essential step of developing a unique, well-reasoned perspective.