An AI entrepreneur's viral essay warning about AI's job-destroying capabilities lost some credibility when it was revealed he used AI to help write it. This highlights a central hypocrisy in the AI debate: evangelists and critics alike are leveraging the technology, complicating their own arguments about its ultimate impact.
While AI tools once gave creators an edge, now that the tools are democratized they risk producing undifferentiated output. An IBM AI VP who built a 200k-follower audience now uses AI less. The new edge is spending more time on distinctive human thinking and reserving AI for initial ideation, not final writing.
When founders make glib comments about AI likely ending the world, even in jest, they create genuine fear and opposition among the public. The humor backfires: people facing job automation and rising energy costs question why society is pursuing this technology at all, fueling calls to halt progress.
As Cory Doctorow observes, the immediate risk for workers isn't being replaced by a competent AI, but by an incompetent one. AI only needs to be good enough to convince a manager to fire a human, creating a lose-lose outcome: job loss and declining work quality.
The public AI debate is framed as a false dichotomy between 'hype folks' and 'doomers.' Both camps operate from the premise that AI is or will be supremely powerful. This shared assumption crowds out a more realistic critique: that current AI is a flawed, oversold product that isn't truly intelligent.
The two dominant negative narratives about AI are mutually exclusive: that it's an overhyped bubble, and that it's on the verge of creating a dangerous superintelligence. If AI is a bubble, it isn't supremely powerful; if it's supremely powerful, the enormous economic activity around it is justified. That critics often advance both claims at once exposes the ideological roots of the doomer movement.
AI enables rapid book creation by generating chapters and citing sources. This creates a new problem: authors can produce works on complex topics without ever reading the source material or developing deep understanding. The resulting "AI slop" presents a veneer of expertise without the genuine, ingested knowledge a human author would traditionally bring.
Public discourse on AI's employment impact often uses the Motte-and-Bailey fallacy. Critics make a bold, refutable claim that AI is causing job losses now (the Bailey). When challenged with data, they retreat to the safer, unfalsifiable position that it will cause job losses in the future (the Motte).
AI has faced political backlash from day one, unlike social media, which enjoyed a long "honeymoon" period. This backlash is largely self-inflicted: industry leaders like Sam Altman have used apocalyptic "it might kill everyone" rhetoric as a marketing tool, creating widespread fear before the benefits are fully realized.
Historically, well-structured writing served as a reliable signal that the author had invested time in research and deep thinking. The writer Byrne Hobart notes that because AI can generate coherent text without underlying comprehension, this signal is lost, forcing us to find new, more reliable ways to assess a person's actual knowledge and wisdom.
AI leaders often use dystopian language about job loss and world-ending scenarios (“summoning the demon”). While effective for fundraising from investors who are "long demon," this messaging is driving a public backlash by framing AI as an existential threat rather than an empowering tool for humanity.