Instead of a single, all-powerful AGI emerging, the reality of AI is a "polytheistic" ecosystem of many decentralized models, each with different strengths. This framework challenges the notion that there is any single entity to control or fear, and suggests a more complex, competitive landscape instead.
The internet allows for "unfettered conversations," breaking legacy media's control over acceptable discourse. For the institutional left, this is akin to open borders for ideology, a threatening loss of narrative control, just as open physical borders are viewed as threatening by the right.
The rapid pace of AI development means the main "job" being taken is that of the last generation's inferior AI model. A human's role evolves into that of a manager, constantly evaluating and deploying the newest, most capable AI tool for a given task, rather than being replaced by it.
Historically, the "law of the sea" governed ships in international waters. Today, the internet is the new global commons where data packets travel between "ports" (computers). The rules governing this flow are increasingly defined by code and protocol, creating a new digital legal framework.
AI operates probabilistically, making it great for creative and pattern-matching tasks but unreliable for absolute verification. Crypto is deterministic; its outputs are mathematically guaranteed. This makes crypto the perfect antidote to AI-generated fakes, providing a foundation of verifiable truth that AI cannot replicate.
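The determinism claim can be made concrete with a cryptographic hash: the same input always produces the same digest, so a published commitment can be checked with certainty rather than probability. A minimal sketch (the document string and "published" digest here are illustrative, not from the source):

```python
import hashlib

def commitment(data: bytes) -> str:
    """Deterministic: identical bytes always yield the identical digest."""
    return hashlib.sha256(data).hexdigest()

original = b"Quarterly report, final version"
published = commitment(original)  # imagine this digest posted publicly

# Anyone can later verify the exact bytes against the published digest.
assert commitment(b"Quarterly report, final version") == published

# Even a one-byte change is detected with certainty, not probability.
assert commitment(b"Quarterly report, final versioN") != published
```

This is the asymmetry in miniature: an AI model asked the same question twice may answer differently, while a hash function queried twice on the same bytes cannot.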
With AI making content generation easy and verification hard, simply publishing your own message ("going direct") is insufficient. The new standard is to make claims mathematically verifiable ("prove correct") using on-chain data and cryptography to build trust in a low-trust environment.
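A "prove correct" workflow amounts to attaching a verifiable tag to a claim so anyone can check it was not altered. The sketch below uses HMAC from the Python standard library as a stand-in; real on-chain setups use public-key signatures (e.g. Ed25519) so verification needs no shared secret. The key and the statement are hypothetical examples, not from the source:

```python
import hmac
import hashlib

# Hypothetical shared key for demonstration only; on-chain "prove correct"
# schemes use asymmetric keys so verifiers never hold a secret.
KEY = b"demo-key"

def sign(message: bytes) -> str:
    """Produce an authentication tag binding the message to the key."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time; any tampering fails verification."""
    return hmac.compare_digest(sign(message), tag)

statement = b"Reserves as of 2024-01-01: 1000 BTC"
tag = sign(statement)

assert verify(statement, tag)                                    # claim checks out
assert not verify(b"Reserves as of 2024-01-01: 9000 BTC", tag)   # altered claim fails
```

The design point is that trust shifts from the publisher's reputation to the math: a reader who distrusts the source entirely can still confirm the claim was published intact.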
Blindly applying AI to every task results in low-quality, untrustworthy output ("slop"). The optimal approach involves using AI as an accelerator while retaining human oversight for prompting, verification, and critical judgment. Over-reliance on the AI shortcut diminishes quality and trust.
When media outlets collectively push a single narrative, it becomes consensus reality. If the story is later proven false, they retract it in unison. This "school of fish" behavior provides safety in numbers, making it impossible to hold any single journalist or outlet accountable for being wrong.
Faced with economic disruption from tech, legacy media outlets like the NYT pivoted. They sacrificed their position as a trusted arbiter for the broader public, opting for a more stable business model: serving as a "party newsletter" to a loyal, paying subscriber base seeking reinforcement.
Both tech and media are fundamentally about disseminating information. The internet gave tech platforms superior distribution, disrupting media's business model and its role as the primary shaper of public narrative. This created a power struggle over who controls what society sees and thinks.
Coined by Michael Crichton, the "Gell-Mann Amnesia effect" describes how we spot glaring errors in news articles about our own profession, only to flip the page and accept reporting on other subjects as credible. This explains why trust in flawed institutions persists: we compartmentalize our skepticism.
While AI lowers content creation costs (e.g., resumes, emails), it massively increases verification costs. This flood of "AI slop" breaks markets that rely on trust between strangers (recruiting, sales), causing more economic damage from verification overhead than benefit from creation efficiency.
