Superhuman's CEO repeatedly called the 'Expert Review' feature "not good" and misaligned with strategy, while simultaneously maintaining that the legal claims against it are "without merit." This dual-track defense lets a company manage public perception and appease critics while preserving its legal position in court.
Faced with backlash for using names without consent, Superhuman's first move was an email-based opt-out. This follows a common tech crisis playbook: address the immediate PR problem with a tactical fix, deferring the harder strategic decision to kill the feature entirely, which only came later.
To create a high-quality AI agent of themselves, experts can't just rely on their public work. They must manually document their nuanced style and judgment in a system of prompts and triggers. This shifts the burden of creating a good AI product from the platform to the creator, asking them to codify their intuition.
The CEO described a canonical decision-making process designed to solicit feedback and avoid groupthink. Yet, a feature using names without permission—a clear ethical and legal risk—was launched by a small team. This indicates a failure to apply the company's own governance framework to product development.
Superhuman's CEO suggests creators must build new AI agent-based business models on his platform. This frames the solution as a new opportunity, but it forces creators to perform new labor to reclaim value the AI industry first extracted from their entire body of work without permission or compensation.
The Superhuman CEO apologized for the controversial feature, but framed the failure around its poor user experience, low usage, and bad outputs. This tactic subtly shifts the focus away from the core ethical problem—using likenesses without consent—and reframes it as a more forgivable product mistake.
As AI devalues digital content ('bits') by making it infinitely reproducible, creators are increasingly forced to monetize through physical goods ('atoms') like merchandise or food products. Unlike most industries that digitize to improve margins, the creator economy is de-digitizing to survive, a rare and telling economic shift.
Past disruptive technologies like file-sharing and ride-sharing overcame legal and ethical objections because their utility to the public was immense. AI currently polls worse than ICE because it is perceived as purely extractive, not yet offering the average person a clear, indispensable benefit that outweighs its social costs.
The threat of AI models replicating SaaS features is real. Superhuman's defense isn't a superior core technology but a platform strategy. The bet is that users won't build their own tools if the platform offers a powerful network effect of pre-built, integrated agents that work everywhere, creating a defensible ecosystem.
The CEO repeatedly cites YouTube's Content ID—a system for post-infringement monetization—as the model for AI platforms. This analogy breaks down because while a copied video can be claimed or removed, AI-generated impersonations can cause immediate and lasting reputational damage that cannot be clawed back.
