Products like Sora and current LLMs are not yet sustainable businesses. They function as temporary narratives, or "shims," that attract immense capital for building compute infrastructure. This high-risk game is sustained by a near-religious belief in a future breakthrough, not by the viability of current products.
AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.
The obvious social play for OpenAI would be to embed collaborative features within ChatGPT, building on the utility users already get from it. Instead, the company launched Sora, a separate entertainment app. Prioritizing niche content creation over core product utility is a questionable strategy for building a lasting social network.
The internet's value stems from an economy of unique human creations. AI-generated content, or "slop," replaces this with low-quality, soulless output, breaking the internet's economic engine. This trend now appears in VC pitches, with founders presenting AI-generated ideas they don't truly understand.
OpenAI's new video tool reveals a strategic trade-off: it is extremely restrictive on content moderation (blocking even prompts about a person's appearance) while remaining permissive with copyrighted material (e.g., Nintendo characters). This suggests OpenAI judges brand-safety incidents a greater near-term threat than future copyright battles.
AI video tools like Sora optimize for high production value, but popular internet content often succeeds due to its message and authenticity, not its polish. The assumption that better visuals create better engagement is a risky product bet, as it iterates on an axis that users may not value.
Altman’s prominent role as the face of OpenAI products, despite his 0% ownership stake, highlights a shift in which control over the narrative and access to capital matter more than direct ownership. This “modern mercantilism” prizes influence and power over traditional cap-table percentages.
A core debate in AI is whether LLMs, which at bottom are next-token prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world: without one, they cannot make meaningful, context-aware predictions about future events, and more data alone may not close that gap.
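To make "next-token prediction engine" concrete, here is a minimal sketch of the autoregressive loop; the toy vocabulary and lookup-table "model" are invented for illustration, but the interface is the one a real LLM exposes: context in, probability distribution over the next token out, sampled token fed back in.

```python
import random

# Hypothetical toy "model": next-token probabilities conditioned only on
# the most recent token. A real LLM conditions on the full context window
# with a neural network, but the interface is the same.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "ran": 0.4},
}

def sample_next(token: str) -> str:
    """Sample the next token from the conditional distribution."""
    dist = NEXT_TOKEN_PROBS.get(token, {"<eos>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Autoregressive generation: each sampled token becomes new context."""
    out = prompt.split()
    for _ in range(max_tokens):
        nxt = sample_next(out[-1])
        if nxt == "<eos>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat"
```

Nothing in this loop represents the world the text describes; the model only encodes which token sequences are statistically likely, which is precisely the critics' point about missing world models.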
