How OpenAI Builds for 800 Million Weekly Users: Model Specialization and Fine-Tuning

a16z Podcast · Nov 28, 2025

OpenAI's Sherwin Wu discusses the shift from a single AGI model to a specialized portfolio, the API vs. app paradox, and pricing intelligence.

OpenAI Intentionally Powers Competitors to Win the Platform War

OpenAI embraces the 'platform paradox' by selling API access to startups that compete directly with its own apps like ChatGPT. The strategy is to foster a broad ecosystem, believing that enabling competitors is necessary to avoid losing the platform race entirely.

'Context Engineering' Has Replaced Simple Prompt Engineering in AI Development

The early focus on crafting the perfect prompt is obsolete. Sophisticated AI interaction is now about 'context engineering': architecting the entire environment by providing models with the right tools, data, and retrieval mechanisms to guide their reasoning process effectively.
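
A minimal sketch of what that shift looks like in practice, assuming the OpenAI Python SDK; the retrieve_docs helper, the lookup_order tool, and the model name are illustrative placeholders, not anything described in the episode:

```python
# Illustrative only: context engineering means assembling the model's working
# environment (retrieved data, tool definitions, explicit instructions) rather
# than polishing a single prompt string.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_docs(query: str) -> list[str]:
    # Stand-in for whatever retrieval layer you operate (vector store, search API, ...).
    return ["<doc snippet 1 relevant to the query>", "<doc snippet 2>"]


def answer(query: str) -> str:
    context = "\n\n".join(retrieve_docs(query))
    tools = [{
        "type": "function",
        "function": {
            "name": "lookup_order",  # a tool the model can call instead of guessing
            "description": "Fetch an order record by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context; call a tool when you need live data."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
        tools=tools,
    )
    # A production loop would also handle any tool calls returned here.
    return response.choices[0].message.content or ""
```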

OpenAI Abandons 'One Model' Dream for a Portfolio of Specialized Models

Initially, even OpenAI believed a single, ultimate 'model to rule them all' would emerge. This thinking has completely changed to favor a proliferation of specialized models, creating a healthier, less winner-take-all ecosystem where different models serve different needs.

LLMs Resist Disintermediation Because Users Bond with Specific Models

Unlike traditional APIs, LLMs are hard to abstract away. Users develop a preference for a specific model's 'personality' and performance (e.g., GPT-4 vs. GPT-3.5), making it difficult for applications to swap out the underlying model without users noticing and pushing back.

Quora's Early Engineering Team Seeded the Modern AI Startup Ecosystem

Quora's initial engineering team was a legendary concentration of talent that later dispersed to found or lead major AI players, including Perplexity and Scale AI. This highlights how talent clusters from one generation of startups can become the founding diaspora for the next.

Enterprise AI Value Is Unlocked by Reinforcement Fine-Tuning, Not Simple SFT

Basic supervised fine-tuning (SFT) only adjusts a model's style. The real unlock for enterprises is reinforcement fine-tuning (RFT), which leverages proprietary datasets to create state-of-the-art models for specific, high-value tasks, moving beyond mere 'tone improvements.'
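
A toy sketch of the difference in training signal (this is not OpenAI's fine-tuning API; the claim-classification task and grader are hypothetical): SFT imitates reference completions, while RFT scores sampled outputs with a task-specific grader, so proprietary evaluation logic becomes the reward.

```python
# SFT: the model imitates a reference completion, which mostly shapes style and format.
sft_example = {
    "prompt": "Classify this insurance claim as APPROVE or DENY: ...",
    "completion": "DENY",
}

# RFT: the model's sampled answers are scored by a grader, so domain rules and
# ground-truth labels from a proprietary dataset become the training signal.
rft_example = {
    "prompt": "Classify this insurance claim as APPROVE or DENY: ...",
    "ground_truth": "DENY",
}


def grade(sampled_answer: str, ground_truth: str) -> float:
    """Hypothetical grader: full credit for the exact label, partial credit if it appears hedged."""
    answer = sampled_answer.strip().upper()
    if answer == ground_truth:
        return 1.0
    if ground_truth in answer:
        return 0.5
    return 0.0


# During RFT, many answers are sampled per prompt and the policy is pushed toward
# higher-scoring ones, which is how a narrow, high-value task can reach
# state-of-the-art rather than just picking up a new tone.
print(grade("DENY", rft_example["ground_truth"]))                            # 1.0
print(grade("Probably DENY, pending review", rft_example["ground_truth"]))   # 0.5
```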

Real-World AI Agents Require Deterministic Workflows, Not Full Autonomy

Contrary to the vision of free-wheeling autonomous agents, most business automation relies on strict Standard Operating Procedures (SOPs). Products like OpenAI's Agent Builder succeed by providing deterministic, node-based workflows that enforce business logic, which is more valuable than pure autonomy.
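
A purely illustrative sketch of the SOP pattern, not OpenAI's Agent Builder: each node performs one bounded step, and the edge table, rather than the model, decides what runs next, so business logic is enforced. The ticket-handling nodes are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Ticket:
    text: str
    category: str = ""
    refund_approved: bool = False


def classify(t: Ticket) -> str:
    # A model call could live inside a node; the routing below stays deterministic.
    t.category = "refund" if "refund" in t.text.lower() else "other"
    return t.category


def check_policy(t: Ticket) -> str:
    t.refund_approved = "receipt" in t.text.lower()  # stand-in for a real policy rule
    return "approved" if t.refund_approved else "escalate"


def close_out(t: Ticket) -> str:
    print(f"resolved: category={t.category}, refund_approved={t.refund_approved}")
    return "done"


# Graph: node name -> (handler, {handler result: next node})
WORKFLOW: dict[str, tuple[Callable[[Ticket], str], dict[str, str]]] = {
    "classify":     (classify,     {"refund": "check_policy", "other": "close_out"}),
    "check_policy": (check_policy, {"approved": "close_out", "escalate": "close_out"}),
    "close_out":    (close_out,    {}),
}


def run(ticket: Ticket, node: str = "classify") -> None:
    while node != "done":
        handler, edges = WORKFLOW[node]
        result = handler(ticket)
        node = edges.get(result, "done")


run(Ticket("I want a refund; my receipt is attached."))
```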

Open Source Models Pose Low Cannibalization Risk to Premium APIs

OpenAI has seen no cannibalization from its open source model releases. The use cases, customer profiles, and immense difficulty of operating inference at scale create a natural separation. Open source serves different needs and helps grow the entire AI ecosystem, which benefits the platform leader.
