OpenAI President Greg Brockman reveals that the most dangerous moment for the company's culture was the post-ChatGPT launch party. The feeling of having 'won' threatened the underdog mentality that he believes is essential for innovation and competitiveness against larger, more established players.
Instead of viewing compute as a cost center, OpenAI treats it as a revenue generator, analogous to hiring salespeople. The core belief is that demand for AI capabilities is so vast that they can never build compute fast enough to satisfy it, justifying massive, forward-looking infrastructure investments.
Initially, Greg Brockman and his team viewed Codex as a tool strictly for software engineers. They later realized the underlying technology was not about code, but about general problem-solving and managing context. This insight shifted their strategy from 'Codex for coders' to 'Codex for everyone'.
A key part of OpenAI's 'takeoff' strategy is building an automated AI researcher. This system is designed to perform the full end-to-end workflow of a human research scientist autonomously. The goal is to dramatically accelerate the cycle of AI improvement, with humans providing high-level direction and oversight.
Even as AI models become vastly more powerful, widespread adoption is throttled by the slow evolution of users' mental models of what AI can do. People judge capability by past experiences, and it takes a 'magical' result to expand their belief in what the system can handle for new, complex tasks.
Greg Brockman describes the imminent arrival of AGI not as a singular event where AI becomes uniformly superhuman, but as a 'jagged' reality. The AI will be superhuman at most intellectual computer-based tasks while still struggling with some basic tasks a human can do, making a clear definition difficult.
The planned Superapp combining coding, browsing, and chat is more than a UI consolidation. The deeper, more critical goal is to merge multiple backend systems into a single, unified 'AI harness' that manages context, actions, and interaction loops. This creates a powerful, efficient AI layer for various applications.
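The 'harness' idea above can be made concrete with a small sketch. This is a hypothetical illustration, not OpenAI's actual architecture or API: a single loop that owns the context, dispatches tool actions requested by a model, and feeds results back until the model produces a final reply. The `Harness` class, the `{"action": ...}` / `{"reply": ...}` message shapes, and the tool names are all assumptions made for illustration.

```python
# Hypothetical sketch of a unified "AI harness": one layer that manages
# context, actions, and the interaction loop for any front-end (chat,
# coding, browsing). All names and message shapes are illustrative.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Harness:
    # model: maps the running context to either a tool request
    # ({"action": name, "args": {...}}) or a final answer ({"reply": text}).
    model: Callable[[list[dict]], dict]
    # tools: named callables the model may invoke, e.g. browse or run_code.
    tools: dict[str, Callable[..., str]]
    context: list[dict] = field(default_factory=list)

    def step(self, user_msg: str, max_turns: int = 5) -> str:
        self.context.append({"role": "user", "content": user_msg})
        for _ in range(max_turns):
            out = self.model(self.context)
            if "reply" in out:  # model is done: record and surface the answer
                self.context.append({"role": "assistant", "content": out["reply"]})
                return out["reply"]
            # Otherwise execute the requested tool and feed the result back
            # into the shared context, so every surface sees the same state.
            result = self.tools[out["action"]](**out["args"])
            self.context.append({"role": "tool", "content": result})
        return "max turns exceeded"
```

Because the loop, context, and tool dispatch live in one place, a chat UI, a coding UI, and a browser UI become thin front-ends over the same state machine, which is the consolidation the passage describes.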
OpenAI's president predicts that AI will soon produce creative breakthroughs comparable to AlphaGo's Move 37, which redefined Go strategy. This will not be limited to science and math but will extend to domains like literature and poetry, unlocking novel forms of human creative understanding and ideation.
Greg Brockman states that in AI, 'too much opportunity' is the main problem, as most ideas work. OpenAI's strategic decisions, like focusing on the GPT reasoning model over video generation, are primarily driven by an extreme scarcity of compute. They cannot fund all promising avenues simultaneously.
While AI agents provide incredible leverage, becoming a 'CEO of a fleet of agents' creates a risk of losing one's 'pulse on the problem.' Brockman warns that users cannot abdicate responsibility. Effective use of AI agents requires active human oversight and accountability to prevent critical details from being missed.
OpenAI's model development isn't about isolated releases. A new pre-trained base model like 'Spud' acts as a new foundation. It allows two years' worth of accumulated but previously unrealized research in areas like reinforcement learning and fine-tuning to finally come to fruition, creating a step-change in capability.
