Founders often believe success will bring ease and happiness, but building meaningful things is a constant, hard grind. The goal shouldn't be happiness, which is fleeting, but contentment—the deep satisfaction derived from tackling important problems. The hardness itself is a privilege to be embraced.
The model that powered ChatGPT was not new; its world-changing potential was unlocked by a simple application experiment (RLHF for instruction following). This shows that massive opportunities are often hidden in plain sight, requiring not a breakthrough invention but the willingness to 'do the damned experiment.'
Entrepreneurs can often bend the world to their will, but it's crucial to distinguish what they *wish* will happen from what *must* happen due to inevitable trends. Building on the 'must happen' landscape provides a more robust foundation for a startup's long-term success.
There is a massive gap between what AI models *can* do and how they are *currently* used. This 'capability overhang' exists because unlocking their full potential requires unglamorous 'ugly plumbing' and 'grunty product building.' The real opportunity for founders is in this grind, not just in model innovation.
Open source AI models can't improve in the same decentralized way as software like Linux. While the community can fine-tune and optimize existing models, the primary driver of new capability—massive-scale pre-training—requires centralized compute resources that are inherently better suited to commercial funding models.
Microsoft's early OpenAI investment was a calculated, risk-adjusted decision. They saw that generalizable AI platforms were a 'must happen' future and asked, 'Can we remain a top cloud provider without it?' The clear 'no' made the investment a defensive necessity, not just an offensive gamble.
In today's hype-driven AI market, founders must ignore 'false signals' like media attention and investor interest. These metrics have zero, or even negative, correlation with building a useful product. The only signal that matters is genuine love and feedback from actual users.
Kevin Scott recounts leaving his PhD because his work was intellectually stimulating but had marginal real-world impact. At Google, he chose to automate ad approvals—a less 'sexy' problem that ultimately saved the company a billion dollars in operating costs, cementing his 'impact-first' framework.
