VCs focused on horizontal tech often avoid robotics hardware. The reasoning is that a robot's success is determined by the vertical it serves—its competition, pricing, and supply chain are those of an agriculture or mining company, not a general technology company.
The mission to achieve AGI often conflicts with the commercial need to build a product. This creates a critical tension for founders: Should limited, expensive GPU resources be allocated to long-term research or to powering the revenue-generating product that funds that research?
At massive scale, the economics of chip design flip. For a $1B training run, the potential efficiency savings on compute and inference can far exceed the roughly $200M cost of developing a custom ASIC for that specific workload. The bottleneck becomes chip production timelines, not money.
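The break-even logic is simple arithmetic. A minimal sketch, using the source's illustrative figures (the $200M development cost and $1B run size are the only inputs; nothing here reflects real vendor pricing):

```python
def asic_breakeven_fraction(asic_dev_cost: float, workload_spend: float) -> float:
    """Fraction of compute spend the custom ASIC must save to pay for itself."""
    return asic_dev_cost / workload_spend

# A $200M ASIC program against a single $1B training run:
frac = asic_breakeven_fraction(200e6, 1e9)
print(f"Break-even efficiency gain: {frac:.0%}")  # → 20%
```

Any efficiency gain above ~20% on that one run is pure savings, before counting reuse of the chip on inference and future runs, which is why the calculus tilts so quickly at this scale.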
The VC market is obsessed with AI companies showing "zero to 100 in a year" growth. This creates a blind spot for high-quality, traditional software companies. A business growing 5x annually is a fantastic investment by any historical standard but now struggles for attention.
For the first time, investors can trace a direct line from dollars to outcomes. Capital invested in compute predictably enhances model capabilities due to scaling laws. This creates a powerful feedback loop where improved capabilities drive demand, justifying further investment.
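The "direct line from dollars to outcomes" comes from the power-law shape of scaling laws: loss falls predictably as compute grows. A toy sketch, where the constants `a` and `b` are hypothetical stand-ins (loosely in the spirit of published scaling-law fits, not real measurements):

```python
def loss(compute_flops: float, a: float = 1e3, b: float = 0.05) -> float:
    """Toy power-law scaling: loss = a * C^(-b). Constants are illustrative."""
    return a * compute_flops ** (-b)

# Each doubling of compute buys a small but *predictable* capability gain:
for c in [1e22, 2e22, 4e22]:
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
```

The predictability is the point: because each doubling of compute shaves off a known fraction of the loss, capital allocation becomes an engineering decision rather than a bet.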
While headlines focus on talent poaching by giants, the inflated compensation landscape has a silver lining for investors. It's driving an unprecedented number of acqui-hires where startups are acquired for their teams, providing excellent, non-traditional returns for early-stage funds.
Unlike software bottlenecked by engineering headcount, AI models scale with capital. A frontier model company can raise more than its entire app ecosystem combined, then use that capital to launch competitive first-party apps and subsume third-party developers.
While a current AI model may be gross-margin positive on inference, the company is not. The staggering cost of training the *next* model makes them gross-margin negative overall. Their business model relies on raising ever-larger rounds to fund R&D, a potentially unsustainable cycle.
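The distinction between per-model and company-level margins is worth making concrete. A sketch with entirely hypothetical numbers (no figures here come from any real company's financials):

```python
# Illustrative P&L: inference on the current model is profitable,
# but training the *next* model swamps that profit.
inference_revenue   = 2_000e6  # $2.0B revenue from serving the current model
inference_cost      = 1_200e6  # $1.2B cost of serving it
next_model_training = 3_000e6  # $3.0B training run for the successor model

inference_margin = (inference_revenue - inference_cost) / inference_revenue
overall = inference_revenue - inference_cost - next_model_training

print(f"Inference gross margin: {inference_margin:.0%}")       # positive
print(f"Overall after training spend: ${overall / 1e6:,.0f}M")  # negative
```

As long as each successor model costs more to train than the current one earns, the gap must be closed with fresh capital, which is the "ever-larger rounds" cycle described above.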
Specialized coding models often fail because a developer's workflow isn't just writing code; it's a complex conversation involving brainstorming, compliance, and web research. The best coding assistants are the most generalist models because every complex task has AGI-like qualities.
The firm's strategy isn't to back every foundation model. It centers on identifying singular talents whose past work demonstrates a unique ability to achieve foundational breakthroughs. The belief is that in the current AI landscape, a few specific individuals can move the entire field forward.
Today's AI funding boom differs from the dot-com era, when capital built unused "dark fiber": every dollar spent on GPUs is immediately consumed by insatiable demand. The absence of a supply overhang makes the "circular funding" model more sustainable, for now.
