The most effective government role in innovation is to act as a catalyst for high-risk, foundational R&D (as DARPA did in funding the ARPANET, the precursor to the internet). Once a technology is viable, the government should step aside and allow private-sector competitors (like SpaceX) to drive down costs and accelerate progress.
History shows that pioneers who fund massive infrastructure shifts, like railroads or the early internet, frequently lose their investments. The real profits are captured later by companies that build services on top of the now-established, de-risked platform.
With industry dominating large-scale model training, academia’s comparative advantage has shifted. Its focus should be on exploring high-risk, unconventional concepts like new algorithms and hardware-aligned architectures that commercial labs, focused on near-term ROI, cannot prioritize.
Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. Applied historically, it would have blocked transformative technologies like the automobile, or even nuclear power, which, measured in deaths per unit of energy produced, has a better safety record than oil.
The Orphan Drug Act successfully incentivized R&D for rare diseases. A similar policy framework is needed for common, age-related diseases. Despite their massive potential markets, these indications suffer from extremely high failure rates and costs. A new incentive structure could de-risk development and align commercial goals with the enormous societal need for longevity.
The most profound innovations in history, like vaccines, PCs, and air travel, distributed value broadly to society rather than concentrating it in a few corporations. AI could follow this pattern, benefiting the public more than a handful of tech giants, especially with geopolitical pressures forcing commoditization.
CZI focuses on creating new tools for science, a 10- to 15-year process that is often underfunded. Instead of just giving grants, it builds and operates its own institutes, physically co-locating scientists and engineers to accelerate breakthroughs in areas that traditional funding misses.
Unconventional AI operates as a "practical research lab" by explicitly deferring manufacturing constraints during initial innovation. The focus is purely on establishing "existence proofs" for new ideas, preventing premature optimization from killing potentially transformative but difficult-to-build concepts.
Moving from a science-focused research phase to building physical technology demonstrators is critical. The sooner a deep tech company does this, the faster it uncovers new real-world challenges, creates tangible proof for investors and customers, and fosters a culture of building, not just researching.
Attempting to hoard technology like a state secret is counterproductive for the US. The nation's true competitive advantage has always been its open society, which enables broad participation and bottom-up innovation. Competing effectively, especially in AI, means leaning into this openness, not trying to emulate closed, top-down systems.
An effective governance model involves successful private-sector leaders doing a 'tour of duty' in government, bringing valuable real-world expertise to policymaking. Critics cite conflicts of interest, but the payoff is that regulations get shaped in the national interest by qualified practitioners rather than by career bureaucrats alone.