Philosophy should have been central to AI's creation, but academic siloing kept philosophers on the sidelines. Instead of engaging with technology and building, they remained focused on isolated cogitation. AI emerged from engineers who asked "what can I make?" rather than only "what is a mind?"
The AI landscape has three groups: 1) Frontier labs on a "superintelligence quest," absorbing most capital. 2) Fundamental researchers who think the current approach is flawed. 3) Pragmatists building value with today's "good enough" AI.
With industry dominating large-scale model training, academia’s comparative advantage has shifted. Its focus should be on exploring high-risk, unconventional concepts like new algorithms and hardware-aligned architectures that commercial labs, focused on near-term ROI, cannot prioritize.
AI's advancement is not linear. While the industry anticipated a "year of agents" delivering practical assistance, the most significant recent progress has come in specialized, academic domains like competitive mathematics, underscoring how unpredictable the field's trajectory remains.
When OpenAI started, the AI research community measured progress through peer-reviewed papers. OpenAI's contrarian move was to pour millions into GPUs and large-scale engineering aimed at tangible results, a strategy that academics criticized at the time but that ultimately produced the lab's breakthrough.
The PC revolution was sparked by thousands of hobbyists experimenting with cheap microprocessors in garages. True innovation waves are distributed and permissionless. Today's AI, dominated by expensive, proprietary models from large incumbents, may stifle this crucial experimentation phase, limiting its revolutionary potential.
Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.
The perceived limits of today's AI are not inherent to the models themselves but to our failure to build the right "agentic scaffold" around them. There's a "model capability overhang" where much more potential can be unlocked with better prompting, context engineering, and tool integrations.
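To make "agentic scaffold" concrete, here is a minimal sketch of a tool-use loop wrapped around a model. Everything in it is a placeholder for illustration: `call_model`, the message format, and both tools are hypothetical stand-ins, not any particular provider's API.

```python
import json

def call_model(messages, tools):
    """Placeholder for any chat-completion API that supports tool calling.

    Assumed to return a dict like {"content": str, "tool": str | None, "args": dict}.
    """
    raise NotImplementedError("wire up an actual LLM client here")

# The scaffold's tools; the model decides when (and whether) to call them.
TOOLS = {
    "search_docs": lambda query: f"(top passages matching {query!r})",
    "run_python": lambda code: "(stdout of sandboxed execution)",
}

def run_agent(task, max_steps=8):
    """Loop: ask the model, execute any tool it requests, feed the result back."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages, tools=list(TOOLS))
        if reply.get("tool") is None:            # model answered directly: done
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply.get("args", {}))
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": str(result)})
    return "Stopped: step budget exhausted."
```

The point of the sketch is that none of the capability lives in the loop itself; the same model produces far better results once tools, context, and a step budget are arranged around it.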
Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, run an experiment, analyze the result (a negative result still carries information), and refine. AI needs this same iterative loop against the real world to make genuine discoveries.
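A skeleton of that loop makes the gap obvious: the control flow is trivial, and every function name below is a hypothetical placeholder; the hard, currently missing parts are the experiment and analysis steps that touch the real world.

```python
def propose_hypothesis(observations):
    """Placeholder: ask a model for a testable hypothesis given prior observations."""
    raise NotImplementedError

def run_experiment(hypothesis):
    """Placeholder: execute a real or simulated experiment and return raw data."""
    raise NotImplementedError

def analyze(hypothesis, data):
    """Placeholder: compare the data to the hypothesis's prediction.

    Assumed to return (supported: bool, notes: str).
    """
    raise NotImplementedError

def discover(initial_observations, max_cycles=10):
    """Hypothesize -> experiment -> analyze -> refine, until something holds up."""
    observations = list(initial_observations)
    for _ in range(max_cycles):
        hypothesis = propose_hypothesis(observations)
        data = run_experiment(hypothesis)
        supported, notes = analyze(hypothesis, data)
        observations.append(notes)   # failed hypotheses still inform the next cycle
        if supported:
            return hypothesis
    return None                      # budget exhausted without a supported result
```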
Many engineers at large companies are cynical about AI's hype, hindering internal product development. This forces enterprises to seek external startups that can deliver functional AI solutions, creating an unprecedented opportunity for new ventures to win large customers.
The race to manage AGI is hampered by a philosophical problem: there's no consensus definition for what it is. We might dismiss true AGI's outputs as "hallucinations" because they don't fit our current framework, making it impossible to know when the threshold from advanced AI to true general intelligence has actually been crossed.