Market forces, rather than government regulation, will address AI bias. As studies reveal biases in models from OpenAI and Google, a competitor like Elon Musk's xAI can market Grok's neutrality as a key selling point, attracting users and forcing the entire market to improve.
While tech giants could technically replicate Perplexity, their core business models—advertising for Google, e-commerce for Amazon—create a fundamental conflict of interest. An independent player can align purely with the user's best interests, creating a strategic opening that incumbents are structurally unable to fill without cannibalizing their primary revenue streams.
AI labs may initially conceal a model's "chain of thought" for safety. However, once one competitor exposes its model's internal reasoning and users show they prefer it, market dynamics force the others to follow suit, demonstrating how competition can compel companies to abandon safety measures for a competitive edge.
Unlike banking, the AI industry is fiercely competitive. With at least five major frontier-model companies, the failure of any one would simply see its market share absorbed by rivals. This healthy competition makes a federal bailout of any single AI firm, such as OpenAI, nonsensical: none is "too big to fail."
Elon Musk co-founded OpenAI as a nonprofit to be the philosophical opposite of Google, which he believed held a monopoly on AI and was led by a CEO who wasn't taking AI safety seriously. The goal was to create an open-source counterweight, not a for-profit entity.
AI leaders aren't ignoring risks because they're malicious, but because they are trapped in a high-stakes competitive race. This "code red" environment incentivizes patching safety issues case-by-case rather than fundamentally re-architecting AI systems to be safe by construction.
When prompted, Elon Musk's Grok chatbot acknowledged that Grokipedia, Musk's rival to Wikipedia, will likely inherit the biases of its creators and could mirror Musk's tech-centric or libertarian-leaning narratives.
The AI industry is not a winner-take-all market. Instead, it's a dynamic "leapfrogging" race in which competitors like OpenAI, Google, and Anthropic constantly surpass one another with new models. This churn prevents any one company from monopolizing the market and encourages specialization, with different models excelling in areas like coding or current events.
Fears of a single AI company achieving runaway dominance are proving unfounded, as the number of frontier models has tripled in a year. Newcomers can use techniques like synthetic data generation to effectively "drink the milkshake" of incumbents, using an established model's outputs to reverse-engineer comparable intelligence at a fraction of the cost.
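To make that mechanism concrete, here is a minimal sketch of synthetic-data distillation, assuming access to an incumbent's model through the official OpenAI Python client; the prompts, model name, and file layout are illustrative placeholders, not a production pipeline:

```python
# Minimal sketch of synthetic-data distillation: query an incumbent
# ("teacher") model for answers, then save prompt/response pairs as
# supervised training data for a cheaper "student" model. A real
# pipeline would use millions of diverse prompts and a full
# fine-tuning stack; this only shows the data-generation loop.
import json
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain how TCP congestion control works.",
    "Summarize the causes of the 2008 financial crisis.",
]

with open("synthetic_train.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # the "teacher" whose behavior is imitated
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # One fine-tuning example per line: the student learns to
        # reproduce the teacher's responses.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

The economics favor the newcomer: generating such a dataset costs only API fees, while the incumbent bore the full cost of the original pretraining run.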
The classic "trolley problem" will become a product differentiator for autonomous vehicles. Car manufacturers will have to encode specific values—such as prioritizing passenger versus pedestrian safety—into their AI, creating a competitive market where consumers choose a vehicle based on its moral code.
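As a purely illustrative sketch (every name and number below is hypothetical), a manufacturer's "moral code" could be exposed as explicit weights in the planner's cost function, making the value trade-off a visible product parameter:

```python
# Hypothetical sketch: a vehicle's "moral code" as explicit weights on
# expected harm. Two manufacturers could ship the same planner with
# different weights and end up making different choices.
from dataclasses import dataclass

@dataclass
class MoralCode:
    passenger_weight: float   # relative weight on harm to occupants
    pedestrian_weight: float  # relative weight on harm to pedestrians

def choose_maneuver(maneuvers, code: MoralCode):
    """Pick the maneuver minimizing weighted expected harm.

    Each maneuver is (name, passenger_harm, pedestrian_harm), where the
    harms are estimated probabilities of serious injury in [0, 1].
    """
    return min(
        maneuvers,
        key=lambda m: code.passenger_weight * m[1] + code.pedestrian_weight * m[2],
    )[0]

maneuvers = [
    ("brake_straight", 0.30, 0.10),  # higher risk to the occupants
    ("swerve_right",   0.05, 0.40),  # higher risk to the pedestrian
]

# The same planner under two different value systems:
print(choose_maneuver(maneuvers, MoralCode(1.0, 1.0)))  # egalitarian -> brake_straight
print(choose_maneuver(maneuvers, MoralCode(2.0, 1.0)))  # passenger-first -> swerve_right
```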
An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.
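For a sense of what one piece of a "quantitative safety case" might look like, here is a hedged sketch using a standard Clopper-Pearson bound from scipy: from red-team trial results, compute an upper confidence bound on the model's harmful-output rate and compare it against a regulator-set ceiling. The trial counts and the threshold are hypothetical assumptions:

```python
# Hedged sketch: an exact (Clopper-Pearson) upper confidence bound on a
# harmful-output rate, given k failures observed in n red-team trials.
from scipy.stats import beta

def harm_rate_upper_bound(k: int, n: int, confidence: float = 0.95) -> float:
    """One-sided Clopper-Pearson upper bound on a binomial failure rate."""
    if k == n:
        return 1.0
    return beta.ppf(confidence, k + 1, n - k)

n_trials, failures = 100_000, 2
bound = harm_rate_upper_bound(failures, n_trials)  # ~6.3e-5 here
THRESHOLD = 1e-4  # hypothetical regulatory ceiling on harmful-output rate

print(f"95% upper bound on harm rate: {bound:.2e}")
print("Deploy" if bound < THRESHOLD else "More safety work required")
```

Under this framing, the lab's incentive mirrors a clinical trial: shrinking the bound below the threshold requires either more safety engineering or more evidence, both of which the creator, not the regulator, must pay for.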