The AI security market is ripe for a correction as enterprises realize that current guardrail products don't work and that free, open-source alternatives are often superior. Companies acquired at high valuations on the strength of these flawed products may struggle as the expected revenue fails to materialize.
Claiming a "99% success rate" for an AI guardrail is misleading. The space of potential attacks (i.e., prompts) is astronomically large; for GPT-5, roughly 'one followed by a million zeros.' Blocking 99% of a tested subset says nothing about the untested remainder, which still contains a practically inexhaustible supply of effective attacks.
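A rough back-of-envelope calculation shows where a figure like "one followed by a million zeros" comes from. The vocabulary size and context length below are illustrative assumptions, not GPT-5's actual specifications:

```python
import math

# Back-of-envelope size of the prompt space.
# Assumed (illustrative) numbers: a tokenizer vocabulary of ~100,000 tokens
# and a maximum prompt length of ~200,000 tokens.
vocab_size = 100_000
max_prompt_tokens = 200_000

# Counting only prompts of exactly the maximum length (a lower bound):
# vocab_size ** max_prompt_tokens = 10 ** (max_prompt_tokens * log10(vocab_size))
exponent = max_prompt_tokens * math.log10(vocab_size)
print(f"roughly 10^{exponent:,.0f} possible prompts")
# -> roughly 10^1,000,000, i.e., a one followed by a million zeros.
# Testing even billions of prompts covers a negligible fraction of this space.
```

Under these assumptions, any finite benchmark of blocked attacks samples an immeasurably small corner of the space attackers can draw from.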
The assumption that enterprise API spending on AI models creates a strong moat is flawed. In reality, businesses can and will easily switch between providers like OpenAI, Google, and Anthropic. This makes the market a commodity battleground where cost and on-par performance, not loyalty, will determine the winners.
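To make the low switching cost concrete: many model providers expose OpenAI-compatible chat endpoints, so moving a workload is often little more than a configuration change. A minimal sketch, assuming hypothetical provider endpoints (the base URLs and model names below are placeholders, not real services):

```python
from openai import OpenAI

# Hypothetical provider configs -- base URLs and model names are placeholders.
# The point is that the calling code does not change when the vendor does.
PROVIDERS = {
    "provider_a": {"base_url": "https://api.provider-a.example/v1", "model": "model-a"},
    "provider_b": {"base_url": "https://api.provider-b.example/v1", "model": "model-b"},
}

def ask(provider: str, prompt: str, api_key: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching vendors is a one-line config change; the prompt itself is portable.
```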
Initially viewed as a growth driver, Generative AI is now seen by investors as a major disruption risk. The sentiment shift is driven by massive, highly visible investments in AI infrastructure without corresponding revenue growth in established enterprise sectors, turning investor attention from potential upside to potential downside.
Recent security breaches (e.g., Gainsight/Drift on Salesforce) signal a shift. As AI agents access more data, incumbents can leverage security concerns to block third-party apps and promote their own integrated solutions, effectively using security as a competitive weapon.
AI leaders aren't ignoring risks because they're malicious, but because they are trapped in a high-stakes competitive race. This "code red" environment incentivizes patching safety issues case-by-case rather than fundamentally re-architecting AI systems to be safe by construction.
Many AI safety guardrails function like the TSA at an airport: they create the appearance of security for enterprise clients and PR, but they don't stop determined attackers. Seasoned adversaries can easily switch to a different model, making the effort to patch guardrails a "futile battle" that has little to do with real-world safety.
Security expert Alex Komorowski argues that current AI systems are fundamentally insecure. The lack of a large-scale breach is a temporary illusion created by the early stage of AI integration into critical systems, not a testament to the effectiveness of current defenses.
The world's top AI researchers at labs like OpenAI, Google, and Anthropic have not solved adversarial robustness. It is therefore highly unlikely that third-party B2B security vendors, who typically lack the same level of deep research capability, possess a genuine solution.
Unlike traditional SaaS, where high switching costs prevent price wars, the AI market faces a unique threat. The portability of prompts and reliance on interchangeable models could enable rapid commoditization. A price war could be "terrifying" and "brutal" for the entire ecosystem, posing a significant downside risk.
A developer reverse-engineered 200 AI startups and found that 146 were primarily wrappers around major model APIs such as OpenAI's and Anthropic's Claude, despite marketing claims of "proprietary language models." This suggests a widespread disconnect between technical substance and marketing hype, a critical due diligence flag for investors and enterprise buyers in the AI space.