Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

The idea that major software vulnerabilities found by AI can be fixed in a short, coordinated effort is mere "theater." The sheer volume of bugs embedded in decades of code would necessitate a multi-year shutdown of the internet to truly address them, making short-term projects largely performative.

Related Insights

As AI generates vast quantities of code, the primary engineering challenge shifts from production to quality assurance. The new bottleneck is the limited human attention available to review, understand, and manage the quality of the codebase, leading to increased fragility and "slop" in production.

Claiming a "99% success rate" for an AI guardrail is misleading. The number of potential attacks (i.e., prompts) is nearly infinite. For GPT-5, it's 'one followed by a million zeros.' Blocking 99% of a tested subset still leaves a virtually infinite number of effective attacks undiscovered.

The same AI technology amplifying cyber threats can also generate highly secure, formally verified code. This presents a historic opportunity for a society-wide effort to replace vulnerable legacy software in critical infrastructure, leading to a durable reduction in cyber risk. The main challenge is creating the motivation for this massive undertaking.

As AI models become adept at finding software vulnerabilities, there's a limited time for companies to use these tools defensively. This brief "catch-up" period exists before these powerful capabilities become widely available to malicious actors, creating an urgent, time-boxed need for proactive patching of legacy systems.

AI agents prioritize speed and functionality, pulling code from repositories without vetting them. This behavior massively scales up existing software supply chain vulnerabilities, risking a collapse of trust as compromised code spreads uncontrollably through automated systems.

A former OpenAI security expert argues that even if AI makes codebases more secure, hacking won't become harder. Attackers exploit the entire system—runtime behavior, configurations, authentication—not just static code. Looking only at code is like seeing a dinosaur's bones; you miss the muscles, feathers, and behavior that define the real-world attack surface.

The massive increase in AI-generated code is simultaneously creating more software dependencies and vulnerabilities. This dynamic, described as 'more code, more problems,' significantly expands the attack surface for bad actors and creates new challenges for software supply chain security.

The emergence of AI that can easily expose software vulnerabilities may end the era of rapid, security-last development ('vibe coding'). Companies will be forced to shift resources, potentially spending over 50% of their token budgets on hardening systems before shipping products.

Unlike traditional software where a bug can be patched with high certainty, fixing a vulnerability in an AI system is unreliable. The underlying problem often persists because the AI's neural network—its 'brain'—remains susceptible to being tricked in novel ways.

While AI models excel at identifying security vulnerabilities, the next major innovation lies in automatic remediation. The "holy grail" for cybersecurity startups is developing AI systems that can instantly patch and fix identified threats, moving beyond simple detection to proactive, zero-day defense.

Fixing All AI-Discoverable Bugs Would Require Shutting Down the Internet for Years | RiffOn