Project Glasswing represents the private sector creating its own version of the government's Vulnerabilities Equities Process (VEP). A private company now coordinates a multinational effort to manage critical software flaws, a function that historically belonged to state actors.
Historically, many organizations only implement robust cybersecurity after being attacked, despite knowing the risks. AI-powered offense dramatically raises the stakes by increasing the speed and scale of threats, making this reactive posture untenable and potentially catastrophic.
Mythos is a general-purpose system that is also proficient in biology. How society, governments, and companies manage the risks and norms of AI in cybersecurity is a direct preview of the much higher-stakes challenge of managing future AI-driven biological threats.
For the military, the toughest AI adoption challenge isn't on offense, but defense: overcoming institutional resistance to granting AI the autonomy needed to defend networks at machine speed. A system that merely alerts a human operator and waits for approval is too slow, creating a major bureaucratic and command-and-control dilemma.
The core open-source belief that enough human experts will find all bugs ("given enough eyeballs, all bugs are shallow") is invalidated by AI discovering decades-old vulnerabilities in widely scrutinized code. This proves that high-level machine analysis is now essential for security, as human review alone is insufficient.
The primary strategic advantage of an AI like Claude Mythos is not launching destructive attacks, but finding unique vulnerabilities for quiet, persistent intelligence collection. Its power lies in the slow, insidious shaping of the information environment rather than overt, 'whiz bang' effects.
Despite AI supercharging offensive capabilities, the defender's ultimate advantage remains unchanged: they set the operational terrain. Basic, often-neglected measures like network air-gapping are more critical than ever, as they create structural barriers that even advanced AI struggles to overcome.
AI will find vulnerabilities at an unprecedented rate. The real crisis will be the organizational inability to patch them, especially in critical infrastructure with long update cycles and unsupported software where original developers are long gone. The problem shifts from finding flaws to fixing them at scale.
Landmark cyberattacks like Stuxnet and NotPetya relied on automation for scale and impact long before modern AI. Models like Mythos don't invent this concept; they represent an exponential leap by automating the entire 'kill chain,' from discovery to exploitation, fulfilling a long-theorized potential.
While deepfakes garner attention, research from as early as 2020 shows AI can measurably shift political opinions using only simple text. This scalable, text-based persuasion is a potent tool for information operations and may prove more impactful than technologically complex manipulations like synthetic video.
