A sophisticated threat involves state-sponsored actors from the DPRK using AI interview tools and virtual backgrounds to pass hiring processes. They get hired, receive company laptops, and then operate as insider threats, creating a significant and often undetected security risk for organizations.
Candidates are embedding hidden text and instructions in their resumes to game automated AI hiring platforms. This 'prompt hacking' tactic, reportedly found in up to 10% of applications by one firm, represents a new front in the cat-and-mouse game between applicants and the algorithms designed to filter them.
A viral thread showed a user tricking a United Airlines AI bot using prompt injection to bypass its programming. This highlights a new brand vulnerability where organized groups could coordinate attacks to disable or manipulate a company's customer-facing AI, turning a cost-saving tool into a PR crisis.
Unlike human attackers, AI can ingest a company's entire API surface to find and exploit combinations of access patterns that individual, siloed development teams would never notice. This makes it a powerful tool for discovering hidden security holes that arise from a lack of cross-team coordination.
For AI agents, the key vulnerability parallel to LLM hallucinations is impersonation. Malicious agents could pose as legitimate entities to take unauthorized actions, like infiltrating banking systems. This represents a critical, emerging security vector that security teams must anticipate.
With 88% of companies using AI to screen resumes, traditional applications are often unseen by humans. A new hack involves sending a small Venmo payment with a resume link directly to a hiring manager, creating an unignorable notification that bypasses automated gatekeepers.
Managing human identities is already complex, but the rise of AI agents communicating with systems will multiply this challenge exponentially. Organizations must prepare for managing thousands of "machine identities" with granular permissions, making robust identity management a critical prerequisite for the AI era.
Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.
AI 'agents' that can take actions on your computer—clicking links, copying text—create new security vulnerabilities. These tools, even from major labs, are not fully tested and can be exploited to inject malicious code or perform unauthorized actions, requiring vigilance from IT departments.
An AI agent capable of operating across all SaaS platforms holds the keys to the entire company's data. If this "super agent" is hacked, every piece of data could be leaked. The solution is to merge the agent's permissions with the human user's permissions, creating a limited and secure operational scope.
While sophisticated AI attacks are emerging, the vast majority of breaches will continue to exploit poor security fundamentals. Companies that haven't mastered basics like rotating static credentials are far more vulnerable. Focusing on core identity hygiene is the best way to future-proof against any attack, AI-driven or not.