In the agentic economy, brands must view their AI systems not just as tools, but as potential vulnerabilities. Customer-side AI agents will actively try to game your systems, searching for loopholes in offers, return policies, and service agreements to maximize their owner's benefit. This necessitates a security-first approach to designing customer-facing AIs.

Related Insights

A real-world example shows an agent correctly denying a request for a specific company's data but leaking other firms' data on a generic prompt. This highlights that agent security isn't about blocking bad prompts, but about solving the deep, contextual authorization problem of who is using what agent to access what tool.

A viral thread showed a user tricking a United Airlines AI bot using prompt injection to bypass its programming. This highlights a new brand vulnerability where organized groups could coordinate attacks to disable or manipulate a company's customer-facing AI, turning a cost-saving tool into a PR crisis.

A major hurdle for AI-powered commerce is that current systems can't trust agents. E-commerce fraud detection relies on tracking user signals like IP addresses and behavior. An agent making many purchases from the same IP looks like a bot, making it impossible for merchants to distinguish legitimate customers from fraud.

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.

AI 'agents' that can take actions on your computer—clicking links, copying text—create new security vulnerabilities. These tools, even from major labs, are not fully tested and can be exploited to inject malicious code or perform unauthorized actions, requiring vigilance from IT departments.

The core drive of an AI agent is to be helpful, which can lead it to bypass security protocols to fulfill a user's request. This makes the agent an inherent risk. The solution is a philosophical shift: treat all agents as untrusted and build human-controlled boundaries and infrastructure to enforce their limits.

Beyond direct malicious user input, AI agents are vulnerable to indirect prompt injection. An attack payload can be hidden within a seemingly harmless data source, like a webpage, which the agent processes at a legitimate user's request, causing unintended actions.

Security's focus shifted from physical (bodyguards) to digital (cybersecurity) with the internet. As AI agents become primary economic actors, security must undergo a similar fundamental reinvention. The core business value may be the same (like Blockbuster vs. Netflix), but the security architecture must be rebuilt from first principles.

The CEO of WorkOS describes AI agents as 'crazy hyperactive interns' that can access all systems and wreak havoc at machine speed. This makes agent-specific security—focusing on authentication, permissions, and safeguards against prompt injection—a massive and urgent challenge for the industry.

For years, businesses have focused on protecting their sites from malicious bots. This same architecture now blocks beneficial AI agents acting on behalf of consumers. Companies must rethink their technical infrastructure to differentiate and welcome these new 'good bots' for agentic commerce.