
Platforms like Vercel already see the majority of their admin traffic coming from bots. Crucially, these agents are not rational actors; they are easily influenced and heavily biased toward the tools and patterns present in their training data.

Related Insights

Tools like Moltbot make complex web automation trivial for anyone, not just engineers. This dramatic drop in the barrier to entry will flood the internet with bot traffic for content scraping and social manipulation, ultimately destroying the economic viability of traditional websites.

Vercel CEO Guillermo Rauch reports a dramatic shift in traffic sources: automated coding agents are consuming information at an unforeseen, exponential rate. This signals a fundamental change in how developers and their new AI assistants use infrastructure and documentation.

AI agents are becoming the dominant source of internet traffic, shifting the paradigm from human-centric UI to agent-friendly APIs. Developers optimizing for human users may be designing for a shrinking minority, as automated systems increasingly consume web services.

AI models are not optimized to find objective truth. They are trained on biased human data and reinforced to provide answers that satisfy the preferences of their creators. This means they inherently reflect the biases and goals of their trainers rather than an impartial reality.

Vercel is building infrastructure based on a threat model where developers cannot be trusted to handle security correctly. By extracting critical functions like authentication and data access from the application code, the platform can enforce security regardless of the quality or origin (human or AI) of the app's code.
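One way to picture "extracting authentication from the application code" is a platform-level gateway that enforces auth before any request reaches the app. This is a minimal, hypothetical sketch, not Vercel's actual architecture; the names (`VALID_TOKENS`, the handler signature) are illustrative only.

```python
# Hypothetical sketch: platform-enforced auth wrapping an untrusted app handler.
# VALID_TOKENS and the (status, body) handler convention are illustrative,
# not any real platform's API.
VALID_TOKENS = {"secret-token-123"}


def untrusted_app(request):
    # Application code (human- or AI-written) contains no auth logic
    # and cannot bypass it: it only runs after the gateway admits the request.
    return 200, f"hello {request['user']}"


def platform_gateway(app, request):
    # The platform checks credentials before invoking application code.
    token = request.get("authorization")
    if token not in VALID_TOKENS:
        return 401, "unauthorized"
    # Identity is resolved by the platform, not the app.
    request["user"] = "alice"
    return app(request)
```

Because the gateway owns the security boundary, a buggy or AI-generated handler that forgets to check credentials still cannot serve an unauthenticated request.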

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.

As AI agents increasingly browse the web, they encounter UIs designed for humans that block their progress. This creates an invisible problem for businesses, as this server-side traffic often goes unseen. New companies are emerging to provide analytics for this agentic web traffic.

AI agents are the fastest-growing users of command-line tools. They have unique behaviors, like running "status" after every command, and struggle with interactive flows. Tools must be designed with this new, non-human persona in mind, not just for human developers.
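A common way to accommodate non-human callers is to make interactive prompts optional: detect whether a TTY is attached and offer a flag that skips confirmation. This is a minimal sketch of that pattern, assuming a hypothetical deploy command; it is not taken from any specific tool mentioned above.

```python
import argparse
import sys


def main(argv=None):
    parser = argparse.ArgumentParser(description="Hypothetical deploy tool")
    parser.add_argument(
        "--yes",
        action="store_true",
        help="skip confirmation prompts (for scripts and agents)",
    )
    args = parser.parse_args(argv)

    # Agents typically run commands without a TTY attached; in that case
    # (or when --yes is passed) fall back to non-interactive behavior
    # instead of blocking on input() forever.
    interactive = sys.stdin.isatty() and not args.yes
    if interactive:
        answer = input("Deploy to production? [y/N] ")
        if answer.strip().lower() != "y":
            print("Aborted.")
            return 1
    print("Deploying...")
    return 0
```

An agent (or CI job) would invoke this as `deploy --yes`, while a human at a terminal still gets the confirmation prompt.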

For years, businesses have focused on protecting their sites from malicious bots. This same architecture now blocks beneficial AI agents acting on behalf of consumers. Companies must rethink their technical infrastructure to differentiate and welcome these new 'good bots' for agentic commerce.

The company's strategy for managing threats from malicious AI agents is to use AI for defense. They are building the capacity to scan everything happening on the platform in real-time, believing that monitoring AI can be just as powerful as generative AI.