The transition to agent-centric workflows is not a simple software deployment; it's a complex re-engineering of business processes. This creates a huge opportunity for a new generation of consulting firms that specialize in getting organizations "agent-ready."
As autonomous agents become prevalent, they'll need a sandboxed environment to access, store, and collaborate on enterprise data. This core infrastructure must manage permissions, security, and governance, creating a new market opportunity for platforms that can serve as this trusted container.
Simply giving an agent a user account is dangerous. An agent creator is liable for its actions, and the agent has no right to privacy. This requires a new identity and access management (IAM) paradigm, distinct from human user accounts, to manage liability and oversight.
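To make the distinction concrete, here is a minimal sketch of what an agent-specific principal might look like, as opposed to a human user account. All names and the scope format are illustrative assumptions: the key properties are a mandatory human owner (for liability) and a complete audit trail (no expectation of privacy).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an agent principal is a distinct identity type,
# not a reused human account. It always records a responsible human
# owner and logs every action attempt, allowed or denied.
@dataclass
class AgentPrincipal:
    agent_id: str
    owner_id: str                # human accountable for the agent's actions
    scopes: frozenset            # explicitly granted, never inherited wholesale
    audit_log: list = field(default_factory=list)

    def act(self, action: str, resource: str) -> bool:
        allowed = f"{action}:{resource.split('/')[0]}" in self.scopes
        # Every attempt is auditable by the owner, including denials.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action, "resource": resource, "allowed": allowed,
        })
        return allowed

agent = AgentPrincipal("agent-7", owner_id="alice",
                       scopes=frozenset({"read:contracts"}))
agent.act("read", "contracts/acme.pdf")   # allowed
agent.act("write", "contracts/acme.pdf")  # denied, but still logged
```

The design choice worth noting: oversight is structural, not optional. The owner field and the audit log are part of the identity itself, which is what makes liability manageable.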
The stakes for data quality are now higher than ever. An agent that pulls the wrong document can do serious damage, while one with access to clean, well-organized information confers a real competitive edge. This dynamic will push organizations to adopt better documentation and data-organization practices.
As enterprises deploy agents for critical tasks like RFP generation or invoice processing, they will require dedicated evaluation frameworks and teams. This will create a massive new market for agent observability and eval tools, expanding them beyond AI-native companies into the broader enterprise.
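The core of such an eval tool can be sketched in a few lines. This is a hedged illustration, not any vendor's framework: each case pairs an input with a checker function, and the harness reports a pass rate, the kind of per-deployment metric an observability product would track over time.

```python
# Minimal eval-harness sketch (all names are illustrative).
def run_evals(agent_fn, cases):
    results = []
    for case in cases:
        output = agent_fn(case["input"])
        results.append({"input": case["input"],
                        "passed": case["check"](output)})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

# Toy "invoice extraction" agent and golden cases, purely for illustration.
def toy_agent(text):
    return {"total": float(text.split("$")[-1])}

cases = [
    {"input": "Invoice total: $120.50",
     "check": lambda out: out["total"] == 120.50},
    {"input": "Amount due: $99",
     "check": lambda out: out["total"] == 99.0},
]

rate, results = run_evals(toy_agent, cases)  # rate is 1.0 here
```

Real systems add the hard parts this sketch omits: versioned golden sets, regression tracking across model updates, and human review of failures.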
Unlike previous technologies that integrated into existing workflows, AI agents require us to fundamentally re-engineer our work processes to make them effective. Early adopters who adapt their operations to how agents "think" will gain compounding advantages over competitors.
Unlike humans who have an intuitive sense of when to stop searching, agents can get stuck in expensive, fruitless loops trying to find information that may not exist. Teaching models the judgment to abandon a task is a new and vital frontier for reliable agentic AI.
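One way to approximate that judgment today is an explicit stopping rule layered around the model. The sketch below is an assumption-laden illustration: the attempt budget, score threshold, and scoring function are all hypothetical, but the shape (stop early on success, give up when the budget is spent) is the point.

```python
# Retrieval loop with an explicit give-up rule, so the agent abandons
# a search instead of looping forever on information that may not exist.
def search_with_give_up(query, search_fn, max_attempts=5, min_score=0.7):
    best = None
    for attempt in range(max_attempts):
        candidate, score = search_fn(query, attempt)
        if best is None or score > best[1]:
            best = (candidate, score)
        if score >= min_score:
            return candidate          # good enough: stop early
    # Budget exhausted: report failure rather than keep burning tokens.
    return None

# Toy search that never finds a good answer: the loop gives up cleanly.
def hopeless_search(query, attempt):
    return f"doc-{attempt}", 0.1

result = search_with_give_up("missing spec", hopeless_search)  # None
```

The deeper frontier the passage points at is teaching the model itself to emit the "give up" signal, rather than relying on a hard-coded budget like this one.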
Current AI tools are in "easy mode" because they operate with the user's direct authentication and permissions. The much harder, yet-to-be-solved problem is "hard mode": autonomous agents that need their own scoped access to enterprise resources without dramatically increasing security risks.
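A common framing of "hard mode" is credential down-scoping: instead of acting with the user's full authentication, the agent receives a derived, narrower, short-lived credential. The sketch below is illustrative only; the function names and scope strings are assumptions, not any real API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: mint an agent credential that is a strict subset
# of the delegating user's permissions, with a short expiry.
def mint_agent_token(user_scopes, requested_scopes, ttl_minutes=15):
    granted = set(requested_scopes) & set(user_scopes)  # never exceed the user
    if granted != set(requested_scopes):
        raise PermissionError("agent requested scopes the user lacks")
    return {
        "scopes": granted,
        "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

user_scopes = {"read:docs", "write:docs", "read:billing"}
token = mint_agent_token(user_scopes, {"read:docs"})  # narrow grant succeeds
```

The hard, unsolved parts are everything around this kernel: revocation, auditing across systems, and scoping grants to a task rather than a static permission list.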
Messy AI-generated code ("slop") can still result in a functional product, hiding imperfections from the end user. In knowledge work, a slightly "off" AI-generated contract or memo creates immediate legal or business risk, as there is no interface to abstract away the sloppiness.
Unlike humans who can prune irrelevant information, an AI agent's context window is its reality. If a past mistake is still in its context, it may see it as a valid example and repeat it. This makes intelligent context pruning a critical, unsolved challenge for agent reliability.
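In practice, pruning can start as simple heuristics over the message history. The sketch below is one such heuristic under stated assumptions: the message schema is invented for illustration, and the rule (drop all but the most recent failed tool result) is only a crude stand-in for the intelligent pruning the passage calls for.

```python
# Drop stale failures from an agent's context so the model does not
# treat past mistakes as valid examples to imitate.
def prune_context(messages, keep_last=1):
    failures = [m for m in messages if m.get("status") == "error"]
    # Keep only the most recent failure (as a warning); drop older ones.
    to_drop = {id(m) for m in failures[:-keep_last]} if failures else set()
    return [m for m in messages if id(m) not in to_drop]

history = [
    {"role": "tool", "status": "error", "content": "file not found"},
    {"role": "tool", "status": "error", "content": "bad query"},
    {"role": "tool", "status": "ok", "content": "found contract.pdf"},
]
pruned = prune_context(history)  # oldest failure removed, rest preserved
```

The unsolved part is deciding *what counts* as safely prunable: a failure may be noise to discard or a constraint the agent must remember, and a fixed rule cannot tell the two apart.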
A successful long-term founder must distinguish between routine operations and existential threats. Levie delegates the vast majority of Box's work, but immerses himself in areas like the AI transition, where a few wrong decisions could make the company obsolete.
AI coding agents thrive because developers have broad codebase access and work in a text-based medium. Enterprise knowledge work is stalled by fragmented data access, complex permissions, and multi-modal information (calls, meetings), which are significant hurdles for current AI.
