The usefulness of AI agents is severely hampered because most web services lack robust, accessible APIs. This forces agents to rely on unstable methods like web scraping, which are easily blocked, limiting their reliability and potential integration into complex workflows.
As AI makes it trivial to scrape data and bypass native UIs, companies will retaliate by shutting down open APIs and creating walled gardens to protect their business models. This mirrors the early web's shift away from open standards like RSS once monetization was threatened.
The LLM itself only creates the opportunity for agentic behavior. The actual business value is unlocked when an agent is given runtime access to high-value data and tools, allowing it to perform actions and complete tasks. Without this runtime context, agents are merely sophisticated Q&A bots answering from stale training data.
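A minimal sketch of that distinction, with hypothetical tool and data names: an agent is an LLM plus a registry of tools it can invoke at runtime, and without the registry every question falls back to the model's frozen knowledge.

```python
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a function so the agent can call it at runtime."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def fetch_open_invoices(customer_id: str) -> str:
    # Hypothetical: in practice this would query a live billing system.
    return f"3 open invoices for {customer_id}, oldest 42 days overdue"

def run_step(model_decision: dict) -> str:
    """Dispatch one model-chosen action against live data."""
    name, args = model_decision["tool"], model_decision["args"]
    return TOOLS[name](**args)

# The model, shown the tool list, might emit a decision like this:
print(run_step({"tool": "fetch_open_invoices", "args": {"customer_id": "C-1017"}}))
```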
By running locally on a user's machine, AI agents can interact with services like Gmail or WhatsApp without needing official, often restrictive, API access. This approach works around the corporate "red tape" that stifles innovation and effectively liberates user data from platform control.
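As a rough sketch of how this works, assuming Playwright is installed (`pip install playwright`, then `playwright install chromium`): a locally running agent can reuse the user's own logged-in browser profile, so no official API is involved. The profile path is a placeholder, and a real Gmail automation would locate elements by accessibility role rather than anything shown here.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # A persistent context reuses the user's existing cookies and sessions,
    # so the agent acts as the already-logged-in user.
    ctx = p.chromium.launch_persistent_context(
        user_data_dir="/path/to/local/profile",  # placeholder path
        headless=False,
    )
    page = ctx.new_page()
    page.goto("https://mail.google.com")
    # From here the agent reads and acts on the user's behalf.
    print(page.title())
    ctx.close()
```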
Unlike coding, where context is centralized (IDE, repo) and output is testable, the context for general knowledge work is scattered across apps. AI struggles to synthesize this fragmented context, and it's hard to objectively verify the quality of its output (e.g., a strategy memo), limiting agent effectiveness.
Companies struggle to get value from AI because their data is fragmented across different systems (ERP, CRM, finance) and suffers from poor data integrity. The primary challenge isn't the AI models themselves, but integrating these disparate data sets into a unified platform that agents can act upon.
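A minimal sketch with invented record shapes: before an agent can safely act, the same customer has to be reconciled across systems, and integrity problems should be surfaced rather than silently merged.

```python
crm = {"C-1017": {"name": "Acme GmbH", "owner": "dana@co"}}
erp = {"C-1017": {"name": "ACME Gmbh", "open_orders": 2}}
finance = {"C-1017": {"balance_due": 1240.0}}

def unified_view(customer_id: str) -> dict:
    """Merge per-system records and flag mismatches instead of hiding them."""
    view = {"id": customer_id, "issues": []}
    for source, table in (("crm", crm), ("erp", erp), ("finance", finance)):
        record = table.get(customer_id)
        if record is None:
            view["issues"].append(f"missing in {source}")
            continue
        view[source] = record
    if view.get("crm", {}).get("name") != view.get("erp", {}).get("name"):
        view["issues"].append("name mismatch between CRM and ERP")
    return view

print(unified_view("C-1017"))
```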
Tasklet's experience shows AI agents can be more effective calling HTTP APIs directly, using scraped documentation, than going through the specialized Model Context Protocol (MCP). This "direct API" approach is so reliable that users prefer it over official MCP integrations, challenging the assumption that structured protocols are superior.
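A sketch of what such a "direct API" tool might look like (the shape is an assumption, not Tasklet's actual implementation): the agent gets one generic HTTP tool and composes requests itself from documentation it has read.

```python
import requests

def http_request(method: str, url: str, params: dict | None = None,
                 json_body: dict | None = None, headers: dict | None = None) -> str:
    """One tool covering any documented endpoint the agent discovers."""
    resp = requests.request(method, url, params=params, json=json_body,
                            headers=headers, timeout=30)
    # Return status plus a truncated body so the response fits in context.
    return f"{resp.status_code} {resp.text[:2000]}"

# The model fills in the arguments after reading the service's docs, e.g.:
print(http_request("GET", "https://api.github.com/repos/python/cpython",
                   headers={"Accept": "application/vnd.github+json"}))
```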
For years, businesses have focused on protecting their sites from malicious bots. That same defensive architecture now blocks beneficial AI agents acting on behalf of consumers. Companies must rethink their technical infrastructure to distinguish these new 'good bots' from hostile traffic and welcome them into agentic commerce.
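One plausible first step, sketched below with illustrative agent names and no real verification: route traffic that declares itself as an agent to its own policy instead of blanket-blocking all automation. A production system would need cryptographic attestation, since a User-Agent header is trivially spoofed.

```python
from flask import Flask, request, abort

app = Flask(__name__)

DECLARED_AGENTS = {"ExampleAgent/1.0", "ShopBot/2.3"}  # illustrative names

@app.before_request
def triage_bots():
    ua = request.headers.get("User-Agent", "")
    if ua in DECLARED_AGENTS:
        request.environ["bot.policy"] = "agent"  # e.g. own rate limit, no CAPTCHA
    elif "bot" in ua.lower():
        abort(403)  # unknown automation keeps getting blocked
    # Otherwise: treat as a human browser.

@app.route("/checkout")
def checkout():
    policy = request.environ.get("bot.policy", "human")
    return f"checkout flow for: {policy}"
```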
Exposing a full API via MCP overwhelms an LLM's context window and reasoning. This forces developers to abandon exposing their entire service and instead manually craft a few highly specific tools, limiting the AI's capabilities and defeating the "do anything" vision of agents.
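This is what the workaround tends to look like in practice. A minimal sketch using the official MCP Python SDK (`pip install mcp`), with a hypothetical tool body: one curated, task-level tool instead of a mirror of every endpoint.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

@mcp.tool()
def summarize_overdue_invoices(customer_id: str) -> str:
    """Return a short summary of a customer's overdue invoices."""
    # Hypothetical backend call; one narrow tool instead of the full API.
    return f"{customer_id}: 2 invoices overdue, total $1,240"

if __name__ == "__main__":
    mcp.run()  # serves this single tool over stdio by default
```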
Research shows employees are rapidly adopting AI agents. The primary risk isn't a lack of adoption but that these agents are handicapped by fragmented, incomplete, or siloed data. To succeed, companies must first focus on creating structured, centralized knowledge bases for AI to leverage effectively.
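A minimal sketch using only the standard library (and assuming your SQLite build includes FTS5, as most do): consolidating scattered notes into one full-text-searchable store is the kind of structured, centralized knowledge base described above. The example rows are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE kb USING fts5(source, content)")
db.executemany(
    "INSERT INTO kb VALUES (?, ?)",
    [
        ("wiki", "Refund policy: approvals over $500 need a manager."),
        ("email", "Q3 refund volume rose 12% quarter over quarter."),
        ("crm", "Acme GmbH flagged two refund disputes in September."),
    ],
)

# An agent (or the retrieval layer in front of it) queries one place,
# not five separate apps:
for source, content in db.execute(
        "SELECT source, content FROM kb WHERE kb MATCH ?", ("refund",)):
    print(source, "->", content)
```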
Far from being overhyped, AI agent browsers are actually underrated for a small but growing set of complex tasks like data scraping, research consolidation, and form automation. For these use cases, they deliver immense time savings.