Anthropic's new "Dispatch" feature provides mobile control of desktop agent sessions. For many users, it covers most OpenClaw use cases with better safety and stability, illustrating a trend of incumbents integrating agentic features into existing products as a more controlled alternative to powerful open-source tools.
The "Agent Skills" format was created by Anthropic to solve a key performance bottleneck. As capabilities were added, system prompts became too large, degrading speed and reliability. Skills use "progressive disclosure," loading only relevant information as needed, which preserves the context window for the task at hand.
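The loading pattern described above can be sketched in a few lines. This is an illustrative model of progressive disclosure, not Anthropic's implementation: the `Skill` fields, file paths, and helper names are hypothetical. Only each skill's one-line metadata lives in the base prompt; the full instructions enter the context only when a skill is actually invoked.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # short metadata: always present in the base prompt
    body_path: str     # full instructions: loaded only on demand

def build_system_prompt(skills):
    """Compose a base prompt containing only skill names and descriptions."""
    lines = ["Available skills:"]
    for s in skills:
        lines.append(f"- {s.name}: {s.description}")
    return "\n".join(lines)

def load_skill_body(skill, read_file):
    """Pull a skill's full instructions into context when it is selected."""
    return read_file(skill.body_path)

# Hypothetical skill registry for illustration.
skills = [
    Skill("pdf-report", "Generate formatted PDF reports", "skills/pdf/SKILL.md"),
    Skill("sql-review", "Review SQL migrations for safety", "skills/sql/SKILL.md"),
]
prompt = build_system_prompt(skills)
```

The base prompt stays small no matter how many skills are registered, which is the point of the design: capabilities scale without consuming the context window up front.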
Anthropic's Claude Code team reports that AI agent skills designed for "verification"—teaching an agent to test and validate its own output—provide an extremely high return on investment. This suggests that building reliability and correctness into AI workflows is at least as critical as the initial generation capability, if not more so.
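The generate-then-verify pattern can be sketched as a simple retry loop. This is a toy model under assumed interfaces, not Anthropic's workflow: `generate` and `verify` stand in for an agent's generation step and a verification skill, and the example task (produce a list summing to 10) is invented for illustration.

```python
def generate_then_verify(generate, verify, max_attempts=3):
    """Run a generation step, then a verification check; retry with feedback.

    `generate` takes the previous failure feedback (or None) and returns
    an output; `verify` returns (ok, feedback). Hypothetical interfaces.
    """
    feedback = None
    for _ in range(max_attempts):
        output = generate(feedback)
        ok, feedback = verify(output)
        if ok:
            return output
    raise RuntimeError(f"verification still failing: {feedback}")

# Toy task: "generate" a list of numbers that must sum to 10.
def generate(feedback):
    return [1, 2, 3] if feedback is None else [1, 2, 3, 4]

def verify(output):
    total = sum(output)
    return (total == 10, None if total == 10 else f"sum is {total}, expected 10")

result = generate_then_verify(generate, verify)
# First attempt fails verification; the feedback-driven retry passes.
```

The structure makes the reported economics concrete: the verifier is cheap to write once, but it catches every faulty generation thereafter.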
The concept of "Agent Skills"—reusable, context-rich capabilities for AI—is migrating from developer-focused platforms like Claude Code to mainstream applications like Notion. This shows a broader industry trend of shifting from single-use prompts to creating persistent, reliable, and user-defined AI functions for all types of users.
According to Anthropic's Claude Code team, the most valuable part of an AI agent's "Skill" is often a "Gotcha Section" that explicitly details common failure points and edge cases. The practice encodes hard-won experience to prevent repeated mistakes, and proves more valuable than simply outlining the correct process.
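A skill file with such a section might look like the sketch below. The skill name, procedure, and gotchas are all invented for illustration; only the overall idea—pairing the happy-path procedure with a list of known failure modes—comes from the report above.

```markdown
---
name: db-migration
description: Apply database schema migrations safely
---

# Database migration skill

## Procedure
1. Generate the migration SQL.
2. Run it against a staging copy first.
3. Verify row counts before and after.

## Gotchas
- `ALTER TABLE` on large tables locks writes; batch the change instead.
- Rollback scripts must be tested, not merely generated.
- Timestamps from the legacy system are local time, not UTC.
```

The "Gotchas" list is what distinguishes this from ordinary documentation: each bullet records a mistake the agent would otherwise plausibly repeat.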
Despite viral consumer adoption, China's government is warning state-owned enterprises against using the open-source agent OpenClaw. This highlights a growing tension between the country's push for rapid AI innovation and the state's deep-seated concerns over the data security, privacy, and control risks posed by open, unaudited models.
