For complex, multi-turn agentic workflows, Tasklet prioritizes a model's iterative performance over standard benchmarks. Anthropic's models are chosen for a qualitative "vibe": they hold up better over long sequences of tool use, a nuance that quantitative evaluations often miss.

Related Insights

Once AI coding agents reach a high performance level, objective benchmarks matter less than a developer's subjective experience. As with a warrior choosing a sword, the best tool is often the one with the right "feel": it writes code in a preferred style and integrates seamlessly into a human workflow.

Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
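
A minimal sketch of that idea, assuming a hypothetical routing table and model IDs (AlphaSense's actual system is not public): route each task and user segment to a preferred model, with a default fallback.

```python
# Hypothetical per-task, per-segment model routing in the spirit of
# AlphaSense's approach; segment names, model IDs, and defaults are assumed.
from dataclasses import dataclass

@dataclass
class Route:
    model: str          # which LLM to invoke
    temperature: float  # stylistic knob tuned per audience

# Routing table: (task, user segment) -> model choice.
ROUTES = {
    ("summarize", "buy_side"):  Route(model="model-a", temperature=0.2),
    ("summarize", "corporate"): Route(model="model-b", temperature=0.7),
    ("extract",   "buy_side"):  Route(model="model-c", temperature=0.0),
}

def pick_route(task: str, segment: str) -> Route:
    """Fall back to a default model when no specialized route exists."""
    return ROUTES.get((task, segment), Route(model="model-default", temperature=0.5))

route = pick_route("summarize", "buy_side")
print(f"Routing to {route.model} at temperature {route.temperature}")
```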

Instead of switching between ChatGPT, Claude, and others, a multi-agent workflow lets users prompt once to receive and compare outputs from several LLMs simultaneously. This consolidates the AI user experience, saving time and eliminating the "LLM ping pong" of hunting for the best response.
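
In code, the pattern is a simple concurrent fan-out. The sketch below is a generic Python illustration; `query_model` is a hypothetical stub standing in for each provider's real SDK call.

```python
# Fan out one prompt to several LLMs concurrently and collect every reply.
# query_model is a stand-in; swap in real provider SDK calls.
import asyncio

async def query_model(model: str, prompt: str) -> tuple[str, str]:
    await asyncio.sleep(0.1)  # simulate network latency
    return model, f"[{model}] answer to: {prompt!r}"

async def fan_out(prompt: str, models: list[str]) -> dict[str, str]:
    results = await asyncio.gather(*(query_model(m, prompt) for m in models))
    return dict(results)

answers = asyncio.run(fan_out("Summarize this contract.", ["gpt", "claude", "gemini"]))
for model, reply in answers.items():
    print(model, "->", reply)
```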

Rather than relying on a single LLM, LexisNexis employs a "planning agent" that decomposes a complex legal query into sub-tasks. It then assigns each task (e.g., deep research, document drafting) to the specific LLM best suited for it, demonstrating a sophisticated, model-agnostic approach to enterprise AI.
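
The planner/dispatcher pattern might look like the sketch below. The task taxonomy, model names, and helpers are illustrative assumptions, not LexisNexis's actual implementation.

```python
# Planner/dispatcher sketch: decompose a query, then send each sub-task to
# the model assumed best for that kind of work.
from dataclasses import dataclass

@dataclass
class SubTask:
    kind: str    # e.g. "deep_research" or "draft_document"
    prompt: str

MODEL_FOR = {  # assumed mapping from sub-task type to preferred model
    "deep_research":  "research-tuned-model",
    "draft_document": "drafting-tuned-model",
}

def plan(query: str) -> list[SubTask]:
    """Stand-in for the planning agent; in practice this is itself an LLM call."""
    return [
        SubTask("deep_research",  f"Find precedents relevant to: {query}"),
        SubTask("draft_document", f"Draft a memo answering: {query}"),
    ]

def run_subtask(task: SubTask) -> str:
    model = MODEL_FOR[task.kind]
    return f"[{model}] handles: {task.prompt}"  # replace with a real API call

for task in plan("Is this non-compete enforceable in California?"):
    print(run_subtask(task))
```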

AI platforms using the same base model (e.g., Claude) can produce vastly different results. The key differentiator is the proprietary "agent" layer built on top, which gives the model concrete tools for interacting with code: reading, writing, and editing files. A superior agent leads to superior performance.
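
A stripped-down illustration of such an agent layer, assuming a hypothetical tool-call format (the tool names and dispatch shape are invented for the example): the model's leverage comes entirely from the tools wrapped around it.

```python
# The "agent layer": a base model plus concrete file tools it can call.
# Tool names and the call format are assumptions for illustration.
from pathlib import Path

def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

TOOLS = {"read_file": read_file, "write_file": write_file}

def dispatch(tool_call: dict) -> str:
    """Execute one tool call the model emitted, e.g.
    {"name": "read_file", "args": {"path": "main.py"}}."""
    return TOOLS[tool_call["name"]](**tool_call["args"])

# In a real agent loop, the model proposes calls and reads back the results:
print(dispatch({"name": "write_file",
                "args": {"path": "demo.txt", "content": "hello"}}))
print(dispatch({"name": "read_file", "args": {"path": "demo.txt"}}))
```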

Tasklet's CEO argues that while traditional workflow automation seems safer, agentic systems that let the model plan and execute will ultimately prove more robust. They can handle the unexpected errors and nuance that break rigid, pre-defined workflows; the approach is a bet on continued model improvement.

While AI labs tout performance on standardized tests like math olympiads, these metrics often don't correlate with real-world usefulness or qualitative user experience. Users may prefer a model like Anthropic's Claude for its conversational style, a factor not measured by benchmarks.

Contrary to the trend toward multi-agent systems, Tasklet finds that one powerful agent with access to all context and tools is superior for a single user's goals. Splitting tasks among specialized agents is less effective than giving one generalist agent all information, as foundation models are already experts at everything.
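
As a toy contrast, assuming a hypothetical `Agent` container (not Tasklet's code): the generalist holds the union of tools and context that specialists would otherwise split among themselves.

```python
# Toy contrast: specialists that each see one tool and partial context,
# versus one generalist holding everything. All names are assumed.
from dataclasses import dataclass

@dataclass
class Agent:
    tools: list[str]
    context: list[str]

ALL_TOOLS    = ["browser", "email", "calendar", "code_interpreter"]
FULL_CONTEXT = ["user goal", "prior messages", "account data"]

# Multi-agent split: each specialist sees only a slice of the picture.
specialists = [Agent(tools=[t], context=["user goal"]) for t in ALL_TOOLS]

# Tasklet's bet: one agent with every tool and all the context.
generalist = Agent(tools=ALL_TOOLS, context=FULL_CONTEXT)
print(f"{len(generalist.tools)} tools, {len(generalist.context)} context items")
```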

Tasklet's CEO points to pricing as the ultimate proof of an LLM's value. Even with cheaper alternatives like GPT-4o available, Anthropic's Sonnet commands a premium, indicating customers will pay more for its superior performance on multi-turn agentic tasks, a value benchmarks don't fully capture.

A key advancement in Sonnet 4.5 is its work style. Unlike past models whose "grand ambitions" led them to meander, it pragmatically breaks large projects into small, manageable chunks. This methodical approach feels more like working with a human colleague, making the model more reliable for complex tasks.