Separate AI's role. Use an AI assistant to write reliable, deterministic code for structuring data (e.g., pulling Slack messages via API). Then, apply a live AI model only for the subjective task, like categorizing message urgency. This hybrid approach creates a more robust and controllable system.
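A minimal sketch of that split, with hypothetical stand-ins: `fetch_slack_messages()` plays the role of deterministic code calling the real Slack API, and `classify_urgency()` is where a live model would be invoked (a keyword stub substitutes for the LLM call here).

```python
def fetch_slack_messages(raw_export):
    """Deterministic step: structure raw data with ordinary code, no LLM.
    (Stand-in for a real Slack API call.)"""
    return [
        {"user": m["user"], "text": m["text"].strip()}
        for m in raw_export
        if m.get("text")
    ]

def classify_urgency(message, llm=None):
    """Subjective step: the only place a live model is consulted.
    A keyword stub substitutes for the LLM call in this sketch."""
    if llm is not None:
        return llm(f"Rate the urgency of: {message['text']}")
    return "high" if "asap" in message["text"].lower() else "normal"

def triage(raw_export, llm=None):
    structured = fetch_slack_messages(raw_export)  # reliable, unit-testable
    return [(m["text"], classify_urgency(m, llm)) for m in structured]
```

Because the structuring step is plain code, it can be tested exhaustively; only the narrow classification step inherits the model's variability.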

Related Insights

A practical hack to improve AI agent reliability is to avoid built-in tool-calling functions. LLMs have more training data on writing code than on specific tool-use APIs. Prompting the agent to write and execute the code that calls a tool leverages its core strength and produces better outcomes.
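A rough sketch of that pattern, under stated assumptions: `run_llm()` is a stub for a real model call, and the tool is simply a Python function exposed in the execution namespace. Instead of a tool-calling schema, the model is asked to emit code, which is then executed.

```python
def get_weather(city):
    """Example tool exposed to the agent (illustrative only)."""
    return {"paris": "18C"}.get(city.lower(), "unknown")

def run_llm(prompt):
    """Stub: a real model, prompted with the tool's docstring,
    would return a snippet like the one hard-coded below."""
    return "result = get_weather('Paris')"

def run_agent(task):
    code = run_llm(f"Write Python that uses get_weather() to: {task}")
    namespace = {"get_weather": get_weather}
    exec(code, namespace)  # in production, run this in a sandbox
    return namespace["result"]
```

The trade-off: executing model-written code demands sandboxing, but the model is operating in the register (writing code) where it has the most training data.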

The key to creating effective and reliable AI workflows is distinguishing between tasks AI excels at (mechanical, repetitive actions) and those it struggles with (judgment, nuanced decisions). Focus on automating the mechanical parts first to build a valuable and trustworthy product.

Getting high-quality results from AI doesn't come from a single complex command. The key is "harness engineering"—designing structured interaction patterns between specialized agents, such as creating a workflow where an engineer agent hands off work to a separate QA agent for verification.
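The handoff pattern can be sketched as a small harness loop, with both agents as stubs standing in for real LLM calls; the retry-until-QA-passes structure is the point, not the stub logic.

```python
def engineer_agent(task, feedback=None):
    """Stub engineer: produces a draft, revising if QA gave feedback."""
    draft = f"def solution(): return '{task}'"
    if feedback:
        draft += "  # revised per QA feedback"
    return draft

def qa_agent(draft):
    """Stub QA gate: returns (passed, feedback)."""
    if "# revised" not in draft:
        return False, "add revision note"
    return True, None

def harness(task, max_rounds=3):
    """The harness owns the interaction pattern, not either agent."""
    feedback = None
    for _ in range(max_rounds):
        draft = engineer_agent(task, feedback)
        passed, feedback = qa_agent(draft)
        if passed:
            return draft
    raise RuntimeError("QA never passed")
```

Note that neither agent knows about the other; the structured interaction lives entirely in `harness()`, which is what makes the pattern testable and swappable.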

High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids the sunk cost of forcing AI on unsuitable tasks.

Marketers mistakenly believe implementing AI means full automation. Instead, design "human-in-the-loop" workflows. Have an AI score a lead and draft an email, but then send that draft to a human for final approval via a Slack message with "approve/reject" buttons. This balances efficiency with critical human oversight.
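A sketch of the approval step: the AI's draft is wrapped in a Slack-style message with approve/reject buttons, and nothing is sent until a human clicks approve. The payload shape follows Slack's Block Kit; actually posting the message and receiving the button click are left out as stubs (`send_fn` is a hypothetical sender callback).

```python
def build_approval_message(lead, draft_email):
    """Block Kit-style payload pairing the AI draft with human controls."""
    return {
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Lead:* {lead['name']} (score {lead['score']})\n{draft_email}"}},
            {"type": "actions",
             "elements": [
                 {"type": "button", "style": "primary", "action_id": "approve",
                  "text": {"type": "plain_text", "text": "Approve"}},
                 {"type": "button", "style": "danger", "action_id": "reject",
                  "text": {"type": "plain_text", "text": "Reject"}},
             ]},
        ]
    }

def handle_action(action_id, draft_email, send_fn):
    """Only an explicit human approval triggers the send."""
    if action_id == "approve":
        send_fn(draft_email)
        return "sent"
    return "discarded"
```

The AI does all the drudge work (scoring, drafting), but the irreversible action, sending the email, stays behind a human decision.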

The most effective use of AI isn't full automation, but "hybrid intelligence." This framework ensures humans always remain central to the decision-making process, with AI serving in a complementary, supporting role to augment human intuition and strategy.

Instead of treating a complex AI system like an LLM as a single black box, build it in a componentized way by separating functions like retrieval, analysis, and output. This allows for isolated testing of each part, limiting the surface area for bias and simplifying debugging.
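A minimal sketch of the componentized shape: retrieval, analysis, and output are separate functions with plain inputs and outputs, so each can be tested in isolation. `retrieve()` uses keyword matching as a stand-in for a real retriever, and `analyze()` is a stub for an LLM call.

```python
def retrieve(query, corpus):
    """Retrieval component: deterministic keyword match
    (swap in a vector store without touching the other parts)."""
    return [doc for doc in corpus if query.lower() in doc.lower()]

def analyze(docs):
    """Analysis component: stub for an LLM summarization call."""
    return f"{len(docs)} relevant document(s) found"

def render(summary):
    """Output component: formatting only, no model logic."""
    return {"summary": summary}

def pipeline(query, corpus):
    return render(analyze(retrieve(query, corpus)))
```

If the end-to-end result is wrong, each seam can be inspected independently, e.g. asserting on `retrieve()` alone narrows a bug to one component instead of the whole black box.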

Just as you use different social media apps for different purposes, you should use various specialized AI tools for specific tasks. Relying on a single tool like ChatGPT for everything results in watered-down solutions. A better approach is to build a toolkit, matching the right AI to the right problem.

For complex, one-time tasks like a code migration, don't just ask AI to write a script. Instead, have it build a disposable tool—a "jig" or "command center"—that visualizes the process and guides you through each step. This provides more control and understanding than a black-box script.
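What such a jig might look like, as a sketch with illustrative step names: rather than running everything at once, a tiny command center lists each migration step, shows its status, and advances one step at a time so results can be inspected between steps.

```python
class MigrationJig:
    """Disposable command center for a one-time migration (illustrative)."""

    def __init__(self, steps):
        # steps: list of (name, zero-arg callable) pairs
        self.steps = [{"name": n, "run": fn, "status": "pending"}
                      for n, fn in steps]

    def dashboard(self):
        """Visualize where the migration stands."""
        return [f"[{s['status']:>7}] {s['name']}" for s in self.steps]

    def run_next(self):
        """Advance exactly one step, so each result can be reviewed."""
        for s in self.steps:
            if s["status"] == "pending":
                s["run"]()
                s["status"] = "done"
                return s["name"]
        return None  # nothing left to do
```

Unlike a black-box script, the jig makes the migration's state visible at every point, and because it's disposable, it can be thrown away once the migration lands.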

When developing AI capabilities, focus on creating agents that each perform one task exceptionally well, like call analysis or objection identification. These specialized agents can then be connected in a platform like Microsoft's Copilot Studio to create powerful, automated workflows.