Intercom monitors usage of its internal AI skills with Honeycomb and analyzes session transcripts stored in S3. This product-centric approach yields insights that improve tools, surface user struggles, and provide personalized feedback to engineers.
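The pattern described above has two halves: emit a structured usage event per skill invocation, then aggregate events to see which skills are actually used. A minimal stdlib sketch of that shape follows; the event fields and skill names are illustrative assumptions, not Intercom's actual Honeycomb schema.

```python
from collections import Counter
from datetime import datetime, timezone

def skill_event(skill: str, user: str, duration_ms: int) -> dict:
    """Build one structured event, Honeycomb-style: wide, flat, queryable fields."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "skill": skill,
        "user": user,
        "duration_ms": duration_ms,
    }

def usage_by_skill(events: list[dict]) -> Counter:
    """Aggregate events into per-skill usage counts (what a dashboard would chart)."""
    return Counter(e["skill"] for e in events)

events = [skill_event("summarize", "eng-1", 1200),
          skill_event("summarize", "eng-2", 900),
          skill_event("refactor", "eng-1", 3100)]
print(usage_by_skill(events))  # Counter({'summarize': 2, 'refactor': 1})
```

In practice each event would be sent to an observability backend rather than kept in a list, and transcripts in S3 would be joined against these events by session ID.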

Related Insights

Intercom's CTO set a goal to 2x R&D throughput, using pull requests as a simple, albeit crude, metric. In a high-trust environment, this focused the team on adopting AI tools to increase output, leading to measurable success.

In an AI-driven product org, traditional research methods like surveys are becoming obsolete. The new model involves automatically synthesizing diverse signals—product telemetry, customer service insights, user sentiment—to get near real-time, specific direction on the most important problems to solve.

To move beyond mandates, Salesforce provides leaders with a dashboard showing exactly which employees are using approved AI tools and how often. This data-driven approach allows managers to pinpoint adoption gaps and diagnose the root cause—such as skill versus will—for targeted intervention.
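The core of such a dashboard is a simple aggregation: per-employee usage counts against a threshold that flags adoption gaps. A hedged sketch, with hypothetical names and a placeholder threshold; note the data identifies *who* lags, while diagnosing skill versus will still requires a conversation.

```python
# Hypothetical weekly event counts per employee for approved AI tools.
usage = {"ana": 41, "ben": 0, "chen": 3, "dee": 18}

def adoption_gaps(usage: dict[str, int], threshold: int = 5) -> list[str]:
    """Flag employees at or below the usage threshold for targeted follow-up."""
    return sorted(e for e, n in usage.items() if n <= threshold)

print(adoption_gaps(usage))  # ['ben', 'chen']
```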

Unlike traditional software where UX can be pre-assessed, AI products are inherently unpredictable. The CEO of Braintrust argues that this makes observability critical. Companies must monitor real-world user interactions to capture failures and successes, creating a data flywheel for rapid improvement.
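The flywheel starts with capturing each real-world interaction with an outcome label, then computing failure rates per feature to decide where to improve. A minimal sketch of that loop, with invented feature names; a production system would store traces, not tuples.

```python
from collections import defaultdict

def failure_rates(interactions: list[tuple[str, bool]]) -> dict[str, float]:
    """interactions: (feature, succeeded) pairs captured from production use."""
    totals: dict[str, int] = defaultdict(int)
    fails: dict[str, int] = defaultdict(int)
    for feature, ok in interactions:
        totals[feature] += 1
        if not ok:
            fails[feature] += 1
    # The features with the highest rates are where the flywheel turns next.
    return {f: fails[f] / totals[f] for f in totals}

logged = [("draft_reply", True), ("draft_reply", False),
          ("summarize", True), ("summarize", True)]
print(failure_rates(logged))  # {'draft_reply': 0.5, 'summarize': 0.0}
```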

Move beyond one-on-one interviews for prototype feedback. By prompting an AI tool to integrate analytics platforms like PostHog, you can gather quantitative data at scale. This allows you to track usage, view session replays, and analyze heatmaps, providing robust validation before engineering gets involved.

A key metric for AI coding agent performance is real-time sentiment analysis of user prompts. By measuring whether users say 'fantastic job' or 'this is not what I wanted,' teams get an immediate signal of the agent's comprehension and effectiveness, which is more telling than lagging indicators like bug counts.
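The simplest version of this signal is a lexicon check over the user's next prompt. The sketch below is a deliberately crude heuristic with invented phrase lists; a real system would use a sentiment model, but the metric's shape is the same.

```python
# Tiny illustrative lexicons; a production system would use a sentiment model.
POSITIVE = {"fantastic job", "perfect", "great", "thanks"}
NEGATIVE = {"this is not what i wanted", "wrong", "broken", "start over"}

def prompt_sentiment(prompt: str) -> str:
    """Classify a follow-up prompt as an immediate signal of agent effectiveness."""
    text = prompt.lower()
    pos = sum(p in text for p in POSITIVE)
    neg = sum(n in text for n in NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

print(prompt_sentiment("Fantastic job, ship it"))     # positive
print(prompt_sentiment("This is not what I wanted"))  # negative
```

Aggregated over sessions, the positive/negative ratio becomes a leading indicator, available long before bug counts move.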

A custom internal AI tool can act as a command center by integrating with HubSpot, Slack, and call recordings. It creates a unified customer view, automatically analyzing sentiment to predict renewal likelihood and proactively suggesting specific expansion opportunities.
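At its core, such a command center joins records from several systems under one account key and derives a score from the combined signals. A toy sketch under that assumption; the field names, keyword matching, and scoring rule are all illustrative, not a real renewal model.

```python
# Illustrative record shapes; real fields would come from HubSpot, Slack,
# and call-recording transcripts, joined by account.
crm = {"acme": {"arr": 50000, "renewal_in_days": 60}}
notes = {"acme": ["pricing concern raised", "loves the new reporting"]}

def unified_view(account: str) -> dict:
    """Merge CRM data with note-derived sentiment into one customer record."""
    negative = sum(("concern" in n or "churn" in n) for n in notes.get(account, []))
    positive = sum(("loves" in n or "expand" in n) for n in notes.get(account, []))
    return {**crm[account],
            "sentiment": positive - negative,
            "renewal_risk": negative > positive}

print(unified_view("acme"))
# {'arr': 50000, 'renewal_in_days': 60, 'sentiment': 0, 'renewal_risk': False}
```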

Designers at OpenAI don't have to wait for data scientists. They use an internal AI agent to ask questions about user behavior and query usage data, dramatically speeding up the design process by reducing cross-functional dependencies.

Newman's most critical infrastructure for AI-assisted development is a universal logging service for all his apps (front-end, back-end, mobile). When a bug appears, he can tell an AI agent to "debug this," and it can analyze the comprehensive logs to find the root cause without guesswork.
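The enabling idea is one log schema shared by every tier, so a single request can be traced end to end. A minimal stdlib sketch of that schema as JSON lines; the field names are assumptions, not Newman's actual service.

```python
import json
import time

def log_event(service: str, level: str, message: str, **ctx) -> str:
    """Emit one JSON line in a schema shared by front-end, back-end, and mobile."""
    return json.dumps({"ts": time.time(), "service": service,
                       "level": level, "message": message, **ctx})

def trace_request(lines: list[str], request_id: str) -> list[dict]:
    """Everything that happened to one request, across every tier, in order."""
    return [r for r in map(json.loads, lines)
            if r.get("request_id") == request_id]

logs = [log_event("web", "INFO", "checkout clicked", request_id="r1"),
        log_event("api", "ERROR", "payment timeout", request_id="r1"),
        log_event("web", "INFO", "page view", request_id="r2")]
print([r["message"] for r in trace_request(logs, "r1")])
# ['checkout clicked', 'payment timeout']
```

With logs in one queryable shape, "debug this" gives an AI agent the full cross-service story rather than forcing it to guess which tier failed.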

Reviewing user interaction data is the highest-ROI activity for improving an AI product. Instead of relying solely on third-party observability tools, high-performing teams build simple, custom internal applications tailored to their specific data and workflow, removing the friction from reviewing and annotating traces.

Instrument Internal AI Tools with Telemetry to Treat Your Engineering Org Like a Product | RiffOn