Using plain-English rule files in tools like Cursor, data teams can create reusable AI agents that automate the entire A/B test write-up process. The agent can fetch data from an experimentation platform, pull context from Notion, analyze results, and generate a standardized report automatically.
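As a hypothetical sketch, such a rule file might read like the following; the file name, paths, and report template are illustrative assumptions, not the author's actual setup:

```
# ab-test-writeup rules (illustrative Cursor rule file)
When asked to write up an A/B test:
1. Fetch the experiment's results export from the experimentation platform.
2. Pull the test's hypothesis and launch context from the linked Notion page.
3. Check for sample ratio mismatch and flag any guardrail metric regressions.
4. Write the report using the team's standard template: summary, methodology,
   results table, recommendation, and open questions.
```

Because the rules are plain English, anyone on the data team can review and extend them without touching code.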
The most significant productivity gains come from applying AI to every stage of development, including research, planning, product marketing, and status updates. Limiting AI to just code generation misses the larger opportunity to automate the entire engineering process.
The next frontier for AI in product is automating time-consuming but cognitively simple tasks. An AI agent can connect CRM data, customer feedback, and product specs to instantly generate a qualified list of beta testers, compressing a multi-week process into days.
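The core of that workflow is a join-and-rank step. A minimal sketch in Python, assuming illustrative field names (`plan`, `nps`, `feature_requests`) rather than any real CRM schema:

```python
# Hypothetical sketch: rank beta-tester candidates by combining CRM data
# with customer feedback. All field names are illustrative assumptions.

def score_candidate(crm_record, feedback, target_feature):
    """Score how well a customer fits a beta for target_feature."""
    score = 0
    # Customers who explicitly requested the feature are the strongest fits.
    if target_feature in feedback.get("feature_requests", []):
        score += 3
    # Promoters (NPS >= 9) tend to tolerate rough edges in a beta.
    if feedback.get("nps", 0) >= 9:
        score += 2
    # Paid-plan customers typically exercise the product more heavily.
    if crm_record.get("plan") in ("pro", "enterprise"):
        score += 1
    return score

def shortlist(customers, target_feature, top_n=20):
    """Return the top_n customers ranked by beta-fit score."""
    ranked = sorted(
        customers,
        key=lambda c: score_candidate(c["crm"], c["feedback"], target_feature),
        reverse=True,
    )
    return ranked[:top_n]
```

In practice the agent, not a human, would write and run this kind of join, with product specs supplying the `target_feature` criteria.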
Instead of codebases becoming harder to manage over time, use an AI agent to create a "compounding engineering" system. Codify learnings from each feature build—successful plans, bug fixes, tests—back into the agent's prompts and tools, making future development faster and easier.
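The "codify learnings" step can be as simple as appending dated lessons to a rules file the agent loads on every run. A minimal sketch, assuming an illustrative file path and entry format:

```python
from datetime import date
from pathlib import Path

# Illustrative path; any prompt or rules file the agent reads works.
RULES_FILE = Path("agent_rules.md")

def codify_learning(category: str, lesson: str) -> None:
    """Append a dated lesson to the rules file the agent reads on every run."""
    entry = f"- [{date.today().isoformat()}] ({category}) {lesson}\n"
    with RULES_FILE.open("a") as f:
        f.write(entry)

# After shipping a feature, record what worked so the next build starts smarter.
codify_learning("testing", "Add a regression test before fixing a reported bug.")
```

The point is the loop, not the mechanism: every feature build leaves the agent's context richer than it found it.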
Use Claude's "Artifacts" feature to generate interactive, LLM-powered application prototypes directly from a prompt. This allows product managers to test the feel and flow of a conversational AI, including latency and response length, without needing API keys or engineering support, bridging the gap between a static mock and a coded MVP.
Instead of writing Python or TypeScript to prototype an AI agent, PM Dennis Yang writes a "super MVP" using plain English instructions directly in Cursor. He leverages Cursor's built-in agentic capabilities, model switching, and tool-calling to test the agent's logic and flow without writing a single line of code.
To enable AI tools like Cursor to write accurate SQL queries with minimal prompting, data teams must build a "semantic layer." This file, often a structured JSON, acts as a translation layer defining business logic, tables, and metrics, dramatically improving the AI's zero-shot query generation ability.
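A minimal sketch of what such a semantic-layer file might contain; the metric, table, and field names below are illustrative assumptions, and real implementations vary widely:

```json
{
  "metrics": {
    "weekly_active_users": {
      "description": "Distinct users with a session in the trailing 7 days",
      "sql": "COUNT(DISTINCT user_id)",
      "table": "analytics.sessions",
      "filters": ["event_date >= CURRENT_DATE - 7"]
    }
  },
  "tables": {
    "analytics.sessions": {
      "description": "One row per user session",
      "join_keys": { "analytics.users": "user_id" }
    }
  },
  "business_logic": [
    "'Active' always means a session lasting at least 10 seconds.",
    "Revenue figures are in USD and exclude refunds."
  ]
}
```

Loaded into the AI's context, a file like this answers the questions the model would otherwise guess at: which table holds a metric, how it is computed, and what the business jargon actually means.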
Go beyond just generating documents. PM Dennis Yang uses an AI agent in Cursor to read comments on a Confluence PRD, categorize them by priority, draft responses, and post them on his behalf. This automates the tedious but critical process of acknowledging and incorporating feedback.
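The triage step at the heart of that workflow can be sketched in a few lines; fetching and posting via the Confluence API are omitted here, and the priority keywords are illustrative assumptions, not the author's actual rules:

```python
# Sketch of the comment-triage step only. Keyword lists are illustrative;
# in the real workflow an LLM, not keyword matching, does the categorizing.

BLOCKER_WORDS = ("blocker", "must", "launch risk", "legal")

def categorize(comment: str) -> str:
    """Bucket a PRD comment by how urgently it needs a response."""
    lowered = comment.lower()
    if any(word in lowered for word in BLOCKER_WORDS):
        return "high"      # objections that could stall the launch
    if "?" in comment:
        return "medium"    # questions that deserve a drafted reply
    return "low"           # acknowledgements, style nits, FYIs
```

With comments bucketed, the agent can draft responses for the high and medium piles first and leave the rest for a batch acknowledgement.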
Instead of designing surveys manually, provide an AI with a list of hypotheses and context documents. It can generate a complete questionnaire, a platform-specific deployment file (e.g., for Qualtrics), and an analysis plan, compressing user research setup from days to minutes.
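The input side of that workflow is just prompt assembly. A hypothetical sketch, where the output list and wording are assumptions rather than the author's actual prompt:

```python
# Illustrative sketch: assemble a survey-generation prompt from hypotheses
# and context docs. The requested deliverables mirror the workflow above.

def build_survey_prompt(hypotheses, context_docs):
    """Combine hypotheses and context into one survey-generation prompt."""
    hypothesis_block = "\n".join(f"- {h}" for h in hypotheses)
    context_block = "\n\n".join(context_docs)
    return (
        "Using the context and hypotheses below, produce:\n"
        "1. A complete questionnaire, one question block per hypothesis.\n"
        "2. A survey file formatted for import into Qualtrics.\n"
        "3. An analysis plan mapping each question to its hypothesis.\n\n"
        f"Hypotheses:\n{hypothesis_block}\n\n"
        f"Context:\n{context_block}"
    )
```

Everything downstream (question wording, skip logic, the analysis plan) comes back in one generation, ready for a human pass.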
The prompts for your "LLM as a judge" evals function as a new form of PRD. They explicitly define the desired behavior, edge cases, and quality standards for your AI agent. Unlike static PRDs, these are living documents: derived from real user data, they constantly and automatically test whether the product meets its requirements.
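A minimal sketch of what such a judge prompt looks like; the rubric items, template, and parsing convention are illustrative assumptions, with the model call itself left out:

```python
# Sketch of an "LLM as a judge" eval whose prompt doubles as a PRD.
# The numbered requirements are the product spec, stated as pass/fail checks.

JUDGE_PROMPT = """You are grading a support agent's reply.

Requirements (the de facto PRD):
1. The reply must answer the user's question directly.
2. The reply must not promise a refund without linking the refund policy.
3. The reply must stay under 120 words.

Conversation:
{conversation}

Agent reply:
{reply}

Answer PASS or FAIL, then give one sentence of justification."""

def build_judge_prompt(conversation: str, reply: str) -> str:
    """Fill the rubric template with a transcript drawn from real user data."""
    return JUDGE_PROMPT.format(conversation=conversation, reply=reply)

def passed(judge_output: str) -> bool:
    """Parse the judge model's verdict from the first word of its output."""
    return judge_output.strip().upper().startswith("PASS")
```

Editing requirement 2 in the prompt changes the product's enforced behavior the same way editing a PRD changes the spec, except here every new transcript re-tests compliance automatically.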
Before diving into SQL, analysts can use enterprise AI search (like Notion AI) to query internal documents, PRDs, and Slack messages. This rapidly generates context and hypotheses about metric changes, replacing hours of manual digging and leading to better, faster analysis.