To enable AI tools like Cursor to write accurate SQL queries with minimal prompting, data teams must build a "semantic layer": a file, often structured JSON, that defines business logic, tables, and metrics. This dramatically improves the AI's zero-shot query generation.
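A minimal sketch of what such a semantic-layer file could contain (every table, column, and metric name here is hypothetical, not from the source):

```json
{
  "tables": {
    "orders": {
      "description": "One row per completed customer order",
      "columns": {
        "order_id": "Primary key",
        "user_id": "Foreign key to users.user_id",
        "amount_usd": "Order total in US dollars",
        "created_at": "UTC timestamp of purchase"
      }
    }
  },
  "metrics": {
    "aov": {
      "description": "Average order value",
      "sql": "AVG(orders.amount_usd)"
    }
  },
  "business_rules": [
    "Exclude test accounts: users.email NOT LIKE '%@internal.test'"
  ]
}
```

Prepended to the model's context, a file like this lets a question such as "what was AOV last week?" resolve to the right table, column, and formula without per-query prompting.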
Don't view AI as just a feature set. Instead, treat "intelligence" as a fundamental new building block for software, on par with established primitives like databases or APIs. When conceptualizing any new product, assume this intelligence layer is a non-negotiable part of the technology stack to solve user problems effectively.
Traditional API integration requires strict adherence to a predefined contract. The new AI paradigm flips this: developers can describe their desired data format in a manifest file, and the AI handles the translation, dramatically lowering integration barriers and complexity.
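To illustrate the idea, a hypothetical manifest might declare only the shape the developer wants, leaving the mapping from the provider's actual response to the AI (the endpoint and field names below are invented):

```json
{
  "source": "https://api.example.com/v2/customers",
  "desired_format": {
    "id": "string",
    "full_name": "string, first and last name joined",
    "signup_date": "ISO 8601 date",
    "lifetime_value_usd": "number"
  }
}
```

The integration code never hard-codes the provider's field names; whatever fields the API returns, the model maps them onto `desired_format`, so a provider-side schema change becomes a translation problem rather than a breaking change.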
The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.
The early focus on crafting the perfect prompt is obsolete. Sophisticated AI interaction is now about 'context engineering': architecting the entire environment by providing models with the right tools, data, and retrieval mechanisms to guide their reasoning process effectively.
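The shift is easiest to see in code. Below is a minimal sketch of context engineering: rather than polishing one prompt string, the program assembles the model's working context from tool descriptions, retrieved reference data, and the user's question. All names and the naive keyword retrieval are illustrative, not any particular product's API.

```python
def build_context(question, documents, tools):
    """Compose a model context from tools, retrieved docs, and the question."""
    # Naive retrieval stand-in: keep documents sharing a word with the question.
    terms = set(question.lower().split())
    relevant = [d for d in documents if terms & set(d.lower().split())]

    context = ["You are a data assistant. Use the tools when needed."]
    # Advertise available tools so the model can decide to call them.
    context += [f"Tool available: {name} -- {desc}" for name, desc in tools.items()]
    # Inject retrieved business definitions as grounding material.
    context += [f"Reference: {d}" for d in relevant]
    context.append(f"Question: {question}")
    return "\n".join(context)

prompt = build_context(
    "What was churn last month?",
    documents=["churn is defined as cancelled subscriptions / active subscriptions"],
    tools={"run_sql": "execute a read-only SQL query"},
)
```

In a real system the keyword filter would be replaced by embedding-based retrieval and the tool list by actual function schemas, but the architecture is the same: the engineering effort goes into what surrounds the question, not the question itself.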
Instead of writing Python or TypeScript to prototype an AI agent, PM Dennis Yang writes a "super MVP" using plain English instructions directly in Cursor. He leverages Cursor's built-in agentic capabilities, model switching, and tool-calling to test the agent's logic and flow without writing a single line of code.
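Such a plain-English "super MVP" might look something like the following (an invented example of the pattern, not Yang's actual file):

```markdown
# Agent: support-ticket triager (super MVP)

When a new support ticket arrives:
1. Read the ticket and classify it as "bug", "billing", or "question".
2. If "billing", look up the customer's plan with the billing tool.
3. Draft a reply in a friendly tone and show it to me for approval.
4. Never send anything without my confirmation.
```

Because Cursor's agent can interpret these steps and exercise its tool-calling, the logic and flow can be tested and revised before any Python or TypeScript exists.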
The future of data analysis is conversational interfaces, but generic tools struggle. An AI must deeply understand the data's structure to be effective. Vertical-specific platforms (e.g., for marketing) have a huge advantage because they have pre-built connectors and an inherent understanding of the data model.
Using plain-English rule files in tools like Cursor, data teams can create reusable AI agents that automate the entire A/B test write-up process. The agent can fetch data from an experimentation platform, pull context from Notion, analyze results, and generate a standardized report automatically.
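A hypothetical rule file for that workflow, with the steps the source describes (file name, tool names, and template sections invented for illustration):

```markdown
# Rule: A/B test write-up

When asked to write up an experiment:
1. Fetch the experiment's results from the experimentation platform.
2. Pull the experiment brief and success metrics from Notion.
3. Check statistical significance and flag any guardrail metric regressions.
4. Generate the report from the standard template: summary, setup,
   results table, decision, next steps.
```

Because the rule file is reusable, any analyst on the team can invoke the same standardized write-up process rather than rebuilding it per experiment.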
Instead of building shared libraries, teams can grant an AI access to different codebases. The AI acts as a translator, allowing developers to understand and reimplement logic from one tech stack into a completely different one, fostering reuse without the overhead of formal abstraction.
AI development has evolved to the point where models can be directed in ordinary human language. Instead of complex prompt engineering or fine-tuning, developers can provide instructions, documentation, and context in plain English to guide the AI's behavior, democratizing access to sophisticated results.
Before diving into SQL, analysts can use enterprise AI search (like Notion AI) to query internal documents, PRDs, and Slack messages. This rapidly generates context and hypotheses about metric changes, replacing hours of manual digging and leading to better, faster analysis.