Instead of expensive, static pre-training on proprietary data, enterprises prefer RAG. This approach is cheaper, allows for easy updates as data changes, and benefits from continuous improvements in foundation models, making it a more practical and dynamic solution.
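The RAG pattern referenced throughout can be sketched in a few lines: retrieve the most relevant snippets from the enterprise corpus, then ground the model's answer in them. The corpus, the keyword-overlap scoring (a stand-in for embedding similarity), and the prompt wording below are all illustrative assumptions, not any vendor's implementation.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank snippets by naive keyword overlap with the query.
    A real system would use embedding similarity instead."""
    q = _tokens(query)
    ranked = sorted(corpus, key=lambda s: len(q & _tokens(s)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Assemble a grounded prompt so the model answers from fresh data."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Example corpus (assumed): updating it updates answers, no retraining needed.
corpus = [
    "Q3 revenue grew 12% year over year.",
    "The refund policy allows returns within 30 days.",
]
hits = retrieve("What is the refund policy?", corpus)
prompt = build_prompt("What is the refund policy?", hits)
```

Because the knowledge lives in the corpus rather than the model weights, swapping in a newer foundation model or editing a document takes effect immediately.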
A single AI agent can provide personalized and secure responses by dynamically assuming the data access permissions of the person querying it. This ensures users only see data they are authorized to view, maintaining granular governance without separate agent instances.
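A minimal sketch of this idea: before the agent reasons over any data, rows are filtered through the caller's permissions rather than a shared service account's. The roles, rows, and row-level policy below are illustrative assumptions.

```python
# Example dataset (assumed): deals tagged by region.
RECORDS = [
    {"region": "EMEA", "deal": "Acme",   "value": 120_000},
    {"region": "AMER", "deal": "Globex", "value": 340_000},
]

# Row-level policy (assumed): which regions each user may see.
USER_REGIONS = {"alice": {"EMEA"}, "bob": {"EMEA", "AMER"}}

def visible_records(user: str) -> list[dict]:
    """Return only the rows the querying user is authorized to view.
    The same shared agent calls this with each caller's identity,
    so no per-user agent instance is needed."""
    allowed = USER_REGIONS.get(user, set())
    return [r for r in RECORDS if r["region"] in allowed]
```

An unknown user gets an empty result set by default, which is the safe failure mode for a governance boundary.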
Capable AI coding assistants allow PMs to build and test functional prototypes or "skills" in a single day. This changes the product development philosophy, prioritizing quick validation with users over creating detailed UI mockups and specifications upfront.
Text-to-SQL has historically been unreliable. However, recent advancements in reasoning models, combined with AI-assisted semantic layer creation, have boosted quality enough for broad deployment to non-technical business users, democratizing data access.
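One way to picture how a semantic layer lifts text-to-SQL quality: business terms are mapped to physical columns before generation, and generated SQL is validated against that mapping before execution. The layer contents and checks below are illustrative assumptions, not Snowflake's implementation.

```python
import re

# Semantic layer (assumed): business vocabulary -> physical columns.
SEMANTIC_LAYER = {
    "revenue":    "orders.amount_usd",
    "customer":   "orders.customer_name",
    "order date": "orders.created_at",
}

def resolve_terms(question: str) -> dict[str, str]:
    """Ground the user's business vocabulary in unambiguous columns,
    which is then fed to the text-to-SQL model as context."""
    q = question.lower()
    return {term: col for term, col in SEMANTIC_LAYER.items() if term in q}

def references_only_known_columns(sql: str) -> bool:
    """Guardrail: reject generated SQL touching columns outside the layer."""
    known = set(SEMANTIC_LAYER.values())
    referenced = set(re.findall(r"\b\w+\.\w+\b", sql))
    return referenced <= known

generated = (
    "SELECT SUM(orders.amount_usd) FROM orders "
    "WHERE orders.created_at > '2024-01-01'"
)
```

The validation step matters for non-technical users: a wrong-but-plausible query fails closed instead of silently returning a wrong answer.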
To meet strict enterprise security and governance requirements, Snowflake's strategy is to "bring AI to the data." Through partnerships with cloud and model providers, inference is run inside the Snowflake security boundary, preventing sensitive data from being moved.
Fine-tuning remains relevant but is not the primary path for most enterprise use cases. It's a specialized tool for situations with unique data unseen by foundation models or when strict cost and throughput requirements for a high-volume task justify the investment. Most should start with RAG.
The vast majority of enterprise information has been trapped in formats like PDFs and documents, where it was largely unusable. AI, through techniques like RAG and automated structure extraction, is unlocking this data for the first time, making it queryable and enabling new large-scale analysis.
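Automated structure extraction typically means prompting a model to emit a fixed JSON schema from raw document text, then validating the reply before it lands in a table. The invoice schema and the canned model reply below are illustrative assumptions standing in for a real model call.

```python
import json

# Target schema (assumed): fields to pull out of each document.
SCHEMA_FIELDS = ["vendor", "invoice_date", "total"]

def extraction_prompt(document_text: str) -> str:
    """Prompt asking the model for strictly the schema fields as JSON."""
    fields = ", ".join(SCHEMA_FIELDS)
    return f"Extract these fields as JSON ({fields}) from the document:\n{document_text}"

def parse_extraction(model_reply: str) -> dict:
    """Parse and validate the model's JSON so downstream queries can trust it."""
    data = json.loads(model_reply)
    missing = [f for f in SCHEMA_FIELDS if f not in data]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# Canned reply standing in for an actual LLM response:
reply = '{"vendor": "Acme Corp", "invoice_date": "2024-03-01", "total": 1250.0}'
record = parse_extraction(reply)
```

Once every PDF yields a row like `record`, the previously dark corpus becomes ordinary queryable data.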
Value in the AI stack will concentrate at the infrastructure layer (e.g., chips) and the horizontal application layer. The "middle layer" of vertical SaaS companies, whose value is primarily encoded business logic, is at risk of being commoditized by powerful, general AI agents.
While frontier models like Claude excel at analyzing a few complex documents, they are impractical for processing millions. Smaller, specialized, fine-tuned models offer orders of magnitude better cost and throughput, making them the superior choice for large-scale, repetitive extraction tasks.
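A back-of-envelope calculation shows why the gap matters at scale. All prices and token counts below are hypothetical, chosen only to illustrate how a per-token price difference compounds over millions of documents.

```python
def batch_cost(docs: int, tokens_per_doc: int, usd_per_mtok: float) -> float:
    """Total cost to process `docs` documents at a per-million-token price."""
    return docs * tokens_per_doc * usd_per_mtok / 1_000_000

DOCS = 5_000_000     # hypothetical corpus size
TOKENS = 2_000       # hypothetical tokens per document

frontier_cost = batch_cost(DOCS, TOKENS, 15.00)  # assumed $15.00 / M tokens
small_cost    = batch_cost(DOCS, TOKENS, 0.15)   # assumed $0.15 / M tokens
```

Under these assumed prices the frontier run costs $150,000 against $1,500 for the small model, a 100x gap — before accounting for throughput limits that make the frontier run slower as well as costlier.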
Despite constant new model releases, enterprises don't frequently switch LLMs. Prompts and workflows become highly optimized for a specific model's behavior, creating significant switching costs. Performance gains of a new model must be substantial to justify this re-engineering effort.
AI agents make it dramatically easier to extract and migrate data from platforms, reducing vendor lock-in. In response, platforms like Snowflake are embracing open file formats (e.g., Iceberg), shifting the competitive basis from data gravity to superior performance, cost, and features.
There is no one-size-fits-all agent design. Business users need optimized, structured agents with high reliability for specific tasks (e.g., a sales assistant). In contrast, technical users like developers benefit most from flexible, open-ended "choose your own adventure" coding agents.
