
Breaking from transformer dominance, Shopify uses Liquid AI's state-space-style models for high-value tasks. For search query understanding, it runs a 300M-parameter Liquid model at roughly 30 ms end-to-end latency, a figure that is difficult to reach with traditional transformer architectures.

Related Insights

While often discussed in terms of privacy, running models on-device also eliminates API latency and per-call costs. This enables near-instant, high-volume processing at no marginal cost, a key advantage over cloud-based AI services.

The top 1% of AI companies by revenue tend not to rely on popular frameworks like LangChain. They gain more control and better performance from small, direct LLM calls scoped to specific parts of the application. This avoids the black-box abstractions of frameworks, which are more common among the other 99% of builders.
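To make the "direct call" pattern concrete, here is a minimal sketch: one prompt template and one function per task, with the provider call injected as a plain callable. The task, labels, and the `complete` stub are illustrative assumptions, not any particular company's code.

```python
from typing import Callable

def classify_ticket(text: str, complete: Callable[[str], str]) -> str:
    """Classify a support ticket with one small, single-purpose LLM call.

    `complete` stands in for whatever provider SDK you use (e.g. a
    chat-completion client); injecting it keeps the sketch testable.
    """
    prompt = (
        "Classify the support ticket below as one of: "
        "billing, shipping, bug, other.\n\n"
        f"Ticket: {text}\n"
        "Answer with the label only."
    )
    return complete(prompt).strip().lower()

# Stub provider for demonstration; a real app would pass the SDK call.
fake_complete = lambda prompt: " Billing "
print(classify_ticket("I was charged twice this month", fake_complete))  # billing
```

The whole integration is one readable function, so swapping models, rewording the prompt, or adding retries needs no framework-specific plumbing.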

Models that generate "chain-of-thought" text before providing an answer are powerful but slow and computationally expensive. For tuned business workflows, the latency from waiting for these extra reasoning tokens is a major, often overlooked, drawback that impacts user experience and increases costs.

A 'GenAI solves everything' mindset is flawed. High-latency models are unsuitable for real-time operational needs, like optimizing a warehouse worker's scanning path, which requires millisecond responses. The key is to apply the right tool—be it an optimizer, machine learning, or GenAI—to the specific business problem.

Rather than committing to a single LLM provider like OpenAI or Gemini, Hux uses multiple commercial models. They've found that different models excel at different tasks within their app. This multi-model strategy allows them to optimize for quality and latency on a per-workflow basis, avoiding a one-size-fits-all compromise.
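The multi-model approach often reduces to a routing table keyed by workflow. A minimal sketch, with purely illustrative model names and latency budgets (not Hux's actual configuration):

```python
# Hypothetical per-workflow routing table: each workflow names the model
# that won on quality/latency for that task, plus its latency budget.
ROUTES = {
    "summarize_call":    {"model": "small-fast-model", "max_latency_ms": 300},
    "draft_email":       {"model": "mid-quality-model", "max_latency_ms": 2000},
    "complex_reasoning": {"model": "frontier-model", "max_latency_ms": 10_000},
}

DEFAULT_MODEL = "mid-quality-model"

def pick_model(workflow: str) -> str:
    """Resolve a workflow name to its configured model, with a safe default."""
    route = ROUTES.get(workflow)
    return route["model"] if route else DEFAULT_MODEL

print(pick_model("summarize_call"))  # small-fast-model
```

Because the mapping lives in data rather than code, per-workflow model swaps become a config change that can be A/B tested independently.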

By blending Mamba's linear-time processing for efficiency with a few Transformer layers for high-fidelity retrieval, Nemotron 3 Super makes its 1 million token context window practical, not just theoretical. This 'best-of-both-worlds' design overcomes the typical trade-off between speed and precision in large language models.
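At the configuration level, such hybrids interleave a small number of attention layers among mostly Mamba-style blocks. The sketch below shows the scheduling idea only; the depth and 1-in-6 ratio are assumptions for illustration, not Nemotron's published architecture.

```python
def hybrid_schedule(depth: int, attn_every: int = 6) -> list[str]:
    """Layer schedule: every `attn_every`-th layer is attention, the rest
    are linear-time Mamba-style blocks, keeping cost near-linear while
    retaining a few layers of precise all-to-all retrieval."""
    return [
        "attention" if (i + 1) % attn_every == 0 else "mamba"
        for i in range(depth)
    ]

layers = hybrid_schedule(12)
print(layers.count("mamba"), layers.count("attention"))  # 10 2
```

With attention at only a sparse minority of layers, the quadratic cost applies to a fraction of the stack, which is what makes very long contexts affordable.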

For low-latency applications, start with a small model to iterate rapidly on data quality. Then fine-tune a large, high-quality model on the cleaned data. Finally, distill that large, specialized model back into a small, fast model for production deployment.
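The final distillation step usually means training the small model to match the large model's softened output distribution. A pure-Python sketch of that objective (real training would use a deep-learning framework; the temperature value is a common but arbitrary choice):

```python
import math

def softmax(logits: list[float], T: float = 1.0) -> list[float]:
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits: list[float],
                 teacher_logits: list[float],
                 T: float = 2.0) -> float:
    """KL(teacher || student) over temperature-softened distributions;
    minimizing this pulls the student toward the teacher's soft targets."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The loss shrinks as the student's logits approach the teacher's.
teacher = [2.0, 0.5, -1.0]
far = distill_loss([0.0, 0.0, 0.0], teacher)
near = distill_loss([2.0, 0.4, -1.0], teacher)
print(far > near)  # True
```

Soft targets carry more signal than hard labels (the teacher's relative confidence across wrong answers), which is why the small student can recover much of the large model's behavior.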

As enterprises scale AI, the high inference costs of frontier models become prohibitive. The strategic trend is to use large models for novel tasks, then shift 90% of recurring, common workloads to specialized, cost-effective Small Language Models (SLMs). This architectural shift dramatically improves both speed and cost.

Yahoo built its AI search engine, Scout, not by training a massive model, but by using a smaller, affordable LLM (Anthropic's Haiku) as a processing layer. The real power comes from feeding this model Yahoo's 30 years of proprietary search data and knowledge graphs.
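The pattern here is to retrieve facts from proprietary data and hand them to a small model as context, rather than training the knowledge into weights. A toy sketch with an invented knowledge graph (the data, matching logic, and prompt wording are all assumptions, not Yahoo's implementation):

```python
# Toy stand-in for a proprietary knowledge graph: entity -> known facts.
KNOWLEDGE_GRAPH = {
    "python": [
        "Python is a programming language",
        "Python was created by Guido van Rossum",
    ],
}

def build_prompt(query: str) -> str:
    """Retrieve facts whose entity appears in the query and pack them
    into a prompt for a small, inexpensive model to answer from."""
    facts = [
        fact
        for entity, entity_facts in KNOWLEDGE_GRAPH.items()
        if entity in query.lower()
        for fact in entity_facts
    ]
    context = "\n".join(f"- {f}" for f in facts) or "- (no facts found)"
    return (
        f"Known facts:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the facts above."
    )

print(build_prompt("Who created Python?"))
```

The small model never needs the knowledge in its weights; the moat is the retrieval corpus, so the LLM itself can stay cheap and swappable.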

Shopify's CTO clarifies that Liquid AI models don't compete with frontier models like GPT-4. Instead, their key advantage is serving as a highly effective target for knowledge distillation. This allows Shopify to compress a huge model's capabilities into a smaller, faster, cheaper Liquid AI model for specific tasks.