We scan new podcasts and send you the top 5 insights daily.
Periodic Labs doesn't use a single monolithic model. Instead, a powerful language model acts as a central coordinator or "copilot." It directs experiments by calling upon smaller, highly specialized, and more efficient neural nets (e.g., those with symmetry awareness for atomic systems) as tools.
The perception of a 'critically thinking' AI doesn't come from a single, powerful model. It's the result of layering multiple LLMs, each with a very specific, targeted task: one for orchestrating, one for taking actions, and another for responding. This specificity yields far better results than a generalist approach.
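The layered pattern described here can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the `call_model` helper and role names stand in for real model calls at each level.

```python
def call_model(role: str, prompt: str) -> str:
    """Stand-in for an LLM call; a real system would hit a model API here."""
    return f"[{role}] {prompt}"

def orchestrate(user_request: str) -> str:
    # Level 1: the orchestrator decides what needs to happen.
    plan = call_model("orchestrator", f"Plan steps for: {user_request}")
    # Level 2: the actioner executes the plan (tool calls, code, queries).
    result = call_model("actioner", f"Execute: {plan}")
    # Level 3: the responder turns raw results into a user-facing answer.
    return call_model("responder", f"Summarize for the user: {result}")

print(orchestrate("book a flight"))
```

Each level can be a different, smaller model fine-tuned for its one job, which is where the quality gain over a single generalist prompt comes from.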
Instead of interacting with a single LLM, users will increasingly call an API that represents a "system as a model." Behind the scenes, this triggers a complex orchestration of multiple specialized models, sub-agents, and tools to complete a task, while maintaining a simple user experience.
The path to robust AI applications isn't a single, all-powerful model. It's a system of specialized "sub-agents," each handling a narrow task like context retrieval or debugging. This architecture allows for using smaller, faster, fine-tuned models for each task, improving overall system performance and efficiency.
Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms will allow them to 'hot swap' various models—including smaller, specialized ones—for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
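A minimal sketch of what 'hot swapping' looks like in practice: task-to-model routing lives in configuration behind a stable interface, so a task can be repointed at a cheaper or newer model without touching application code. The model names here are placeholders, not real products.

```python
# Config maps tasks to the model currently serving them (placeholder names).
ROUTING = {
    "summarize": "small-fast-model",
    "legal_review": "large-accurate-model",
}

def route(task: str) -> str:
    """Return the model currently assigned to a task."""
    return ROUTING[task]

# Hot swap: repoint summarization at another provider's model at runtime,
# without redeploying any of the code that calls route("summarize").
ROUTING["summarize"] = "new-provider-model"
print(route("summarize"))
```

Because callers only ever ask for a task, not a model, the platform can optimize each task for performance and cost independently.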
Rabbit's LAM (Large Action Model) is not a model in the traditional sense, but an agent system. It uses the best available LLMs for language understanding and connects them to Rabbit's proprietary tech for controlling actions, allowing for modular upgrades of the underlying AI.
Breakthroughs will emerge from 'systems' of AI—chaining together multiple specialized models to perform complex tasks. GPT-4 is rumored to be a 'mixture of experts,' and companies like Wonder Dynamics combine different models for tasks like character rigging and lighting to achieve superior results.
The most effective AI architecture for complex tasks involves a division of labor. An LLM handles high-level strategic reasoning and goal setting, providing its intent in natural language. Specialized, efficient algorithms then translate that strategic intent into concrete, tactical actions.
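The division of labor can be made concrete with a toy example: the LLM's output is natural-language intent (mocked here as a canned string), and a small deterministic planner translates it into executable commands. The intents, commands, and device names are all hypothetical.

```python
def llm_intent(goal: str) -> str:
    """Stand-in for the LLM's high-level, natural-language strategic output."""
    return "move sample to furnace, then heat to 900C"

def translate(intent: str) -> list[str]:
    """Specialized tactical layer: parse intent into concrete commands."""
    actions = []
    for step in intent.split(", then "):
        if step.startswith("move"):
            actions.append("ARM.pick_and_place(sample, furnace)")
        elif step.startswith("heat"):
            actions.append("FURNACE.set_temp(900)")
    return actions

print(translate(llm_intent("anneal the sample")))
```

The LLM never emits low-level commands directly; the efficient, verifiable translation layer does, which keeps the strategic and tactical concerns cleanly separated.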
Dr. Juraji argues against a single "do-it-all" AI. Instead, he envisions a future of "speciated" AI systems where different modules, like the lobes of a brain (e.g., LLMs, causal AI), work together to tackle the multifaceted challenges of drug development.
To optimize costs, users configure a powerful model like Claude Opus as the 'brain' to strategize, and delegate execution tasks (e.g., coding) to cheaper, specialized models like OpenAI's Codex, treating them as 'muscles.'
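The economics of the brain-and-muscles pattern are easy to see with back-of-envelope numbers. The per-call prices and model names below are illustrative placeholders, not real rate cards.

```python
# Illustrative per-call costs: an expensive planner vs. a cheap executor.
COST_PER_CALL = {"brain-model": 0.50, "muscle-model": 0.02}

def run_job(plan_steps: int) -> float:
    """One expensive planning call, many cheap execution calls."""
    planning = COST_PER_CALL["brain-model"]                 # strategize once
    execution = plan_steps * COST_PER_CALL["muscle-model"]  # delegate the rest
    return planning + execution

# Naive baseline: the expensive model handles planning AND all 20 steps.
all_brain = (1 + 20) * COST_PER_CALL["brain-model"]
delegated = run_job(20)
print(f"naive: ${all_brain:.2f}, delegated: ${delegated:.2f}")
```

Under these assumed prices, delegating execution cuts the job from $10.50 to $0.90 while keeping the expensive model's judgment where it matters.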
Building one centralized AI model is a legacy approach that creates a massive single point of failure. The future requires a multi-layered, agentic system where specialized models are continuously orchestrated, providing checks and balances for a more resilient, antifragile ecosystem.