Current AI "agents" are often just recursive LLM loops. To achieve genuine agency and proactive curiosity—to anticipate a user's real goal instead of just responding—AI will need a synthetic analogue to the human limbic system that provides intrinsic drives.
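
A minimal sketch of that recursive loop is below; `call_llm` and `run_tool` are injected stand-ins, not any vendor's real API:

```python
from typing import Callable

# A minimal sketch of the "recursive LLM loop" behind most current agents.
# call_llm and run_tool are hypothetical stand-ins, not a real vendor API.
def agent_loop(
    user_message: str,
    call_llm: Callable[[list[dict]], dict],
    run_tool: Callable[[dict], str],
    max_steps: int = 10,
) -> str:
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_llm(history)                 # model picks the next action
        history.append(reply)
        if "tool_call" in reply:                  # act, observe, loop again
            observation = run_tool(reply["tool_call"])
            history.append({"role": "tool", "content": observation})
        else:
            return reply["content"]               # model decided it is done
    return "step budget exhausted"
```

Nothing in this skeleton runs until a user message arrives; every drive is extrinsic, which is exactly the gap an intrinsic, limbic-like layer would have to fill.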

Related Insights

Agency emerges from a continuous interaction with the physical world, a process refined over billions of years of evolution. Current AIs, operating in a discrete digital environment, lack the necessary architecture and causal history to ever develop genuine agency or free will.

Reinforcement learning incentivizes AIs to find the right answer, not just mimic human text. This leads them to develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly incomprehensible and alien to human observers.

An agent can be trained on a user's entire output to build a 'human replica.' This model helps other agents resolve complex questions by navigating the inherent contradictions in human thought (e.g., financial self vs. personal self), enabling better autonomous decision-making.
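
A hypothetical sketch of the idea; `call_llm`, the facet names, and the prompt format are all illustrative assumptions:

```python
from typing import Callable

# Hypothetical sketch: a replica conditioned on one user's writing, which
# other agents consult instead of guessing what the user would want.
class HumanReplica:
    def __init__(self, call_llm: Callable[[str], str], user_corpus: str):
        self.call_llm = call_llm
        self.persona = f"Answer as this person would, based on:\n{user_corpus}"

    def resolve(self, question: str, facets: list[str]) -> dict[str, str]:
        # Query each conflicting 'self' separately, e.g.
        # facets = ["financial self", "personal self"].
        answers = {}
        for facet in facets:
            prompt = f"{self.persona}\nSpeaking as their {facet}: {question}"
            answers[facet] = self.call_llm(prompt)
        return answers  # the calling agent weighs the contradiction
```

A scheduling agent could then ask the replica whether to book the cheaper red-eye flight and weigh the "financial self" answer against the "personal self" one.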

An AI agent uses an LLM with tools, giving it agency to decide its next action. In contrast, a workflow is a predefined, deterministic path where the LLM's actions are forced. Most production AI systems are actually workflows, not true agents.
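
The difference is easiest to see side by side; the sketch below uses illustrative names and injected callables rather than any specific framework:

```python
from typing import Callable

# Workflow: the path is fixed in code; the LLM only fills in each step.
def summarize_workflow(call_llm, fetch, url: str) -> str:
    text = fetch(url)                             # step 1: always fetch
    outline = call_llm(f"Outline this:\n{text}")  # step 2: always outline
    return call_llm(f"Summarize:\n{outline}")     # step 3: always summarize

# Agent: the LLM chooses which tool to call next, or to stop entirely.
def research_agent(call_llm, tools: dict[str, Callable[[str], str]],
                   goal: str, max_steps: int = 8) -> str:
    notes: list[str] = [goal]
    for _ in range(max_steps):
        decision = call_llm(
            f"Goal: {goal}\nNotes: {notes}\nTools: {list(tools)}\n"
            "Reply '<tool> <argument>' or 'FINISH: <answer>'."
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        name, _, arg = decision.partition(" ")
        notes.append(tools[name](arg))        # control flow chosen at runtime
    return notes[-1]
```

The workflow is trivially testable because its control flow is fixed in code; the agent's control flow is decided by the model at runtime, which is a large part of why most production systems stay on the workflow side.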

The true building block of an AI feature is the "agent"—a combination of the model, system prompts, tool descriptions, and feedback loops. Swapping an LLM is not a simple drop-in replacement; it breaks the agent's behavior and requires re-engineering the entire system around it.
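
One way to make that coupling explicit is to treat the bundle as a single versioned spec, as in this illustrative sketch (none of these field names come from a real framework):

```python
from dataclasses import dataclass, field

# Illustrative sketch: the pieces below are tuned against one another,
# so changing `model` alone quietly invalidates the rest of the bundle.
@dataclass
class AgentSpec:
    model: str                          # which LLM, pinned to a version
    system_prompt: str                  # phrased around that model's quirks
    tool_descriptions: dict[str, str]   # wording this model reliably parses
    eval_suite: list[str] = field(default_factory=list)  # the feedback loop

def swap_model(spec: AgentSpec, new_model: str) -> AgentSpec:
    # The type system happily accepts this swap, but the prompt and tool
    # wording were tuned against the old model; the evals must be re-run
    # and the other fields usually re-engineered before the swap is safe.
    return AgentSpec(
        model=new_model,
        system_prompt=spec.system_prompt,
        tool_descriptions=spec.tool_descriptions,
        eval_suite=spec.eval_suite,
    )
```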

The defining characteristic of a powerful AI agent is its ability to creatively solve problems when it hits a dead end. One agent, for example, independently figured out how to convert an unsupported audio file; its value lies in such emergent problem-solving skills rather than in following a predefined script.

While language models are getting incrementally better at conversation, the next leap is defined by multimodal understanding and the ability to perform tasks, such as navigating websites. This shift from conversational prowess to agentic action is where the true "step change" in AI capabilities lies.

Biological evolution used meta-reinforcement learning to create agents that could then perform imitation learning. The current AI paradigm is inverted: it starts with pure imitation learners (base LLMs) and then attempts to graft reinforcement learning on top to create coherent agency and goals. The success of this biologically 'backwards' approach remains an open question.
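
A deliberately tiny numpy toy of that inverted ordering: imitate a "human" token distribution first, then graft reinforcement learning on top. The vocabulary size, learning rates, and reward are all made up; only the shape of the pipeline, not how frontier models are trained, is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                          # tiny vocabulary
logits = np.zeros(V)                           # one-step 'policy'
human = np.array([0.1, 0.4, 0.3, 0.1, 0.1])    # 'human' next-token frequencies

def probs(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Phase 1: pure imitation (gradient descent on cross-entropy to human data).
for _ in range(500):
    logits -= 0.5 * (probs(logits) - human)    # d(CE)/d(logits) = p - human

# Phase 2: RL grafted on top (REINFORCE; only token 2 is 'the right answer').
reward = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
for _ in range(2000):
    p = probs(logits)
    tok = rng.choice(V, p=p)
    logits += 0.5 * reward[tok] * (np.eye(V)[tok] - p)  # ascend E[reward]

print(probs(logits))  # probability mass migrates from mimicry to the reward
```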

While AI models excel at gathering and synthesizing information ('knowing'), they are not yet reliable at executing actions in the real world ('doing'). True agentic systems require bridging this gap by adding crucial layers of validation and human intervention to ensure tasks are performed correctly and safely.
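
A sketch of what those layers can look like in code; `run_action`, `check_pre`, `check_post`, and `ask_human` are injected stand-ins, and the set of risky actions is illustrative:

```python
# Every proposed action passes validation and, when risky, a human gate
# before it is allowed to touch the real world.
RISKY = {"send_email", "make_payment", "delete_file"}

def guarded_execute(action: dict, run_action, check_pre, check_post,
                    ask_human) -> str:
    ok, why = check_pre(action)             # machine-checkable preconditions
    if not ok:
        return f"blocked: {why}"
    if action["name"] in RISKY and not ask_human(f"Approve {action}?"):
        return "blocked: human declined"    # human gate for irreversible acts
    outcome = run_action(action)            # only now act on the real world
    ok, why = check_post(action, outcome)   # verify the task really happened
    return outcome if ok else f"verification failed: {why}"
```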

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.