
A new class of power users, known as "token maxers," is building hyper-personalized AI assistants. By giving models unlimited tokens and access to all of their personal data, they are asking the AI not just to help them, but to *be* them—handling emails, scheduling, and even offering parenting advice from a digital clone.

Related Insights

The next major evolution in AI will be models that are personalized for specific users or companies and update their knowledge daily from interactions. This contrasts with current monolithic models like ChatGPT, which are static and must store irrelevant information for every user.

Companies like Meta are pushing a new practice called "token maxing," where developers are encouraged to spend heavily on AI coding assistant tokens. This is being gamified with leaderboards to accelerate output, but it raises questions about efficiency versus vanity metrics and whether it's a true indicator of productivity.

Power users are building personal AI assistants not just by feeding data, but by creating curated context layers. This involves exporting all digital communications (email, Slack), then using LLMs to create tiered summaries (e.g., monthly chief-of-staff briefs) to give agents deep, usable context.
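The tiered-summary approach described above can be sketched as a small pipeline: bucket exported messages by month, then summarize each bucket into a brief. This is a minimal illustration, not any particular user's setup; the field names and the `naive_summarize` stand-in (which would be an LLM API call in practice) are assumptions.

```python
from collections import defaultdict

def group_by_month(messages):
    """Group exported messages (dicts with 'date' as 'YYYY-MM-DD' and 'text')
    into month buckets -- the first tier of a curated context layer."""
    buckets = defaultdict(list)
    for msg in messages:
        month = msg["date"][:7]  # 'YYYY-MM'
        buckets[month].append(msg["text"])
    return dict(buckets)

def monthly_brief(texts, summarize):
    """Produce a 'chief-of-staff brief' for one month. `summarize` is a
    placeholder for an LLM call; any callable taking a string works here."""
    return summarize("\n".join(texts))

# Placeholder standing in for a real LLM summarization call.
naive_summarize = lambda text: text[:200]

messages = [
    {"date": "2024-05-03", "text": "Slack: kickoff for the Q3 roadmap."},
    {"date": "2024-05-20", "text": "Email: vendor contract renewal due."},
    {"date": "2024-06-01", "text": "Email: hiring plan approved."},
]
buckets = group_by_month(messages)
briefs = {m: monthly_brief(texts, naive_summarize) for m, texts in buckets.items()}
```

Each monthly brief can then itself be summarized into a quarterly or yearly tier, keeping the context an agent loads small but information-dense.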

A trend called "tokenmaxxing" is emerging in Silicon Valley, where companies like Meta use leaderboards to track employee AI token usage. This reflects a corporate bet that higher token consumption correlates with increased productivity, turning AI usage into a new, albeit gameable, performance metric for engineers.

Some large companies are incentivizing employees to use the maximum amount of AI tokens, even ranking them on usage. This seemingly inefficient strategy is a deliberate investment to accelerate adoption. The goal is to retrain employee thinking to be "AI native" before optimizing for cost and efficiency.

Roblox aims to create personal NPCs by training them on users' specific behaviors, gestures, and speech. These "virtual doppelgangers" could act as agents, performing tasks or standing in for the user in virtual experiences, moving far beyond generic AI companions.

Unlike generative AI (like ChatGPT), which only provides text output, agentic AI can perform actions on your behalf. It can log into accounts, click buttons, and complete multi-step tasks, shifting AI from a smart consultant to an autonomous digital assistant.

Generic AI tools provide generic results. To make an AI agent truly useful, actively customize it by feeding it your personal information, customer data, and writing style. This training transforms it from a simple tool into a powerful, personalized assistant that understands your specific context and needs.
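In practice, "feeding it your personal information" often means assembling those layers into a system prompt. A minimal sketch, with entirely illustrative field names and example data:

```python
def build_system_prompt(profile, style_notes, customer_facts):
    """Compose a personalized system prompt from curated context layers.
    The three sections mirror the kinds of data mentioned above: personal
    info, writing style, and customer data. Field names are illustrative."""
    sections = [
        "You are my personal assistant.",
        f"About me: {profile}",
        f"Write in my voice: {style_notes}",
        "Relevant customer context:",
    ]
    sections += [f"- {fact}" for fact in customer_facts]
    return "\n".join(sections)

prompt = build_system_prompt(
    profile="Solo consultant; replies to clients within 24 hours.",
    style_notes="Short sentences, no jargon, warm sign-offs.",
    customer_facts=[
        "Acme renewal is due in June.",
        "Beta Corp prefers calls over email.",
    ],
)
```

The same prompt string can be passed as the system message to whichever model or agent framework is in use, which is what turns a generic assistant into one that knows your context.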

For AI to function as a "second brain"—synthesizing personal notes, thoughts, and conversations—it needs access to highly sensitive data. That level of access is antithetical to public cloud AI. The solution lies in private, self-hosted LLMs that preserve the user's sovereignty over their data.

Matthew McConaughey's desire for an LLM trained only on his personal data highlights a key consumer demand beyond simple memory. Users want AI that doesn't just recall facts about them, but deeply adopts their unique worldview and personality, creating a truly personalized intelligence.