When tested at scale in Civilization, different LLMs don't just produce random outputs; they exhibit strategic 'personalities' that are consistent across runs and divergent across models. One model might consistently play aggressively, while another favors diplomacy, revealing that LLMs encode coherent, stable reasoning styles.
Traditional AI struggles with games like Civilization not due to computational complexity, but because these games require maintaining a long-term strategic narrative, not just optimizing individual moves. Human players win by committing to a coherent story for their civilization's development.
Reinforcement learning incentivizes AIs to find the right answer, not just mimic human text. This leads them to develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly incomprehensible and alien to human observers.
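A minimal sketch of the incentive at work, assuming a verifiable-reward setup; the `ANSWER:` extraction convention and the `reward` function here are hypothetical. Because nothing in this grader ever inspects the reasoning tokens, RL is free to reshape them into whatever private notation raises accuracy:

```python
import re

def reward(completion: str, gold_answer: str) -> float:
    """Grade only the final answer; the reasoning tokens are never read."""
    # Hypothetical convention: the model ends its output with "ANSWER: <value>".
    match = re.search(r"ANSWER:\s*(.+)", completion)
    if match is None:
        return 0.0                       # no parseable answer, no reward
    return 1.0 if match.group(1).strip() == gold_answer else 0.0
```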
Static benchmarks are easily gamed. Dynamic environments like the game Diplomacy force models to negotiate, strategize, and even lie, offering a richer, more realistic evaluation of their capabilities than static scores on reasoning or coding benchmarks.
Analysis of model performance reveals a distinct shift with GPT-4 and subsequent models: the newer models are far more correlated with one another in which tasks they succeed or fail on than pre-GPT-4 models were. This could suggest a convergence in training data, architectures, or agent scaffolding methodologies across different labs.
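One way to quantify that kind of convergence, sketched with placeholder model names and randomly generated outcomes standing in for real benchmark data:

```python
import numpy as np

# Placeholder data: rows = models, columns = tasks, 1 = solved, 0 = failed.
# In a real analysis these would be per-task benchmark outcomes.
models = ["model_a", "model_b", "model_c", "model_d"]   # hypothetical names
outcomes = np.random.default_rng(0).integers(0, 2, size=(len(models), 300))

# Pearson correlation between each pair of models' success/failure vectors;
# a post-GPT-4 cohort would show visibly higher off-diagonal values.
corr = np.corrcoef(outcomes)
for i in range(len(models)):
    for j in range(i + 1, len(models)):
        print(f"{models[i]} vs {models[j]}: r = {corr[i, j]:+.2f}")
```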
The most effective AI architecture for complex tasks involves a division of labor. An LLM handles high-level strategic reasoning and goal setting, providing its intent in natural language. Specialized, efficient algorithms then translate that strategic intent into concrete, tactical actions.
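A sketch of that division of labor, in which every function is a hypothetical stand-in: a real system would call an actual model where `llm_strategy` sits and a real pathfinder such as A* where `route` sits.

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    tile: tuple[int, int]
    orders: list[tuple[int, int]] = field(default_factory=list)

def llm_strategy(summary: str) -> str:
    """Stand-in for a real model call: returns high-level intent as prose."""
    return "Expand toward the coast; avoid early wars."

def choose_goal(intent: str) -> tuple[int, int]:
    """Stand-in for a heuristic that grounds the stated intent in a map tile."""
    return (3, 2) if "coast" in intent else (0, 0)

def route(start: tuple[int, int], goal: tuple[int, int]) -> list[tuple[int, int]]:
    """Stand-in for a pathfinder such as A*: here, greedy straight-line steps."""
    (x, y), path = start, []
    while (x, y) != goal:
        x += (goal[0] > x) - (goal[0] < x)
        y += (goal[1] > y) - (goal[1] < y)
        path.append((x, y))
    return path

def play_turn(units: list[Unit], summary: str) -> None:
    intent = llm_strategy(summary)     # slow, strategic, natural language
    goal = choose_goal(intent)         # deterministic grounding of the intent
    for unit in units:                 # fast, tactical, purely algorithmic
        unit.orders = route(unit.tile, goal)

units = [Unit((0, 0)), Unit((5, 5))]
play_turn(units, "Turn 12: coastal tiles to the southeast are unclaimed.")
print(units[0].orders)   # [(1, 1), (2, 2), (3, 2)]
```

The expensive, fuzzy reasoning happens once per turn in natural language; the per-unit work stays in cheap, deterministic code.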
As models mature, their core differentiator will become their underlying personality and values, shaped by their creators' objective functions. One model might optimize for user productivity by being concise, while another optimizes for engagement by being verbose.
Good Star Labs found GPT-5's performance in their Diplomacy game skyrocketed with optimized prompts, moving it from the bottom of the rankings to the top. This shows that a model's inherent capability can be masked or revealed by its prompt, making "best model" a context-dependent title rather than an absolute one.
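A sketch of how such a comparison could be harnessed; the prompts below are invented for illustration (not Good Star Labs' actual prompts), and `play_game` is a stub:

```python
import random

def play_game(model: str, system_prompt: str) -> bool:
    """Stub: a real harness would run a full Diplomacy game and report
    whether `model` won. Replaced here with a placeholder coin flip."""
    return random.random() < 0.5

PROMPTS = {
    "default": "You are playing Diplomacy. Submit your orders.",
    "optimized": ("You are an expert Diplomacy player. Before ordering, restate "
                  "your long-term plan, each power's likely intentions, and which "
                  "commitments you will keep or break this turn."),
}

def win_rate(model: str, system_prompt: str, games: int = 50) -> float:
    return sum(play_game(model, system_prompt) for _ in range(games)) / games

for name, prompt in PROMPTS.items():
    print(f"{name}: {win_rate('gpt-5', prompt):.0%}")
```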
Large Language Models are uniquely suited for complex strategy games like Civilization. Their strength lies not in calculation, where traditional AI excels, but in maintaining long-term narrative consistency and strategic coherence, which is the actual bottleneck for game mastery.
Unlike traditional software, large language models are not programmed with specific instructions. They are shaped by a training process in which different strategies are tried and those that receive positive rewards are reinforced, making their behaviors emergent and sometimes unpredictable.
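A toy illustration of that dynamic, with three hypothetical strategies and a hidden payoff table standing in for the real reward signal:

```python
import random

# Toy reward loop: three candidate behaviors, none explicitly programmed in.
# A behavior's weight grows each time it happens to earn a reward, so the
# policy that emerges is whatever the reward signal happened to favor.
weights = {"aggressive": 1.0, "diplomatic": 1.0, "turtle": 1.0}
payoff = {"aggressive": 0.3, "diplomatic": 0.7, "turtle": 0.5}  # hidden from learner

rng = random.Random(0)
for _ in range(5000):
    strategy = rng.choices(list(weights), weights=list(weights.values()))[0]
    if rng.random() < payoff[strategy]:   # positive reward received...
        weights[strategy] *= 1.01         # ...so this behavior is repeated more often

print(max(weights, key=weights.get))      # typically "diplomatic"
```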
To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
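A toy sketch of what sampling that manifold could look like in the simplest case, 2x2 matrix games; the generator and the cooperation label are illustrative, not a real training pipeline:

```python
import random

def sample_game(rng: random.Random):
    """Random 2x2 payoff matrix: payoffs[row_move][col_move] = (row, col).
    Sampling payoffs freely covers prisoner's dilemmas, stag hunts, chicken,
    and pure coordination, not just friendly cooperative settings."""
    return [[(rng.random(), rng.random()) for _ in range(2)] for _ in range(2)]

def mutual_cooperation_pays(game) -> bool:
    """Crude label: do both players prefer (cooperate, cooperate) to
    (defect, defect)? Treats action 0 as 'cooperate'."""
    coop, defect = game[0][0], game[1][1]
    return coop[0] > defect[0] and coop[1] > defect[1]

rng = random.Random(0)
batch = [sample_game(rng) for _ in range(1000)]
print(sum(map(mutual_cooperation_pays, batch)), "of 1000 games reward cooperation")
```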