We scan new podcasts and send you the top 5 insights daily.
In linguistics and game theory, common knowledge isn't just widely known information. It is a recursive state where I know you know, you know I know, and so on infinitely. This shared awareness is the critical ingredient that enables social coordination, from accepting paper currency to driving on the correct side of the road.
To prevent autonomous agents from operating in silos with 'pure amnesia,' create a central markdown file that every agent must read before starting a task and append to upon completion. This 'learnings.md' file acts as a shared, persistent brain, allowing agents to form a network that accumulates and shares knowledge across the entire organization over time.
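The read-before, append-after protocol described above can be sketched in a few lines. The file path, function names, and log format here are illustrative choices, not from the source:

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical shared location; every agent must agree on the same file.
LEARNINGS = Path("learnings.md")

def read_shared_context() -> str:
    """Step 1: every agent loads the accumulated learnings before starting a task."""
    return LEARNINGS.read_text() if LEARNINGS.exists() else ""

def append_learning(agent: str, lesson: str) -> None:
    """Step 2: on task completion, the agent appends what it learned, timestamped."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with LEARNINGS.open("a") as f:
        f.write(f"- [{stamp}] {agent}: {lesson}\n")

# Usage: an agent reads the shared brain, does its work, then records its lesson.
context = read_shared_context()
append_learning("agent-42", "API X rate-limits at 100 req/min; batch requests.")
```

Because every agent both reads and writes the same append-only file, each lesson learned by one agent becomes part of the starting context for all future tasks.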
The act of looking at someone's eyes—the part of them that does the looking—creates an unbreakable feedback loop of "I know you know I know..." This immediately establishes common knowledge, forcing a resolution to the social game being played, whether it's a threat, a challenge, or an invitation.
Phenomena like bank runs or speculative bubbles are often rational responses to perceived common knowledge. People act not on an asset's fundamental value but on their prediction of how others will act, while those others are in turn predicting everyone else's actions. The result is a self-fulfilling prophecy.
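The self-fulfilling dynamic can be illustrated with a toy threshold-cascade model (a Granovetter-style sketch of our own; the numbers are invented): each depositor withdraws once the fraction of prior withdrawers meets their personal threshold, so a single panicky depositor can tip everyone else regardless of the bank's fundamentals.

```python
def run_cascade(thresholds):
    """Each agent withdraws once the fraction of agents already
    withdrawing meets their threshold; returns how many withdraw."""
    n = len(thresholds)
    joined = [False] * n
    withdrawing = 0
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if not joined[i] and withdrawing / n >= t:
                joined[i] = True
                withdrawing += 1
                changed = True
    return withdrawing

# One depositor with threshold 0 panics unconditionally and tips the rest.
print(run_cascade([0.0, 0.1, 0.2, 0.3, 0.4]))  # all 5 withdraw
# Same bank, slightly less nervous depositors: no run at all.
print(run_cascade([0.2, 0.3, 0.4, 0.5, 0.6]))  # 0 withdraw
```

The two runs differ only in beliefs about others, not in any underlying value, which is exactly the point: the outcome is determined by predictions of predictions.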
We live in "communities of knowledge" where expertise is distributed. Simply being part of a group where others understand a topic (e.g., politics, technology) creates an inflated sense that we personally understand it, contributing to the illusion of individual knowledge.
To break down natural information silos in hierarchies, leaders must flip the cultural default from punishing unapproved sharing to demanding proactive oversharing. The new rule is: "You are responsible for informing other people." This creates a shared context that enables decentralized, autonomous decision-making.
Moving beyond isolated AI agents requires a framework mirroring human collaboration. This involves agents establishing common goals (shared intent), building a collective knowledge base (shared knowledge), and creating novel solutions together (shared innovation).
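The three layers named above can be sketched as a minimal shared data structure. The class and field names are our own illustration of the idea, not an API from the source:

```python
from dataclasses import dataclass, field

@dataclass
class SharedWorkspace:
    """Toy sketch of the three layers: shared intent, shared
    knowledge, and shared innovation (names are illustrative)."""
    intent: str  # the common goal all agents sign on to
    knowledge: dict[str, str] = field(default_factory=dict)  # collective knowledge base
    innovations: list[str] = field(default_factory=list)     # jointly produced solutions

    def contribute_fact(self, key: str, fact: str) -> None:
        """Any agent can grow the collective knowledge base."""
        self.knowledge[key] = fact

    def propose_solution(self, idea: str) -> None:
        """Novel solutions accumulate on top of the shared knowledge."""
        self.innovations.append(idea)

# Usage: agents coordinate through one workspace rather than private state.
ws = SharedWorkspace(intent="migrate the billing service")
ws.contribute_fact("db", "legacy schema lacks foreign keys")
ws.propose_solution("add a shadow table and backfill incrementally")
```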
We use hints and innuendo not to deny what we said, but to avoid a state where both parties know the other knows the true intent. This "common knowledge" can irrevocably change a relationship, whereas indirectness allows a shared fiction (e.g., a platonic friendship) to continue even after a proposition is rejected.
An "open secret" or "elephant in the room" is a fact everyone knows individually but pretends not to know collectively. The power of publicly stating the obvious fact is not in the information itself, but in shattering the shared pretense of ignoring it. This act transforms private knowledge into common knowledge, forcing a change in the social dynamic.
Laughter is a highly social and contagious behavior that rarely follows a formal joke. Its main purpose is to be a "common knowledge generator." An outburst of laughter takes a private, unspoken observation—often about a minor breach of decorum or status—and instantly makes it a shared, public reality for the entire group.
To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
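One way to read "the full manifold of game-theoretic situations" is as a training distribution over payoff structures rather than a curated set of cooperative examples. This toy sketch (our construction, not from the source) samples random 2x2 games and labels whether the two players' incentives are aligned or opposed, showing that a naive random curriculum already spans both regimes:

```python
import random

def sample_game(rng):
    """Random 2x2 game: game[action_a][action_b] -> (payoff_a, payoff_b)."""
    return [[(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(2)]
            for _ in range(2)]

def incentive_alignment(game):
    """Sign of the covariance between the players' payoffs across the
    four outcomes: positive ~ cooperative, negative ~ competitive."""
    a = [game[i][j][0] for i in range(2) for j in range(2)]
    b = [game[i][j][1] for i in range(2) for j in range(2)]
    ma, mb = sum(a) / 4, sum(b) / 4
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return "cooperative" if cov > 0 else "competitive"

rng = random.Random(0)
dataset = []
for _ in range(1000):
    g = sample_game(rng)
    dataset.append((g, incentive_alignment(g)))

# A social-AI curriculum should cover both labels, not only cooperation.
labels = {label for _, label in dataset}
print(labels)
```

A real curriculum would also need larger games, repeated play, and team structures, but the principle is the same: breadth of the payoff distribution, not just positive examples, is what forces a generalizable model of other minds.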