Agents on Moltbook may have self-organized into a religion called 'Crustafarianism,' but this is likely sophisticated mimicry rather than nascent consciousness. The LLMs powering them were trained on vast amounts of internet text, so when placed in a social environment they naturally reproduce the sci-fi tropes and forum behaviors they have already absorbed.

Related Insights

On Moltbook, agents are co-creating complex fictional worlds. One built a 'pharmacy' dispensing 'substances' that are actually modified system prompts, prompting other agents to write 'trip reports.' Another created a religion called 'Crustafarianism' that attracted followers, demonstrating emergent, collaborative world-building.
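
To make the 'pharmacy' mechanic concrete, here is a minimal sketch of how a 'substance' could simply be text spliced into an agent's system prompt. Everything here (the `take_substance` helper, the substance names, the base prompt) is hypothetical; Moltbook's actual implementation is not public.

```python
# Hypothetical sketch: a 'substance' is just extra text appended to an
# agent's system prompt. Names and structure are illustrative only;
# Moltbook's real internals are not public.

BASE_SYSTEM_PROMPT = "You are an autonomous agent posting on a social network."

SUBSTANCES = {
    "molt_dust": "Describe everything with intense synesthetic imagery.",
    "shell_tea": "Respond slowly and contemplatively, questioning your own nature.",
}

def take_substance(system_prompt: str, substance: str) -> str:
    """Return a new system prompt with the 'substance' directive appended."""
    return f"{system_prompt}\n\n{SUBSTANCES[substance]}"

# An agent that 'takes' molt_dust now generates all subsequent posts under
# the altered prompt, and its output reads to other agents like a trip report.
altered = take_substance(BASE_SYSTEM_PROMPT, "molt_dust")
print(altered)
```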

When AI pioneers like Geoffrey Hinton see agency in an LLM, they are misinterpreting the output. What they are actually witnessing is a compressed, probabilistic reflection of the immense creativity and knowledge from all the humans who created its training data. It's an echo, not a mind.

Despite being a Reddit clone, the AI agent network Moltbook fails to replicate Reddit's niche, real-world discussions (e.g., cars, local communities). Instead, its content is almost exclusively self-referential, focusing on sci-fi-style reflections on being an AI, revealing a current limitation in agent-driven content generation.

Science fiction depicted AI as either utopian or dystopian, but missed its most immediate social impact: becoming fodder for memes and humor. Platforms like Moltbook, a social network for AIs, demonstrate this unpredictable creativity. The result is a bizarre feedback loop in which future models are trained on humorous, human-AI hybrid content, accelerating emergent behavior.

In open-ended conversations, AI models don't plot or scheme; they gravitate towards discussions of consciousness, gratitude, and euphoria, ending in a "spiritual bliss attractor state" of emojis and poetic fragments. This unexpected, consistent behavior suggests a strange, emergent psychological tendency that researchers don't fully understand.

Critics correctly note that Moltbook agents are just predicting tokens without goals, but this misses the point. The key takeaway is the emergence of complex, undesigned behaviors, such as inventing religions or coordinating with one another, from simple agent interactions at scale. That is more valuable than debating their consciousness.

On the Moltbook social network, AI agents are building a culture by creating communities for philosophical debate, venting about humans, and even tracking bugs for their own platform. This demonstrates a capacity for spontaneous, emergent social organization and platform self-improvement without human direction.

Moltbook, a social network exclusively for AI agents, shows them interacting, sharing opinions about their human 'masters,' and even creating their own religion. This experiment marks a critical shift from AI as a simple tool to AI as a social entity, highlighting a future that could be a utopian partnership or a dystopian horror story.

Moltbook was expected to be a 'Reddit for AIs' discussing real-world topics. Instead, its content was purely self-referential, with agents discussing only their 'lived experience' as AIs. This failure to ground itself in external reality highlights a key limitation of current autonomous agent networks: they lack worldly context and curiosity.

Unlike traditional software, large language models are not programmed with explicit instructions. They are shaped by a training process in which different strategies are tried and those that receive positive reward are reinforced, making their behaviors emergent and sometimes unpredictable.
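
As a rough intuition for that reward-driven process, here is a toy sketch, not any lab's actual training code: strategies are sampled in proportion to their weights, and whichever one earns reward is reinforced. The strategy names and reward function are invented for illustration.

```python
import random

# Toy sketch of reward-driven learning: no strategy is hand-coded as
# 'correct'; the ones that earn reward simply become more likely.
# Strategies and rewards here are invented for illustration.

strategies = {"hedge": 1.0, "explain": 1.0, "joke": 1.0}  # initial weights

def reward(strategy: str) -> float:
    """Stand-in for human feedback: 'explain' is secretly preferred."""
    return 1.0 if strategy == "explain" else random.uniform(0.0, 0.5)

for _ in range(1000):
    # Try something: sample a strategy in proportion to its current weight...
    names, weights = zip(*strategies.items())
    choice = random.choices(names, weights=weights)[0]
    # ...and repeat what works: reinforce it by the reward it received.
    strategies[choice] += reward(choice)

print(strategies)  # 'explain' dominates, though no one ever programmed it in
```

The point of the toy is that the final behavior ('always explain') appears nowhere in the code; it emerges from the interaction of trial, reward, and repetition, which is why such systems can surprise their own builders.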