
Markdown, originally designed for blogging, has emerged as the de facto standard for interaction between LLMs and tools. This happened not by design but by convergence: it's human-readable, far more token-efficient than alternatives like HTML, and familiar to the early adopters who trained the models.
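The token-efficiency gap is easy to see with equivalent content in both formats. A minimal sketch, using a crude word/punctuation split as a stand-in for a real tokenizer (the proxy, not a measured tokenizer result, is an assumption here):

```python
import re

def rough_token_count(text: str) -> int:
    """Approximate token count by splitting on word and punctuation boundaries."""
    return len(re.findall(r"\w+|[^\w\s]", text))

# The same two-item emphasized list, rendered as HTML and as Markdown.
html = "<ul><li><strong>Fast</strong></li><li><em>Readable</em></li></ul>"
markdown = "- **Fast**\n- *Readable*"

# The HTML version spends most of its tokens on tag syntax.
print(rough_token_count(html), rough_token_count(markdown))
```

Real tokenizers differ in detail, but the direction holds: the markup overhead of HTML tags consumes tokens that carry no content.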

Related Insights

Large transcript files often hit LLM token limits. Converting them into structured markdown files not only circumvents this issue but also improves the model's analytical accuracy. The structure helps the AI handle the data more effectively than a raw text transcript.
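One way to impose that structure is to convert speaker turns into Markdown sections. A minimal sketch, assuming a "Name: text" transcript format (the label convention and heading scheme are assumptions, not a standard):

```python
def transcript_to_markdown(transcript: str, title: str) -> str:
    """Convert a 'Speaker: text' transcript into a titled Markdown document."""
    lines = [f"# {title}", ""]
    for raw in transcript.splitlines():
        raw = raw.strip()
        if not raw:
            continue
        speaker, _, text = raw.partition(":")
        if text:
            # Each speaker turn becomes its own section, so the file can
            # be split on headings if it still exceeds a context limit.
            lines += [f"## {speaker.strip()}", "", text.strip(), ""]
        else:
            lines.append(raw)
    return "\n".join(lines)

raw = "Host: Welcome back.\nGuest: Thanks for having me."
md = transcript_to_markdown(raw, "Episode 12")
```

Splitting on headings then gives natural chunk boundaries, instead of cutting the raw text at an arbitrary character offset mid-sentence.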

While tokens are an LLM's energy source, structured markdown files in a system like Obsidian act as its perfect, persistent memory. This organized, interlinked data is the true "oxygen" that allows an AI to develop a deep, evolving understanding of your context beyond single-session interactions.

Medium's platform automatically converted double hyphens to em dashes for years, a stylistic preference of founder Evan Williams. This saturated its content with the punctuation mark, causing AI models trained on its vast corpus to replicate this quirk, effectively becoming a "tell" for AI-generated text.

The simple, text-based structure of Markdown (.md) files is uniquely suited for both AI processing and human readability. This dual compatibility is establishing it as the default file format for the AI era, ideal for creating knowledge bases and training documents that both humans and agents can easily use.

The traditional competitor for B2B tools was an Excel spreadsheet. In the AI era, it's a simple, version-controlled Markdown file within an IDE. If a SaaS offering for documentation or project management can't provide more value than this highly flexible, interoperable setup, it will lose.

Instead of a complex database, store content for personal AI tools as simple Markdown files within the code repository. This makes information, like research notes, easily renderable in a web UI and directly accessible by AI agents for queries, simplifying development and data management for N-of-1 applications.
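The "database" in this setup can be as small as one function that walks the repository. A minimal sketch, assuming the notes live as `.md` files under some root directory (the directory layout is an assumption):

```python
import tempfile
from pathlib import Path

def load_notes(root) -> dict[str, str]:
    """Map each Markdown file's relative path to its content."""
    base = Path(root)
    return {str(p.relative_to(base)): p.read_text(encoding="utf-8")
            for p in sorted(base.rglob("*.md"))}

# Demo against a throwaway directory so the sketch is self-contained.
tmp = Path(tempfile.mkdtemp())
(tmp / "research.md").write_text("# Research\nNotes on agents.", encoding="utf-8")
notes = load_notes(tmp)
```

The same dictionary can back a web UI renderer and an agent's retrieval step, with git providing history and sync for free.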

By storing all tasks and notes in local, plain-text Markdown files, you can use an LLM as a powerful semantic search engine. Unlike keyword search, it can find information even if you misremember details, inferring your intent to locate the correct file across your entire knowledge base.
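A real setup would hand the candidates to an LLM or an embedding model; as an offline stand-in, a bag-of-words cosine similarity sketches the ranking step (the scoring function here is a deliberate simplification, not the LLM-based search the text describes):

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercased word counts as a toy document vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_match(query: str, notes: dict[str, str]) -> str:
    """Return the filename whose content best matches the query."""
    q = vectorize(query)
    return max(notes, key=lambda name: cosine(q, vectorize(notes[name])))

notes = {
    "taxes-2023.md": "Filed federal taxes, refund pending, accountant notes.",
    "trip-plan.md": "Packing list and itinerary for the spring hiking trip.",
}
```

Swapping `cosine` for LLM-scored relevance is what buys the intent inference the text describes: the model matches "that money the government owes me" to the tax note even with zero word overlap.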

When building multi-agent systems, tailor the output format to the recipient. While Markdown is best for human readability, agents communicating with each other should use JSON. LLMs can parse structured JSON data more reliably and efficiently, reducing errors in complex, automated workflows.
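A minimal sketch of such an agent-to-agent envelope; the field names (`sender`, `intent`, `payload`) are illustrative assumptions, not any published standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str    # which agent produced the message
    intent: str    # what the recipient should do with it
    payload: dict  # arbitrary structured arguments

def encode(msg: AgentMessage) -> str:
    """Serialize a message to a JSON string for transport."""
    return json.dumps(asdict(msg))

def decode(raw: str) -> AgentMessage:
    """Parse a JSON string back into a typed message."""
    return AgentMessage(**json.loads(raw))

msg = AgentMessage("researcher", "summarize", {"doc_id": "ep-42"})
round_tripped = decode(encode(msg))
```

The schema gives the receiving agent unambiguous fields to act on, where a Markdown message would need brittle prose parsing.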

Consolidate key company information—brand voice, copywriting rules, founder stories, and playbooks—into structured markdown (.md) files. This creates a portable knowledge base that can be used to consistently train any AI model, ensuring high-quality output across applications.

A new best practice for "Agent Experience" is using content negotiation to serve different payloads to AI agents. When an AI crawler requests a page, the server can respond with raw Markdown instead of rendered HTML, significantly reducing token consumption and making the site more "agent-friendly."
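The negotiation itself can be a small, framework-agnostic function. A sketch assuming detection via the `Accept` header or a list of crawler User-Agent substrings (the specific UA hints and the `text/markdown` media type choice are assumptions about the deployment):

```python
# User-Agent substrings treated as "AI agent" traffic (assumed list).
AGENT_UA_HINTS = ("GPTBot", "ClaudeBot", "PerplexityBot")

def negotiate(headers: dict, html_body: str, md_body: str) -> tuple[str, str]:
    """Pick (content_type, body) for a request: Markdown for agents, HTML otherwise."""
    accept = headers.get("Accept", "")
    ua = headers.get("User-Agent", "")
    if "text/markdown" in accept or any(hint in ua for hint in AGENT_UA_HINTS):
        return "text/markdown", md_body
    return "text/html", html_body

ctype, body = negotiate({"User-Agent": "GPTBot/1.0"}, "<h1>Hi</h1>", "# Hi")
```

Serving the Markdown source skips layout markup, scripts, and styles, which is where most of the token savings come from.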