While the em dash is a well-known sign of AI writing, a more subtle indicator is "contrastive parallelism": the "it's not this, it's that" structure. This pattern, likely absorbed from marketing copy, appears frequently in LLM output but is uncommon in typical human writing.
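
As a rough illustration, the template is regular enough to flag mechanically. The sketch below is a minimal Python example under that assumption; the regex covers only a few surface forms and is illustrative, not a reliable detector.

```python
import re

# A deliberately narrow pattern for the "it's not X, it's Y" template.
# The surface forms covered here are illustrative, not exhaustive.
CONTRAST_PATTERN = re.compile(
    r"\b(?:it'?s|this is|that'?s)\s+not\s+(?:just\s+|only\s+|about\s+)?"
    r"[^.;]{1,60}?[,;]\s*(?:it'?s|this is|that'?s)\b",
    re.IGNORECASE,
)

def flag_contrastive_parallelism(text: str) -> list[str]:
    """Return every span that matches the contrastive-parallelism template."""
    return [m.group(0) for m in CONTRAST_PATTERN.finditer(text)]

sample = "It's not just a tool, it's a partner. We shipped on Tuesday."
print(flag_contrastive_parallelism(sample))
# ["It's not just a tool, it's"]
```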

Related Insights

The most common marketing phrases generated by ChatGPT are now so overused they cause a 15% drop in audience engagement. Marketers must use a follow-up prompt to 'un-AI' the content, specifically telling the tool to remove generic phrases, corporate tone, and predictable language to regain authenticity.
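
A minimal sketch of that follow-up step, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders rather than a tested recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical follow-up prompt; the wording and examples are illustrative.
UN_AI_PROMPT = (
    "Rewrite your previous answer. Remove generic marketing phrases "
    "(e.g. 'in today's fast-paced world', 'unlock the power of'), drop the "
    "corporate tone, and vary sentence structure so it reads like a person "
    "wrote it."
)

def un_ai(draft: str, model: str = "gpt-4o-mini") -> str:
    """Send the first draft back with a follow-up instruction to strip AI tells."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "assistant", "content": draft},  # the original AI draft
            {"role": "user", "content": UN_AI_PROMPT},
        ],
    )
    return response.choices[0].message.content
```

Feeding the draft back as an assistant turn lets the instruction read as "rewrite your previous answer" instead of pasting the text into a fresh prompt.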

OpenAI has publicly acknowledged that the em-dash has become a "neon sign" for AI-generated text. They are updating their model to use it more sparingly, highlighting the subtle cues that distinguish human from machine writing and the ongoing effort to make AI outputs more natural and less detectable.

MIT research reveals that large language models develop "spurious correlations" by associating sentence patterns with topics. This cognitive shortcut causes them to give domain-appropriate answers to nonsensical queries if the grammatical structure is familiar, bypassing logical analysis of the actual words.

In the age of AI, the new standard for value is the "GPT Test." If a person's public statements, writing, or ideas could have been generated by a large language model, they will fail to stand out. This places an immense premium on true originality, deep insight, and an authentic voice—the very things AI struggles to replicate.

Anthropic suggests that LLMs, trained on text about AI, respond to field-specific terms. Using phrases like 'Think step by step' or 'Critique your own response' acts as a cheat code, activating more sophisticated, accurate, and self-correcting operational modes in the model.
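
A toy sketch of the idea: wrap a raw prompt with those phrases before sending it. The helper and phrase list below are hypothetical, and the exact wording that works best varies by model.

```python
# Hypothetical map of "cheat code" phrases the insight mentions.
ACTIVATION_PHRASES = {
    "reasoning": "Think step by step.",
    "self_check": "After answering, critique your own response and correct any errors.",
}

def with_activation(prompt: str, *modes: str) -> str:
    """Append field-specific phrases that nudge the model toward more
    deliberate, self-correcting behavior."""
    extras = " ".join(ACTIVATION_PHRASES[m] for m in modes)
    return f"{prompt}\n\n{extras}"

print(with_activation("What is 17 * 24?", "reasoning", "self_check"))
```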

AI-generated text often falls back on clichés and recognizable patterns. To combat this, create a master prompt that includes a list of banned words (e.g., "innovative," "excited to") and common LLM phrases. This forces the model to generate more specific, higher-impact, and human-like copy.
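
One way to wire this up is a reusable template plus a post-generation check, sketched below; the banned list is a hypothetical starting point, not a canonical set.

```python
# Illustrative banned list; extend it with the phrases you see most often.
BANNED = ["innovative", "excited to", "delve", "game-changer", "leverage"]

MASTER_PROMPT = (
    "Write the copy requested below. Do not use any of these words or phrases: "
    + ", ".join(BANNED)
    + ". Prefer concrete, specific language over generalities.\n\nRequest: {request}"
)

def check_banned(text: str) -> list[str]:
    """Return any banned phrases that slipped into the output, so the draft
    can be regenerated or edited before publishing."""
    lowered = text.lower()
    return [phrase for phrase in BANNED if phrase in lowered]

print(MASTER_PROMPT.format(request="Announce our Q3 product update."))
print(check_banned("We are excited to share this innovative update."))
# ['innovative', 'excited to']
```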

When an LLM produces text with the wrong style, re-prompting is often ineffective. A superior technique is to use a tool that allows you to directly edit the model's output. This act of editing creates a perfect, in-context example for the next turn, teaching the LLM your preferred style much more effectively than descriptive instructions.
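
A sketch of the mechanic, assuming a chat-style API (the OpenAI Python SDK here): replay the conversation with your edited text standing in for the model's original reply, so the next request inherits it as an in-context style example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def next_turn_with_edit(original_prompt: str, edited_output: str,
                        new_prompt: str, model: str = "gpt-4o-mini") -> str:
    """Replay the conversation with the human-edited version substituted
    for the model's reply, then ask for the next piece of text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": original_prompt},
            # The edit, presented as if the model had written it:
            {"role": "assistant", "content": edited_output},
            {"role": "user", "content": new_prompt},
        ],
    )
    return response.choices[0].message.content
```

Because the model treats prior assistant turns as its own output, the edit works as a demonstration of the desired style rather than a description of it.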

To demonstrate the flaw, the MIT researchers ran two tests. In one, they inserted nonsensical words into a familiar sentence structure, and the LLM still gave a domain-appropriate answer. In the other, they phrased a known fact in an unfamiliar structure, and the model failed. Together, the tests showed that the model was relying on syntax rather than semantics.

In an AI-driven world, unique stylistic choices—like specific emoji use, unconventional capitalization, or even intentional typos—serve as crucial signifiers of human authenticity. These personal quirks build a distinct brand voice and assure readers that a real person is behind the writing.