
LLMs are trained to produce high-probability, common information, making it hard to surface rare knowledge. The solution is to programmatically create prompts that combine unlikely concepts. This forces the model into an improbable state, compelling it to search the long tail of its knowledge base rather than relying on common associations.
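This technique can be sketched in a few lines. The concept pools and prompt template below are illustrative placeholders, not a prescribed list — the point is that the pairing is done programmatically, not by hand:

```python
import random

# Hypothetical pools of deliberately unrelated concepts -- swap in your own.
DOMAINS = [
    "Byzantine fault tolerance",
    "mycology",
    "medieval guild law",
    "queueing theory",
    "origami mathematics",
]

TEMPLATE = (
    "Explain a non-obvious connection between {a} and {b}, "
    "drawing on the least commonly discussed aspect of each."
)

def improbable_prompt(rng=random):
    """Pair two unrelated concepts to push the model off its
    high-probability paths and into the long tail of its knowledge."""
    a, b = rng.sample(DOMAINS, 2)
    return TEMPLATE.format(a=a, b=b)
```

Because the pairing is random, each run lands the model in a different low-probability region of concept space.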

Related Insights

Expert-level prompting isn't about writing one-off commands. The advanced technique is to find effective prompt frameworks (e.g., a leaked system prompt), distill their core principles, and configure a custom GPT around that methodology. This creates a specialized AI that can generate sophisticated prompts for you.
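The distillation step can itself be automated with a meta-prompt. A minimal sketch (the wording of the instructions is an assumption, not a quoted framework):

```python
def build_distillation_prompt(framework_text: str) -> str:
    """Meta-prompt asking a model to extract reusable principles from a
    prompt framework, phrased as standing instructions for a custom GPT."""
    return (
        "Below is a prompt framework (for example, a leaked system prompt).\n"
        "1. List the core principles it relies on.\n"
        "2. Rewrite those principles as standing instructions for a custom GPT\n"
        "   whose job is to generate sophisticated prompts on demand.\n\n"
        f"FRAMEWORK:\n{framework_text}"
    )
```

The returned text is then pasted into the custom GPT's instructions field, turning a one-off framework into a reusable prompt generator.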

LLMs shine when acting as a 'knowledge extruder'—shaping well-documented, 'in-distribution' concepts into specific code. They fail when the core task is novel problem-solving where deep thinking, not code generation, is the bottleneck. In these cases, the code is the easy part.

According to Demis Hassabis, LLMs feel uncreative because they only perform pattern matching. To achieve true, extrapolative creativity like AlphaGo's famous 'Move 37,' models must be paired with a search component that actively explores new parts of the knowledge space beyond the training data.

Instead of manually crafting a system prompt, feed an LLM multiple "golden conversation" examples. Then, ask the LLM to analyze these examples and generate a system prompt that would produce similar conversational flows. This reverses the typical prompt engineering process, letting the ideal output define the instructions.
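A minimal sketch of that reversal, assuming conversations are stored as lists of `{"role": ..., "content": ...}` turns (a common but not universal format):

```python
def reverse_engineer_prompt(golden_conversations: list[list[dict]]) -> str:
    """Pack example conversations into a single request asking the model
    to infer a system prompt that would reproduce them."""
    blocks = []
    for i, convo in enumerate(golden_conversations, 1):
        lines = [f"{turn['role']}: {turn['content']}" for turn in convo]
        blocks.append(f"--- Example {i} ---\n" + "\n".join(lines))
    return (
        "Analyze the example conversations below and write a system prompt "
        "that would make an assistant produce the same tone, structure, and "
        "conversational flow.\n\n" + "\n\n".join(blocks)
    )
```

Feeding three to five golden examples usually gives the model enough signal to generalize the pattern rather than parrot one transcript.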

The true power of AI for knowledge work is formulating unique prompts derived from obscure or cross-disciplinary knowledge. This allows users to extract novel ideas that standard queries miss, making deep, non-mainstream reading a key competitive advantage in the AI era.

With models like Gemini 3, the key skill is shifting from crafting hyper-specific, constrained prompts to making ambitious, multi-faceted requests. Users trained on older models tend to pare down their asks, but the latest AIs are 'pent up with creative capability' and yield better results from bigger challenges.

Generating truly novel and valid scientific hypotheses requires a specialized, multi-stage AI process. This involves using a reasoning model for idea generation, a literature-grounded model for validation, and a third system for checking originality against existing research. This layered approach overcomes the limitations of a single, general-purpose LLM.
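The layered structure can be expressed as a simple pipeline. Each stage below is a hypothetical callable standing in for a different model (a reasoning model, a literature-grounded validator, an originality checker) — the wiring is the point, not the stubs:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HypothesisPipeline:
    generate: Callable[[str], list[str]]   # reasoning model: topic -> candidates
    validate: Callable[[str], bool]        # literature-grounded plausibility check
    is_original: Callable[[str], bool]     # novelty check against prior research

    def run(self, topic: str) -> list[str]:
        """Only hypotheses that survive all three stages are returned."""
        candidates = self.generate(topic)
        return [h for h in candidates
                if self.validate(h) and self.is_original(h)]
```

Keeping the stages separate lets you swap in a stronger validator or novelty checker without touching the generator — exactly the flexibility a single general-purpose LLM lacks.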

An LLM's core function is predicting the next word. When it then encounters a token it assigned low probability, that token is, by definition, surprising to the model — a high-surprisal signal. This mechanism gives it an innate ability to identify "interesting" or novel concepts within a body of text.
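The same surprisal idea works at any scale. A self-contained toy version using an add-one-smoothed bigram model (a deliberate simplification of what a real LLM computes): surprisal is `-log2 P(word | previous word)`, so expected continuations score low and unexpected ones score high.

```python
import math
from collections import Counter, defaultdict

def bigram_surprisal(corpus: list[str], text: list[str]) -> list[float]:
    """Surprisal -log2 P(w | prev) of each transition in `text`, under an
    add-one-smoothed bigram model fit on `corpus`. High values flag the
    tokens the model finds 'interesting'."""
    vocab = set(corpus) | set(text)
    counts = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        counts[prev][word] += 1
    scores = []
    for prev, word in zip(text, text[1:]):
        total = sum(counts[prev].values())
        p = (counts[prev][word] + 1) / (total + len(vocab))
        scores.append(-math.log2(p))
    return scores
```

On a corpus where "the cat" is common, the continuation "the dog" scores higher surprisal than "the cat" — the model's prediction error is the novelty detector.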

Instead of giving an AI creative freedom, defining tight boundaries like word count, writing style, and even forbidden words forces the model to generate more specific, unique, and less generic content. A well-defined box produces a more creative result than an empty field.
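Building that "box" is mechanical enough to template. A minimal sketch — the parameter names and rule wording are illustrative assumptions:

```python
def constrained_prompt(task: str, max_words: int,
                       style: str, banned: list[str]) -> str:
    """Wrap a creative task in tight, explicit constraints: length cap,
    style target, and forbidden vocabulary."""
    rules = [
        f"Hard limit: {max_words} words.",
        f"Write in the style of {style}.",
        "Never use these words: " + ", ".join(banned) + ".",
    ]
    return task + "\nConstraints:\n" + "\n".join(f"- {r}" for r in rules)
```

Banning the model's favorite stock words ("beautiful", "delve") is often the single most effective constraint, because it forbids the highest-probability, most generic phrasing outright.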

To fully leverage advanced AI models, you must increase the ambition of your prompts. Their capabilities often surpass initial assumptions, so asking for more complex, multi-layered outputs is crucial to unlocking their true potential and avoiding underwhelming results.