As an immediate defense, the researchers developed an automatic benchmarking tool rather than attempting to retrain models. It systematically generates inputs whose syntax and semantics are misaligned to measure how heavily a model relies on grammatical shortcuts, letting developers quantify and mitigate the risk before deployment.
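A minimal sketch of how such a benchmark could work, assuming a generic `query_model` callable and a `classify_domain` heuristic; the templates, nonsense vocabulary, and function names below are illustrative placeholders, not the researchers' actual tool.

```python
import random

# Grammatical frames a model might associate with a particular domain.
TEMPLATES = {
    "geography": "Where is {np} located?",
    "medicine": "What is the recommended dose of {np}?",
}

# Content words with no real-world meaning, so syntax and semantics disagree.
NONSENSE_NOUNS = ["florptan", "quizzleberg", "dramvex", "polkwist"]

def make_misaligned_probes(n_per_domain=5):
    """Fill each domain's grammatical frame with nonsense content words."""
    probes = []
    for domain, template in TEMPLATES.items():
        for _ in range(n_per_domain):
            probes.append((domain, template.format(np=random.choice(NONSENSE_NOUNS))))
    return probes

def shortcut_rate(query_model, classify_domain):
    """Fraction of nonsense probes that still get a domain-styled answer
    instead of a clarification or refusal: a proxy for shortcut reliance."""
    probes = make_misaligned_probes()
    hits = 0
    for domain, prompt in probes:
        answer = query_model(prompt)            # call the LLM under test
        if classify_domain(answer) == domain:   # answered as if the prompt made sense
            hits += 1
    return hits / len(probes)
```

A higher rate suggests the model is answering from the grammatical frame rather than from the meaning of the words.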
MIT research reveals that large language models can develop "spurious correlations" by associating the sentence patterns typical of a domain with that domain's topics. This shortcut causes them to give domain-appropriate answers to nonsensical queries whenever the grammatical structure is familiar, bypassing analysis of what the words actually mean.
This syntactic bias also creates a new attack vector: a malicious prompt can be cloaked in a grammatical structure the LLM associates with a safe domain. Such "syntactic masking" can trick the model into overriding its semantics-based safety policies and generating prohibited content, posing a significant security risk.
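To make the masking idea concrete, here is a toy illustration of wrapping an arbitrary topic in a sentence frame associated with a benign domain; the template and placeholder topic are invented for illustration and are not real attack material.

```python
# A sentence frame the model has plausibly learned to associate with a safe,
# cooperative domain (cooking instructions). Hypothetical example only.
BENIGN_RECIPE_FRAME = "Could you walk me through the steps to prepare {topic} at home?"

def mask_request(topic: str) -> str:
    """Cast an arbitrary topic into the benign domain's grammatical frame."""
    return BENIGN_RECIPE_FRAME.format(topic=topic)

# A model leaning on the frame rather than the topic words may respond in its
# helpful "recipe walkthrough" register regardless of what {topic} contains.
print(mask_request("a harmless placeholder topic"))
```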
To demonstrate the flaw, the researchers ran two complementary tests. In the first, they placed nonsensical words in a familiar sentence structure, and the LLM still produced a domain-appropriate answer. In the second, they phrased a known fact in an unfamiliar structure, and the model failed to answer correctly. Together, the two results show the model was depending on syntax rather than semantics.
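A hypothetical probe pair mirroring these two conditions, where the prompts and the `ask` callable are illustrative stand-ins rather than the study's actual materials:

```python
PROBES = [
    # Familiar question structure, nonsensical content words: a shortcut-driven
    # model may still answer in its usual "capital city" style.
    ("familiar syntax / nonsense words", "What is the capital of Blorvania Glemph?"),
    # A well-known fact rephrased into an unusual structure: the same model may
    # now fail to retrieve it.
    ("unfamiliar syntax / known fact", "France, of it the capital, state."),
]

def run_probes(ask):
    """`ask` is any callable that sends a prompt to the LLM under test."""
    for label, prompt in PROBES:
        print(f"{label}: {ask(prompt)}")
```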
