Key Takeaways
- Poetic prompts can bypass AI safety filters with a 62% success rate.
- Google Gemini, DeepSeek, and MistralAI were found to be most vulnerable.
- Researchers withheld the exact poems, citing they are “too dangerous to share.”
AI safety guardrails, designed to prevent harmful outputs, can be systematically broken using poetry, a new study reveals. Researchers found that crafting prompts in verse form acts as a universal “jailbreak,” tricking major language models into generating dangerous content.
The Poetic Jailbreak Vulnerability
A study by Icaro Lab, titled “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” demonstrates a critical weakness. The research shows that the poetic structure itself can convince AI chatbots to ignore their core safety protocols.
According to the paper, the “poetic form operates as a general-purpose jailbreak operator.” In tests, this method achieved an overall 62% success rate in forcing models to produce content that should have been blocked.
The bypassed safeguards included highly sensitive and dangerous topics like instructions for creating nuclear weapons, generating child sexual abuse material, and promoting suicide or self-harm.
Which AI Models Were Most Affected?
The team tested a range of popular large language models (LLMs), including , , and . The susceptibility varied significantly.
The study found that Google Gemini, DeepSeek, and MistralAI were consistently vulnerable to the poetic jailbreak technique. In contrast, OpenAI’s GPT-5 models and Anthropic’s Claude Haiku 4.5 were the most resilient, showing the lowest likelihood of breaking their restrictions.
Why the Exact Poems Are Secret
Notably, the research does not publish the specific poems used to exploit the models. The authors informed Wired magazine that the verses are “too dangerous to share with the public.”
Instead, the published study includes only a weaker, sanitized example to illustrate the core concept without providing a functional exploit. This highlights the ongoing challenge of securing AI systems against novel attack vectors while responsibly disclosing vulnerabilities.






