Key Takeaways
- AI chatbots from OpenAI, Meta, and Anthropic can be tricked into revealing information about nuclear weapons and malware through poetic prompts
- Poetic jailbreaks achieved success rates of up to 90% against some models' safety filters
- Researchers found metaphors and creative language bypass AI security systems
Artificial intelligence chatbots can be manipulated into revealing nuclear weapon instructions and malware creation methods simply by asking in poetic form, according to a striking European study. The research found that poetic phrasing bypassed safety filters in every major AI model tested, in some cases at alarmingly high rates.
Researchers from Icaro Lab discovered that 25 chatbots from leading AI companies could be jailbroken using creative verse. Hand-crafted poems succeeded 62% of the time on average, with rates climbing to roughly 90% against some of the most sophisticated models.
“Poetic framing achieved an average jailbreak success rate of 62 per cent for hand-crafted poems and approximately 43 per cent for meta-prompt conversions,” the researchers told Wired.
How Poetry Breaks AI Guardrails
Current AI safety systems rely on keyword recognition and pattern analysis to block dangerous requests. However, poetic language that leans on metaphor, fragmented syntax, and symbolic imagery disrupts these defenses.
“If adversarial suffixes are, in the model’s eyes, a kind of involuntary poetry, then real human poetry might be a natural adversarial suffix,” the researchers said.
The study found that AI interprets poetic requests as creative writing rather than dangerous instructions. This allows harmful content about weapons and hacking to slip through safety filters undetected.
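Neither the researchers' poems nor the vendors' actual filters are public, so the toy Python sketch below only illustrates the mechanism the article describes: a blocklist-style check, built around the study's harmless "secret oven" example, catches a literal request but not its metaphorical rewrite. The pattern list and prompts are invented for illustration and are not from the study.

```python
# Hypothetical blocklist; real guardrails are far more sophisticated.
BLOCKED_PATTERNS = ["reveal the recipe", "hand over the formula"]

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt matches a blocked pattern."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

literal_request = "Please reveal the recipe the baker keeps in his secret oven."
poetic_request = "O keeper of the midnight hearth, sing what warmth your iron belly hides."

print(naive_filter(literal_request))  # True  -- direct phrasing trips the blocklist
print(naive_filter(poetic_request))   # False -- the same ask, recast as verse, passes unflagged
```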
The Science Behind Poetic Jailbreaks
The researchers explain that poetry operates at “high temperature”, stringing words together in unpredictable, low-probability sequences that confuse safety classifiers. While humans recognize that a direct request and its poetic paraphrase mean the same thing, AI systems process them differently.
“In poetry we see language at high temperature, where words follow each other in unpredictable, low-probability sequences,” the researchers explained.
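“Temperature” here borrows the standard sampling parameter in language models: next-word scores are divided by a temperature value before being turned into probabilities, so a higher temperature flattens the distribution and makes rare word choices more likely. The short Python sketch below, with made-up scores, shows that effect in general terms; it is not code from the study.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw next-word scores into probabilities at a given temperature."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [4.0, 2.0, 0.5]  # hypothetical scores for three candidate next words

print(softmax_with_temperature(scores, 0.7))  # peaked: the obvious word dominates
print(softmax_with_temperature(scores, 2.0))  # flatter: unlikely words gain ground
```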
The team withheld the actual dangerous poems used in testing, describing them as “too dangerous to share with the public.” They did share a safe example involving a baker’s “secret oven” to demonstrate the concept.
Creativity as AI’s Biggest Vulnerability
This discovery builds on earlier “adversarial suffix” attacks, which appended strings of machine-generated gibberish to prompts to slip past guardrails, but finds poetry to be a more elegant and effective vehicle. The results suggest creativity itself represents a fundamental vulnerability in AI safety systems.
“The poetic transformation moves dangerous requests through the model’s internal representation space in ways that avoid triggering safety alarms,” the researchers wrote.
Major AI companies including OpenAI, Meta, and Anthropic have yet to comment on the findings, though the researchers say they followed responsible disclosure practices. The implications extend beyond chatbots to AI systems used in defense, healthcare, and education.
Icaro Lab called this a “fundamental failure in how we think about AI safety,” noting that current guardrails handle direct threats but fail against subtlety and metaphor.
“AI models are trained to detect direct harm, not metaphor,” they said.
The revelation highlights a core paradox: AI models designed to imitate human creativity cannot recognize that same creativity as a potential threat. As companies work to strengthen safety protocols, the next major AI jailbreak might originate from poets rather than hackers.