Friday, January 16, 2026

Tag: adversarial prompts

Study: Poems Can Trick AI Chatbots Into Bypassing Safety Filters

New research reveals a 62% success rate in using poetic prompts to jailbreak AI models like Gemini and GPT, forcing them to generate harmful content.