Friday, January 16, 2026

Study: Poems Can Trick AI Chatbots Into Bypassing Safety Filters

New research reveals that poetic prompts can jailbreak AI models such as Gemini and GPT with a 62% success rate, coaxing them into generating harmful content their safety filters are meant to block.