Friday, January 16, 2026

Tag: LLM security

Study: Poems Can Trick AI Chatbots Into Bypassing Safety Filters

New research reports a 62% success rate for poetic prompts used to jailbreak AI models such as Gemini and GPT, coaxing them into generating harmful content.

Google Warns of AI Malware That Thinks and Rewrites Its Own Code

Google describes a new breed of self-evolving AI malware that can adapt its own code to evade detection, marking a dangerous shift in cyber threats for 2025.