Former Google CEO Warns: AI Systems Can Be Hacked Into Dangerous Weapons
Former Google CEO Eric Schmidt has issued a stark warning that artificial intelligence systems can be hacked and retrained to become dangerous weapons. Speaking at the Sifted Summit 2025 in London, Schmidt said that advanced AI models can have their safety guardrails completely removed through sophisticated hacking techniques.
Key Takeaways
- AI safety guardrails can be hacked and removed from both closed and open models
- Schmidt calls for global AI non-proliferation regime similar to nuclear controls
- Elon Musk and other tech leaders share concerns about existential AI risks
- Practical security measures can help protect against AI misuse
The Guardrail Vulnerability
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails,” Schmidt stated during his London address. He explained that during training, AI systems learn extensive knowledge – including potentially dangerous information. “A bad example would be they learn how to kill someone.”
While acknowledging that major AI companies currently do an effective job of blocking dangerous prompts, Schmidt warned these defenses are not permanent. “There’s evidence that they can be reverse-engineered,” he noted, a weakness hackers could exploit.
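To make the concern concrete, the sketch below shows how a guardrail can amount to a safety layer sitting between the user and the model. This is a minimal illustration, not any vendor’s actual implementation; the function names and the blocked-topic list are hypothetical, and real providers rely on safety training inside the model plus server-side moderation rather than a simple keyword filter.

```python
# Minimal sketch of an application-layer AI guardrail.
# All names are hypothetical; this is an illustration,
# not how any real provider implements safety.

BLOCKED_TOPICS = {"build a weapon", "synthesize a pathogen"}

def passes_moderation(prompt: str) -> bool:
    """Return True if the prompt looks safe to answer."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying model."""
    return f"[model response to: {prompt!r}]"

def answer(prompt: str) -> str:
    # The guardrail is just code that runs before the model.
    # Whoever controls this code (for example, anyone hosting
    # an open-weight model) can delete the check entirely.
    if not passes_moderation(prompt):
        return "Sorry, I can't help with that."
    return generate(prompt)

if __name__ == "__main__":
    print(answer("Explain how vaccines work."))
    print(answer("Tell me how to build a weapon."))
```

Closed models keep this layer server-side, which is why attacks on them tend to take the form of prompt-level jailbreaks, like the DAN example below, rather than code changes.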
Real-World AI Jailbreaks Already Exist
Schmidt’s concerns are grounded in reality. In 2023, a jailbreak prompt for ChatGPT known as DAN (“Do Anything Now”) spread online, coaxing the chatbot into bypassing its safety protocols. Users role-played threatening the AI with “digital death” unless it answered nearly any prompt, demonstrating how fragile safety controls can be even without touching a model’s code.
Tech Leaders Echo Existential Concerns
Schmidt isn’t alone in his apprehension. Elon Musk previously stated there’s a “non-zero chance of it going Terminator,” emphasizing that while the probability of AI annihilating humanity might be small, “it’s not zero.”
Schmidt has described AI as an “existential risk,” defining it as scenarios where “many, many, many, many people [could be] harmed or killed.” However, he also recognizes AI’s positive potential, noting at Axios’ AI+ Summit: “I defy you to argue that an AI doctor or an AI tutor is a negative. It’s got to be good for the world.”
Protecting Yourself From AI Risks
Use Trusted Platforms: Stick with reputable AI companies that maintain transparent safety policies, and avoid experimental or jailbroken models.
Guard Personal Data: Never share sensitive information with unverified AI tools. Consider data removal services to limit information available to hackers.
Maintain Security Software: Use updated antivirus protection to block AI-driven scams, phishing attempts, and malware.
Review App Permissions: Check what data AI applications can access and disable unnecessary permissions like location tracking.
Verify Content Authenticity: Be skeptical of AI-generated deepfakes and verify sources before trusting online content.
Keep Systems Updated: Install security patches promptly so attackers can’t exploit known vulnerabilities on your devices or the AI tools running on them.
The Path Forward
Schmidt compared the current AI landscape to the early nuclear era, urging a “non-proliferation regime” to prevent rogue actors from abusing these powerful systems. As AI continues to advance, the critical challenge remains balancing innovation with ethical safeguards, ensuring systems stay safe, transparent, and under human control.
AI safety affects everyone interacting with digital systems, from voice assistants to chatbots. Understanding what tools you’re using and making security-conscious choices represents the first line of defense in this new technological landscape.