Key Takeaways
- Google launched AI-powered scam detection for Pixel phones that analyzes calls locally without cloud uploads.
- New Android security protocol (ePNV) replaces SMS OTPs with cryptographic verification via mobile operators.
- SynthID watermarking now protects over 10 billion AI-generated images, videos, and audio files.
- Android’s AI defenses block nearly 2 billion spam/scam messages monthly in India.
Google has unveiled major AI-driven security upgrades, headlined by a real-time scam detection feature for Pixel phones. Powered by Gemini Nano, this on-device tool analyzes live call patterns to flag potential fraud without sending conversations to the cloud.
The company developed the model in collaboration with fintech partners including Google Pay, Navi, and Paytm, signaling deeper safety integration across financial services. These initiatives follow Google’s privacy-enhancing technologies (PETs) approach, using methods such as federated learning and homomorphic encryption, which the company says comply with India’s Digital Personal Data Protection (DPDP) Act.
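Federated learning, one of the PETs mentioned above, lets a central model improve from many devices without any raw data leaving them. The sketch below is a minimal, illustrative federated averaging loop, not Google's implementation; real deployments add secure aggregation, clipping, and noise.

```python
# Minimal federated averaging (FedAvg) sketch. Illustrative only:
# the devices, data, and model here are toy assumptions.

def local_update(w, data, lr=0.1):
    """Each device trains on its own data; raw samples never leave it."""
    # Toy gradient steps for a 1-D linear model y = w * x.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(updates):
    """The server only ever sees model updates, never user data."""
    return sum(updates) / len(updates)

# Three simulated devices, each holding private (x, y) pairs from y ≈ 2x.
devices = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(0.5, 1.0), (1.5, 3.0)],
    [(3.0, 5.9)],
]
w_global = 0.0
for _ in range(20):  # communication rounds
    updates = [local_update(w_global, d) for d in devices]
    w_global = federated_average(updates)
print(w_global)  # converges near the true slope of 2
```

Each round, only the averaged weight crosses the network, which is what lets the server learn a shared spam model while individual call or message data stays on the phone.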
Enhanced Phone Number Verification
Alongside scam detection, Google introduced enhanced phone number verification (ePNV) for Android. This protocol provides a secure alternative to SMS-based one-time passwords by verifying device-linked numbers directly through mobile operators using cryptographic checks. By reducing reliance on SMS OTPs, Google aims to combat phishing attacks that target text-based authentication.
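The ePNV design has not been published in detail, so the following is only a hedged sketch of the general idea the article describes: the carrier cryptographically attests that the SIM in the device owns the claimed number, and the service verifies that attestation instead of texting a code. All names, keys, and the HMAC-based flow here are assumptions for illustration.

```python
# Hypothetical carrier-backed number verification replacing an SMS OTP.
# The real ePNV protocol is not public; this challenge-response flow,
# the shared key, and the function names are illustrative assumptions.
import hashlib
import hmac
import secrets

CARRIER_KEY = secrets.token_bytes(32)  # assumed shared carrier/service secret

def carrier_attest(phone_number: str, challenge: bytes) -> bytes:
    """Carrier signs the (number, challenge) pair over the data channel --
    the user never types a code, so there is nothing for phishers to steal."""
    return hmac.new(CARRIER_KEY, phone_number.encode() + challenge,
                    hashlib.sha256).digest()

def service_verify(phone_number: str, challenge: bytes, tag: bytes) -> bool:
    """Service recomputes the signature and compares in constant time."""
    expected = hmac.new(CARRIER_KEY, phone_number.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

challenge = secrets.token_bytes(16)  # fresh per sign-in attempt
tag = carrier_attest("+911234567890", challenge)
print(service_verify("+911234567890", challenge, tag))   # True
print(service_verify("+919999999999", challenge, tag))   # False: wrong number
```

Because the fresh challenge is bound into the signature, a captured attestation cannot be replayed for a later login, which is exactly the weakness of forwarded or phished SMS OTPs.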
According to Google, Android’s AI defenses now block approximately 10 billion malicious messages, spam, and scam calls globally each month. Nearly 2 billion of these interventions occur in India through lightweight models running locally on smartphones. Google Pay alone issues close to one million warnings weekly for potentially fraudulent transactions.
Combating Deepfakes with Watermarking
To address misinformation and deepfakes, Google showcased its SynthID watermarking technology, which has already been embedded into more than 10 billion AI-generated images, videos, and audio clips. The watermark remains imperceptible to users while enabling platforms to identify synthetic media without degrading quality.
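SynthID's actual technique is proprietary and far more robust (it is designed to survive cropping, compression, and re-encoding). The toy sketch below only illustrates the core idea of an invisible watermark: a mark that barely perturbs pixel values yet is reliably detectable by a party that knows the signature.

```python
# Toy invisible watermark: hide one signature bit per pixel in the
# least-significant bit. This is NOT SynthID's method, just the concept.
def embed(pixels, bits):
    """Overwrite each pixel's least-significant bit with a signature bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def detect(pixels, signature):
    """A platform checks whether the expected bit pattern is present."""
    return [p & 1 for p in pixels[:len(signature)]] == signature

signature = [1, 0, 1, 1, 0, 0, 1, 0]          # detector's known mark
image = [200, 201, 198, 57, 58, 60, 61, 59]   # 8-bit grayscale pixels
marked = embed(image, signature)

print(max(abs(a - b) for a, b in zip(image, marked)))  # at most 1: invisible
print(detect(marked, signature))                        # True
print(detect(image, signature))                         # False for this image
```

Changing only the lowest bit shifts each pixel by at most one intensity level, which is imperceptible, mirroring the article's point that detection need not compromise quality. A production scheme embeds the mark redundantly and statistically so it survives edits, which this toy version does not.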
The company also reaffirmed its long-term localization strategy through the Google Safety Engineering Centre in India. Collaborations with IIT Madras and the Centre for Responsible AI will focus on developing language-agnostic safety benchmarks and expanding AI talent pipelines in the region.