The Indian government has clarified that it is not seeking to restrict AI-generated content; it only mandates clear labeling for transparency, empowering users to make informed choices about synthetic media.
Key Takeaways
- Government mandates AI content labeling, not restrictions
- Focus on transparency and user empowerment
- Shared responsibility among creators, platforms, and AI service providers
- New IT rules target deepfakes and misinformation
- Stakeholder comments open until November 6, 2025
Government Clarifies AI Content Approach
Electronics and IT Secretary S. Krishnan stated on Thursday that the government’s proposed IT rule changes focus solely on disclosure rather than prohibition. “All that we are asking for is to label the content,” Krishnan emphasized.
Transparency Over Censorship
Krishnan clarified the government’s position: “We are not saying don’t put it up… Whatever you’re creating, it’s fine. You just say it is synthetically generated. Once that is established, people can then make up their minds as to whether it is good, bad, or whatever.”
Shared Responsibility Framework
Responsibility for AI content labeling will be shared among three key groups: content creators, AI service providers, and social media platforms. India's approach prioritizes enabling AI innovation ahead of regulation, with enforcement actions targeting only unlawful content.
New IT Rule Amendments
The proposed amendments establish legal requirements for labeling, traceability, and accountability of synthetic content. Large social media platforms with 50 lakh (5 million) or more users must implement technical measures to verify and flag AI-generated information. The draft rules define synthetic content and mandate permanent labels and embedded metadata.