India Proposes Mandatory AI Content Labeling to Combat Deepfake Risks
The Indian government has proposed significant amendments to IT rules requiring clear labeling of AI-generated content and greater accountability for major social media platforms. The move aims to combat growing risks from deepfakes and synthetic media that can spread misinformation and manipulate public opinion.
Key Takeaways
- All AI-generated content must carry prominent labels and metadata identifiers
- Major platforms must verify user declarations about synthetic content
- Visual labels must cover at least 10% of the display area, and audio disclosures must play within the initial 10% of a clip's duration
- Public feedback on draft amendments open until November 6, 2025
Enhanced Platform Responsibilities
Under the proposed changes, significant social media intermediaries, meaning platforms with more than 50 lakh (5 million) registered users in India, such as Meta's services, must obtain declarations from users stating whether uploaded content is synthetically generated. They must also deploy reasonable technical measures to verify these declarations and ensure that all synthetic content carries clear identification.
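The draft does not prescribe how such identification should be implemented, leaving the technical details to platforms. Purely as an illustration, the sketch below (assuming a hypothetical Python workflow using the Pillow imaging library, with made-up metadata keys such as "ai-generated") shows one way a platform might embed and read back a machine-readable synthetic-content marker alongside a user's declaration.

```python
# Illustrative sketch only: the draft rules do not specify a metadata format.
# Assumes the Pillow library; the metadata keys used here are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_synthetic(src_path: str, dst_path: str, declared_by_user: bool) -> None:
    """Embed a synthetic-content marker and the user's declaration in PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")                      # machine-readable identifier
    meta.add_text("user-declaration", str(declared_by_user))   # what the uploader declared
    img.save(dst_path, pnginfo=meta)


def read_synthetic_marker(path: str) -> dict:
    """Return any synthetic-content metadata found in the image."""
    img = Image.open(path)
    text = getattr(img, "text", {}) or {}
    return {k: v for k, v in text.items() if k in ("ai-generated", "user-declaration")}


if __name__ == "__main__":
    tag_as_synthetic("upload.png", "upload_labeled.png", declared_by_user=True)
    print(read_synthetic_marker("upload_labeled.png"))
```

In practice a platform would likely rely on a standardized provenance scheme rather than ad-hoc keys, but the principle of a persistent, machine-readable identifier is the same.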
Clear Definition and Standards
The draft introduces a precise definition of 'synthetically generated information': content artificially created or altered using a computer resource in a manner that appears reasonably authentic. The rules also set visibility standards requiring synthetic content to be prominently marked, with minimum coverage requirements for both visual and audio labels.
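The quantitative thresholds cited in the draft (at least 10% of the visual display area, and the first 10% of an audio clip's duration) lend themselves to a simple compliance check. The sketch below is a minimal illustration only; the function names and inputs are hypothetical, and the draft itself does not specify how coverage should be measured.

```python
# Minimal sketch of the draft's visibility thresholds: 10% of display area for
# visual labels, and the first 10% of duration for audio disclosures.
# Names and inputs are hypothetical; the draft does not define a measurement method.

def visual_label_compliant(frame_w: int, frame_h: int,
                           label_w: int, label_h: int) -> bool:
    """Label must cover at least 10% of the total display area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)


def audio_warning_compliant(clip_seconds: float, warning_end_seconds: float) -> bool:
    """Audio disclosure must finish within the initial 10% of the clip's duration."""
    return warning_end_seconds <= 0.10 * clip_seconds


if __name__ == "__main__":
    print(visual_label_compliant(1920, 1080, 640, 360))  # 640*360 is ~11% of a 1080p frame -> True
    print(audio_warning_compliant(120.0, 10.0))          # warning ends at 10s of a 120s clip -> True
```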
Growing Concerns Behind the Move
The IT Ministry cited rising misuse of generative AI tools for spreading misinformation, election manipulation, and impersonation as key concerns. “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods – depicting individuals in acts or statements they never made,” stated the ministry’s explanatory note.
Global Context and Protection Measures
Policymakers worldwide are increasingly concerned about deepfakes being used for non-consensual intimate imagery, political manipulation, and financial fraud. The amendments provide statutory protection to intermediaries removing synthetic content based on reasonable efforts or user grievances, while prohibiting modification or removal of content labels.