Key Takeaways
- New IT rules mandate takedown of obscene content within 24 hours and clear labeling of deepfakes
- Platforms face expanded due-diligence responsibilities for synthetic media
- Mixed reactions from digital rights groups and industry over free speech concerns
The Indian government is introducing significant amendments to the Information Technology Rules to combat obscene content and AI-generated deepfakes. The new framework requires platforms to remove flagged content within approximately 24 hours and mandates clear labeling of synthetic media.
Defining Obscene Content and Faster Takedowns
At the core of the amendments is a formal definition of “obscene digital content,” covering non-consensual intimate imagery and explicit sexual material. Platforms must respond promptly to complaints, with officials indicating a 24-hour compliance window for valid takedown requests.
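As a minimal illustration only, the sketch below shows how a platform's complaint-tracking system might compute the compliance deadline, assuming the 24-hour window runs from receipt of a valid complaint; the rules' exact trigger and any exceptions are not specified here, and the function name is hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Assumption: the window runs from the moment a valid complaint is received.
TAKEDOWN_WINDOW = timedelta(hours=24)

def takedown_deadline(complaint_received_at: datetime) -> datetime:
    """Return the latest time by which the flagged content must be removed."""
    return complaint_received_at + TAKEDOWN_WINDOW

# Example: a complaint received at 09:30 UTC must be acted on by 09:30 UTC the next day.
received = datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc)
print(takedown_deadline(received))  # 2024-01-16 09:30:00+00:00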
Deepfake Labeling Requirements
The rules specifically target synthetic media, requiring creators to declare AI-generated content. Platforms must implement mechanisms to detect manipulated media and apply clear labels to identify synthetic images, videos, and audio clips. This aims to curb deepfake misuse for harassment, impersonation, and political manipulation.
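As an illustration of what such a mechanism might look like, the sketch below combines a creator's self-declaration with an automated detector score to decide when to apply a synthetic-media label. The Upload structure, field names, and threshold are all hypothetical assumptions; the rules describe the obligation, not an implementation.

```python
from dataclasses import dataclass

# Hypothetical threshold; the rules do not prescribe specific values.
DETECTOR_THRESHOLD = 0.9  # confidence above which media is treated as synthetic

@dataclass
class Upload:
    media_id: str
    creator_declared_ai: bool  # creator's self-declaration at upload time
    detector_score: float      # output of a synthetic-media classifier, 0.0 to 1.0

def label_for(upload: Upload) -> str | None:
    """Return a display label for the upload, or None if no label is needed."""
    if upload.creator_declared_ai:
        # Self-declared AI content is always labeled.
        return "AI-generated"
    if upload.detector_score >= DETECTOR_THRESHOLD:
        # High-confidence automated detection also triggers a label.
        return "Likely AI-generated"
    return None

# Example: undeclared content with a high detector score still gets labeled.
clip = Upload(media_id="vid-001", creator_declared_ai=False, detector_score=0.95)
print(label_for(clip))  # -> "Likely AI-generated"
```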
Government’s Stance on Accountability
The Ministry of Electronics and Information Technology argues these changes will strengthen transparency in India’s digital ecosystem. The framework reinforces platforms’ legal obligation to act upon “actual knowledge” of illegal content, particularly when notified through court orders or government agencies.
Mixed Reactions and Concerns
Digital rights groups and legal experts warn that broad terms like “obscene” could lead to censorship of legitimate artistic and journalistic content. They emphasize the need for clearer procedural safeguards to prevent arbitrary enforcement that might discourage creative expression.
Industry response remains divided. While some entertainment and advertising sectors welcome measures against unauthorized explicit content, smaller platforms and independent creators worry about the compliance burden, including investments in moderation teams and verification systems.
Implementation Challenges
Experts highlight several operational hurdles, including the technical difficulty of reliably detecting AI-generated media. Automated tools may misidentify legitimate satire, while malicious creators can easily disguise synthetic content. International hosting adds another layer of complexity for enforcement.
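One way to see the difficulty: any fixed confidence threshold trades false positives (satire mislabeled as synthetic) against false negatives (disguised deepfakes passing unlabeled). The triage sketch below, with hypothetical thresholds, routes borderline scores to human review rather than labeling automatically; it is a conceptual illustration, not a described platform design.

```python
def triage(detector_score: float, auto_label_at: float = 0.95, review_at: float = 0.6) -> str:
    """Route media by detector confidence (hypothetical thresholds).

    Raising auto_label_at reduces mislabeled satire (false positives)
    but lets more disguised deepfakes through unlabeled (false negatives);
    lowering it does the opposite. The review queue absorbs the middle.
    """
    if detector_score >= auto_label_at:
        return "auto-label"    # high confidence: label without human input
    if detector_score >= review_at:
        return "human-review"  # borderline: send to moderators
    return "no-action"         # low confidence: treat as authentic

# Example: a satirical clip scoring 0.7 goes to human review instead of
# being auto-labeled, reducing the risk of mislabeling legitimate content.
print(triage(0.7))  # -> "human-review"
```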
The government is likely to position these rules as essential for protecting women, children, and the public from exploitation and misinformation. However, critics may challenge their constitutionality, arguing they could infringe on free-speech protections if applied without adequate oversight.
The coming months will be crucial in determining how these rules transform India’s digital landscape, with court challenges and platform responses shaping the final implementation.