Key Takeaways
- India will introduce a comprehensive AI law modeled on the IT Act, 2000
- New rules require permanent labeling for all AI-generated content
- Major platforms must verify user declarations on synthetic content
- Government aims to prevent legal challenges with primary legislation

India is preparing a comprehensive Artificial Intelligence law modeled on the Information Technology Act, 2000, following its recent push to regulate deepfakes and synthetic content. The Ministry of Electronics and Information Technology (MeitY) will introduce a parliamentary bill to address growing concerns around AI-generated misinformation.
Legal Framework Development
Official sources confirm that the government will finalize the draft AI rules once public consultation concludes on November 6. However, to avoid potential legal challenges, full-fledged AI legislation will follow. Currently, the proposed rules operate under the IT Rules, 2021, which derive their authority from the IT Act.
Cyber law expert Pavan Duggal emphasized: “A law to curb deepfakes or any aspect of AI will be needed, as the rules currently proposed by MeitY can be challenged because their scope goes beyond the primary legislation. Rules are secondary in nature and cannot exceed the ambit of the parent law.”
Officials noted that once established, the AI Act can be expanded through additional rules as technology evolves, similar to the IT Act framework.
New Content Labeling Requirements
The regulations mark a significant shift in how AI-generated digital content will be controlled. Government concern stems from the rapid proliferation of deepfake videos, images, and audio used for deception.
Under proposed Rule 3(1), any platform allowing users to create, modify, or share synthetic content must ensure permanent, non-removable labels or metadata identifying it as artificial. Visual content must display labels covering at least 10% of the screen, while audio must include audible statements within the first 10% of playback.
Platform Responsibilities
Major social media platforms with over 5 million users (classified as Significant Social Media Intermediaries) face additional obligations. They must:
- Obtain user declarations at upload time about whether content is AI-generated
- Deploy technical tools to verify these claims
- Clearly label any content identified as synthetic

Platforms that promptly remove harmful synthetic content will maintain safe harbor protection under Section 79(2) of the IT Act, shielding them from liability for user-generated material.
MeitY has clarified that these requirements apply only to publicly shared content, excluding private or unpublished material. The definition of “information” under the IT Rules now includes synthetically generated data, ensuring that AI-created misinformation, defamatory content, and impersonations receive the same legal treatment as their real-world counterparts.