India Proposes Strict AI Regulations to Combat Deepfake Threat
The Ministry of Electronics and Information Technology (MeitY) has announced sweeping new AI regulations aimed at curbing the surge of deepfakes and synthetic media misinformation across India’s vast internet user base.
Key Takeaways
- New regulations target deepfake labelling, traceability, and accountability
- Government launches Sahyog portal for automated content notices
- India’s 900+ million internet users face growing synthetic media threats
Growing Deepfake Concerns
MeitY cited “the growing misuse of technologies used for the creation or generation of synthetic media” as the driving force behind the proposed amendments. A ministry briefing note stated: “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods.”
The ministry warned that such content “can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.”
Strengthening Legal Framework
The government has already launched the Sahyog portal to automate the process of sending notices to social media intermediaries such as X and Facebook. The proposed amendments “provide a clear legal basis for labelling, traceability, and accountability” while strengthening “the due diligence obligations” of these platforms.
AI Industry Expansion Continues
Despite regulatory tightening, major AI firms are expanding their India presence. US startup Anthropic plans to open an India office, with CEO Dario Amodei meeting Prime Minister Narendra Modi. OpenAI has also committed to opening an India office, with Sam Altman noting ChatGPT usage in the country grew fourfold over the past year. AI firm Perplexity announced a major partnership with Indian telecom giant Airtel in July.