Key Takeaways
- India will not create a separate AI law, relying instead on existing regulations.
- A high-powered government committee recommends sectoral regulators govern AI applications.
- The approach balances promoting innovation with mitigating risk through measures such as risk assessment, voluntary standards, and graded liability.
A high-powered government committee has concluded that India does not need a separate law to regulate artificial intelligence at this time. The panel found existing legislative frameworks can effectively handle most AI-related risks.
The recommendations, documented in the ‘India AI Governance Guidelines,’ were announced by Principal Scientific Adviser Ajay Kumar Sood and IT Secretary S. Krishnan. The guidelines state: “existing laws (for example on IT, data protection, consumer protection and statutory civil and criminal codes, etc.) can be used to govern AI applications. Therefore, at this stage, a separate law to regulate AI is not needed, given the current assessment of risks.”
This decision comes as global AI giants like OpenAI, Perplexity, Google, and Meta intensify their generative AI efforts for Indian users.
India’s Regulatory Approach
India’s strategy aims “to govern applications of AI by empowering the relevant sectoral regulators, and not to regulate the underlying technology itself.” The primary goal is to encourage innovation and AI adoption while protecting individuals and society from potential harm.
Although no new legislation is planned, Krishnan clarified that India stands ready to act if circumstances change.
Proposed Governance Framework
The committee advocates for balanced, agile frameworks that support innovation while minimizing risks. Key proposed measures include:
- India-specific risk assessment: Creating a framework based on empirical evidence of harm in the Indian context
- Voluntary industry measures: Encouraging industry adoption of privacy and security standards
- Grievance redressal mechanism: Establishing systems for reporting AI-related harms
- Legal review: Identifying regulatory gaps in current laws and addressing them with amendments
- Transparency reports: Having industry publish risk evaluation reports that are shared confidentially with regulators
- Graded liability system: Implementing accountability based on function, risk level, and due diligence
The committee emphasized that “timely and consistent enforcement of applicable laws is required to build trust and mitigate harm,” acknowledging the inherent risks to society from advancing AI technology.