A sledgehammer approach to monitoring AI-origin info

Earlier this month, the government notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, further amending the IT Rules 2021, in a bid to regulate “synthetically generated information”. The intent is to tackle the rise of misinformation through the use of Artificial Intelligence (AI) deepfakes. The amendment raises certain concerns.

One of the most significant changes under the amended Rules is the compression of deadlines in Rule 3(1)(d), requiring intermediaries to take down content within three hours (down from 36 hours) of receiving a court order or being notified by the government or a government agency. At the same time, other deadlines have also been tightened — user complaints regarding content that is obscene, violative of privacy, harmful to children, or impersonating another person must be resolved within 36 hours (down from 72 hours). Content that prima facie depicts the partial nudity or nudity of the user must be taken down within two hours (down from 24 hours).

It is worth mentioning that these amendments have been introduced in the IT Rules 2021 under the guise of regulating the proliferation of AI content and deepfakes. However, these starkly tighter deadlines are not restricted to deepfake/AI content. Instead, they apply to all content hosted by social media intermediaries.

This approach precludes any meaningful review by humans and virtually automates the take-down process by intermediaries. Thus, in the name of requiring intermediaries to exercise “due diligence”, the 2026 amendments further incentivise censorship by intermediaries, without the attendant safeguards of a prior hearing or reasoned orders.

Interestingly, this proposal of shortening the deadline was never part of the original proposed amendments on which public feedback was sought in 2025. Nor has the government published the stakeholder responses to the proposed amendments.

Hence, the sudden introduction of these new provisions in the 2026 amendments remains unexplained. This lack of public consultation is also visible in the new sub-clauses that require intermediaries to exercise due diligence when it comes to synthetically generated information to prohibit the depiction of an event or a person “in a manner that is likely to deceive”.

A similar approach has been adopted in the definition of synthetically generated information itself, which includes artificially or algorithmically generated information that appears to be “real, authentic, or true” or depicts an individual or event that is “likely to be perceived as indistinguishable” from the natural person or event. Any such content that is obscene, invasive of privacy, indecent, or vulgar must be taken down by intermediaries.

Both the definition of synthetically generated information and the accompanying due diligence obligations are vague, leaving it to intermediaries to decide on and label such content. Moreover, they contain no carve-out for content created for parody or satire, as distinct from content intended to spread misinformation. The role of satire in promoting healthy democratic debate, and its protection under the free speech clause in Article 19(1)(a) of the Constitution, has been consistently acknowledged by courts across the country (even if not always implemented in practice).

Undoubtedly, the problem of deepfakes and misinformation is real. But, as noted by the Bombay High Court in the Kunal Kamra case (while striking down the 2023 IT Rules amendment establishing fact-checking units), using vague terms such as “fake”, “false”, or “misleading” leaves the matter to the unguided discretion of the fact-checking units. The 2026 amendments similarly lack a guiding principle on how to classify content as synthetically generated information and vest intermediaries with virtually untrammelled powers.

Finally, the amendments obligate significant social media intermediaries (such as YouTube, Meta, or Twitter) to take reasonable and proportionate technical measures to “verify” the correctness of user declarations regarding the use of AI content, failing which they would lose safe-harbour protections. This further pushes intermediaries to act as proactive censors and take down information that they consider to be wrongly labelled. As the final arbiters of what constitutes “synthetically generated information”, their commercial interests will always weigh in favour of preserving their safe-harbour protection.

There is general consensus that AI-generated misinformation is harmful. But it is not clear that it is more harmful or widespread than other sources of misinformation online, necessitating such a sledgehammer approach. Whether these amendments actually protect our security or violate our freedoms remains to be seen. I am not optimistic.

Vrinda Bhandari is a lawyer, specialising in technology and privacy, practising in Delhi. The views expressed are personal
