New deepfake rules tighten platform liability, leave grey areas on intent and free speech

Legal experts say the new AI content rules narrow a long-standing regulatory gap but stop short of creating a standalone deepfake offence.

Arun Padmanabhan
Delhi | Updated Feb 11, 2026 3:01 PM IST

The Centre's latest amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formally define “synthetically generated information” (SGI) and impose stringent due diligence, labelling and metadata obligations on platforms, marking the government’s most direct regulatory response yet to deepfakes and AI-generated misinformation.

Legal experts say the changes narrow a long-standing regulatory gap but stop short of creating a standalone deepfake offence.


“The formal definition of 'Synthetically Generated Information' narrows the legal gap by providing a clear regulatory basis for platform action, which industry stakeholders have long requested,” said Probir Roy Chowdhury, Partner at JSA Advocates & Solicitors. “However, distinguishing between malicious disinformation and satire often hinges on context and intent, which remain difficult for platforms to determine at scale.”

"The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes. By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns, while still signalling a clear intent to tighten platform accountability,” said Rohit Kumar, Founding Partner at the public policy firm The Quantum Hub. 


The amendments define SGI as audio, visual or audio-visual material artificially or algorithmically created or altered to appear authentic, and “likely to be perceived as indistinguishable from a natural person or real-world event”. They also mandate prominent labelling and embedding of “permanent metadata”, including a unique identifier for traceability.

Also read: India’s new AI content rules: What social media platforms must do and what changes for users

From notice-and-takedown to proactive policing

The new framework significantly expands intermediary obligations, including deployment of “reasonable and appropriate technical measures” to prevent unlawful SGI and compliance with sharply compressed takedown timelines, in some cases “within three hours”.

“The Government has created additional liability and obligations on platforms vis-à-vis AI-generated content and proactive due diligence,” Roy Chowdhury said. “This signifies a move from the established passive notice-and-takedown regime to a framework that implies proactive compliance. That said, it is to be seen if platform liability will stand the test of judicial scrutiny.”


Alvin Antony, Chief Compliance Officer at GovernAI, said the amendments “substantially tighten the regime around deepfakes” by defining SGI, expressly treating it as “information” for unlawful acts, and imposing due diligence and provenance obligations.

However, Antony cautioned that “the underlying criminal and civil consequences still rest mainly on existing statutes like the IT Act, Bharatiya Nyaya Sanhita (BNS), POCSO and Consumer Protection Act,” meaning that intention, knowledge and harm will remain fact-intensive questions.

Subjectivity and speech risks

The rules carve out exclusions for “routine or good-faith editing” and educational or research material, an attempt to shield ordinary creative and editorial work.

“Specific exclusions for 'routine editing' and 'good-faith' educational work are a welcome safeguard that attempts to balance safety with creativity,” Roy Chowdhury said. “However, terms like 'appearing real or authentic' introduce a degree of subjectivity that could indeed pose challenges for satire or high-fidelity artistic tools.”

Antony added that because the definition turns on whether content is “likely to be perceived” as real, it could face a constitutional challenge for vagueness.

Metadata, privacy and proportionality

Perhaps the most technically ambitious requirement is the mandate to embed “permanent metadata or other appropriate technical provenance mechanisms” in SGI, including a unique identifier.


“The requirement for permanent metadata and unique identifiers is technically ambitious and will require careful balancing with the principles of data minimisation enshrined in the DPDP Act,” Roy Chowdhury said. “There is a valid legal question regarding whether embedding persistent traceability markers constitutes disproportionate surveillance of ordinary users.”

Antony noted that although the identifier is aimed at the intermediary’s system rather than an individual, it may in practice be linkable to user accounts or sessions, raising proportionality concerns under India’s data protection framework.

Three-hour takedowns

The shift from 36 hours to “within three hours” for certain official intimations, alongside a two-hour removal window in specific complaint scenarios, has drawn concern.

“A 3-hour takedown timeline may not be technically feasible for many platforms,” Roy Chowdhury said. “While I fully support the urgency of removing harm, such compressed timelines inevitably create a take-down-first, question-later atmosphere to avoid platform liability, which increases the risk of lawful content being taken down erroneously.”

Antony echoed that the operational burden will be heavier on smaller platforms and startups, potentially creating enforcement asymmetry and raising the risk of over-removal.

Kumar said that "the significantly compressed grievance timelines, such as the two- to three-hour takedown windows, will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections.”


Roy Chowdhury added that this concern is “further exacerbated by recent judicial rulings on the Sahyog Portal, which have watered down the process to be followed by authorities issuing take-down orders.”

The amendments come into force on February 20, 2026, placing immediate operational pressure on platforms to update detection, labelling and response systems before the deadline.

Published on: Feb 11, 2026 12:51 PM IST