
New IT rules target deepfakes, platforms must act within 3 hours: Know everything in 5 points

India has amended its IT Rules to tackle deepfakes faster, cutting the content removal timeline from 36 hours to just three. Here is all you need to know.

Published By: Divya | Published: Feb 10, 2026, 09:29 PM (IST)


The Indian government has taken a stricter stand against deepfakes and AI-generated misinformation. Under a fresh amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, social media platforms will now have just three hours to remove objectionable content once they receive a valid court or government order.

Earlier, platforms were given up to 36 hours. The updated rules will come into effect from February 20. What else do they change? Here is a quick look at what the new rules actually mean.

  1. Deepfakes now have a legal definition

For the first time, the government has clearly defined what counts as “synthetically generated information.” This includes audio, video, or visuals that are created or altered using computer tools in a way that looks real and could mislead viewers into believing it is authentic. However, basic edits such as colour correction, translation, compression, or educational material are excluded, as long as they don’t distort reality.

  2. 3 hours to remove harmful content

One of the biggest changes is the shortened compliance timeline. The new IT rules give platforms 3 hours to act on government or court orders, 12 hours for urgent cases (earlier 24), and 7 days for certain grievance responses (earlier 15).

  3. Platforms must label AI content clearly

Social media platforms will now be required to ensure that AI-generated content is visibly labelled. They must also attach permanent metadata or unique identifiers so that the content’s origin can be traced. Importantly, these labels cannot be removed or hidden. Before publishing, users may also be asked to declare whether their upload is AI-generated, while platforms are expected to verify this using technical tools.

  4. Stronger rules for social media giants

Major platforms such as Instagram, YouTube, and Facebook will face stricter rules. If a platform knowingly allows violating content or fails to act, it may be deemed to have failed to exercise due diligence, which could invite legal consequences. At the same time, the government clarified that taking action under these rules will not affect a platform’s safe harbour protections.


  5. Misuse could lead to legal trouble

The amendments directly link harmful synthetic content to existing laws, including the Bharatiya Nyaya Sanhita, POCSO Act, and regulations related to explosives and false records. Platforms must also remind users, at least once every three months, about penalties linked to AI misuse, including account suspension or legal action.