Published By: Divya | Published: Feb 10, 2026, 09:29 PM (IST)
The Indian government has taken a stricter stand against deepfakes and AI-generated misinformation. In a fresh amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, social media platforms will now have just three hours to remove objectionable content once they receive a valid court or government order.
Earlier, platforms were given up to 36 hours. The updated rules will come into effect from February 20. Here's a quick look at what else the new rules change.
For the first time, the government has clearly defined what counts as "synthetically generated information." This includes audio, video, or visuals that are created or altered using computer tools in a way that looks real and could mislead viewers into believing it is authentic. However, basic edits such as colour correction, translation, compression, or educational material are excluded, as long as they don't distort reality.
One of the biggest changes is the shortened compliance timeline. Under the new IT rules, platforms have 3 hours to act on government or court orders (down from 36), 7 days for certain grievance responses (down from 15), and just 12 hours for urgent cases (down from 24).
Social media platforms will now be required to ensure that AI-generated content is visibly labelled. They must also attach permanent metadata or unique identifiers so that the content’s origin can be traced. Importantly, these labels cannot be removed or hidden. Before publishing, users may also be asked to declare whether their upload is AI-generated, while platforms are expected to verify this using technical tools.
Major platforms such as Instagram, YouTube, and Facebook will face stricter rules. If a platform knowingly allows violating content or fails to act, it may be treated as having failed its due diligence obligations, which could invite legal consequences. At the same time, the government has clarified that taking action under these rules will not affect a platform's safe harbour protections.
The amendments directly link harmful synthetic content to existing laws, including the Bharatiya Nyaya Sanhita, POCSO Act, and regulations related to explosives and false records. Platforms must also remind users, at least once every three months, about penalties linked to AI misuse, including account suspension or legal action.