
Indian Government confirms new IT rules to regulate deepfakes and AI-generated content online

The Indian government has updated its IT rules to regulate AI-generated and deepfake content on social media platforms. The new rules take effect from February 20.

Edited By: Deepti Ratnam | Published By: Deepti Ratnam | Published: Feb 11, 2026, 10:46 AM (IST)


Concern over the spread of deepfakes and AI-made content has reached a critical point, prompting the Indian government to take major steps to curb AI-generated content on social media platforms. In a notable move, the central government has updated its existing IT rules to bring synthetic content under regulation. The changes aim to reduce the misuse of misleading AI-generated content. The government is also trying to stop misinformation and make platforms act faster when harmful content appears online.

Updated IT Rules Come Into Force From February 20

The amendment, released by the central government, was notified on February 10 and will take effect from February 20. It modifies the Information Technology Intermediary Rules, which were first introduced in 2021. This is the first time AI-generated content will be formally recognized, scrutinized, and regulated under Indian law.

Notably, the rules will apply to all online platforms that host or publish AI-generated content, including Facebook, Instagram, Telegram, WhatsApp, and more.

AI-Generated Content

The Indian government has clearly defined what counts as AI-generated or synthetically generated content. The definition covers several categories, including audio, images, videos, and mixed media. Such content is often created or altered using computer tools in a way that makes it look real. Deepfake videos, AI voice clones, and face-swapped media all fall under this category.

The main focus will be on content that misleads people by appearing authentic. Nevertheless, the central government has clarified that colour correction, subtitles, translations, compression, and accessibility changes do not count under the new law, provided these edits do not change the original meaning. Illustrations used for training, research, or presentations are also excluded.

New Responsibilities for Social Media Platforms

  1. Platforms that allow AI-generated content, such as Instagram, Facebook, and WhatsApp, must now clearly label it. Under the new law, the labels should be visible and easy for users to notice; platforms are strictly barred from hiding them.
  2. Platforms also need to attach technical markers such as identifiers, metadata, and keywords so that AI-generated content can easily be traced.
  3. Once these markers are added, platforms will not be allowed to remove or alter them.
  4. All these steps together are meant to prevent AI content from being reshared online without disclosure.

Extra Rules

The central government has also listed some extra rules for large platforms like Facebook and Instagram, which face stricter requirements.

These rules include:

  1. Before uploading any content, users must declare whether AI tools were used. Platforms are also expected to use automated systems to verify these declarations.
  2. If AI-generated or synthetic content is detected, the platform must publish it with a clear label. Platforms found knowingly allowing unlabelled synthetic content risk losing their legal safe-harbour protection.

Takedowns and User Alerts

  1. The new rules also shorten the time platforms have to act on official orders. In certain cases, they must remove the content within three hours.
  2. Platforms must also proactively block harmful AI content, including misleading deepfakes, child abuse material, fake records, weapon-linked content, content promoting self-harm, and more.
  3. Platforms must warn users at least once every three months about the risks and penalties involved in creating misleading AI content.

What It Means for Users

One of the most noticeable changes for users will be labels on AI-generated posts. These labels will help users identify synthetic or machine-made content before sharing or engaging with it. Users will also need to declare whether AI tools were used when creating the content they upload.

Giving false information can now lead to account action or legal consequences in serious cases.


Deadline

The first draft of the rules was shared in 2025. Now, the central government has released the final notification. Platforms have until February 20, 2026 to comply with all the rules and regulations.