
Centre Brings AI-Generated Content Under IT Rules, Cuts Takedown Time to 3 Hours

The Central government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to bring AI-generated content clearly within the scope of intermediary regulation in India.

The changes were notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and will come into effect from February 20, 2026. These amendments have been made under the rule-making powers granted by the Information Technology Act, 2000.

AI-Generated Content Now Covered Under the Rules

The amended Rules clarify that the term “information” includes “synthetically generated information.” This means that content created or modified using artificial intelligence tools will now be treated like any other content under intermediary obligations.

“Synthetically generated information” includes audio-visual content that is artificially created, altered or modified using computer systems in a way that makes it appear real. In simple terms, this covers deepfakes or AI-generated videos, images or audio that may look authentic or resemble real persons or events.

However, the Rules make an important distinction. Routine activities such as editing, formatting, translation, transcription, accessibility improvements, educational materials and research outputs will not be treated as synthetically generated information, as long as they do not create false or misleading records.

Mandatory Labelling of AI Content

Intermediaries that allow users to create or share AI-generated content must ensure that such content is clearly and prominently labelled as “synthetically generated.”

Where technically possible, platforms must also embed permanent metadata or unique identifiers in such content. This will help authorities trace the computer resource used to generate or modify the content. Importantly, platforms are not allowed to enable removal or alteration of these labels or metadata.

Significant social media intermediaries will now have an additional responsibility. They must require users to declare whether their content is AI-generated before uploading or publishing it. Platforms must also use appropriate technical tools to verify these declarations. If content is confirmed to be AI-generated, it must be displayed with a clear notice stating that it is synthetic.

Stricter User Notification Duties

The amendments also strengthen obligations relating to user awareness. Platforms must inform users at least once every three months that violating platform rules or user agreements can lead to suspension of accounts, removal of content or termination of access.

Users must also be warned that unlawful activities can attract penalties under applicable laws. This includes offences that require mandatory reporting, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act, 2012.

Takedown Timelines Drastically Reduced

One of the most significant changes is the sharp reduction in compliance timelines:

  • The time to comply with lawful takedown directions has been reduced from 36 hours to just 3 hours.
  • The grievance redressal period has been shortened from 15 days to 7 days.
  • For urgent complaints, the response time has been cut from 72 hours to 36 hours.
  • In certain specified cases, intermediaries must now act within 2 hours instead of the earlier 24 hours.

These changes are aimed at ensuring faster action against unlawful and harmful content, especially AI-generated material that can spread rapidly.

Impact on Safe Harbour Protection

The amendment clarifies that where intermediaries remove or disable access to synthetically generated information in compliance with the Rules, such action will not be treated as a failure to meet the conditions for safe harbour under Section 79(2) of the IT Act.

This means platforms will not lose their legal immunity for acting in accordance with the amended Rules.

What This Means

With these amendments, the government has formally brought AI-generated content under India’s digital regulatory framework. Platforms will now have to act faster, monitor AI content more carefully and ensure proper labelling and traceability.

The changes reflect a clear policy move towards stricter digital governance in the age of artificial intelligence.
