
Centre Moves To Mandate Labels On AI Content; Deepfakes Under IT Rules Lens

The Ministry of Electronics and Information Technology (MeitY) has proposed changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to make it compulsory to label AI-generated content, including deepfakes. The move aims to curb the spread of misleading material on major social media platforms like Facebook, YouTube and X.

Why this move?

MeitY has flagged that deepfake audio and video, as well as other deceptive content, can:

  • harm reputations,
  • influence elections, and
  • enable financial fraud.

To address this, the draft rules bring synthetically generated information within the scope of intermediary obligations.

What counts as “synthetically generated information”?

The draft defines it as content created, modified or altered using a computer resource in a way that appears authentic or true even though it is not.

What will intermediaries have to do?

Intermediaries that offer tools or resources to create or modify synthetic content must:

  • Clearly label the content or embed a unique metadata/identifier indicating it is synthetic.
  • Ensure the label/metadata cannot be removed or suppressed.
  • Follow visual and audio labelling standards:
    • Visuals: The label should occupy at least 10% of the display surface.
    • Audio: The synthetic nature should be announced for at least 10% of the duration.
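The two 10% thresholds above are simple arithmetic checks. As a minimal sketch (the function names and inputs are illustrative, not anything prescribed by the draft rules), a compliance tool could verify them like this:

```python
def visual_label_compliant(label_w: int, label_h: int,
                           frame_w: int, frame_h: int) -> bool:
    # Draft rule: the label must cover at least 10% of the display surface.
    return label_w * label_h >= 0.10 * frame_w * frame_h

def audio_label_compliant(announce_s: float, total_s: float) -> bool:
    # Draft rule: the disclosure must play for at least 10% of the duration.
    return announce_s >= 0.10 * total_s

# A 640x360 label on a 1920x1080 frame covers ~11% of the surface:
print(visual_label_compliant(640, 360, 1920, 1080))  # True
# A 5-second announcement in a 60-second clip falls short of 10%:
print(audio_label_compliant(5.0, 60.0))              # False
```

The thresholds here come straight from the draft; how "display surface" is measured for overlays or resized players is not yet specified, so real implementations will need to track the final rules.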

Extra duties for big platforms (SSMIs)

Significant Social Media Intermediaries (SSMIs) such as X, Meta and YouTube must:

  • Collect a declaration from users at the time of upload on whether the content is synthetic.
  • Use reasonable and appropriate technical measures (including automated tools) to verify these declarations.
  • If content is found to be synthetic, prominently display a label or notice stating it is algorithmically generated.
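The three SSMI duties form a simple decision flow: collect the user's declaration, run an automated check, and label when either signal indicates synthetic content. A hedged sketch, with an entirely hypothetical detector score and threshold (the draft does not prescribe any particular tool or confidence level):

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # hypothetical automated-tool confidence, 0.0-1.0

def needs_synthetic_label(u: Upload, threshold: float = 0.8) -> bool:
    # Label if the user declared the content synthetic, or if the automated
    # check contradicts a "not synthetic" declaration with high confidence.
    return u.user_declared_synthetic or u.detector_score >= threshold

print(needs_synthetic_label(Upload(True, 0.1)))    # declared by user -> True
print(needs_synthetic_label(Upload(False, 0.95)))  # flagged by detector -> True
print(needs_synthetic_label(Upload(False, 0.2)))   # neither -> False
```

In practice the "reasonable and appropriate technical measures" a platform adopts will be far more involved; this only illustrates how the declaration and verification steps combine into a labelling decision.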

What happens if platforms ignore this?

Intermediaries may be treated as violating due diligence obligations if they:

  • knowingly permit or promote deceptive synthetic content, or
  • fail to act against such content after being made aware of it.

Consultation window

The draft—Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025—is open for public feedback till November 6, 2025. Comments can be sent to itrules.consultation@meity.gov.in. After notification, both content-creation tools and social media platforms will need to update systems and policies to comply.

Practical angle for the legal community

For lawyers, law firms and legal-tech users (including those using ChatGPT and similar tools):

  • Client confidentiality: If AI tools help draft documents or summarise case files, ensure no personal information or client details are fed into systems without proper consent and safeguards.
  • Privacy policies: Update website/app privacy policies to explain AI use, purposes of processing, retention, and user rights. Mention any labelling practices adopted for synthetic outputs.
  • Risk controls: Put in place internal SOPs to label synthetic content (draft explainer videos, AI-generated imagery, voiceovers) as required—especially for marketing, thought leadership posts and social media reels.
  • Data breach preparedness: Review incident response plans. If an AI workflow causes a data leak, this could trigger breach notification and reputational harm.
  • Training & declarations: Brief teams and vendors about upload declarations on SSMIs and ensure tools embed non-removable identifiers where feasible.
  • Accuracy checks: Use human review to avoid AI hallucinations in legal content—misleading posts could invite platform action and regulatory scrutiny.