India has amended the IT Rules, 2021, expanding them to regulate synthetic and AI-generated content.
The latest Gazette notification defines “synthetically generated information” as content created, altered, or manipulated using computer resources—bringing deepfakes and other forms of synthetic media under legal scrutiny.
The updated rules, effective from February 20, 2026, mandate clear labelling, traceability, and user disclosure for AI-generated audio, visual, and audio-visual content.
Platforms must ensure compliance through automated safeguards, audit trails, and fast takedown mechanisms.
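To make the labelling and traceability requirement concrete, here is a minimal sketch of one way a platform might attach a tamper-evident "synthetically generated" label to a piece of content. The rules do not prescribe any particular mechanism; the signing key, function names, and HMAC-based scheme below are illustrative assumptions, not part of the notification.

```python
import hmac
import hashlib

# Hypothetical platform signing key (an assumption for this sketch;
# a real deployment would use managed key infrastructure).
PLATFORM_KEY = b"demo-secret-key"

def label_synthetic(media_bytes: bytes) -> dict:
    """Attach a tamper-evident label marking content as AI-generated."""
    digest = hmac.new(PLATFORM_KEY, media_bytes, hashlib.sha256).hexdigest()
    return {"synthetic": True, "content_hmac": digest}

def verify_label(media_bytes: bytes, label: dict) -> bool:
    """Check that the label matches the content and has not been forged."""
    expected = hmac.new(PLATFORM_KEY, media_bytes, hashlib.sha256).hexdigest()
    return label.get("synthetic") is True and hmac.compare_digest(
        expected, label.get("content_hmac", "")
    )

media = b"...generated video bytes..."
label = label_synthetic(media)
print(verify_label(media, label))                 # label matches content
print(verify_label(media + b"edited", label))     # edited content fails check
```

Note that a scheme like this only works while the label travels with the file; if an intermediary re-encodes the media and drops the metadata, the provenance chain breaks, which is exactly the cross-platform traceability problem experts point to.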
This is a major step towards responsible AI governance and protecting users from impersonation, misinformation, and digital fraud.
However, implementation won’t be simple.
Experts highlight a widening gap between AI creation and detection speed, challenges for small platforms, and cross-platform traceability loss due to metadata stripping.
User non-compliance, jurisdictional issues, and increased operational demands also complicate enforcement.
According to Dr. Deepak Kumar Sahu, Founder of FaceOff Technologies, his team has built a real-time detection engine that flags deepfakes with precision. The platform, he says, is already aligned with the government's vision of combating synthetic-media threats.
While regulation has taken a leap forward, success will depend on a triad of compliance, detection capability, and digital literacy.
The road ahead is long, but India has signalled its commitment to leading the fight against AI-driven misinformation.