The Three-Hour Rule: Can Speed Deliver Justice in the Digital Age?

The recent amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules marks a decisive turn in India’s digital regulatory approach. With effect from 20 February 2026, intermediaries are required to remove specified categories of unlawful content within three hours of receiving a valid court order or lawful governmental direction, replacing the earlier thirty-six-hour compliance window.

This change is not symbolic. It reflects an institutional acknowledgement that digital harm unfolds at a pace traditional enforcement models were never designed to manage. Deepfake videos, synthetic intimate imagery and AI-generated misinformation can circulate widely within minutes, often causing reputational and psychological damage long before any legal intervention becomes effective.

The amendment therefore seeks to narrow the gap between harm and response. Whether it succeeds will depend not only on speed, but on balance.

The Statutory Framework and Intermediary Liability

The obligation must be understood within the architecture of Section 79 of the Information Technology Act, 2000, which grants conditional immunity to intermediaries for third-party content. That immunity is contingent upon due diligence and compliance with lawful removal directions. The Supreme Court in Shreya Singhal v. Union of India clarified that intermediaries are required to act upon receiving actual knowledge through a court order or appropriate government notification.

By compressing the compliance timeline to three hours, the amendment effectively raises the standard of responsiveness expected from platforms. Failure to act within the prescribed period may expose intermediaries to regulatory action and potential loss of safe harbour protection.

Uniform timelines apply regardless of platform size. While major technology companies may possess the infrastructure to meet such expectations, smaller intermediaries may find the operational burden substantial. The law, however, does not presently differentiate on the basis of scale.

Deepfakes, Privacy and Criminal Exposure

The regulatory focus on deepfakes and synthetic content is rooted in tangible legal risk. Artificial intelligence tools now enable fabrication of realistic video and audio representations capable of misleading audiences and harming individuals. Such content may trigger liability under provisions relating to defamation, impersonation, obscenity, identity theft and fraud, as recognised under the Bharatiya Nyaya Sanhita, 2023 and the Information Technology Act.

The constitutional right to privacy, affirmed in Justice K.S. Puttaswamy v. Union of India, further strengthens the legal foundation for protecting individuals from non-consensual digital manipulation. The three-hour mandate is therefore positioned as a protective mechanism aimed at containing harm before it becomes irreversible.

Transparency Through Labelling

Beyond expedited removal, the Rules now require platforms to label “Synthetically Generated Information” and, where technically feasible, embed identifiable metadata or provenance indicators. This marks a regulatory preference for disclosure rather than prohibition.

Labelling serves an informational function. It allows users to evaluate content with awareness of its artificial origin. However, practical questions remain. Standards for determining technical feasibility and methods of verification will require clarification to ensure uniform application across platforms.
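To make the idea of a provenance indicator concrete, the sketch below shows one minimal form such a record could take: a label, the generating tool, and a hash that ties the record to the content it describes. The field names, format and hash choice are illustrative assumptions; the Rules do not prescribe any particular schema.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> str:
    """Build a minimal, hypothetical provenance manifest for
    synthetically generated content. The SHA-256 digest lets the
    record later be matched against the file it describes.
    Illustrative only; no schema is mandated by the Rules."""
    record = {
        "label": "Synthetically Generated Information",
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, indent=2)

# Example: attach a provenance record to some generated media bytes.
manifest = make_provenance_record(b"<generated media bytes>",
                                  generator="example-model-v1")
print(manifest)
```

In practice, platforms pursuing this approach would more likely adopt an industry standard for content credentials rather than an ad hoc format, but the underlying elements, a disclosure label, an origin identifier and a verifiable binding to the content, would be similar.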

Constitutional Balance and Proportionality

Any regulatory measure affecting online expression must be assessed in light of Article 19(1)(a) of the Constitution and the reasonable restrictions permitted under Article 19(2). While the three-hour rule operates only upon receipt of a lawful order, the compressed timeline may encourage cautious over-removal in ambiguous cases.

The doctrine of proportionality, repeatedly emphasised by the Supreme Court, requires that restrictions be narrowly tailored and not excessive. If removal directions are precise and grounded in clear statutory violations, constitutional concerns may remain limited. If broadly framed, they may invite challenge.

The tension between rapid enforcement and protection of legitimate speech will define judicial scrutiny in the coming years.

The Practical Implications

For users and content creators, the amendment underscores that artificial intelligence does not dilute personal accountability. Circulation of manipulated or harmful synthetic content may attract civil and criminal consequences. The medium has evolved; liability principles have not.

For victims, timely preservation of evidence, invocation of grievance mechanisms, filing of complaints and pursuit of injunctive relief remain essential steps. The reduced compliance window may assist in containing circulation, but procedural vigilance remains critical.

The legal system itself will need to strengthen its forensic and investigative capabilities. As synthetic media becomes more sophisticated, courts must be equipped to evaluate authenticity with technical precision.

The three-hour rule reflects a regulatory belief that delay amplifies damage. It attempts to align enforcement speed with technological velocity. Its legitimacy, however, will depend on measured implementation. Speed must serve justice, not substitute for it.