The regulatory conversation on artificial intelligence has moved beyond ethics into enforceability. With the amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules taking effect on 20 February 2026, digital conduct involving synthetic media now operates on sharply compressed timelines and under a more accountable framework.
The consequences are no longer theoretical.
Intermediary Obligations After 20 February 2026
Intermediaries are now required to remove specified unlawful content within three hours of receiving a valid court order or lawful government direction. The amendment specifically targets deepfakes, non-consensual intimate imagery and AI-generated misinformation affecting user safety.
The reduced compliance window changes platform behaviour. Platforms are strengthening automated monitoring, and borderline content may be disabled swiftly. For users, this means that circulating questionable synthetic material can result in rapid account suspension or access restriction, even before a full adjudication takes place.
The safe harbour protection available under Section 79 of the Information Technology Act remains conditional. Platforms that fail to act within the prescribed timeline risk losing immunity.
Personal Criminal Exposure
The use of artificial intelligence does not dilute individual liability. Creation or circulation of manipulated content may attract prosecution under multiple statutory provisions.
Under the Bharatiya Nyaya Sanhita, offences relating to defamation, impersonation, forgery, cheating and obscenity may be invoked depending on the nature of the content. The Information Technology Act continues to govern unauthorised access, identity misuse and publication of objectionable material in electronic form.
In cases involving non-consensual intimate imagery or identity-based manipulation, liability may extend to both the creator and the distributor. Forwarding harmful synthetic content can constitute participation in the offence.
The assumption that liability attaches only to the originator is legally incorrect.
Deepfakes and Evidentiary Challenges
One of the emerging legal difficulties lies in proving authenticity. Synthetic media can replicate voice, facial movement and contextual cues with high precision. In litigation, questions of admissibility and authenticity will increasingly require digital forensic expertise.
Courts may demand metadata analysis, device tracing, expert testimony and chain-of-custody documentation before accepting or rejecting contested digital material. The burden of proof will depend on the nature of the proceedings, but evidentiary rigour is expected to intensify.
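To make the idea of chain-of-custody documentation concrete, the short Python sketch below records a cryptographic fingerprint of a contested file each time it changes hands. It is a minimal illustration of the underlying principle, not a forensic standard; the field names and the example file are hypothetical.

# Minimal sketch of a chain-of-custody record. Illustrative only;
# not a forensic standard. Field names and file name are hypothetical.
import hashlib
from datetime import datetime, timezone

def custody_entry(file_path: str, handler: str, action: str) -> dict:
    """Hash the exhibit and record who handled it, when, and why."""
    digest = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "file": file_path,
        "sha256": digest.hexdigest(),  # fingerprint of the exhibit
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Each transfer of the exhibit appends a new entry, e.g.:
#   log.append(custody_entry("exhibit_video.mp4", "Forensic Examiner", "analysis"))
# A later mismatch between recorded hashes indicates the material was
# altered somewhere along the chain.

The value of the hash is that any alteration to the file, however small, produces a different digest, so a complete and internally consistent log supports the claim that an exhibit is unchanged from seizure to courtroom.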
This has implications for both prosecution and defence strategy.
Labelling of AI-Generated Content
The amended Rules require platforms to label synthetically generated information and embed identifiable provenance markers where technically feasible. This does not criminalise AI-generated content per se. It introduces transparency obligations.
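As a purely illustrative sketch of what a provenance marker can look like at the file level, the Python snippet below uses the Pillow imaging library to embed and read back a plain-text label in a PNG. The key names are hypothetical, and production schemes such as C2PA manifests are cryptographically signed and far more elaborate.

# Illustrative only: embeds a simple, unsigned text marker in PNG metadata.
# Key names are hypothetical; real provenance standards are tamper-evident.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, tool_name: str) -> None:
    """Copy an image, adding plain-text synthetic-media markers."""
    info = PngInfo()
    info.add_text("synthetic-media", "true")  # hypothetical key
    info.add_text("generator", tool_name)     # hypothetical key
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=info)

def read_labels(path: str) -> dict:
    """Return any text metadata embedded in a PNG."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))

A marker of this kind is trivially removable, which is precisely why the point that follows matters.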
However, removal of labels, deliberate misrepresentation of synthetic content as authentic, or manipulation intended to deceive may strengthen allegations of fraudulent intent in subsequent proceedings.
Transparency reduces ambiguity. Concealment increases exposure.
Digital Conduct in a Regulated Environment
Artificial intelligence is now embedded in routine communication. Corporate advertising, political messaging, entertainment and personal expression increasingly rely on synthetic tools.
The legal framework is not designed to stifle innovation. It is designed to prevent misuse. Individuals using AI for creative or commercial purposes should maintain clarity about consent, attribution and accuracy. Where reputational or financial harm results, civil and criminal consequences can follow.
The digital space is no longer loosely regulated territory. Enforcement is becoming faster, and traceability is improving.
In matters of synthetic media, the safest assumption is simple: if the content could cause harm when believed to be real, legal scrutiny is likely to follow.