MeitY’s 2-Hour Deepfake Takedown Window Under IT Amendment Rules 2026: Constitutionally Proportionate or Operationally Impossible?
Introduction
The proliferation of artificial intelligence-generated synthetic media has created unprecedented challenges for digital governance worldwide. In India, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (IT Amendment Rules 2026) on February 10, 2026; the amendments came into force on February 20, 2026 [1]. They introduce sharply compressed timelines for content takedown, most notably a two-hour window for removing non-consensual intimate images and deepfake pornography, raising critical questions of constitutional validity and practical feasibility. This article examines whether the IT Amendment Rules 2026 strike a proportionate balance between protecting fundamental rights and ensuring operational viability for digital intermediaries.
The Regulatory Framework: Understanding the IT Amendment Rules 2026
The Information Technology Act, 2000 serves as the foundational legislation governing cyberspace in India, with the IT Rules 2021 providing detailed guidelines for intermediary liability. The IT Amendment Rules 2026 specifically target synthetically generated information, defined under the newly inserted Rule 2(1)(wa) as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real” [1].
Under the amended framework, intermediaries must now remove content within drastically compressed timelines. Rule 3(1)(d) mandates removal of unlawful content within three hours of receiving a government or court order, down from the previous thirty-six-hour window [2]. More significantly, Rule 3(2)(b) requires intermediaries to act within two hours in cases involving exposure of private areas, nudity, sexual acts, or artificially morphed images, content that was previously subject to a twenty-four-hour deadline [3].
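To make the compressed timelines concrete, the following minimal sketch (in Python) computes the latest compliant removal time for a notification under each deadline. The category names are this article's own shorthand, not terms defined in the Rules.

    from datetime import datetime, timedelta, timezone

    # Illustrative SLA table reflecting the amended deadlines described above;
    # the category keys are shorthand for this sketch, not terms from the Rules.
    TAKEDOWN_SLA = {
        "government_or_court_order": timedelta(hours=3),    # Rule 3(1)(d)
        "intimate_or_morphed_imagery": timedelta(hours=2),  # Rule 3(2)(b)
    }

    def removal_deadline(received_at: datetime, category: str) -> datetime:
        """Latest compliant removal time for a notification received at received_at."""
        return received_at + TAKEDOWN_SLA[category]

    notified = datetime(2026, 2, 21, 9, 30, tzinfo=timezone.utc)
    print(removal_deadline(notified, "intimate_or_morphed_imagery"))
    # 2026-02-21 11:30:00+00:00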
The amendments also impose mandatory labelling obligations under Rule 4(1A): significant social media intermediaries must ensure that users declare whether uploaded content is synthetically generated, and must embed permanent metadata or unique digital identifiers in such content [2]. These provisions respond to the exponential rise in deepfake-related crime; Indians lost approximately ₹22,845 crore to cybercriminals in 2024, a 206 percent increase over the previous year [1].
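The Rules do not prescribe a metadata format, and industry practice is converging on signed provenance standards such as C2PA Content Credentials. Purely as an illustration of the labelling obligation, the sketch below embeds a synthetic-content declaration into a PNG's text chunks using Pillow; the key names are invented for this example and carry no regulatory status.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_synthetic_png(src: str, dst: str, identifier: str) -> None:
        """Embed a synthetic-content declaration as PNG text metadata."""
        img = Image.open(src)
        meta = PngInfo()
        meta.add_text("SyntheticContent", "true")        # invented key, not mandated
        meta.add_text("SyntheticContentID", identifier)  # platform-assigned identifier
        img.save(dst, pnginfo=meta)                      # dst should end in .png

One caveat worth noting: PNG text chunks are stripped by many re-encoders and by screenshots, which is why "permanent" identifiers in practice point toward cryptographically signed provenance or robust watermarking rather than plain metadata.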
Constitutional Foundations: Article 19 and the Freedom of Speech Framework
The constitutional validity of content takedown regulations must be examined through the lens of Article 19(1)(a) of the Constitution of India, which guarantees all citizens the right to freedom of speech and expression. This right extends to digital platforms and online speech, as established in numerous Supreme Court pronouncements. However, Article 19(2) permits the state to impose reasonable restrictions on this freedom in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence.
The landmark judgment in Shreya Singhal v. Union of India [4] fundamentally reshaped intermediary liability law in India. The Supreme Court struck down Section 66A of the Information Technology Act, 2000 for being unconstitutionally vague and having a chilling effect on free speech. More importantly for the present discussion, the Court read down Section 79(3)(b) of the IT Act and Rule 3(4) of the Intermediaries Guidelines 2011 to mean that intermediaries obtain actual knowledge requiring content takedown only through a court order or notification from a government authority, not through private complaints.
Justice Nariman observed in Shreya Singhal that “adjudicating on whether or not there is contravention of a particular provision of law, is the quintessential sovereign function to be discharged by the State or its organs. This function cannot be delegated to private parties such as intermediaries” [4]. This principle remains foundational to understanding the scope and limits of intermediary obligations under Indian law.
The tension between free expression and content moderation has been further explored in recent jurisprudence. Constitution Bench observations have emphasized that restrictions on speech must be precisely tailored, proportionate, and narrowly drawn to pass constitutional scrutiny. Any framework limiting expression must not be ambiguous or overbroad, and must serve a legitimate state interest through the least restrictive means available.
Proportionality Analysis: Balancing Rights and Regulatory Objectives
The proportionality test, as articulated in the Supreme Court's fundamental rights jurisprudence (most prominently in K.S. Puttaswamy v. Union of India), requires that any restriction on fundamental rights satisfy four criteria: it must pursue a legitimate aim, be suitable to achieve that aim, be necessary in that no less restrictive alternative exists, and maintain a fair balance between the restriction and the rights affected.
The legitimate aim of the two-hour takedown window is clear and compelling. Non-consensual intimate imagery and deepfake pornography cause severe psychological trauma and reputational damage, and violate dignity and privacy. These harms are often irreversible: content spreads rapidly across platforms and inflicts lasting damage on victims. The Supreme Court of India has repeatedly flagged the inadequacy of existing laws in addressing this digital menace, and the Chief Justice of India has himself been the target of a deepfake video [5].
However, the necessity prong of the proportionality test raises significant concerns. A two-hour response window for global platforms handling millions of pieces of content daily presents formidable operational challenges. Automated detection systems, while increasingly sophisticated, struggle with accuracy and generate both false positives and false negatives. Human moderation at scale within such compressed timelines requires substantial infrastructure investment, multilingual expertise, and contextual understanding that may not be immediately available.
Furthermore, the rules do not provide clear standards for what constitutes “reasonable and appropriate technical measures” for detecting prohibited synthetic content, nor do they establish performance benchmarks or acceptable error-rate thresholds [3]. This ambiguity creates uncertainty for intermediaries attempting compliance while simultaneously risking over-censorship to avoid liability.
The Deepfake Crisis: Judicial Recognition and Response
Indian courts have increasingly recognized the unique threats posed by deepfake technology. In Arun Jaitley v. Network Solutions Private Limited, the Delhi High Court protected personality rights in the digital domain, establishing that personal names of prominent individuals merit protection against cybersquatting and unauthorized use [6]. While this case predated the deepfake era, its reasoning about protecting digital identity and preventing misuse of persona has been extended to contemporary challenges.
More recently, courts have addressed deepfake-specific harms. The Delhi High Court, in addressing cases involving prominent personalities, has issued orders requiring platforms to deploy automated technology for detecting and deleting infringing content. These judicial directions acknowledge that manual takedown procedures are inadequate for addressing the scope and velocity of digital harm, necessitating technological solutions to counter technological threats [7].
The Supreme Court’s jurisprudence on dignity and privacy rights under Article 21 provides additional constitutional grounding for robust anti-deepfake measures. The right to life and personal liberty has been interpreted expansively to include the right to dignity, privacy, and reputation. Non-consensual intimate imagery, whether real or synthetic, violates these fundamental rights in ways that justify state intervention.
Comparative Perspectives: Global Approaches to Deepfake Regulation
India’s two-hour takedown mandate can be contextualized against international regulatory approaches. The United States enacted the Take It Down Act in May 2025, requiring platforms to remove non-consensual intimate imagery and deepfakes within forty-eight hours of notification [8]. This legislation provides more time for compliance while establishing federal standards for notice-and-takedown procedures.
The European Union’s approach under the AI Act and the Digital Services Act establishes risk-based frameworks that impose heightened obligations on very large online platforms while providing more nuanced timelines and procedural safeguards. These frameworks recognize that different types of content and different platform capacities warrant differentiated regulatory responses.
The critical distinction in India’s approach is the extremely compressed timeline coupled with potential loss of safe harbor immunity under Section 79 of the IT Act for non-compliance. This creates high-stakes pressure on intermediaries that may incentivize over-removal of content to avoid liability, potentially infringing on legitimate speech and expression.
Operational Feasibility: The Implementation Challenge
The operational challenges of implementing a two-hour takedown window cannot be overstated. Platforms must first receive the notification; verify the complainant's identity and claim; locate the specific content, potentially across multiple instances and formats; assess whether it genuinely violates the rules rather than constituting legitimate parody or satire; and then execute technical removal while maintaining records for potential legal challenges.
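Rendered as a minimal sketch (again in Python, with step names of this article's own choosing), the workflow and its audit trail might look like the following; in practice the assessment stage in particular requires human judgment that resists full automation.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class TakedownTicket:
        """Audit record for one complaint; all fields are illustrative, not mandated."""
        content_url: str
        received_at: datetime
        events: list = field(default_factory=list)

        def log(self, step: str) -> None:
            # Timestamped trail retained for potential legal challenges.
            self.events.append((datetime.now(timezone.utc), step))

    def process(ticket: TakedownTicket) -> None:
        # Mirrors the stages described above; all must finish inside the window.
        ticket.log("complainant_verified")
        ticket.log("content_located")      # may span multiple re-uploads and formats
        ticket.log("violation_assessed")   # parody/satire check needs human review
        ticket.log("content_removed")
        ticket.log("records_preserved")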
For global platforms operating across time zones, the requirement means maintaining round-the-clock moderation teams with expertise in Indian law and cultural context. For smaller intermediaries and emerging platforms, these requirements may create insurmountable barriers to entry, potentially consolidating the digital marketplace in favor of large incumbents with resources to build extensive compliance infrastructure.
The IT Amendment Rules 2026 provide limited clarity on contentious edge cases. Exclusions for “routine editing” and “good faith creation” remain subject to interpretation, particularly for satire, parody, or artistic expression [3]. The mechanism for verifying user declarations about synthetic content is also undefined, leaving intermediaries to develop their own standards without regulatory guidance.
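One way an intermediary might operationalize verification in the absence of guidance, sketched here purely under this article's own assumptions, is to reconcile a user's declaration against an automated detector score and escalate contradictions to human review. Both the detector and the threshold below are hypothetical, since the Rules specify neither.

    # Hypothetical reconciliation of a user's declaration with a detector score.
    # The 0.9 threshold and the existence of a calibrated detector are assumptions;
    # the Rules prescribe no benchmark or acceptable error rate.
    def reconcile(declared_synthetic: bool, detector_score: float) -> str:
        if declared_synthetic:
            return "label_and_publish"   # declaration honoured, content labelled
        if detector_score >= 0.9:
            return "human_review"        # undeclared but likely synthetic
        return "publish"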
Furthermore, the rules do not address the reality that deepfakes are constantly evolving technologically. Detection methods that work today may be obsolete tomorrow as generation techniques become more sophisticated. This creates an arms race dynamic where compliance frameworks must continuously adapt, yet the regulatory timelines remain fixed.
The Safe Harbor Dilemma: Balancing Protection and Accountability
Section 79 of the Information Technology Act provides intermediaries with safe harbor immunity from liability for third-party content, provided they comply with due diligence requirements. The Shreya Singhal judgment clarified that this immunity is preserved when intermediaries respond appropriately to government or court orders for content takedown [4].
The IT Amendment Rules 2026 explicitly state that intermediaries will not lose safe harbor protection when removing or disabling access to synthetically generated content in accordance with the rules [2]. However, the practical effect of the compressed timelines is to shift substantial risk to intermediaries. Failure to remove content within two hours could result in loss of immunity, exposing platforms to liability for damages suffered by victims.
This creates a strong incentive structure favoring over-removal. When faced with uncertainty about whether specific content violates the rules, platforms will likely err on the side of taking down questionable material rather than risking significant legal exposure. This dynamic undermines the careful balance struck in Shreya Singhal, where the Court sought to prevent intermediaries from becoming private judges of content legality.
The constitutional concern is that this effectively delegates quasi-judicial functions to private platforms, requiring them to make rapid determinations about content legality without the procedural safeguards that accompany governmental or judicial decision-making. This runs contrary to the Shreya Singhal principle that adjudicating legal violations is a quintessentially sovereign function.
Recommendations: Toward a More Balanced Framework
A more constitutionally sound and operationally viable framework would incorporate several modifications. First, the rules should establish clear, graduated timelines based on content type and harm severity. Genuinely harmful non-consensual intimate imagery might warrant expedited removal, while other synthetic content could operate under longer timeframes allowing for careful review.
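A graduated schedule of this kind could be expressed as simply as the lookup below; the tiers and durations are purely illustrative of the recommendation, not proposed rule text.

    from datetime import timedelta

    # Illustrative graduated timelines keyed by content category; the categories
    # and durations are this sketch's assumptions, not a proposal of specific numbers.
    GRADUATED_SLA = {
        "non_consensual_intimate_imagery": timedelta(hours=2),    # expedited removal
        "impersonation_or_financial_fraud": timedelta(hours=24),
        "other_synthetic_content": timedelta(hours=72),           # time for careful review
    }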
Second, procedural safeguards must be strengthened. Users whose content is removed should receive notification and have meaningful opportunity for appeal. The rules should establish independent review mechanisms, similar to content review boards that some platforms have voluntarily adopted, ensuring that takedown decisions are subject to oversight beyond the initial platform determination.
Third, the regulatory framework should provide clearer technical standards and guidance. Rather than leaving intermediaries to develop their own detection methodologies, the government could establish certification programs for detection tools, create safe harbors for good faith use of approved technologies, and provide regular guidance on emerging deepfake techniques and appropriate responses.
Fourth, the rules should explicitly protect legitimate uses of synthetic media. Clear carve-outs for news reporting, academic research, artistic expression, and political commentary would prevent over-censorship while still addressing genuinely harmful content. These exceptions should be defined with sufficient precision to provide meaningful guidance while remaining flexible enough to accommodate technological evolution.
Finally, enforcement should be proportionate and consider platform size and resources. Differential standards for significant social media intermediaries versus smaller platforms would recognize that compliance capacity varies substantially across the digital ecosystem. This tiered approach is common in other jurisdictions and helps prevent compliance burdens from entrenching large incumbents.
Conclusion
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 represent India’s most comprehensive effort to address the deepfake crisis. The two-hour takedown window for non-consensual intimate imagery reflects legitimate concerns about severe harms that victims suffer from such content. However, the constitutional validity and operational feasibility of this extremely compressed timeline remain questionable.
The framework must be evaluated against the standards established in Shreya Singhal v. Union of India and the broader constitutional jurisprudence on freedom of speech and expression. While protecting victims of deepfake abuse is a compelling state interest, the means chosen must be narrowly tailored, provide adequate procedural safeguards, and avoid creating incentive structures that lead to over-censorship of legitimate speech.
The tension between rapid response to digital harm and protection of free expression is not unique to India, but India’s approach is among the most aggressive globally. As implementation proceeds, close monitoring of compliance rates, false positive removals, and impact on legitimate speech will be essential. The rules include provisions for periodic review, and such reviews should incorporate empirical data on implementation challenges and constitutional concerns.
Ultimately, effective deepfake regulation requires a multi-stakeholder approach combining legal frameworks, technological solutions, media literacy, and international cooperation. The two-hour takedown window, while well-intentioned, may prove to be operationally impossible without substantial modifications that better balance the legitimate interests of all stakeholders while maintaining fidelity to constitutional principles of free expression and due process.
References
[1] Ministry of Electronics and Information Technology. (2026). Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. https://www.outlookbusiness.com/news/meity-notifies-it-rules-to-curb-deepfakes-and-ai-generated-content
[2] Outlook Business. (2026). AI Labelling, Quicker Takedowns: Decoding India’s New Social Media Rules. https://www.outlookbusiness.com/explainers/ai-labelling-quicker-takedowns-decoding-indias-new-social-media-rules
[3] Obhan & Associates. (2026). India’s New Deepfake Regulation: MeitY Notifies Amendments to Information Technology Rules 2021. https://www.obhanandassociates.com/blog/indias-new-deepfake-regulation-meity-notifies-amendments-to-information-technology-rules-2021/
[4] Shreya Singhal v. Union of India, (2015) 5 SCC 1, AIR 2015 SC 1523. https://indiankanoon.org/doc/110813550/
[5] The Sentinel Assam. (2026). Can new IT rules stop the deepfake epidemic? https://www.sentinelassam.com/more-news/editorial/can-new-it-rules-stop-the-deepfake-epidemic
[6] Arun Jaitley v. Network Solutions Private Limited, CS(OS) 1745/2009, Delhi High Court (2011). https://indiankanoon.org/doc/754672/
[7] Khurana & Khurana. (2025). Deepfake Regulation India 2025: MeitY’s Comprehensive IT Rules Amendment. https://www.khuranaandkhurana.com/deepfake-regulation-india-2025-meity-s-comprehensive-it-rules-amendment
[8] Skadden, Arps, Slate, Meagher & Flom LLP. (2025). ‘Take It Down Act’ Requires Online Platforms To Remove Unauthorized Intimate Images and Deepfakes When Notified. https://www.skadden.com/insights/publications/2025/06/take-it-down-act
[9] The Federal. (2026). India mandates 3-hour takedown for AI content: FAQ of what you need to know. https://thefederal.com/category/explainers-2/ai-content-faq-on-new-it-rules-for-ai-generated-content-deepfake-229394