<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Intermediary Liability Archives - Bhatt &amp; Joshi Associates</title>
	<atom:link href="https://bhattandjoshiassociates.com/tag/intermediary-liability/feed/" rel="self" type="application/rss+xml" />
	<link>https://bhattandjoshiassociates.com/tag/intermediary-liability/</link>
	<description>Best High Court Advocates &#38; Lawyers</description>
	<lastBuildDate>Fri, 20 Feb 2026 12:55:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://bhattandjoshiassociates.com/wp-content/uploads/2025/08/cropped-bhatt-and-joshi-associates-logo-32x32.png</url>
	<title>Intermediary Liability Archives - Bhatt &amp; Joshi Associates</title>
	<link>https://bhattandjoshiassociates.com/tag/intermediary-liability/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>MeitY’s 2-Hour Deepfake Takedown Window Under IT Amendment Rules 2026: Constitutionally Proportionate or Operationally Impossible?</title>
		<link>https://bhattandjoshiassociates.com/meitys-2-hour-deepfake-takedown-window-under-it-amendment-rules-2026-constitutionally-proportionate-or-operationally-impossible/</link>
		
		<dc:creator><![CDATA[Chandni Joshi]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 12:54:59 +0000</pubDate>
				<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[Cyber Law India]]></category>
		<category><![CDATA[Deepfake Regulation]]></category>
		<category><![CDATA[Digital Governance]]></category>
		<category><![CDATA[Digital Rights]]></category>
		<category><![CDATA[Freedom of Speech]]></category>
		<category><![CDATA[Intermediary Liability]]></category>
		<category><![CDATA[IT Amendment Rules 2026]]></category>
		<category><![CDATA[Non Consensual Content]]></category>
		<category><![CDATA[Section 79]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=31821</guid>

					<description><![CDATA[<p>Introduction The proliferation of artificial intelligence-generated synthetic media has created unprecedented challenges for digital governance worldwide. In India, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (IT Amendment Rules 2026) on February 10, 2026, which came into force on February 20, 2026 [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/meitys-2-hour-deepfake-takedown-window-under-it-amendment-rules-2026-constitutionally-proportionate-or-operationally-impossible/">MeitY’s 2-Hour Deepfake Takedown Window Under IT Amendment Rules 2026: Constitutionally Proportionate or Operationally Impossible?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><b>Introduction</b></h2>
<p>The proliferation of artificial intelligence-generated synthetic media has created unprecedented challenges for digital governance worldwide. In India, the Ministry of Electronics and Information Technology notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (IT Amendment Rules 2026) on February 10, 2026, which came into force on February 20, 2026 [1]. These amendments introduce stringent timelines for content takedown, particularly a two-hour window for removing non-consensual intimate images and deepfake pornography, raising critical questions about constitutional validity and practical feasibility. This article examines whether the IT Amendment Rules 2026 strike a proportionate balance between protecting fundamental rights and ensuring operational viability for digital intermediaries.</p>
<h2><b>The Regulatory Framework: Understanding the IT Amendment Rules 2026</b></h2>
<p><span style="font-weight: 400;">The Information Technology Act, 2000 serves as the foundational legislation governing cyberspace in India, with the IT Rules 2021 providing detailed guidelines for intermediary liability. The </span>IT Amendment Rules 2026 <span style="font-weight: 400;">specifically target synthetically generated information, defined under the newly inserted Rule 2(1)(wa) as &#8220;audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real&#8221; [1].</span></p>
<p><span style="font-weight: 400;">Under the amended framework, intermediaries must now remove content within drastically compressed timelines. Rule 3(1)(d) mandates removal of unlawful content within three hours of receiving a government or court order, reduced from the previous thirty-six-hour window [2]. More significantly, Rule 3(2)(b) requires intermediaries to act within two hours in cases involving exposure of private areas, nudity, sexual acts, or artificially morphed images, which were previously subject to a twenty-four-hour deadline [3].</span></p>
<p><span style="font-weight: 400;">The amendments also impose mandatory labelling requirements under Rule 4(1A), requiring significant social media intermediaries to ensure users declare whether uploaded content is synthetically generated, and to embed permanent metadata or unique digital identifiers in such content [2]. These provisions are designed to address the exponential rise in deepfake-related crimes, with Indians losing approximately &#8377;22,845 crore to cybercriminals in 2024, a 206 percent increase over the previous year [1].</span></p>
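<p><span style="font-weight: 400;">Purely for illustration, the labelling obligation can be imagined as binding a unique content identifier (here, a SHA-256 hash) to the uploader&#8217;s declaration in a provenance manifest. Rule 4(1A) prescribes no format; the field names and structure below are the author&#8217;s assumptions, not anything mandated by the rules:</span></p>

```python
# Hypothetical sketch only: a provenance record binding a content hash
# to a user's synthetic-media declaration. All field names are
# illustrative assumptions; Rule 4(1A) specifies no such schema.
import hashlib
import json

def make_provenance_record(media_bytes: bytes, declared_synthetic: bool) -> str:
    """Return a JSON manifest pairing a unique digital identifier with the declaration."""
    content_id = hashlib.sha256(media_bytes).hexdigest()  # unique digital identifier
    record = {
        "content_sha256": content_id,
        "synthetically_generated": declared_synthetic,  # the user's declaration
        "label": "AI-generated" if declared_synthetic else None,
    }
    return json.dumps(record, sort_keys=True)

record = make_provenance_record(b"example-frame-data", declared_synthetic=True)
print(json.loads(record)["synthetically_generated"])  # True
```

<p><span style="font-weight: 400;">Real-world implementations would more likely rely on embedded metadata or watermarking standards such as C2PA-style content credentials, but the essential design question is the same: how the declaration travels with the content.</span></p>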
<h2><b>Constitutional Foundations: Article 19 and the Freedom of Speech Framework</b></h2>
<p><span style="font-weight: 400;">The constitutional validity of content takedown regulations must be examined through the lens of Article 19(1)(a) of the Constitution of India, which guarantees all citizens the right to freedom of speech and expression. This right extends to digital platforms and online speech, as established in numerous Supreme Court pronouncements. However, Article 19(2) permits the state to impose reasonable restrictions on this freedom in the interests of sovereignty and integrity of India, security of the state, friendly relations with foreign states, public order, decency or morality, contempt of court, defamation, or incitement to an offense.</span></p>
<p><span style="font-weight: 400;">The landmark judgment in Shreya Singhal v. Union of India [4] fundamentally reshaped intermediary liability law in India. The Supreme Court struck down Section 66A of the Information Technology Act, 2000 for being unconstitutionally vague and having a chilling effect on free speech. More importantly for the present discussion, the Court read down Section 79 of the IT Act and Rule 3(4) of the Intermediaries Guidelines to mean that intermediaries obtain actual knowledge requiring content takedown only through a court order or notification from a government authority, not through private complaints.</span></p>
<p><span style="font-weight: 400;">Justice Nariman observed in Shreya Singhal that &#8220;adjudicating on whether or not there is contravention of a particular provision of law, is the quintessential sovereign function to be discharged by the State or its organs. This function cannot be delegated to private parties such as intermediaries&#8221; [4]. This principle remains foundational to understanding the scope and limits of intermediary obligations under Indian law.</span></p>
<p><span style="font-weight: 400;">The tension between free expression and content moderation has been further explored in recent jurisprudence. Recent Constitution Bench observations have emphasized that restrictions on speech must be precisely tailored, proportionate, and narrowly drawn to pass constitutional scrutiny. Any framework limiting expression must not be ambiguous or overbroad, and must serve a legitimate state interest through the least restrictive means available.</span></p>
<h2><b>Proportionality Analysis: Balancing Rights and Regulatory Objectives</b></h2>
<p><span style="font-weight: 400;">The proportionality test, derived from constitutional jurisprudence, requires that any restriction on fundamental rights must satisfy four criteria: it must have a legitimate aim, be suitable to achieve that aim, be necessary in that no less restrictive alternative exists, and maintain a fair balance between the restriction and the rights affected.</span></p>
<p><span style="font-weight: 400;">The legitimate aim of the two-hour takedown window is clear and compelling. Non-consensual intimate imagery and deepfake pornography cause severe psychological trauma and reputational damage, and constitute violations of dignity and privacy. These harms are often irreversible, with content spreading rapidly across platforms and causing lasting damage to victims. The Supreme Court of India has repeatedly flagged the inadequacy of existing laws in addressing this digital menace, with the Chief Justice himself having been the target of a deepfake video [5].</span></p>
<p><span style="font-weight: 400;">However, the necessity prong of the proportionality test raises significant concerns. A two-hour response window for global platforms handling millions of content pieces daily presents formidable operational challenges. Automated detection systems, while increasingly sophisticated, struggle with accuracy rates and generate both false positives and false negatives. Human moderation at scale within such compressed timelines requires substantial infrastructure investment, multilingual expertise, and contextual understanding that may not be immediately available.</span></p>
<p><span style="font-weight: 400;">Furthermore, the rules do not provide clear standards for what constitutes &#8220;reasonable and appropriate technical measures&#8221; for detecting prohibited synthetic content, nor do they establish performance benchmarks or acceptable error-rate thresholds [3]. This ambiguity creates uncertainty for intermediaries attempting compliance while simultaneously risking over-censorship to avoid liability.</span></p>
<h2><b>The Deepfake Crisis: Judicial Recognition and Response</b></h2>
<p><span style="font-weight: 400;">Indian courts have increasingly recognized the unique threats posed by deepfake technology. In Arun Jaitley v. Network Solutions Private Limited, the Delhi High Court protected personality rights in the digital domain, establishing that personal names of prominent individuals merit protection against cybersquatting and unauthorized use [6]. While this case predated the deepfake era, its reasoning about protecting digital identity and preventing misuse of persona has been extended to contemporary challenges.</span></p>
<p><span style="font-weight: 400;">More recently, courts have addressed deepfake-specific harms. The Delhi High Court, in addressing cases involving prominent personalities, has issued orders requiring platforms to deploy automated technology for detecting and deleting infringing content. These judicial directions acknowledge that manual takedown procedures are inadequate for addressing the scope and velocity of digital harm, necessitating technological solutions to counter technological threats [7].</span></p>
<p><span style="font-weight: 400;">The Supreme Court&#8217;s jurisprudence on dignity and privacy rights under Article 21 provides additional constitutional grounding for robust anti-deepfake measures. The right to life and personal liberty has been interpreted expansively to include the right to dignity, privacy, and reputation. Non-consensual intimate imagery, whether real or synthetic, violates these fundamental rights in ways that justify state intervention.</span></p>
<h2><b>Comparative Perspectives: Global Approaches to Deepfake Regulation</b></h2>
<p><span style="font-weight: 400;">India&#8217;s two-hour takedown mandate can be contextualized against international regulatory approaches. The United States enacted the Take It Down Act in May 2025, requiring platforms to remove non-consensual intimate imagery and deepfakes within forty-eight hours of notification [8]. This legislation provides more time for compliance while establishing federal standards for notice-and-takedown procedures.</span></p>
<p><span style="font-weight: 400;">The European Union&#8217;s approach under the AI Act and the Digital Services Act establishes risk-based frameworks that impose heightened obligations on very large online platforms while providing more nuanced timelines and procedural safeguards. These frameworks recognize that different types of content and different platform capacities warrant differentiated regulatory responses.</span></p>
<p><span style="font-weight: 400;">The critical distinction in India&#8217;s approach is the extremely compressed timeline coupled with potential loss of safe harbor immunity under Section 79 of the IT Act for non-compliance. This creates high-stakes pressure on intermediaries that may incentivize over-removal of content to avoid liability, potentially infringing on legitimate speech and expression.</span></p>
<h2><b>Operational Feasibility: The Implementation Challenge</b></h2>
<p><span style="font-weight: 400;">The operational challenges of implementing a two-hour takedown window cannot be overstated. Platforms must first receive notification, verify the complainant&#8217;s identity and claim, locate the specific content across potentially multiple instances and formats, assess whether it genuinely violates the rules rather than constituting legitimate parody or satire, and then execute technical removal while maintaining records for potential legal challenges.</span></p>
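<p><span style="font-weight: 400;">The compressed timelines described above can be sketched as simple deadline bookkeeping. The rule identifiers in the mapping below are taken from this article; everything else (function names, the mapping itself) is an illustrative assumption rather than anything the rules prescribe:</span></p>

```python
# Minimal, hypothetical sketch of compliance-deadline computation under
# the compressed timelines discussed in this article. The rule numbers
# come from the article; the code structure is an assumption.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOWS = {
    "rule_3_1_d_order": timedelta(hours=3),    # government/court order
    "rule_3_2_b_intimate": timedelta(hours=2), # non-consensual intimate imagery
}

def removal_deadline(notified_at: datetime, ground: str) -> datetime:
    """Compute the latest permissible removal time from the notification time."""
    return notified_at + TAKEDOWN_WINDOWS[ground]

notified = datetime(2026, 2, 20, 10, 0, tzinfo=timezone.utc)
deadline = removal_deadline(notified, "rule_3_2_b_intimate")
print(deadline.isoformat())  # 2026-02-20T12:00:00+00:00
```

<p><span style="font-weight: 400;">Even this trivial model highlights the pressure point: every step of verification, location, assessment, and removal must fit inside a window that a single escalation or time-zone handoff can consume entirely.</span></p>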
<p><span style="font-weight: 400;">For global platforms operating across time zones, the requirement means maintaining round-the-clock moderation teams with expertise in Indian law and cultural context. For smaller intermediaries and emerging platforms, these requirements may create insurmountable barriers to entry, potentially consolidating the digital marketplace in favor of large incumbents with resources to build extensive compliance infrastructure.</span></p>
<p><span style="font-weight: 400;">The </span>IT Amendment Rules 2026 <span style="font-weight: 400;">provide limited clarity on contentious edge cases. Exclusions for &#8220;routine editing&#8221; and &#8220;good faith creation&#8221; remain subject to interpretation, particularly for satire, parody, or artistic expression [3]. The mechanism for verifying user declarations about synthetic content is also undefined, leaving intermediaries to develop their own standards without regulatory guidance.</span></p>
<p><span style="font-weight: 400;">Furthermore, the rules do not address the reality that deepfakes are constantly evolving technologically. Detection methods that work today may be obsolete tomorrow as generation techniques become more sophisticated. This creates an arms race dynamic where compliance frameworks must continuously adapt, yet the regulatory timelines remain fixed.</span></p>
<h2><b>The Safe Harbor Dilemma: Balancing Protection and Accountability</b></h2>
<p><span style="font-weight: 400;">Section 79 of the Information Technology Act provides intermediaries with safe harbor immunity from liability for third-party content, provided they comply with due diligence requirements. The Shreya Singhal judgment clarified that this immunity is preserved when intermediaries respond appropriately to government or court orders for content takedown [4].</span></p>
<p><span style="font-weight: 400;">The </span>IT Amendment Rules 2026 <span style="font-weight: 400;">explicitly state that intermediaries will not lose safe harbor protection when removing or disabling access to synthetically generated content in accordance with the rules [2]. However, the practical effect of the compressed timelines is to shift substantial risk to intermediaries. Failure to remove content within two hours could result in loss of immunity, exposing platforms to liability for damages suffered by victims.</span></p>
<p><span style="font-weight: 400;">This creates a strong incentive structure favoring over-removal. When faced with uncertainty about whether specific content violates the rules, platforms will likely err on the side of taking down questionable material rather than risking significant legal exposure. This dynamic undermines the careful balance struck in Shreya Singhal, where the Court sought to prevent intermediaries from becoming private judges of content legality.</span></p>
<p><span style="font-weight: 400;">The constitutional concern is that this effectively delegates quasi-judicial functions to private platforms, requiring them to make rapid determinations about content legality without the procedural safeguards that accompany governmental or judicial decision-making. This runs contrary to the Shreya Singhal principle that adjudicating legal violations is a quintessentially sovereign function.</span></p>
<h2><b>Recommendations: Toward a More Balanced Framework</b></h2>
<p><span style="font-weight: 400;">A more constitutionally sound and operationally viable framework would incorporate several modifications. First, the rules should establish clear, graduated timelines based on content type and harm severity. Genuinely harmful non-consensual intimate imagery might warrant expedited removal, while other synthetic content could operate under longer timeframes allowing for careful review.</span></p>
<p><span style="font-weight: 400;">Second, procedural safeguards must be strengthened. Users whose content is removed should receive notification and have meaningful opportunity for appeal. The rules should establish independent review mechanisms, similar to content review boards that some platforms have voluntarily adopted, ensuring that takedown decisions are subject to oversight beyond the initial platform determination.</span></p>
<p><span style="font-weight: 400;">Third, the regulatory framework should provide clearer technical standards and guidance. Rather than leaving intermediaries to develop their own detection methodologies, the government could establish certification programs for detection tools, create safe harbors for good faith use of approved technologies, and provide regular guidance on emerging deepfake techniques and appropriate responses.</span></p>
<p><span style="font-weight: 400;">Fourth, the rules should explicitly protect legitimate uses of synthetic media. Clear carve-outs for news reporting, academic research, artistic expression, and political commentary would prevent over-censorship while still addressing genuinely harmful content. These exceptions should be defined with sufficient precision to provide meaningful guidance while remaining flexible enough to accommodate technological evolution.</span></p>
<p><span style="font-weight: 400;">Finally, enforcement should be proportionate and consider platform size and resources. Differential standards for large social media intermediaries versus smaller platforms would recognize that compliance capacity varies substantially across the digital ecosystem. This tiered approach is common in other jurisdictions and helps prevent regulatory capture by large incumbents.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 represent India&#8217;s most comprehensive effort to address the deepfake crisis. The two-hour takedown window for non-consensual intimate imagery reflects legitimate concerns about severe harms that victims suffer from such content. However, the constitutional validity and operational feasibility of this extremely compressed timeline remain questionable.</span></p>
<p><span style="font-weight: 400;">The framework must be evaluated against the standards established in Shreya Singhal v. Union of India and the broader constitutional jurisprudence on freedom of speech and expression. While protecting victims of deepfake abuse is a compelling state interest, the means chosen must be narrowly tailored, provide adequate procedural safeguards, and avoid creating incentive structures that lead to over-censorship of legitimate speech.</span></p>
<p><span style="font-weight: 400;">The tension between rapid response to digital harm and protection of free expression is not unique to India, but India&#8217;s approach is among the most aggressive globally. As implementation proceeds, close monitoring of compliance rates, false positive removals, and impact on legitimate speech will be essential. The rules include provisions for periodic review, and such reviews should incorporate empirical data on implementation challenges and constitutional concerns.</span></p>
<p><span style="font-weight: 400;">Ultimately, effective deepfake regulation requires a multi-stakeholder approach combining legal frameworks, technological solutions, media literacy, and international cooperation. The two-hour takedown window, while well-intentioned, may prove to be operationally impossible without substantial modifications that better balance the legitimate interests of all stakeholders while maintaining fidelity to constitutional principles of free expression and due process.</span></p>
<h2><b>References</b></h2>
<p><span style="font-weight: 400;">[1] Ministry of Electronics and Information Technology. (2026). Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026.</span><a href="https://www.outlookbusiness.com/news/meity-notifies-it-rules-to-curb-deepfakes-and-ai-generated-content"> <span style="font-weight: 400;">https://www.outlookbusiness.com/news/meity-notifies-it-rules-to-curb-deepfakes-and-ai-generated-content</span></a></p>
<p><span style="font-weight: 400;">[2] Outlook Business. (2026). AI Labelling, Quicker Takedowns: Decoding India&#8217;s New Social Media Rules.</span><a href="https://www.outlookbusiness.com/explainers/ai-labelling-quicker-takedowns-decoding-indias-new-social-media-rules"> <span style="font-weight: 400;">https://www.outlookbusiness.com/explainers/ai-labelling-quicker-takedowns-decoding-indias-new-social-media-rules</span></a></p>
<p><span style="font-weight: 400;">[3] Obhan &amp; Associates. (2026). India&#8217;s New Deepfake Regulation: MeitY Notifies Amendments to Information Technology Rules 2021.</span><a href="https://www.obhanandassociates.com/blog/indias-new-deepfake-regulation-meity-notifies-amendments-to-information-technology-rules-2021/"> <span style="font-weight: 400;">https://www.obhanandassociates.com/blog/indias-new-deepfake-regulation-meity-notifies-amendments-to-information-technology-rules-2021/</span></a></p>
<p><span style="font-weight: 400;">[4] Shreya Singhal v. Union of India, (2015) 5 SCC 1, AIR 2015 SC 1523.</span><a href="https://indiankanoon.org/doc/110813550/"> <span style="font-weight: 400;">https://indiankanoon.org/doc/110813550/</span></a></p>
<p><span style="font-weight: 400;">[5] The Sentinel Assam. (2026). Can new IT rules stop the deepfake epidemic?</span><a href="https://www.sentinelassam.com/more-news/editorial/can-new-it-rules-stop-the-deepfake-epidemic"> <span style="font-weight: 400;">https://www.sentinelassam.com/more-news/editorial/can-new-it-rules-stop-the-deepfake-epidemic</span></a></p>
<p><span style="font-weight: 400;">[6] Arun Jaitley v. Network Solutions Private Limited, CS(OS) 1745/2009, Delhi High Court (2011).</span><a href="https://indiankanoon.org/doc/754672/"> <span style="font-weight: 400;">https://indiankanoon.org/doc/754672/</span></a></p>
<p><span style="font-weight: 400;">[7] Khurana &amp; Khurana. (2025). Deepfake Regulation India 2025: MeitY&#8217;s Comprehensive IT Rules Amendment.</span><a href="https://www.khuranaandkhurana.com/deepfake-regulation-india-2025-meity-s-comprehensive-it-rules-amendment"> <span style="font-weight: 400;">https://www.khuranaandkhurana.com/deepfake-regulation-india-2025-meity-s-comprehensive-it-rules-amendment</span></a></p>
<p><span style="font-weight: 400;">[8] Skadden, Arps, Slate, Meagher &amp; Flom LLP. (2025). &#8216;Take It Down Act&#8217; Requires Online Platforms To Remove Unauthorized Intimate Images and Deepfakes When Notified.</span><a href="https://www.skadden.com/insights/publications/2025/06/take-it-down-act"> <span style="font-weight: 400;">https://www.skadden.com/insights/publications/2025/06/take-it-down-act</span></a></p>
<p><span style="font-weight: 400;">[9] The Federal. (2026). India mandates 3-hour takedown for AI content: FAQ of what you need to know.</span><a href="https://thefederal.com/category/explainers-2/ai-content-faq-on-new-it-rules-for-ai-generated-content-deepfake-229394"> <span style="font-weight: 400;">https://thefederal.com/category/explainers-2/ai-content-faq-on-new-it-rules-for-ai-generated-content-deepfake-229394</span></a></p>
<p>The post <a href="https://bhattandjoshiassociates.com/meitys-2-hour-deepfake-takedown-window-under-it-amendment-rules-2026-constitutionally-proportionate-or-operationally-impossible/">MeitY’s 2-Hour Deepfake Takedown Window Under IT Amendment Rules 2026: Constitutionally Proportionate or Operationally Impossible?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</title>
		<link>https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/</link>
		
		<dc:creator><![CDATA[Aaditya Bhatt]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 12:34:08 +0000</pubDate>
				<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI Regulation India]]></category>
		<category><![CDATA[Algorithmic Liability]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Intermediary Guidelines]]></category>
		<category><![CDATA[Intermediary Liability]]></category>
		<category><![CDATA[IT Act 2000]]></category>
		<category><![CDATA[IT Rules 2026]]></category>
		<category><![CDATA[Safe Harbour]]></category>
		<category><![CDATA[Section 79]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=31818</guid>

					<description><![CDATA[<p>Introduction The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, Section 79 has functioned as the backbone of India&#8217;s internet economy, shielding platforms from secondary liability for third-party content. The provision was drafted at a time when the internet was imagined as a passive pipe: a conduit through which users sent and received information. Generative and recommendation algorithms of the kind that now define the digital experience were simply not contemplated [1].</span></p>
<p><span style="font-weight: 400;">Today, platforms such as YouTube, Instagram, and AI-native services like Grok do not simply host content. Their algorithms curate, amplify, and personalise it and, in the case of generative AI, actively produce it. This makes the question far from academic: if an algorithm is found to be an active participant in content creation or curation, the platform deploying it may lose its statutory shield entirely. The Ministry of Electronics and Information Technology (MeitY) has, through a series of advisories in 2023 and 2024, begun to signal precisely this shift — that AI is not simply content hosted on a platform, but content shaped and generated by it [2].</span></p>
<h2><b>The Architecture of Section 79 of the IT Act: What the Provision Actually Says</b></h2>
<p><span style="font-weight: 400;">Section 79 of the Information Technology Act, 2000, provides in its operative part: </span><i><span style="font-weight: 400;">&#8220;Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.&#8221;</span></i><span style="font-weight: 400;"> This immunity is not unconditional. Sub-section (2) requires that the intermediary must not have initiated the transmission, must not have selected the receiver, and must not have selected or modified the information contained in the transmission. It must also observe due diligence and comply with the guidelines prescribed by the Central Government.</span></p>
<p><span style="font-weight: 400;">Sub-section (3) withdraws the protection in two scenarios: first, where the intermediary has conspired with, abetted, aided, or induced the commission of an unlawful act; and second, where the intermediary, upon receiving &#8220;actual knowledge&#8221; that unlawful content is being hosted on its platform, fails to expeditiously remove or disable access to that material. The term &#8220;intermediary&#8221; is defined under Section 2(1)(w) of the IT Act as </span><i><span style="font-weight: 400;">&#8220;any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record,&#8221;</span></i><span style="font-weight: 400;"> and expressly includes telecom service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces, and cyber cafes [1].</span></p>
<p><span style="font-weight: 400;">The structure of this provision assumes a fundamental premise: that the intermediary is a passive actor. Its immunity is premised on its not having shaped the content in question. The moment it crosses into active participation — selecting, modifying, inducing — the statutory protection falls away. The rise of AI platforms tests every element of this assumption.</span></p>
<h2><b>Shreya Singhal v. Union of India (2015): The Constitutional Baseline</b></h2>
<p><span style="font-weight: 400;">No discussion of Section 79 of the IT Act is complete without a reckoning with the Supreme Court&#8217;s landmark judgment in </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, delivered on 24 March 2015 by a bench of Justices J. Chelameswar and R.F. Nariman. The case arose from a batch of writ petitions under Article 32 of the Constitution of India, principally challenging the constitutionality of Sections 66A, 69A, and 79 of the IT Act. The Supreme Court&#8217;s treatment of Section 79 fundamentally reshaped the intermediary liability regime in India [3].</span></p>
<p><span style="font-weight: 400;">The Court read down Section 79(3)(b) to narrow its scope significantly. The holding was unambiguous:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;Section 79 is valid subject to Section 79(3)(b) being read down to mean that an intermediary upon receiving actual knowledge from a court order or on being notified by the appropriate Government or its agency that unlawful acts relatable to Article 19(2) are going to be committed then fails to expeditiously remove or disable access to such material.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">In practical terms, the Court held that intermediaries are not required to act upon private takedown requests. &#8220;Actual knowledge,&#8221; as used in Section 79(3)(b), was interpreted to mean knowledge received through the medium of a court order — not a complaint from a private party. This interpretation rested on a practical foundation: holding intermediaries like Google and Facebook to a standard of responding to every private complaint would make it impossible for them to function, since such platforms receive millions of requests and cannot be expected to adjudicate the legality of each piece of content on their own. The Court further affirmed that there is no positive obligation on intermediaries to monitor content on their platforms [3]. This no-monitoring principle remains foundational to India&#8217;s safe harbour regime under Section 79 of the IT Act, even as AI regulation begins to chip away at it.</span></p>
<h2><b>Active vs. Passive Intermediaries: The Christian Louboutin Standard</b></h2>
<p><span style="font-weight: 400;">The passive/active distinction now central to the AI liability debate was crystallised in Indian jurisprudence by the Delhi High Court in </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, decided on 2 November 2018 by Justice Prathiba M. Singh. The case involved the luxury shoe brand&#8217;s claim against darveys.com, an e-commerce platform that used the plaintiff&#8217;s trademarks as meta-tags and claimed to sell authentic goods sourced from authorised stores [4].</span></p>
<p><span style="font-weight: 400;">The defendant&#8217;s principal defence was that it was a mere intermediary under Section 79 of the IT Act. Justice Singh rejected this defence and, in doing so, laid down a twenty-six point framework to determine whether an online platform is a passive conduit or an active participant. The court reasoned that so long as a platform acts as &#8220;mere conduit or passive transmitters of the records or of the information, they continue to be intermediaries, but merely calling themselves as intermediaries does not qualify all e-commerce platforms or online market places as one.&#8221; The court then held:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;When an e-commerce website is involved in or conducts its business in such a manner, which would see the presence of a large number of elements enumerated above, it could be said to cross the line from being an intermediary to an active participant.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">By curating product listings, arranging logistics, using meta-tags, and guaranteeing authenticity, darveys.com had exceeded the role of a neutral conduit. The court also held that failure to observe due diligence with respect to intellectual property rights could amount to &#8220;conspiring, aiding, abetting, or inducing&#8221; unlawful conduct under Section 79(3)(a), independently disentitling the platform from safe harbour [4].</span></p>
<p><span style="font-weight: 400;">This framework applies with full force to AI platforms. When a recommendation algorithm selects which content a user sees, or when a generative AI model produces text or video in response to a user prompt, the question of whether these functions constitute &#8220;selection&#8221; or &#8220;modification&#8221; of information within the language of Section 79(2)(b) becomes the defining legal inquiry. The </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> standard supplies the doctrinal tool; generative AI supplies the stress test.</span></p>
<h2><b>IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Expanding the Compliance Perimeter</b></h2>
<p><span style="font-weight: 400;">The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on 25 February 2021 under Section 87 read with Section 79 of the IT Act, represent the most significant regulatory expansion of intermediary obligations since the original 2011 Guidelines. Rule 7 makes explicit that an intermediary which fails to comply with prescribed due diligence requirements shall no longer be entitled to safe harbour under Section 79(1) of the IT Act and shall be liable under applicable laws [1].</span></p>
<p><span style="font-weight: 400;">The 2021 Rules introduced the classification of &#8220;significant social media intermediaries&#8221; (SSMIs) — social media intermediaries with more than fifty lakh (five million) registered users in India. SSMIs bear substantially heavier obligations: they must appoint a Chief Compliance Officer, a Grievance Redressal Officer, and a Nodal Contact Person, all resident in India. Rule 4(2) requires SSMIs that primarily provide messaging services to enable identification of the &#8220;first originator&#8221; of information where directed by a court or competent authority under Section 69 of the IT Act.</span></p>
<p><span style="font-weight: 400;">For AI platforms, the most consequential provision is Rule 3(1)(b), which requires intermediaries to &#8220;make reasonable efforts by itself, and to cause the users of its computer resource&#8221; not to publish certain categories of prohibited content. This language has been interpreted as potentially imposing a preventive obligation — not merely reactive removal — that moves the compliance standard toward something approaching a monitoring duty. If AI systems deployed on a platform generate or amplify prohibited content, the question of whether the platform made &#8220;reasonable efforts&#8221; to prevent this, independently of any user action, becomes immediately live [2].</span></p>
<h2><b>MeitY&#8217;s AI Advisories: The Regulatory Turn</b></h2>
<p><span style="font-weight: 400;">India&#8217;s formal attempt to address AI within the intermediary liability framework began in November 2023 and crystallised through MeitY advisories issued in early 2024. The March 15, 2024 Advisory — which replaced the March 1, 2024 Advisory — directed intermediaries to ensure that the use of &#8220;AI models, large language models, generative AI technology, software or algorithms&#8221; on or through their platforms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any content in violation of the Intermediary Guidelines or any other law in force [2].</span></p>
<p><span style="font-weight: 400;">The advisory&#8217;s significance lies in its implicit treatment of AI not as content but as a potentially liable actor within the intermediary ecosystem. By requiring platforms to ensure that AI models deployed on them do not enable unlawful conduct, MeitY effectively placed the responsibility for AI-generated harm squarely on the platform. A platform that deploys a generative AI model which produces deepfake content, defamatory material, or content that undermines democratic processes cannot credibly claim it was merely hosting third-party information — because the AI is not a third party in any conventional sense. It is the platform&#8217;s own deployed technology [2].</span></p>
<p><span style="font-weight: 400;">The advisories also addressed deepfakes specifically, reflecting the 2023 Rashmika Mandanna incident, where AI-generated synthetic video caused significant public and political concern. That episode illustrated how AI-generated content can cause reputational harm at a scale and speed that outpaces any traditional notice-and-takedown mechanism, and demonstrated to MeitY that the existing framework needed explicit AI-specific obligations [5].</span></p>
<h2><b>IT (Intermediary Guidelines) Amendment Rules, 2026: Formalising AI Liability</b></h2>
<p><span style="font-weight: 400;">The most direct regulatory intervention to date is the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 20 February 2026. These rules, for the first time, introduce a statutory definition of &#8220;synthetically generated information&#8221; (SGI), described as any content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears authentic. This definition is intentionally broad, capturing the full range of AI-generated content including deepfakes, synthetic audio-visual material, and algorithmically altered images [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules impose mandatory labelling obligations on intermediaries that facilitate the creation of SGI. Visual content must carry a clear and permanent identifier covering at least ten percent of the display area; audio content must contain an audible disclosure during at least ten percent of its duration. These labels cannot be removed, modified, or suppressed by users. The rules also dramatically reduce takedown timelines: unlawful or prohibited AI-generated content must be removed or disabled within three hours of receiving a lawful notice [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules expressly clarify that intermediaries acting in good faith and in compliance with these obligations will continue to enjoy safe harbour protection under Section 79 of the IT Act. Conversely, failure to comply — failure to label, delay in takedown, or inadequate grievance handling — may result in the loss of that protection. Safe harbour is thereby transformed from a passive shield into a compliance-contingent privilege. The standard is no longer merely reactive: an intermediary must demonstrate system-level preparedness to deal with AI-generated risks proactively, not merely respond to them after harm has occurred [5].</span></p>
<h2><b>The Grok Question: When AI Is the Platform</b></h2>
<p><span style="font-weight: 400;">The most pointed articulation of the AI-as-creator problem in Indian regulatory discourse concerns the deployment of Grok, an AI model integrated into X (formerly Twitter). The Indian government has argued — publicly, if not yet conclusively in litigation — that X&#8217;s deployment of Grok effectively makes it a creator of content, not merely a host. If Grok generates content in response to user prompts, X cannot claim to be a neutral intermediary whose only role is the passive transmission of third-party information. On this view, Section 79&#8217;s safe harbour would not apply, because the platform itself is the origin point of at least some of the content on it [6].</span></p>
<p><span style="font-weight: 400;">This is the active/passive distinction from </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> transposed directly onto generative AI. The legal framework as it currently stands does not offer a clean answer. The definition of intermediary in Section 2(1)(w) refers to a person who &#8220;receives, stores or transmits&#8221; electronic records or &#8220;provides any service with respect to that record.&#8221; A generative AI model arguably does none of these things in the traditional sense — it creates records rather than receiving or transmitting them [1][6].</span></p>
<p><span style="font-weight: 400;">Researchers at the Carnegie Endowment have observed that existing definitions under the IT Act, when applied to AI systems, are &#8220;being stretched too thin&#8221; and that &#8220;generative AI systems may not fall neatly within the purview of either publisher or intermediary&#8221; under the current statutory framework [7]. This definitional gap is precisely why the 2026 Amendment Rules and the anticipated Digital India Act are significant: they represent attempts to fill a statutory vacuum that the original IT Act, drafted in 2000, could not have anticipated.</span></p>
<h2><b>MySpace Inc. v. Super Cassettes Industries Ltd.: The No-Monitoring Principle and Its Limits</b></h2>
<p><span style="font-weight: 400;">The no-monitoring principle affirmed in </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> was reaffirmed by a Division Bench of the Delhi High Court in </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478. The court held that intermediaries are under no positive obligation to proactively monitor their platforms for copyright infringement, and that &#8220;actual knowledge&#8221; in the copyright context requires specific knowledge of the particular infringing works (typically identification of the specific URLs at which they appear) rather than constructive or inferred knowledge. The court expressly rejected the argument that a platform&#8217;s technical ability to detect infringing content was equivalent to legal knowledge sufficient to impose liability [8].</span></p>
<p><span style="font-weight: 400;">This principle sits uneasily alongside the 2026 Rules&#8217; mandatory labelling and three-hour takedown obligations for AI-generated content. If a platform deploys an AI model that generates content, and that content turns out to be unlawful, the platform&#8217;s argument that it had no &#8220;actual knowledge&#8221; of the specific unlawfulness is considerably weakened — because the AI is the platform&#8217;s own system. The content did not arrive from an unknown third-party originator; it was produced by the platform&#8217;s own technology. The no-monitoring principle was premised on the practical impossibility of reviewing every piece of user-generated content. That impossibility argument does not translate cleanly to AI-generated content, which the platform&#8217;s own systems produced and could, in principle, have been designed to screen from the outset [8].</span></p>
<h2><b>X Corp. v. Union of India: Section 79(3)(b) and the Live Battleground of Safe Harbour</b></h2>
<p><span style="font-weight: 400;">The question of how Section 79(3)(b) interacts with AI-generated content is being contested in live litigation before the Karnataka High Court in </span><i><span style="font-weight: 400;">X Corp. v. Union of India</span></i><span style="font-weight: 400;">, a writ petition filed on 5 March 2025 before Justice M. Nagaprasanna. X Corp. challenges the legality of information-blocking orders issued by various government ministries under Section 79(3)(b), following a MeitY Office Memorandum of 31 October 2023 that authorised all central ministries, state governments, and local police officers to issue content blocking orders through the Sahyog portal [9].</span></p>
<p><span style="font-weight: 400;">X&#8217;s core argument, drawing expressly on </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;">, is that Section 79(3)(b) cannot function as an independent mechanism for content blocking. Content blocking, X submits, can only occur through the constitutionally safeguarded process under Section 69A of the IT Act, which requires reasoned orders and procedural safeguards. By contrast, Section 79(3)(b) merely describes the circumstances in which safe harbour is lost — it does not independently confer blocking power on the executive [9]. For AI platforms, the implications are significant: if informal government notices under Section 79(3)(b) are sufficient to trigger takedown obligations for AI-generated content, platforms will face executive pressure to remove such content without judicial oversight, fundamentally altering the architecture of safe harbour from an immunity into a tool of executive content governance.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">Section 79 of the IT Act was not written for the age of algorithms. Its passive-intermediary model, refined through case law from </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">MySpace</span></i><span style="font-weight: 400;">, assumes a clean separation between the platform and the content it hosts. Generative AI destroys that separation. When an algorithm recommends, curates, or creates content, the platform is no longer merely a conduit — it is a participant. Whether courts will treat that participation as sufficient to strip safe harbour protection depends on how the active/passive distinction is applied to algorithmic conduct. MeitY&#8217;s 2026 Amendment Rules have begun to answer this question legislatively, by conditioning safe harbour on demonstrated compliance with AI-specific obligations, mandatory labelling, and accelerated takedown timelines. The answer, in short, is that an algorithm can be treated as part of the intermediary for regulatory purposes — but the intermediary that deploys it cannot hide behind Section 79 when the algorithm itself is the source of the harm.</span></p>
<h2><b>References</b></h2>
<p><span style="font-weight: 400;">[1] Information Technology Act, 2000, Sections 2(1)(w) and 79, Ministry of Electronics and Information Technology, Government of India. Available at:</span><a href="https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105"> <span style="font-weight: 400;">https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105</span></a></p>
<p><span style="font-weight: 400;">[2] S&amp;R Associates, &#8220;Investing in AI in India (Part 3): AI-related Advisories Under the Intermediary Guidelines,&#8221; October 2024. Available at:</span><a href="https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/"> <span style="font-weight: 400;">https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/</span></a></p>
<p><span style="font-weight: 400;">[3] </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, Supreme Court of India, 24 March 2015. Full judgment available at:</span><a href="https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf"> <span style="font-weight: 400;">https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf</span></a></p>
<p><span style="font-weight: 400;">[4] </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, Delhi High Court, 2 November 2018. Available at:</span><a href="https://indiankanoon.org/doc/99622088/"> <span style="font-weight: 400;">https://indiankanoon.org/doc/99622088/</span></a></p>
<p><span style="font-weight: 400;">[5] TBA Law, &#8220;India&#8217;s IT Intermediary Rules 2026 Amendment on AI-Generated Content: A Legal Analysis,&#8221; 2026. Available at:</span><a href="https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis"> <span style="font-weight: 400;">https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis</span></a></p>
<p><span style="font-weight: 400;">[6] IAS Gyan, &#8220;Grok Case Raises Questions of AI Governance,&#8221; 2024. Available at:</span><a href="https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance"> <span style="font-weight: 400;">https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance</span></a></p>
<p><span style="font-weight: 400;">[7] Carnegie Endowment for International Peace, &#8220;India&#8217;s Advance on AI Regulation,&#8221; November 2024. Available at:</span><a href="https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en"> <span style="font-weight: 400;">https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en</span></a></p>
<p><span style="font-weight: 400;">[8] Bar and Bench, &#8220;Generative AI and Intermediary Liability Under the Information Technology Act&#8221; (discussing </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478). Available at:</span><a href="https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act"> <span style="font-weight: 400;">https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act</span></a></p>
<p><span style="font-weight: 400;">[9] SC Observer, &#8220;X Relies on &#8216;Shreya Singhal&#8217; in Arbitrary Content-Blocking Case in Karnataka HC,&#8221; July 2025. Available at:</span><a href="https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/"> <span style="font-weight: 400;">https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/</span></a></p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
