Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?

Introduction

The question of whether an artificial intelligence platform can qualify as an “intermediary” under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, Section 79 has functioned as the backbone of India’s internet economy, shielding platforms from secondary liability for third-party content. The provision was drafted at a time when the internet was imagined as a passive pipe: a conduit through which users sent and received information. Algorithms of the generative and recommending kind that now define digital experience were simply not contemplated [1].

Today, platforms such as YouTube, Instagram, and AI-native services like Grok do not simply host content. Their algorithms curate, amplify, and personalise it; in the case of generative AI, they actively produce it. This makes the question far from academic: if an algorithm is found to be an active participant in content creation or curation, the platform deploying it may lose its statutory shield entirely. The Ministry of Electronics and Information Technology (MeitY) has, through a series of advisories in 2023 and 2024, begun to signal precisely this shift — that AI is not simply content hosted on a platform, but content shaped and generated by it [2].

The Architecture of Section 79 of the IT Act: What the Provision Actually Says

Section 79 of the Information Technology Act, 2000, provides in its operative part: “Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.” This immunity is not unconditional. Sub-section (2) requires that the intermediary must not have initiated the transmission, must not have selected the receiver, and must not have selected or modified the information contained in the transmission. It must also observe due diligence and comply with the guidelines prescribed by the Central Government.

Sub-section (3) withdraws the protection in two scenarios: first, where the intermediary has conspired with, abetted, aided, or induced the commission of an unlawful act; and second, where the intermediary, upon receiving “actual knowledge” that unlawful content is being hosted on its platform, fails to expeditiously remove or disable access to that material. The term “intermediary” is defined under Section 2(1)(w) of the IT Act as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record,” and expressly includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces, and cyber cafes [1].

The structure of this provision assumes a fundamental premise: that the intermediary is a passive actor. Its immunity is premised on its not having shaped the content in question. The moment it crosses into active participation — selecting, modifying, inducing — the statutory protection falls away. The rise of AI platforms tests every element of this assumption.

Shreya Singhal v. Union of India (2015): The Constitutional Baseline

No discussion of Section 79 of the IT Act is complete without a reckoning with the Supreme Court’s landmark judgment in Shreya Singhal v. Union of India, (2015) 5 SCC 1, delivered on 24 March 2015 by a bench of Justices J. Chelameswar and R.F. Nariman. The case arose from a batch of writ petitions under Article 32 of the Constitution of India, principally challenging the constitutionality of Sections 66A, 69A, and 79 of the IT Act. The Supreme Court’s treatment of Section 79 fundamentally reshaped the intermediary liability regime in India [3].

The Court read down Section 79(3)(b) to narrow its scope significantly. The holding was unambiguous:

“Section 79 is valid subject to Section 79(3)(b) being read down to mean that an intermediary upon receiving actual knowledge from a court order or on being notified by the appropriate Government or its agency that unlawful acts relatable to Article 19(2) are going to be committed then fails to expeditiously remove or disable access to such material.”

In practical terms, the Court held that intermediaries are not required to act upon private takedown requests. “Actual knowledge,” as used in Section 79(3)(b), was interpreted to mean knowledge received through the medium of a court order — not a complaint from a private party. This interpretation rested on a practical foundation: holding intermediaries like Google and Facebook to a standard of responding to every private complaint would make it impossible for them to function, since millions of requests are received and an intermediary cannot be expected to adjudicate the legality of each piece of content on its own. The Court further affirmed that there is no positive obligation on intermediaries to monitor content on their platforms [3]. This no-monitoring principle remains foundational to India’s safe harbour regime under Section 79 of the IT Act, even as AI regulation begins to chip away at it.

Active vs. Passive Intermediaries: The Christian Louboutin Standard

The passive/active distinction now central to the AI liability debate was crystallised in Indian jurisprudence by the Delhi High Court in Christian Louboutin SAS v. Nakul Bajaj & Ors., 2018 SCC OnLine Del 12215, decided on 2 November 2018 by Justice Prathiba M. Singh. The case involved the luxury shoe brand’s claim against darveys.com, an e-commerce platform that used the plaintiff’s trademarks as meta-tags and claimed to sell authentic goods sourced from authorised stores [4].

The defendant’s principal defence was that it was a mere intermediary under Section 79 of the IT Act. Justice Singh rejected this defence and, in doing so, laid down a twenty-six-point framework to determine whether an online platform is a passive conduit or an active participant. The court reasoned that so long as platforms act as “mere conduit or passive transmitters of the records or of the information, they continue to be intermediaries, but merely calling themselves as intermediaries does not qualify all e-commerce platforms or online market places as one.” The court then held:

“When an e-commerce website is involved in or conducts its business in such a manner, which would see the presence of a large number of elements enumerated above, it could be said to cross the line from being an intermediary to an active participant.”

By curating product listings, arranging logistics, using meta-tags, and guaranteeing authenticity, darveys.com had exceeded the role of a neutral conduit. The court also held that failure to observe due diligence with respect to intellectual property rights could amount to “conspiring, aiding, abetting, or inducing” unlawful conduct under Section 79(3)(a), independently disentitling the platform from safe harbour [4].

This framework applies with full force to AI platforms. When a recommendation algorithm selects which content a user sees, or when a generative AI model produces text or video in response to a user prompt, the question of whether these functions constitute “selection” or “modification” of information within the language of Section 79(2)(b) becomes the defining legal inquiry. The Christian Louboutin standard supplies the doctrinal tool; generative AI supplies the stress test.

IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Expanding the Compliance Perimeter

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on 25 February 2021 under Section 87 read with Section 79 of the IT Act, represent the most significant regulatory expansion of intermediary obligations since the original 2011 Guidelines. Rule 7 makes explicit that an intermediary which fails to comply with prescribed due diligence requirements shall no longer be entitled to safe harbour under Section 79(1) of the IT Act and shall be liable under applicable laws [1].

The 2021 Rules introduced the classification of “significant social media intermediaries” (SSMIs) — social media intermediaries with more than fifty lakh (five million) registered users in India. SSMIs bear substantially heavier obligations: they must appoint a Chief Compliance Officer, a Resident Grievance Officer, and a Nodal Contact Person, all resident in India. Rule 4(2) requires SSMIs that primarily provide messaging services to enable identification of the “first originator” of information when directed to do so by a judicial order or by an order of the competent authority under Section 69 of the IT Act.

For AI platforms, the most consequential provision is Rule 3(1)(b), which requires intermediaries to “make reasonable efforts by itself, and to cause the users of its computer resource” not to publish certain categories of prohibited content. This language has been interpreted as potentially imposing a preventive obligation — not merely reactive removal — that moves the compliance standard toward something approaching a monitoring duty. If AI systems deployed on a platform generate or amplify prohibited content, the question of whether the platform made “reasonable efforts” to prevent this, independently of any user action, becomes immediately live [2].

MeitY’s AI Advisories: The Regulatory Turn

India’s formal attempt to address AI within the intermediary liability framework began in November 2023 and crystallised through MeitY advisories issued in early 2024. The advisory of 15 March 2024, which replaced the advisory of 1 March 2024, directed intermediaries to ensure that the use of “AI models, large language models, generative AI technology, software or algorithms” on or through their platforms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any content in violation of the Intermediary Guidelines or any other law in force [2].

The advisory’s significance lies in its implicit treatment of AI not as content but as a potentially liable actor within the intermediary ecosystem. By requiring platforms to ensure that AI models deployed on them do not enable unlawful conduct, MeitY effectively placed the responsibility for AI-generated harm squarely on the platform. A platform that deploys a generative AI model which produces deepfake content, defamatory material, or content that undermines democratic processes cannot credibly claim it was merely hosting third-party information — because the AI is not a third party in any conventional sense. It is the platform’s own deployed technology [2].

The advisories also addressed deepfakes specifically, responding to the 2023 Rashmika Mandanna incident, in which an AI-generated synthetic video caused significant public and political concern. That episode illustrated how AI-generated content can cause reputational harm at a scale and speed that outpaces any traditional notice-and-takedown mechanism, and demonstrated to MeitY that the existing framework needed explicit AI-specific obligations [5].

IT (Intermediary Guidelines) Amendment Rules, 2026: Formalising AI Liability

The most direct regulatory intervention to date is the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 20 February 2026. These rules, for the first time, introduce a statutory definition of “synthetically generated information” (SGI), described as any content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears authentic. This definition is intentionally broad, capturing the full range of AI-generated content including deepfakes, synthetic audio-visual material, and algorithmically altered images [5].

The 2026 Rules impose mandatory labelling obligations on intermediaries that facilitate the creation of SGI. Visual content must carry a clear and permanent label or identifier covering at least ten percent of the display area; audio content must contain an audible disclosure during at least ten percent of its duration. These labels cannot be removed, modified, or suppressed by users. The rules also dramatically reduce takedown timelines: unlawful or prohibited AI-generated content must be removed or disabled within three hours of receiving a lawful notice [5].
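
To make these quantitative obligations concrete, the sketch below models them as a simple compliance check in Python. It is illustrative only: the data structures, field names, and functions are hypothetical and are not drawn from the text of the Rules; only the ten-percent labelling thresholds and the three-hour takedown window reflect the obligations described above, and a real compliance system would need to handle far more than these three parameters.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative thresholds mirroring the obligations described above for
# synthetically generated information (SGI): a visible label covering at
# least 10% of the display area for visual content, an audible disclosure
# spanning at least 10% of the duration for audio content, and removal
# within three hours of a lawful takedown notice.
VISUAL_LABEL_MIN_FRACTION = 0.10       # share of display area the label must cover
AUDIO_DISCLOSURE_MIN_FRACTION = 0.10   # share of duration the disclosure must span
TAKEDOWN_WINDOW = timedelta(hours=3)   # time allowed after a lawful notice

@dataclass
class SgiItem:
    kind: str                    # "visual" or "audio" (hypothetical field)
    label_area_fraction: float   # fraction of display area covered by the label
    disclosure_fraction: float   # fraction of audio duration carrying the disclosure

def label_compliant(item: SgiItem) -> bool:
    """Check whether an SGI item meets the illustrative labelling thresholds."""
    if item.kind == "visual":
        return item.label_area_fraction >= VISUAL_LABEL_MIN_FRACTION
    if item.kind == "audio":
        return item.disclosure_fraction >= AUDIO_DISCLOSURE_MIN_FRACTION
    return False

def takedown_within_window(notice_received: datetime, disabled_at: datetime) -> bool:
    """Check whether content was disabled within three hours of a lawful notice."""
    return disabled_at - notice_received <= TAKEDOWN_WINDOW

# Example: a visual item whose label covers only 6% of the display area fails,
# while a takedown completed two and a half hours after notice succeeds.
print(label_compliant(SgiItem("visual", label_area_fraction=0.06, disclosure_fraction=0.0)))  # False
print(takedown_within_window(datetime(2026, 3, 1, 10, 0), datetime(2026, 3, 1, 12, 30)))      # True
```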

The 2026 Rules expressly clarify that intermediaries acting in good faith and in compliance with these obligations will continue to enjoy safe harbour protection under Section 79 of the IT Act. Conversely, failure to comply — failure to label, delay in takedown, or inadequate grievance handling — may result in the loss of that protection. Safe harbour is thereby transformed from a passive shield into a compliance-contingent privilege. The standard is no longer merely reactive: an intermediary must demonstrate system-level preparedness to deal with AI-generated risks proactively, not merely respond to them after harm has occurred [5].

The Grok Question: When AI Is the Platform

The most pointed articulation of the AI-as-creator problem in Indian regulatory discourse concerns the deployment of Grok, an AI model integrated into X (formerly Twitter). The Indian government has argued — publicly, if not yet conclusively in litigation — that X’s deployment of Grok effectively makes it a creator of content, not merely a host. If Grok generates content in response to user prompts, X cannot claim to be a neutral intermediary whose only role is the passive transmission of third-party information. On this view, Section 79’s safe harbour would not apply, because the platform itself is the origin point of at least some of the content on it [6].

This is the active/passive distinction from Christian Louboutin transposed directly onto generative AI. The legal framework as it currently stands does not offer a clean answer. The definition of intermediary in Section 2(1)(w) refers to a person who “receives, stores or transmits” electronic records or “provides any service with respect to that record.” A generative AI model arguably does none of these things in the traditional sense — it creates records rather than receiving or transmitting them [1][6].

Researchers at the Carnegie Endowment have observed that existing definitions under the IT Act, when applied to AI systems, are “being stretched too thin” and that “generative AI systems may not fall neatly within the purview of either publisher or intermediary” under the current statutory framework [7]. This definitional gap is precisely why the 2026 Amendment Rules and the anticipated Digital India Act are significant: they represent attempts to fill a statutory vacuum that the original IT Act, drafted in 2000, could not have anticipated.

MySpace Inc. v. Super Cassettes Industries Ltd.: The No-Monitoring Principle and Its Limits

The no-monitoring principle affirmed in Shreya Singhal was reaffirmed by a Division Bench of the Delhi High Court in MySpace Inc. v. Super Cassettes Industries Ltd., (2017) 236 DLT 478. The court held that intermediaries are not under any positive obligation to proactively monitor content on their platforms for copyright infringement, and that “actual knowledge” requires specific knowledge of the particular infringing content; general, constructive, or inferred awareness that infringement may be occurring on the platform is not enough. The court expressly rejected the argument that a platform’s technical ability to detect infringing content was equivalent to legal knowledge sufficient to impose liability [8].

This principle sits uneasily alongside the 2026 Rules’ mandatory labelling and three-hour takedown obligations for AI-generated content. If a platform deploys an AI model that generates content, and that content turns out to be unlawful, the platform’s argument that it had no “actual knowledge” of the specific unlawfulness is considerably weakened — because the AI is the platform’s own system. The content did not arrive from an unknown third-party originator; it was produced by the platform’s own technology. The no-monitoring principle was premised on the practical impossibility of reviewing every piece of user-generated content. That impossibility argument does not translate cleanly to AI-generated content, which the platform’s own systems produced and could, in principle, have been designed to screen from the outset [8].

X Corp. v. Union of India: Section 79(3)(b) and the Live Battleground of Safe Harbour

The question of how Section 79(3)(b) interacts with AI-generated content is being contested in live litigation before the Karnataka High Court in X Corp. v. Union of India, a writ petition filed on 5 March 2025 and heard by Justice M. Nagaprasanna. X Corp. challenges the legality of information-blocking orders issued by various government ministries under Section 79(3)(b), following a MeitY Office Memorandum of 31 October 2023 that authorised all central ministries, state governments, and local police officers to issue content-blocking orders through the Sahyog portal [9].

X’s core argument, drawing expressly on Shreya Singhal, is that Section 79(3)(b) cannot function as an independent mechanism for content blocking. Content blocking, X submits, can only occur through the constitutionally safeguarded process under Section 69A of the IT Act, which requires reasoned orders and procedural safeguards. By contrast, Section 79(3)(b) merely describes the circumstances in which safe harbour is lost — it does not independently confer blocking power on the executive [9]. For AI platforms, the implications are significant: if informal government notices under Section 79(3)(b) are sufficient to trigger takedown obligations for AI-generated content, platforms will face executive pressure to remove such content without judicial oversight, fundamentally altering the architecture of safe harbour from an immunity into a tool of executive content governance.

Conclusion

Section 79 of the IT Act was not written for the age of algorithms. Its passive-intermediary model, refined through case law from Shreya Singhal to Christian Louboutin to MySpace, assumes a clean separation between the platform and the content it hosts. Generative AI destroys that separation. When an algorithm recommends, curates, or creates content, the platform is no longer merely a conduit — it is a participant. Whether courts will treat that participation as sufficient to strip safe harbour protection depends on how the active/passive distinction is applied to algorithmic conduct. MeitY’s 2026 Amendment Rules have begun to answer this question legislatively, by conditioning safe harbour on demonstrated compliance with AI-specific obligations, mandatory labelling, and accelerated takedown timelines. The answer, in short, is that an algorithm can be treated as part of the intermediary for regulatory purposes — but the intermediary that deploys it cannot hide behind Section 79 when the algorithm itself is the source of the harm.

References

[1] Information Technology Act, 2000, Sections 2(1)(w) and 79, Ministry of Electronics and Information Technology, Government of India. Available at: https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&orderno=105

[2] S&R Associates, “Investing in AI in India (Part 3): AI-related Advisories Under the Intermediary Guidelines,” October 2024. Available at: https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/

[3] Shreya Singhal v. Union of India, (2015) 5 SCC 1, Supreme Court of India, 24 March 2015. Full judgment available at: https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf

[4] Christian Louboutin SAS v. Nakul Bajaj & Ors., 2018 SCC OnLine Del 12215, Delhi High Court, 2 November 2018. Available at: https://indiankanoon.org/doc/99622088/

[5] TBA Law, “India’s IT Intermediary Rules 2026 Amendment on AI-Generated Content: A Legal Analysis,” 2026. Available at: https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis

[6] IAS Gyan, “Grok Case Raises Questions of AI Governance,” 2024. Available at: https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance

[7] Carnegie Endowment for International Peace, “India’s Advance on AI Regulation,” November 2024. Available at: https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en

[8] Bar and Bench, “Generative AI and Intermediary Liability Under the Information Technology Act” (discussing MySpace Inc. v. Super Cassettes Industries Ltd., (2017) 236 DLT 478). Available at: https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act

[9] SC Observer, “X Relies on ‘Shreya Singhal’ in Arbitrary Content-Blocking Case in Karnataka HC,” July 2025. Available at: https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/