<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IT Rules 2026 Archives - Bhatt &amp; Joshi Associates</title>
	<atom:link href="https://bhattandjoshiassociates.com/tag/it-rules-2026/feed/" rel="self" type="application/rss+xml" />
	<link>https://bhattandjoshiassociates.com/tag/it-rules-2026/</link>
	<description>Best High Court Advocates &#38; Lawyers</description>
	<lastBuildDate>Wed, 15 Apr 2026 11:29:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://bhattandjoshiassociates.com/wp-content/uploads/2025/08/cropped-bhatt-and-joshi-associates-logo-32x32.png</url>
	<title>IT Rules 2026 Archives - Bhatt &amp; Joshi Associates</title>
	<link>https://bhattandjoshiassociates.com/tag/it-rules-2026/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Jurisprudence of Synthetic Reality: A Comprehensive Legal and Constitutional Analysis of India’s IT Amendment Rules 2026</title>
		<link>https://bhattandjoshiassociates.com/the-jurisprudence-of-synthetic-reality-a-comprehensive-legal-and-constitutional-analysis-of-indias-it-amendment-rules-2026/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 11:24:40 +0000</pubDate>
				<category><![CDATA[Cyber Law]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Law India]]></category>
		<category><![CDATA[AI Regulation]]></category>
		<category><![CDATA[Cyber Law India]]></category>
		<category><![CDATA[deepfake regulation india]]></category>
		<category><![CDATA[IT Rules 2026]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=32059</guid>

					<description><![CDATA[<p>Introduction: The Advent of Algorithmic Governance and the Crisis of Epistemic Trust The intersection of artificial intelligence, digital constitutionalism, and intermediary liability has reached a historic and precarious inflection point within the Republic of India. On 10 February 2026, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/the-jurisprudence-of-synthetic-reality-a-comprehensive-legal-and-constitutional-analysis-of-indias-it-amendment-rules-2026/">The Jurisprudence of Synthetic Reality: A Comprehensive Legal and Constitutional Analysis of India’s IT Amendment Rules 2026</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3 data-section-id="12db71r" data-start="613" data-end="704"><span role="text"><strong data-start="616" data-end="704">Introduction: The Advent of Algorithmic Governance and the Crisis of Epistemic Trust</strong></span></h3>
<p data-start="706" data-end="1388">The intersection of artificial intelligence, digital constitutionalism, and intermediary liability has reached a historic and precarious inflection point within the Republic of India. On 10 February 2026, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, through Gazette Notification G.S.R. 120(E), which came into force on 20 February 2026. This legislative intervention constitutes one of the most assertive and prescriptive regulatory frameworks globally aimed at governing synthetically generated information (SGI), commonly referred to as deepfakes.[3][5]</p>
<p data-start="1390" data-end="1988">This Amendment is not merely procedural in nature. It represents a structural recalibration of the socio-legal relationship between the State, digital intermediaries, and citizens—referred to in policy discourse as the Digital Nagrik. For over two decades, Indian intermediary liability jurisprudence has been anchored in the safe harbour framework under Section 79 of the Information Technology Act, 2000, which conceptualised platforms as passive conduits of information. However, the exponential rise of generative artificial intelligence has destabilised this model and exposed its limitations.</p>
<p data-start="1990" data-end="2748">The scale and velocity of synthetic media proliferation underscore the urgency of regulatory intervention. Industry estimates suggest that deepfake content has grown exponentially in recent years, with widespread implications for electoral integrity, financial fraud, and reputational harm. India, with over 900 million internet users, faces a particularly acute vulnerability to this epistemic crisis. Survey-based indicators suggest that a significant proportion of users have encountered synthetic content, often without recognising its artificial nature. Concurrently, deepfake-enabled financial fraud—especially in fintech and cryptocurrency sectors—has expanded dramatically, contributing to projected cybercrime losses exceeding ₹20,000 crore in 2025.</p>
<p data-start="2750" data-end="3212">Against this backdrop, the India’s IT Amendment Rules 2026 mark a decisive shift from a reactive, notice-based compliance framework to a proactive, technology-driven regulatory regime. This article argues that while the Amendment addresses genuine and escalating harms, it fundamentally transforms intermediary liability by imposing proactive algorithmic obligations, thereby raising significant constitutional concerns relating to free speech, privacy, and due process. [1] [2]</p>
<h3 data-section-id="1ar0wr1" data-start="3219" data-end="3298"><span role="text"><strong data-start="3222" data-end="3298">Deconstructing the Statutory Architecture of the IT Amendment Rules 2026</strong></span></h3>
<p data-start="3300" data-end="3686">At the core of the 2026 Amendment lies the formal statutory recognition of synthetic media. Prior to this development, Indian law lacked a precise and technologically informed definition of deepfakes, forcing reliance on traditional doctrines of forgery, impersonation, and misrepresentation. The introduction of Synthetically Generated Information (SGI) fills this jurisprudential gap.</p>
<p data-start="3688" data-end="4157">SGI is defined broadly as any audio, visual, or audio-visual information that is artificially created, generated, modified, or altered using computer resources, and is designed to appear real, authentic, or true. Crucially, the definition is anchored in the perception of authenticity rather than the underlying technological process. This ensures that the law remains adaptable to evolving forms of generative AI while focusing on the deceptive impact of such content.</p>
<p data-start="4159" data-end="4619">At the same time, the Amendment recognises the risk of regulatory overbreadth. It explicitly excludes routine and good-faith editing practices—such as formatting, colour correction, compression, transcription, and accessibility enhancements—provided these do not materially misrepresent the underlying content. This calibrated approach attempts to balance regulatory objectives with the need to preserve legitimate digital expression and technological utility.</p>
<h3 data-section-id="17bw14m" data-start="4626" data-end="4712"><span role="text"><strong data-start="4629" data-end="4712">Elevated Due Diligence: Mandatory Labelling, Metadata, and Technical Provenance</strong></span></h3>
<p data-start="4714" data-end="4981">A defining feature of the 2026 Amendment is the transformation of intermediaries from passive hosts into active technical gatekeepers. [3][4] The Rules mandate the deployment of “reasonable and appropriate technical measures” to identify, label, and trace synthetic content.</p>
<p data-start="4983" data-end="5353">Permitted SGI must be prominently labelled in a manner that is easily noticeable and comprehensible to users. In the case of audio content, disclosure must precede the substantive material, ensuring that listeners are aware of its synthetic nature from the outset. These labelling requirements aim to mitigate deception and enhance transparency in digital communication.</p>
<p data-start="5355" data-end="5841">In addition to visual or audio disclosures, the Rules introduce the concept of digital provenance through mandatory embedding of permanent metadata or equivalent identifiers. These identifiers are intended to trace the origin of synthetic content, thereby facilitating accountability and enforcement. Intermediaries are further prohibited from enabling the removal or alteration of such identifiers, ensuring the integrity of the provenance chain as content circulates across platforms.</p>
<p data-start="5843" data-end="6084">While these measures represent a significant advancement in traceability, they also raise practical concerns regarding technological feasibility, interoperability across platforms, and the potential for circumvention by sophisticated actors. [4]</p>
<h3 data-section-id="109ba1t" data-start="6091" data-end="6178"><span role="text"><strong data-start="6094" data-end="6178">The Heightened Quasi-Strict Liability of Significant Social Media Intermediaries</strong></span></h3>
<p data-start="6180" data-end="6482">The 2026 Amendment imposes its most stringent obligations on Significant Social Media Intermediaries (SSMIs), reflecting their scale and systemic influence. These entities are required to implement pre-publication mechanisms compelling users to declare whether their content is synthetically generated.</p>
<p data-start="6484" data-end="6778">However, the framework does not rely solely on user disclosures. Intermediaries must deploy automated detection tools to independently verify such declarations. Where discrepancies arise, platforms are obligated to override user inputs and enforce mandatory labelling and metadata requirements. [3][5]</p>
<p data-start="6780" data-end="7219">This dual-layer system—combining user declarations with algorithmic verification—effectively transforms SSMIs into real-time adjudicators of content authenticity. The shift introduces a quasi-strict liability regime in which failure to detect or act upon synthetic content may result in legal consequences. In operational terms, this places enormous reliance on algorithmic systems, raising questions about accuracy, bias, and scalability. [4][5].</p>
<h3 data-section-id="1ci9ajg" data-start="7226" data-end="7291"><span role="text"><strong data-start="7229" data-end="7291">The New Takedown Paradigm and the Collapse of Safe Harbour</strong></span></h3>
<p data-start="7293" data-end="7590">The most controversial and operationally disruptive aspect of the India’s IT Amendment Rules 2026 is the drastic compression of compliance timelines. The Rules fundamentally restructure grievance redressal and takedown obligations, imposing stringent deadlines that significantly depart from the earlier framework. [3] [4]</p>
<p data-start="7592" data-end="7655">A comparative analysis illustrates the magnitude of this shift:</p>
<div class="TyagGW_tableContainer">
<div class="group TyagGW_tableWrapper flex flex-col-reverse w-fit" tabindex="-1">
<table class="w-fit min-w-(--thread-content-width)" data-start="7657" data-end="8249">
<thead data-start="7657" data-end="7782">
<tr data-start="7657" data-end="7782">
<th class="" data-start="7657" data-end="7681" data-col-size="md"><strong data-start="7659" data-end="7680">Compliance Action</strong></th>
<th class="" data-start="7681" data-end="7717" data-col-size="sm"><strong data-start="7683" data-end="7716">Previous Timeline (2021/2022)</strong></th>
<th class="" data-start="7717" data-end="7753" data-col-size="sm"><strong data-start="7719" data-end="7752">New Timeline (2026 Amendment)</strong></th>
<th class="" data-start="7753" data-end="7782" data-col-size="sm"><strong data-start="7755" data-end="7780">Approximate Reduction</strong></th>
</tr>
</thead>
<tbody data-start="7907" data-end="8249">
<tr data-start="7907" data-end="7973">
<td data-start="7907" data-end="7944" data-col-size="md">Government / Court Takedown Orders</td>
<td data-col-size="sm" data-start="7944" data-end="7955">36 hours</td>
<td data-col-size="sm" data-start="7955" data-end="7965">3 hours</td>
<td data-col-size="sm" data-start="7965" data-end="7973">~92%</td>
</tr>
<tr data-start="7974" data-end="8058">
<td data-start="7974" data-end="8029" data-col-size="md">High-Risk Content (NCII, Deepfake Pornography, CSAM)</td>
<td data-col-size="sm" data-start="8029" data-end="8040">24 hours</td>
<td data-col-size="sm" data-start="8040" data-end="8050">2 hours</td>
<td data-col-size="sm" data-start="8050" data-end="8058">~92%</td>
</tr>
<tr data-start="8059" data-end="8132">
<td data-start="8059" data-end="8103" data-col-size="md">Grievance Resolution for Unlawful Content</td>
<td data-col-size="sm" data-start="8103" data-end="8114">72 hours</td>
<td data-col-size="sm" data-start="8114" data-end="8125">36 hours</td>
<td data-col-size="sm" data-start="8125" data-end="8132">50%</td>
</tr>
<tr data-start="8133" data-end="8196">
<td data-start="8133" data-end="8169" data-col-size="md">General User Grievance Resolution</td>
<td data-col-size="sm" data-start="8169" data-end="8179">15 days</td>
<td data-col-size="sm" data-start="8179" data-end="8188">7 days</td>
<td data-col-size="sm" data-start="8188" data-end="8196">~53%</td>
</tr>
<tr data-start="8197" data-end="8249">
<td data-start="8197" data-end="8220" data-col-size="md">GAC Order Compliance</td>
<td data-col-size="sm" data-start="8220" data-end="8231">24 hours</td>
<td data-col-size="sm" data-start="8231" data-end="8241">2 hours</td>
<td data-col-size="sm" data-start="8241" data-end="8249">~92%</td>
</tr>
</tbody>
</table>
</div>
</div>
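<p>The “Approximate Reduction” figures follow directly from the ratio of the new window to the old one. A minimal check, with the timelines expressed in hours, is set out below; the dictionary keys are simply shorthand for the rows of the table.</p>
<pre><code># Quick arithmetic check of the "Approximate Reduction" column (values in hours).
timelines = {
    "Government / court takedown orders": (36, 3),
    "High-risk content (NCII, deepfake pornography, CSAM)": (24, 2),
    "Grievance resolution for unlawful content": (72, 36),
    "General user grievance resolution": (15 * 24, 7 * 24),
    "GAC order compliance": (24, 2),
}
for action, (old, new) in timelines.items():
    reduction = (1 - new / old) * 100
    print(f"{action}: {old}h to {new}h, roughly {reduction:.0f}% shorter")
# 36h to 3h and 24h to 2h both work out to about 92%; 72h to 36h is 50%;
# 15 days to 7 days is about 53%.
</code></pre>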
<p data-start="8251" data-end="8542">The compression of compliance windows—particularly the 2-hour and 3-hour mandates—places an extraordinary burden on intermediaries. From an operational perspective, these timelines render meaningful human review nearly impossible, especially given the scale at which large platforms operate.</p>
<p data-start="8544" data-end="8890">As a result, intermediaries are structurally compelled to rely on automated moderation systems. This reliance is not incidental but effectively mandated by the architecture of the Rules. In practice, this creates a strong incentive for defensive over-compliance, where platforms preemptively remove or restrict content to minimise legal exposure.</p>
<p data-start="8892" data-end="9401">This transformation has profound implications for the safe harbour framework under Section 79 of the Information Technology Act, 2000. Traditionally, safe harbour functioned as a passive protection contingent upon due diligence and responsiveness to lawful orders. Under the 2026 Amendment, it is reconfigured as a conditional privilege dependent on proactive monitoring and enforcement. Failure to comply with these obligations may result in the loss of immunity, exposing intermediaries to direct liability.</p>
<h3 data-section-id="o2pf08" data-start="9408" data-end="9475"><span role="text"><strong data-start="9411" data-end="9475">Harmonising with Criminal Law and Data Protection Frameworks</strong></span></h3>
<p data-start="9477" data-end="9777">The IT Rules 2026 operate within a broader legal ecosystem that includes the Bharatiya Nyaya Sanhita (BNS) 2023 and the Digital Personal Data Protection (DPDP) Act 2023. This integration creates a multi-layered regulatory framework addressing both the creation and dissemination of synthetic content.[5]</p>
<p data-start="9779" data-end="10059">Under the BNS, deepfake-related activities may attract criminal liability for offences such as misinformation, impersonation, defamation, and obscenity. These provisions extend accountability beyond intermediaries to include creators and distributors of harmful synthetic content.</p>
<p data-start="10061" data-end="10469">Simultaneously, the DPDP Act introduces a consent-based regime governing the processing of personal data, including biometric identifiers such as facial and voice data. Given that generative AI systems often rely on such data, unauthorised use can result in substantial financial penalties. The combined effect is a comprehensive liability framework encompassing civil, criminal, and regulatory consequences. [5] [6]</p>
<h3 data-section-id="1y6gona" data-start="10476" data-end="10550"><span role="text"><strong data-start="10479" data-end="10550">Evidentiary Complexities under the Bharatiya Sakshya Adhiniyam 2023</strong></span></h3>
<p data-start="10552" data-end="10806">Despite the existence of robust substantive provisions, enforcement remains complicated by evidentiary challenges. The Bharatiya Sakshya Adhiniyam, 2023, which governs the admissibility of electronic evidence, requires reliable authentication mechanisms.</p>
<p data-start="10808" data-end="11146">However, the technical opacity of AI systems and the possibility of metadata manipulation complicate the establishment of authenticity and chain of custody. Courts may face significant difficulties in determining authorship, intent, and the reliability of synthetic content, particularly in the absence of specialised forensic frameworks. [5]</p>
<h3 data-section-id="1jpixrk" data-start="11153" data-end="11222"><span role="text"><strong data-start="11156" data-end="11222">Constitutional Scrutiny: Free Speech, Privacy, and Due Process</strong></span></h3>
<p data-start="187" data-end="848">From a constitutional perspective, the India’s IT Amendment Rules 2026 present a sharp duality: while they address serious digital harms, they also raise substantial concerns under Articles 14, 19(1)(a), and 21. The requirement of pre-publication disclosure and algorithmic verification effectively introduces a form of prior restraint, which is constitutionally suspect and risks transforming digital platforms into permission-based ecosystems. Additionally, vague standards such as content being “likely to deceive” create overbreadth, leading to inconsistent enforcement and incentivising platforms to over-censor, thereby producing a chilling effect on free speech. [4]</p>
<p data-start="850" data-end="1396" data-is-last-node="" data-is-only-node="">The framework also weakens established safeguards from <em data-start="905" data-end="946">Shreya Singhal v. Union of India (2015)</em> by compressing takedown timelines to such an extent that meaningful human or judicial review becomes impractical. This effectively shifts censorship decisions to intermediaries acting under legal pressure. Further, privacy concerns arise under Article 21, as provisions enabling disclosure of user identity without robust judicial oversight may expose individuals—especially journalists, whistleblowers, and dissenters—to harassment and retaliation. [5]</p>
<h3 data-section-id="1rh5r69" data-start="12100" data-end="12173"><span role="text"><strong data-start="12103" data-end="12173">The Institutional Crisis: Artificial Intelligence in the Judiciary</strong></span></h3>
<p data-start="54" data-end="497">Artificial intelligence has begun to directly impact judicial integrity in India. In <em data-start="139" data-end="183">Gummadi Usha Rani v. Sure Mallikarjuna Rao</em> (2026), the Supreme Court found that a trial court relied on completely non-existent judgments generated by an AI tool.[8] [9]  Despite the High Court only issuing a caution, the Supreme Court held that such reliance is misconduct, not mere error, and initiated steps to frame guidelines with Senior Advocate Shyam Divan.</p>
<p data-start="499" data-end="670">The Court had earlier also criticised lawyers for filing AI-generated pleadings citing fake cases like “Mercy vs Mankind,” highlighting growing misuse of AI in litigation.</p>
<p data-start="672" data-end="942">A similar issue arose in the Gujarat High Court in the <em data-start="727" data-end="753">Marhaba Overseas Pvt Ltd</em> case (2026), where a GST authority relied on fabricated and misattributed judgments. The Court termed this “flawed and deceptive” and warned against blind reliance on AI-generated content.</p>
<p data-start="944" data-end="1110" data-is-last-node="" data-is-only-node="">These incidents show that while India regulates deepfakes, the judiciary itself remains vulnerable, raising concerns about legal accuracy and institutional readiness.</p>
<h3 data-section-id="k1tl8t" data-start="12937" data-end="13020"><span role="text"><strong data-start="12940" data-end="13020">The Grievance Appellate Committee: Executive Oversight in Digital Governance</strong></span></h3>
<p data-start="88" data-end="489">The Grievance Appellate Committee (GAC), established under Rule 3A of the IT Rules, functions as a digital appellate body allowing users to challenge intermediary decisions such as content takedowns, account suspensions, or SGI labelling. Users can file appeals within 30 days, and the GAC aims to resolve them within a similar timeframe, with access streamlined through the NIC’s Parichay platform.</p>
<p data-start="491" data-end="813">With the India&#8217;s IT Amendment Rules 2026 introducing strict timelines and automated moderation, the GAC is expected to witness a surge in appeals arising from wrongful takedowns and algorithmic errors. Practical instances have shown its effectiveness—for example, restoring a YouTube channel after unjustified copyright strikes.</p>
<p data-start="815" data-end="1203" data-is-last-node="" data-is-only-node="">However, constitutional concerns persist. The GAC is an executive-controlled body, lacking judicial independence, and its orders must be complied with by intermediaries within extremely short timelines. While it offers a fast and accessible remedy, it also centralises significant content regulation power within the executive, raising concerns about due process and separation of powers. [10]</p>
<h3 data-section-id="1v97b8v" data-start="13504" data-end="13541"><span role="text"><strong data-start="13507" data-end="13541">Global Comparative Perspective</strong></span></h3>
<p data-start="59" data-end="584">Globally, AI regulation follows three distinct models. The <strong data-start="118" data-end="136">European Union</strong> adopts a risk-based approach, focusing on classification of AI systems and protection of fundamental rights, with limited reliance on rapid takedowns. <strong data-start="288" data-end="297">China</strong>, by contrast, enforces a strict, state-controlled regime requiring mandatory labelling, identity verification, and swift removal of deepfakes. The <strong data-start="445" data-end="462">United States</strong> follows a fragmented, state-driven model shaped by strong free speech protections, lacking a unified federal framework. [4] [5]</p>
<p data-start="586" data-end="837" data-is-last-node="" data-is-only-node=""> India’s IT Amendment Rules 2026 reflect a <strong data-start="618" data-end="634">hybrid model</strong>, combining rights-based principles with aggressive enforcement mechanisms such as strict takedown timelines and algorithmic monitoring, prioritising immediate harm prevention over procedural safeguards.</p>
<h3 data-section-id="1jttwt6" data-start="13917" data-end="13971"><span role="text"><strong data-start="13920" data-end="13971">Conclusion: The Future of Digital Jurisprudence</strong></span></h3>
<p data-start="13973" data-end="14195">The India&#8217;s  IT Amendment Rules 2026 represent a pivotal moment in India’s digital legal landscape. They respond to genuine harms posed by synthetic media and introduce mechanisms aimed at enhancing accountability and transparency.</p>
<p data-start="14197" data-end="14427">At the same time, they significantly alter the balance between regulation and fundamental rights. The compression of timelines, reliance on automated moderation, and expansion of intermediary obligations create risks of overreach.</p>
<p data-start="14429" data-end="14739">The long-term success of the framework will depend on its implementation and judicial interpretation. A balanced approach—grounded in constitutional principles and technological realism—will be essential to ensure that the regulation of synthetic media does not undermine the very freedoms it seeks to protect.</p>
<h3 data-section-id="180a07d" data-start="68" data-end="88"><span role="text"><strong data-start="70" data-end="88">Key References</strong></span></h3>
<ol data-start="90" data-end="1515">
<li data-section-id="11gn6tr" data-start="90" data-end="232">Official Notification – IT Amendment Rules 2026<br data-start="140" data-end="143" /><a class="decorated-link" href="https://www.meity.gov.in/static/uploads/2026/02/550681ab908f8afb135b0ad42816a1c9.pdf" target="_new" rel="noopener" data-start="146" data-end="230">https://www.meity.gov.in/static/uploads/2026/02/550681ab908f8afb135b0ad42816a1c9.pdf</a></li>
<li data-section-id="1gqj46r" data-start="234" data-end="350">MeitY FAQ on IT Rules<br data-start="258" data-end="261" /><a class="decorated-link" href="https://www.meity.gov.in/static/uploads/2025/10/065b6deb585441b5ccdf8be42502a49c.pdf" target="_new" rel="noopener" data-start="264" data-end="348">https://www.meity.gov.in/static/uploads/2025/10/065b6deb585441b5ccdf8be42502a49c.pdf</a></li>
<li data-section-id="4l5dg0" data-start="352" data-end="466">LiveLaw – Deepfake Rules Explained<br data-start="389" data-end="392" /><a class="decorated-link" href="https://www.livelaw.in/articles/ai-generated-content-deepfakes-524064" target="_new" rel="noopener" data-start="395" data-end="464">https://www.livelaw.in/articles/ai-generated-content-deepfakes-524064</a></li>
<li data-section-id="ys58l2" data-start="468" data-end="596">Khaitan &amp; Co. – Legal Analysis<br data-start="501" data-end="504" /><a class="decorated-link" href="https://www.khaitanco.com/thought-leadership/MeitY-notifies-the-IT-Amendment-Rules-2026" target="_new" rel="noopener" data-start="507" data-end="594">https://www.khaitanco.com/thought-leadership/MeitY-notifies-the-IT-Amendment-Rules-2026</a></li>
<li data-section-id="115qg5w" data-start="598" data-end="830">Nishith Desai – AI &amp; Deepfake Regulation<br data-start="641" data-end="644" /><a class="decorated-link" href="https://www.nishithdesai.com/research-and-articles/hotline/technology-law-analysis/ai-generated-content-and-combating-deepfakes-what-indias-new-rules-mean-for-global-platforms-15532" target="_new" rel="noopener" data-start="647" data-end="828">https://www.nishithdesai.com/research-and-articles/hotline/technology-law-analysis/ai-generated-content-and-combating-deepfakes-what-indias-new-rules-mean-for-global-platforms-15532</a></li>
<li data-section-id="18cf8mo" data-start="832" data-end="982">ORF – Deepfake Financial Cybercrime<br data-start="870" data-end="873" /><a class="decorated-link" href="https://www.orfonline.org/expert-speak/deepfakes-and-financial-cybercrime-india-s-multi-layered-response" target="_new" rel="noopener" data-start="876" data-end="980">https://www.orfonline.org/expert-speak/deepfakes-and-financial-cybercrime-india-s-multi-layered-response</a></li>
<li data-section-id="192bd1p" data-start="984" data-end="1078">Deepfake Statistics (DeepStrike)<br data-start="1019" data-end="1022" /><a class="decorated-link" href="https://deepstrike.io/blog/deepfake-statistics-2025" target="_new" rel="noopener" data-start="1025" data-end="1076">https://deepstrike.io/blog/deepfake-statistics-2025</a></li>
<li data-section-id="2cbzpb" data-start="1080" data-end="1262">Indian Express – AI Hallucination in Courts<br data-start="1126" data-end="1129" /><a class="decorated-link" href="https://indianexpress.com/article/legal-news/ai-hallucination-again-in-a-court-order-sc-talks-of-institutional-concern-10561833/" target="_new" rel="noopener" data-start="1132" data-end="1260">https://indianexpress.com/article/legal-news/ai-hallucination-again-in-a-court-order-sc-talks-of-institutional-concern-10561833/</a></li>
<li data-section-id="hji5pi" data-start="1264" data-end="1460">The Hindu – Supreme Court AI Fake Judgments<br data-start="1310" data-end="1313" /><a class="decorated-link" href="https://www.thehindu.com/news/national/supreme-court-takes-cognisance-of-trial-court-relying-on-ai-generated-fake-verdicts/article70694926.ece" target="_new" rel="noopener" data-start="1316" data-end="1458">https://www.thehindu.com/news/national/supreme-court-takes-cognisance-of-trial-court-relying-on-ai-generated-fake-verdicts/article70694926.ece</a></li>
<li data-section-id="10q4hdf" data-start="1462" data-end="1515">GAC Portal (Official)<br data-start="1487" data-end="1490" /><a class="decorated-link" href="https://gac.gov.in/" target="_new" rel="noopener" data-start="1494" data-end="1513">https://gac.gov.in/</a></li>
</ol>
<p>The post <a href="https://bhattandjoshiassociates.com/the-jurisprudence-of-synthetic-reality-a-comprehensive-legal-and-constitutional-analysis-of-indias-it-amendment-rules-2026/">The Jurisprudence of Synthetic Reality: A Comprehensive Legal and Constitutional Analysis of India’s IT Amendment Rules 2026</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</title>
		<link>https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/</link>
		
		<dc:creator><![CDATA[Aaditya Bhatt]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 12:34:08 +0000</pubDate>
				<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI Regulation India]]></category>
		<category><![CDATA[Algorithmic Liability]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Intermediary Guidelines]]></category>
		<category><![CDATA[Intermediary Liability]]></category>
		<category><![CDATA[IT Act 2000]]></category>
		<category><![CDATA[IT Rules 2026]]></category>
		<category><![CDATA[Safe Harbour]]></category>
		<category><![CDATA[Section 79]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=31818</guid>

					<description><![CDATA[<p>Introduction The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, Section 79 has functioned as the backbone of India&#8217;s internet economy, shielding platforms from secondary liability for third-party content. The provision was drafted at a time when the internet was imagined as a passive pipe: a conduit through which users sent and received information. Algorithms of the generative and recommending kind that now define digital experience were simply not contemplated [1].</span></p>
<p><span style="font-weight: 400;">Today, platforms such as YouTube, Instagram, and AI-native services like Grok do not simply host content. Their algorithms curate, amplify, personalise, and in the case of generative AI, actively produce it. This makes the question far from academic: if an algorithm is found to be an active participant in content creation or curation, the platform deploying it may lose its statutory shield entirely. The Ministry of Electronics and Information Technology (MeitY) has, through a series of advisories in 2023 and 2024, begun to signal precisely this shift — that AI is not simply content hosted on a platform, but content shaped and generated by it [2].</span></p>
<h2><b>The Architecture of Section 79 of the IT Act: What the Provision Actually Says</b></h2>
<p><span style="font-weight: 400;">Section 79 of the Information Technology Act, 2000, provides in its operative part: </span><i><span style="font-weight: 400;">&#8220;Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.&#8221;</span></i><span style="font-weight: 400;"> This immunity is not unconditional. Sub-section (2) requires that the intermediary must not have initiated the transmission, must not have selected the receiver, and must not have selected or modified the information contained in the transmission. It must also observe due diligence and comply with the guidelines prescribed by the Central Government.</span></p>
<p><span style="font-weight: 400;">Sub-section (3) withdraws the protection in two scenarios: first, where the intermediary has conspired with, abetted, aided, or induced the commission of an unlawful act; and second, where the intermediary, upon receiving &#8220;actual knowledge&#8221; that unlawful content is being hosted on its platform, fails to expeditiously remove or disable access to that material. The term &#8220;intermediary&#8221; is defined under Section 2(1)(w) of the IT Act as </span><i><span style="font-weight: 400;">&#8220;any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record,&#8221;</span></i><span style="font-weight: 400;"> and expressly includes telecom service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces, and cyber cafes [1].</span></p>
<p><span style="font-weight: 400;">The structure of this provision assumes a fundamental premise: that the intermediary is a passive actor. Its immunity is premised on its not having shaped the content in question. The moment it crosses into active participation — selecting, modifying, inducing — the statutory protection falls away. The rise of AI platforms tests every element of this assumption.</span></p>
<h2><b>Shreya Singhal v. Union of India (2015): The Constitutional Baseline</b></h2>
<p><span style="font-weight: 400;">No discussion of Section 79 of the IT Act is complete without a reckoning with the Supreme Court&#8217;s landmark judgment in </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, delivered on 24 March 2015 by a bench of Justices J. Chelameswar and R.F. Nariman. The case arose from a batch of writ petitions under Article 32 of the Constitution of India, principally challenging the constitutionality of Sections 66A, 69A, and 79 of the IT Act. The Supreme Court&#8217;s treatment of Section 79 fundamentally reshaped the intermediary liability regime in India [3].</span></p>
<p><span style="font-weight: 400;">The Court read down Section 79(3)(b) to narrow its scope significantly. The holding was unambiguous:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;Section 79 is valid subject to Section 79(3)(b) being read down to mean that an intermediary upon receiving actual knowledge from a court order or on being notified by the appropriate Government or its agency that unlawful acts relatable to Article 19(2) are going to be committed then fails to expeditiously remove or disable access to such material.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">In practical terms, the Court held that intermediaries are not required to act upon private takedown requests. &#8220;Actual knowledge,&#8221; as used in Section 79(3)(b), was interpreted to mean knowledge received through the medium of a court order — not a complaint from a private party. This interpretation rested on a practical foundation: holding intermediaries like Google and Facebook to a standard of responding to every private complaint would make it impossible for them to function, since millions of requests are received and an intermediary cannot be expected to adjudicate the legality of each piece of content on its own. The Court further affirmed that there is no positive obligation on intermediaries to monitor content on their platforms [3]. This no-monitoring principle remains foundational to India&#8217;s safe harbour regime under Section 79 of the IT Act, even as AI regulation begins to chip away at it.</span></p>
<h2><b>Active vs. Passive Intermediaries: The Christian Louboutin Standard</b></h2>
<p><span style="font-weight: 400;">The passive/active distinction now central to the AI liability debate was crystallised in Indian jurisprudence by the Delhi High Court in </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, decided on 2 November 2018 by Justice Prathiba M. Singh. The case involved the luxury shoe brand&#8217;s claim against darveys.com, an e-commerce platform that used the plaintiff&#8217;s trademarks as meta-tags and claimed to sell authentic goods sourced from authorised stores [4].</span></p>
<p><span style="font-weight: 400;">The defendant&#8217;s principal defence was that it was a mere intermediary under Section 79 of the IT Act. Justice Singh rejected this defence and, in doing so, laid down a twenty-six point framework to determine whether an online platform is a passive conduit or an active participant. The court reasoned that so long as a platform acts as &#8220;mere conduit or passive transmitters of the records or of the information, they continue to be intermediaries, but merely calling themselves as intermediaries does not qualify all e-commerce platforms or online market places as one.&#8221; The court then held:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;When an e-commerce website is involved in or conducts its business in such a manner, which would see the presence of a large number of elements enumerated above, it could be said to cross the line from being an intermediary to an active participant.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">By curating product listings, arranging logistics, using meta-tags, and guaranteeing authenticity, darveys.com had exceeded the role of a neutral conduit. The court also held that failure to observe due diligence with respect to intellectual property rights could amount to &#8220;conspiring, aiding, abetting, or inducing&#8221; unlawful conduct under Section 79(3)(a), independently disentitling the platform from safe harbour [4].</span></p>
<p><span style="font-weight: 400;">This framework applies with full force to AI platforms. When a recommendation algorithm selects which content a user sees, or when a generative AI model produces text or video in response to a user prompt, the question of whether these functions constitute &#8220;selection&#8221; or &#8220;modification&#8221; of information within the language of Section 79(2)(b) becomes the defining legal inquiry. The </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> standard supplies the doctrinal tool; generative AI supplies the stress test.</span></p>
<h2><b>IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Expanding the Compliance Perimeter</b></h2>
<p><span style="font-weight: 400;">The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on 25 February 2021 under Section 87 read with Section 79 of the IT Act, represent the most significant regulatory expansion of intermediary obligations since the original 2011 Guidelines. Rule 7 makes explicit that an intermediary which fails to comply with prescribed due diligence requirements shall no longer be entitled to safe harbour under Section 79(1) of the IT Act and shall be liable under applicable laws [1].</span></p>
<p><span style="font-weight: 400;">The 2021 Rules introduced the classification of &#8220;significant social media intermediaries&#8221; (SSMIs) — social media intermediaries with more than fifty lakh (five million) registered users in India. SSMIs bear substantially heavier obligations: they must appoint a Chief Compliance Officer, a Grievance Redressal Officer, and a Nodal Contact Person, all resident in India. Rule 4(2) requires SSMIs that primarily provide messaging services to enable identification of the &#8220;first originator&#8221; of information where directed by a court or competent authority under Section 69 of the IT Act.</span></p>
<p><span style="font-weight: 400;">For AI platforms, the most consequential provision is Rule 3(1)(b), which requires intermediaries to &#8220;make reasonable efforts by itself, and to cause the users of its computer resource&#8221; not to publish certain categories of prohibited content. This language has been interpreted as potentially imposing a preventive obligation — not merely reactive removal — that moves the compliance standard toward something approaching a monitoring duty. If AI systems deployed on a platform generate or amplify prohibited content, the question of whether the platform made &#8220;reasonable efforts&#8221; to prevent this, independently of any user action, becomes immediately live [2].</span></p>
<h2><b>MeitY&#8217;s AI Advisories: The Regulatory Turn</b></h2>
<p><span style="font-weight: 400;">India&#8217;s formal attempt to address AI within the intermediary liability framework began in November 2023 and crystallised through MeitY advisories issued in early 2024. The March 15, 2024 Advisory — which replaced the March 1, 2024 Advisory — directed intermediaries to ensure that the use of &#8220;AI models, large language models, generative AI technology, software or algorithms&#8221; on or through their platforms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any content in violation of the Intermediary Guidelines or any other law in force [2].</span></p>
<p><span style="font-weight: 400;">The advisory&#8217;s significance lies in its implicit treatment of AI not as content but as a potentially liable actor within the intermediary ecosystem. By requiring platforms to ensure that AI models deployed on them do not enable unlawful conduct, MeitY effectively placed the responsibility for AI-generated harm squarely on the platform. A platform that deploys a generative AI model which produces deepfake content, defamatory material, or content that undermines democratic processes cannot credibly claim it was merely hosting third-party information — because the AI is not a third party in any conventional sense. It is the platform&#8217;s own deployed technology [2].</span></p>
<p><span style="font-weight: 400;">The advisories also addressed deepfakes specifically, reflecting the 2023 Rashmika Mandanna incident, where AI-generated synthetic video caused significant public and political concern. That episode illustrated how AI-generated content can cause reputational harm at a scale and speed that outpaces any traditional notice-and-takedown mechanism, and demonstrated to MeitY that the existing framework needed explicit AI-specific obligations [5].</span></p>
<h2><b>IT (Intermediary Guidelines) Amendment Rules, 2026: Formalising AI Liability</b></h2>
<p><span style="font-weight: 400;">The most direct regulatory intervention to date is the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 20 February 2026. These rules, for the first time, introduce a statutory definition of &#8220;synthetically generated information&#8221; (SGI), described as any content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears authentic. This definition is intentionally broad, capturing the full range of AI-generated content including deepfakes, synthetic audio-visual material, and algorithmically altered images [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules impose mandatory labelling obligations on intermediaries that facilitate the creation of SGI. Visual content must carry a clear and permanent metadata identifier covering at least ten percent of the display area; audio content must contain an audible disclosure during at least ten percent of its duration. These labels cannot be removed, modified, or suppressed by users. The rules also dramatically reduce takedown timelines: unlawful or prohibited AI-generated content must be removed or disabled within three hours of receiving a lawful notice [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules expressly clarify that intermediaries acting in good faith and in compliance with these obligations will continue to enjoy safe harbour protection under Section 79 of the IT Act. Conversely, failure to comply — failure to label, delay in takedown, or inadequate grievance handling — may result in the loss of that protection. Safe harbour is thereby transformed from a passive shield into a compliance-contingent privilege. The standard is no longer merely reactive: an intermediary must demonstrate system-level preparedness to deal with AI-generated risks proactively, not merely respond to them after harm has occurred [5].</span></p>
<h2><b>The Grok Question: When AI Is the Platform</b></h2>
<p><span style="font-weight: 400;">The most pointed articulation of the AI-as-creator problem in Indian regulatory discourse concerns the deployment of Grok, an AI model integrated into X (formerly Twitter). The Indian government has argued — publicly, if not yet conclusively in litigation — that X&#8217;s deployment of Grok effectively makes it a creator of content, not merely a host. If Grok generates content in response to user prompts, X cannot claim to be a neutral intermediary whose only role is the passive transmission of third-party information. On this view, Section 79&#8217;s safe harbour would not apply, because the platform itself is the origin point of at least some of the content on it [6].</span></p>
<p><span style="font-weight: 400;">This is the active/passive distinction from </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> transposed directly onto generative AI. The legal framework as it currently stands does not offer a clean answer. The definition of intermediary in Section 2(1)(w) refers to a person who &#8220;receives, stores or transmits&#8221; electronic records or &#8220;provides any service with respect to that record.&#8221; A generative AI model arguably does none of these things in the traditional sense — it creates records rather than receiving or transmitting them [1][6].</span></p>
<p><span style="font-weight: 400;">Researchers at the Carnegie Endowment have observed that existing definitions under the IT Act, when applied to AI systems, are &#8220;being stretched too thin&#8221; and that &#8220;generative AI systems may not fall neatly within the purview of either publisher or intermediary&#8221; under the current statutory framework [7]. This definitional gap is precisely why the 2026 Amendment Rules and the anticipated Digital India Act are significant: they represent attempts to fill a statutory vacuum that the original IT Act, drafted in 2000, could not have anticipated.</span></p>
<h2><b>MySpace Inc. v. Super Cassettes Industries Ltd.: The No-Monitoring Principle and Its Limits</b></h2>
<p><span style="font-weight: 400;">The no-monitoring principle affirmed in </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> was reaffirmed by a Division Bench of the Delhi High Court in </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478. The court held that intermediaries are not under any positive obligation to proactively monitor content on their platforms for copyright infringement, and that &#8220;actual knowledge&#8221; must be in the form of a court order — not constructive or inferred knowledge. The court expressly rejected the argument that a platform&#8217;s technical ability to detect infringing content was equivalent to legal knowledge sufficient to impose liability [8].</span></p>
<p><span style="font-weight: 400;">This principle sits uneasily alongside the 2026 Rules&#8217; mandatory labelling and three-hour takedown obligations for AI-generated content. If a platform deploys an AI model that generates content, and that content turns out to be unlawful, the platform&#8217;s argument that it had no &#8220;actual knowledge&#8221; of the specific unlawfulness is considerably weakened — because the AI is the platform&#8217;s own system. The content did not arrive from an unknown third-party originator; it was produced by the platform&#8217;s own technology. The no-monitoring principle was premised on the practical impossibility of reviewing every piece of user-generated content. That impossibility argument does not translate cleanly to AI-generated content, which the platform&#8217;s own systems produced and could, in principle, have been designed to screen from the outset [8].</span></p>
<h2><b>X Corp. v. Union of India: Section 79(3)(b) and the Live Battleground of Safe Harbour</b></h2>
<p><span style="font-weight: 400;">The question of how Section 79(3)(b) interacts with AI-generated content is being contested in live litigation before the Karnataka High Court in </span><i><span style="font-weight: 400;">X Corp. v. Union of India</span></i><span style="font-weight: 400;">, a writ petition filed on 5 March 2025 before Justice M. Nagaprasanna. X Corp. challenges the legality of information-blocking orders issued by various government ministries under Section 79(3)(b), following a MeitY Office Memorandum of 31 October 2023 that authorised all central ministries, state governments, and local police officers to issue content blocking orders through the Sahyog portal [9].</span></p>
<p><span style="font-weight: 400;">X&#8217;s core argument, drawing expressly on </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;">, is that Section 79(3)(b) cannot function as an independent mechanism for content blocking. Content blocking, X submits, can only occur through the constitutionally safeguarded process under Section 69A of the IT Act, which requires reasoned orders and procedural safeguards. By contrast, Section 79(3)(b) merely describes the circumstances in which safe harbour is lost — it does not independently confer blocking power on the executive [9]. For AI platforms, the implications are significant: if informal government notices under Section 79(3)(b) are sufficient to trigger takedown obligations for AI-generated content, platforms will face executive pressure to remove such content without judicial oversight, fundamentally altering the architecture of safe harbour from an immunity into a tool of executive content governance.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">Section 79 of the IT Act was not written for the age of algorithms. Its passive-intermediary model, refined through case law from </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">MySpace</span></i><span style="font-weight: 400;">, assumes a clean separation between the platform and the content it hosts. Generative AI destroys that separation. When an algorithm recommends, curates, or creates content, the platform is no longer merely a conduit — it is a participant. Whether courts will treat that participation as sufficient to strip safe harbour protection depends on how the active/passive distinction is applied to algorithmic conduct. MeitY&#8217;s 2026 Amendment Rules have begun to answer this question legislatively, by conditioning safe harbour on demonstrated compliance with AI-specific obligations, mandatory labelling, and accelerated takedown timelines. The answer, in short, is that an algorithm can be treated as part of the intermediary for regulatory purposes — but the intermediary that deploys it cannot hide behind Section 79 when the algorithm itself is the source of the harm.</span></p>
<h2><b>References</b></h2>
<p><span style="font-weight: 400;">[1] Information Technology Act, 2000, Sections 2(1)(w) and 79, Ministry of Electronics and Information Technology, Government of India. Available at:</span><a href="https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105"> <span style="font-weight: 400;">https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105</span></a></p>
<p><span style="font-weight: 400;">[2] S&amp;R Associates, &#8220;Investing in AI in India (Part 3): AI-related Advisories Under the Intermediary Guidelines,&#8221; October 2024. Available at:</span><a href="https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/"> <span style="font-weight: 400;">https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/</span></a></p>
<p><span style="font-weight: 400;">[3] </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, Supreme Court of India, 24 March 2015. Full judgment available at:</span><a href="https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf"> <span style="font-weight: 400;">https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf</span></a></p>
<p><span style="font-weight: 400;">[4] </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, Delhi High Court, 2 November 2018. Available at:</span><a href="https://indiankanoon.org/doc/99622088/"> <span style="font-weight: 400;">https://indiankanoon.org/doc/99622088/</span></a></p>
<p><span style="font-weight: 400;">[5] TBA Law, &#8220;India&#8217;s IT Intermediary Rules 2026 Amendment on AI-Generated Content: A Legal Analysis,&#8221; 2026. Available at:</span><a href="https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis"> <span style="font-weight: 400;">https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis</span></a></p>
<p><span style="font-weight: 400;">[6] IAS Gyan, &#8220;Grok Case Raises Questions of AI Governance,&#8221; 2024. Available at:</span><a href="https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance"> <span style="font-weight: 400;">https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance</span></a></p>
<p><span style="font-weight: 400;">[7] Carnegie Endowment for International Peace, &#8220;India&#8217;s Advance on AI Regulation,&#8221; November 2024. Available at:</span><a href="https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en"> <span style="font-weight: 400;">https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en</span></a></p>
<p><span style="font-weight: 400;">[8] Bar and Bench, &#8220;Generative AI and Intermediary Liability Under the Information Technology Act&#8221; (discussing </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478). Available at:</span><a href="https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act"> <span style="font-weight: 400;">https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act</span></a></p>
<p><span style="font-weight: 400;">[9] SC Observer, &#8220;X Relies on &#8216;Shreya Singhal&#8217; in Arbitrary Content-Blocking Case in Karnataka HC,&#8221; July 2025. Available at:</span><a href="https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/"> <span style="font-weight: 400;">https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/</span></a></p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
