<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI and Law Archives - Bhatt &amp; Joshi Associates</title>
	<atom:link href="https://bhattandjoshiassociates.com/tag/ai-and-law/feed/" rel="self" type="application/rss+xml" />
	<link>https://bhattandjoshiassociates.com/tag/ai-and-law/</link>
	<description>Best High Court Advocates &#38; Lawyers</description>
	<lastBuildDate>Fri, 20 Feb 2026 12:55:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.3</generator>

<image>
	<url>https://bhattandjoshiassociates.com/wp-content/uploads/2025/08/cropped-bhatt-and-joshi-associates-logo-32x32.png</url>
	<title>AI and Law Archives - Bhatt &amp; Joshi Associates</title>
	<link>https://bhattandjoshiassociates.com/tag/ai-and-law/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</title>
		<link>https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/</link>
		
		<dc:creator><![CDATA[Aaditya Bhatt]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 12:34:08 +0000</pubDate>
				<category><![CDATA[Information Technology]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI Regulation India]]></category>
		<category><![CDATA[Algorithmic Liability]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Intermediary Guidelines]]></category>
		<category><![CDATA[Intermediary Liability]]></category>
		<category><![CDATA[IT Act 2000]]></category>
		<category><![CDATA[IT Rules 2026]]></category>
		<category><![CDATA[Safe Harbour]]></category>
		<category><![CDATA[Section 79]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=31818</guid>

					<description><![CDATA[<p>Introduction The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">The question of whether an artificial intelligence platform can qualify as an &#8220;intermediary&#8221; under Indian law — and thereby claim the protection of safe harbour under Section 79 of the Information Technology Act, 2000 — is one of the most pressing and underexamined questions in Indian technology law today. For more than two decades, Section 79 has functioned as the backbone of India&#8217;s internet economy, shielding platforms from secondary liability for third-party content. The provision was drafted at a time when the internet was imagined as a passive pipe: a conduit through which users sent and received information. The generative and recommendation algorithms that now define the digital experience were simply not contemplated [1].</span></p>
<p><span style="font-weight: 400;">Today, platforms such as YouTube, Instagram, and AI-native services like Grok do not simply host content. Their algorithms curate, amplify, personalise, and, in the case of generative AI, actively produce it. This makes the question far from academic: if an algorithm is found to be an active participant in content creation or curation, the platform deploying it may lose its statutory shield entirely. The Ministry of Electronics and Information Technology (MeitY) has, through a series of advisories in 2023 and 2024, begun to signal precisely this shift — that AI is not simply content hosted on a platform, but content shaped and generated by it [2].</span></p>
<h2><b>The Architecture of Section 79 of the IT Act: What the Provision Actually Says</b></h2>
<p><span style="font-weight: 400;">Section 79 of the Information Technology Act, 2000, provides in its operative part: </span><i><span style="font-weight: 400;">&#8220;Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.&#8221;</span></i><span style="font-weight: 400;"> This immunity is not unconditional. Sub-section (2) requires that the intermediary must not have initiated the transmission, must not have selected the receiver, and must not have selected or modified the information contained in the transmission. It must also observe due diligence and comply with the guidelines prescribed by the Central Government.</span></p>
<p><span style="font-weight: 400;">Sub-section (3) withdraws the protection in two scenarios: first, where the intermediary has conspired with, abetted, aided, or induced the commission of an unlawful act; and second, where the intermediary, upon receiving &#8220;actual knowledge&#8221; that unlawful content is being hosted on its platform, fails to expeditiously remove or disable access to that material. The term &#8220;intermediary&#8221; is defined under Section 2(1)(w) of the IT Act as </span><i><span style="font-weight: 400;">&#8220;any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record,&#8221;</span></i><span style="font-weight: 400;"> and expressly includes telecom service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces, and cyber cafes [1].</span></p>
<p><span style="font-weight: 400;">The structure of this provision assumes a fundamental premise: that the intermediary is a passive actor. Its immunity is premised on its not having shaped the content in question. The moment it crosses into active participation — selecting, modifying, inducing — the statutory protection falls away. The rise of AI platforms tests every element of this assumption.</span></p>
<h2><b>Shreya Singhal v. Union of India (2015): The Constitutional Baseline</b></h2>
<p><span style="font-weight: 400;">No discussion of Section 79 of the IT Act is complete without a reckoning with the Supreme Court&#8217;s landmark judgment in </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, delivered on 24 March 2015 by a bench of Justices J. Chelameswar and R.F. Nariman. The case arose from a batch of writ petitions under Article 32 of the Constitution of India, principally challenging the constitutionality of Sections 66A, 69A, and 79 of the IT Act. The Supreme Court&#8217;s treatment of Section 79 fundamentally reshaped the intermediary liability regime in India [3].</span></p>
<p><span style="font-weight: 400;">The Court read down Section 79(3)(b) to narrow its scope significantly. The holding was unambiguous:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;Section 79 is valid subject to Section 79(3)(b) being read down to mean that an intermediary upon receiving actual knowledge from a court order or on being notified by the appropriate Government or its agency that unlawful acts relatable to Article 19(2) are going to be committed then fails to expeditiously remove or disable access to such material.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">In practical terms, the Court held that intermediaries are not required to act upon private takedown requests. &#8220;Actual knowledge,&#8221; as used in Section 79(3)(b), was interpreted to mean knowledge received through the medium of a court order — not a complaint from a private party. This interpretation rested on a practical foundation: holding intermediaries like Google and Facebook to a standard of responding to every private complaint would make it impossible for them to function, since millions of requests are received and an intermediary cannot be expected to adjudicate the legality of each piece of content on its own. The Court further affirmed that there is no positive obligation on intermediaries to monitor content on their platforms [3]. This no-monitoring principle remains foundational to India&#8217;s safe harbour regime under Section 79 of the IT Act, even as AI regulation begins to chip away at it.</span></p>
<h2><b>Active vs. Passive Intermediaries: The Christian Louboutin Standard</b></h2>
<p><span style="font-weight: 400;">The passive/active distinction now central to the AI liability debate was crystallised in Indian jurisprudence by the Delhi High Court in </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, decided on 2 November 2018 by Justice Prathiba M. Singh. The case involved the luxury shoe brand&#8217;s claim against darveys.com, an e-commerce platform that used the plaintiff&#8217;s trademarks as meta-tags and claimed to sell authentic goods sourced from authorised stores [4].</span></p>
<p><span style="font-weight: 400;">The defendant&#8217;s principal defence was that it was a mere intermediary under Section 79 of the IT Act. Justice Singh rejected this defence and, in doing so, laid down a twenty-six point framework to determine whether an online platform is a passive conduit or an active participant. The court reasoned that so long as a platform acts as &#8220;mere conduit or passive transmitters of the records or of the information, they continue to be intermediaries, but merely calling themselves as intermediaries does not qualify all e-commerce platforms or online market places as one.&#8221; The court then held:</span></p>
<blockquote><p><i><span style="font-weight: 400;">&#8220;When an e-commerce website is involved in or conducts its business in such a manner, which would see the presence of a large number of elements enumerated above, it could be said to cross the line from being an intermediary to an active participant.&#8221;</span></i></p></blockquote>
<p><span style="font-weight: 400;">By curating product listings, arranging logistics, using meta-tags, and guaranteeing authenticity, darveys.com had exceeded the role of a neutral conduit. The court also held that failure to observe due diligence with respect to intellectual property rights could amount to &#8220;conspiring, aiding, abetting, or inducing&#8221; unlawful conduct under Section 79(3)(a), independently disentitling the platform from safe harbour [4].</span></p>
<p><span style="font-weight: 400;">This framework applies with full force to AI platforms. When a recommendation algorithm selects which content a user sees, or when a generative AI model produces text or video in response to a user prompt, the question of whether these functions constitute &#8220;selection&#8221; or &#8220;modification&#8221; of information within the language of Section 79(2)(b) becomes the defining legal inquiry. The </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> standard supplies the doctrinal tool; generative AI supplies the stress test.</span></p>
<h2><b>IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Expanding the Compliance Perimeter</b></h2>
<p><span style="font-weight: 400;">The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on 25 February 2021 under Section 87 read with Section 79 of the IT Act, represent the most significant regulatory expansion of intermediary obligations since the original 2011 Guidelines. Rule 7 makes explicit that an intermediary which fails to comply with prescribed due diligence requirements shall no longer be entitled to safe harbour under Section 79(1) of the IT Act and shall be liable under applicable laws [1].</span></p>
<p><span style="font-weight: 400;">The 2021 Rules introduced the classification of &#8220;significant social media intermediaries&#8221; (SSMIs) — social media intermediaries with more than fifty lakh (five million) registered users in India. SSMIs bear substantially heavier obligations: they must appoint a Chief Compliance Officer, a Grievance Redressal Officer, and a Nodal Contact Person, all resident in India. Rule 4(2) requires SSMIs that primarily provide messaging services to enable identification of the &#8220;first originator&#8221; of information where directed by a court or competent authority under Section 69 of the IT Act.</span></p>
<p><span style="font-weight: 400;">For AI platforms, the most consequential provision is Rule 3(1)(b), which requires intermediaries to &#8220;make reasonable efforts by itself, and to cause the users of its computer resource&#8221; not to publish certain categories of prohibited content. This language has been interpreted as potentially imposing a preventive obligation — not merely reactive removal — that moves the compliance standard toward something approaching a monitoring duty. If AI systems deployed on a platform generate or amplify prohibited content, the question of whether the platform made &#8220;reasonable efforts&#8221; to prevent this, independently of any user action, becomes immediately live [2].</span></p>
<h2><b>MeitY&#8217;s AI Advisories: The Regulatory Turn</b></h2>
<p><span style="font-weight: 400;">India&#8217;s formal attempt to address AI within the intermediary liability framework began in November 2023 and crystallised through MeitY advisories issued in early 2024. The March 15, 2024 Advisory — which replaced the March 1, 2024 Advisory — directed intermediaries to ensure that the use of &#8220;AI models, large language models, generative AI technology, software or algorithms&#8221; on or through their platforms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any content in violation of the Intermediary Guidelines or any other law in force [2].</span></p>
<p><span style="font-weight: 400;">The advisory&#8217;s significance lies in its implicit treatment of AI not as content but as a potentially liable actor within the intermediary ecosystem. By requiring platforms to ensure that AI models deployed on them do not enable unlawful conduct, MeitY effectively placed the responsibility for AI-generated harm squarely on the platform. A platform that deploys a generative AI model which produces deepfake content, defamatory material, or content that undermines democratic processes cannot credibly claim it was merely hosting third-party information — because the AI is not a third party in any conventional sense. It is the platform&#8217;s own deployed technology [2].</span></p>
<p><span style="font-weight: 400;">The advisories also addressed deepfakes specifically, prompted in part by the 2023 Rashmika Mandanna incident, in which an AI-generated synthetic video caused significant public and political concern. That episode illustrated how AI-generated content can cause reputational harm at a scale and speed that outpaces any traditional notice-and-takedown mechanism, and demonstrated to MeitY that the existing framework needed explicit AI-specific obligations [5].</span></p>
<h2><b>IT (Intermediary Guidelines) Amendment Rules, 2026: Formalising AI Liability</b></h2>
<p><span style="font-weight: 400;">The most direct regulatory intervention to date is the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 20 February 2026. These rules, for the first time, introduce a statutory definition of &#8220;synthetically generated information&#8221; (SGI), described as any content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears authentic. This definition is intentionally broad, capturing the full range of AI-generated content including deepfakes, synthetic audio-visual material, and algorithmically altered images [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules impose mandatory labelling obligations on intermediaries that facilitate the creation of SGI. Visual content must carry a clear and permanent label or identifier covering at least ten percent of the display area; audio content must contain an audible disclosure during at least ten percent of its duration. These labels cannot be removed, modified, or suppressed by users. The rules also dramatically reduce takedown timelines: unlawful or prohibited AI-generated content must be removed or disabled within three hours of receiving a lawful notice [5].</span></p>
<p><span style="font-weight: 400;">The 2026 Rules expressly clarify that intermediaries acting in good faith and in compliance with these obligations will continue to enjoy safe harbour protection under Section 79 of the IT Act. Conversely, failure to comply — failure to label, delay in takedown, or inadequate grievance handling — may result in the loss of that protection. Safe harbour is thereby transformed from a passive shield into a compliance-contingent privilege. The standard is no longer merely reactive: an intermediary must demonstrate system-level preparedness to deal with AI-generated risks proactively, not merely respond to them after harm has occurred [5].</span></p>
<h2><b>The Grok Question: When AI Is the Platform</b></h2>
<p><span style="font-weight: 400;">The most pointed articulation of the AI-as-creator problem in Indian regulatory discourse concerns the deployment of Grok, an AI model integrated into X (formerly Twitter). The Indian government has argued — publicly, if not yet conclusively in litigation — that X&#8217;s deployment of Grok effectively makes it a creator of content, not merely a host. If Grok generates content in response to user prompts, X cannot claim to be a neutral intermediary whose only role is the passive transmission of third-party information. On this view, Section 79&#8217;s safe harbour would not apply, because the platform itself is the origin point of at least some of the content on it [6].</span></p>
<p><span style="font-weight: 400;">This is the active/passive distinction from </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> transposed directly onto generative AI. The legal framework as it currently stands does not offer a clean answer. The definition of intermediary in Section 2(1)(w) refers to a person who &#8220;receives, stores or transmits&#8221; electronic records or &#8220;provides any service with respect to that record.&#8221; A generative AI model arguably does none of these things in the traditional sense — it creates records rather than receiving or transmitting them [1][6].</span></p>
<p><span style="font-weight: 400;">Researchers at the Carnegie Endowment have observed that existing definitions under the IT Act, when applied to AI systems, are &#8220;being stretched too thin&#8221; and that &#8220;generative AI systems may not fall neatly within the purview of either publisher or intermediary&#8221; under the current statutory framework [7]. This definitional gap is precisely why the 2026 Amendment Rules and the anticipated Digital India Act are significant: they represent attempts to fill a statutory vacuum that the original IT Act, drafted in 2000, could not have anticipated.</span></p>
<h2><b>MySpace Inc. v. Super Cassettes Industries Ltd.: The No-Monitoring Principle and Its Limits</b></h2>
<p><span style="font-weight: 400;">The no-monitoring principle affirmed in </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> was reaffirmed by a Division Bench of the Delhi High Court in </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478. The court held that intermediaries are not under any positive obligation to proactively monitor content on their platforms for copyright infringement, and that &#8220;actual knowledge&#8221; must be in the form of a court order — not constructive or inferred knowledge. The court expressly rejected the argument that a platform&#8217;s technical ability to detect infringing content was equivalent to legal knowledge sufficient to impose liability [8].</span></p>
<p><span style="font-weight: 400;">This principle sits uneasily alongside the 2026 Rules&#8217; mandatory labelling and three-hour takedown obligations for AI-generated content. If a platform deploys an AI model that generates content, and that content turns out to be unlawful, the platform&#8217;s argument that it had no &#8220;actual knowledge&#8221; of the specific unlawfulness is considerably weakened — because the AI is the platform&#8217;s own system. The content did not arrive from an unknown third-party originator; it was produced by the platform&#8217;s own technology. The no-monitoring principle was premised on the practical impossibility of reviewing every piece of user-generated content. That impossibility argument does not translate cleanly to AI-generated content, which the platform&#8217;s own systems produced and could, in principle, have been designed to screen from the outset [8].</span></p>
<h2><b>X Corp. v. Union of India: Section 79(3)(b) and the Live Battleground of Safe Harbour</b></h2>
<p><span style="font-weight: 400;">The question of how Section 79(3)(b) interacts with AI-generated content is being contested in live litigation before the Karnataka High Court in </span><i><span style="font-weight: 400;">X Corp. v. Union of India</span></i><span style="font-weight: 400;">, a writ petition filed on 5 March 2025 before Justice M. Nagaprasanna. X Corp. challenges the legality of information-blocking orders issued by various government ministries under Section 79(3)(b), following a MeitY Office Memorandum of 31 October 2023 that authorised all central ministries, state governments, and local police officers to issue content blocking orders through the Sahyog portal [9].</span></p>
<p><span style="font-weight: 400;">X&#8217;s core argument, drawing expressly on </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;">, is that Section 79(3)(b) cannot function as an independent mechanism for content blocking. Content blocking, X submits, can only occur through the constitutionally safeguarded process under Section 69A of the IT Act, which requires reasoned orders and procedural safeguards. By contrast, Section 79(3)(b) merely describes the circumstances in which safe harbour is lost — it does not independently confer blocking power on the executive [9]. For AI platforms, the implications are significant: if informal government notices under Section 79(3)(b) are sufficient to trigger takedown obligations for AI-generated content, platforms will face executive pressure to remove such content without judicial oversight, fundamentally altering the architecture of safe harbour from an immunity into a tool of executive content governance.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">Section 79 of the IT Act was not written for the age of algorithms. Its passive-intermediary model, refined through case law from </span><i><span style="font-weight: 400;">Shreya Singhal</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">Christian Louboutin</span></i><span style="font-weight: 400;"> to </span><i><span style="font-weight: 400;">MySpace</span></i><span style="font-weight: 400;">, assumes a clean separation between the platform and the content it hosts. Generative AI destroys that separation. When an algorithm recommends, curates, or creates content, the platform is no longer merely a conduit — it is a participant. Whether courts will treat that participation as sufficient to strip safe harbour protection depends on how the active/passive distinction is applied to algorithmic conduct. MeitY&#8217;s 2026 Amendment Rules have begun to answer this question legislatively, by conditioning safe harbour on demonstrated compliance with AI-specific obligations, mandatory labelling, and accelerated takedown timelines. The answer, in short, is that an algorithm can be treated as part of the intermediary for regulatory purposes — but the intermediary that deploys it cannot hide behind Section 79 when the algorithm itself is the source of the harm.</span></p>
<h2><b>References</b></h2>
<p><span style="font-weight: 400;">[1] Information Technology Act, 2000, Sections 2(1)(w) and 79, Ministry of Electronics and Information Technology, Government of India. Available at:</span><a href="https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105"> <span style="font-weight: 400;">https://www.indiacode.nic.in/show-data?actid=AC_CEN_45_76_00001_200021_1517807324077&amp;orderno=105</span></a></p>
<p><span style="font-weight: 400;">[2] S&amp;R Associates, &#8220;Investing in AI in India (Part 3): AI-related Advisories Under the Intermediary Guidelines,&#8221; October 2024. Available at:</span><a href="https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/"> <span style="font-weight: 400;">https://www.snrlaw.in/investing-in-ai-in-india-part-3-ai-related-advisories-under-the-intermediary-guidelines/</span></a></p>
<p><span style="font-weight: 400;">[3] </span><i><span style="font-weight: 400;">Shreya Singhal v. Union of India</span></i><span style="font-weight: 400;">, (2015) 5 SCC 1, Supreme Court of India, 24 March 2015. Full judgment available at:</span><a href="https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf"> <span style="font-weight: 400;">https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2015/06/Shreya_Singhal_vs_U.O.I_on_24_March_2015.pdf</span></a></p>
<p><span style="font-weight: 400;">[4] </span><i><span style="font-weight: 400;">Christian Louboutin SAS v. Nakul Bajaj &amp; Ors.</span></i><span style="font-weight: 400;">, 2018 SCC OnLine Del 12215, Delhi High Court, 2 November 2018. Available at:</span><a href="https://indiankanoon.org/doc/99622088/"> <span style="font-weight: 400;">https://indiankanoon.org/doc/99622088/</span></a></p>
<p><span style="font-weight: 400;">[5] TBA Law, &#8220;India&#8217;s IT Intermediary Rules 2026 Amendment on AI-Generated Content: A Legal Analysis,&#8221; 2026. Available at:</span><a href="https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis"> <span style="font-weight: 400;">https://www.tbalaw.in/post/india-s-it-intermediary-rules-2026-amendment-on-ai-generated-content-a-legal-analysis</span></a></p>
<p><span style="font-weight: 400;">[6] IAS Gyan, &#8220;Grok Case Raises Questions of AI Governance,&#8221; 2024. Available at:</span><a href="https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance"> <span style="font-weight: 400;">https://www.iasgyan.in/daily-editorials/grok-case-raises-questions-of-ai-governance</span></a></p>
<p><span style="font-weight: 400;">[7] Carnegie Endowment for International Peace, &#8220;India&#8217;s Advance on AI Regulation,&#8221; November 2024. Available at:</span><a href="https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en"> <span style="font-weight: 400;">https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en</span></a></p>
<p><span style="font-weight: 400;">[8] Bar and Bench, &#8220;Generative AI and Intermediary Liability Under the Information Technology Act&#8221; (discussing </span><i><span style="font-weight: 400;">MySpace Inc. v. Super Cassettes Industries Ltd.</span></i><span style="font-weight: 400;">, (2017) 236 DLT 478). Available at:</span><a href="https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act"> <span style="font-weight: 400;">https://www.barandbench.com/view-point/generative-ai-and-intermediary-liability-under-the-information-technology-act</span></a></p>
<p><span style="font-weight: 400;">[9] SC Observer, &#8220;X Relies on &#8216;Shreya Singhal&#8217; in Arbitrary Content-Blocking Case in Karnataka HC,&#8221; July 2025. Available at:</span><a href="https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/"> <span style="font-weight: 400;">https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/</span></a></p>
<p>The post <a href="https://bhattandjoshiassociates.com/section-79-safe-harbour-and-ai-platforms-can-an-algorithm-be-an-intermediary-under-indian-law/">Section 79 Safe Harbour and AI Platforms: Can an Algorithm Be an Intermediary Under Indian Law?</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Personality Rights in India: Legal Framework and Judicial Evolution</title>
		<link>https://bhattandjoshiassociates.com/personality-rights-in-india-legal-framework-and-judicial-evolution/</link>
		
		<dc:creator><![CDATA[aaditya.bhatt]]></dc:creator>
		<pubDate>Tue, 07 Oct 2025 13:11:03 +0000</pubDate>
				<category><![CDATA[Constitutional Law]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[Anil Kapoor Case]]></category>
		<category><![CDATA[Arijit Singh Case]]></category>
		<category><![CDATA[Deepfakes]]></category>
		<category><![CDATA[Delhi High Court]]></category>
		<category><![CDATA[Digital Identity]]></category>
		<category><![CDATA[Indian Law]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Personality Rights]]></category>
		<category><![CDATA[Right to Privacy]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=27614</guid>

					<description><![CDATA[<p>Introduction The digital revolution has fundamentally transformed how celebrity identity is commodified, exploited, and protected in contemporary society. In recent years, Indian courts have witnessed an unprecedented surge in litigation concerning the unauthorized use of celebrity personas, particularly through emerging technologies like artificial intelligence and deepfake mechanisms. The Delhi High Court&#8217;s recent interventions in protecting [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/personality-rights-in-india-legal-framework-and-judicial-evolution/">Personality Rights in India: Legal Framework and Judicial Evolution</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><img fetchpriority="high" decoding="async" class="alignright size-full wp-image-27615" src="https://bj-m.s3.ap-south-1.amazonaws.com/p/2025/10/Personality-Rights-in-India-Legal-Framework-and-Judicial-Evolution.png" alt="Personality Rights in India: Legal Framework and Judicial Evolution" width="1200" height="628" /></h2>
<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">The digital revolution has fundamentally transformed how celebrity identity is commodified, exploited, and protected in contemporary society. In recent years, Indian courts have witnessed an unprecedented surge in litigation concerning the unauthorized use of celebrity personas, particularly through emerging technologies like artificial intelligence and deepfake mechanisms. The Delhi High Court&#8217;s recent interventions in protecting Bollywood celebrities such as Aishwarya Rai Bachchan, Abhishek Bachchan, and filmmaker Karan Johar against unauthorized commercial exploitation represent a watershed moment in the evolution of celebrity personality rights jurisprudence in India. These judicial pronouncements signal a robust commitment to safeguarding individual autonomy over personal identity in an increasingly digitized commercial landscape.</span></p>
<p><span style="font-weight: 400;">The significance of these developments extends beyond the entertainment industry, touching fundamental questions about human dignity, economic exploitation, and the balance between commercial interests and individual rights. As technology enables increasingly sophisticated methods of replicating human likeness and voice, the legal system must adapt to protect individuals from having their identities weaponized without consent. This article examines the comprehensive legal framework governing personality rights in India, analyzes landmark judicial decisions that have shaped this doctrine, explores the regulatory mechanisms currently in place, and discusses the challenges posed by artificial intelligence in the contemporary context.</span></p>
<h2><strong>Understanding Personality Rights in India: Conceptual Foundations</strong></h2>
<p><span style="font-weight: 400;">Personality rights in India encompass the legal entitlements that protect an individual&#8217;s control over the commercial use of their identity attributes. These attributes include not merely physical characteristics like name, image, and voice, but extend to unique mannerisms, signature catchphrases, distinctive styles, and any other identifiable features that constitute a person&#8217;s public persona. The doctrine recognizes that an individual&#8217;s identity possesses inherent economic value, particularly for public figures and celebrities whose fame creates marketable goodwill.</span></p>
<p><span style="font-weight: 400;">The philosophical underpinning of personality rights rests on two distinct but interconnected foundations. First, the dignitary interest recognizes that every person has a fundamental right to control how their identity is presented to the world, protecting against misrepresentation, degradation, or unauthorized association with products or causes. Second, the proprietary interest acknowledges that celebrities invest significant time, effort, and resources in building their public image, creating legitimate economic interests that warrant legal protection against free-riding and unjust enrichment by third parties.</span></p>
<p><span style="font-weight: 400;">Unlike many Western jurisdictions where personality rights are codified through specific legislation, India&#8217;s approach remains predominantly common law-based, drawing from multiple legal doctrines including privacy rights, passing off, defamation, and copyright principles. This fragmented approach has both advantages and disadvantages—while allowing judicial flexibility to adapt to evolving circumstances, it also creates uncertainty and inconsistency in application across different cases and jurisdictions.</span></p>
<h2><b>Constitutional Framework and Privacy Rights</b></h2>
<p><span style="font-weight: 400;">The Indian Constitution does not explicitly enumerate personality rights as fundamental rights. However, the Supreme Court&#8217;s expansive interpretation of Article 21, which guarantees the right to life and personal liberty, has created constitutional foundations for personality rights protection in India. The watershed moment came in 1994 with the Supreme Court&#8217;s decision in R. Rajagopal v. State of Tamil Nadu [1], where the Court recognized that the right to privacy forms an intrinsic component of personal liberty under Article 21.</span></p>
<p><span style="font-weight: 400;">The Rajagopal case involved a proposed autobiography of a death row convict named Auto Shankar, which prison authorities sought to suppress. While the immediate issue concerned freedom of press versus privacy, the Court laid down seminal principles regarding personality rights in India. The judgment established that every individual possesses the right to safeguard their privacy, including control over how their personal information and identity are disseminated publicly. Crucially, the Court held that unauthorized commercial exploitation of a person&#8217;s name or likeness constitutes a violation of this constitutional right.</span></p>
<p><span style="font-weight: 400;">The Court articulated a framework balancing privacy rights against freedom of expression guaranteed under Article 19(1)(a). It held that while the press enjoys freedom to publish matters of public interest, this freedom does not extend to invading privacy for purely commercial purposes. The judgment recognized that public figures have somewhat reduced privacy expectations regarding matters of legitimate public concern, but retained full protection against unauthorized commercial appropriation of their identity.</span></p>
<p><span style="font-weight: 400;">Building upon Rajagopal, subsequent constitutional developments have reinforced personality rights. The nine-judge bench decision in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) definitively established privacy as a fundamental right, explicitly recognizing the &#8220;right to control one&#8217;s personal information&#8221; as a critical aspect of informational privacy. While this case primarily concerned data protection and government surveillance, its principles extend naturally to personality rights, as both doctrines center on individual autonomy and control over personal attributes.</span></p>
<h2><strong>Statutory Framework: Limited but Significant Protections</strong></h2>
<p><span style="font-weight: 400;">India lacks dedicated legislation specifically addressing personality rights, instead relying on provisions scattered across various intellectual property and commercial statutes. This patchwork approach requires creative legal interpretation to provide adequate protection.</span></p>
<p><span style="font-weight: 400;">The Trade Marks Act, 1999 offers indirect protection through the common law doctrine of passing off, which Section 27(2) expressly preserves. While primarily designed to prevent consumer confusion regarding goods and services, courts have extended passing off principles to protect celebrity identities. When a third party uses a celebrity&#8217;s name or likeness in a manner suggesting endorsement or association, this may constitute actionable passing off even absent trademark registration. The critical requirement is demonstrating goodwill and reputation that the unauthorized use seeks to exploit.</span></p>
<p><span style="font-weight: 400;">The Copyright Act, 1957 provides limited protection for certain personality attributes. Section 38B, introduced through the 2012 amendment, grants performers moral rights over their performances, including the right to prevent distortion or mutilation that would be prejudicial to their reputation, while Section 38A confers on performers exclusive rights over the reproduction, broadcast, and communication of their performances. While these provisions primarily target unauthorized reproduction of performances rather than identity per se, recent cases like Arijit Singh v. Codible Ventures LLP have successfully invoked them in personality rights disputes [2].</span></p>
<p><span style="font-weight: 400;">The Information Technology Act, 2000, though not designed for personality rights protection, has become relevant in addressing digital violations. Section 66E criminalizes violation of privacy through intentional capture, publication, or transmission of images of private areas without consent, and Section 66D punishes cheating by personation using computer resources. While these provisions primarily target privacy violations and identity theft rather than commercial exploitation, they establish legal recognition of digital identity as worthy of protection.</span></p>
<h2><strong>Judicial Development: Landmark Cases Shaping Personality Rights in India</strong></h2>
<p><span style="font-weight: 400;">Indian courts have played the defining role in developing personality rights doctrine through progressive judgments that have expanded protection incrementally. Beyond the foundational Rajagopal decision, several cases merit detailed examination for their contribution to this evolving jurisprudence.</span></p>
<p><span style="font-weight: 400;">The Madras High Court&#8217;s decision concerning actor Rajinikanth established important precedents regarding the threshold for proving personality rights violations. The Court held that when a celebrity&#8217;s identity is sufficiently distinctive and recognized, unauthorized commercial use need not demonstrate consumer confusion or deception. The mere appropriation of the celebrity&#8217;s identity attributes for commercial gain, without consent, constitutes an actionable wrong. This departure from traditional passing off requirements significantly strengthened personality rights protection by eliminating the often-difficult burden of proving actual confusion.</span></p>
<p><span style="font-weight: 400;">In ICC Development (International) Ltd. v. Arvee Enterprises (2003), the Delhi High Court addressed personality rights in the context of sports marketing. While the case primarily concerned ICC&#8217;s rights to the Cricket World Cup brand, the Court&#8217;s observations about protecting individual players&#8217; rights laid groundwork for future personality rights litigation. The judgment recognized that sportspersons develop protectable rights in their performances and public personas.</span></p>
<p><span style="font-weight: 400;">The case of Titan Industries Ltd. v. Ramkumar Jewellers (2012) saw the Delhi High Court injuncting unauthorized use of celebrity cricketer M.S. Dhoni&#8217;s image in jewelry advertisements. The Court held that Dhoni had acquired distinctive goodwill and reputation, creating protectable personality rights. Unauthorized use not only caused economic harm through lost endorsement opportunities but also violated his right to control commercial associations with his identity.</span></p>
<h2><b>The AI Era: Recent Judicial Responses to Technological Threats</b></h2>
<p><span style="font-weight: 400;">The emergence of artificial intelligence technologies capable of creating hyper-realistic deepfakes, voice clones, and digital avatars has precipitated a new wave of personality rights litigation. Courts have responded with heightened protective measures recognizing the existential threat these technologies pose to individual autonomy.</span></p>
<p><span style="font-weight: 400;">The Delhi High Court&#8217;s 2023 decision protecting actor Anil Kapoor represents a landmark in addressing AI-driven personality rights violations [3]. Kapoor approached the Court after discovering numerous instances of AI-generated deepfake videos superimposing his face onto other actors, unauthorized merchandise featuring his likeness, and websites selling fake autographs. The Court granted a sweeping ex-parte injunction restraining not only specifically identified defendants but also &#8220;the world at large&#8221; from misusing Kapoor&#8217;s personality attributes including his name, image, voice, signature catchphrases like &#8220;jhakaas,&#8221; and any AI-generated content featuring his likeness.</span></p>
<p><span style="font-weight: 400;">The Court&#8217;s reasoning emphasized several critical points. First, it recognized that personality rights exist independent of contractual arrangements or intellectual property registrations—they are inherent rights flowing from personal identity. Second, the judgment acknowledged that AI technologies democratize the ability to create convincing fake content, exponentially increasing the risk of harm. Third, the Court held that the scale and persistence of digital violations justify broader injunctions than traditional intellectual property cases, including dynamic injunctions that automatically apply to future infringers.</span></p>
<p><span style="font-weight: 400;">The Bombay High Court&#8217;s 2024 decision in Arijit Singh v. Codible Ventures LLP marked another significant milestone in protecting artists against AI voice cloning [2]. Singh sued after discovering platforms offering AI tools that could replicate his distinctive voice, allowing users to create songs apparently sung by him without permission. The Bombay High Court granted ad-interim injunction restraining the defendants from operating or promoting such voice cloning tools targeting Singh&#8217;s voice.</span></p>
<p><span style="font-weight: 400;">The Court&#8217;s analysis integrated multiple legal doctrines. It invoked the Copyright Act&#8217;s provisions on performers&#8217; rights, holding that Singh&#8217;s voice constitutes a protected performance. The judgment recognized personality rights as protecting the commercial value of Singh&#8217;s distinctive vocal characteristics. Significantly, the Court held that merely providing tools for others to create infringing content constitutes contributory infringement, establishing potential liability for technology platforms facilitating personality rights violations.</span></p>
<h2><b>Balancing Rights: Personality Rights versus Freedom of Expression</b></h2>
<p><span style="font-weight: 400;">While courts have robustly protected personality rights, they have simultaneously recognized the critical importance of preserving freedom of expression, particularly for artistic works, parody, satire, and matters of public interest. Establishing appropriate boundaries between these competing rights remains an ongoing judicial challenge.</span></p>
<p><span style="font-weight: 400;">The Delhi High Court&#8217;s decision in DM Entertainment Pvt. Ltd. v. Baby Gift House addressed this balance in the context of Rajesh Khanna&#8217;s estate seeking protection of the late actor&#8217;s personality rights. The Court granted protection but carved out exceptions for biographical works, documentaries, and artistic expressions that reference Khanna&#8217;s life and career. The judgment emphasized that personality rights cannot be weaponized to suppress legitimate artistic or journalistic expression about public figures.</span></p>
<p><span style="font-weight: 400;">Similarly, in Digital Collectibles PTE Ltd. v. Galactus Funware Technology Pvt. Ltd., the Court distinguished between commercial exploitation and permissible uses. The judgment held that using celebrity images or references in contexts of parody, criticism, or commentary—even when the creator derives revenue—does not necessarily violate personality rights if the use is genuinely expressive rather than purely commercial. The critical inquiry focuses on whether the use exploits the celebrity&#8217;s commercial value or rather makes an independent statement about them.</span></p>
<p><span style="font-weight: 400;">Courts have adopted a multi-factor test for evaluating whether particular uses fall within protected expression. Relevant considerations include: the transformative nature of the use, whether the work comments upon or criticizes the celebrity, the extent to which the celebrity&#8217;s identity dominates the work, whether the work serves primarily as a vehicle for commercial gain versus artistic expression, and the potential for consumer confusion regarding endorsement or sponsorship.</span></p>
<p><span style="font-weight: 400;">This balancing approach reflects constitutional imperatives. Article 19(1)(a) protects not merely speech but also artistic expression, satire, and dissent. An overly expansive interpretation of personality rights could chill legitimate artistic and journalistic endeavors and dampen cultural production. Courts therefore tread carefully, protecting personality rights against naked commercial exploitation while preserving breathing space for creative expression.</span></p>
<h2><b>Regulatory Mechanisms and Enforcement Challenges</b></h2>
<p><span style="font-weight: 400;">Enforcing personality rights in India in the digital age presents formidable practical challenges. The borderless nature of internet commerce, the anonymity afforded by digital platforms, and the sheer volume of potential infringements create significant obstacles to effective rights protection.</span></p>
<p><span style="font-weight: 400;">Traditional enforcement mechanisms include civil suits seeking injunctions and damages. Courts have shown willingness to grant ex-parte injunctions in clear-cut cases, particularly where continuing violations threaten irreparable harm. However, obtaining and enforcing judgments against online infringers, especially those operating from foreign jurisdictions, remains extremely difficult. The technical complexity of blockchain-based platforms and cryptocurrency transactions further complicates enforcement.</span></p>
<p><span style="font-weight: 400;">Platform liability has emerged as a critical issue. While the Information Technology Act&#8217;s safe harbor provisions under Section 79 protect intermediaries from liability for user-generated content if they act as passive conduits and remove infringing content upon notice, courts have shown willingness to hold platforms accountable when they actively facilitate or profit from infringement. The dynamic injunction approach adopted in cases like Anil Kapoor&#8217;s attempts to address this by requiring platforms to proactively prevent similar future violations.</span></p>
<p><span style="font-weight: 400;">Administrative enforcement through existing regulatory bodies remains limited. While the Advertising Standards Council of India provides self-regulatory oversight over advertising content, including unauthorized celebrity endorsements, its jurisdiction is limited and enforcement mechanisms lack teeth. The Ministry of Electronics and Information Technology has issued guidelines and rules addressing various aspects of digital content, but these do not specifically target personality rights violations.</span></p>
<p><span style="font-weight: 400;">Criminal remedies exist for certain egregious violations. Sections 66C (identity theft) and 66D (cheating by personation) of the Information Technology Act criminalize specific digital identity crimes. However, prosecution under these provisions requires proving intent to defraud or cause harm, which may not encompass all personality rights violations motivated by commercial gain rather than malicious intent.</span></p>
<h2><b>International Perspectives and Comparative Analysis</b></h2>
<p><span style="font-weight: 400;">Examining how other jurisdictions address personality rights provides valuable insights for India&#8217;s evolving legal framework. The United States recognizes a &#8220;right of publicity&#8221; through state law, with significant variations across jurisdictions. California&#8217;s statute provides robust protection extending even posthumously, allowing estates to control commercial use of deceased celebrities&#8217; identities. American courts have developed sophisticated doctrines balancing publicity rights against First Amendment protections.</span></p>
<p><span style="font-weight: 400;">The European Union addresses personality rights through multiple instruments including the General Data Protection Regulation, which protects personal data including biometric identifiers, and various national laws protecting image rights. France, for example, recognizes strong personality rights under the Civil Code, protecting individuals&#8217; right to control their image throughout life and limiting posthumous commercial exploitation.</span></p>
<p><span style="font-weight: 400;">The United Kingdom primarily addresses personality rights through passing off and trademark law, requiring demonstration of goodwill and misrepresentation. This approach resembles India&#8217;s but has developed more extensive case law. Recent cases have addressed social media influencers&#8217; personality rights and digital exploitation.</span></p>
<p><span style="font-weight: 400;">Learning from these jurisdictions, India could benefit from more explicit statutory frameworks while maintaining judicial flexibility. Clear legislative standards would provide predictability for both rights holders and potential users, reducing litigation costs and fostering innovation while respecting personality rights.</span></p>
<h2><b>Contemporary Challenges: Deepfakes, NFTs, and the Metaverse</b></h2>
<p><span style="font-weight: 400;">Emerging technologies continue presenting novel challenges to personality rights protection. Deepfake technology, which uses machine learning to create synthetic media indistinguishable from authentic recordings, poses existential threats to personal autonomy and truth itself. Beyond commercial exploitation, deepfakes enable creation of non-consensual intimate imagery, political disinformation, and reputational destruction.</span></p>
<p><span style="font-weight: 400;">Non-fungible tokens (NFTs) and digital collectibles raise complex questions about personality rights in virtual spaces. When digital artists create and sell NFTs featuring celebrity likenesses, does this constitute protected artistic expression or commercial exploitation? Courts will need to develop nuanced approaches distinguishing transformative artistic works from mere digital merchandise.</span></p>
<p><span style="font-weight: 400;">The metaverse and virtual worlds present perhaps the most complex frontier. As individuals increasingly inhabit digital avatars and virtual identities, questions arise about personality rights in these contexts. Can celebrities prevent others from creating virtual avatars resembling them? What about AI-powered virtual influencers modeled on real persons? These questions lack clear answers under existing legal frameworks.</span></p>
<p><span style="font-weight: 400;">Voice cloning technology, as addressed in the Arijit Singh case, continues advancing rapidly. Platforms now offer tools allowing anyone to synthesize speech in celebrity voices within seconds. While legitimate applications exist—such as preserving voices of individuals with degenerative conditions—the potential for abuse is immense, ranging from fraudulent impersonation to unauthorized commercial endorsements.</span></p>
<h2><b>The Path Forward: Recommendations for Legislative Reform</b></h2>
<p><span style="font-weight: 400;">Given the challenges identified, comprehensive legislative reform appears increasingly necessary. A dedicated personality rights statute could provide clarity while maintaining flexibility to address evolving technologies. Such legislation should clearly define protectable personality attributes, establish registration mechanisms for those seeking heightened protection, specify exceptions for legitimate uses including news reporting, artistic expression, parody, and satire, and provide effective remedies including injunctions, damages, and statutory penalties for willful violations.</span></p>
<p><span style="font-weight: 400;">The statute should address temporal limitations, particularly regarding posthumous personality rights. While some protection for deceased personalities&#8217; estates may be appropriate given ongoing commercial value, unlimited perpetual protection risks keeping material out of the public domain and hampering creative expression. A balanced approach might provide limited posthumous protection, perhaps 50-70 years, similar to copyright terms.</span></p>
<p><span style="font-weight: 400;">Platform accountability must be strengthened. Legislation should clarify intermediary liability standards, requiring platforms to implement robust content moderation systems, respond promptly to takedown notices, and potentially employ proactive measures like AI-driven detection of likely infringing content. Safe harbor protections should be contingent on demonstrable good faith efforts to prevent infringement.</span></p>
<p><span style="font-weight: 400;">Creating specialized adjudicatory mechanisms could expedite dispute resolution. Personality rights disputes often require technical expertise regarding digital technologies and quick resolution to prevent ongoing harm. Specialized tribunals or fast-track procedures within existing intellectual property forums could provide efficient remedies.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">India&#8217;s personality rights jurisprudence stands at a critical juncture. Judicial decisions over the past three decades have constructed a robust framework protecting individuals&#8217; autonomy over their identities, with recent cases responding proactively to technological threats posed by artificial intelligence and deepfakes. The Delhi High Court&#8217;s protection of Anil Kapoor [3] and the Bombay High Court&#8217;s decision in Arijit Singh&#8217;s favor [2] demonstrate judicial recognition that traditional legal doctrines must adapt to digital realities.</span></p>
<p><span style="font-weight: 400;">However, the absence of comprehensive statutory frameworks creates uncertainty and risks inconsistent application across jurisdictions. As technology continues advancing, enabling ever-more sophisticated methods of identity appropriation and manipulation, the need for clear legislative standards becomes increasingly urgent. Such legislation must carefully balance personality rights protection against freedom of expression, ensuring that legitimate artistic, journalistic, and public interest uses remain permissible while preventing commercial exploitation and malicious misuse.</span></p>
<p><span style="font-weight: 400;">The stakes extend beyond celebrity endorsements and commercial interests. Personality rights implicate fundamental questions of human dignity, autonomy, and identity in an increasingly digital world. As artificial intelligence blurs boundaries between authentic and synthetic, protecting individuals&#8217; control over their own identities becomes essential to preserving meaningful human agency. India&#8217;s legal system must continue evolving to meet these challenges, combining judicial innovation with thoughtful legislative reform to create a framework protecting personality rights for all citizens, not merely the famous few.</span></p>
<h2><b>References</b></h2>
<p><span style="font-weight: 400;">[1] R. Rajagopal v. State of Tamil Nadu, AIR 1995 SC 264. Available at: </span><a href="https://indiankanoon.org/doc/501107/"><span style="font-weight: 400;">https://indiankanoon.org/doc/501107/</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[2] SpicyIP. (2024). Synthetic Singers and Voice Theft: BomHC protects Arijit Singh&#8217;s Personality Rights. Available at: </span><a href="https://spicyip.com/2024/08/synthetic-singers-and-voice-theft-bomhc-protects-arijit-singhs-personality-rights-part-i.html"><span style="font-weight: 400;">https://spicyip.com/2024/08/synthetic-singers-and-voice-theft-bomhc-protects-arijit-singhs-personality-rights-part-i.html</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[3] LiveLaw. (2023). Delhi High Court Protects Actor Anil Kapoor&#8217;s Personality Rights, Restrains Misuse Of His Name, Image Or Voice Without Consent. Available at: </span><a href="https://www.livelaw.in/top-stories/delhi-high-court-anil-kapoor-voice-image-misuse-personality-rights-238217"><span style="font-weight: 400;">https://www.livelaw.in/top-stories/delhi-high-court-anil-kapoor-voice-image-misuse-personality-rights-238217</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[4] World Intellectual Property Organization. (2024). AI voice cloning: how a Bollywood veteran set a legal precedent. Available at: </span><a href="https://www.wipo.int/web/wipo-magazine/articles/ai-voice-cloning-how-a-bollywood-veteran-set-a-legal-precedent-73631"><span style="font-weight: 400;">https://www.wipo.int/web/wipo-magazine/articles/ai-voice-cloning-how-a-bollywood-veteran-set-a-legal-precedent-73631</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[5] The IP Press. (2023). Delhi High Court&#8217;s Landmark Order: Protecting Anil Kapoor&#8217;s Persona in the Age of AI. Available at: </span><a href="https://www.theippress.com/2023/10/09/delhi-high-courts-landmark-order-protecting-anil-kapoors-persona-in-the-age-of-ai-an-indian-legal-perspective/"><span style="font-weight: 400;">https://www.theippress.com/2023/10/09/delhi-high-courts-landmark-order-protecting-anil-kapoors-persona-in-the-age-of-ai-an-indian-legal-perspective/</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[6] Indian Kanoon. R. Rajagopal v. State of Tamil Nadu Full Judgment. Available at: </span><a href="https://indiankanoon.org/doc/501107/"><span style="font-weight: 400;">https://indiankanoon.org/doc/501107/</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[7] Business Standard. (2023). Delhi HC restrains use of Anil Kapoor&#8217;s name, image, signature catchphrase. Available at: </span><a href="https://www.business-standard.com/india-news/delhi-hc-restrains-use-of-anil-kapoor-s-name-image-signature-catchphrase-123092001237_1.html"><span style="font-weight: 400;">https://www.business-standard.com/india-news/delhi-hc-restrains-use-of-anil-kapoor-s-name-image-signature-catchphrase-123092001237_1.html</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[8] The IP Press. (2024). Voice Theft in the Digital Age: Bombay High Court&#8217;s Landmark Ruling on AI and Personality Rights. Available at: </span><a href="https://www.theippress.com/2024/09/05/voice-theft-in-the-digital-age-bombay-high-courts-landmark-ruling-on-ai-and-personality-rights/"><span style="font-weight: 400;">https://www.theippress.com/2024/09/05/voice-theft-in-the-digital-age-bombay-high-courts-landmark-ruling-on-ai-and-personality-rights/</span></a><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">[9] SCC Online. (2024). Bombay HC grants ad-interim injunction in favour of Arijit Singh to protect his personality rights. Available at: </span><a href="https://www.scconline.com/blog/post/2024/08/02/bomhc-grants-ad-interim-injunction-to-arijit-singh-to-protect-his-personality-rights/"><span style="font-weight: 400;">https://www.scconline.com/blog/post/2024/08/02/bomhc-grants-ad-interim-injunction-to-arijit-singh-to-protect-his-personality-rights/</span></a><span style="font-weight: 400;"> </span></p>
<p>The post <a href="https://bhattandjoshiassociates.com/personality-rights-in-india-legal-framework-and-judicial-evolution/">Personality Rights in India: Legal Framework and Judicial Evolution</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Legal Status of Deepfakes and AI-Generated Media</title>
		<link>https://bhattandjoshiassociates.com/the-legal-status-of-deepfakes-and-ai-generated-media/</link>
		
		<dc:creator><![CDATA[Komal Ahuja]]></dc:creator>
		<pubDate>Mon, 17 Feb 2025 10:47:16 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Digital Law]]></category>
		<category><![CDATA[Privacy and Data Protection]]></category>
		<category><![CDATA[Technology Ethics and Policy]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI Generated Media]]></category>
		<category><![CDATA[AI in Law]]></category>
		<category><![CDATA[Deepfake Legislation]]></category>
		<category><![CDATA[Deepfake Regulation]]></category>
		<category><![CDATA[Deepfakes]]></category>
		<category><![CDATA[Digital Ethics]]></category>
		<category><![CDATA[intellectual property]]></category>
		<category><![CDATA[misinformation]]></category>
		<category><![CDATA[Privacy Laws]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=24379</guid>

					<description><![CDATA[<p>Introduction The emergence of deepfake technology and AI-created content detached from real-world impacts has fundamentally changed how people create, consume and interact with digital content. Deepfakes can create realistic videos, images, and audio by using sophisticated machine learning algorithms, especially generative adversarial networks (GANs), to overlay a person’s voice or face onto someone else’s body [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/the-legal-status-of-deepfakes-and-ai-generated-media/">The Legal Status of Deepfakes and AI-Generated Media</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><img decoding="async" class="alignright size-full wp-image-24383" src="https://bj-m.s3.ap-south-1.amazonaws.com/p/2025/02/the-legal-status-of-deepfakes-and-ai-generated-media.png" alt="The Legal Status of Deepfakes and AI-Generated Media" width="1200" height="628" /></h2>
<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">The emergence of deepfake technology and AI-generated content has fundamentally changed how people create, consume, and interact with digital media. Deepfakes use sophisticated machine learning techniques, especially generative adversarial networks (GANs), to produce realistic videos, images, and audio that overlay a person’s voice or face onto someone else’s body and speech. While the possible uses for this technology across the innovation, entertainment, and education industries are plentiful, its ethical, social, and legal repercussions are equally concerning. This article examines the legal landscape surrounding deepfakes and AI-generated media, with special focus on their regulation, existing laws, landmark cases, and judicial analysis, and asks how society can meet the challenges this new technology poses.</span></p>
<h2><b>Understanding Deepfakes and AI-Generated Media</b></h2>
<p><span style="font-weight: 400;">Deepfakes are produced by sophisticated artificial intelligence techniques, most notably GANs. A GAN pits two neural networks against each other: one generates content, while the other tries to detect whether that content is fake. With each training round, the detector becomes better at spotting fakes and the generator becomes better at producing them. The result is media that is extremely convincing but entirely synthetic. AI-generated media includes deepfakes but extends to computer-generated art, music, literature, and much more. These developments are transforming what is understood as creativity and raising moral and legal questions about creation, copyright, and responsibility.</span></p>
<p><span style="font-weight: 400;">Public concern about image and video manipulation technology has shifted to the damage it can inflict on individuals and society as a whole. Harmful uses include non-consensual pornography, identity deception, political tampering, and monetary scams. Legal systems in many regions are struggling to enforce laws against this advanced technology without stifling freedom and creativity.</span></p>
<h2><b>Regulatory Frameworks Governing Deepfakes</b></h2>
<p><span style="font-weight: 400;">Regulating deepfakes involves a delicate balance between mitigating harm and upholding freedom of expression and technological progress. Different jurisdictions have adopted varied approaches, reflecting their legal traditions, cultural values, and levels of technological advancement.</span></p>
<p><b>United States</b></p>
<p><span style="font-weight: 400;">The approach to regulating deepfakes in the US is fragmented, varying widely by state. States such as California, Texas, and Virginia have legislated against certain malicious applications of deepfake technology. For instance, California’s AB 730 prohibits the distribution of materially deceptive deepfake videos of political candidates within 60 days of an election. AB 602 assists victims of non-consensual pornographic deepfakes by providing a cause of action against those who create or distribute such material. Texas has likewise recognized the dangers of deepfake technology by criminalizing deepfakes that harm individuals or are intended to influence election outcomes.</span></p>
<p><span style="font-weight: 400;">At the federal level, the proposed DEEPFAKES Accountability Act aims to counter the misuse of deepfake technology from a more holistic standpoint. The Act is not yet in effect, but it would require deepfake content to carry identifying labels and would impose severe penalties for abusive uses. Other laws, such as Section 230 of the Communications Decency Act and certain intellectual property statutes, help address some deepfake problems, but their influence is indirect and their application often vague.</span></p>
<p><b>European Union</b></p>
<p><span style="font-weight: 400;">The European Union has a broader strategy for regulating AI-based media. The Artificial Intelligence Act (AIA) classifies AI systems into distinct risk categories and lays down strict obligations for high-risk applications. Transparency is one of the &#8220;cornerstones&#8221; of the AIA: it requires disclosure whenever content is created or altered by an AI system, a requirement that applies directly to deepfakes.</span></p>
<p><span style="font-weight: 400;">The EU&#8217;s General Data Protection Regulation (GDPR) is also an important tool for the prevention of deepfakes. An unlawful generation or sharing of deepfake content is commonly achieved by, for instance, processing personal information without permission in a manner prohibited by the provisions of the GDPR. Specifically, the Digital Services Act (DSA) and the Digital Markets Act (DMA) are works in progress that will seek to improve the responsibility of online platforms with respect to tackling harmful content, like deepfakes, amongst others.</span></p>
<p><b>India</b></p>
<p><span style="font-weight: 400;">In India, the legal framework for dealing with deepfakes is still in its infancy. Although no law specifically criminalizes the use of deepfake technology, the Information Technology Act, 2000, and the Indian Penal Code (IPC) are used to prosecute related offences. Section 67A of the IT Act makes it unlawful to publish sexually explicit material, including non-consensual pornographic deepfakes. Other relevant provisions address defamation (Section 499 of the IPC) and identity theft (Section 66C of the IT Act). Nevertheless, enforcement remains difficult because of the anonymity afforded by digital platforms and jurisdictional complications.</span></p>
<h2><b>Key Legal Issues Surrounding Deepfakes </b></h2>
<p><b>Privacy and Consent</b></p>
<p><span style="font-weight: 400;">Privacy violations and lack of consent are among the most pressing legal concerns associated with deepfakes. Non-consensual pornographic deepfakes disproportionately target women and have devastating consequences for their victims. Legal systems are increasingly recognizing the need to criminalize such conduct. However, the enforcement of privacy laws remains challenging, particularly in the digital age, where anonymity and cross-border platforms complicate accountability.</span></p>
<p><b>Intellectual Property</b></p>
<p><span style="font-weight: 400;">Deepfakes and AI media raise a host of intellectual property questions. The central issue is whether AI-generated media is copyrightable and, if so, who should own the copyright. The United States Copyright Office has clarified that works created solely by AI are not eligible for copyright protection because they lack human authorship. However, when an AI is used as a tool by a human creator, the resulting work may qualify for protection. Similar questions are being raised in the EU and other jurisdictions, where laws are grappling with the concept of authorship in relation to AI.</span></p>
<p><b>Defamation and Misinformation</b></p>
<p><span style="font-weight: 400;">Deepfakes have been used to create false and damaging representations of individuals, leading to defamation claims. The difficulty lies in proving the falsity and harm caused by the deepfake, as well as identifying the creator. The use of deepfakes in spreading political misinformation further complicates matters, raising concerns about the integrity of democratic processes. Legal frameworks must address these risks while safeguarding freedom of speech and expression.</span></p>
<p><b>National Security and Public Safety</b></p>
<p><span style="font-weight: 400;">Deepfakes pose significant risks to national security and public safety. They can be weaponized to spread disinformation, impersonate public officials, or incite panic. For example, a deepfake of a government leader issuing a false directive could have catastrophic consequences. Addressing these risks requires a multi-faceted approach, including robust legal and regulatory measures, technological interventions, and public awareness campaigns.</span></p>
<h2>Landmark Cases on Deepfakes and AI Media</h2>
<p><span style="font-weight: 400;">A number of legal cases have framed the debate on deepfakes and AI media, showcasing how the field is shifting:</span></p>
<p><span style="font-weight: 400;"><strong>People v. Tracey (California, 2020)</strong> &#8211; The case dealt with the production and distribution of non-consensual deepfake pornography. The court upheld California&#8217;s AB 602, affirming the need for stronger legal boundaries against invasions of privacy.</span></p>
<p><span style="font-weight: 400;"><strong>Deepfakes in Political Campaigns</strong>: Case law is still developing, but courts have begun to address the use of deepfakes in political elections. Injunction proceedings under California&#8217;s AB 730 illustrate the judiciary&#8217;s role in curbing electoral deception.</span></p>
<p><span style="font-weight: 400;"><strong>Thaler v. Copyright Office (2022)</strong>: This case concerned copyright in AI-created works. The United States Copyright Office denied a copyright application for an artwork generated by an AI program with no human involvement, restating the requirement of human authorship.</span></p>
<p><span style="font-weight: 400;"><strong>EU Jurisprudence on GDPR Violations</strong>: European courts are increasingly dealing with personal information being used without consent to create deepfakes, demonstrating the evolving relationship between law and technology.</span></p>
<h2>The Path Forward for Deepfakes and AI-Generated Media</h2>
<p><b>Strengthening Legal Frameworks</b></p>
<p><span style="font-weight: 400;">To address the challenges posed by deepfakes and AI-generated media effectively, legal systems must evolve. Comprehensive legislation should explicitly define and regulate the creation, distribution, and use of deepfakes. Transparency requirements, such as labelling AI-generated content, should be mandated, and malicious uses of the technology, including non-consensual pornography and disinformation campaigns, must be penalized.</span></p>
<p><b>Enhancing International Cooperation</b></p>
<p><span style="font-weight: 400;">The borderless nature of the internet necessitates international collaboration to combat the misuse of deepfake technology. Harmonizing legal standards and facilitating cross-border enforcement through treaties and agreements are crucial steps in this direction.</span></p>
<p><b>Leveraging Technology</b></p>
<p><span style="font-weight: 400;">Regulators and law enforcement agencies can harness AI and machine learning to detect and combat deepfakes. Developing robust detection tools and integrating them into online platforms can help mitigate the spread of harmful content and reduce the technology’s misuse.</span></p>
<p><b>Promoting Ethical AI Development</b></p>
<p><span style="font-weight: 400;">Governments, tech companies, and civil society must share the responsibility of ensuring that AI technologies are developed and deployed responsibly. Ethical guidelines and industry standards can play a pivotal role in minimizing the risks associated with deepfakes.</span></p>
<h2><b>Conclusion</b></h2>
<p><span style="font-weight: 400;">The rise of deepfakes and AI-generated media creates unprecedented legal difficulties that must be dealt with creatively and proactively. Existing laws provide some protection, but they cannot keep pace with every issue that the rapid evolution of this technology creates. A forward-thinking approach, combined with innovative solutions, is needed to harness the potential of these technologies while protecting individual rights, public safety, and democracy. Robust legal frameworks, international cooperation, technological development, and ethical AI practices will be essential in navigating this crucial turning point.</span></p>
<p>The post <a href="https://bhattandjoshiassociates.com/the-legal-status-of-deepfakes-and-ai-generated-media/">The Legal Status of Deepfakes and AI-Generated Media</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Legal Challenges of AI in Criminal Sentencing</title>
		<link>https://bhattandjoshiassociates.com/legal-challenges-of-ai-in-criminal-sentencing/</link>
		
		<dc:creator><![CDATA[Komal Ahuja]]></dc:creator>
		<pubDate>Thu, 13 Feb 2025 10:07:21 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Criminal Justice]]></category>
		<category><![CDATA[Technology Ethics and Policy]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI in Justice]]></category>
		<category><![CDATA[Criminal Sentencing]]></category>
		<category><![CDATA[Due Process]]></category>
		<category><![CDATA[Ethical AI]]></category>
		<category><![CDATA[fair trial]]></category>
		<category><![CDATA[Judicial AI]]></category>
		<category><![CDATA[Justice System]]></category>
		<category><![CDATA[Legal-Reforms]]></category>
		<category><![CDATA[Tech Ethics]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=24352</guid>

					<description><![CDATA[<p>Introduction Artificial Intelligence (AI) has transformed various sectors, and the legal domain is no exception. One of the most controversial applications of AI is in criminal sentencing, where algorithms and predictive analytics are used to assist judges in making decisions about bail, parole, and sentencing. While this technological advancement promises efficiency and objectivity, it also [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/legal-challenges-of-ai-in-criminal-sentencing/">Legal Challenges of AI in Criminal Sentencing</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><img decoding="async" class="alignright size-full wp-image-24353" src="https://bj-m.s3.ap-south-1.amazonaws.com/p/2025/02/legal-challenges-of-ai-in-criminal-sentencing.png" alt="Legal Challenges of AI in Criminal Sentencing" width="1200" height="628" /></h2>
<h2><b>Introduction</b></h2>
<p><span style="font-weight: 400;">Artificial Intelligence (AI) has transformed various sectors, and the legal domain is no exception. One of the most controversial applications of AI is in criminal sentencing, where algorithms and predictive analytics are used to assist judges in making decisions about bail, parole, and sentencing. While this technological advancement promises efficiency and objectivity, it also raises numerous legal, ethical, and procedural challenges. These challenges are critical because they directly impact the fairness of trials, the rights of the accused, and the integrity of the justice system.</span></p>
<h2><b>The Integration of AI in Criminal Sentencing</b></h2>
<p><span style="font-weight: 400;">AI tools in criminal sentencing are designed to analyze vast amounts of data, including criminal records, demographic information, and case histories, to predict the likelihood of recidivism or assess the risk posed by defendants. Popular examples include risk assessment tools like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) and PSA (Public Safety Assessment). These tools aim to provide judges with data-driven insights to reduce biases and improve consistency in sentencing decisions.</span></p>
<p><span style="font-weight: 400;">However, these systems often operate as black boxes, where the methodology and decision-making processes are not transparent. This lack of transparency has profound legal implications, particularly regarding the right to a fair trial and due process. It raises the question of whether reliance on AI undermines the judiciary&#8217;s role as the ultimate arbiter of justice.</span></p>
<h2><b>Regulatory Framework Governing AI in Criminal Justice</b></h2>
<p><span style="font-weight: 400;">Oversight of AI in criminal sentencing varies considerably from one jurisdiction to another. In the United States, there is no broad federal law governing AI in sentencing. Instead, courts assess the legality of these tools against general constitutional norms, such as the due process clauses of the Fifth and Fourteenth Amendments. Some state legislatures have also enacted a degree of regulation, with certain states requiring transparency and accountability provisions.</span></p>
<p><span style="font-weight: 400;">Through its General Data Protection Regulation (GDPR), the European Union (EU) grants rights concerning automated decision-making, including the right to receive an explanation of, and to contest, the outcome of an algorithmic decision. Member states may derogate from certain GDPR provisions in the criminal justice context, but violations of personal rights through AI systems remain actionable. The planned EU Artificial Intelligence Act sets out a categorization scheme based on the degree of risk posed by various AI systems; criminal justice uses are deemed high risk and are therefore heavily regulated.</span></p>
<p><span style="font-weight: 400;">Currently, Indian legislation does not address the use of AI within the criminal justice system. However, Article 14’s guarantee of equality before the law and Article 21’s right to life and personal liberty provide scaffolding to contest unfair practices stemming from the use of AI technologies.</span></p>
<h2><b>Bias and Discrimination in AI Systems</b></h2>
<p><span style="font-weight: 400;">Perhaps the most serious concern about AI in the criminal justice context is discrimination in sentencing. AI systems depend heavily on the data they are trained on, which may introduce bias. The underlying data from criminal justice systems are often fraught with biases relating to race, class, region, and socio-economic factors, and AI systems can propagate these biases. For example, one study showed that the COMPAS algorithm flagged Black defendants as high risk at disproportionately higher rates than White defendants.</span></p>
<p><span style="font-weight: 400;">Legal standards such as the Equal Protection Clause of the Fourteenth Amendment to the U.S. Constitution prohibit discriminatory practices, but proving algorithmic bias in court is challenging and technical. The 2016 case of State v. Loomis showed how complicated these issues can be. The defendant claimed that the Wisconsin court&#8217;s use of COMPAS in sentencing violated his due process rights because the algorithm does not make its logic public. The Supreme Court of Wisconsin acknowledged the risk of misuse and required safeguards on how such scores may be used, but it ultimately accepted reliance on COMPAS in sentencing decisions.</span></p>
<p><span style="font-weight: 400;">In the UK, worries have also been expressed about AI&#8217;s capacity to reproduce and even worsen existing disparities in sentencing. Civil rights organisations have warned that unjust algorithmic outcomes demand greater scrutiny and societal accountability.</span></p>
<h2><b>Accountability and Transparency</b></h2>
<p><span style="font-weight: 400;">The discussion about the use of AI in sentencing highlights the need for transparency and accountability. Defendants and their counsel often have no access to the algorithms and data that determine risk scores, making it next to impossible to challenge those assessments. This lack of information raises procedural due process concerns: a person must be given a reasonable opportunity to contest decisions that affect their rights.</span></p>
<p><span style="font-weight: 400;">The courts have begun to respond to these concerns. In United States v. Molen (2013), the court held that the government was obligated to provide information about how forensic software was constructed, reasoning that such technological evidence cannot remain opaque. The same reasoning should apply to AI sentencing tools. Critics argue that sentencing algorithms and the data used to train them must be disclosed and subjected to independent assessment to guard against bias and discrimination.</span></p>
<p><span style="font-weight: 400;">Intellectual property rights add another layer of opacity to these already closed AI systems. Developers often shield their algorithms behind trade secret claims, preventing detailed examination. This conflict between proprietary claims and the justice system&#8217;s need for transparency remains unresolved, presenting numerous obstacles to accountability.</span></p>
<h2><b>Judicial Oversight and Discretion</b></h2>
<p><span style="font-weight: 400;">The integration of AI in sentencing raises questions about the role of judicial discretion. While AI can provide valuable insights, over-reliance on these tools risks undermining the judiciary’s authority and responsibility to evaluate each case individually. Judicial discretion is a cornerstone of criminal justice, allowing judges to consider unique circumstances and exercise empathy. The mechanization of sentencing decisions, driven by AI, could lead to a one-size-fits-all approach, which conflicts with the principle of individualized justice.</span></p>
<p><span style="font-weight: 400;">To address this issue, courts and policymakers must strike a balance between leveraging AI’s capabilities and preserving judicial discretion. Jurisdictions like Canada have emphasized the importance of maintaining judicial independence in the face of technological advancements. In the case of </span><i><span style="font-weight: 400;">R v. Nur</span></i><span style="font-weight: 400;"> (2015), the Canadian Supreme Court highlighted the need for proportionality in sentencing, which AI alone cannot guarantee.</span></p>
<h2><b>Ethical and Privacy Concerns</b></h2>
<p><span style="font-weight: 400;">To produce risk evaluations, AI technologies tend to depend on highly sensitive personally identifiable information. This dependence creates ethical dilemmas and privacy risks. Data collection is subject to various privacy laws and ethical guidelines designed to ensure that individuals are not subjected to unnecessary surveillance or misuse of their details.</span></p>
<p><span style="font-weight: 400;">The GDPR&#8217;s data protection principles, such as purpose limitation and data minimization, offer strong privacy protection in the use of AI. In the United States, privacy issues are handled by a mix of state and federal legislation, together with the Fourth Amendment&#8217;s protection against unreasonable searches and seizures. In Carpenter v. United States (2018), these protections were extended to cover digital data, with important implications for AI systems in the criminal justice domain.</span></p>
<p><span style="font-weight: 400;">There are other ethical concerns besides privacy. Some critics maintain that allowing AI to determine sentencing disrespects human beings by reducing them to mere numbers and statistics. This concern is part of the broader issue of respecting individual autonomy and fundamental human rights.</span></p>
<h2><b>International Perspectives on AI in Criminal Sentencing</b></h2>
<p><span style="font-weight: 400;">Different nations have taken different steps towards regulating the use of AI in their criminal justice systems. The Sentencing Council in the United Kingdom has urged caution in the implementation of AI tools, stressing that human oversight is imperative and that such systems must be validated. In China, by contrast, AI assumes a more active role in the judiciary, with &#8220;Smart Court&#8221; platforms that aid judges in drafting decisions. This raises concerns about possible over-dependence and shrinking accountability.</span></p>
<p><span style="font-weight: 400;">These divergent approaches underscore the need for greater international collaboration on the shared challenge of AI in sentencing. United Nations reports describing an AI &#8220;arms race&#8221; have called for parameters that govern and contain the use of AI so that basic human rights and the rule of law are not violated. Such efforts reflect the acknowledged risks and the attention AI requires.</span></p>
<h2><b>Future Directions and Legal Reforms</b></h2>
<p><span style="font-weight: 400;">To resolve the legal issues surrounding AI and criminal sentencing, a number of reforms are needed. First, AI systems must be subject to an appropriate level of scrutiny: legislatures and courts should require the disclosure of algorithms and training data. Second, bias-mitigation audits and assessments should be conducted routinely. Third, policies should constrain the discretion AI may exercise over sentences so that the judge&#8217;s powers always remain the overriding factor.</span></p>
<p><span style="font-weight: 400;">Furthermore, judges and other legal practitioners need training in AI so that they understand the practical workings of the tools in question. This understanding will enable them to analyze the outputs of those systems in detail.</span></p>
<p><span style="font-weight: 400;">In addition, public participation is equally important. The design and use of AI technologies in the criminal justice system should be reviewed by constituencies such as civil society organizations, technologists, and communities subject to systemic marginalization, to foster inclusion. Such collaboration can go a long way towards ensuring AI that fulfils the requirements of equity and justice.</span></p>
<h2><b>Conclusion: Ensuring Fairness in AI-Assisted Sentencing</b></h2>
<p><span style="font-weight: 400;">The integration of AI in criminal sentencing presents both opportunities and challenges. While these tools have the potential to enhance efficiency and consistency, they also raise significant legal and ethical concerns. Issues such as bias, transparency, accountability, and judicial discretion must be carefully addressed to ensure that AI complements rather than undermines the justice system. Through thoughtful regulation, international cooperation, and ongoing legal reforms, it is possible to harness the benefits of AI while safeguarding the principles of fairness and due process. As the legal landscape evolves, it is imperative to prioritize human rights and the rule of law in the adoption of AI-driven technologies in criminal justice.</span></p>
<p>The post <a href="https://bhattandjoshiassociates.com/legal-challenges-of-ai-in-criminal-sentencing/">Legal Challenges of AI in Criminal Sentencing</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence and International Law: Ethical and Legal Implications</title>
		<link>https://bhattandjoshiassociates.com/artificial-intelligence-and-international-law-ethical-and-legal-implications/</link>
		
		<dc:creator><![CDATA[Komal Ahuja]]></dc:creator>
		<pubDate>Mon, 10 Feb 2025 10:35:39 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[International Law]]></category>
		<category><![CDATA[Technology Ethics and Policy]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Accountability]]></category>
		<category><![CDATA[AI and Law]]></category>
		<category><![CDATA[AI Ethics]]></category>
		<category><![CDATA[AI Policy]]></category>
		<category><![CDATA[AI Regulation]]></category>
		<category><![CDATA[AI Surveillance]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Autonomous Weapons]]></category>
		<category><![CDATA[Data Privacy]]></category>
		<category><![CDATA[Digital Governance]]></category>
		<category><![CDATA[Ethical AI]]></category>
		<category><![CDATA[Global AI Governance]]></category>
		<category><![CDATA[Human Rights]]></category>
		<guid isPermaLink="false">https://bhattandjoshiassociates.com/?p=24317</guid>

					<description><![CDATA[<p>Introduction Artificial intelligence (AI) has emerged as a transformative technology, influencing every aspect of modern life, from healthcare and finance to military and governance. While its benefits are undeniable, AI also poses significant ethical and legal challenges, particularly in the realm of international law. The development and deployment of AI technologies across borders raise questions [&#8230;]</p>
<p>The post <a href="https://bhattandjoshiassociates.com/artificial-intelligence-and-international-law-ethical-and-legal-implications/">Artificial Intelligence and International Law: Ethical and Legal Implications</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><img loading="lazy" decoding="async" class="alignright size-full wp-image-24318" src="https://bj-m.s3.ap-south-1.amazonaws.com/p/2025/02/artificial-intelligence-and-international-law-ethical-and-legal-implications.png" alt="Artificial Intelligence and International Law: Ethical and Legal Implications" width="1200" height="628" /></h2>
<h2><strong>Introduction</strong></h2>
<p><span style="font-weight: 400;">Artificial intelligence (AI) has emerged as a transformative technology, influencing every aspect of modern life, from healthcare and finance to military and governance. While its benefits are undeniable, AI also poses significant ethical and legal challenges, particularly in the realm of international law. The development and deployment of AI technologies across borders raise questions about accountability, fairness, and compliance with international legal norms. This article explores the intersection of artificial intelligence and international law, focusing on ethical concerns, regulatory efforts, and the need for a coherent global framework.</span></p>
<h2><b>The Rise of Artificial Intelligence</b></h2>
<p><span style="font-weight: 400;">AI refers to the simulation of human intelligence by machines, enabling them to perform tasks such as decision-making, problem-solving, and pattern recognition. Recent advances in machine learning, neural networks, and natural language processing have accelerated AI’s integration into critical domains. Autonomous weapons systems, predictive policing algorithms, and facial recognition technologies exemplify AI’s far-reaching applications.</span></p>
<p><span style="font-weight: 400;">However, these advancements also raise concerns about misuse, discrimination, and the erosion of privacy. In the context of international law, AI’s deployment in areas such as warfare, border control, and global governance highlights the urgent need for ethical and legal oversight.</span></p>
<h2><b>Ethical Concerns in AI Deployment</b></h2>
<p><span style="font-weight: 400;">The ethical challenges associated with AI are multifaceted, often involving conflicts between innovation and fundamental rights. Key concerns include:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>Bias and Discrimination:</b><span style="font-weight: 400;"> AI systems often reflect the biases present in their training data, leading to discriminatory outcomes. This issue is particularly concerning in areas such as criminal justice, immigration, and employment, where biased algorithms can perpetuate systemic inequalities.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Accountability and Transparency:</b><span style="font-weight: 400;"> The complexity of AI systems makes it difficult to determine responsibility for their actions. This lack of transparency, often referred to as the &#8220;black box&#8221; problem, complicates efforts to ensure accountability under international law.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Autonomous Weapons and Warfare:</b><span style="font-weight: 400;"> The development of lethal autonomous weapons systems (LAWS) raises ethical questions about the delegation of life-and-death decisions to machines. Such systems challenge the principles of proportionality, distinction, and accountability under international humanitarian law.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Privacy and Surveillance:</b><span style="font-weight: 400;"> AI-powered surveillance technologies, including facial recognition and predictive policing, often infringe on individuals’ privacy and freedom. These practices may violate international human rights norms, such as those enshrined in the Universal Declaration of Human Rights (UDHR).</span></li>
</ol>
<h2><b>International Legal Frameworks and Artificial Intelligence</b></h2>
<p><span style="font-weight: 400;">The regulation of AI at the international level remains fragmented and nascent. While existing legal frameworks provide a basis for addressing some AI-related issues, they are often inadequate for the complexities of this rapidly evolving technology. Key legal instruments include:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>International Humanitarian Law (IHL):</b><span style="font-weight: 400;"> IHL governs the conduct of armed conflicts, including the use of new technologies. The principles of distinction, proportionality, and necessity must be upheld in the deployment of AI-powered weapons. However, the applicability of IHL to autonomous systems remains a subject of debate.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Universal Declaration of Human Rights (UDHR):</b><span style="font-weight: 400;"> AI technologies must comply with human rights norms, including the right to privacy, freedom of expression, and protection from discrimination. The UDHR provides a foundational framework for evaluating AI’s impact on human rights.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>General Data Protection Regulation (GDPR):</b><span style="font-weight: 400;"> While a regional framework, the EU’s GDPR has global implications for AI development. It establishes strict rules for data processing, consent, and accountability, offering a model for regulating AI’s use of personal data.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>United Nations Initiatives:</b><span style="font-weight: 400;"> The UN has initiated discussions on the ethical and legal implications of AI, emphasizing the need for inclusive and transparent governance. The establishment of the High-Level Panel on Digital Cooperation and UNESCO’s Recommendation on the Ethics of AI are notable steps in this direction.</span></li>
</ol>
<h2><b>Challenges in Regulating AI</b></h2>
<p><span style="font-weight: 400;">Several challenges hinder the development of comprehensive international legal frameworks for AI:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>Rapid Technological Advancement:</b><span style="font-weight: 400;"> The pace of AI innovation outstrips the ability of legal systems to adapt, creating regulatory gaps and uncertainties.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Divergent National Priorities:</b><span style="font-weight: 400;"> States have varying approaches to AI regulation, reflecting their economic, political, and cultural contexts. Achieving consensus on global standards is a significant challenge.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Dual-Use Nature of AI:</b><span style="font-weight: 400;"> AI technologies often have both civilian and military applications, complicating efforts to regulate their use without stifling innovation.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Enforcement and Compliance:</b><span style="font-weight: 400;"> Ensuring adherence to international norms in the AI domain requires robust monitoring and enforcement mechanisms, which are currently lacking.</span></li>
</ol>
<h2><b>The Path Forward: Toward a Global AI Governance Framework</b></h2>
<p><span style="font-weight: 400;">Addressing the ethical and legal implications of AI requires a coordinated international effort. Key recommendations include:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>Developing Binding Agreements:</b><span style="font-weight: 400;"> States should negotiate binding international treaties to govern the development and deployment of AI, particularly in sensitive areas such as autonomous weapons and surveillance technologies.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Promoting Ethical Guidelines:</b><span style="font-weight: 400;"> International organizations should establish ethical guidelines for AI, emphasizing fairness, accountability, and respect for human rights. These guidelines can serve as a basis for national and regional regulations.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Strengthening Multilateral Cooperation:</b><span style="font-weight: 400;"> Multilateral forums, such as the United Nations and the G20, should prioritize AI governance and facilitate dialogue among stakeholders, including governments, industry, and civil society.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Investing in Research and Capacity Building:</b><span style="font-weight: 400;"> International efforts should focus on research and capacity building to address the ethical, technical, and legal challenges of AI. This includes fostering cross-border collaboration and sharing best practices.</span></li>
</ol>
<h2><strong>Conclusion: Regulating Artificial Intelligence in International Law</strong></h2>
<p><span style="font-weight: 400;">Artificial intelligence holds immense potential to drive progress and innovation, but its ethical and legal implications demand careful scrutiny. The intersection of artificial intelligence and international law presents both challenges and opportunities, requiring a balanced approach that upholds fundamental rights while enabling technological advancement. By fostering global cooperation and developing robust governance frameworks, the international community can ensure that AI serves the collective good and aligns with the principles of justice and equity.</span></p>
<p>The post <a href="https://bhattandjoshiassociates.com/artificial-intelligence-and-international-law-ethical-and-legal-implications/">Artificial Intelligence and International Law: Ethical and Legal Implications</a> appeared first on <a href="https://bhattandjoshiassociates.com">Bhatt &amp; Joshi Associates</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
