AI-Hallucinated Citations in Indian Courts: The Emerging Professional Liability of Advocates
Introduction
Notable Judicial Encounters with AI-Generated Hallucinated Citations in Indian Courts
The Delhi High Court case of Greenopolis Welfare Association v. Narender Singh and Ors. [1] in 2025 marked one of the first documented instances of AI-generated fabricated citations being presented before an Indian court. Justice Girish Kathpalia noted that several judicial precedents cited by the petitioner did not exist at all, while in others the quoted passages were entirely fabricated. The petition was dismissed as withdrawn, with the court explicitly recognizing the use of AI-generated content without proper verification.
More significantly, the Supreme Court encountered this issue in Deepak Raheja v. Omkara Assets Reconstruction Private Limited [2] in November 2024. Senior Advocate Neeraj Kishan Kaul brought to the attention of Justices Dipankar Datta and Augustine George Masih that a rejoinder contained over one hundred citations to non-existent cases. The allegations included criminal law judgments misrepresented as insolvency precedents, cases with fabricated facts, and identical judgments cited for multiple unrelated propositions. Senior Advocate C.A. Sundaram expressed profound embarrassment and submitted an unconditional apology. The Supreme Court cautioned that it would hold the appellant accountable if citations proved fabricated.
The Income Tax Appellate Tribunal in Bengaluru took the extraordinary step of recalling its own order after discovering AI-generated fabrications in the case laws cited before it. This action underscores the tangible impact of AI-hallucinated citations on judicial decision-making. The Punjab and Haryana High Court reprimanded advocates for using AI tools during live hearings, warning that artificial intelligence cannot replace actual intelligence.
The Legal Framework Governing Professional Conduct
The professional conduct of advocates in India is primarily governed by the Advocates Act, 1961 [3], which provides the foundational statutory framework for regulating the legal profession. Section 35 establishes disciplinary mechanisms for professional misconduct. Under Section 35(1), where a State Bar Council has reason to believe that any advocate has been guilty of professional or other misconduct, it shall refer the case for disposal to its disciplinary committee. The committee has authority under Section 35(3) to dismiss the complaint, reprimand the advocate, suspend the advocate from practice, or remove the advocate’s name from the State roll.
Section 49(1)(c) empowers the Bar Council of India to make rules laying down standards of professional conduct and etiquette. These rules impose upon advocates the fundamental duty to maintain the dignity of the profession, act with utmost good faith towards clients, and represent clients fearlessly while maintaining truth and justice. The submission of fabricated citations constitutes professional misconduct as it violates the advocate’s duty to the court not to knowingly make false statements or conceal material facts. Beyond professional misconduct proceedings, advocates may face consequences under the Bharatiya Nyaya Sanhita, 2023 [4] for dishonestly presenting false claims, and such conduct may constitute contempt of court.
Supreme Court Initiatives and Institutional Responses
The Supreme Court reconstituted its Artificial Intelligence Committee under Justice P.S. Narasimha [5], tasked with ensuring ethical AI adoption. The Court released a White Paper on Artificial Intelligence and the Judiciary [6] in 2024, recognizing AI as crucial for addressing India’s judicial backlog while emphasizing that AI is intended to support, not replace, human judgment. The White Paper warns against AI hallucinations and premature adoption that could compromise judicial integrity.
The Supreme Court developed indigenous AI tools including SUPACE for case assistance, SUVAS and PANINI for translation, TERES for transcription, and LegRAA, an in-house tool trained exclusively on Indian case law. Chief Justice B.R. Gavai has issued multiple warnings about AI-generated fake citations. Justice Vikram Nath observed that while AI may expedite justice processes, only human intelligence can deliver its essence, as AI cannot understand victim experiences or navigate complex social situations requiring nuanced judgment.
Regulatory Approaches and State-Level Policies
The Kerala High Court drafted a policy effectively banning the use of artificial intelligence in judicial reasoning [7], representing one of the most restrictive approaches among Indian High Courts. The policy warns that advocates or judges who present fabricated citations could face contempt proceedings. The Court emphasized that advocates must adhere to the Bar Council’s code of ethics requiring honesty, truthfulness, and verification of all authorities cited. The policy makes clear that ignorance of AI’s drawbacks will not constitute a valid defense for professional misconduct.
Consumer Protection Laws and Professional Accountability
The Supreme Court addressed whether advocates can be held liable under consumer protection laws in Bar of Indian Lawyers v. D.K. Gandhi [8] (May 2024). Justices Bela M. Trivedi and Pankaj Mithal held that advocates cannot be held liable for deficiency of services under the Consumer Protection Act, 2019, as legal services constitute a ‘contract of personal service’ excluded from the Act’s purview. The legal profession is sui generis, involving fiduciary duties based on trust and confidence, fundamentally different from typical consumer-service relationships.
While this shields advocates from consumer protection liability, professional accountability remains intact. Advocates remain answerable to Bar Councils, civil courts, and potentially criminal prosecution. Clients affected by hallucinated citations can approach State Bar Councils for professional misconduct complaints, file civil negligence claims, or pursue remedies if fabricated citations influenced case outcomes. Professional responsibility cannot be delegated to artificial intelligence.
Consequences and Enforcement Mechanisms
Consequences for advocates who submit AI-generated fabricated citations range from reprimands to suspension or removal from the roll of advocates. Courts have begun imposing wasted costs and indemnity costs to penalize reckless reliance on artificial intelligence tools. The principle is clear: the lawyer of record remains solely responsible for the content of submissions, even where research assistance is technology-driven. Professional duty requires independent verification of all cited authorities, and failure to verify constitutes a breach of professional conduct. Liability does not depend on whether the error arose from artificial intelligence or manual research.
Internationally, courts have imposed significant penalties. In September 2025, a California appellate court fined attorney Amir Mostafavi $10,000 for submitting a brief with 21 fabricated ChatGPT-generated citations. The U.S. case of Mata v. Avianca (2023) [9] established that ignorance of AI limitations is not a defense, with lawyers sanctioned for citing fictitious cases.
Future Directions and Recommendations
The legal profession must adapt to AI prevalence while maintaining robust verification protocols. Several measures are necessary. First, standardization of protocols across High Courts through Supreme Court-issued AI operating guidelines. Second, enforcement evolution from apologies to financial penalties and, in serious cases, suspension or disbarment. Third, mandatory disclosure requirements where advocates certify manual verification of all citations against official databases.
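The third measure could be partly automated. The following is a minimal, purely illustrative sketch: the citation list and the "verified" set below are hypothetical stand-ins, since real systems would need to query official databases (whose interfaces are not modeled here), and any unmatched citation would still require manual verification by the advocate.

```python
# Illustrative only: flag any cited authority not found in a verified set.
# VERIFIED_CITATIONS is a hypothetical stand-in for an official database.
VERIFIED_CITATIONS = {
    "(2024) 5 SCC 1",
    "(2023) 2 SCC 345",
}

def unverified_citations(cited: list[str]) -> list[str]:
    """Return citations that could not be matched against the verified set."""
    return [c for c in cited if c not in VERIFIED_CITATIONS]

# The second citation below is deliberately fictitious.
brief_citations = ["(2024) 5 SCC 1", "(2020) 9 SCC 999"]
flagged = unverified_citations(brief_citations)
print(flagged)  # the fictitious citation is flagged for manual checking
```

A tool of this kind can only flag candidates for review; it cannot substitute for the advocate's own certification.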
AI literacy must become integral to legal education and bar preparation. Law schools and the Bar Council should incorporate training on AI tools’ merits and limitations. Courts and law firms should develop AI models trained on verifiable legal databases, reducing reliance on general-purpose chatbots prone to hallucinations. Technological solutions include court-endorsed digital authentication protocols such as QR-coded judgments, cryptographic hash stamps, or official authenticity layers ensuring only verified citations circulate. Development of explainable AI models that reference sources would align with courts’ transparency requirements.
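The cryptographic hash-stamp idea mentioned above could, in principle, work along these lines. Everything in this sketch is hypothetical: the registry, the citation identifiers, and the workflow are assumptions for illustration, not any court's actual system.

```python
import hashlib

# Hypothetical registry mapping a citation identifier to the SHA-256
# digest of the official judgment text, as a court might publish it.
OFFICIAL_REGISTRY = {
    "2023/SC/0001": hashlib.sha256(b"official judgment text").hexdigest(),
}

def verify_judgment(citation: str, document_text: bytes) -> bool:
    """Return True only if the document's hash matches the court's record."""
    recorded = OFFICIAL_REGISTRY.get(citation)
    if recorded is None:
        return False  # citation not in the registry: treat as unverified
    return hashlib.sha256(document_text).hexdigest() == recorded

# A genuine copy verifies; an altered or fabricated one does not.
print(verify_judgment("2023/SC/0001", b"official judgment text"))  # True
print(verify_judgment("2023/SC/0001", b"hallucinated judgment"))   # False
print(verify_judgment("2099/SC/9999", b"anything"))                # False
```

The design point is that authenticity derives from the court's own published record, so a hallucinated judgment can never produce a matching digest.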
Conclusion
The emergence of AI-hallucinated citations in Indian courts represents a pivotal moment in the intersection of technology and legal practice. Cases before the Supreme Court, Delhi High Court, and tribunals demonstrate this is a present reality requiring immediate attention. The Advocates Act, 1961, provides adequate mechanisms to address professional misconduct, though the challenge lies in effective enforcement and creating awareness about AI limitations.
The Supreme Court’s initiatives, including AI Committee reconstitution and the White Paper release, signal institutional recognition of both opportunities and dangers. Indigenous AI tools like SUPACE, SUVAS, and LegRAA demonstrate commitment to harnessing technology while maintaining judicial integrity. The Kerala High Court’s restrictive policy reflects legitimate concerns about AI-generated content reliability.
While the Bar of Indian Lawyers decision shields advocates from consumer protection liability, professional accountability remains undiminished. Advocates remain answerable to Bar Councils, civil courts, and potentially criminal prosecution. Professional responsibility cannot be delegated to artificial intelligence. Advocates must exercise independent judgment, verify all citations, and maintain the highest standards regardless of technological tools employed. As AI evolves, the legal profession must balance technological efficiency with fundamental principles of justice, truth, and professional ethics. Technology can expedite research, but human intelligence, judgment, and ethical commitment remain irreplaceable in delivering justice.
References
[1] Greenopolis Welfare Association v. Narender Singh and Ors., Delhi High Court (2025). https://www.barandbench.com/news/litigation/delhi-high-court-allows-plea-to-be-withdrawn-after-petitioner-cites-fake-ai-generated-case-laws
[2] Deepak Raheja v. Omkara Assets Reconstruction Private Limited, Supreme Court of India (2024). https://www.barandbench.com/news/litigation/supreme-court-to-examine-claim-that-imaginary-ai-generated-case-laws-were-cited-in-pleadings
[3] The Advocates Act, 1961. https://www.indiacode.nic.in/bitstream/123456789/15341/1/advocate_1961.pdf
[4] Bharatiya Nyaya Sanhita, 2023. https://www.indiacode.nic.in/
[5] Supreme Court AI Committee reconstitution. https://indialegallive.com/cover-story-articles/il-feature-news/ai-inside-courtroom-supreme-court/
[6] White Paper on Artificial Intelligence and the Judiciary, Supreme Court of India (2024). https://acuitylaw.co.in/integrating-intelligence-the-courts-evolving-engagement-with-ai/
[7] Kerala High Court AI Policy. https://lawjurist.com/index.php/2025/11/04/drawing-boundaries-in-ai-honorable-kerala-high-courts-lesson-on-ai-hallucination-and-fake-citations/
[8] Bar of Indian Lawyers v. D.K. Gandhi, Supreme Court of India (2024). https://www.livelaw.in/articles/addressing-applicability-consumer-protection-act-advocates-263037
[9] Mata v. Avianca (2023), United States District Court. https://analyticsindiamag.com/ai-features/indias-new-courtroom-menace-judgments-that-never-existed/