Ethical AI Adoption in the Indian Judiciary: Strengthening Access to Justice
Credits: Jamillah Knowles & Digit / Better Images of AI, edited by Harshita Kesarwani
Project / Jul 2025 - Feb 2026

Partners
DAKSH / UNDP

Key Highlights

  1. AI is increasingly seen as a way to improve efficiency and time management in court processes. For instance, the use of AI-enabled translation and transcription tools can speed up the transcription of depositions and arguments and help compensate for staffing gaps. Moreover, the use of GenAI tools like ChatGPT for tasks such as document drafting is indicative of the ubiquity and ease of access to this technology.
  2. However, current adoption trajectories are not informed by adequate consideration of AI’s ethical risks and harms, or its technical limitations. Judges and policymakers do not perceive AI as a replacement for human decision-making but rather as a means to augment existing processes, and have, in general, called for a cautious approach. At the same time, they may be under-equipped to recognise these limitations.
  3. Guiding frameworks that help courts select the best available AI tool and ensure adequate data protection can streamline AI acquisition. They can support courts in technical evaluation, risk assessment, and risk mitigation, thereby ensuring the quality of AI tools. Further, AI adoption guidelines serve as a way for public institutions that uphold the law to integrate AI responsibly. Such guidelines offer pathways to establish a systematic review and evaluation process that supports safer, more accountable, and responsible AI for courts. They can also help courts better evaluate vendors and products, and enable the establishment of post-contracting management systems to ensure the sustainable use of technology.

Why This Project

Across India’s courts, there is a growing appetite for AI adoption. From translation and transcription tools to AI-enabled legal research, case management, and even scheduling, courts are increasingly seeing the value of these technologies. While adoption has largely been cautious, it has also been fragmented, often led by individual judges or through pilots, with no clear framework to guide long-term use.

This is where formal processes, such as procurement, become critical. AI holds promise, but also serious risks when deployed at scale in public institutions. Experiences from other jurisdictions, like errors in automated alimony calculations in the UK, highlight the dangers of unchecked implementation. For courts, where justice must not only be done, but be seen to be done, responsible pathways for AI adoption are essential.

At present, procurement in the judicial system is ad hoc. There are no clear rules for evaluating AI solutions, no safeguards to ensure vendors deliver on their promises, and little foresight about requirements for their continued use once pilot projects end. Questions around infrastructure costs, technical capacity, and the protection of sensitive judicial data remain unresolved. Institutional capacity for formal procurement processes is limited, as is the range of providers offering digital tools for courts. In such a scenario, developing voluntary guidelines as a first step towards institutionalising AI adoption processes can strengthen:

  • Principled Adoption: Establish guiding frameworks that can be applied consistently across courts, different stages and use cases of AI in the judiciary.
  • Risk Awareness and Mitigation: Provide courts with practical tools to assess and address ethical, technical, and sustainability risks associated with AI.
  • Ease for Procurers: Streamline procurement processes to reduce burdens on already overstretched courts while enabling informed, efficient decision-making.
  • Responsible Use of Public Finance: Ensure that investments in AI are accountable, transparent, and in the public interest.
  • Transparency and Accountability: Build trust in how AI is deployed within the judicial system by setting standards that protect rights and safeguard sensitive judicial data.

In the long term, it is essential to build standardised processes, such as public procurement, as tools of accountability and trust, helping courts adopt AI in ways that are responsible, sustainable, and aligned with the public interest.

Our Work

Using a combination of research and expert consultation, we developed a landscape report of existing AI adoption processes in courts. The report proposes a rights- and risk-informed approach to streamlining AI integration in the Indian legal context, underlining the role of case types, stages, and litigant types in determining risks and potential harms. To operationalise this approach, we developed four tools:

  • Institutional readiness: A set of questions to help a court decide whether its current human resources, infrastructure, and financial capacity are adequate for the successful design, deployment, and monitoring of AI tools.
  • Risk assessment: A mechanism for courts to identify the potential risks of AI use in judicial processes. The assessment will help courts determine whether or not to proceed with deploying AI for the intended purpose and, if so, with what safeguards.
  • Technical Assessment: A detailed vendor assessment questionnaire that systematically examines vendor credentials, technical capabilities, data governance practices, transparency measures, safety protocols, and accountability mechanisms.
  • Ongoing assessment: Questions for the court and vendor to continuously monitor impact and determine the tool’s success metrics once adopted.

Stay tuned for the tools!

Outputs

Ethical AI Adoption in the Indian Judiciary: Strengthening Access to Justice Through Responsible Procurement and Governance

Jointly conducted by DAKSH and Digital Futures Lab, with support from the United Nations Development Programme (UNDP), this project on Ethical AI Adoption in the Indian Judiciary seeks to develop best practices and guidelines for the ethical and robust procurement of AI by courts in India.

Through this project, we seek to develop clear usage guidelines as a way to ensure that the integration of AI in courts adheres to natural justice principles and advances public interest.

Team


Project Lead: Urvashi Aneja

Project Coordination and Researcher: Dona Mathew

Communications: Shivranjana Rathore