Justice in The Time of Algorithms
Credits: Jamillah Knowles & Digit (Better Images of AI)
Apr 2026

Dona Mathew and Leah Verghese

The Gujarat High Court recently published its Artificial Intelligence (AI) policy, establishing guiding principles and categorising permitted and prohibited use cases. Together with the Supreme Court's White Paper on AI and the Kerala High Court's policy, it will shape how courts in India integrate AI into judicial workflows.

While the policy is commendable in its intent, it requires further elaboration to operationalise its central concern: that human judgment in judicial decision-making must remain anchored in conscience, reason, law, and constitutional values, and that these foundational aspects of justice delivery cannot be delegated to automated systems. How can courts be better prepared to manage risks from a technology that is in a perpetual state of improvement?

First, courts will need to determine which model is best suited for a particular application, weighing its benefits and establishing red lines for risk tolerance. Large Language Models (LLMs) are highly probabilistic, with opaque internal reasoning. With the emergence of 'AI wrappers' built on commercial LLMs, courts increasingly face the task of selecting the best wrapper, which diminishes their ability to audit and trace data sources and training parameters. For instance, if an LLM is not trained on Indian case law and Indian pleadings, its outputs will be inapplicable to Indian judicial contexts. Its outputs may also reflect biases embedded in the general data the model is trained on. A 2025 MIT Technology Review article found that caste bias is rampant in products like ChatGPT, a worrying finding in a country where inequality and discrimination are lived realities for many.

Further, the policy prohibits AI use for order drafting or sentencing, while permitting deployment for functions such as administration, research and support in drafting. While practical in approach, courts must note that the risks associated with each permissible application will vary depending on the tool, the type of litigant, the case stage and the type of case. The risks of using a rule-based automated scheduling system are vastly different from those of using LLMs to improve the language of a judgment.
For example, studies show that using LLMs to improve language and tone can lead to loss of substance or meaning, which would have far-reaching consequences in judicial settings.

Data privacy is a key concern in the policy, but its intensity will vary with context: a child custody case versus a traffic violation, or scrutiny at the filing stage versus pleadings. Courts must determine their comfort level in sharing sensitive documents, such as pleadings, with AI vendors. Mechanisms to assess data for representativeness, mitigate bias, and ensure the availability of digitised, error- and gap-free data from which to extract metadata are critical to building reliable tools. Elsewhere, courts are developing models that aim to leverage the benefits of AI while safeguarding against data breaches. For example, the UK judiciary's use of Microsoft Copilot is confined to the judiciary's secure environment, maintaining data privacy and the integrity of judicial information.

Furthermore, for courts open to AI adoption, who will decide which tools to use and ensure they function as promised? Institutional scaffolding is essential. AI committees in the High Courts and the Supreme Court should be responsible for the full AI lifecycle, from use-case conceptualisation to performance monitoring. Such committees should be multidisciplinary, combining the expertise of judges with that of technologists, data protection experts, AI ethicists and domain experts.

These committees should be supported by a skilled technology cadre with defined career progression pathways and periodic opportunities for upskilling. Grievance redress mechanisms are also important to enable reporting and recourse in cases of rights violations.
This must be supported by clear communication to litigants and lawyers about the nature of the AI tool, its purpose and stage of use, with an option to opt out.

Given the emergent nature of AI, it is essential that its users within the judiciary develop a clear understanding of the technology. Strengthening the capacity of judges and court staff is a prerequisite for adoption. This should cover foundational technical knowledge of AI, awareness of ethical risks and constitutional rights implications, and practical guidance on the boundaries of its use.

For the judiciary, AI integration promises efficiency and time savings. However, AI tools are rarely plug-and-play solutions. Their performance depends on training datasets, model parameters and the humans who use them, reflecting real-world biases and limitations. Because the integration of technology, and the risk of harm it carries, can erode trust in the system, it is the duty of the court to uphold the tenet that justice must not only be done, but must be seen to be done.

Originally published in the Deccan Herald.