Moderated by / Jhalak Kakkar (CCG, NLU Delhi)
Panel / Advocate N. S. Nappinai, Prateek Sibal, Dr. Sudhir Krishnaswamy and Usha Ramanathan
Since 2019, the conversation on adopting Artificial Intelligence (AI) for automating court processes and legal research has picked up following the introduction of the legal document translation software SUVAS. Organisations and academic institutions are developing new tools for case analysis and case outcome prediction. The draft phase 3 vision document for the eCourts project also envisions a future where repetitive and mechanical tasks are automated, significantly reducing pendency and delays in the justice system. While AI in courts is at a nascent stage in India, this is an opportune time to discuss how automation will change the justice system and to foresee challenges it may pose to individuals, communities and legal institutions.
Digital Futures Lab’s 7th Tech and Society Dialogue on artificial intelligence in Indian courts also doubled as a launch for our latest report, ‘Smart Automation and AI in India’s Judicial System: A Case of Organised Irresponsibility?’ - a comprehensive landscape report mapping out the actors and narratives driving the development and incorporation of AI in the Indian judiciary.
Dona Mathew, the Research Associate involved in the project, opened the session with a comprehensive presentation on the report. You can read some key takeaways and the report itself here.
Some key points that came up in the Q&A following the presentation:
When asked if the push for AI in courts is led by demands from the judiciary or by software vendors pitching their platforms in an effort to create a market for AI solutions, Dona and Urvashi answered that it worked both ways. The judiciary puts out certain demands but software companies come in looking to implement new business models in the space. There is a policy push towards the use of tech and incorporating principles of efficiency, even if the judiciary may be slightly hesitant. One of the major issues is that the judiciary does not have the capacity to assess whether the proposed tech solution matches the problem.
Urvashi and Dona also highlighted that there are lots of AI-related harms seen in other sectors that could translate to the judicial sector. There’s a lot of opacity on how these systems are being used; their analysis of the potential harms also looks at lessons from other jurisdictions.
They then went into more detail on the oversimplification of the apparent problems in the judicial system that are deemed to need fixing through the incorporation of AI. The judiciary frames the case backlog as an issue of supply and demand, and tech is seen as a solution to improve efficiency and reduce pendency. However, there is not enough analysis of the underlying issues causing this pendency. Simply improving case management systems may not solve the problem.
The discussion of the report then segued into the panel discussion with Jhalak Kakkar (moderator - CCG, NLU Delhi), Advocate N. S. Nappinai (Lawyer, Cyber Saathi), Prateek Sibal (UNESCO), Dr. Sudhir Krishnaswamy (NLSIU, Bengaluru and Oversight Board) and Usha Ramanathan (independent law researcher). Some key (paraphrased) insights from each panellist are listed below:
Prateek Sibal: AI systems are not infallible, and this is an important point to drive home. AI can be used for assistance; however, ethical assessments are needed. It is a script written by someone. There are also issues with regard to digital literacy and access; these systems should be designed with people in focus. In countries like Brazil, certain appeal processes have become faster through the use of tech; however, in Colombia, there have been issues with transparency.
Usha Ramanathan: The Supreme Court gave us an incredible judgment on privacy, however, [they] will not apply it to their own thinking. There only seems to be interest from private players - trying to adapt existing technology for persistent and systemic problems.
It’s the people who don’t have adequate digital literacy who will be suffering the negative consequences of tech in courts. We’re adding one more potential area for error and oppression.
Private companies can afford to make errors; they’re not responsible for fundamental rights. The whole point of the judiciary is that it hears everyone and doesn’t become a part of the system. Digitisation in itself is not a problem; however, digitisation is the first step to centralisation.
The push towards digitisation and AI use is a result of strategic policy decisions from the government, as opposed to a clear assessment of necessity from the judiciary.
Sudhir Krishnaswamy: It’s not necessarily the case that the real trade-off is between efficiency and efficacy, or between justice and democratic values. The costs that we pay are high and are visible in the form of vigilantism. There is no moral counterforce; we will always have disputes. We need to recognise the foundational problems first. Is the solution AI? It took nearly a decade just to produce type-written judgments. AI is a significant tech shift and should be treated as such.
AI is genuinely transformative tech, as if almost designed specifically for law: search function, text-to-speech and speech-to-text and text generation. It’s going to change the legal system. It’s going to happen either publicly or privately, but it’s going to happen. The risk is we don’t have an empowered, humanised version of this tech at scale. What kind of rollout is going to take place? What guardrails will we build around this tech?
Doing the adoption right will be critical. Machine models doing adjudication are far away. The erosion of the legitimacy of the courts will occur at the most inopportune time in the country if we do this wrong.
N. S. Nappinai: We can’t run away from this tech. Everything has pros and cons and trade-offs. What are we willing to give up? Is there some real reward? We do need to answer the problem statements. The adoption can’t be rampant or thoughtless.
We need to be careful about what we’re adopting AI for in courts. We’re far from replacing lawyers.
The report throws light on the use of data and how it could shape usage. Usage needs to be process-driven. In the cases where AI has been used, human mediation and oversight have been mandatory. AI responses are merely guiding and not final.
Who will determine how AI will be used? Is it just the government? Will the introduction of civil society act as a backdoor for private actors to dictate the space? We need to make sure that the agenda being driven, and the pilots and processes being implemented in the name of technological integration, serve the institutions and the people who interact with them first. There needs to, therefore, be a focus on transparency.
Responsibility and transparency are powerful words; in the case of the judiciary, tech should be used to understand how to better allocate time to support an overburdened system.
You can read a live-tweet thread of the T&SD conversation here.