Report
Responsible AI
May 16, 2023

Smart Automation and Artificial Intelligence in India's Judicial System: A Case of Organised Irresponsibility?

A scattered approach to smart automation of the judicial system, without due deliberation and legislative processes, increases the risk of harms. Read DFL's original report on the development and incorporation of AI in the Indian judicial system. 
Illustration by Neeti Banerji

Since 2019, when SUVAS (the AI portal of the Supreme Court) was introduced, the conversation on adopting Artificial Intelligence (AI) to automate court processes and legal research has gained considerable momentum. With the recent instance of the Punjab and Haryana High Court soliciting views on bail from ChatGPT, and the Rs. 7,000 crore allocated to the eCourts project in the 2023 annual budget, India is leaning toward a tech-first-and-forward judicial future.

Organisations and academic institutions are developing new AI tools for case analysis and case outcome prediction. Actors in the space, ranging from academics and civic-tech startups to lawyers and judges, envision a future where repetitive and mechanical tasks are automated, significantly reducing pendency and delays in the justice system.

While AI in courts is at a nascent stage in India, now is an opportune time to consider how automation will change the justice system and to foresee the challenges it may pose to individuals, communities and legal institutions.


'Smart Automation and Artificial Intelligence in India's Judicial System: A Case of Organised Irresponsibility?' is an original DFL report by Urvashi Aneja and Dona Mathew that explores the challenges and opportunities around the deployment of AI in courts. They make the case that the techno-legal ecosystem can currently be characterised as one of "organised irresponsibility", where the actions of "many agents together cumulatively and collectively generate risks for others, but in which all of the distinct agents are able to either minimise or fully avoid culpability because of the difficulty in tracing the overall damage to the specific harmful actions of any one of those agents." To mitigate this, Aneja and Mathew offer five recommendations for responsible innovation in this space, premised on an 'ethic of care'.

Some key takeaways from 'Smart Automation and AI in India's Judicial System: A Case of Organised Irresponsibility?':
1) Justice system problems are often oversimplified, lending themselves to technology solutions that address efficiency, but efficiency does not always amount to justice.
2) The legal community and judges are not uniformly familiar with advanced technologies, nor do they all have the means to access them, increasing the risk of exclusion.
3) The growing push for open judicial data, without adequate guard rails for protecting sensitive and personal information, increases risks of privacy violations.
4) Skewed datasets used to build AI tools can lead to biased outcomes, especially for marginalised groups, who may be over- or under-represented in the data.
5) The report recommends approaching AI development with an ethic of care that values the needs and capacities of all parties, as a necessary correlative to a robust legal framework.


Download and read the full report.


You can read the summary of our 7th Tech and Society Dialogue (T&SD) session on this topic, 'Artificial Intelligence in Indian Courts', here. You can read a live-tweet thread of the T&SD session here.
