Report
Responsible AI
April 12, 2022

Enforcement Mechanisms for Responsible #AI for All

Niti Aayog invited responses to its proposal for Enforcement Mechanisms for Responsible AI. Our response evaluates risk-based approaches and suggests alternatives. It also examines the role of the oversight body and the need for upstream management of technological innovation processes.

In this response, we highlight that risk-based approaches are not neutral policy instruments. Rather, they embody a complex set of choices about which risks will be prioritised and what degree of risk will be tolerated. These choices are grounded in values and cannot be resolved through objective assessment alone.

Risk-based regulatory approaches to AI also face methodological and epistemic challenges. For instance, not all AI risks are amenable to categorisation into low, medium, and high thresholds. Even where individual risks of AI have a low impact, their cumulative effect could be overwhelming.

Risk-based approaches grounded in the principle of welfare maximisation are not equipped to safeguard against the disproportionate impact of AI harms on minorities and marginalised populations.

For a risk-based approach to be effective, regulators must be explicit about the criteria for selecting which risks to regulate, as well as the risk appetites involved. Furthermore, the selection of risks must make room for open and transparent public deliberation. Any form of risk-based calculation should prioritise and uphold constitutionally guaranteed rights and liberties, and place greater weight on the disproportionate impact of AI on vulnerable populations.

