In this response, we highlight that risk-based approaches are not neutral policy instruments. Instead, they should be seen as a complex set of choices about which risks will be prioritised and what degree of risk will be tolerated. These choices are grounded in values and cannot be resolved through objective assessment alone.
Risk-based regulatory approaches to AI also face methodological and epistemic challenges. For instance, not all AI risks are amenable to categorisation into low, medium, and high thresholds. Some risks of AI may be low-impact individually, yet their cumulative effect could be overwhelming.
Risk-based approaches grounded in the principle of welfare maximisation are ill-equipped to safeguard against the disproportionate impact of AI harms on minorities and marginalised populations.
For a risk-based approach to be effective, regulators must be explicit both about the criteria for selecting the risks to be regulated and about their risk appetites. Furthermore, the selection of risks must make room for open and transparent public deliberation. Any form of risk-based calculation should prioritise and uphold constitutionally guaranteed rights and liberties, and place greater weight on the disproportionate impact of AI on vulnerable populations.