Report
May 3, 2023

Response to NITI Aayog's Latest FRT Framework

The growing use of automated facial recognition technologies (FRT) by public and private actors, despite evidence of their harms and risks, requires urgent policy attention. Aman Nair, Dona Mathew and Urvashi Aneja offer a response to NITI Aayog's latest responsible AI framework for FRT.

Section 1 - Report-specific responses

Aman Nair, Dona Mathew and Urvashi Aneja submit that the report fails to adequately address the harms associated with Facial Recognition Technology (FRT) and the difficulty of mitigating those harms. Their response makes four points:

  1. The distinction the report draws between security and non-security use cases of FRT is an inadequate framework that downplays the harms associated with FRT.
  2. The use of consent as a legitimising framework ignores the limitations of consent in a context of low digital rights awareness and unequal power relations between individuals and institutions.
  3. The report fails to adequately address the illegality of existing FRT systems, and its framing of rights as mere challenges to be accounted for is dangerous.
  4. The selection of Digi Yatra as a case study is a poor choice. Given the report's own assertion of the harms of security use cases of FRT, focusing on a non-security use case such as Digi Yatra fails to give an accurate picture of the harms that would arise from legitimising FRT systems.

Section 2 - General comments on FRT  

Moving on from the report-specific responses, they offer an alternative viewpoint for policymakers and regulators. They begin by examining the foundational limitations of FRT. These include:

  1. The fallibility and bias present within modern FRT systems
  2. The normalising effect that FRT has on other equally dangerous technologies
  3. The systemic effects of increased surveillance on privacy rights and freedom of speech

They then posit that while much of the discussion surrounding FRT focuses on its use by instruments of the state, there is a need to critically observe how it is used by private actors. In doing so, they demonstrate that the use of these systems by corporations and community organisations can infringe on the rights of individuals in much the same way as their use by the state.

Finally, they suggest a set of factors that will help determine whether the introduction of FRT in a specific context carries with it a high risk of harm, and therefore whether it should be permissible. These factors include:

  1. Whether the system is deployed in a public space
  2. Whether there is an element of coercion involved in its deployment
  3. Whether individuals have the option to opt out of being surveilled by the system
  4. Whether the impact of the system is on an individual or multiple people


With these considerations in mind, they call for a ban on the use of FRT in the vast majority of current use cases.
