The Economic Survey of India 2020-21 observed that India’s regulatory problems arise not from a lack of standards or compliance but from overregulation, driven by the state’s impulse to prepare for every eventuality. Stakeholders in the tech-policy space in India have drawn parallels, noting that uncertainty and complexity are defining characteristics of the technology sector and that it is therefore difficult to foresee every outcome. For India to avoid an over-regulated fate, they advocate the adoption of abstract, principles-based regulation for the tech sector rather than state-led, detailed, rule-based regulation.
Amitabh Kant, the CEO of NITI Aayog, a public policy think tank of the Indian Government, claims that “AI will be the single largest tech revolution of our lifetimes with the potential to disrupt almost all aspects of human existence.” If this is indeed the case, it would be absurd for the state to take a backseat at such a significant point in our collective future. Yet, as we shall see, this is exactly what NITI suggests for the regulation of AI in India.
This piece explores some of the outcomes of adopting a combination of principle- and risk-based approaches to regulate Artificial Intelligence [“AI”] in India and cautions that lawmakers must contend with the challenges that can arise before regulatory path-dependencies are laid down. This piece identifies three such challenges: the long-run weakening of regulatory systems; the inability to meet regulatory goals because of an overreliance on self-correction; and the displacement of democratic processes in establishing the thresholds of human rights.
Understanding Principle- and Risk-Based Approaches to Regulating AI
There have been several attempts to define ethics- and human-rights-based principles to guide the development and deployment of AI. These principles have tended to converge around certain common themes, including privacy, fairness and non-discrimination, accountability, and transparency. Since the risk of AI violating such principles (for example, the right to privacy) can range from insignificant to critical, it is argued that regulation should be proportionate to the potential risk the AI generates.
In the face of rapid technological growth, detailed rules tend to have short shelf lives. A principles-based approach provides regulatory flexibility and therefore avoids the proverbial catch-up game lawmakers are forced to play. This is because it shifts the responsibility of meeting regulatory objectives from the regulator to the regulated through the adoption of codes and co-regulatory methods.
The document on Responsible AI, #AIforALL, by NITI Aayog seems to indicate an inclination towards adopting a combination of principle- and risk-based approaches. The principles identified include the right to equality, non-discrimination, privacy, safety, transparency, and accountability. They are drawn from multiple sources, including the Constitution of India, and will be enforced through a flexible, risk-based approach.
Three discernible challenges to India’s approach to AI include:
The weakening of regulatory ecosystems: Without the state setting the boundaries for commercial activity, excessive risk-taking can ensue. If the state does not invest in its institutions in the long run, it may not be able to make corrections when private risk-taking harms the public interest. For instance, while a principles-based regulation of mortgage lending in the United States led to innovative products and short-term profits, in the long-term the lack of detailed regulation resulted in “substantial losses for financial institutions that threatened the soundness of the U.S. banking system.” The International Monetary Fund blamed the subprime mortgage crisis on endemic regulatory failure that resulted from excessive risk-taking.
In India, even as the private sector ramps up commercial exploitation of AI, it remains largely unregulated. To imagine what a regulatory deficit can mean against the background of AI deployment in India, take the use of AI in labour management. Tech companies are beginning to offer AI-based solutions in human resource management. While using AI may improve efficiency in hiring, it is also known to result in discriminatory outcomes that can adversely affect marginalized groups. Organizations that have studied hiring patterns in India have noted marked discrimination against certain groups, especially when filling positions at higher levels.
Unlike many other countries, in India the constitutional right to equality is not supported by complementary legislation. For example, no law prevents private companies from discriminating based on religion in their hiring decisions. Using AI to sort through job applications could entrench structural discrimination and deflect responsibility for such outcomes from the company onto the technology. There is an acute need for a comprehensive anti-discrimination law in the country that could hold employers accountable if the AI hiring tools they use result in biased decisions. One example is the Anti-Discrimination and Equality Bill, 2016, which seeks to protect citizens against all forms of social discrimination. It is also important that third parties audit AI to ensure fair outcomes, a requirement that should be enforced by laws and/or regulations.
Self-regulation may not meet regulatory goals: Professor Jodi L. Short identifies three regulatory voids within which self-regulation usually occurs — knowledge gaps of the regulator (or information asymmetry), political gaps (resulting from contested norms and political opposition), and institutional gaps (when there are no competent institutions to enforce norms). Short claims that not all regulatory voids are amenable to self-regulation. While self-regulation may correct knowledge gaps, in the case of political gaps it tends to disregard public goals and often leads to regulatory capture. To avoid such undesirable outcomes, the Indian regulator must understand the kind of gap it is seeking to fill before opting for the self-regulation of AI.
There are, for instance, contested norms, or potential political gaps, in the regulation of AI. In allowing the private sector to self-regulate, considerations of efficiency may often be preferred over the competing goal of preserving rights. The Strategic Director of Innovation at the London School of Economics, Julia Black, observes that principles-based regulation combined with a risk-based approach could create an ethical paradox. The former may encourage ethical business decisions, “but when compliance becomes a matter of risk management, non-compliance becomes an option.” The European Union’s risk-based approach to regulating AI has been criticised by organizations on similar grounds. Access Now, an organization that advocates for digital civil rights, has argued that this approach requires companies to engage in a trade-off between pursuing their interests and respecting fundamental rights. The latter should be non-negotiable, irrespective of the level of risk encountered.
The displacement of democratic processes in establishing the threshold of rights: NITI Aayog has stated that the regulation of AI should be decentralized by encouraging self-regulation and the adoption of standards and guidelines. Currently, India does not have a law to protect the data privacy of individuals, as it is yet to pass the Data Protection Bill that was introduced in the House of the People in 2019. In the meantime, the Bureau of Indian Standards has issued standards for personal data protection. In the absence of a data protection statute, these standards are the closest we have to ensuring that businesses respect the informational privacy of users.
However, it is important to remember that standards are not rights. NITI Aayog may borrow from the language of the Constitution by using terms like equality and non-discrimination, but by confining these rights to standards, their vitality is lost. Unlike rights, standards are not publicly made, reasoned, contestable, and transparent to those they affect — all important markers of the rule of law. Standard-setting bodies tend to be secretive and averse to public scrutiny. Even mature standard-setting bodies such as the Institute of Electrical and Electronics Engineers [“IEEE”] and the Internet Engineering Task Force [“IETF”] (the use of whose standards has been recommended by NITI) have been accused of being captured by dominant players.
This is not to say that technical standards should not be used to enforce rights in the regulation of technology. Rather, their use should be preceded by relevant laws, developed through participatory methods that involve all affected stakeholders, and followed up by independent oversight.
To conclude, we need more than abstract principles and technical fixes if we are to truly advance human well-being in the digital age. Legislatures in other jurisdictions have introduced bills for algorithmic transparency and accountability to counter, among other things, the discriminatory outcomes that AI often engenders. This piece therefore calls for state-led rulemaking for AI, of which rights-based laws are an important component.
This piece was first published in Law and Other Things.