As both a buzzword and real-world application, artificial intelligence (AI) has found its permanent place in the public policies of governments around the world.
In India, the Digital India mission saw a 67% year-on-year increase in public funding, to ₹10,676 crore in 2022-23; the mission outlines a plan to use AI to promote financial inclusivity, supplement the education sector and transform urban infrastructure. States such as Tamil Nadu, Punjab, Uttar Pradesh, and Telangana are already using AI-based tools to support law and order, boost agricultural productivity and support the delivery of last-mile health services.
Such uses of AI can support public sector efficiencies, but also come with significant harms and risks, including exclusion, discrimination and misuse.
India has developed principles to support the responsible use of AI, but the mechanisms to translate these into practice remain vague. The lack of an established process has meant that many of the AI-based tools currently in use by government agencies have entered public service without established systems for risk assessment, judicial oversight and accountability.
Well-designed public sector procurement processes are an essential first step in promoting AI’s responsible use in the public sector. They can improve public trust and confidence in the use of AI in public services. By establishing benchmarks and standards through procurement processes, the government can play a broad norm-setting and market-shaping role, something that is particularly important in the early stages of the AI market’s development.
A three-step approach is needed to develop procurement frameworks that can support the responsible use of AI in the public sector.
First, the procurement process must start with a clear definition of the problem that requires solving, a consideration of various policy options to address this problem, and an assessment of the use of AI versus other technologies and policy options.
It is important to consider whether an AI solution is indeed the best way to address a given problem. At this stage, it is also important to consider whether relevant data is available for the project, and whether adequate data governance mechanisms and rules for data sharing with vendors are in place.
Before issuing a tender, the relevant government department must also conduct a social impact assessment to evaluate the possible impacts of the intervention across different social groups. At this stage, it is also essential to consider the availability and reliability of institutional mechanisms for risk mitigation and grievance redressal. This includes an assessment of the capacity of government officials and other end users.
Making the results of these evaluations publicly available will go a long way in building public trust as well as setting expectations within the market.
These assessments of the viability and impacts of AI must shape how the tender is framed and the expectations from a vendor. For example, the request for proposal (RFP) may require the vendor to demonstrate how it will manage the problem of unrepresentative data.
Second, in making a selection from among a set of vendors, the procurement process must scrutinize the technical specifications of the product and how it performs in real-world settings. This should include information about data provenance and factors for data selection, choice of features and algorithmic model, testing results including error rates and corrective measures, measures to address data gaps, and steps to manage people's privacy, data security, and other risks.
The RFP must also seek information on organizational features of the vendor, such as internal accountability frameworks for data handling and the expertise of the product development team.
The preference should be for vendors that have robust systems for data documentation and other process logs, strong internal governance mechanisms and a reliable interdisciplinary product development team.
The RFP should clearly specify a preference for explainable or interpretable models over black-box models. This can enable government departments to understand and audit the solution and develop appropriate risk mitigation measures, prevent vendor lock-in, and also win public trust and confidence.
Third, robust systems of continuous monitoring and evaluation need to be established prior to the procurement and deployment of AI systems in the public sector. This process must ideally be carried out by independent third-party auditors and must include a system for public disclosure and feedback. This process should also include an ongoing assessment of provisions for various end users to manage errors and effect corrective action.
While these guidelines are by no means exhaustive, they form a high-level checklist that should inform any comprehensive AI procurement framework in the country.
Using considerations of suitability, reliability and social impacts of AI solutions as part of procurement processes, alongside considerations of vendor qualifications, can help translate India’s principled commitment to responsible and equitable AI into practice.
This op-ed was originally published on Mint.