In 2020, the Indian state of Telangana positioned itself as a leader in Artificial Intelligence (AI) adoption for commerce and research when it published an actionable, state-level policy framework for AI implementation. With its aims to transform the economy and ‘government affairs as usual’, the Fourth Industrial Revolution is making its relevance felt in Telangana, and the responsible deployment of such technologies is more important than ever. Digital Futures Lab recently had the opportunity to contribute to those efforts at the Africa-Asia Policy Maker Network event hosted in Telangana, where we taught a crash course on responsible national AI implementation, followed by an ‘ideathon’ to crowdsource ideas for building systems to mitigate harm.
M. Jayesh Ranjan, Principal Secretary of the Industries & Commerce (I&C) and Information Technology (IT) Departments of Telangana, spoke of the need to regulate the use of AI in the public sector and to develop practical guidelines for Responsible AI. Telangana has already collaborated with IITs, IIITs, and BITS Pilani to understand the complexities and impacts of these technologies, and Ranjan noted the role of academic institutions as watchdogs in the regulation of AI.
Dr Urvashi Aneja, founder and director of Digital Futures Lab, started the workshop series on Day 1 with an exercise testing an AI workflow using Teachable Machine. Through interactive dialogue with participants, supplemented by an exercise that involved feeding photos of dogs, cats, and known terrorists into a Machine Learning (ML) tool, Dr Aneja urged participants to consider the limitations of AI and the consequences of those limitations.
Dr Aneja distinguished between two kinds of AI: human-driven (a rule-based programme created by humans) and machine-driven (a data-pattern-driven programme generated by a computer). After delineating what the ML process entails and the main approaches to ML, to ensure participants shared an understanding of what AI ultimately is, Dr Aneja emphasised that AI is neither neutral nor perfect.
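The distinction between the two kinds of AI can be made concrete with a minimal, illustrative sketch (not from the workshop): a human-driven programme encodes rules a person wrote down, while a machine-driven programme infers its logic from labelled examples. The spam-filter keywords and the toy nearest-mean learner below are invented for illustration only.

```python
# 1) Human-driven ("rule-based"): a human writes the decision logic directly.
def rule_based_spam_filter(message: str) -> bool:
    """Flag a message as spam if it contains any hand-picked keyword."""
    keywords = {"free", "winner", "prize"}
    return any(word in message.lower() for word in keywords)

# 2) Machine-driven ("data-pattern-driven"): the decision logic is inferred
# from labelled examples rather than written by hand. A tiny nearest-mean
# learner stands in here for a real ML model such as Teachable Machine.
def train_nearest_mean(examples):
    """examples: list of (feature_value, label) pairs. Returns a classifier."""
    by_label = {}
    for x, label in examples:
        by_label.setdefault(label, []).append(x)
    means = {label: sum(xs) / len(xs) for label, xs in by_label.items()}

    def classify(x):
        # Predict the label whose training mean is closest to x.
        return min(means, key=lambda label: abs(x - means[label]))

    return classify

# "Train" on message lengths: short messages labelled ham, long ones spam.
classifier = train_nearest_mean([(10, "ham"), (12, "ham"), (80, "spam"), (95, "spam")])

print(rule_based_spam_filter("You are a WINNER"))  # True
print(classifier(90))                              # spam
```

The sketch also hints at the limitation Dr Aneja highlighted: the learned classifier is only as good as the patterns in its training data, so a message that does not resemble those examples can be confidently misclassified.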
Dr Aneja discussed a study on the impact of AI on the SDGs, which found that while AI might act as an enabler on 134 targets (79%) across all SDGs, 59 targets (35%) may experience a negative impact from the development of AI. She then spoke on specific use cases in the health, fintech, and political sectors, followed by a cautionary tale about the opportunities and risks at the intersection of AI and human rights.
The workshop was followed by an interactive group discussion on various case studies across sectors like health, education, and the environment.
Vidushi Marda, Senior Programme Officer at Article 19, kicked off her presentation on Day 2 with the all-important question: what AI procurement process does the government of Telangana follow? The audience responded by highlighting various procurement routes, such as RFPs, RFEs, and RFQs.
Ms Marda offered the frameworks of the UK’s Central Digital and Data Office and the Office for Artificial Intelligence as an overarching guideline for the procurement of AI. The considerations within this framework include: treating responsible procurement as crucial within an AI adoption strategy; assembling multidisciplinary teams with domain expertise; conducting a data assessment before starting the procurement process; assessing the benefits and risks of AI development; and engaging effectively with the market from the outset.
She highlighted the AI-specific considerations during the procurement process at four stages:
- Preparation and Planning: Having multidisciplinary teams, data assessment and governance to understand the complexity and limitations of the data, and an AI impact assessment
- Publication: Using output-based requirements in the invitation to tender that focus on describing the challenges and opportunities being faced
- Selection, Evaluation and Award: Having an internal AI ethics approach, processes to ensure accountability over outputs of algorithms, avoiding outputs that could be unfairly discriminatory, testing the model under a range of conditions
- Contract Implementation and Ongoing Management: Instituting process-based governance, model testing on an ongoing basis, knowledge transfer and training
Crucial to Marda’s discussion of existing guidelines for the responsible procurement of AI was her emphasis on what these guidelines share: space to deliberate whether AI is needed in the first place; recognition that multidisciplinary expertise is critical; a deeper understanding of the limitations of AI systems; sustaining the technology well beyond purchase; and consultation with the market alongside thriving competition. She also discussed why AI is different: it is not readily adaptable to complex social situations, and machine learning does not understand context, nuance, or tone.
This session was followed by a group discussion on procuring predictive policing solutions through case-study analysis.