Report
Responsible AI
December 22, 2023

Developing a Democratic and Responsible AI Ecosystem in India: Report of the Working Group on AI Governance

As part of the Responsible Technology Initiative, DFL constituted a working group on AI ethics and governance in late 2021. The aim was to bring together stakeholders from government, industry, and civil society to collectively diagnose and deliberate on the priorities for India.

This report summarises the working group's key discussions on the AI landscape in India, what responsible AI looks like in the Indian context, and how regulatory and policy levers can best be designed to achieve the goal of using AI for social progress.

Even as AI adoption and use advance rapidly in India, evidence increasingly demonstrates that while AI can be a force for good, it can also produce unintended harms that must be mitigated and managed. A global debate on how to implement AI ethically and responsibly has emerged, but discussions on how best to integrate and regulate it across critical sectors in India remain nascent. Proposed policies from both central and state governments have sought to establish frameworks to govern AI use; however, their applicability must be continually evaluated given the rapidly evolving nature of the technology.

In response, DFL instituted the ‘working group on AI governance’ with three goals: identifying AI-related policy priorities and pathways in India, building contextually relevant and applicable AI governance frameworks, and developing a network of stakeholders committed to the principles of responsible AI. The working group was formed in late 2021 and convened four times over 18 months, focusing primarily on the use of AI within the public sector. Discussions were organised around two broad themes: (i) democratising the AI ecosystem, and (ii) responsible AI use and adoption.

The working group did not have the opportunity to comment on some of the more recent advances in the field, such as the emergence of language models and generative AI, which have generated both vast optimism about the opportunities AI can unlock and concern over the possibility of greater risks. The group's recommendations nonetheless remain pertinent and applicable; if anything, these rapid advances give policymakers and other stakeholders more reason to act quickly.

The working group consisted of 14 members: 

  1. Aakrit Vaish: Haptik
  2. Abhinav Verma: Independent
  3. Abhishek Singh: Digital India Corporation
  4. Ameen Jauhar: Vidhi Centre for Legal Policy
  5. Rentala Chandrashekhar: Former NASSCOM
  6. Divy Thakkar: Google India
  7. Nehaa Chaudhari: Ikigai Law
  8. Balaraman Ravindran: IIT Madras
  9. Rama Devi Lanka: Government of Telangana
  10. Shailesh Kumar: Jio
  11. Smriti Parsheera: Fellow, CyberBRICS
  12. Subhashish Banerjee: IIT Delhi
  13. Vidushi Marda: REAL ML
  14. Vrinda Bhandari: Lawyer

The group identified five strategies to democratise the AI ecosystem in India: 

  1. Creating public infrastructure and public goods by ensuring sufficient public investment across the AI value chain 
  2. Ensuring safe and secure access to high-quality government data 
  3. Promoting competition to ensure a level playing field and to prevent the concentration of power among a few actors 
  4. Implementing forward-looking governance frameworks 
  5. Supporting community participation through the creation of a multistakeholder body


The group also identified a set of principles, including suitability, scientific rigour, transparency, accountability, humans in the loop, and non-discrimination, as necessary to ensure the responsible development and implementation of AI in the public sector. The experts outlined four measures through which these principles could be put into practice: 

  1. Appropriate problem identification and AI suitability assessment 
  2. Pre-deployment and ongoing impact assessments to determine harms, risks, and successes 
  3. Well-designed procurement practices 
  4. Monitoring and review mechanisms to determine efficacy
