Digital technologies are transforming nearly every facet of our lives. They can bring joy, support political participation, create new livelihood opportunities, and accelerate development gains. But current digital innovation trajectories and business practices are also undermining civil liberties and exacerbating social and economic inequality. Power and information asymmetries are increasing, and societies must confront complex questions around individual agency, political accountability, and future uncertainty.
What values, strategies, and capacities are needed to steer innovation trajectories toward social justice?
Digital technologies are central to modern India’s development story. However, low levels of regulatory capacity and stark socioeconomic inequities accentuate the harmful effects of digital technologies. India is at a critical juncture: technology policy decisions taken today will have long-term implications for democracy, prosperity, and social justice. Public policy needs to steer technology trajectories toward the public interest, but it is often reactive and ad hoc.
Rather than managing risk and harm downstream, how might we develop a more anticipatory approach to steering socio-technical change?
In response to a growing 'tech-lash' globally, many technology companies have begun to appeal to the language of responsibility and ethics. This may signal a change in societal expectations of technology companies and, consequently, a change in their business practices. But the language of responsibility can also be a way for companies to avoid more stringent regulatory measures: a way of 'ethics-washing' while continuing business as usual.
States have the primary responsibility to protect citizens from harm. But many are reluctant to over-regulate emerging digital technologies for fear of stifling innovation and economic growth. States may also lack the institutional frameworks and capacities to adequately regulate emerging technologies. Many national AI policies, including India's, thus call for responsible self-regulation by technology companies.
How might we reframe responsibility as a productive category, oriented toward the realization of positive social outcomes rather than a reduction in regulatory and policy steering?
The growing use of the language of responsibility and ethics by technology companies and governments creates a strategic window for civil society to shape what is meant by responsibility and its practice.
Governments and businesses are optimistic that AI will boost economic growth and enable India to leapfrog persistent socio-economic challenges. But AI systems also risk exacerbating socioeconomic inequities, undermining civil liberties, and accentuating power and knowledge asymmetries. AI development and deployment are still at an early stage in India. This provides an opportunity to shape AI innovation and adoption trajectories in a way that aligns with India’s socio-political needs and capacities. Governance frameworks developed in more mature democracies may not be easily applicable in developing countries like India.
What does Responsible AI mean for India? What are the ethical frameworks, market policies, institutional capacities and governance frameworks needed to align AI trajectories with the public interest in India?
Governments are increasingly using technology to deliver public services in new ways. In India, this has involved the digitalisation and automation of offline services and the development of technology-based products for specific departments. A new form of Public Tech is now emerging, in which the state and the private sector work together to develop digital public goods and platforms for development.
How are emerging platforms for development impacting the delivery of public services, and how should they be governed? How do we ensure that emerging public tech architectures meet the needs of the most marginalized?
Debates on the power and influence of global technology giants such as Google, Facebook, and Amazon, often referred to as 'Big Tech', are increasingly polarized. Their products and services bring benefits to citizens, businesses, and governments. But there is growing evidence that their business practices threaten the functioning of healthy democracies and markets, as well as civil liberties and rights. Regulating these companies is complicated by the essential civic functions they perform; in countries like India, they are also filling gaps in weak state and market capacity.
What are the policies needed to align the market and civic power of Big Tech companies with individual rights, social justice, and healthy markets?
The Responsible Technology Initiative is a research program at the Digital Futures Lab, a multi-disciplinary research network that examines technology and society transitions in the Global South.
Through evidence-based research, participatory foresight, and public engagement, the Digital Futures Lab seeks to realize pathways toward equitable, safe, and caring futures.
Digital Futures Lab is based in Aldona, Goa, and was founded by Urvashi Aneja in 2021.
Social problems are multi-causal and deeply interconnected. Our work draws on diverse disciplinary expertise to understand and address the complex societal transitions accompanying technological change.
Technology policy debates cannot be resolved with more or better evidence alone. Values and culture are at the heart of socio-technical change. We craft new narratives to align technology trajectories with individual and societal wellbeing.
Rapid technological change threatens to outpace much-needed public policy redesign. Our work employs foresight methods to anticipate technology and society trajectories, linking present-day decision-making to future outcomes.