From virtual assistants like Siri and self-driving cars to smart homes and electronic health records, AI has proven convenient, practical, and pervasive in our everyday lives. There is now growing attention to the impact of AI systems as they filter into not just complex but also routine tasks. What happens when these use cases enable or assist harmful and discriminatory practices more than they do good?
Facial recognition, automation, real-time surveillance and online consumer behaviour tracking are touted as beneficial, yet they are often implemented with direct negative consequences for people's privacy, job security, and access to justice. Establishing guidelines for the development and implementation of AI seems a logical course of action. Yet states are wary of over-regulation, fearing it may stifle economic growth and global competitiveness. In parallel, technology companies are increasingly using the language of ethics, responsibility and human rights to stave off top-down regulation and give the impression of conscious effort. The result, both globally and nationally, is a landscape of grand promises with sparse oversight and accountability. Can AI systems be designed and implemented with human rights as a guiding principle, and is it possible to evaluate that progress?
The Global Index for Responsible AI (GIRAI) is a new Data for Development Network (D4D.net) initiative led by Research ICT Africa, which seeks to measure the evolution of commitment to, and progress on, the implementation of responsible AI principles and practice. Urvashi Aneja, founder and director of the Digital Futures Lab, is a member of the Expert Advisory Committee tasked with overseeing the scientific accuracy, relevance, priorities and inclusiveness of the GIRAI. She is joined by independent researchers worldwide, assessing AI use cases and their impact in over 120 countries. Digital Futures Lab will soon serve as a regional hub for the GIRAI.
The GIRAI aims to complement existing ethical and responsible AI standards while contributing to the growing global movement through assessment, data collection, and principle-based recommendations. It is the result of a participatory and collaborative process involving key stakeholders from human rights groups, women's rights groups and indigenous communities. The overarching goal is to equip governments, civil society and other key stakeholders with accurate, contextual information for weighing decisions around AI use and development, enabling outcomes that respect human rights, further equality and sustainability, prioritise safety, and strengthen democracy.
More updates on this soon!