On April 21, the European Commission (EC) released its proposal for a comprehensive, harmonized regulation of AI, also called the Artificial Intelligence Act. Released alongside a new Coordinated Plan with Member States, the Act is expected to ‘guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.’ New rules on machinery have also been released to complement this approach by adapting safety rules to increase users' trust in the new, versatile generation of AI products. Hailed as ambitious and strict, the Act is in consonance with the EC’s white paper on AI, released in 2020, which laid out the twin values of promoting excellence and trust in AI as priorities for the EU. The stated intention of the Act is to ensure ‘uniformity’ in regulation and to improve the development, marketing and use of AI within the internal market, in consonance with the Union’s values.
Arguably, the Act is a move in the right direction as one of the first comprehensive legal frameworks to unify and bring regulatory clarity on the question of AI use and governance in the world’s third largest economic bloc.
Rather than take a blanket approach to AI regulation, the EC adopts a proportionate, risk-based regulatory framework. It divides AI products and practices primarily into two categories, ‘prohibited practices’ and ‘high-risk’ AI, along with a few provisions for limited-risk and minimal-risk AI systems.
Among prohibited practices, the regulation bans the commercial sale of AI systems that – a.) ‘deploy subliminal techniques beyond a person’s consciousness’, or exploit vulnerable persons, in order to materially distort a person’s behaviour or cause psychological or physical harm; b.) are used by public authorities, or on their behalf, to generate social credit scores and classify the trustworthiness of natural persons in ways that can lead to unfavorable treatment of persons and groups; c.) perform ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement.
While at first glance these prohibitions can be read as a strong push against practices such as targeted advertising, election manipulation, social credit scoring and the use of facial recognition, the language of the law has been criticized as too vague and broad. For instance, it is not clear what counts as ‘subliminal techniques beyond a person’s consciousness’. Second, concerns arise over the exemptions to the third prohibition, i.e. facial recognition in public spaces, which is allowed under special circumstances – such as the search for potential victims of crimes, the prevention of threats to life and safety from terrorist attacks, and the search for suspects or perpetrators of crimes. These exemptions, as argued by Algorithm Watch, leave wide room for interpretation by law enforcement officials. Aside from these exemptions, as Access Now has already pointed out, the bans do not cover private companies’ use of facial recognition or social credit scoring.
The limited scope of the prohibitions leaves citizens exposed to the internal policies of private companies deploying facial recognition or social credit scoring systems – policies that may well prioritize profit over ethics.
In the case of high-risk AI applications, the regulation proposes a number of strict obligations. Annex III of the regulation provides a preliminary list of high-risk AI systems. Among the applications considered high risk are AI systems used in critical infrastructure such as transport, for recruitment, to evaluate creditworthiness, to determine access to social benefits, for predictive policing, to control migration, and to assist judicial interpretation. These high-risk use cases will be subject to requirements such as mandatory risk assessment and management, robust data quality, interpretability and traceability of results, a high degree of accuracy, and appropriate human oversight.
A key issue in the draft regulation is its framing of high-risk AI as a problem of inappropriate design and procedural gaps. As the Brookings Institution points out, while the regulation hints at disparate impact, it does not require impact assessments on protected classes. Thus, a large part of the regulation calls for better data quality and documentation practices, transparency mechanisms, and interpretable systems, which are expected to curb the associated risks.
However, the problem with high-risk AI is not simply a lack of robust datasets and transparent processes (though these do contribute to the problem to a large degree, and strict compliance rules would be a huge step forward); it is also that AI use is a vector of power.
Who becomes subject to AI’s predictive and classificatory capacities, and who does not, is not merely an outcome of the ‘natural’ order of things in society but a consequence of power. Many civil society voices and researchers have consistently called into question the training of the AI gaze upon the most vulnerable and marginalized members of society, which is problematic even if the technology works or is otherwise accurate and robust. For instance, a study by MIT and Harvard Law researchers, which sought to retrain the algorithmic gaze on judges’ adherence to U.S. constitutional rules, found that data on judges in the American judicial system was harder to come by than data on those at the receiving end of prosecution. One reason is that algorithmic systems, such as those conducting pre-trial risk assessments, are built on decades of statistical tracking and policing of the powerless factions of society rather than of those in privileged positions.
In terms of implementation, the regulation calls for the establishment of a European AI Board to ensure consistent application of the regulation across EU Member States. Each Member State is required to set up a national supervisory authority to coordinate with the Board. The Board is also tasked with providing guidance, opinions, and written statements on the application of the regulation.
While the prohibitions may be vaguely phrased, the permissive aspects of the regulation are quite clear. All approved applications of AI must carry a CE mark, which will allow them to circulate freely in the EU market. Member States will not be allowed to ‘create unjustified obstacles to the placing on the market or putting into service high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking’ (pg. 33, pt. 67). Compliance assessments also largely fall on the developers and providers of the technology rather than on third-party assessors.
The breadth of the exemptions and the procedural character of compliance could create a complex yet permissive operating environment for high-risk AI applications, rather than establishing concrete red lines.
This could lead to a fate similar to that of the GDPR, which put the burden of compliance on companies; several companies nonetheless found ways to circumvent privacy protections through dark patterns and implicit consent. The balance the regulation seeks to strike, between easing AI’s transition from lab to market and protecting fundamental rights and safety, must be maintained carefully, and doing so is by no means an easy task.
This is not to take away from many laudable provisions of the regulation. Notably, the regulation requires the creation of a public database for high-risk AI applications, extends transparency measures to limited and minimal risk AI, and proposes stiff fines for violations. This, in many ways, signals to industry that legal and ethical obligations are a ‘must have’ rather than a ‘nice to have’. Another development to keep track of in the coming months is whether the proposed EU regulations will set the benchmark for the convergence of global governance of AI, or whether the geopolitics of AI will split governance regimes into a maze of distinct jurisdictional blocs.