Australian Outlook

The EU AI Act and the Normative Challenge to Regulate Emerging Technologies

04 Dec 2024
By Dr Hannah Ruschemeier
US, Britain, EU to sign first international AI treaty. Source: FMT, https://t.ly/wyhf2

The AI Act is a step in the right direction but has too many flaws to serve as the endpoint of the discussion on how to regulate AI. It should therefore be seen and employed as the beginning of a more sophisticated approach, and one not to be dictated by technical developments.

The need for rules for data-driven technologies, especially those presented as “Artificial Intelligence” (AI), is a logical consequence of the datafication of modern societies. After years of discussions on ethical implications, strategies, soft law, and other non-binding guidelines as normative frameworks for AI, the Regulation laying down harmonised rules on artificial intelligence (AI Act), adopted on 21 May 2024 by the Council of the 27 EU member states, establishes the first legally binding horizontal act on AI. The AI Act claims to be the first encompassing, horizontal regulation that addresses AI itself as the regulatory object. Its ambitious goal is to establish a European single market for AI, promoting human-centric and trustworthy AI while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the Union, and supporting innovation (Art. 1 AI Act). Within the landscape of EU digital legislation, the AI Act is meant to play a key role alongside other legal provisions that regulate AI and data-driven technologies: the General Data Protection Regulation, the New Legislative Framework on product safety, and the AI Liability Directive, currently under negotiation in the legislative process.

Most likely, the AI Act will produce a Brussels Effect, since it applies to AI systems placed on the European market irrespective of whether their providers are established in a third country (article 2 (1) AI Act). Additionally, the AI Act may enjoy a first-mover advantage, since it establishes the first comprehensive and enforceable legal framework for AI worldwide. Consequently, the AI Act also regulates Australian companies that are active in the EU. Yet the AI Act did not take up the chance to take the global impact of AI seriously: the proposal to prohibit EU companies from exporting AI systems banned under the AI Act was rejected during the legislative process.

Generally, the AI Act is an immensely important first step in the right direction: it establishes a regulatory regime for AI across different sectors and thereby sets legal guardrails for the quality, security, and legality of AI systems. Nevertheless, the AI Act in its current form is unlikely to fulfil its high promises regarding the protection of fundamental rights, democracy, environmental protection, and the rule of law. Three structural decisions of the AI Act illustrate these challenges. First, the AI Act mixes two fundamentally different regulatory approaches by combining product safety law with the protection of fundamental rights. Second, it follows the model of regulated self-regulation, leaving many important questions to the discretion of the providers and deployers of AI systems as well as private standardisation organisations, without effective public oversight mechanisms or stakeholder participation. And third, it insufficiently addresses the collective and societal risks arising from the socio-technical nature of AI.

The first struggle was to define this regulatory object, since there is no universal definition or even a common understanding of what AI is. This is illustrated by the diverging definitions of AI across scientific disciplines: sociologists understand AI differently than computer scientists do. These diverging understandings exacerbate the problem that legal definitions, especially when they delimit a regulatory object, have to fulfil certain criteria to be manageable in practice for lawyers, courts, and legal scholars. Given the dynamic development and rapidly changing technical foundations of AI, this is difficult.

The AI Act now defines an AI system under article 3 (1) as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition aligns with the OECD's understanding and thus contributes to international uniformity. Additionally, the AI Act establishes five risk categories and therefore a tiered, risk-based regulatory approach. These detailed risk categories (unacceptable risk, high risk, minimal risk, no risk, and systemic risks for general-purpose AI models) mean that the definition of AI itself is not really decisive for the regulatory scope; quite the contrary. The risk posed by prohibited practices, for example biometric identification, also exists when no autonomous AI systems are deployed, because dangers for autonomy, data protection, freedom of expression, and freedom of assembly arise from the socio-technical character of data-intensive technologies and not only from their technical specifications. In addition, the systems that pose unacceptable risks are not fully banned under the AI Act; article 5 of the Act establishes broad exceptions, especially for law enforcement and national security reasons.

The risk qualification provisions for high-risk AI systems under article 6 and annex III, and the categories for systemic risks of general-purpose AI models under article 52 and annex XIII, do not sufficiently consider risks for fundamental rights, democratic societies, and the rule of law. Article 6 follows a dichotomy: AI systems are classified as high-risk either because they are subject to already existing product safety law (article 6 (1), as a product or the safety component of a product) or because they fall within the context-based areas of application listed in annex III (article 6 (2)).
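For readers who want to see the structure at a glance, the tiered logic described above can be sketched, very roughly, as a decision path in code. The following Python fragment is a hypothetical, simplified illustration only: the tier names, area labels, and boolean flags are placeholders invented for this sketch and do not reproduce the Act's actual legal tests or exceptions.

```python
from enum import Enum, auto
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = auto()          # article 5 practices (subject to broad exceptions)
    HIGH_RISK = auto()           # article 6 in combination with annex III
    LIMITED_OR_MINIMAL = auto()  # transparency duties or no specific obligations

# Placeholder labels for the context-based areas of annex III (not the Act's wording).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_and_democracy",
}

def classify(is_prohibited_practice: bool,
             is_product_or_safety_component: bool,
             application_area: Optional[str]) -> RiskTier:
    """Schematic decision path: prohibited practices first, then the article 6
    dichotomy (existing product safety law or an annex III area of application)."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_product_or_safety_component or application_area in ANNEX_III_AREAS:
        return RiskTier.HIGH_RISK
    return RiskTier.LIMITED_OR_MINIMAL

# Example: an emotion recognition system used at a border crossing would fall
# into the high-risk tier via the annex III branch.
print(classify(False, False, "migration_asylum_border"))  # RiskTier.HIGH_RISK
```

The point of the sketch is structural: whether a system ends up in the high-risk tier depends on its context of use and on existing product safety law, not on any sharp technical definition of AI.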

For example, some use cases in the high-risk category of annex III have the potential to mainstream and normalise deeply problematic systems, such as polygraphs and emotion recognition systems (annex III 1c, 6b, 7a), in critical areas like asylum and border control. There is no empirical evidence that emotion recognition systems even work, and polygraphs have been banned (for example, from the German court system) for exactly that reason. Since high-risk systems can be certified and legally used under the AI Act as long as they fulfil the requirements of art. 9 et seq., they appear legitimate. This is risky: the AI Act legitimises fundamentally problematic systems by ignoring the empirical evidence, and thereby enables their use.

On the other hand, other fundamental rights-sensitive areas are missing, for example media algorithms (the reference to the recommender systems regulated under the Digital Services Act (DSA) was removed during the legislative process), science and academia, finance, and certain types of insurance. The systemic risk categories also do not sufficiently address the societal threats posed by AI. Annex XIII mainly lists technical parameters, such as the amount of computation, the number of parameters, benchmarks, and registered end users, as classification criteria for systemic risks. Compared with the systemic risks listed under the DSA, such as risks to elections, democratic debate, or the spread of illegal content, the AI Act neglects the fact that AI systems can have spillover effects that are not limited to the system, its developer, its deployer, or its users.

It is important to recognise that regulating AI means regulating power: power over data, predictions, epistemic implications, and, ultimately, the foundations of society. The decisions about the opportunities and challenges of AI should be negotiated through democratic processes, not dictated by technological developments. The debate should therefore continue rather than end with the AI Act.

Dr Hannah Ruschemeier is a junior professor of public law, data protection law, and the law of digitalisation at the University of Hagen, Germany. In her research, she combines traditional questions of public law with the challenges of the digital transformation. Her focus lies on the regulation of technologies, collective dimensions of rights, privacy and data protection, technology-driven inequalities, data power, surveillance, and legal theory and philosophy. She is a board member of RAILS e.V. and part of the editorial board of the German journal Legal Tech.

This article is published under a Creative Commons License and may be republished with attribution.