Australian Outlook

Can the European Union Influence Global Standard-Setting in AI?

26 Mar 2024
By Dr Darren Lim, Walter Brenno Colnaghi and Professor Anthea Roberts
AI Act - Strasbourg Parliament. Source: Ekō / Flickr

Competition to set global standards in AI is heating up. Can the EU’s new AI Act follow the GDPR in influencing behaviour far beyond its borders?

Technology is arguably the central terrain of geopolitical rivalry between the West and China. Technological leadership is foundational to national power, both on the battlefield and in the global economy. Governments increasingly weaponise economic and technological connectivity for their own strategic advantage. Technology's pervasiveness in modern life also means it shapes a society's values – it can promote liberal freedoms or enable oppressive authoritarianism.

The race for technological leadership is seeing an expansion in the tools of statecraft employed by governments in the name of national security. The Biden administration has dramatically expanded export controls on advanced semiconductor technology. Beijing has for years pursued large-scale industrial policy measures designed both to increase its self-sufficiency and to achieve global market dominance for its national champions.

The EU is in the crosshairs of this competition and is trying to assert its "strategic autonomy" by developing tools to safeguard its economic security and increase its competitiveness. Yet, with a relative lack of cutting-edge firms to protect, and with time- and energy-consuming intra-EU bargaining, the economic and national security measures adopted to date have been relatively modest, defensive, and not particularly interventionist. Notably, foreshadowed tariffs on electric vehicles would be a major departure from this trend.

There is one domain where the EU has traditionally wielded technological power – international standards. From a practical perspective, standards are important not only because they serve the technical purposes of ensuring interoperability and enabling economies of scale, but also because they can determine the distribution of compliance costs, shifting them to rule-takers.

To set standards, the EU has relied on the “Brussels effect” – the ability to set global standards unilaterally by virtue of its market size and stringent rules. The most famous example is the General Data Protection Regulation (GDPR), the EU standard for the processing and movement of personal data. In this case, the EU’s size meant tech companies such as Meta, Alphabet, and Apple complied with GDPR to maintain market access. Moreover, since firms benefit from scale economies and uniform business practices, there was little advantage in tailoring different privacy standards for each market, meaning the most stringent—GDPR—was adopted globally. Indeed, the GDPR became “best practice,” adopted by companies as a signal of quality, vividly illustrating how local standards could be a powerful tool to regulate technology use globally.

While abstract and dry to all but technology experts, standards are critically important. The widespread adoption of a given standard can elevate the company that developed the underlying technology to market dominance, or spread the values embedded within the standard around the world.

Nowadays, no technology has greater potential to shape the world than artificial intelligence (AI). Consequently, AI standards are at the heart of geopolitical and geoeconomic competition.

Any state that can successfully globalise its domestic regime for AI regulation—reflecting its interests and values—will reap major benefits. Given the pervasiveness of AI across industries encouraged by the internet of things, such advantage in standard-setting would deliver great economic benefit, entrenching an advantage for its national AI companies over foreign competitors while minimising compliance costs. Even more significantly, given AI can be used either to enable oppressive surveillance or to protect liberal values like privacy, standards power will influence what types of behaviour – and underpinning values – become acceptable and widespread.

China has been extremely active in shaping global AI standards that reflect its interests and values. This influence largely arises from the growing market presence of Chinese AI companies, and the active effort by Chinese actors to disseminate their preferred standards through both formal standard-setting organisations and de facto adoption by end-users.

For its part, the US is home to world-leading AI firms such as Meta, Alphabet, and OpenAI. America's competitive advantage is usually attributed to the light touch of US regulations, which enable more creativity and innovation. However, recognising the economic, societal, and national security stakes, the US Government is doing more to regulate AI at home and influence standards abroad. For instance, in 2021 the US coordinated action within the International Telecommunication Union to avoid the adoption of proposed AI standards that would have allowed the use of facial recognition "in ways that could threaten human rights" – in other words, the adoption of standards favoured by Chinese firms. Last October, President Joe Biden signed an executive order issuing guidance for federal agencies' use of AI systems, requiring the sharing of security results with the US government, and calling on Congress to pass "bipartisan data privacy legislation."

On 13 March EU lawmakers approved a new AI law. With it, the EU is hoping to replicate the Brussels effect that worked so well with the GDPR. By leveraging its large market, and by applying stringent regulations over specific unacceptable or high-risk uses of AI, the EU hopes to establish global standards. For instance, under the AI Act, providers of general-purpose AI, such as generative AI, will need to keep documentation of the content used to train the model, comply with copyright law, and disclose that content is AI-generated. Such measures will likely advantage European companies that rely on open-source coding and are thus exempted from the Act's copyright requirement.

Can the AI Act replicate the Brussels effect? In some cases it may. For global platforms such as Meta that employ user data from a global network, it may be difficult to isolate their AI models geographically, making such companies more likely to comply with the AI Act to maintain access to the EU market.

For systems that utilise localised data, however, the Brussels effect will be harder to replicate. For example, a government may outsource the development of AI models for policing and surveillance, or to develop social credit systems. These are some of the most consequential and potentially problematic uses of AI technology, but providers may be able to train and tailor the AI model for the specific country. In this case, the provider of the AI model will comply with the AI Act only for products deployed within the EU.

Where the Brussels effect is weak, the ability to set standards through market dominance and develop quality products that meet user needs—but that also respect fundamental values—becomes even more important. In this case, scale and the ability to provide quality AI systems at low cost will help determine which standards are adopted by the recipient governments and societies. While some European AI companies are emerging – chiefly, the French LLM developer Mistral – they lack the scale of American rivals or the market presence of Chinese companies. This absence of “national champions” may limit the spread of the EU’s AI Act.

While the Act is the first of its kind, multiple formal standards-development efforts are taking place concurrently. International standard-setting organisations are a site of major contestation, while a growing number of multilateral initiatives, such as the G7's Hiroshima Process or the OECD's AI Principles, promote dialogue on shared AI standards. But given the rapid pace of innovation in AI and the differing interests of the parties, the outcomes of such processes reflect more of a declaration of intent than a concrete attempt to craft common rules. This reinforces the competitive aspect of standard-setting, pushing the main players to adopt dissonant rules that best promote their interests and values.

Regulatory fragmentation may hinder the development and rollout of AI, and the ensuing economic benefits it will deliver. On the other hand, the use of AI for illiberal purposes and manipulation poses inherent threats for liberal democracies worldwide. The race for AI standard-setting is only just beginning.

Disclaimer: This work was supported by the Department of Foreign Affairs and Trade under Agreement No. 78366. The views expressed herein are those of the authors and not necessarily those of the Australian Government or DFAT.

Darren J. Lim is a Senior Lecturer in the School of Politics and International Relations at the Australian National University.

Walter Brenno Colnaghi is a Research Officer in the School of Politics and International Relations at the Australian National University.

Anthea Roberts is a Professor at the School of Regulation and Global Governance (RegNet) at the Australian National University and a Visiting Professor at Harvard Law School.

This article is published under a Creative Commons License and may be republished with attribution.