Australian Outlook

Reforming the European Union’s Proposed AI Regulatory Sandbox

06 Oct 2023
By Ryan Nabil
Flags in front of the European Commission building in Brussels. Source: Sébastien Bertrand / http://tiny.cc/lnxbvz

Regulatory sandboxes can help governments worldwide develop flexible, evidence-based rules for artificial intelligence (AI) and promote innovation. Unfortunately, several weaknesses in the European Union’s proposed AI sandbox could undermine its long-term potential to help the continent remain globally competitive in a rapidly changing international AI landscape.

With the European Union’s landmark Artificial Intelligence Act expected to become law by December 2023, European lawmakers need to think more strategically about AI innovation. Even though Europe increasingly risks falling behind China and the United States in global AI competition, the EU’s proposed AI Act continues to lack adequate mechanisms to foster innovation and craft flexible AI rules. A carefully designed AI sandbox could help the EU better manage its innovation challenge and develop a more flexible, well-calibrated approach to AI governance.

“Regulatory sandboxes” are government-created programs through which companies can offer innovative products and receive support and advice from regulators. In 2016, the United Kingdom’s Financial Conduct Authority (FCA) launched the world’s first regulatory sandbox to promote financial technology (fintech) innovation. Since then, more than 50 countries have established similar fintech sandboxes to promote innovation. Although regulatory sandboxes were initially introduced in financial services, well-designed AI sandboxes can help governments craft flexible AI rules and improve growth and productivity across the entire economy.

More specifically, an AI sandbox would allow innovative startups and tech companies to offer AI-enabled products under close regulatory supervision for a limited period and receive regulatory waivers, expedited registration, or guidance for compliance with relevant laws. Meanwhile, regulators would gain a more in-depth understanding of how emerging technologies and AI-enabled business models interact with current and proposed legal frameworks. Based on such practical insights, policymakers can create or update rules that promote innovation while minimising potential risks.

Recognising the potential benefits of AI sandboxes, governments in Brazil, Norway, and the UK have announced the creation of such programs. Ten projects have already participated successfully in Norway’s AI sandbox, while another two are currently testing privacy-friendly facial recognition technologies and digital robots to detect and prevent child abuse, according to Datatilsynet, the Norwegian data protection authority.

As part of the EU’s AI Act, the European Commission — the bloc’s executive arm — has recommended the creation of AI sandboxes at the national level, with Spain becoming the first EU member to launch an AI sandbox in June 2022. However, the innovation potential of such sandbox programs can be realised only if they are designed properly, as I recently pointed out in written submissions to the White House and the UK government. That is precisely where the European approach is lacking, which is why EU leaders must improve how European AI sandboxes are designed and implemented.

First, to attract innovative companies, an AI sandbox must provide some form of regulatory relief, such as exemptions from certain regulations or an expedited registration process. However, unlike successful fintech sandbox programs in Britain, Singapore, and several US states, the EU’s proposed AI sandbox offers no such relief. Without these benefits, the incentives for participating in European AI sandboxes will be limited, as evidenced by the relatively muted private-sector interest in the Spanish sandbox thus far.

Second, innovation does not appear to be the most important priority of the EU’s sandbox strategy. By contrast, the UK government has put forward concrete measures and frameworks for how sandboxes will form part of the country’s pro-innovation approach to AI. In the British context, practical insights gained by regulators would be used by the government to review and update UK rules and guidelines for AI applications in different sectors. The European AI strategy would also benefit from similar mechanisms — adapted to the EU’s legal system and institutional architecture — through which national and EU authorities could periodically review regulatory lessons from AI sandboxes and evaluate whether updates to current AI rules are necessary.

Third, instead of creating sector-specific sandboxes, the European Commission recommends the creation of general AI sandboxes at the national level. However, such cross-sectoral sandboxes would inevitably face the considerable challenge of regulatory coordination. How would existing sectoral regulators (e.g., financial services) and the national AI regulator cooperate and jointly regulate AI-enabled products and services in specific sectors? Without effective mechanisms for regulatory coordination — either at the national or EU level — European AI sandboxes might be constrained in their long-term innovation potential.

Furthermore, since AI applications vary greatly by industry, sector-specific sandboxes would allow policymakers to understand AI applications in different contexts and advise EU institutions on whether existing rules must be updated in light of technological developments and emerging risks.

Finally, the proposed AI sandbox should be open and easily accessible to both European and non-EU entities, especially since a growing number of the world’s most innovative startups and tech companies are based in the Asia Pacific and North America. For many companies and startups outside the EU, participating in the sandbox could be an attractive way to achieve compliance with the AI Act and enter the European single market. Attracting these companies could be especially beneficial at a time when many EU member states need more high-tech investment and skilled AI professionals.

Nevertheless, certain proposals — like last year’s proposed requirement that no data within an AI sandbox could be transferred to a third country — could make it significantly harder for foreign startups and companies to participate in the program. That is why European policymakers will need to ensure that cumbersome regulations do not effectively bar innovative non-EU entities from participating in the sandbox and contributing to European tech innovation.

Ultimately, while the European Union’s efforts to create the world’s first comprehensive AI legislation are commendable, Europe needs a more flexible approach. Carefully designed sandbox programs can allow the EU to develop a more business-friendly regulatory environment and a thriving AI ecosystem that helps the continent maintain its competitive edge in a rapidly changing global AI landscape. Europe’s leading centres for AI innovation – especially France and Germany – should seek to lead the way and advocate a more innovation-friendly European approach to AI.

Ryan Nabil is the Director of Technology Policy and Senior Fellow at the National Taxpayers Union Foundation, a US think tank in Washington, DC. He formerly served as a Research Fellow at the Competitive Enterprise Institute and Fox Fellow at the Institut d’Études Politiques de Paris (Sciences Po).

This article is published under a Creative Commons License and may be republished with attribution.