The EU AI Act: its international implications for AI policy development and cooperation

Josh Meltzer and Aaron Tielemans / Sep 2022


The EU AI Act (AIA) is the world’s first attempt to regulate AI horizontally, such that all AI systems are covered. Given the AIA’s extraterritorial application and its likely demonstration effect for governments considering AI regulation, the following highlights some of the key issues in the AIA that will have implications for AI regulation globally, as well as for efforts to build international cooperation on AI.

Next steps for the AI Act

The AIA was proposed by the European Commission in April 2021 and is now under review by the Council of the European Union and the European Parliament. A joint report from the two parliamentary committees leading on the AIA, the Committee on Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE), is expected this autumn. The Parliament and Council will likely attempt to reconcile their approaches to the AIA in trilogue. A successful compromise would tee up the AIA for formal adoption in 2023.

The AI Act so far: International alignment or divergence?

The AIA addresses several issues crucial to international cooperation on AI regulation, including its definition of artificial intelligence, the scope of its risk-based approach, and its approach to developing AI standards, enforcement and conformity assessment.   

Defining artificial intelligence

The Act’s definition of AI systems will determine its scope. A narrow definition would leave certain systems out of scope, but too broad a definition risks extending the AIA to less risky algorithms. As the AIA’s definition will be the first adopted for regulatory purposes, it could become a reference point for other countries, facilitating global regulatory consensus. Parliament has proposed a broad definition of AI in line with the Commission’s original, while the Council has proposed narrowing the definition to machine-learning systems and knowledge-based approaches.

Whether the AIA should apply to general-purpose AI (GPAI) is another significant ongoing debate that could further expand the AIA’s scope. Supporters of its inclusion argue that AI developers are best positioned to account for misuse and should therefore be covered by the AIA, while others argue that the very nature of GPAI, which allows for a multitude of uses, makes it untenable for developers to anticipate and be responsible for every use.

A risk-based approach to regulating artificial intelligence

The AIA will use a “proportionate” risk-based approach which “imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.” Risk-based regulatory approaches are explicitly endorsed by most governments considering AI regulation. However, the details of how the AIA weighs risks and benefits, and how it identifies which AI systems pose unacceptable risks (and are therefore prohibited) or high risks (to which the AIA’s most comprehensive requirements apply), will determine whether countries can align on a common risk-based approach to AI regulation.

Moreover, the scope of the AIA will ultimately be determined both by the definition of an AI system and by which AI systems are deemed unacceptably risky or listed as high risk.

Governance, enforcement and self-assessment mechanisms

How the AIA is governed and enforced will also be key to assessing its overall impact. As it stands, the EU plans to create a European Artificial Intelligence Board (EAIB) comprising the European Data Protection Supervisor, the Commission, and national supervisory authorities. IMCO and LIBE have proposed strengthening the EAIB’s role to complement national enforcement and to avoid fragmented enforcement across member states, as has occurred so far with the GDPR.

Globally common approaches to assessing conformity with the AIA will underpin the interoperability of AI regulation and facilitate international trade in AI. Under the current proposal, providers of high-risk AI systems that primarily implicate fundamental rights can rely on self-assessment to determine conformity with the AIA, whereas AI systems that raise product-safety risks require third-party conformity assessment by existing notified bodies. Some MEPs remain skeptical of self-assessment, while others worry about overburdening companies, particularly SMEs.

The AIA will also have implications for AI standards development in the EU and globally. The “presumption of conformity” granted to high-risk systems that comply with standards developed by the European Standards Organizations creates a strong incentive to develop and use European-based AI standards. It will be important to balance Eurocentric standards-development mechanisms with the need for global, interoperable standards that reduce barriers to the diffusion of AI.

The AI Act and the bigger picture

The EU’s development of comprehensive horizontal AI regulation in the AIA needs to be understood in the context of Europe’s broader digital strategy. In particular, the Data Governance Act, the proposed Data Act, and the GAIA-X project will also affect data access, use, and storage, and by extension the development and operation of AI systems. How these regulations will interact, and whether they will produce a so-called Brussels effect, remains unclear while they are still under development and as other governments draft their own AI and digital regulations.

There is a range of ways the AIA could affect AI regulation globally. The EU’s intent for the AIA to become a global standard, and its engagement in international forums on AI, mean that a successful AIA could offer a baseline for other governments’ approaches to AI regulation. However, the risk remains that the AIA will diverge from other approaches, creating barriers to cooperation and costs for AI development. The ongoing debates in the Parliament and Council are crucial in setting this course.

These developments in the AIA underscore the importance of finding ways to develop international cooperation on AI. In this regard, EU participation in the Forum for Cooperation in AI (FCAI), a joint Brookings-CEPS convening of multistakeholder dialogues aimed at strengthening opportunities for international cooperation on AI, as well as in the Global Partnership on AI and the OECD AI Policy Observatory, provides important platforms for building such cooperation. The inclusion of AI in the EU-U.S. Trade and Technology Council demonstrates further potential for proactive regulatory alignment.

