Noah Greene / Jan 2024
The new year has brought more news about the European Union’s (EU) Artificial Intelligence (AI) Act. The final text of the EU AI Act was solidified in January of this year, making it the first broad regulation governing AI development and use. In 2023, Věra Jourová, Vice President of the European Commission for Values and Transparency, argued that the regulation will “intervene only where necessary, allowing AI to flourish.” The law intentionally avoids directly regulating the militaries of member states. Yet while the EU has so far avoided creating an AI governance tool that constrains EU militaries directly, the dual-use nature of the technology means the EU AI Act will inevitably shape military innovation among member states as new systems are developed.
Under the law, AI systems are categorised into four groups. In the bottom tier, systems viewed as posing minimal or no risk must simply comply with existing regulation. Second are systems viewed as posing limited risk, or transparency risk. These are products that pose a noticeable threat to European fundamental rights; as a result, additional transparency obligations apply. Third, high-risk systems are those with the ability to “negatively affect safety or fundamental rights.” AI that falls into this category must undergo screening before being placed on the market and requires further evaluation throughout the system’s lifecycle. Finally, there is a category of unacceptable risk, covering systems that in various ways threaten humans and their livelihoods and are consequently banned. Examples include systems capable of behavioural manipulation, emotion recognition in workplaces and schools, and biometric categorisation based on sensitive characteristics.
The finalised version of the EU AI Act attempts to carve out an exemption for member state militaries. As the European Commission has noted, the law “does not apply to AI systems that are exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.” This is easier said than done.
Advanced AI technology rarely originates solely in the military domain, with government technologists or defence contractors as the prime developers. Highly useful AI is often developed first by commercial providers and then makes its way into military systems through collaborative mechanisms. For example, Palantir’s Artificial Intelligence Platform (AIP) is a set of large language models (LLMs) designed for military customers seeking to maintain an integrated pipeline of sensitive information and targeting capabilities for operators in the field. Palantir is a publicly traded company that competes for contracts administered by the U.S. military and its allies. The LLMs Palantir has designed have been refined for military customers, but they are inspired by non-military systems such as OpenAI’s ChatGPT and likely would not exist without LLM innovation from outside the defence industrial base. The AI Act and legislation like it could have negative downstream effects on the ability of companies to develop highly useful systems that will enable future military capabilities.
It remains to be seen how the EU AI Act will influence private sector technology companies and defence firms. In the short term, compliance offices at leading AI labs will continue to maintain the resources necessary to wade through the legal uncertainties in Europe. Smaller firms with fewer resources may not have the same capabilities. Consequently, the pace of innovation in Europe will continue to lag.
Following the final adoption of the text, the law will undergo a transition period of two to three years. Still, a couple of measures could help mitigate the legislation’s negative impacts. First, the European Commission should ensure that any future AI legislation more closely considers how member state militaries may be affected. In practice, this could mean further collaboration with the North Atlantic Treaty Organization (NATO), whose technical expertise in this area could provide better input on how to create space for militaries. Other analysts have called for the EU to take additional steps to control the perceived risks of military AI. But with a war raging in Eastern Europe and the continuing challenge that China poses to the international order, European states should not hamstring their militaries’ ability to adopt new technologies. Militaries in democratic states need every advantage to counter these threats. Any further control measures should come only after a deeper assessment of the specific opportunities and risks posed by AI, which requires further collaboration between industry, academia, and government actors.
Second, EU member states should expand their national-level research and development (R&D) pipelines with the private sector. Counter-proposals during the AI Act trilogue negotiations attempted to reaffirm the military and national security competencies of member states. In essence, expanding national-level R&D would mean partnering with private sector firms so that they are involved in the development of AI systems from inception to field deployment. As it stands, expanding national security AI R&D pipelines could help keep such development outside the law’s controls, further limiting the regulation’s negative impact on innovation.
Amid the back and forth surrounding the creation of the AI Act, one prediction is safe: the EU will continue striving to be a leader in regulating AI and other emerging technologies. It should move forward in a way that avoids overly broad restrictions that threaten military innovation; otherwise, the cost will be borne by democratic states across the globe in the long run.