Comment

Europe sets out to build its own brand of AI

Nigel Cameron / Feb 2020

Covid-19 isn’t the only virus replicating around the planet. Just look at the reports on AI.

When I asked one expert last week how many of these documents are out there, she said at least 120. Well, now we’re at 121.

The von der Leyen Commission’s much-anticipated White Paper dropped on February 19th. It follows the statement on AI in her own political guidelines, and the previous Commission’s Communication on Artificial Intelligence for Europe from last spring. And it’s accompanied by the Commission’s Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. Which, depending on how you count, could mean we’re up to 122.

The White Paper itself admits it’s hardly the only game in town.

There are parallel efforts by the OECD (endorsed by the G20), the Council of Europe, UNESCO, the WTO and the ITU. Not to forget the ongoing work of the previous Commission’s High-Level Expert Group on AI, which already came up with guidelines (last summer) but is still at it.

Moreover, there’s the UN Secretary-General’s “High-Level Panel on Digital Cooperation”, co-chaired by Melinda Gates and Jack Ma, which reported last summer. I was recently invited to an “expert consultation” hosted by the UN Human Rights Council in Geneva together with the Secretary-General’s office to address the human rights implications of AI.

So there’s a thicket of reports to cut through, and a hosepipe of recommendations. All of which points to the radical significance of the technology - not just for each and every jurisdiction, but for multiple aspects of our economic, cultural and political life. Pretty much everyone is talking about it.

Of course, these documents are highly duplicative. What’s distinctive about the White Paper is not just its European focus, but its sustained effort to forge a distinct EU brand of AI. It’s also notable for its lack of focus (vis-à-vis the UN report) on human rights. In fact the phrase is never used, and when the idea appears it’s in the distinctive EU-branded guise of “fundamental rights.”

Meanwhile, as the White Paper acknowledges, the EU faces pressing challenges, first (in a fraught budget year) in the basic area of investment. Digital in Europe is strong in B2B, but much less so in B2C, where America dominates (and China is waiting in the wings). And while there are multiple centres of engineering excellence, both in academia and industry, the raw investment numbers tell a demanding story - as the White Paper notes. In 2016, apparently the most recent year with data available for comparative purposes, total EU investment was €3.2 billion; in North America, €12.1 billion. The White Paper suggests a goal, to be achieved within 10 years, of €20 billion a year.

A second challenge, referred to more than once, is that of keeping all the Member States on a “level playing field” so that the Digital Single Market is not disrupted by some states applying significantly higher or lower standards. The White Paper notes that this has already begun to happen. Germany, Denmark and Malta are moving ahead in ways that could lead to “fragmentation in the internal market, which would undermine the objectives of trust, legal certainty and market uptake.”

So, looking back to the Commission strategy of 2019 and the work of the High-Level Expert Group, the White Paper notes that “while a number of requirements are already reflected in existing legal or regulatory regimes, those regarding transparency, traceability and human oversight are not specifically covered under current legislation in many economic sectors.” These are of course three of the core concerns raised by AI. A “clear European regulatory framework” is needed to “build trust” on the basis of these (currently) non-binding guidelines. (pp. 9-10)

The White Paper is far from proposing details. Certainly one way to read it is political – pushed out quickly, early in the life of the new Commission, to seize the initiative and discourage the more energetic Member States from taking their own policy proposals too far. So I was not surprised that when I asked Richard Sargeant of leading European consultancy Faculty AI for a comment, he came back with: “My overall take is that while it is well intentioned, their paper is too vague to be of any practical use to citizens or professionals.” Maria Axente, Responsible AI lead at PwC, responded a little differently. “Overall it’s a positive development in the regulatory space globally and a step in the right direction to allow us to build more responsible AI solutions, to deliver safe, fair and equitable outcomes.” She also notes as “particularly interesting, and with implications beyond regulation…the risk tiering approach, based on the impact and potential of risk to emerge, recommended initially by the German Data Ethics Commission.” (So the German effort is influential.)

The devil, as ever, is in the details. Axente again: “The paper also highlights the need for a mandatory a priori conformity assessment [which] could include procedures for testing, inspection or certification, a more coordinated public consultation and upskilling the AI practitioners on ethical issues, using the HLEG on AI framework. We are looking forward to providing a thorough critical assessment and consultation response….”

So what comes next? Actions to take forward the vision of AI that is “human-centric, ethical, sustainable and respects fundamental rights and values.” (p. 25) Where possible, no new structures should be created; instead, existing regulatory and other agencies should be tasked with AI-related responsibilities, with a strong role for Member States.

There’s quite a challenge ahead for the new Commission. Aside from playing nice with Member States and discouraging, or adopting, their own initiatives, the task is to promote European AI in what is explicitly recognized to be a “fiercely” competitive environment while taking a “European approach to excellence and trust.” What that entails, of course, is the likelihood of more constraints on technical and commercial development than may be the case among competitors. The hidden question behind these documents is: can such constraints be turned to commercial advantage, such that “European AI” takes on the appeal of, well, “German engineering” or “American burgers”?

The White Paper invites responses by May 19.

Nigel Cameron

February 2020
