Daniel Schlaepfer and Hugo Kruyne / Jul 2018
Should artificial intelligence (AI) be regulated? And if so, how? These are the questions on everyone’s lips.
The European Union funds AI research while subjecting it to some of the world's strictest regulations. How that regulation should work is an especially acute question as the EU rolls out its ambitious initiative to increase European investment in AI research and innovation to at least €20 billion between now and the end of 2020, with the stated aim of catching up with China and the US.
The G7 might have recently signed up to a ‘Common Vision for the Future of Artificial Intelligence’, but there is no consensus in the tech industry on how regulation should look or work. On one side of the debate is Elon Musk, who, along with the late Professor Stephen Hawking, argued that the potential threats of AI mean we ought to think about regulation now, before it is too late. Others, such as Mark Zuckerberg, suggest there is little point in regulating now, when there is so little agreement on what AI even is, and that premature rules will simply stifle innovation.
But widespread regulation of the artificial intelligence industry is almost inevitable. Countless industries have faced similar dilemmas in their nascent eras, and AI regulators have a wealth of experience to draw upon before making the first stroke on this blank canvas.
Since the economic crash ten years ago, the financial industry has been the site of fierce debate over the level of regulation needed to avoid another catastrophe without stifling innovation and growth. That debate offers four key lessons to help regulators frame future EU legislation on AI.
Interconnections
The EU’s new Markets in Financial Instruments Directive (MiFID II) shows just how interconnected capital markets are around the world. MiFID II applies only to companies under European jurisdiction, but its effects have spread much further: the US Securities and Exchange Commission had to issue a 30-month grace period to ensure market participants would not be trapped between two contradictory sets of regulation.
When it comes to regulating AI, legislators must consider a similarly interconnected and global sector.
AI has the potential to be a game changer across major sectors including health, defence, transport, education and commerce. If the EU were to introduce significant AI regulation (a step the Commission conspicuously avoided in its Communication to the European Parliament) without broad, in-depth consultation with the AI community, as well as with the plethora of affected industries, it would be finger-in-the-air policy making, with potentially negative long-term consequences for Europe’s economic competitiveness.
Competition
MiFID II also reveals the risks of hyper-regulation. At a recent event, I showed how mammoth pieces of regulation like MiFID II risk concentrating an industry into a small number of monopolistic companies that are “too big to fail”, just as we saw in the financial sector over a decade ago.
AI is already dominated by a small number of companies, and ill-conceived regulation risks entrenching that dominance: smaller players get squeezed out by the burden of compliance, and new entrants find it harder to establish a foothold.
Regulators should focus on increasing competition and supporting new entrants to the market by avoiding unmanageable compliance burdens.
Flexibility
Good regulation must be nimble and evolutionary. No piece of regulation is ever complete at the moment of its implementation, and it must continually adapt.
The Volcker Rule, which restricts US banks from making certain kinds of speculative investments that put their customers’ capital at risk, is a good example. The Federal Reserve recently concluded that the rule was not working according to its original intent, and so adapted it and its implementation to better match its policy goals.
As AI rapidly evolves, regulation must evolve with it. Part of this means ensuring that every part of the AI industry’s ecosystem plays a role.
But flexibility must not mean rolling back legislation for short-term gains. A book by Carmen Reinhart and Kenneth Rogoff shows how Depression-era regulations of the 1930s were rolled back, only to precipitate the crash of 2007-8. Independent and rigorous analysis from a broad set of political and commercial perspectives is needed to ensure regulation does not fail European citizens.
Ethics
The use of AI is becoming ubiquitous in financial markets. Morgan Stanley, for example, recently announced that it uses artificial intelligence to tailor its communications to clients during market turmoil. But I argue that financial services face great risks if human incentives and liabilities are eroded by AI decision making. Moreover, when AI gives advice, it is not bound by the moral or ethical restrictions that human financial advisers should work under.
Similarly, the Commission’s AI initiative aims not only to boost investment but also to establish clear ethical guidelines.
Any serious discussion of how to regulate the AI industry effectively must not be dictated by sermons of the converted, or by those who stand to gain lucrative board positions or funding grants from big tech companies.
Conflicts of interest must be overcome by making the policy-making process a broad tent, with stakeholders at all levels involved in a meaningful way.