Jim Shaughnessy / Feb 2021
Artificial intelligence (AI) is rapidly transforming modern societies. AI is improving healthcare, optimising commerce, strengthening energy resilience, advancing employees’ careers, and driving human progress in countless other ways. Businesses use applications incorporating AI technologies across their operations to accelerate processes and deliver data-driven predictions that inform better human decisions.
Workday is a leading provider of enterprise cloud technology, helping customers adapt and thrive in a changing world. Our applications for financial management, human resources, planning, spend management, and analytics are delivered through the cloud and are highly trusted by thousands of customers and tens of millions of their employees. Today, many of our applications are enriched by a type of AI technology called machine learning (ML). We harness the power of ML to help our customers make more informed decisions and accelerate operations, while assisting workers with data-driven predictions that help lead to better outcomes. We believe that the most transformative uses of AI are those that leverage the insights and predictive power of AI to enhance human judgment and decision-making.
To achieve AI’s massive potential, there must be broad confidence that it has been developed ethically and is being used responsibly. In other words, it must be trusted. As with any emerging technology, concerns about AI’s potential risks must be addressed. These include questions about AI’s accuracy and safety, its impact on human autonomy and privacy, and whether AI will treat people equitably. Unless these questions are answered directly, people may lack faith that AI systems will treat them fairly, safely, and with dignity. Failure to address this trust gap early could significantly hinder the growth of AI. Companies will be less willing to invest in developing innovative AI solutions and bringing them to market if they fear that customers will not embrace them.
Workday believes regulation can support the development of AI and aid its acceptance. Policy makers across the globe face the challenge of crafting policies that drive groundbreaking research and enable rapid innovation and uptake while mitigating risks and building public trust. Many countries have launched AI strategies, and international organisations such as the OECD, the Council of Europe, and the World Economic Forum provide important fora for policy makers, industry, academia, and civil society to discuss common principles.
The European Union will be the first major jurisdiction to propose comprehensive legislation for trustworthy and ethical AI. We welcome the EU’s leadership in this area and we have engaged consistently in the European Commission’s preparatory work: the White Paper on AI and the AI High-Level Expert Group’s Assessment List for Trustworthy Artificial Intelligence.
To contribute to the AI policy debate, we recently published a proposal for a new regulatory approach to AI: “Building Trust in AI/ML Through Principles, Practice, and Policy.” I had the pleasure of discussing the paper with representatives from the European Commission, the European Parliament, industry, and academia at a recent Forum Europe event. In brief, we propose a Trustworthy-by-Design regulatory framework that would apply to producers of AI tools and that we believe would promote public trust and drive sustained AI/ML innovation.
The Trustworthy-by-Design framework draws on risk-based models from cybersecurity and data protection. We call for a regulatory framework flexible enough to encompass the broad array of AI applications, with their different use scenarios and risk profiles. Under our proposal, producers of AI tools would be subject to regulatory requirements for transparency, governance, and accountability. Among other things, they would be obliged to adopt and publish Trustworthy AI policies, put in place internal governance frameworks, conduct impact assessments, and address potential bias risks. These requirements would be mandatory and subject to oversight by regulatory authorities.
We have also had the opportunity to present our paper at an event with U.S. policy makers. We have consistently supported policy action on AI in the U.S. With the National Defense Authorization Act for 2021 finally becoming law, Congress has taken a much-needed step forward in artificial intelligence policy. The NDAA directs the National Institute of Standards and Technology to develop a trustworthy AI framework. We hope this framework will lay the foundation for continued responsible innovation in this growing area of technology, while also providing a U.S. perspective for ongoing dialogue toward a globally harmonised approach to ethical AI.
AI and ML technologies are global in nature, and they raise similar policy issues across countries and regions. The policy solutions to manage them would therefore benefit from cooperation among countries with shared values and similar political systems. We encourage the EU and the U.S. to work with partners globally to set regulatory frameworks that are harmonised and interoperable. We will continue contributing to these important policy debates in Brussels, in Washington, and in global fora.