Enterprises around the world are rapidly incorporating artificial intelligence (AI) into existing and new products and processes. This effort is not just to improve such offerings and services, but to achieve a qualitatively higher level of capability not possible before. It is clear that AI carries the potential for many new opportunities, across all industries, but it is also already recognized that it brings numerous risks as well. As with any technology, senior management and board directors need to be aware of both the opportunity and the risk in order to successfully and responsibly manage the enterprise. The opportunities are great—AI can assist in robotic process automation (RPA), machine learning, natural language processing, finding new drugs and therapies, and will be essential for driverless transportation—but if the risks are downplayed or overlooked, there can be serious reputational and/or legal consequences.
AI products and services will be covered by numerous areas of the law, including privacy, data security, products liability, intellectual property, and antitrust, among others. Further, it is expected that these various areas of law will change in response to AI. Because AI is an emerging technology and due diligence regarding legal risks is not yet routine, legal compliance efforts call for creativity and a commitment to maintain awareness of emerging trends.
In recognition of the power of AI, a regulatory framework is being created, and in fact is being called for by some of the leading enterprises involved in AI. At this time the proposals are structured as principles and guidance, but it is expected that a regulatory system requiring compliance will emerge. Progress towards a framework is moving at a rapid pace but along somewhat different paths for different industries, and in different jurisdictions, such as in the EU, the U.S., and Japan. In some cases, highly divergent rules are emerging.1
The challenges for management and board directors are twofold: first and foremost, assessing the positive potential use cases of AI within their own business and creating an appropriate strategy for how best to integrate AI. The second challenge is to carry out a strategic-level analysis of the business risks of AI, and to establish and monitor a pro-active, forward-looking compliance program. With the exception of certain sectors and products, there are currently no over-arching and prescriptive "hard-law" rules governing AI development. However, governing principles are emerging in international fora2, multi-nationally, nationally, and even at the local (state, province, etc.) level. Below we highlight some regulatory initiatives now underway in Europe and Japan.
AI regulation in the EU
In 2019, the European Commission published Ethical Guidelines for Trustworthy AI in order to instruct businesses and the public about their expectations concerning the proper development and use of AI. The Guidelines, together with the General Data Protection Regulation, place the EU once again in the position of setting high standards for those who wish to do business in the EU and perhaps globally.
The principles include:
- Human agency and oversight3
- Robustness and safety4
- Privacy and data governance5
- Transparency6
- Diversity, non-discrimination and fairness7
- Societal and environmental well-being8 and
- Accountability9
The EU Guidelines include a pilot Trustworthy AI Assessment List for use by companies when developing AI systems. It is important to note that the Guidelines do not include any legally binding mandates for the moment, but the European Commission has indicated that it will review this position in 2020.10
The challenge in the EU is that there is no single system of regulation that covers AI. Instead, there are various laws that are applicable (or potentially applicable) to the development and implementation of AI technologies. These laws include but are not limited to intellectual property law, data protection law, consumer protection or product liability laws, computer misuse laws, and human rights laws.
In February 2020, the European Commission released a White Paper on AI.11 Releasing a White Paper is a common first step in the preparation of EU legislation. The purpose of the AI White Paper is to seek input and proposals on the development of a common EU framework for the regulation of AI. However, it is not unusual for the final legislation to look very different to the initial White Paper. The AI White Paper notes that a number of EU Member States have adopted inconsistent approaches to AI regulation at a national level.
There are several major challenges with the approach taken by the AI White Paper. In particular, although the AI White Paper points to the work of the High-Level Expert Group in defining AI, no concrete definition is provided. Further, the precise legal wrongs that the AI White Paper is intended to address are not clearly set out. Another challenge is that the AI White Paper proposes separating AI applications into high-risk and non-high-risk categories, but very often businesses will not know which category applies until after the fact. Lastly, there appears to be a significant risk of regulatory overlap with existing laws that already apply to many AI technologies (e.g. the GDPR).
Although there are clearly a number of challenges that businesses would face if the AI White Paper is implemented, there are two positive factors: (i) the AI White Paper acknowledges that AI can be a force for societal good; (ii) the creation of pan-EU consistency on regulation of AI could, in principle, reduce the level of compliance challenges that businesses currently face as a result of divergent requirements from one EU Member State to the next.
Businesses that are affected by these issues should consider whether to lobby for changes in approach.
AI regulation in Japan
1) Japanese laws
a) Social Principles of Human-centric AI12
Similar to the experience in Europe, there is no comprehensive regulation of AI in Japan at this time.13 Nevertheless, a number of existing laws are applicable to AI, including the Constitution and laws pertaining to contracts, torts, certain economic statutes, intellectual property, personal data, privacy and the criminal code. While there is debate as to whether the current legal framework is suitable for the future development of AI (as described in more detail below), the Cabinet Office and various ministries have sought to influence AI by promulgating various strategies and guidelines. For example, in May 2018, in order to realize an "AI-Ready society" and to promote appropriate and proactive social implementation of AI, the Cabinet Office established a study group of multiple stakeholders from industry, academia, and the private sector and tasked it with formulating "Social Principles of AI", to which society (especially legislative and administrative authorities) should pay attention. In March 2019, the Cabinet Office released a document titled "Social Principles of Human-Centric AI" after receiving public comments. This document defines three basic principles: (i) Dignity - a society in which human dignity is respected; (ii) Diversity and Inclusion - a society in which people with diverse backgrounds can pursue their own well-being; and (iii) Sustainability - a sustainable society. These principles are not legal principles themselves but could be used as guidelines to interpret the existing laws. The document includes the following principles:
i) AI Ready Society - Essential social revolution to achieve Society 5.0:
Even if complex processes can be entrusted to AI to some extent, it is necessary for humans to set the objectives that answer the question, "what is the purpose of using AI?" In answering that question, the document identifies five important aspects: (i) humans, (ii) social systems, (iii) industrial structures, (iv) innovation systems, and (v) governance.
ii) Social Principles of Human-centric AI:
The document systematizes the basic AI principles into (a) "Social Principles of AI", to which legislative and administrative agencies in particular should pay attention, and (b) "Development and Utilization Principles of AI", to which researchers, developers and user enterprises of AI should pay attention.
iii) "Social Principles of AI" consists of seven principles: (i) Human-centric, (ii) Education, (iii) Privacy, (iv) Security, (v) Fair Competition, (vi) Fairness, Accountability, and Transparency, and (vii) Innovation.
iv) "Development and Utilization Principles of AI" has not yet been established. The document emphasizes the importance of building an international consensus through open discussions as soon as possible.
b) AI Utilization Guidelines
In October 2016, Japan's Ministry of Internal Affairs and Communications organized a study group named "Promotion Council for AI Network Society" (AI network shakai suishin kaigi) (the "Committee") to further the discussion about AI networks and formulate necessary guidelines. The Committee published (i) a draft of "Guidelines for AI Development for International Debate" (kokusaitekina giron no tameno AI kaihatsu guideline an)14 in July 2017, in which the Committee proposed AI development principles addressed to AI developers, and (ii) a draft of "AI Utilization Guidelines" (AI rikatsuyou guideline an)15 in August 2019, which provides guidance for AI users, including but not limited to AI service providers and business users of AI systems. Although the movement toward the formulation of AI regulation is relatively slow in Japan, some large Japanese corporations, such as SONY, Fujitsu, NEC and NTT Data, have developed their own AI policies and published them online.
2) Governance Innovation
As a major player in the world economy, Japan wants to play a role in shaping the global future. Virtually every major company has embarked on some type of "digital transformation" and AI will play a major part. Many Japanese companies are known for their prowess in manufacturing (called "monozukuri" in Japanese) but now they have to learn how to marry that with software or as some would say "mono" and "koto". Those companies that learn to do it well will be more productive and more innovative. Consequently, a new policy perspective in Japan is developing around the notion of "governance innovation".
On July 13, 2020, METI published a White Paper that addresses the need for new governance models with respect to big data, the Internet of Things, artificial intelligence and other digital technologies. While this is still under debate within industry and among government officials, the argument is that in order for regulations to keep up with the changes in technology and foster innovation, a new regulatory paradigm is needed. The basic insight is that physical space and cyberspace are so integrated that the legal frameworks used until now are inadequate to address the risks and opportunities posed by new technologies and can impede innovation.
Much of the thinking in the METI White Paper seems to have been based on the analysis of regulation provided by Professor Christopher Decker in the U.K. He posits the difference between "rules-based" approaches to regulation and "goals-based" approaches. Of course, these are just paradigms with pros and cons but simply stated, a rules-based regulation would state that a car may not travel at more than 80 km per hour. A goals-based regulation would state that a car must be operated in a reasonably safe manner. The end is the same but the approach to regulation is radically different.
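As a loose illustration only (the thresholds, function names, and safety heuristics below are hypothetical, not drawn from any actual statute or the METI White Paper), the contrast between the two paradigms might be sketched as:

```python
# Illustrative sketch: rules-based vs goals-based compliance checks.
# All names and numbers here are invented for the example.

SPEED_LIMIT_KMH = 80  # rules-based: a fixed, prescriptive threshold


def complies_rules_based(speed_kmh: float) -> bool:
    """Rules-based check: a bright-line test that is easy to verify."""
    return speed_kmh <= SPEED_LIMIT_KMH


def complies_goals_based(speed_kmh: float, visibility_m: float, road_wet: bool) -> bool:
    """Goals-based check: 'operate in a reasonably safe manner' must be
    interpreted from context, so the test weighs conditions rather than
    comparing against a single fixed number."""
    safe_speed = 100.0
    if road_wet:
        safe_speed -= 30.0
    if visibility_m < 50:
        safe_speed -= 40.0
    return speed_kmh <= safe_speed


# The same behaviour can pass one test and fail the other:
print(complies_rules_based(75))                                   # True: under 80 km/h
print(complies_goals_based(75, visibility_m=30, road_wet=True))   # False: too fast for fog and rain
```

The point of the sketch is that the rules-based check is trivially auditable but blind to context, while the goals-based check tracks the intended outcome (safety) at the cost of requiring interpretation.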
In most cases to date, rules set by government bureaucrats for vertically organized industries with detailed prescriptions are inadequate for AI technology – because technologies such as AI change too rapidly and implementations can vary widely, making it difficult to link a particular process to the desired outcome. Thus, it is thought that rules-based regulation cannot keep up with the technology. Instead, regulators will need to make sure that industry is at least driving towards appropriate goals.
As noted above, the legal framework for AI is evolving, so industries have relied upon self-generated guidelines to govern their activities. For example, AI guidelines published by a number of companies often provide, among other things:
(a) That the guidelines will apply to all officers and employees when they research, develop, manufacture and sell new products;
(b) The company will engage in dialogue with stakeholders;
(c) AI products will contain access security and privacy protections;
(d) AI will not discriminate against individuals or violate their human rights;
(e) AI products and services will provide transparency concerning how decisions are reached.
These guidelines seem to share many of the concerns that are being discussed in the EU White Paper on AI.
To be sure, goal-based regulations will not be sufficient, and rule-based regulations will still exist in the new environment. Consider again self-driving vehicles. Now, safety is ensured by actions taken by the driver and the vehicle hardware. This will shift to a combination of safety ensured by control software and the driving environment, including communications among other self-driving vehicles, manned vehicles, pedestrians, and others using the road. While protocols and the vehicle itself will still be under rule-based regulations, it seems the decisions by AI that replace the human element will need to be goal-based, even if they are fleshed out over time.
What should AI developers do?
Consult the EU Trustworthy AI Assessment List
This checklist contains a number of useful questions that designers should consider before releasing an AI product or service. An early internal assessment will help mitigate foreseeable harms and reputational damage.
Develop company level guidelines
Developers should specify what they will and will not do with the AI system they are creating. Comparisons with industry peers can be useful.
Communicate your guidelines to employees, customers and the public
All employees, and especially AI designers, need to know what the company policy is so that harmful features are not incorporated into products and services. They should also understand the current rules and how those rules may evolve.
Continuously monitor regulatory developments
Generally speaking, with the exception of certain products, binding rules embodied in omnibus AI legislation have not been adopted in the EU or Japan, but as more experience is gained, it seems likely that this situation will change, particularly given the pace at which new applications are being developed.
Dialogue with others
While being mindful of potential antitrust concerns, developers should consult with others in industry associations and with governments to form a consensus on ethical AI and with a view to harmonizing standards.