Japan

Akira Matsuda is an attorney-at-law (admitted in Japan and New York) and a partner at Iwata Godo, heading the AI, technology, media and telecoms (TMT) and data protection practice group. He is based in Tokyo and Singapore. His practice focuses on cross-border transactions, including M&A, as well as international disputes (litigation and arbitration) and advice on digital and TMT-related matters. Mr Matsuda regularly advises Japanese and foreign clients on data security issues (under Japanese law, the Singapore Personal Data Protection Act and the EU General Data Protection Regulation (GDPR)), including on the structuring of global compliance systems. He also advises on complicated cross-border corporate investigation matters. He is a graduate of the University of Tokyo (LLB) and Columbia Law School (LLM).

Haruno Fukatsu is an associate at Iwata Godo. She is an attorney-at-law (admitted in Japan). Her practice focuses on general corporate matters and a wide variety of domestic dispute resolution. Her practice also includes corporate governance, shareholders’ meetings and M&A. Additionally, Ms Fukatsu has advised many clients on data protection and data security issues under Japanese law and the GDPR. She graduated from Osaka University (LLB) and Kyoto University (JD).

Kazuto Anzai is an associate at Iwata Godo. He is an attorney-at-law (admitted in Japan). His practice focuses on intellectual property, technology and communications, and general corporate matters. His practice also includes dispute resolution, finance and M&A. He graduated from Keio University (LLB).


1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

The main principles and guidelines relating to AI published so far by public authorities in Japan are as follows:

  • Draft AI R&D Guidelines for International Discussions, published in July 2017 by the Conference toward AI Network Society (a conference held by the Institute for Information and Communications Policy of the Ministry of Internal Affairs and Communications (MIC), together with advisers and experts, to study the social, economic, ethical and legal issues involved in promoting AI networking in society);
  • Social Principles of Human-centric AI, published in March 2019;
  • AI Utilization Guidelines: Practical Reference for AI Utilization, published by the Conference toward AI Network Society in August 2019; and
  • Governance Guidelines for the Implementation of AI Principles version 1.0, published by the Study Group on the Implementation of AI Principles in July 2021.

These principles and guidelines were formulated as non-binding soft law, and the government strongly encourages users of AI to take certain voluntary measures when using AI. The Social Principles of Human-centric AI states:

Since the development and utilisation principles of AI are currently being discussed in many countries, organisations, and companies, we emphasise it is important to build an international consensus through open discussions as soon as possible and to share it internationally as a non-regulatory and non-binding framework.

In addition, the AI Utilization Guidelines: Practical Reference for AI Utilization are intended to be shared as non-binding soft law, setting out a basic philosophy and best practices on how to use AI.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

In January 2016, the government issued its 5th Science and Technology Basic Plan (2016–2021) setting out goals for Japan to lead the transition from ‘Industry 4.0’ to ‘Society 5.0’. The Japanese government established an Artificial Intelligence Technology Strategy Council in 2016, which published an Artificial Intelligence Technology Strategy in March 2017.

In May 2018, the Cabinet Office adopted the ‘Declaration to be the World’s Most Advanced IT Nation’ and the ‘Basic Plan for the Advancement of Public and Private Sector Data Utilization’. Together, these set out a number of measures to be implemented without delay through governmental initiatives for the use of AI and the ‘Internet of Things’ to solve social problems.

In addition, the Japanese government is preparing to establish the 6th Science and Technology Basic Plan (2022–2026). This plan will cover the following matters: (1) the concretisation of ‘Society 5.0’; (2) the speedy implementation of ‘Society 5.0’ in society, with a sense of crisis; (3) communication and cooperation between science, technology and innovation policy and society, with human well-being, infectious diseases, disasters and the security environment in mind; (4) the enhancement of research capabilities and increased investment in research and development; and (5) the cultivation and globalisation of human resources to support the new society.

Regarding AI and data sharing, the Ministry of Economy, Trade and Industry (METI) published Contract Guidelines on the Utilization of AI and Data in June 2018.

The guidelines are divided into a data section and an AI section. The data section divides data contracts into three types: data provision, data generation and data sharing (platform type). It explains the structure and main legal issues for each contract type. The AI section explains the basic concepts of AI technology and the legal issues in the field of software development using AI technology.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

The Conference toward AI Network Society published Draft AI R&D Guidelines for International Discussions in July 2017. The guidelines elaborate on key ethical principles. Developers should strive to:

  • pay particular attention to the need to respect human dignity and personal autonomy;
  • take necessary measures to prevent unfair societal discrimination resulting from prejudice in the data learning processes of AI systems (eg, the big data used for algorithmic judgements about financial risk, housing, insurance or employment fitness can invisibly incorporate the effects of human prejudices); and
  • take precautions to ensure that AI systems have a negligible impact on human rights.

According to the Social Principles of Human-centric AI of March 2019: ‘Policymakers and managers of enterprises involved in AI must have an accurate understanding of AI, understand the proper use of AI in society and be knowledgeable about AI ethics.’

The Conference toward AI Network Society published the AI Utilization Guidelines: Practical Reference for AI Utilization in August 2019. These guidelines explain the principles of human dignity and personal autonomy as follows: ‘AI service providers and business users are expected to respect human dignity and individual autonomy based on the social context in AI utilisation.’

In July 2020, the conference published a report introducing case examples from enterprises and individuals, such as AI service providers and business users of AI.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

The Foreign Exchange and Foreign Trade Act regulates, among other things, export control from a national security and international trade administration perspective. Export control mainly focuses on classes of products, including dual-use technologies that can be used to develop nuclear weapons, missiles and biochemical weapons, with control extending to certain types of high-tech materials and machines. While these restrictions may apply to certain AI-based products, they are general in nature, unlike US export controls, which specifically target AI-based products.

5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

The main piece of data protection legislation in Japan is the Act on the Protection of Personal Information (APPI). The APPI was significantly overhauled in May 2017 to strengthen data protection. When sharing personal data with a third party located in Japan (unless the data is anonymised), consent of the data subject is required unless certain exemption requirements are met. The use of a ‘joint-use’ arrangement is frequent for data sharing as it allows group companies or entities involved in the same project to share personal data without the need to secure data subject consent if certain disclosure requirements are met. Furthermore, the transfer of personal data to persons located outside Japan is subject to data subject consent unless:

  • one can ensure that the receiving party has a structured data protection compliance system meeting Japanese law standards, through binding corporate rules or data transfer agreements (cross-border data transfer restriction); or
  • the receiving party satisfies Asia-Pacific Economic Cooperation Cross-Border Privacy Rules requirements.

Reciprocal adequacy decisions on cross-border data transfers between the EU and Japan came into effect on 23 January 2019; accordingly, the above-mentioned cross-border data transfer restrictions do not apply to data transfers to European Economic Area countries.

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

The Japanese government ensures due compliance with safety management and other regulations applicable to AI-based products.

The Road Transport Vehicle Act provides that an automobile may not be driven unless it satisfies technical standards for safety and environment protection prescribed by ministerial ordinances (articles 40 and 41). This means that, even if an AI-based automatic control system is developed, an automobile fitted with this type of new automatic driving system will not be allowed to be driven on public roads if it fails to satisfy the technical standards. A ministerial ordinance is yet to be issued to lay down technical standards applicable to such a system.

With respect to increasingly popular AI-based electronic appliances, the Electrical Appliance and Material Safety Act regulates electronic products for general use. The statute imposes obligations such as notification to the regulator at the manufacturing and importation stages, compliance with technical standards, periodic checks and labelling. A supplier of electronic appliances must make sure it complies with the technical standards prescribed by ministerial ordinances (article 8(1)).

See questions 1 to 3 on organisations formulating rules and strategies in relation to AI technology.

7 Has your jurisdiction participated in any international frameworks for AI?

Japan has participated in international discussions on AI under the aegis of various international organisations and treaty frameworks, such as the Organisation for Economic Co-operation and Development (OECD), the G20, the G7, UNESCO, the International Conference of Data Protection and Privacy Commissioners (ICDPPC) and the Global Partnership on AI (GPAI).

In May 2019, the OECD adopted its Recommendation of the Council on Artificial Intelligence, which is the first international standard agreed by governments for the responsible stewardship of trustworthy AI. This recommendation consists of measures and policies that governments should implement, including complying with the principles of ‘responsible stewardship of trustworthy AI’, and seeks to enlist major AI players.

In June 2019, the trade and digital economy ministers of the participating countries discussed the research, development and utilisation of AI based on a human-centred approach. At this meeting, the G20 Ministerial Statement on Trade and Digital Economy, which includes the ‘G20 AI Principles’, was adopted. This statement is the first G20 consensus regarding AI. The G20 Summit held in Osaka also discussed the G20 AI Principles, which were adopted as an annex to the official G20 Summit declaration.

In UNESCO and the G7, AI ethics is an important topic of debate, and the ICDPPC has begun discussions to develop AI guidelines from an ethical and data protection perspective.

The GPAI was established in June 2020 and supports cutting-edge research on, and the implementation of, AI in order to realise the development and utilisation of ‘responsible AI’.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

In Japan, there is no area that could be singled out as drawing the most attention regarding AI over the past year, although discussions continue on a wide array of AI-related topics.

In the privacy area, a bill amending the APPI, including the introduction of rules on the handling of pseudonymised personal data, was promulgated in 2020 and came into force on 1 April 2022 (see question 10). Moreover, in the field of competition law, the Digital Platform Transparency Act (DPTA) was promulgated in 2020 and came into force on 1 February 2021.

The background to the DPTA is the increasing role of digital platforms such as large-scale e-commerce websites and app stores, and growing concerns about the transparency and fairness of their terms and conditions. The DPTA is intended to regulate the activities of operators of certain digital platforms by requiring them to appropriately disclose the terms and conditions of their contracts with users and to take measures to ensure the fairness of their operations in Japan. Operators subject to the DPTA are required to report the status and results of their self-assessment in this regard to METI, and METI will assess the status of operations based on these reports and publish the results.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

According to a survey issued by MIC, the sector with the highest proportion of ‘AI active players’ is technology, media and telecoms, and the sector with the second highest proportion is financial services. An AI active player is a company that has introduced AI (including on a trial basis) for part of its business and that considers the introduction of AI to be a success.

A number of companies have introduced AI to increase productivity and efficiency and to reduce costs.

Although a number of major companies are considering the introduction of AI, they can be reluctant to proceed. Moreover, a large number of the companies that have introduced AI so far are small and medium-sized companies.

As a whole, the percentage of companies using AI in one way or another remains low, at 14.1 per cent (March 2019 data).

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

The amended APPI came into force on 1 April 2022.

The APPI was revised pursuant to article 12 of the Supplemental Act amending the APPI, which requires a triennial review of the rules governing data protection to reflect global trends and developments. Key measures in this amendment include:

  • a new obligation to report to the Personal Information Protection Commission and notify data subjects when personal information is leaked;
  • expansion of the extraterritorial application of the APPI; and
  • new rules on pseudonymised personal data, closely following the General Data Protection Regulation concept, defined as personal data processed so that the data subject cannot be identified unless the data is combined with other information.

These amendments can help mitigate the risks arising from the size of the digital universe, the growth of human- and machine-generated data, and the increasing diversification of the use of personal information and its wider circulation.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?

To assess and manage the risks arising from the deployment of AI, it is important to establish appropriate governance systems, such as internal policies, organisational structures, standard operating processes, rules for management oversight, and standards for reporting on and managing AI-related risks against that governance framework. However, there are no prevailing binding standards dealing with AI governance, and how to build hard rules is one of the topics still under discussion globally. Therefore, it still makes sense to refer to soft law, such as the Japanese guidelines already mentioned. In structuring a governance system, it is essential to consider the risks and issues frequently discussed as specifically inherent in the deployment of AI, such as fairness, ethics, accountability and transparency.


The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

A flexible way of thinking about the existing legal system and theories, and broad knowledge and expertise in the legal field, especially in data-related areas. Data is often likened to oil as the fuel of the future, and AI is hungry for data. For example, a data set may involve legal issues from various perspectives: personal data protection, telecommunications, copyright, unfair competition prevention and anti-monopoly laws. Understanding the basic concepts in these areas of law is essential to advising on the legal issues surrounding AI, especially at the development level and for service structuring.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

Automated driving and the wider use of robotics in society. Once automated driving reaches a certain technical level and robots become more widely used, legal issues will keep cropping up from their use. If the algorithm used by an AI system is not clear, it is sometimes difficult to comply with the transparency principle by explaining how personal data is processed and for what purposes. Data regulation experts should be involved in the structuring process from the outset for the whole system to be compliant with the regulations. Furthermore, more complex issues arise in a global context when, for example, the service offering is cross-border, as this requires the developer to take a multi-jurisdictional approach, which can be very challenging.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

Allocation of risks and liability. In the case of automated driving, Japanese legal theory and the legal system are based on the assumption that the person in the driver’s seat has a duty of care, and if an accident occurs due to a malfunction of the automated driving system (based on AI), the accident is attributed to that person’s lack of due care (monitoring duty). As such, Japanese tort theory currently requires human involvement. However, once an automated system based on AI is deployed, the key for developers and society is to ‘assess’ the trustworthiness of the AI-based system. Users could be exempted from liability if AI that is deemed safe and reliable is deployed.