China has recently promulgated a short but comprehensive regulation on generative Artificial Intelligence ("AI"), which took effect on August 15, 2023. Its overarching goals are to protect citizens from potential harms while privileging the "socialist values" of the state. Researchers, policy makers, and tech executives around the world, including some in China, nevertheless recognize that an international framework to manage AI may ultimately be preferable to promote AI safety, for example by facilitating limitations on AI use cases or by avoiding divergent practices. Many argue that AI is a global technology and that any likely harms will not be confined within the borders of one country. However, AI is a tool with many use cases, which can be regulated separately according to different societal values and legal systems.
At present, China, the European Union ("EU"), Japan, and the United States ("US") are pursuing different approaches to the development of generative AI, approaches that reflect their societal values and national policies. China acted quickly to formulate regulations that emphasize government oversight of the technology's development, while the EU, in its forthcoming AI Act, aims to comprehensively categorize AI models according to their level of perceived risk, with a heavy emphasis on the protection of human rights and privacy. Japan and the US are considering regulatory frameworks with a lighter touch, designed to encourage innovation while deterring particular harms such as bias, fake news, and threats to democracy.
Numerous international organizations and private research bodies, including the OECD, the International Organization for Standardization ("ISO"), and the World Intellectual Property Organization ("WIPO"), have recommended policies that could underpin responsible AI development and deployment. Recently, the Hiroshima AI Process for Global Governance has stressed the need for a consistent approach, but despite the obvious need for cooperation and collaboration, no consensus has yet developed on how to launch a specialized body that would regulate AI across international borders. Some commentators have suggested establishing a new "World Tech Organization" to standardize regulation, but given the existing policy differences among the leading jurisdictions, that vision seems far off. Prior experience cautions that new international bodies take time to launch: political will must be mobilized among stakeholders, funding arranged, and an agenda and process agreed upon. A new consensus must develop first.
In the meantime, harmonization of national or regional regulatory frameworks may be the more likely route. The EU-US Terminology and Taxonomy for Artificial Intelligence is a good starting place because agreement on terminology is a necessary basis for technical standards. It creates the shared frames of reference that are necessary building blocks, and it should be expanded to encompass Japan and others with similar regulatory goals and compatible legal regimes. At a minimum, countries need to induce good practices by the industry, including, for example, the need to "keep humans in the loop." For the foreseeable future, national regulations, and later bilateral accords to harmonize them, will likely dominate cross-border efforts to promote AI safety.