The Draft AI Measures came just four months after the CAC gave effect to its first measures concerning AI, the Provisions on the Administration of Deep Synthesis Internet Information Services (the “Deep Synthesis Measures”). The reason for the CAC’s swift return to the legislative drawing board appears to be the recent surge in international popularity of chat-based generative AI, with the Chinese market seeing new entrants such as Baidu’s Ernie Bot and Alibaba’s Tongyi Qianwen. While the Deep Synthesis Measures focused on deep fakery in audio and video content, the Draft AI Measures cast a wider regulatory net over generative AI of all types. Notably, whereas the Deep Synthesis Measures concentrated on AI outputs, in particular deep fake audio and video, the Draft AI Measures would regulate training data and other inputs to generative AI models with equal focus alongside model outputs.
The Draft AI Measures come at a time of growing international scrutiny of AI. They add an important Chinese perspective to the debate, sketching a regulatory framework closely aligned with China’s general approach to the regulation of data, cybersecurity, and online content, one with a pronounced focus on maintaining political and social order.
To be clear, the CAC’s proposals do track a number of the substantive considerations seen in draft AI laws globally and in ethical frameworks for trustworthy or responsible AI – for example, the principles that AI be lawful and respectful of rights and interests and not propagate discrimination. However, the Draft AI Measures would also require that generative AI meet criteria seen in other aspects of China’s content regulation, such as the requirement that generative AI outputs be reflective of China’s socialist core values.
Critically, the Draft AI Measures would require businesses to obtain regulatory approval prior to using generative AI to provide services to the public. Given the complex nature of generative AI technologies, which are trained on vast quantities of data with a limited degree of human oversight, it is an open question what types of generative AI technologies can be brought within the constraints of the draft approval criteria and, more broadly, what balance between technological innovation and state control would be struck in practice if the Draft AI Measures were implemented as proposed. In practical terms, the Chinese government may see a far narrower and more tightly controlled set of acceptable use cases for generative AI than the more open-ended applications now emerging in the West.
Who would be regulated?
The Draft AI Measures apply to the research, development, and use of generative AI and to the provision of services to the public within China. Obligations under the Draft AI Measures mainly fall to “providers of generative AI services,” defined as individuals and organizations that use generative AI to provide services such as chat or the generation of text, images, or audio, including service providers that allow others to generate content through APIs or other means (“generative AI providers”).
Cutting across the complex debate seen in the European Union in relation to the AI Act, the Draft AI Measures simply state that generative AI providers shall bear responsibility for the content generated by their products. Generative AI providers would include both developers of generative AI that provide services directly to the Chinese public and those downstream providers of services that use others’ generative AI to provide services, including by integrating the generative AI into their own applications, products, or services through APIs.
The possibility that developers of generative AI will be responsible for the acts and omissions of collaboration partners and downstream providers will clearly raise important risk allocation issues that may be a critical constraint on the commercialization of generative AI in China.
It is not clear how the Draft AI Measures would deal with foreign market entrants to the Chinese market, in particular whether an element of targeting of the Chinese public is required in order for an offshore technology provider to be caught by the regime.
It also remains to be seen whether the use of “internal-facing” AI services used within organizations would be considered to be “the provision of service to the public.”
What are the regulatory approval and filing requirements?
Security assessment of generative AI products
Before offering a generative AI service to the public, generative AI providers must complete a security assessment (whether themselves or by a third-party security assessment institution) in accordance with the Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity.
Record-filing for algorithms
Algorithm recommendation service providers are also required to complete a record-filing with the CAC pursuant to the Provisions on the Management of Algorithm Recommendation of Internet Information Services. The filing includes the name of the service provider, the algorithm type, and an algorithm self-assessment report. As of April 2023, the CAC had announced the results of record-filings in four batches, which included algorithms from Tencent, Baidu, Alibaba, and ByteDance.
Depending on the specific business and services, other permits or licenses may also be required, such as the Internet Content Provider license or filing required of website operators, as well as compliance with industry regulations applicable to the specific use case or business activity in relation to which the generative AI is used.
How will generative AI content be regulated?
As an overarching principle, generative AI providers would be obliged to take responsibility for content generated by their products and adhere to the following principles enumerated under the Draft AI Measures:
Generative AI content would be required to reflect “socialist core values,” not harm national unity, not endanger national security, and not promote the subversion of state power or the overturning of China’s socialist system (Article 4(1));
Generative AI outputs must be accurate and truthful, with measures being adopted to prevent the generation of false information (Article 4(4));
Generative AI outputs must respect lawful rights and interests, prevent harm to physical and mental health, not infringe individual rights in their likeness, reputation or privacy, and not infringe intellectual property rights (Article 4(5));
Generative AI providers are prohibited from generating discriminatory content based on users' race, national origin, or gender (Article 12); and
In line with Article 17 of the Deep Synthesis Measures, images, video, and other content which might cause confusion or mislead the public are required to be conspicuously labeled in a way that alerts the public to the fact that natural persons, scenes, or information are being simulated (Article 16).
How will generative AI algorithms and training data be regulated?
In addition to seeking to regulate the outputs of generative AI, the Draft AI Measures also place significant focus on the inputs to AI models.
Article 4(2) of the Draft AI Measures provides that the design of algorithms, the selection of training data, model creation and optimization and service provision should all be conducted with measures in place to prevent discrimination on the basis of race, ethnicity, religious beliefs, nationality, gender, age, or profession. Generative AI inputs are regulated in a number of other ways:
Intellectual property rights and commercial ethics should be respected, and advantages in algorithms, data, platforms and so forth should not be used to engage in unfair competition (Articles 4(3) and 7(2));
Training data should conform to the requirements of the Cyber Security Law (CSL) and other laws and regulations (Article 7(1));
Where training data contains personal information, the requirements of data protection laws should be complied with, including obtaining consent of data subjects where required (Article 7(3));
The authenticity, accuracy, objectivity, and diversity of training data must be ensured (Article 7(4));
Rules and training for data annotation must be provided (Article 8); and
Generative AI providers must comply with transparency requirements and disclose information that could impact users' choices, including a description of the source, scale, type, quality, and other details of pre-training and optimized-training data (Article 17).
If generative AI providers discover that generative AI outputs do not conform to the requirements of the Draft AI Measures, they are required to adopt content filtering and other necessary measures to prevent the generation of such content within three months from the time of discovery (Article 15).
What data protection obligations would apply to generative AI?
The Draft AI Measures require that generative AI providers protect personal data as a personal information handler under the Personal Information Protection Law (PIPL) (i.e. a status equivalent to a “data controller” under the European Union’s GDPR).
It is also a specific requirement of the Draft AI Measures that generative AI providers not (1) illegally store users’ input from which the identity of a user can be deduced; (2) conduct user profiling based on user input information and log information; or (3) disclose user information to third parties (Article 11).
How would user interactions be regulated?
The Draft AI Measures would impose a number of obligations on generative AI providers in respect of their users, including obligations to:
Conduct real-name identification and authentication (Article 9);
Take measures to prevent users from becoming excessively reliant on, or addicted to, generative AI content (Article 10);
Establish a mechanism to receive and handle users’ complaints and respond to user requests (Article 13);
Ensure the stability of the lifecycle of their generative AI services (Article 14);
Provide guidance to allow users to understand the generative AI and make rational use of generative AI content (Article 18); and
Suspend or terminate service in the case of any improper use of the generative AI (Article 19).
What penalties would apply?
Generative AI providers that violate the requirements of the Draft AI Measures would be penalized in accordance with the CSL, the Data Security Law, the PIPL, and other applicable laws. In the absence of a specific penalty, the CAC and other competent authorities have discretionary powers to order sanctions, including issuing warnings and orders to take corrective action, ordering the suspension or termination of generative AI services, or imposing fines of up to RMB 100,000 (approximately USD 15,000).