On 14 March 2025, the Cyberspace Administration of China and three other regulators published the “Labeling Measures for Content Generated by Artificial Intelligence”, effective 1 September 2025. In line with these Measures, a mandatory national standard, “Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence”, has been officially approved and published and will take effect simultaneously with the Measures. These rules establish clear identification standards for AI-generated content, aiming to enhance transparency, curb misleading information, and protect the public interest.

Key requirements

  1. Explicit labeling and implicit labeling
    • Service providers must add visible markers (e.g. text, symbols, or audio cues) at specified positions for AI-generated content.
    • Metadata embedded in files containing AI-generated content must include production information (e.g. creator details and content attributes).
  2. Platform responsibilities
    • Content distribution platforms must verify metadata, add warnings for confirmed or suspected AI content, and update metadata with platform information during distribution in certain circumstances.
    • App stores must review AI labeling compliance by service providers during app approvals.
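The implicit-label requirement above (production information embedded in file metadata) can be sketched in code. The following is a minimal Python illustration that embeds a JSON label as a `tEXt` metadata chunk in a PNG file; the `AIGC` keyword and the JSON field names are hypothetical examples chosen for illustration, not the exact fields or format defined by the mandatory national standard.

```python
# Hedged sketch: embedding an implicit label into PNG metadata.
# Field names ("AIGC", "generated_by_ai", "producer") are hypothetical
# illustrations, not the fields prescribed by the national standard.
import json
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """Build a 1x1 grayscale PNG entirely in memory for the demo."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + pixel
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + idat + iend

def add_implicit_label(png: bytes, label: dict) -> bytes:
    """Insert a tEXt chunk carrying the label JSON right after IHDR."""
    # tEXt payload layout: keyword, NUL separator, text.
    payload = b"AIGC\x00" + json.dumps(label).encode("latin-1")
    # IHDR always comes first: 8-byte signature + 4 length + 4 type
    # + 13 data + 4 CRC = 33 bytes.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    return png[:ihdr_end] + png_chunk(b"tEXt", payload) + png[ihdr_end:]

labeled = add_implicit_label(minimal_png(), {
    "generated_by_ai": True,        # content attribute (hypothetical field)
    "producer": "example-service",  # creator details (hypothetical field)
})
```

A distribution platform's metadata check under point 2 would then amount to scanning incoming files for such a chunk and reading the embedded production information before republishing.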

Why it matters and action to be taken

  • Stricter requirements for identifying AI-generated content.
  • Non-compliance may trigger penalties from multiple authorities.
  • Given the 1 September 2025 implementation date, companies involved in AI content creation or distribution should promptly prepare to:
    • Implement labeling systems for all AI-generated content.
    • Update user agreements and internal policies to satisfy labeling obligations.
    • Prepare materials for filing and safety assessment.

China’s move reflects its proactive stance on AI governance, balancing innovation with accountability. Please stay safe and informed in the evolving world of AI.