
Progressive AI regulation

By Zhang Linghan | China Daily Global | Updated: 2026-04-28 19:28
[Illustration: SONG CHEN/CHINA DAILY]

A fast, continuous and risk-responsive approach to governance enables the country to keep pace with rapid technological change

Globally, artificial intelligence governance is shifting from principles to practice. In this transition, China’s approach stands out not for a single overarching law, but for a pattern of rapid, targeted regulatory responses to concrete problems. Rather than regulating AI in the abstract, regulators have intervened at specific moments when technological applications began to generate tangible social concerns.

One of the earliest governance challenges emerged not from advanced AI systems, but from recommendation algorithms embedded in digital platforms. Systems for content generation, personalized recommendations, ranking and selection, search filtering and resource allocation have become the core engines through which platforms collect and process data, distribute information, and organize attention.

As these systems gained influence, concerns grew: information bubbles, manipulation of public opinion, unfair competition and opaque decision-making, among others. Algorithms were no longer neutral tools — they became central both to platform value and to potential societal risk.

China’s response was the Provisions on the Administration of Algorithm Recommendations in Internet Information Services, which came into force in March 2022. As the world’s first dedicated regulation targeting algorithmic recommendation systems, it marked a significant institutional step.

Rather than focusing solely on technical design, the regulation targeted how algorithms operate in real-world services. It introduced a multi-dimensional regulatory framework, including algorithm filing, security assessments, risk monitoring and enforcement mechanisms for unlawful practices. By doing so, it established clear legal boundaries for platforms’ use of algorithms, aiming to safeguard information integrity, maintain fair competition and protect user rights.

A second wave of concern arose with the rapid development of deep synthesis technologies. Tools capable of generating highly realistic images, audio and video began to blur the boundary between authentic and fabricated content.

The risks were immediate: misinformation, identity misuse and a broader erosion of trust in digital information environments. In response, China introduced the Provisions on the Administration of Deep Synthesis of Internet-based Information Services in 2022, extending governance beyond platforms to include technology providers and users.

At the center of this framework is the service provider, which serves as a regulatory nexus. Providers are required to fulfill multiple obligations: implementing clear labeling of synthetic content, establishing mechanisms for detection and correction of misinformation, conducting algorithm filing and ensuring compliance across the service chain. At the same time, they are expected to manage upstream technology providers and downstream users — requiring them to uphold information security responsibilities and ensure that synthesized content is clearly identifiable.

The provisions also established a content labeling regime, which has since become a cornerstone of China’s evolving approach to AI content governance.

The emergence of generative AI systems marked another turning point. Unlike earlier technologies, these systems are not confined to specific applications. They can generate text, images, code, and interact across a wide range of scenarios, making them foundational tools rather than discrete services.

This generality introduces a new governance challenge. Risks are no longer limited to a single domain, but may arise unpredictably across multiple contexts — from misinformation and bias to economic and social disruption.

The promulgation of the Interim Measures for the Administration of Generative AI Services in 2023 reflected an attempt to address this complexity through a layered and adaptive framework. The regulation focuses primarily on the service layer, particularly information content security, while leaving space for continued technological development at the model level.

In this sense, it balances continuity and innovation. It builds on existing regulatory structures — such as content governance and platform responsibility — while introducing new mechanisms tailored to generative AI. At the same time, the “interim” nature of the regulation signals an openness to future evolution, leaving room for a more comprehensive AI law as the technology matures.

In 2025, building on the 2022 foundation of content labeling, China issued the Measures for the Labeling of AI-Generated and Synthetic Content, alongside supporting national standards. The measures further enhanced the country’s operational regulatory framework for AI-generated content.

The latest governance challenge goes beyond information and content. It concerns the nature of human-AI interaction itself.

AI systems are increasingly designed to simulate human-like communication through virtual companions, conversational agents, and digital personas. These anthropomorphic systems do not merely provide information; they engage users emotionally and socially.

This shift introduces a distinct set of risks. Users may develop emotional dependency on AI systems, potentially leading to social isolation. Human-like interfaces can obscure the artificial nature of the system, raising concerns about cognitive manipulation and subtle value shaping. For vulnerable groups, such as minors and the elderly, these risks may be particularly acute.

China’s Interim Measures for the Administration of Anthropomorphic AI Interactive Services (2026) represent a timely regulatory response to these emerging concerns. The regulation emphasizes a human-centered approach, requiring clear disclosure of AI identity to prevent user misperception. It also mandates enhanced protections for vulnerable groups and reinforces the positioning of AI as an assistive tool rather than a substitute for human relationships.

Taken together, these regulatory responses reveal a distinct pattern. China’s approach is not centered on defining AI once and for all, but on continuously identifying where risks emerge and intervening accordingly. This model has several notable features.

First, it is problem-driven. Regulations are introduced in response to concrete and observable risks, rather than hypothetical future scenarios.

Second, it is application-oriented. Governance focuses on how technologies are used in practice, rather than solely on their underlying technical characteristics.

Third, it is iterative. Each regulatory measure builds on previous ones, gradually forming a more structured and comprehensive framework.

As countries continue to explore pathways for AI governance, China’s experience highlights an alternative to purely principle-based or single-framework legislative models. This small, fast and targeted approach allows governance to keep pace with technological change, while preserving flexibility for innovation.

It suggests that effective governance may depend less on a unified grand design, and more on the capacity to respond to emerging risks, experiment with regulatory tools, and refine them over time.

China’s evolving approach does not represent an endpoint. Rather, it is part of an ongoing transition toward a more comprehensive, higher-level AI legal framework. Its experience offers a dynamic reference point for global governance — one that underscores both the possibilities and the challenges of regulating AI at the frontier.

Zhang Linghan

The author is a professor and the director of the Institute for AI Law and Governance at the China University of Political Science and Law, and a member of the United Nations High Level Advisory Body on AI.

The author contributed this article to China Watch, a think tank powered by China Daily. The views do not necessarily reflect those of China Daily.

Contact the editor at editor@chinawatch.cn.
