AI shouldn't undermine humanity's progress

Navigating the AI regulatory paradox
In today's global landscape, the relationship between regulation, technological development, and human rights protection has emerged as a pivotal challenge. The rapid advancement of artificial intelligence presents a regulatory paradox: the need to foster innovation while simultaneously safeguarding human rights.
In recent years, major developed economies have been contemplating the introduction of significant new regulatory frameworks targeting AI. However, progress has diverged significantly from expectations, with jurisdictions such as the US, Japan and the EU either delaying or diluting their regulatory efforts.
This trend toward deregulation stems partly from the complexity of AI risks and the insufficient capacity to assess the safety of advanced AI models. Moreover, AI occupies a strategically vital position in national security and in every aspect of social development. The US, for instance, prioritizes maintaining its global hegemony over implementing restrictive AI regulation. As a result, comprehensive international regulation aimed at safeguarding fundamental human rights remains an idealistic aspiration.
Criminal offenses resulting from the "misuse" of AI severely infringe on fundamental human rights, including the rights to life, health, and privacy. The "black box" nature of AI can also lead to rights violations, posing a new type of challenge arising from the collision between technological development and ethical norms. Where governments choose to forgo their regulatory responsibilities, they not only bestow undue power on large tech companies but also allow technology to grow "wildly," turning the realization of human rights into an "empty promise."
In the realm of social media, protecting young users has become a critical issue. Social media platforms pursue profitability through advertising models that sell user attention to advertisers. These platforms often expose young users to harmful content more quickly and more frequently than they do adults, compromising young users' safety and mental health. Through regulation, the EU focuses on transparency, accountability, and proactive measures to protect minors, while the US follows a more decentralized approach that relies on industry self-regulation. China adopts a comprehensive and proactive approach, emphasizing a safe and healthy online environment for minors.
The self-regulation model for app privacy policies is caught in a deep contradiction between "formal compliance" and "substantive infringement." Developers leverage their technical advantages to turn policy texts into a legal veneer for evading substantive obligations.
Synthetic data offers a way out of the "data depletion" dilemma in AI development by simulating the properties of real-world data. However, it is not a foolproof method of privacy protection. Synthetic data carries re-identification risks, because imperfect anonymization can still leak personal information, and AI models trained on synthetic data may inadvertently disclose sensitive information. These risks are complex and concealed, and they often go unnoticed by users who lack technical knowledge. The resulting power imbalance highlights the need for governmental intervention: clear legal frameworks and regulatory mechanisms are essential to protect user privacy effectively.
In conclusion, balancing AI development with human rights protection requires a global consensus on AI safety and governance. AI should not be used merely as a tool or weapon of competition. Instead, its development must be guided by ethical considerations, ensuring that technological advancement does not come at the expense of human dignity and rights.
Li Juan is a researcher at the Human Rights Research Center and an associate professor at the School of Law, Central South University.
The views don't necessarily reflect those of China Daily.