Striking a balance between security and tech innovation
Generative artificial intelligence, represented by ChatGPT, has gained worldwide popularity in a remarkably short time. Despite being a disruptive technology, it has been welcomed by the global technology and industrial communities for the value it brings, with internet giants around the world rushing to embrace it and many industries already using generative AI products as productivity tools.
However, the increasingly powerful AI systems have raised many concerns. On March 29, in a white paper on regulating the AI industry, the United Kingdom government proposed adopting a comprehensive regulatory approach to AI technology. Countries such as Italy and Canada, too, have highlighted the data security risks posed by ChatGPT and its parent company OpenAI.
As for China, the Cyberspace Administration of China issued the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)" to solicit public opinions from April 11 to May 10, which shows that China is striving to strike a balance between AI security and innovation through legislation.
When it comes to regulating disruptive innovations, governments often lack sufficient information about the security, ethical and other risks involved. In the early stages, they face a dilemma: whether to regulate immediately or to wait and see.
Both of these extremes have their flaws: failing to regulate creates significant risks, while excessive regulation can stifle innovation. International experience shows that governments generally adopt informal or guiding policies in the early stages. For example, the US Federal Communications Commission took a laissez-faire attitude toward the early development of the internet.
However, AI is vastly different from previous technological innovations, especially given the rapid and widespread development and application of large models. Due to its potential risks, governments and the public do not have enough time to prepare to deal with it, exposing them to real and imminent risks.
More importantly, the unpredictability of the original internet data used to train generative AI systems, of the questions users pose to those systems, and of the output the systems generate makes the risks generative AI poses to society even harder to foresee. The key concerns about generative AI risks are falsehood, discrimination, privacy infringement, security, rights and ethics.
First, ChatGPT and other generative AI models can quickly produce large amounts of misleading content such as fake videos, images, voices and texts. Since people cannot easily tell whether such content is real, false information spreads ever more widely, giving rise to online fraud and cybercrime and having a huge impact on society.
Second, due to biased training data, large models may generate violent, discriminatory, pornographic, drug-related and other criminal content, and even provide suggestions on how to commit dangerous acts. This will have serious consequences for society, and could even destabilize it.
Third, the information users input when using ChatGPT may be used as training data for its further iterations, creating the risk of data leakage. There have already been cases where models and their services have infringed upon users' privacy, with companies such as Microsoft and Amazon warning employees not to share confidential information with ChatGPT.
Fourth, the rapid development and application of large-scale generative models like ChatGPT have raised concerns about their impact on existing intellectual property rights systems. Is ChatGPT a mere tool or a content producer? And can users be seen as participating in the creation process?
There have already been many cases of academic misconduct, plagiarism and IPR infringement. These questions require careful consideration, and regulations are needed to ensure the fair and proper use of generative AI in order to protect IPR.
Based on the above risks and their potential impacts, it is inevitable that generative AI, especially ChatGPT, will be brought under regulation. Security and trust are the core requirements of AI regulation, so large AI models and systems should be designed with ethics and safety in mind.
While the European Union has already decided to implement strict regulations to manage generative AI, the United States appears to favor innovation. China's "Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)" reflects the concept of balancing safety and development, and provides detailed provisions on content legality, data compliance, information protection, and risk prevention and control.
But given the complexity and difficulty of tracing data sources, doubts remain about the enforceability of certain provisions, such as the requirement to obtain consent from data subjects before using their personal information to train models. Some experts and industry insiders also say early and excessive regulation may increase industry costs and hinder the pace of innovation.
Therefore, regulations should appropriately accommodate generative AI technologies and products, and be continuously adjusted and finely calibrated to strike a balance between the development of AI and the need to ensure safety and order.
The views don't necessarily represent those of China Daily.
The author is director of and a researcher with the Institute of Innovation and Development, Chinese Academy of Science and Technology for Development.