EU sets ball rolling on regulating AI
While many of its clauses will still have to be updated as the technology evolves, the European Union's Artificial Intelligence Act, the world's first comprehensive regulation on AI, sets a precedent for efforts to bring the fast-developing technology under control.
By classifying AI systems into four risk categories, namely unacceptable, high, limited and minimal risk, the EU AI Act sets standards for AI models on the market and lays down guiding principles for them to follow. The legislation offers food for thought for other countries, regions and organizations drafting their own regulations. Notably, the act devotes a large share of its text to regulating general-purpose AI, which is widely regarded as a high-risk AI system, or one that can form the basis of a high-risk system, a timely response to the popularity of ChatGPT and other large language models.
The act should spur improvements in the technology and encourage stronger oversight of AI, which is still in its infancy. That will help push the global AI sector toward stricter regulation and reduce the risks posed by a potentially harmful technology.
For Chinese AI companies, that means both challenges and opportunities. They will have to heed the new act to ensure compliance in the EU market. But stricter self-regulation will also be an advantage in winning public trust as they enter other markets, since putting AI under control has become a worldwide consensus and a general trend that will, sooner or later, apply in all major regions of the world.
Domestic legislators should also consider drafting a regulation, or a set of regulations, for AI, not only to promote the orderly development of the domestic AI industry but also to gain a bigger say in shaping globally accepted AI regulatory standards in the future.