Stalled talks resume on EU's AI Act, with biometric surveillance the target

BRUSSELS — A third day of negotiations on Friday over landmark European Union rules governing the use of artificial intelligence will focus on military and security applications, with governments seeking to persuade lawmakers not to impose an outright ban.
Exhausted EU lawmakers and governments clinched a provisional deal on Thursday on another highly contentious issue — how to regulate AI systems like ChatGPT — after a nearly 24-hour debate.
"Batteries: Recharged. Ready to dive back into the #AIAct trialogue," EU industry chief Thierry Breton said on X on Friday. "We made major progress yesterday and the day before — let's join forces for the last mile."
The use of AI in biometric surveillance will be the main point of discussion and could determine whether Europe will take the lead in regulating the technology, two people with direct knowledge of the matter said.
EU lawmakers want to ban the use of AI in this area because of privacy concerns, but governments have pushed for an exception for national security, defense and military purposes, Reuters reported.
The prolonged talks and divisions within the 27-member bloc illustrate the challenge facing governments around the world as they weigh the advantages of the technology, which can engage in humanlike conversations, answer questions and write computer code, against the need to set guardrails to control its influence.
Europe's ambitious AI rules come as companies like Microsoft-backed OpenAI continue to discover new uses for their technology, triggering both plaudits and concerns.
Alarm bells
Illustrating how fast the market is growing, Alphabet on Thursday night launched Gemini, its new AI model, which it hopes will help narrow the gap with OpenAI.
OpenAI founder Sam Altman and computer scientists have also raised the alarm about the dangers of creating powerful, highly intelligent machines that could threaten humanity.
The European Commission, the EU's executive arm, first proposed an AI law in 2021 that would regulate systems based on the level of risk they posed. For example, the greater the risk to citizens' rights or health, the greater the systems' obligations.
Even if EU negotiators reach a deal, the law would not come into force until 2026 at the earliest.
The main sticking point was how to regulate so-called foundation models — AI systems designed to perform a variety of tasks — with France, Germany and Italy calling for them to be excluded from the tougher parts of the law.
"France, Italy and Germany don't want a regulation for these models," said German MEP Axel Voss, who is a member of the special parliamentary committee on AI.
Late last month, the three biggest EU economies published a paper calling for an "innovation-friendly" approach for the law known as the AI Act.
Berlin, Paris and Rome do not want the law to include restrictive rules for foundation models, saying instead that such models should adhere to codes of conduct.
Many believe this change in view is motivated by their wish to avoid hindering the development of European champions, and perhaps to help companies such as France's Mistral AI and Germany's Aleph Alpha.
Agencies via Xinhua