
Could supersmart machines replace humans?

By Wang Yuke | HK EDITION | Updated: 2023-06-23 19:39

No quick fix

Inhibiting AI's learning capability would curtail technological progress and innovation, the very engine of a society's prosperity, contends Ip. A more balanced approach that fosters responsible AI development is the way to go.

Ip believes "collaboration between technology leaders, policymakers, researchers, and the wider community is crucial to establish ethical guidelines, best practices, and industry standards. The recent AI petition evokes a critical reflection, evaluation and dialogue on the development of improved safety measures."

Ip notes that the swift progress of AI often outstrips the pace of regulatory advancement. "The dynamic nature of AI technology calls for adaptable and agile governance approaches, as restrictive measures may not be effective or could even lead to unintended consequences." AI hallucination, for instance, can be benign in some contexts but deadly in others, such as a medical setting or a driverless car.

There is no easy fix, argues Frey, but at a minimum we should define "in which domains we can afford hallucination and in which case it is unforgivable and the companies building the AI system should be held liable." To minimize the risk of hallucination, Frey adds, the onus is on the humans who develop the AI system to refine it through trial and error.

Deepfakes are another haunting concern, fueling the spread of misinformation, fake identities and harmful content, notes Ip. "AI-powered tools can automate and enhance malicious activities, such as phishing attacks, targeted negative social inputs, and exploit vulnerabilities in systems and networks. The future relationship between AI and humans depends on our collective efforts to develop and implement safeguards that protect against malicious intent," concludes Ip.

Unless there is global consensus and enforcement of a safe framework regime for AI development, bad actors could wreak havoc on a clueless world.

What's next

1. Implement a standard operating procedure for audits to identify ChatGPT-generated content.

2. Ensure AI development is strictly audited by human engineers.

3. Promote equal access to AI education and use.

4. Governments should guide AI leaders toward an ethical development framework.
