Could supersmart machines replace humans?


No quick fix
Curbing AI's learning capability would stifle technological progress and innovation, the very engine of a society's prosperity, contends Ip. A more balanced approach that fosters responsible AI development is the way forward.
Ip believes "collaboration between technology leaders, policymakers, researchers, and the wider community is crucial to establish ethical guidelines, best practices, and industry standards. The recent AI petition evokes a critical reflection, evaluation and dialogue on the development of improved safety measures."
Ip notes that the swift progress of AI often outstrips the pace of regulation. "The dynamic nature of AI technology calls for adaptable and agile governance approaches, as restrictive measures may not be effective or could even lead to unintended consequences." AI hallucination, for instance, can be benign in some settings but deadly in others, such as medicine or driverless cars.
There is no easy fix, argues Frey, but at a bare minimum we should define "in which domains we can afford hallucination and in which case it is unforgivable and the companies building the AI system should be held liable." To minimize the risk of hallucination, Frey adds, the onus is on the humans who develop an AI system to refine it through trial and error.
Deepfakes are another haunting concern, fueling the spread of misinformation, fake identities, and harmful content, notes Ip. "AI-powered tools can automate and enhance malicious activities, such as phishing attacks, targeted negative social inputs, and exploit vulnerabilities in systems and networks. The future relationship between AI and humans depends on our collective efforts to develop and implement safeguards that protect against malicious intent," concludes Ip.
Unless there is global consensus on, and enforcement of, a safety framework for AI development, bad actors could wreak havoc on an unsuspecting world.
What's next
1. Implement standard operating procedures for auditing content to identify ChatGPT-generated material.
2. Ensure AI development is strictly audited by human engineers.
3. Promote equal access to AI education and use.
4. Governments should guide AI leaders toward an ethical development framework.