Could supersmart machines replace humans?


Rules and guardrails
As technology evolves at speed, society lags in devising the rules needed to manage it. It took eight years for the world's first driver's license to be issued after the first automobile was invented in 1885.
Consumer downloads of ChatGPT soared to 100 million within two months of its release. Its near-human intelligence and speedy responses spooked even Big Tech leaders, prompting the Future of Life Institute to publish an open letter in March calling for a pause of at least six months on "Giant AI Experiments"; the petition has since garnered 31,810 signatures from scientists, academics, tech leaders and civic activists.
Elon Musk (Tesla), Sundar Pichai (Google) and the "Godfathers of AI", Geoffrey Hinton, Yoshua Bengio and Yann LeCun, joint winners of the 2018 Turing Award, have endorsed warnings that runaway AI poses a critical threat of human extinction.
Despite acute awareness of lurking pernicious actors, the legal guardrails remain a vacuum waiting to be filled. "That's probably because we still have little inkling about what to regulate before we start to over-regulate," reasons Fitze.
The CEO of OpenAI, Sam Altman, declared before a US Senate Judiciary subcommittee that "regulatory intervention by the government will be critical to mitigate the risks of increasingly powerful models." The AI Frankenstein on the lab table terrifies its creators.