Could supersmart machines replace humans?
Rules and guardrails
As technology evolves at speed, society lags in writing the rules needed to manage it. After the first automobile was invented in 1885, it took eight years for the world's first driver's license to be issued.
ChatGPT amassed 100 million users within two months of its release. Its near-human fluency and rapid responses spooked even Big Tech leaders, prompting an open letter from the Future of Life Institute calling to "Pause Giant AI Experiments" for at least six months. The petition has garnered 31,810 signatures from scientists, academics, tech leaders and civic activists since its introduction in March.
Elon Musk (Tesla), Sundar Pichai (Google) and the "Godfathers of AI", Geoffrey Hinton, Yoshua Bengio and Yann LeCun, joint winners of the 2018 Turing Award, have endorsed warnings that runaway AI poses a critical threat of human extinction.
Despite acute awareness of lurking pernicious actors, legal guardrails remain largely absent. "That's probably because we still have little inkling about what to regulate before we start to over-regulate," reasons Fitze.
The CEO of OpenAI, Sam Altman, declared before a US Senate Judiciary subcommittee that "regulatory intervention by the government will be critical to mitigate the risks of increasingly powerful models." The AI Frankenstein on the lab table terrifies its creators.