Combining AI and weapons might not be the smartest move
ON THURSDAY, Google released a set of principles guiding its artificial intelligence work, stating that it will not support the use of AI in weapons systems. Thepaper.cn commented on Monday:
Google released the principles under pressure from its staff and a public outcry, after reports that it had signed a contract with the US military to provide its TensorFlow machine learning API.
That deal sparked criticism both within and outside Google, with many people worried that the technology could be used to threaten human lives. About 4,000 Google employees reportedly signed a letter opposing it.
Google's release of its AI principles may have pacified the critics. However, it raises a broader question: how do we prevent AI from posing a threat to humans? Science fiction writers have long asked this question in their works, and many have expressed worries about robots killing people.
Reports in 2012 that US troops were using intelligent unmanned aerial vehicles on the battlefield deepened these worries. Some argued that once a UAV has intelligence, a machine effectively holds the power to decide whether to kill a human.
Many have offered solutions to this problem. Isaac Asimov, in his story collection I, Robot, set out the Three Laws of Robotics so that intelligent machines would not harm humans.
However, those laws rely on the AI itself to comply, not on humans. A more effective approach is to keep robots and AI away from weapons altogether, so that they never get the chance to kill humans.
As deep learning algorithms progress, AI will become increasingly independent of humans. If AI systems are given control over weapons, the day when they decide to kill humans might come. It is better to prevent that from the very beginning by strictly barring AI from controlling weapons.