Photo of Slaughterbots – YouTube/Future of Life Institute and University of California-Berkeley Professor Stuart Russell
CAIRO – 28 February 2018: Who would ever have imagined that a small drone, one small enough and unthreatening enough to sit among our children's toys, could pose a risk to the whole of humanity? The rapid evolution of Artificial Intelligence (AI) makes this a near possibility, and one that could threaten the essence of human morality.
A simulation of what micro-drones could do if programmed to kill was used in an international campaign to raise awareness of the threats posed by AI, calling on international organizations, including the United Nations, to take serious action. The simulation was part of a seven-minute fictional video released by the Future of Life Institute and University of California-Berkeley Professor Stuart Russell.
“They used to say guns don’t kill people, people do. Well, people don’t. They get emotional, disobey orders, and aim high. Let’s watch the weapons take the decisions,” the lecturer said in the video before displaying the killing simulation.
“A twenty-five million dollar order now buys this, enough to kill half a city. The bad half,” he firmly added, explaining that it is all made possible by AI.
The video shows a swarm of micro-drones, dubbed "Slaughterbots," each carrying three grams of explosive, committing a massacre by identifying and killing a large number of university students. By the end of the video, Russell explains that this scenario would not be hard to realize in practice.
“Allowing machines to choose to kill humans will be devastating to our security and freedom… We have an opportunity to prevent the future you just saw, but the window to act is closing fast,” Russell stressed.
Is it just about micro-drones? No. The video offers one example of how far AI could harm humanity if it falls into the wrong hands; the concern extends to autonomous weapons of all kinds.
A group of 116 specialists from 26 countries, including Tesla's Elon Musk and Alphabet's Mustafa Suleyman, wrote an open letter to the United Nations in 2017 calling for a ban on autonomous weapons. The letter was released during a major gathering of AI experts, the International Joint Conference on Artificial Intelligence (IJCAI 2017).
“Lethal Autonomous Weapons threaten to become the third revolution in warfare,” the letter states, adding that once developed, such weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's Box is opened, it will be hard to close,” it states, concluding with an urgent plea to the UN “to find a way to protect us all from these dangers.”
This is not, however, the first time Musk or other experts have expressed concern over AI's rapid evolution. In 2014, during an interview at MIT's AeroAstro Centennial Symposium, Musk called Artificial Intelligence "the biggest existential threat". “I think we should be very careful about Artificial Intelligence. If I had to guess what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk.
During the 71st session of the United Nations General Assembly, on September 29, 2016, the Director of the United Nations Interregional Crime and Justice Research Institute (UNICRI), Cindy J. Smith, announced that the institute was in the process of opening the first Centre on Artificial Intelligence and Robotics within the United Nations system.
“The aim of the Centre is to enhance understanding of the risk-benefit duality of Artificial Intelligence and Robotics through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities. The centre will open in The Hague, The Netherlands. The main outcome of the above initiative will be that all stakeholders, including policy makers and governmental officials, possess improved knowledge and understanding of both the risks and benefits of such technologies and that they commence discussion on these risks and potential solutions in an appropriate and balanced manner,” Smith said.
In a statement published on the UNICRI website, the institute noted that although the rise and development of AI can be beneficial for global development and societal change, it also raises legal, ethical and societal concerns and challenges, some of which may be hazardous to human wellbeing, safety and security.
Autonomous weapons are addressed in international humanitarian law and regulated according to several principles intended to preserve civilians' safety, but many argue that this may not be enough.
Part of the law states: “The core legal obligations for a commander or operator in the use of weapon systems include the following: to ensure distinction between military objectives and civilian objects, combatants and civilians, and active combatants and those hors de combat; to determine whether the attack may be expected to cause incidental civilian casualties and damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated, as required by the rule of proportionality; and to cancel or suspend an attack if it becomes apparent that the target is not a military objective or is subject to special protection, or that the attack may be expected to violate the rule of proportionality, as required by the rules on precautions in attack.”