Dozens of NGOs have united in an international plea to end the development and use of AI-controlled weapons, such as battle drones. Experts claim that the creators of such military systems could be held accountable for violations of the law or damage caused by the robots.
Almost 90 non-governmental organizations from 50 countries have established the Campaign to Stop Killer Robots to call for a global ban on the development and use of weapons controlled by artificial intelligence (AI), the BBC reported on February 15.
Members of the group are certain that such weapons pose a danger to humanity because they may malfunction in unpredictable ways and kill innocent people.
As Human Rights Watch (HRW) representative Mary Wareham explained, the Campaign is not aimed at banning “talking terminator robots that are about to take over the world”, but rather autonomous weapons, such as self-controlled military drones.
“They are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly and land on their own; robotic sentries that can identify movement. These are precursors to autonomous weapons,” she told the BBC.
Another advocate of a ban on AI-controlled weapons is Clearpath Robotics CTO Ryan Gariepy. His company works with the military industry, yet has refused to be involved in the development of AI systems for military purposes.
While an autonomous weapon may act independently, it still follows algorithms developed by engineers, including a programme to eliminate a living “target”.
Peter Asaro of the New School in New York believes that such a scenario raises questions about legal liability in cases of unlawful killing or tragic failure. In his opinion, it is the creators of autonomous weapons who may eventually be held responsible for their actions.