Human Rights Watch calls for autonomous drone ban
Only humans should kill humans, not Skynet
Human Rights Watch has called for an international ban on autonomous robots capable of shooting people without intervention from human operators.
In a report co-produced with Harvard Law School, the rights group warns about the dangers of deploying autonomous robotic weapons on the battlefields of tomorrow. The report describes the contraptions as “killer robots” and calls for an international treaty that would ban their development, production and deployment.
The defense industry has been in love with technology for decades, and various levels of automation have been employed in countless weapons systems dating back to World War II. What distinguishes those systems from “killer robots,” however, is the degree of autonomy.
Current-generation systems are not fully autonomous; in most cases they rely on human operators to squeeze the trigger or push the button. The report states that fully autonomous robots that decide for themselves when to fire could be developed within 20 to 30 years, or even sooner.
Purely defensive autonomous weapons, such as anti-missile close-in weapon systems (CIWS), have been around for years, but the report focuses on robots that would shoot actual people rather than sea-skimming missiles.
Human Rights Watch arms division director Steve Goose argued that it would be best to preempt the development of “killer robots” before they get off the drawing board, just in case.
Robotics professor Noel Sharkey raises another problem – accountability.
“If a robot goes wrong, who’s accountable? It certainly won’t be the robot,” he said. “The robot could take a bullet in its computer and go berserk, so there’s no way of really determining who’s accountable and that’s very important for the laws of war.”
Well, to be honest, we're not doing a good job of prosecuting real flesh-and-blood war criminals, so why should robots be any different?