The debate dates to author Isaac Asimov’s first rule for robots in the 1942 story “Runaround”: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” According to the report’s author Christof Heyns, a South African professor of human rights law, there needs to be a worldwide moratorium on the “testing, production, assembly, transfer, acquisition, deployment and use” of killer robots until an international conference can develop rules for their use.
So far the United States, Britain, Israel, South Korea and Japan have developed various types of fully or semiautonomous weapons. Heyns dubs them “lethal autonomous robotics” and says: “Decisions over life and death in armed conflict may require compassion and intuition. While humans are fallible, they might possess these qualities, whereas robots definitely do not.”
Among the systems the report singles out is the US Phalanx system for Aegis-class cruisers, which automatically detects, tracks and engages air threats such as antiship missiles and aircraft. Another is Israel’s Harpy, a “fire-and-forget” autonomous weapon system designed to detect, attack and destroy radar emitters.
The UK has the Taranis jet-propelled combat drone prototype, which can autonomously search for, identify and locate enemies but can engage a target only when authorized by mission command. It can also defend itself against enemy aircraft.