Today’s tech news is awash with Human Rights Watch’s latest campaign, the sensationally-titled “Ban ‘Killer Robots’ Before It’s Too Late”. I’m not sure which I find most irritating about it:

  • The fact that they’ve chosen to lead with a photo of a Taranis UAV, even though in the photo caption they acknowledge that it isn’t autonomous in any way they are concerned about
  • The fact that in the Guardian’s article about the campaign, the newspaper has instead opted to lead with a photo of the Terminator
  • The fact that the whole thing – Terminator image and all – is distinctly reminiscent of the year 2009

The Human Rights Watch campaign centres upon the idea that military robots with an autonomous killing capability (i.e. those that can conduct a lethal attack without human oversight) are not only in development but, when ready, will be used naïvely and with no consideration for the law.

Yes, military robots with increasing levels of autonomy are being developed. The Guardian article rightly cites the X-47B, which is capable of autonomous landing on the deck of an aircraft carrier and is soon to undergo autonomous in-air refuelling trials. But at no point do Noel Sharkey, the Guardian or Human Rights Watch offer any examples of systems capable of lethal autonomous behaviours. Nor do they offer any evidence that the developers of these vehicles intend them to be put into service despite the fact that they “could not meet the requirements of international humanitarian law” and “would also undermine non-legal checks on the killing of civilians” (quotes: HRW).

Technology is advancing at a relentless rate, and they – we – are right to be concerned about the possibility that one day a robot may be programmed to take a human life. But the US military, at which the campaign is targeted, has been publishing studies such as “Governing Lethal Behaviour in Autonomous Robots” (part 2) since long before this HRW campaign – and before 2009’s Terminator-themed article, too.

Campaigns titled “Stop the Killer Robots” do little but persuade the public that the people and companies developing these technologies are incompetent or, worse, actively opposed to the idea that human rights and legal conventions should govern robotic warfare.

Also, they make the robots sad.

Please, think of the robots.