“You can’t say that civilisation don’t advance, however, for in every war they kill you in a new way.” – Will Rogers
This piece first appeared on the e-IR website at www.e-ir.info.
Targeted killings – increasingly effected through weapon systems housed in unmanned aerial vehicles, or “drones” – have become a tactic of choice in the Obama Administration’s continuing pursuit of its global counterterrorism strategy. They are highly controversial, but they are only the latest example of the so-called “precision strikes” practised in aerial campaigns in Kosovo, Iraq and, more recently, Libya – strikes which reflect the enormous technical superiority of the US and its resolve to use it to secure foreign policy objectives.
But what exactly is “precise” about these killings? Only, it would seem, the ability of an operator several thousand miles away to launch a deadly weapon to hit a predetermined target. Just about everything else, according to papers presented at a recent workshop convened by cii – the Centre for International Intervention – at the University of Surrey, is rather less than precise.
Much public debate has centred on the morality of these killings. Yet that in turn hinges on whether or not they are considered to be acts of war. One would hope that international law relating to “jus ad bellum” would provide a precise answer to this question, but it does not. Lawyers happily argue opposing positions, each defensible from its own viewpoint. Thus opponents of the killings claim that the US is acting illegally by carrying them out in countries – such as Pakistan and Yemen – where it is not at war, whereas the Administration claims that since 9/11 it has been engaged in a “transnational armed conflict” with Al Qaeda and its allies. Arguments rage about whether the criterion of “imminence” of the threat is met, and therefore whether pre-emptive strikes are justified; the criterion itself may be well established, but its application is notoriously imprecise.
The “jus in bello” arguments are no less contested. Basic data about the effect of drone strikes on civilians are hard to ascertain, partly because casualty recording – as opposed to merely counting deaths – is not given sufficient importance, and partly because of disagreements about what constitutes a “combatant”. A term which in ordinary parlance has a clear meaning is thrown wide open by an Administration that uses it to cover all military-age males in a strike zone. Estimates of civilian casualties and deaths thus lose all precision – along with traditional “in bello” notions of “proportionality”, “collateral damage”, “necessity” and the like – falling foul of a new kind of warfare that traditional frameworks struggle to accommodate. Some argue that a new – and less precise – logic, one able to make sense of the middle ground between right and wrong, is needed to restore some precision to this debate.
When it comes to human decision-making, the lack of precision is even more evident. Psychologists know from experiments that the borderline between logic and intuition in human behaviour varies with circumstances and can be manipulated. In the high-tech world of remotely controlled weaponry the speed and complexity of decision-making can be awesome; in complex situations, gathering more information can often lead to worse decisions. For example, the window in which humans can make re-tasking decisions about strikes – when information arrives that contradicts the original planning assumptions – is small and shrinks very rapidly. The precision of the strike is thus highly dependent on the agility of the operator.
Even the distinctions between “human-controlled”, “automatic” and “autonomous” weapons are not as precise as we might assume. Positioned between modern weapons and the humans who fire them lie algorithms of varying complexity: a traditional handgun may sit at one end of the spectrum and a robot at the other. But even a so-called “autonomous” weapon depends on the programme that a human has written for it. The real question is what kinds of decisions are delegated to the machine without further reference to a human operator – and this could be asked of a battlefield tank gun, with an interval of only a few seconds between the instruction to fire and delivery, just as much as of a drone strike, where the interval may be several hours. In this sense there is nothing particularly special about “drones”.
Confused? In sum, the only thing that is precise about drone strikes is the machine that delivers them. Unlike human warriors, it will not suffer from battlefield stress or a lack of discipline with regard to the civilians it encounters: no PTSD, sexual exploitation or prisoner abuse. Barring accidents, it will do exactly what it is tasked to do. The lack of precision comes from our own human confusion about morality, law, language, truth and logic – and from our own inability to behave as robots do. That, of course, is part of the richness of human experience – but perhaps we would do well to be more realistic about how far we can programme imprecision out of our lives – and more modest about the true nature of “precision strikes”.