“The spirit of our endeavour is: To strive, to seek, to find and not to yield”
Alessandro Minuto-Rizzo, President
Are Autonomous Weapons also Intelligent?
The current debate on autonomous weapon systems involves different policy communities – typically focussed on capability development, deterrence and defence, disarmament and arms control, international law and military ethics – and spans from the possible applications of artificial intelligence (AI) in warfare to widespread concerns about ‘killer robots’.
The concept of AI dates back to the early 1950s, but technological progress was very slow until the past decade; now it is in full swing. In the domain of public health and diagnostics (e.g. cancer research), these technological developments are already proving their worth, and their benefits are uncontested. In the field of security and defence, however, the jury is still out: the prospect of fully autonomous weapon systems, in particular, has raised a number of ethical, legal and operational concerns.
‘Autonomy’ in weapon systems is a contested concept at the international level, subject as it is to differing interpretations of its acceptable limits. The resulting debate triggered, for instance, the establishment at the United Nations, in 2016, of a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS), which has so far been unable to reach agreed conclusions. This is partly due to the current strategic landscape and the ‘geopolitics’ of technology: some states developing these systems have no interest in putting regulations in place as long as they believe they can still gain a comparative advantage over others.
Yet it is also due to the fact that ‘autonomy’ is a relative concept. Few analysts would contest that, in a compromised tactical environment, some level of autonomy is crucial for an unmanned platform to remain a viable operational tool. Moreover, automatic weapon systems have long existed (e.g. landmines), and automated systems are already being used for civilian and force protection purposes, from Israel’s ‘Iron Dome’ missile defence system to sensor-based artillery on warships. In practice, with very few exceptions, current weapon systems should be considered, at best, semi-autonomous – and they tend to be extremely expensive and thus hardly expendable.
In fact, there are still technological as well as operational limits to the possible use of LAWS: while engaging targets is becoming ever easier, the risks of miscalculation, escalation and lack of accountability (all potential challenges to established international norms and the laws of armed conflict) seem to favour maintaining meaningful human control (‘man in the loop’). Yet the temptation to exploit a temporary technological advantage through a first strike also remains, and not all relevant and capable actors may play by the same (ethical and legal) rules.
In the past, international efforts to control the proliferation, production, development or deployment of new military technologies (from CBRN weapons to landmines, from blinding lasers to missile defence systems) were all, to varying degrees, driven by four distinct but potentially overlapping rationales: ethics, legality, stability and safety. The possible military use of AI, especially in relation to ‘standoff’ weapons, has raised concerns on all four grounds. Also in the past, apparently inevitable arms races in such new fields were slowed or even halted through some institutionalization of norms, mostly achieved after those technologies had reached a certain degree of maturity – and often advocated, inspired and even drafted by communities of relevant experts (from government and/or academia).
The risk of an arms race in these new technologies undeniably exists. Yet so does the hope that such technologies may still be channelled into less disruptive applications and end up in the same category as poison gas or anti-satellite weapons – a category in which the most powerful states abstain from using them against one another, while weaker states or non-state actors may still resort to them, but to little effect.
*Dr Missiroli writes here in a personal capacity.
Dr Antonio Missiroli is the Assistant Secretary General for Emerging Security Challenges. He was the Director of the European Union Institute for Security Studies in Paris, Adviser at the Bureau of European Policy Advisers of the European Commission and Director of Studies at the European Policy Centre in Brussels.