This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited.

Introduction

The emergence of AI-enabled weapons systems

The weaponisation of AI in the military domain is proceeding apace as states, particularly technologically advanced ones, harness it for advantage over adversaries.

This includes the development of autonomous weapon systems (AWS), which use AI to identify, select, and attack targets without human intervention and without an operator choosing a specific target. Far from being the preserve of science fiction, AWS may already have been deployed in 2019 in the ongoing conflict in Libya, and are reportedly in widespread use in the Russia-Ukraine conflict.

Technological developments in the military domain often presage the acquisition of similar capabilities by non-state actors, particularly when those capabilities offer ‘low barriers to entry’. For example, in 2017 ISIL carried out successful drone attacks against the Peshmerga and French Special Forces in Northern Iraq. The US Department of Homeland Security has since warned of terrorist groups applying ‘battlefield experience to pursue new technology and tactics, such as unmanned aerial systems’. Concerns have recently been raised about terrorists using AWS. ‘We are entering a world’, writes Paul Scharre, ‘where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well’.

Making all due allowance for the difficulty of distinguishing fear-mongering from plausible future threat assessment, commentators argue that AWS have the potential to expand both the range of terrorist actors and the range of plausible targets, and that, in the short term, they would be impossible to defend against. The message was dramatised in the film ‘Slaughterbots’, a viral, Black Mirror-style arms control advocacy video that depicts palm-sized autonomous drones seeking out and eliminating known individuals.

Of course, terrorists already use technology to achieve their objectives. Whilst the frontline defence against AWS may well lie in target-hardening and the regulation of technology, this article also considers how current terrorism legislation measures up to the threat. Much terrorist use of AWS will present few difficulties to the prosecutor, amounting merely to an extension of terrorist capability: an attack using a swarming drone rather than a fixed improvised explosive device is no less a terrorist attack. However, we address two points of potential difficulty. The first relates to the possession of operational information and the opportunity for detection. The second relates to the distinction between automation and true autonomy, where an AWS selects its own targets or methodology to produce terrorist outcomes that the user may not have chosen or even predicted.

Technical feasibility of terrorist use of AWS

How seriously should these concerns be taken? 

That depends on the realistic likelihood of terrorists acquiring not only AWS, but robust AWS. Assuming that domestic UK terrorists will not have access to defence-industry systems, an autonomous drone would have to be homemade. This most likely means using commercially available recreational drones, which offer relatively sophisticated autonomous functionality and can be either converted or cannibalised for parts. For the knowledge base, there is a global network of drone hobbyists posting online guidance on developing autonomous drones.

Altogether the components would cost no more than a new smartphone. The difficulty lies in engineering these parts together. Whilst much can be gleaned from online ‘how to’ manuals, this still requires technical knowledge at least to the level of an undergraduate degree in computer engineering. There are many unknowns involved in assembling an autonomous system, and anything homemade would lack both the robustness and the capabilities of more expensive military systems.

The greatest challenge to technical feasibility is equipping the drone with a payload at all, let alone one with any degree of military effectiveness. In Ukraine, where drones have proved a critical technology against Russian forces, 3D-printed parts are used to convert recreational drones to deliver a payload. A recent Economist article referred to the use of 3D-printed clamps, electric motors and photo-receptive sensors.

It is foreseeable that the large online network of 3D-printed gun enthusiasts, which shares advice, do-it-yourself manuals and downloadable blueprints for printing weapon parts like gun turrets, and which is already a source of serious consternation, might pivot to producing instructions for lethal drone use.

Given the cost-effectiveness of drones, reusable systems could also be dispensed with in favour of ‘kamikaze’ drones carrying an improvised payload that explodes on impact, simply using a drone to dive-bomb a crowded area. Again, we underscore that these threats remain hypothetical, largely because of the difficulty of fitting a drone with a feasible payload.

The terrorist appeal of AWS

So much for technical feasibility. But what is the ‘added value’ of an AWS over the more mundane, everyday things terrorists increasingly weaponise (such as cars and vans), which require little skill to use, are widely available, and can cause significant harm?

To a large extent the added value depends on the motivations and goals of the terrorist. 

Commentators have argued that AI-enabled weapons can reduce, if not eliminate, the physical dangers of terrorism, making the terrorist ‘essentially invulnerable’. Of course, for a terrorist resolved to die in the attack, this offers little utility. It is true that it may lower ‘barriers to entry’, and that for those unconvinced of the benefits of martyrdom an AWS means that one ‘does not have to be suicidal to carry out attacks that previously might have required one to be so’. However, invulnerability is an advantage offered as much by manually operated drones as by drones with autonomous functionality.

The same applies to the claim by commentators that autonomous weapon systems will appeal to terrorists because they offer anonymity. Yet, as the 2018 Gatwick drone incident illustrates, identifying precisely who is deploying a manually operated drone can already be extremely difficult. Moreover, anonymity is contrary to the aims of many terrorists, who often seek publicity for their ideology or personal notoriety.

Rather, there are two factors that make AWS attractive in the eyes of a terrorist. The first is that weapon systems operating fully autonomously are potentially invulnerable to countermeasures such as jamming. The second, and perhaps greatest, attraction is force multiplication. Unlike manual systems, autonomous weapon systems do not necessarily require continuous intervention by the operator, which opens up the possibility of a lone actor deploying multiple AWS at once. Of particular concern is the possibility of a swarm attack, in which multiple drones adapt and learn, both in interaction with their environment and with one another, to overwhelm defences. However, the engineering required to build a successful autonomous swarm currently puts their development out of reach of non-military actors. That said, the deployment of a small number of rudimentary autonomous drones, even if unable to act in concert, could still cause significant harm and panic.

The legal perspective 

Advances in AI development in the military domain have meant that regulatory debate about AWS has taken place at the interstate level, principally in disarmament forums. 

The work done at this level and the lessons learnt are inapplicable to terrorist uses of AWS. This work concerns States’ use of AWS under international law, in particular international humanitarian law (IHL), which is applicable only to armed conflict (both international and non-international). 

Paramount here is the IHL principle of distinction, which requires parties to an armed conflict to distinguish between combatants and non-combatants and to respect the latter’s immunity from targeting. Under these obligations, States must assess and understand the capabilities of the weapons they deploy, and are potentially liable for violations of IHL resulting from those deployments. Whilst the use of new capabilities such as AWS initially generates uncertainties and risks, the existence of risk assessment frameworks, international arms control and end-user certificates, and opportunities for iterative testing, verification and evaluation of a weapon and its effects considerably reduce the scope for States to avoid such responsibility by pleading ignorance.

Malign use of AWS by non-State actors, or AWS use outside the domain of the law of armed conflict, is untouched by this debate. In the case of malicious drone use in the UK, individual responsibility may be incurred under laws specifically governing the civilian use of aircraft. But establishing criminal liability under terrorism legislation could sometimes prove elusive.

This is because terrorism legislation in the UK is fundamentally concerned with human actors, including terrorist organisations, who use or threaten harm against others with deliberateness or awareness. Hence the definition of terrorism speaks of design (to influence the government or intimidate a population) and purpose (to advance a religious, political or ideological goal); and terrorism offences contain a mental element that generally reflects the need for human culpability in the form of intention or recklessness.

If a defendant (D) programmes an AWS to carry out a particular attack on human targets for ideological purposes, no difficulty arises in prosecuting under existing terrorism legislation. However, terrorism laws play an established role in enabling arrest and prosecution before an attack takes place, by imposing criminal liability on precursor behaviour. A good example is the possession of information that is intrinsically useful to terrorists, such as bomb manuals, which is an offence under the Terrorism Act 2000 carrying up to 15 years’ imprisonment. Yet the use of artificial intelligence relieves the terrorist of having to download or carry operational material such as sensitive maps or schematics: it would be sufficient for a terrorist to instruct the AI component of an AWS to source operational information (for example, flight paths or the locations of army bases) from the Internet. The opportunity to intervene on the basis of possession of this information is lost.

Secondly, the greater the role of machine decision-making, the harder it is to establish the necessary mental element for criminal liability. Under present conditions, this is easier to conceptualise in a non-kinetic context. Tasked by D with annoying online supporters of Football Team V, an AI model could decide to achieve this objective by sending propaganda to selected supporters’ forums. The dissemination of terrorist publications is an offence under the Terrorism Act 2006, but D could plausibly argue that he neither intended nor foresaw the risk that such propaganda would be sent.

Although D would find it hard to avoid liability if he fitted a drone with a payload in the manner discussed earlier in this article, a future kinetic scenario could see a fly-and-forget drone choosing to crash itself into a target or acting in some other way associated with terrorist targeting. Terrorism legislation has not yet had to confront self-tasked attacks of this nature. 

Finally, if the internet is indeed responsible for pulling ever-greater numbers of children and young people towards terrorist violence, there is a risk that any artificial intelligence (such as a large language model) trained on the same source data will replicate and amplify that existing messaging, potentially acting as a force multiplier for human-led online radicalisation. Again, it is unclear how legal culpability would be established where an individual was radicalised (in part) by an AI system.

Conclusion

The terrorist threat picture in Great Britain remains dominated by unsophisticated attack methodologies, such as ‘low-tech’ knife and vehicle attacks. Taking account of the formidable technical barriers to assembling a viable autonomous weapon system, terrorist use of AWS may not pose an immediate concern. However, terrorist actors have proven themselves creative adopters of new technology, and consideration of the future governance and regulation of AI should sensibly include recognition of potential future terrorist use, including of AWS. The ongoing debates concerning the UK’s position in global AI governance present an important opportunity to consider the regulatory implications of these potential future risks. Although no case currently exists for revisiting terrorism legislation, the information-gathering capability of AI large language models, and the possibility of truly autonomous target selection, mean that these laws need to be kept under close review.

The views expressed in this article are those of the authors, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.

Citation information

Alexander Blanchard and Jonathan Hall, "Terrorism and Autonomous Weapon Systems: Future Threat or Science Fiction?," CETaS Expert Analysis (June 2023).