In 2017, artificial-intelligence researcher Stuart Russell presented the “Slaughterbots” video at a meeting of the UN Convention on Conventional Weapons. When Dr. Russell and the Future of Life Institute released the video on YouTube, it quickly went viral. In the video, fictionalized swarms of drones recognize, target, and kill opponents autonomously. The drones assassinate activists and political leaders, and a slaughterbots manufacturer claims that $25 million of drones can wipe out half a city.
Although slaughterbots are fiction, numerous states are developing both drone-swarm technology and autonomous weapons. Every branch of the US military is developing drone swarms—including the Navy’s swarming boats and the Air Force’s plan to employ swarms in a wide range of military roles, from intelligence collection to suppression of enemy air defenses. Russia, China, South Korea, the United Kingdom, and others are developing swarms too. At the same time, a range of states have developed or are developing autonomous (primarily stationary, defensive) weapons, from South Korea’s SGR-A1 gun turret to the United States’ Phalanx close-in weapon system. Combining these technologies creates a slaughterbots-style weapon: an armed, fully autonomous drone swarm—or AFADS. (For the purposes of this article, I define “fully autonomous” to mean weapon systems that are both self-targeting and self-mobile; “drone” as any unmanned platform operating on land, at sea, in the air, or in space; and “drone swarm” as multiple drones collaborating to achieve shared objectives.)
Because of this, AFADS should be classified as weapons of mass destruction. As I argue in my new study at the US Air Force Center for Strategic Deterrence Studies, AFADS can exceed any arbitrary threshold for mass casualties and are inherently unable to distinguish between military and civilian targets.
Why Classification Matters (and Why It’s Hard)
Classification of drone swarms as WMD has significant conceptual, legal, and national-security implications. Conceptually, understanding whether AFADS are (or are not) WMD requires careful debate over the scope of the term and its alternatives. While such drone swarms bear some strong similarities to traditional WMD, they also have major differences. Legally, classifying AFADS as WMD means the Seabed Treaty and the Outer Space Treaty apply to swarms. These treaties limit the placement of WMD in “global commons” areas (the seabed and outer space), but they do so without precisely defining WMD. Traditional WMD agents—biological, chemical, and nuclear weapons—are also subject to numerous policies, programs, governmental and international organizations, and treaties aimed at combating their proliferation and providing a framework for responding to their use. From a national-security perspective, classification also matters because the use of WMD, including chemical agents, can radically change public support for military action. If AFADS are WMD, the non- and counterproliferation policies, treaties, and norms applied to traditional WMD are all worth considering.
However, classifying a particular weapon as WMD is quite hard, as definitions have proliferated more than the weapons themselves. Seth Carus’s comprehensive review of WMD definitions identified twenty different definitions used by the US government alone. In part because of this, other authors disagree with the term “weapons of mass destruction” itself, highlighting its vagueness, the potential for political abuse, and the implication that all traditional WMD agents—biological, chemical, and nuclear agents—are of equal threat.
The terminological debate is too big and broad to resolve here. But regardless of which definition of WMD (or non-WMD alternative) is preferred, separating these weapons from conventional weapons implies that WMD are inherently different in ways that warrant special attention.
Drone Swarms as WMD
Armed, fully autonomous drone swarms should be classified as WMD because of their degree of potential harm and inherent inability to differentiate between military and civilian targets—both of which are characteristics of existing weapons categorized as WMD.
The scalability of armed drone swarms means they can bypass any arbitrary threshold for defining “mass destruction”—regardless of whether such a definition is pegged to one thousand casualties, two thousand, or any other number. Whereas the size and impact of conventional weapons are limited by a number of factors, few limits exist on drone-swarm scalability. Drone platforms are well-known, relatively easy-to-acquire technologies. The Center for the Study of the Drone at Bard College has identified ninety-five countries with military drones, comprising 171 different types of drone. The technology is rudimentary enough that basic drones can be bought at Best Buy or 3D printed. Converting drones into a swarm requires only the software and hardware that let the drones share information and make decisions, plus the finances to sustain development and acquisition.
Intel’s rapidly improving ability to control ever-larger numbers of drones illustrates the ease of scaling. In 2016 the company flew one hundred drones simultaneously. In 2017 it flew three hundred. By 2018 it managed to fly 1,218 drones, then 2,018. Give all 2,018 drones bombs and the collective certainly could inflict mass casualties.
Of course, the exact amount of harm is highly context dependent. Defenders may be armed with counter-drone systems or sophisticated air defenses. If slaughterbots become truly ubiquitous, states may just hang nets everywhere. Conversely, the flexible nature of drone swarms allows them to incorporate adaptations, such as standoff or chemical weapons. Drone swarms may also operate in multiple domains and incorporate antitank weapons, electronic-warfare equipment, or other systems that increase survivability.
Fortunately, so far few examples exist to judge drone swarms’ capacity for harm. The closest example occurred in January 2018, when Syrian rebels launched ten crude drones en masse against a Russian military base in Syria. Although the Russian military claimed it defeated the drones, the Free Alawite movement claimed to have destroyed an S-400 missile launcher valued at $400 million. Evidence on the damage is minimal and both actors have strong incentives to exaggerate or outright lie, so the exact harm is difficult to judge.
The nature of drone swarms incentivizes high levels of autonomy. As the number of drones in a swarm grows, so does the difficulty of controlling them. The activities of each drone must be coordinated to achieve objectives and prevent collisions. As the number of drones becomes truly massive, human control over the swarm may be impossible. Already, US Air Force drone operators experience severe staff shortages and higher rates of burnout than other career fields.
Autonomously determining whether a target is valid is an extremely difficult task. Consider an armed enemy soldier in uniform. Ostensibly, that’s an obviously legitimate target—unless the soldier is sick or injured. But if that soldier is pointing a weapon back, then even an injured soldier might be a valid target. Even more fundamentally, the autonomous system must be able to effectively distinguish armed from unarmed, enemy from friendly, and uniform from civilian clothing. Even if the system can reliably distinguish between a rake and a rifle, it would need to do so under difficult conditions in which the object is obscured or disguised. Reliable discrimination may require near-human levels of artificial intelligence, which is unlikely to be achieved in the near future, if ever.
The degree of difficulty will also depend on the domain of operation. Sea-based swarms (either surface drones or aerial drones used at sea) will face far fewer environmental obstructions than ground-based swarms. On the open ocean, tree branches will not obscure an adversary ship. Likewise, military vessels may be more readily distinguishable from civilian vessels due to the very different designs and the presence of large weapon systems. Nonetheless, the relative ease of discrimination is just that—relative—and it is still a major challenge to address.
Some states may elect never to develop such a weapon, because of these practical difficulties or for ethical reasons; however, it should be assumed that some states will. From the firebombing of Dresden to the Syrian government’s use of chemical weapons and multiple African genocides, numerous states have chosen not to worry about civilian casualties in pursuit of their military objectives. Iraq even made civilian terror a strategic goal: a terrified populace is less of a threat to the regime. Just as the AK-47 spread to unstable regions around the world, why not autonomous drones?
The United States should limit the proliferation of armed, fully autonomous drone swarms, establish norms against their use, seriously consider military force if they are used, and prepare the US military for the possibility of conflict. Specifically, the United States should consider taking several steps.
First, the US government could formally express its position that AFADS should be considered WMD. Broad recognition would help develop international norms against AFADS and encourage discussion over substantive responses. Particularly, the United States should take the position that AFADS fall under the scope of the Seabed Treaty and the Outer Space Treaty. Banning AFADS from use in outer space and the seabed may have secondary national-security benefits, such as reducing the risk from drone swarms to sea-based nuclear forces.
Second, the United States should expand the scope of counter-WMD organizations. The United States should evaluate whether governmental and international organizations concerned with countering WMD should incorporate AFADS. As AFADS are still an emerging threat, initial efforts should focus on preventing their emergence and on nonproliferation: codifying norms against usage in international treaties, expanding export-control regimes to incorporate AFADS, and developing policies to punish violators. The Department of State’s Bureau of International Security and Nonproliferation and the Department of Commerce’s Bureau of Industry and Security are likely to be key players.
Third, Washington should explore verification and confidence-building measures. Verifying the use of AFADS is likely to be highly difficult, because one key aspect—full autonomy—exists primarily in code. A variety of proposals have been developed to address this problem for autonomous weapons in general, and they may be applicable to swarms. For example, states could develop control systems requiring operator authentication, air-gapped firing authorizations, and information sharing on methods for ensuring safe operation. Efforts to identify and develop effective measures should be undertaken in collaboration with other governments.
Finally, the United States and the broader international community should debate whether the use of AFADS is sufficient to merit military intervention in a conflict. This debate should focus on the scale of usage and nature of the target (military vs. civilian). The use of a ten thousand–drone swarm on a civilian population might merit intervention, but the use of a two-drone swarm against a military base during an ongoing conflict probably should not. Establishing an exact threshold is likely to be impossible; however, states may identify scenarios and broad factors that would support or reject intervention. States should also consider options below the level of military force (e.g., sanctions), and collaborate with the international humanitarian law community to identify existing legal frameworks aimed at restricting weapons that cause excessive civilian harm.
Drone swarm technology, particularly self-targeting, self-mobile drone swarms, poses a significant risk to global security. Failing to develop international norms, supported by robust policies, to prevent and counter AFADS emergence risks a less secure United States and a far more dangerous world.
Zachary Kallenborn is a Senior Consultant at ABS Group, specializing in unmanned systems (swarms), WMD terrorism, and WMD warfare writ large. His research has been published in the Nonproliferation Review, Studies in Conflict and Terrorism, War on the Rocks, DefenseOne, and other outlets. His most recent study, “Are Drone Swarms Weapons of Mass Destruction?” examines whether drone swarms should be considered WMD and their ability to serve in traditional WMD roles.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, Department of Defense, or any of the author’s current or former employers or funders.
Image credit: Pvt. James Newsome, US Army
As for drones telling soldiers from civilians: soldiers could be microchipped, and drones could have microchip readers. Could drones be disabled by radar jamming? Or radio? I think soldiers need some armor around their necks, which look most vulnerable. I also think drones should be light steel gray in color, making them more difficult to see in the air.
Interesting but totally dumb article! You speak of swarms, of the actions of 'evil states' (Syria, African genocides) but never of US actions: did you speak of the phosphorus bombing done by the US in cities? Irradiated ammo creating cancers for civilians? Of US actions costing 500k children's lives? No, because you see everything the US does as 'for the greater good'.
You speak of swarms, WMD, and US military punitive action against users of such swarms (if such a treaty were to happen) but forget to mention that the US, UK, and NATO will be the main users of these weapons (as they have armies that must declare wars, military interventions, bombings, etc. every few years to make money for their MIC, buying stockpiles of bombs and ammo). And since the US and NATO will be the main users, you can expect a response from the countries most likely to be targeted, like China, Russia, Iran, and maybe North Korea.
Such swarms are 'a weapon against soldiers, for intel, against air defense,' as you say. Which country will need them the most to destroy China and Russia? The US, obviously (against their anti-air and anti-ship 'bastions'). Your article is about how to forbid their use against civilians but forgets to mention that this won't happen, as the US finds them too important. Think of the US invasions of Iraq and Afghanistan: instead of boots on the ground and a surge, you would have those swarms patrolling and shooting/bombing every armed civilian (the resistance, which NATO calls terrorists and insurgents despite their protecting their countries against invading armies, or just civilians needing a gun because the police are too far away to help).
So you can summarize your article as: 'swarms will be bad, but all great powers will have them because they are too important not to use, and so only small countries will want a treaty (and since those swarms must be used against a target as training for great-power wars, those small countries will be the first targets). But a treaty won't happen, as it would run against US needs for a weapon against soldiers, guerrilla resistance (since they will resist US invasion), intel, and air defense (drone bombing, or as decoys to get defenses to activate and then be destroyed by "stealth" fighter-bombers).'
To my understanding, the author of this article fails to understand the primary strategic role of drone swarms, at least as the US would intend to use them.
Consider that the primary obstacle to US force projection in the modern era is the proliferation of anti-access/area-denial systems. What AFADS offer is the ability to saturate and eliminate air defenses – which typically rely on radars and other easily discerned radio signatures – allowing for follow-up with manned platforms such as the F-35, then the F-15 and the F/A-18. Potentially, this initial stage could have zero friendly casualties, and very few enemy casualties (perhaps only radar and SAM operators). Likewise with the swarming boats.
These drones would not be released into civilian areas, trying to sort combatants from noncombatants. That's a late phase of a war, and we already have a platform for that – the rifleman. These drones would be going after hypersonic and ballistic missiles, radars, airfields, SAM sites, and the like. These are more easily identified and targeted than men in the bushes with firearms.
A more useful course of action, in my opinion, would be to lobby the US Government for a ban on AFADS used in the antipersonnel role. This is more likely to succeed, as the government has no "drone shaped gap" outside of the "counter A2AD" role. It allows development of autonomous drone swarms to continue, while still addressing the author's major concerns – which appear to center around identifying enemy combatants, rather than dealing with materiel assets.
You're right that anti-personnel weapons are my biggest concern, as that's the hardest possible case, but they're definitely not my only one. While machine vision has advanced extensively, it's still incredibly brittle. Training the necessary algorithms requires huge amounts of data, and subtle patterns may lead to unexpected results. While not exactly machine vision, Amazon's hiring algorithm is illustrative: Amazon fed the algorithm data on past hiring decisions to figure out what qualities best predicted success. The algorithm concluded the best predictors were being named Jared and having played high school lacrosse. And, of course, this assumes the data is accurate. Relatively simple manipulations of training data can lead to major mistakes. So, while correctly identifying vehicles is probably *easier* than identifying humans, it's hardly easy or trivial.
While you're right that SEAD-focused swarms are a big focus, they're *far* from the only one. The Navy is building boat swarms; the Army is building missile-launched swarms to target tanks; the Strategic Capabilities Office has already built swarms designed to search villages for particular people; SCO has also built swarms to provide targeting for offshore (I believe autonomous) ships; and DARPA is building swarms for subterranean operations. The Naval Postgraduate School is even working on a project to model, counter, and potentially develop swarms of up to a million drones operating on the surface, in the air, and under the sea. There's also no reason to presuppose that air defenses will be confined only to non-civilian areas, particularly if an adversary is aware of U.S. and other states' concerns about killing civilians.
The US government is also far from the only actor. China, South Korea, Turkey, Russia, the UK, France, and others are all developing swarms. Although I'm not aware of any specific swarm exports, several of these states have exported advanced drones, including drones intended to be capable of swarming. Declaring AFADS WMD is more about enabling and providing guidance for a collective response than about specifically limiting the U.S. (though that is one implication; limiting massive swarms is potentially a win for the U.S., given the vulnerabilities massive, cheap swarms pose to the U.S.'s big, expensive platforms). For example, if some other state bought, built, or otherwise acquired a massive swarm and *did* target a major civilian area, that should be treated as akin to the use of any other WMD against civilians. That means potential diplomatic sanctions, economic sanctions, physical or cyber sabotage, and potentially military strikes.