Note: This essay adapts and slightly expands remarks the author made at the Royal United Services Institute (RUSI) in London, on November 7, 2018, at its “Lethal AI and Autonomy Conference.”1 A forewarning to the reader: the nature of that brief presentation left the author with little choice but to raise more questions than he could answer in a short time, hoping instead to spur lively debate and discussion amongst the conference attendees. This essay retains that strategy and embraces a fair amount of uncertainty; the author hopes this caveat will forestall claims of what his fellow lawyers would call a “void for vagueness” problem. The author thanks Dr. Peter Roberts, Adam Maisel, and Ewan Lawson of RUSI for their invitation and hospitality.
You will rejoice to hear that no disaster has accompanied the commencement of an enterprise which you have regarded with such evil forebodings.
– Mary Shelley, Frankenstein; or, the Modern Prometheus
Good afternoon, thank you for having me back here today. I am especially grateful to talk about this subject right now. Given that I just arrived this morning from the War Studies Conference at West Point,2 and I don’t sleep well on transatlantic flights, I’m fairly certain that you’re going to see a live demonstration of what a “semi-autonomous” actor looks like.
This afternoon, I would like to open a discussion that is regrettably absent from the public discourse, such as it is, about military applications of artificial intelligence (AI) and autonomy; and it is equally absent from the engineering labs, military training and doctrine, and legal treatises. The discussion, missing in action for now, is not about what aspect of warfare we should control or instead delegate to computer software; it is about where we choose to assign moral culpability, responsibility, and accountability for our ever-growing reliance on these applications. It is not about a militant, self-aware, autonomous AI’s reaction to us, its maker, and how we might protect ourselves from its dangers by baking in safeguards or through treaty-based international bans; rather, it presupposes a future of widespread AI adoption across all war-fighting domains and “war-fighting functions,” and it is about looking at ourselves and our reactions to and with that AI.
If the current pace of innovation, engineering, experimentation, and use of autonomous AI means there is a “revolution in military affairs” (RMA) afoot, as my colleague just suggested,3 I have to ask: Is it coming before, or after, an associated revolution in military ethics? My talk is about such a revolution—one that, I will suggest, has not yet happened but ought to occur alongside any AI- and autonomy-driven RMA. I will begin with a short story:
Two friends, a computer scientist and a software engineer, colleagues working on IBM’s Watson, went for a stroll to clear their heads one afternoon in a nearby park. After a while, their thoughts turned back to work and pretty soon they were so wrapped up in their spirited and highly technical conversation that they became disoriented and totally lost in the woods. Their iPhones were low on battery power and the sun was blocked by thick clouds, but after a few minutes of animated attempts at problem solving, they noticed a young man in a well-tailored suit ambling confidently along a path a few dozen meters away.
The computer scientist and engineer called out to the man, and said, “Hello there, might you tell us exactly where we are?”
The man stopped, and replied, “Of course. You’re standing in the middle of a trail in the park at midday.” Then he promptly disappeared down the path.
The engineer turned to the computer scientist and said, “He must have been one of our lawyers.”
“How can you tell?” asked his friend.
“Because . . . he was one hundred percent accurate and completely useless.”4
So, in full disclosure and partly by way of apology, I am one of those lawyers, but obviously not for IBM. What I hope to bring to this discussion comes from the viewpoint of a serving military attorney, former combat engineer officer with deployments to Iraq, and father of three. As someone whose childhood included the first wave of 8-bit Nintendo video games and who didn’t own a cell phone until after college graduation, I have watched the steady advance of computer technology into our lives—at play, at home, at school. Now, with its seemingly inevitable encroachment into not just the weapons we will use to fight, but also the very manner in which we think about fighting, I will use this opportunity to talk about the ethics of military use of AI and autonomy; specifically, about the moral culpability that goes along with (or may go along with) these advances.
Imagine that a military’s command-and-control decision making and planning is, in some not-too-distant future, augmented, enabled, and sped up by AI and autonomous systems. For the sake of this hypothetical, it doesn’t matter if it is a president or prime minister ordering a limited missile strike, or a field general maneuvering a division through a densely populated megacity, or an F-35 pilot in a dogfight, or US CYBERCOM conducting a cyber effects mission, or a platoon leader ordering his troops to return fire into a thicket of palm trees, orange groves, and canal-laced orchards. The question that bothers me the most about AI and autonomy is this: In a field of unusually difficult technical complexity, where progress is rapid, and with a wide diffusion of resources and operators, does our reliance on AI and autonomy in some way make us more culpable and more responsible for our unarmed and armed conflict actions—those that are both just and unjust—or does our reliance, dependence, and trust in that AI and autonomy diminish that responsibility, and attenuate our individual, unit, command, or national accountability?5
This question cannot be answered solely by debating “killer robots,” legal left and right limits, or what a recent paper from Harvard Law School calls “war algorithms.”6 It ought also to be about responsibility of design and of use—our intent and its consequences.7 We principally talk about war-related AI and autonomy in terms of preventing, deterring, and punishing malicious uses of AI (either human-created or AI-initiated).8 In legal terms, that usually means we are concentrating our concern on two specific breeds of intent, or mens rea: the knowing act (or omission) and the purposeful act (or omission). What we do not adequately address9 are the other two forms of intent-based culpability: recklessness and negligence.10 This suggests to me that, as a community of practice or community of interest, we have not yet fully debated the depth or extent of moral accountability and responsibility for the design, creation, and use of AI and autonomy in conflict.
AI and autonomy in conflict involve, to my mind, three elements or threads.11 The first thread is technology—which I define here as simply the tools we make to make our decisions or actions easier. The second thread is legality—what I define as the rules we impose and abide by that curtail or regulate use of that technology. The third thread is ethics. I would note that we often find ourselves in these debates referring to “ethics” and what is or is not “ethical” without ever defining this slippery term, a dangerous practice of imprecision. So I define “ethics” in this context as the set of principle-based choices we make when using that technology in a manner that legality does not completely or satisfactorily prescribe, proscribe, or describe.
In order to better understand, talk about, and defend each of these three threads, it would seem we need to untangle them, address each individually, and see how one impacts the others. But this is how we get to the seminal, oft-repeated questions that are stoked by Frankenstein-monster, “Terminator” from the future, slippery-slope fears—questions like: Will we be capable of achieving a machine-learning skill that can operate outside closed, rule-bounded systems like Go, Jeopardy, and chess? How in the loop, or on the loop, should humans be? Which humans, exactly, ought to be in or on that loop? At what decision-making level? With what skills? And with what oversight? And how do we hold people accountable in this space (and who, specifically, do we hold accountable)? How should we collectively define “autonomous system,” and should all autonomous weapon systems be banned, or just some kinds? What areas of technical research would be illegal or highly regulated?
These are not unimportant questions, but the reductionist approach by itself is a dead end. Untangling the threads completely is an impossible task because they all share a common human factor: choice. Choice in what we design and build is related to choice in what we normatively believe is the “right” thing to do, which is related to how that action with that tool fits within our existing norms, standards, and laws. And these, of course, are in turn mutable over time as the tools and the precedents in our behaviors change.
We can, of course, visualize this in images of autonomous AI on the battlefield or the skies above it. Whether it is in the form of a “killer robot” or android is beside the point—the underlying description is, at bottom, of a decision-making enabler, endowed by humans with some degree of independence and some degree of self-control, that is used by the military for military goals (whether they are intelligence collection and analysis, organization and logistics, planning, or disabling, degrading, or destroying a target).
There are two distinct ways to view this grand capability.
One: looking “up and out,” or through a strategic, operational, or tactical lens, this capability is viewed as a tool, or a means to an end. It allows civilian leaders to treat immediately responsive and unquestioning military gadgets as military options. Or, two: looking “down and in,” or through a moral and ethical lens, this capability might be viewed as a comrade, compatriot, or “battle buddy,” to use US Army jargon—or, if the tech isn’t anthropomorphic, we might just think of the capability as an extension of our own faculties, augmenting and enabling, but not replacing, well-known and established options.
Up and Out
In the first lens, the up and out lens, we naturally ask certain kinds of questions. For example, questions about counter-proliferation, deterrence, and asymmetry. But I think the most interesting question is this: To what extent will easy access to this technology make it easier for political leaders to put humans on the ground, in the mud, in the sand, or to resort to military force at all, if they believe they face fewer risks to mission or to lives, and can act with swifter and more “sterile” precision?
The tentative answers to such questions likely come from how “successful” these tools appear to be in the field. None of these questions can be answered predictively via algorithm. They are answered in part by human judgments and evaluations about cost and benefit after some suitable period of observation, because they demand a human reaction: Am I scared or not? Am I deterred in this or future instances, or not? Do I wish my country or organization to compete in a resource-draining race to develop new and better tools, based on what I’ve seen them do? Am I more, or less, risk-averse when it comes to putting my citizens in harm’s way? Any conscious decisions, in the face of these questions, will reflect the uncertainties, irrationalities, and variations in human choice. In other words, choice is the underlying variable.
As a consequence, this way of looking at autonomy and AI has a consequentialist flavor, even utilitarian, and seems like a paradox: even though it looks backward at observable effects, it raises jus ad bellum concerns—or the justice of going to war in the first place, which is a very forward-looking and speculative analysis (for example, we would ponder whether going to war is for the right cause, whether we are likely to prevail, and whether the effect will be worth the cost).
To repeat, choice is the underlying variable, but it is choices about conflict, not choices in conflict, that matter here.
Down and In
But with the second lens, the down and in lens, we ask different kinds of questions, like: To what extent, if any, will such proliferation cause identifiable, noticeable changes in how soldiers, pilots, or sailors make choices in conflict? Specifically, with advanced AI theoretically capable of so much information collection, analysis, and rapid option generation, will there be changes in thinking about who the humans should affirmatively engage? Who they should affirmatively protect? What decisions from higher command may they question, disobey, or ignore if they contradict the solutions framed by AI? What should we think about a commander who acts contrary to his own hyper-reliable and accurate AI-driven intelligence systems?12 Is the AI, in whatever physical form, worthy of some degree of moral consideration—would we sacrifice, let alone risk, ourselves or our human subordinates to save that technological enabler from being damaged or captured by the enemy? And again, if our decision making and planning is augmented, enabled, sped up by AI and autonomous systems, does that make us more culpable and more responsible for our actions, or less so?
Yet answers to these questions also cannot be derived by calculation, in advance, or by predicting more or less probable outcomes. The answers all speak to the extent to which humans are willing, consciously or by training, to view autonomy and AI as ends in themselves, just as modern military ethics conceives and articulates our martial duties to one another.
Whereas the first lens—up and out—is consequentialist and holds implications for jus ad bellum concerns, this down and in lens is deontological and raises jus in bello, professionalism, and “warrior ethos” considerations. Where the first lens was colored by choice about conflict, this lens is colored by choices in conflict.
For example, would the enemy’s mechanical, artificial, autonomous combat medic be protected from attack under international law and therefore the proper subject of rules of engagement? Or is it simply another piece of equipment that can be ignored, turned off, dismantled, or targeted? Should a soldier, from her days in basic training, be imbued with a warrior ethos that adapts the “leave no soldier behind” ethic to include “steel and silicon soldiers” too? And to repeat the question with which I opened: Does our reliance on nonhuman, artificial programs, codes, devices, and weapons make us more, or less, responsible for our combat choices? Does culpability really depend primarily on where the person is with respect to the decision loop? Should we worry ourselves only about purposeful and knowingly malicious design, creation, and use, or should we also think about moral accountability in terms of recklessness and negligence?
I am just not sure if the answers to these questions depend on natural instincts that are independent of experience. But they do, at a minimum, factor human choice into the equation, just as the first, up and out, lens does.
The more “human” we engineer these products and systems to be, the less “tool-like” we will consider them, and the less remote or desensitized we will be when it comes time to employ them, especially under the stress and friction and fear and ambiguity and bonding of the close fight.
Because human choice—about conflict and in conflict—is the fundamental, lowest common denominator that binds the three threads of technology, law, and ethics into one confusing, ambiguous, wicked problem, and because choice is inherent to both lenses for looking at this problem, we need to sensitize ourselves to a new way of looking at our choices.
And so it is a lesson for the “community of interest” or “community of practice”—the scientists, the engineers, the ethicists, the soldiers, the politicians, and, frankly, the public. If we want to consider whether our reliance on, dependence on, or apparent control of this technology has accentuated, will accentuate, or should accentuate our culpability, or whether instead it has diminished, will diminish, or should diminish that accountability, we should consider the three threads all at once, not compartmentalized in their traditional scholarly silos. Our debates about the technology, legal and moral, practical and philosophical, must blend jus ad bellum with jus in bello questions; they must blend consequentialist with duty-based ethics; they must blend strategic and tactical purposes with knotty subjective questions of individual virtue, principles, and moral obligations. And we must have a conversation about what it means to be reckless and negligent in this space.
This is very tricky. Technologies that enable human decisions in conflict work at inhuman speed: their application of Boyd’s OODA Loop sequence13 happens orders of magnitude faster than we can even think how to spell “OODA.” I think this means we need to move the conversation about accountability much earlier in time.
But there’s a counterpoint to consider: Is this not asking too much of the wrong people? Isn’t it like saying that the person who is morally (or even legally) culpable for a child’s self-inflicted gunshot wound is not his parent for recklessly allowing access to the gun, nor the gun shop owner who sold the gun to the parent, nor the manufacturing company that mass-produced the gun, nor the machinist who assembled the trigger, nor the mechanical engineer who designed the trigger assembly . . . but is instead the professor who wrote the textbook that the engineer studied? Maybe that’s sliding the scale of accountability too far to the “left of the boom,” so to speak, but it does at the very least provide one conceivable bookend to the possible range of morally accountable actors.
Mechanizing Morals and Digitizing Duties
Ultimately, it comes down to another choice: how much this community of practice chooses to humanize our computerized comrades and, at the same time, how much it chooses to mechanize our morals or digitize our duties. We ought to ask ourselves a fundamental question that has both a technical/scientific answer and a philosophical one: Does endowing AI with more “autonomy” mean that we leave less for ourselves? Is it a zero-sum game? If so, we have a problem, because our sense of moral culpability (and legal liability) is typically premised on the belief in individual, human agency—the manifestation of a person’s autonomous choices. For instance, if Person A is forced against his will by Person B to commit a crime, Person A is not criminally responsible for the act—he lacked the autonomy to act of his own free will. But when a person “purposefully” acts, we assign a greater moral stigma to him; certainly our criminal justice system exposes the knowing and purposeful actor to stronger punishments than those we may levy on a person who acts merely “negligently.”14
If we sacrifice our own autonomy (purposefully or inadvertently) as we think up, design, build, test, field, and use these high-tech wonders, we begin to lose justifications for holding one another morally and legally accountable for our choices. But if it is not a zero-sum game, and we are instead able to retain sufficient human autonomy alongside non-threatening autonomous AI systems, the outlook for maintaining human judgment about accountability and culpability becomes more positive. We could treat the spheres of overlapping or mutually independent autonomy as we do with our kids—that is to say, a gradual, context-dependent release of permission to make choices for themselves without our oversight or a strongly worded parental veto.15
Coming to grips with where accountability and culpability begin and end (and with whom) should put us back on the right track to answer the in-, on-, or astride-the-loop questions we are probably rushing too fast to answer. Perhaps, in the end, the real concern is not whether we are physically equipped to control, or defend ourselves from, warbots run amok, but whether we are quite yet morally equipped to build and interact with them at all. If you’re Garry Kasparov, for instance, who do you want teaching your children to play chess? You, a world grandmaster, or the computer algorithm that beat you? The answer depends, ultimately, on human choice: the choice about why—for what purpose—we want our child to learn chess. What is our intent? Unfortunately, this debate—one that should be very public and very informed—has simply not yet taken center stage.
Keeping Threads Tangled
Cursed, cursed creator! Why did I live? Why, in that instant, did I not extinguish the spark of existence which you had so wantonly bestowed?
– Mary Shelley, Frankenstein; or, the Modern Prometheus
Returning to my story about the IBM computer scientist and engineer lost in the woods: it is not the lawyer’s job to lead the way out of the woods. Nor is it fair to expect the engineer and scientist to have predicted where they ended up, or to have walked along only clearly marked, predetermined—we might say programmed—trails. But what we can hope for is that the three of them do not lose sight of the entangled nature of technology, legality, and ethics. It is this broader, entangled community of practice that should help the public understand and decide whether—in armed conflict—we will be more, or less, culpable, responsible, and accountable for our choices when we rely on autonomous AI. They should start taking those long, talkative walks in the murky woods together.
Image credit: Spc. Brian Chaney, US Army (adapted by MWI)