Note: This essay adapts and slightly expands remarks the author made at the Royal United Services Institute (RUSI) in London, on November 7, 2018, at its “Lethal AI and Autonomy Conference.”[note]The author spoke as part of the panel discussing “What Moral and Ethical Challenges are Unique to the Military?”[/note] Forewarning to the reader: the nature of that brief presentation left the author with little choice but to raise more questions than he could answer in a short time, hoping instead to spur lively debate and discussion amongst the conference attendees. This essay retains that strategy and embraces a fair amount of uncertainty; the author hopes this caveat will forestall claims of what his fellow lawyers would call a “void for vagueness” problem. The author thanks Dr. Peter Roberts, Adam Maisel, and Ewan Lawson of RUSI for their invitation and hospitality.


 

You will rejoice to hear that no disaster has accompanied the commencement of an enterprise which you have regarded with such evil forebodings.

– Mary Shelley, Frankenstein; or, the Modern Prometheus

 

Good afternoon, and thank you for having me back here today. I am especially grateful to talk about this subject right now. Given that I just arrived this morning from the War Studies Conference at West Point,[note]The Modern War Institute at West Point hosts an annual “USMA Class of 2006 War Studies Conference.” This year’s conference (November 4-6, 2018) was called “Potential Disruptors of the ‘American Way of War.’”[/note] and I don’t sleep well on transatlantic flights, I’m fairly certain that you’re going to see a live demonstration of what a “semi-autonomous” actor looks like.

This afternoon, I would like to bring out a discussion that is regrettably absent from the public discourse, such as it is, about military applications of artificial intelligence (AI) and autonomy; and it is equally absent from the engineering labs, military training and doctrine, and legal treatises. The discussion, missing in action for now, is not about what aspect of warfare we should control or instead delegate to computer software; it is about where we choose to assign moral culpability, responsibility, and accountability for our ever-growing reliance on these applications. It is not about a militant, self-aware, autonomous AI’s reaction to us, its maker, and how we might protect ourselves from its dangers by baking in safeguards or through treaty-based international bans; rather, it presupposes a future widespread adoption of AI across all war-fighting domains and “war-fighting functions,” and is about looking at ourselves and our reactions to, and interactions with, that AI.

If the current pace of innovation, engineering, experimentation, and use of autonomous AI means there is a “revolution in military affairs” (RMA) afoot, as my colleague just suggested,[note]Fellow panelist Wing Commander Keith Dear, UK Joint Forces Command. He used the definition of “revolution in military affairs” offered by Murray and Knox: a “complex mix of tactical, organizational, doctrinal, and technological innovations” that manifests as a new conceptual “approach to warfare.” Williamson Murray and Macgregor Knox, “Thinking about revolutions in warfare,” in The Dynamics of Military Revolutions: 1300-2050 (New York, NY: Cambridge University Press, 2001), p. 12. On the other hand, some suggest the rapid developments and use of autonomy and artificial intelligence in military affairs, coupled with “machine learning,” “deep learning,” and unmanned systems, constitute a broader, culturally and politically altering, “Military Revolution.” See Frank Hoffman, “Will War’s Nature Change in the Seventh Military Revolution?” Parameters 47(4) (Winter 2017-18): 19-31. Hoffman asserts that these advances in technology and changes in how we employ it will not only change the subjective character of warfare, but will strongly influence the nature of war (defined by Clausewitz’s “trinity” of passion, reason, and chance, plus fog and friction, as a continuation of politics and policy through other means).[/note] I have to ask: Is it coming before, or after, an associated revolution in military ethics? The subject of my talk is such a revolution—and my suggestion is that it has not yet happened but ought to occur alongside any AI- and autonomy-driven RMA. I will begin with a short story:

Two friends, a computer scientist and a software engineer, colleagues working on IBM’s Watson, went for a stroll to clear their heads one afternoon in a nearby park. After a while, their thoughts turned back to work and pretty soon they were so wrapped up in their spirited and highly technical conversation that they became disoriented and totally lost in the woods. Their iPhones were low on battery power and the sun was blocked by thick clouds, but after a few minutes of animated attempts at problem solving, they noticed a young man in a well-tailored suit ambling confidently along a path a few dozen meters away.

The computer scientist and engineer called out to the man, and said, “Hello there, might you tell us exactly where we are?”

The man stopped, and replied, “Of course. You’re standing in the middle of a trail in the park at midday.” Then he promptly disappeared down the path.

The engineer turned to the computer scientist and said, “He must have been one of our lawyers.”

“How can you tell?” asked his friend.

“Because . . . he was one hundred percent accurate and completely useless.”[note]This story adapts a well-known joke about lawyers. See, e.g., http://www.ahajokes.com/law095.html, or http://www.cs.utah.edu/~luke/Humor/lawyer.html.[/note]

So, in full disclosure and partly as an apology, I am one of those lawyers, though obviously not for IBM. What I hope to bring to this discussion comes from the viewpoint of a serving military attorney, former combat engineer officer with deployments to Iraq, and father of three. As someone whose childhood included the first wave of 8-bit Nintendo video games and who didn’t own a cell phone until after college graduation, I have watched the steady advance of computer technology into our lives—at play, at home, at school. Now, with its seemingly inevitable encroachment into not just the weapons we will use to fight but also the very manner in which we think about fighting, I will use this opportunity to talk about the ethics of military use of AI and autonomy; specifically, about the moral culpability that goes along with (or may go along with) these advances.

The Question

Imagine that a military’s command-and-control decision making and planning is, in some not-too-distant future, augmented, enabled, and sped up by AI and autonomous systems. For the sake of this hypothetical, it doesn’t matter if it is a president or prime minister ordering a limited missile strike, or a field general maneuvering a division through a densely populated megacity, or an F-35 pilot in a dogfight, or US CYBERCOM conducting a cyber effects mission, or a platoon leader ordering his troops to return fire into a thicket of palm trees, orange groves, and canal-laced orchards. The question that bothers me the most about AI and autonomy is this: In a field of unusually difficult technical complexity, where progress is rapid, and with a wide diffusion of resources and operators, does our reliance on AI and autonomy in some way make us more culpable and more responsible for our unarmed and armed conflict actions—those that are both just and unjust—or does our reliance, dependence, and trust in that AI and autonomy diminish that responsibility, and attenuate our individual, unit, command, or national accountability?[note]See Amitai Etzioni and Oren Etzioni, “Pros and Cons of Autonomous Weapons Systems,” Military Review (May-June 2017): 72-81, p. 75.[/note]

Intentions

This question cannot be answered by solely debating “killer robots,” legal left and right limits, or what a recent paper from Harvard Law School calls “war algorithms.”[note]Dustin A. Lewis, Gabriella Blum, and Naz K. Modirzadeh, War-Algorithm Accountability, Harvard Law School Program on International Law and Armed Conflict, Research Paper (August 2016), available here.[/note] It ought to be, also, about responsibility of design and of use—our intent and its consequences.[note]These questions of the depth and breadth of human moral culpability were not raised, for example, in a recent issue of Foreign Policy devoted to “The Future of War,” featuring articles by Paul Scharre, Michael C. Horowitz, Tarah Wheeler, and Neri Zilber (Foreign Policy (Fall 2018)), nor in the 186 short essays published in What to Think About Machines That Think, John Brockman, ed. (New York: Harper Perennial, 2015), which largely addressed questions like whether AI could really “think” in the way humans think we do, whether AI would be inherently dangerous and a risk to human safety, or instead a harbinger of peace and safety, or the likelihood of machine dominance in the workplace and in life (with contributions from distinguished thinkers like Steven Pinker, Alison Gopnik, Paul Davies, Nick Bostrom, and Richard H. Thaler). There is one exception I could find: Margaret Levi, “Human Responsibility,” pp. 235-36 (after briefly mentioning drones [“Drones are designed to attack and to surveil—but attack and surveil whom?”], she concludes her short piece with: “Machines that think create the need for regimes of accountability we haven’t yet engineered and societal (that is, human) responsibility for consequences we haven’t yet foreseen”).[/note] We principally talk about war-related AI and autonomy in terms of preventing, deterring, and punishing malicious uses of AI (either human-created or AI-initiated).[note]See, for example, P.W. Singer and Allan Friedman, Cybersecurity and Cyberwar: What Everyone Needs to Know (Oxford: Oxford University Press, 2014); Tallinn Manual on the International Law Applicable to Cyber Warfare, Michael N. Schmitt, ed. (Cambridge: Cambridge University Press, 2013); “Autonomous Weapons: An Open Letter from AI [Artificial Intelligence] & Robotics Researchers,” Future of Life Institute (July 28, 2015), available at https://futureoflife.org/open-letter-autonomous-weapons/ (“they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity”); “Lethal Autonomous Weapons Pledge,” Future of Life Institute (July 18, 2018), available at https://futureoflife.org/lethal-autonomous-weapons-pledge/?cn-reloaded=1 (“we the undersigned agree that the decision to take a human life should never be delegated to a machine.
There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable”).[/note] In legal terms, that usually means we are concentrating our concern on two specific breeds of intent, or mens rea: the knowing act (or omission) and the purposeful act (or omission). What we do not adequately address[note]However, see Rebecca Crootof, “War Torts: Accountability for Autonomous Weapons,” University of Pennsylvania Law Review 164(6) (May 2016): 1347-1402 (exploring the possibility of a new form of legal regime—international tort liability for states—to regulate (rather than prohibit and criminalize) states for their acts amounting to war crimes or serious violations of international law, “committed” by autonomous military systems (called “war torts”)).[/note] are the other two forms of intent-based culpability: recklessness and negligence.[note]Generally, according to the Model Penal Code, on which many state criminal justice systems are based, “purposeful” means the conscious goal to engage in the prohibited conduct or to consciously cause the result of that prohibited conduct. “Knowing” means to be aware that the conduct is of a criminal nature or to be practically certain that a prohibited result will occur if one continues the prohibited act or omission. To be “reckless” means the conscious disregard of a substantial and unjustifiable risk, or the gross deviation from a “reasonable person” standard of behavior. Finally, “negligence” means that the person should be aware of (but is not) the substantial and unjustifiable risk, and that this failure to perceive it is a gross deviation from the reasonable person standard. This range of mental states starts with the most culpable, “purposeful” (e.g., first degree murder, or murder with malice aforethought), and ends with the least culpable, “negligent.” Generally, the more severe or culpable the mindset, the more punishment the accused is exposed to for commission of that act or omission. (A fifth form of culpability is “strict liability,” where no ill intent need be proven—for the sake of this discussion, it is immaterial.)[/note] This suggests to me that, as a community of practice or community of interest, we have not yet fully debated the depth or extent of moral accountability and responsibility for the design, creation, and use of AI and autonomy in conflict.

Threads

AI and autonomy in conflict involve, to my mind, three elements or threads.[note]See also War-Algorithm Accountability, p. 5 (“To unpack and understand the implications of [replacing human choices with algorithmically-derived choices in relation to war] requires, among other things, technical comprehension, ethical awareness, and legal knowledge. Understandably if unfortunately, competence across those diverse domains has so far proven difficult to achieve for the vast majority of states, practitioners, and commentators”).[/note] The first thread is technology—which I define here as simply the tools we make to make our decisions or actions easier. The second thread is legality—what I define as the rules we impose and abide by that curtail or regulate use of that technology. The third thread is ethics. I would note that we often find ourselves in these debates referring to “ethics” and what is or is not “ethical” without ever defining this slippery term, a dangerous and imprecise practice. So I define “ethics” in this context as the set of principle-based choices we make when using that technology in a manner that legality does not completely or satisfactorily prescribe, proscribe, or describe.

In order to better understand, talk about, and defend each of these three threads, it would seem we need to untangle them, address them individually, and see how each impacts the others. But this is how we get to the seminal, oft-repeated questions that are stoked by Frankenstein-monster, “Terminator” from the future, slippery-slope fears—questions like: Will we be capable of achieving a machine-learning skill that can operate outside closed, rule-bounded systems like Go, Jeopardy, and Chess? How in the loop, or on the loop, should humans be? Which humans, exactly, ought to be in or on that loop? At what decision-making level? With what skills? And with what oversight? And how do we hold people accountable in this space (and whom, specifically, do we hold accountable)? How should we collectively define “autonomous system,” and should all autonomous weapon systems be banned, or just some kinds? What areas of technical research would be illegal or highly regulated?

Choice

These are not unimportant questions, but the reductionist approach by itself is a dead end. Untangling the threads completely is an impossible task because they all share a common human factor: that is, choice. Choice in what we design and build is related to choice in what we normatively believe is the “right” thing to do, which is related to how that action with that tool fits within our existing norms, standards, and laws. And these of course are, in turn, mutable over time as the tools and precedents in our behaviors change.

We can, of course, visualize this in images of autonomous AI on the battlefield or the skies above it. Whether it is in the form of a “killer robot” or android is beside the point—the underlying description is, at bottom, of a decision-making enabler, endowed by humans with some degree of independence and some degree of self-control, that is used by the military for military goals (whether they are intelligence collection and analysis, organization and logistics, planning, or disabling, degrading, or destroying a target).

There are two distinct ways to view this grand capability.

One: looking “up and out,” or through a strategic, operational, or tactical lens, this capability is viewed as a tool, or means to an end. It enables civilian leaders to resort to immediately responsive and unquestioning military gadgets as military options. Or, two: looking “down and in,” or through a moral and ethical lens, this capability might be viewed as a comrade, compatriot, or “battle buddy” to use US Army jargon—or if the tech isn’t anthropomorphic, we might just think of the capability as an extension of our own faculties, augmenting and enabling, but not replacing, well-known and established options.

Up and Out

In the first lens, the up and out lens, we naturally ask certain kinds of questions—for example, questions about counter-proliferation, deterrence, and asymmetry. But I think the most interesting question is this: To what extent will easy access to this technology make it easier for political leaders to put humans on the ground, in the mud, in the sand, or to resort to military force at all, if they believe they face fewer risks to mission or risks to lives, with swifter and more “sterile” precision?

The tentative answers to such questions likely come from how “successful” these tools appear to be in the field. None of these questions can be answered predictively via algorithm. They are answered in part by human judgments and evaluations about cost and benefit after some suitable period of observation, because they demand a human reaction: Am I scared or not? Am I deterred in this or future instances, or not? Do I wish my country or organization to compete in a resource-draining race to develop new and better tools, based on what I’ve seen them do? Am I more, or less, risk-averse when it comes to putting my citizens in harm’s way? Any conscious decisions, in the face of these questions, will reflect the uncertainties, irrationalities, and variations in human choice. In other words, choice is the underlying variable.

As a consequence, this way of looking at autonomy and AI has a consequentialist, even utilitarian, flavor, and seems like a paradox: even though it looks backward at observable effects, it raises jus ad bellum concerns—or the justice of going to war in the first place, which is a very forward-looking and speculative analysis (for example, we would ponder whether going to war is for the right cause, whether we are likely to prevail, and whether the effect will be worth the cost).

To repeat, choice is the underlying variable, but it is choices about conflict, not choices in conflict, that matter here.

Down and In

But with the second lens, the down and in lens, we ask different kinds of questions, like: To what extent, if any, will such proliferation cause identifiable, noticeable changes in how soldiers, pilots, or sailors make choices in conflict? Specifically, with advanced AI theoretically capable of so much information collection, analysis, and rapid option generation, will there be changes in thinking about whom those humans should affirmatively engage? Whom they should affirmatively protect? What decisions from higher command may they question, disobey, or ignore if they contradict the solutions framed by AI? What should we think about a commander who acts contrary to his own hyper-reliable and accurate AI-driven intelligence systems?[note]Philosopher and cognitive scientist Daniel C. Dennett asks a similar question: “doctors are becoming increasingly dependent on diagnostic systems that are provably more reliable than any human diagnostician [so] do you want your doctor to overrule the machine’s verdict when it comes to making a lifesaving choice or treatment?” Daniel C. Dennett, “The Singularity—An Urban Legend?” in What to Think About Machines That Think, John Brockman, ed. (New York: Harper Perennial, 2015), p. 86.[/note] Is the AI, in whatever physical form, worthy of some degree of moral consideration—would we sacrifice, let alone risk, ourselves or our human subordinates to save that technological enabler from being damaged or captured by the enemy? And again, if our decision making and planning is augmented, enabled, and sped up by AI and autonomous systems, does that make us more culpable and more responsible for our actions, or less so?

Yet, answers to these questions also cannot be derived in advance by calculation or by predicting more or less probable outcomes. The answers all speak to the extent to which humans are willing, consciously or by training, to view autonomy and AI as ends in themselves, just as modern military ethics conceives and articulates our martial duties to one another.

Whereas the first lens—up and out—is consequentialist and holds implications for jus ad bellum concerns, this down and in lens is deontological and raises jus in bello considerations, professionalism considerations, and “warrior ethos” considerations. Where the first lens was colored by choices about conflict, this lens is colored by choices in conflict.

For example, would the enemy’s mechanical, artificial, autonomous combat medic be protected from attack under international law and therefore the proper subject of rules of engagement? Or is it simply another piece of equipment that can be ignored, turned off, dismantled, or targeted? Should a soldier be instilled from her days in basic training with a warrior ethos that adapts the “leave no soldier behind” ethic to include “steel and silicon soldiers” too? And to repeat the question with which I opened: Does our reliance on nonhuman, artificial programs, codes, devices, and weapons make us more, or make us less, responsible for our combat choices? Does culpability really depend primarily on where the person is with respect to the decision loop? Should we only worry ourselves about the purposeful and knowingly malicious design, creation, and use, or should we also think about moral accountability in terms of recklessness and negligence?

I am just not sure whether the answers to these questions depend on natural instincts that are independent of experience. But they do, at a minimum, factor human choice into the equation, just as the first, up and out, lens does.

Community

The more “human” we engineer these products and systems to be, the less “tool-like” we will consider them, and the less remote or desensitized we will be when it comes time to employ them, especially under the stress and friction and fear and ambiguity and bonding of the close fight.

Because human choice—about conflict and in conflict—is the fundamental, lowest common denominator that binds the three threads of technology, law, and ethics into one really confusing, ambiguous, wicked problem, and because choice is inherent to both lenses for looking at this problem, we need to sensitize ourselves to a new way of looking at our choices.

And so it is a lesson for the “community of interest” or “community of practice”—the scientists, the engineers, the ethicists, the soldiers, the politicians, and, frankly, the public. If we want to consider whether our reliance on, dependence on, or apparent control of this technology has accentuated, will accentuate, or should accentuate our culpability, or whether instead it has diminished, will diminish, or should diminish that accountability, we should consider the three threads all at once, not compartmentalized in their traditional scholarly silos. Our debates about the technology, legal and moral, practical and philosophical, must blend jus ad bellum with jus in bello questions; they must blend consequentialist with duty-based ethics; they must blend strategic and tactical purposes with knotty subjective questions of individual virtue, principles, and moral obligations. And we must have a conversation about what it means to be reckless and negligent in this space.

This is very tricky. Technologies that enable human decisions in conflict work at inhuman speed: their application of Boyd’s OODA Loop sequence[note]O-O-D-A stands for Observe, Orient, Decide, Act—a cognitive decision-making cyclic process, developed by Air Force Colonel and fighter pilot John Boyd originally to describe jet fighter tactics and performance during the Korean War, now widely adapted into a range of fields, and incorporated in official military doctrine. See Air Force Doctrine Document 1 (“Air Force Basic Doctrine”) (Washington, D.C.: Department of the Air Force, 2003), pp. 58, 91.[/note] happens at a rate orders of magnitude faster than we can think how to spell “OODA.” I think this means we need to move the conversation about accountability much earlier in time.

But there’s a counterpoint to consider: Is this not asking too much of the wrong people? Isn’t it like saying that the person who is morally (or even legally) culpable for a child’s self-inflicted gunshot wound is not his parent for recklessly allowing access to the gun, nor the gun shop owner who sold the gun to the parent, nor the manufacturing company that mass-produced the gun, nor the machinist who assembled the trigger, nor the mechanical engineer who designed the trigger assembly . . . but is instead the professor who wrote the textbook that the engineer studied? Maybe that’s sliding the scale of accountability too far to the “left of the boom,” so to speak, but it does at the very least provide one conceivable bookend to the possible range of morally accountable actors.

Mechanizing Morals and Digitizing Duties

Ultimately, it comes down to another choice: how much this community of practice chooses to humanize our computerized comrades and, at the same time, how much it chooses to mechanize our morals, or digitize our duties. We ought to ask ourselves a fundamental question that has both a technical/scientific answer and a philosophical one: Does endowing AI with more “autonomy” mean that we leave less for ourselves? Is it a zero-sum game? If so, we have a problem because our sense of moral culpability (and legal liability) is typically premised on the belief in individual, human agency—the manifestation of a person’s autonomous choices. For instance, if Person A is forced against his will by Person B to commit a crime, Person A is not criminally responsible for the act—he lacked the autonomy to act of his own free will. But when a person “purposefully” acts, we assign a greater moral stigma to him; certainly our criminal justice system exposes the knowing and purposeful actor to stronger punishments than those punishments we may levy on a person who acts merely “negligently.”[note]As a somewhat simplified, but recognizable, illustration, consider that we tend to believe the evil genius inventor bent on threatening the world’s safety for a hefty ransom poses a greater threat, deserves harsher treatment, has earned stronger social condemnation, and requires a more powerful deterrent than, say, the naïve genius bent only on achieving the next technological breakthrough, solving the next step, improving his creation for the sake of improving the creation, carelessly blind to others’ self-interested applications or fleeing from the challenging moral implications of his invention. Bond, James Bond, might “target” the former; the latter is more often the subject of our pity because he is fueled by his own ego and felled more by his own hubris than by the military, law enforcement, or the courts. Here, I’m imagining Mary Shelley’s Victor Frankenstein, or even Dr. Emmett Brown of the Back to the Future film trilogy.[/note]

If we sacrifice our own autonomy (purposefully or inadvertently) as we think up, design, build, test, field, and use these high-tech wonders, we begin to lose justifications for holding one another morally and legally accountable for our choices. But, if it is not a zero-sum game, and we instead are able to retain sufficient human autonomy alongside non-threatening autonomous AI systems, the outlook for maintaining human judgment about accountability and culpability becomes more positive. We could consider the spheres of overlapping or mutually independent autonomy the way we do with our kids—that is to say, a gradual, context-dependent release of permission to make choices themselves without our oversight or a strongly worded parental veto.[note]On the other hand, this analogy may have its limits: over time, that release of permissive autonomy may be total. When we are well past our golden years and in our dotage, or no longer mentally capable of managing our own independence, many of us grant or yield to our adult children permission to make most or all substantive life choices for us, provided it is in good faith and with our care and protection the foremost consideration. I am not sure we will ever be quite ready to rely on autonomous AI to that extent.[/note]

Coming to grips with where accountability and culpability begin and end (and with whom) should put us back on the right track to answer the in-, on-, or astride-the-loop questions we are probably rushing too fast to answer. Perhaps, in the end, the real concern is not whether we are physically equipped to control, or defend ourselves from, warbots run amok, but whether we are quite yet morally equipped to build and interact with them at all. If you’re Garry Kasparov, for instance, who do you want teaching your children to play chess? You, a world grandmaster, or the computer algorithm that beat you? The answer depends, ultimately, on human choice: the choice about why—for what purpose—we want our child to learn chess. What is our intent? Unfortunately, this debate—one that should be very public and very informed—has simply not yet taken center stage.

Keeping Threads Tangled

 

Cursed, cursed creator! Why did I live? Why, in that instant, did I not extinguish the spark of existence which you had so wantonly bestowed?

– Mary Shelley, Frankenstein; or, the Modern Prometheus

 

Returning to my story about the IBM computer scientist and engineer lost in the woods: it is not the lawyer’s job to lead the way out of the woods. Nor is it fair to expect the engineer and scientist to have predicted where they ended up, or to have walked along only clearly marked, predetermined—we might say programmed—trails. But what we can hope for is that the three of them do not lose sight of the entangled nature of technology, legality, and ethics. It is this broader, entangled community of practice that should help the public understand and decide whether—in armed conflict—we will be more, or less, culpable, responsible, and accountable for our choices when we rely on autonomous AI. They should start taking those long, talkative walks in the murky woods together.

 

Dan Maurer is an active duty Judge Advocate and Non-Resident Fellow with the Modern War Institute at West Point. He is the author of “Does K-2SO Deserve a Medal?” and various other works on strategic civil-military relations theory (here and here) and military justice in venues like Lawfare, Small Wars Journal, Military Review, Harvard National Security Journal, and several academic law journals. This essay benefited from and was influenced by the insights of colleagues and other speakers at the United States Military Academy at West Point’s Modern War Institute 2018 War Studies Conference and the 7th Emergent Topics in International and Operational Law course at the US Army’s Judge Advocate General’s Legal Center and School, both of which immediately preceded the RUSI panel. Follow Dan on Twitter: @dan_maurer.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, Department of Defense, or any institution with which the author is affiliated.

 

Image credit: Spc. Brian Chaney, US Army (adapted by MWI)