There was a time—not so long ago—when due to the technology of the era, you had to face your foe in person. Standoff distance was determined by the physical length of the weapon employed and speed was subject to the constraints of the human body and mind. Over time, projectile weapons—bow and arrow, rifle, rocket-propelled munitions, ballistic missiles—made it possible to create a threat from a growing distance, but we still needed someone to aim the weapon and decide to pull the trigger. This is no longer the case. In 2020, in a dispute over the Nagorno-Karabakh region, Azerbaijan used Turkish-made loitering drones (the Bayraktar TB2, which has gained further prominence during the war in Ukraine) to strike at Armenian tanks and command posts without warning. This is the culmination of our fears about technology and the changing character of warfare—that we can now be targeted by something relentless, a thing that does not sleep.

The time we have to analyze, consider alternatives, and plan shrinks the closer we get to the action. That decision space, regardless of physical distance, is compressed even further by the development of loitering drones and munitions that can linger just out of sight, waiting to strike; hypersonic weapons that travel at five times the speed of sound on flight paths that are difficult to predict; and the use of AI to make quick work of analysis.

The US defense community is paying growing attention to the use of AI in mission command. New programs are initiated one after another, racing to secure or reinforce advantages over competitors like China (AI), Russia (hypersonics), and Turkey (drone technology). Modern warfare has become like a game of double Dutch—we need to be fast enough to even compete. While there is the inevitable hype, enabling mission command with AI is not only an opportunity—it is a necessity.

One reason is the last decade’s remarkable surge of AI advancement, especially in its machine learning subfield, and the pervasive success of AI across industry and business. An even more important argument is the changing vision of future warfare. Consider the Army’s view of Multi-Domain Operations (MDO). A great increase in the complexity of decision-making and mission execution is one of the inevitable implications of MDO. Effective coordination of actions across multiple domains, including the complex cyber domain and the electromagnetic spectrum, is akin to playing chess on multiple stacked boards, where each move on one board influences moves on all the other boards. The complexity here is not additive; it is multiplicative.

And there will be less time available to sort through this multitude of complex moves. MDO emphasizes the importance of rapidly exploiting windows of superiority, which appear unpredictably and fleetingly. Exploiting such short-lived windows will often require rapid and major—potentially dangerous—changes to existing plans, coordinated across multiple domains. AI can help orchestrate and assess the ramifications of such changes and produce the necessary detailed fragmentary order (FRAGORD)—in seconds, if needed.

AI opens new opportunities, and the United States is not the only military power exploring AI for mission command. For example, the Russian military has fielded near-autonomous systems for large-scale, distributed, heterogeneous fires and electronic warfare, with real-time planning and automated execution. Such capabilities inevitably rely on AI technologies today, and will to an even greater degree in the future.

What Challenges Stand in the Way?

Whatever the future might hold, past experience has hardly been encouraging to the champions of AI for mission command. The United States has over thirty years of history of trying to develop intelligent support tools for mission command, with relatively little success. Consider just a few of the many examples. From the Cuban Missile Crisis until the mid-1990s, multiple agencies worked on a sprawling collection of computer programs, including executive decision-making tools, called the Worldwide Military Command and Control System, or WWMCCS. In the mid-1990s, DARPA worked on a computerized assistant for training battle staffs—the Battle Staff Training System (BSTS).

The alphabet soup of programs continued in the late 1990s, when the Army Battle Command Battle Lab developed and experimented with an AI-based system, ICCES (also known as CADET), for generating plans (as synchronization matrices) from course-of-action sketches. Also in the late 1990s, programs like the Army/DARPA FCS MDC2 program explored new ideas, including some AI-like features. The Command Post of the Future (CPOF) program also explored AI capabilities but ultimately failed to include them. The PAL program yielded the wildly successful Siri technology without, however, a direct impact on mission command.

In the mid-2000s, the Army’s CERDEC initiated the Tactical Information Technologies for Assured Network Operations program, which integrated intelligent agents for collaborative planning and execution monitoring. In the 2010s, CERDEC built the Automated Planning Framework, or APF, a plug-in to the Command Post Computing Environment (CPCE) that includes tools for facilitating the military decision-making process (MDMP) for commanders and staffs.

The tools the Army has developed focus on aggregating and displaying information for the commander and on collaboration to support the MDMP. These capabilities mitigate some of the manual labor but do not take full advantage of advances in computing power.

If we take only one lesson from the Nagorno-Karabakh conflict in 2020, it should be that the advantage goes to those who no longer see the world like humans—that is, looking out at the world on a horizontal plane, from our own point of view, cognizant mainly of assets under our direct control.

As identified in the 2020 Department of Defense Electromagnetic Spectrum Superiority Strategy, “Enemy activities detect . . . friendly EMS capabilities for the purpose of military advantage.” In response, the Army is developing tools to improve situational awareness of the friendly spectrum, specifically to avoid detection through EMS means. A well-known way to mitigate detection is to disperse the command post, spreading its signal out to make it more survivable. We are social creatures, though, and dispersed collaboration challenges us.

Human cognitive constraints also come into play in the discovery, development, and evaluation of courses of action. Decision makers who lack experience may be more willing to rely on recommendations from an AI agent, while more experienced decision makers are more likely to reject those recommendations in favor of their own experience. Counterintuitively, it is these more experienced decision makers who might benefit most from the recommendations, due to the Einstellung effect: the tendency to favor a known approach over a potentially more successful novel one.

One of the major lessons from the introduction of decision aids in the tactical environment is that we must think beyond the materiel aspect of the problem. If training and doctrine support more manual methods of following the military decision-making process, those are the processes and techniques soldiers will use in battle. Availability is another big concern when using a process that relies on automation—if a decision aid depends on compute power or a network connection that could go down, a more reliably available solution has an advantage.

“Expert systems” that rely on human-derived rules may double down on the same biases and narrow thinking as their human counterparts, and may also be stymied by novel situations. Machine learning systems (which can also learn human biases) are more adaptable, but require large amounts of data from relevant scenarios to improve accuracy. In practice, what might be best is a hybrid solution: an “expert” knowledge base with built-in relationships, along with more flexible machine learning models for taking in new data.
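A minimal sketch of what such a hybrid might look like, in Python: transparent, doctrine-style rules fire first and carry an explicit rationale, while a learned model (here a trivial stand-in) covers situations the rule base does not. All rule conditions, thresholds, and names are invented for illustration and are not drawn from any fielded system.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    action: str
    rationale: str
    confidence: float

# "Expert" layer: hand-authored, doctrine-style rules with explicit rationale.
RULES: list[Callable[[dict], Optional[Recommendation]]] = [
    lambda s: Recommendation("displace_command_post", "EMS signature above threshold", 0.9)
    if s.get("ems_signature", 0.0) > 0.7 else None,
    lambda s: Recommendation("request_resupply", "fuel below planning minimum", 0.8)
    if s.get("avg_fuel", 1.0) < 0.25 else None,
]

def learned_model(situation: dict) -> Recommendation:
    # Stand-in for a trained machine learning model that would score the
    # situation's features; here it simply defaults to the current plan.
    return Recommendation("continue_mission", "no rule fired; model default", 0.5)

def recommend(situation: dict) -> Recommendation:
    # Rules fire first (transparent and auditable); the learned model handles
    # whatever the rule base does not cover.
    for rule in RULES:
        rec = rule(situation)
        if rec is not None:
            return rec
    return learned_model(situation)

print(recommend({"ems_signature": 0.85, "avg_fuel": 0.6}))
```

The design choice worth noting is that every rule-based recommendation carries a human-readable rationale, which is exactly the transparency the learned component struggles to provide on its own.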

While the Army is making progress with developing data as a resource, many of the inputs we need to aid commanders in decision-making are not available, electronically or otherwise—for example, data on the fuel status of individual vehicles that could help plan a mission. The manual effort needed to feed this data forward may not be worthwhile, but sensors and other unobtrusive methods of measurement may be able to provide it.

We also do not yet have a digital representation of doctrine to be used as a rule set. While its absence may keep AI decision aids from overly constraining their outputs, it may also let them generate recommendations that are simply unworkable from a tactical perspective. That back-and-forth translation between how a human solves a problem and how a computer does, and ultimately how they communicate cooperatively, requires more study to ensure that the human-machine team provides a better result than either alone.

Reasons for Optimism

There are reasons for optimism about the future of AI in mission command. For example, AI has become a dual-use technology, used by both military and commercial sectors. This means both that industry will spend more research and development funds to advance these technologies and that our soldiers will have already developed a certain level of trust that will carry over into the use of AI on the battlefield.

One particular area that benefits from status as a dual-use technology is virtual and augmented reality. These tools enable more immersive training and mission rehearsal experiences, enhance situational awareness with visual overlays, and allow for dispersed collaboration.

Using a Modular Open Systems Approach has long been a requirement for acquisition programs, but it had become more of a hand-waving exercise as each program focused on getting its own capability to the field. Now, developments in microservices architectures are giving the concept new life for software. Each microservice takes data from a data repository, performs a function, and outputs a result through an application interface. Self-contained microservices can be shared and combined to build new capabilities.
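As a concrete illustration of that pattern, here is a minimal sketch of one self-contained microservice, written in Python with the Flask web framework. The endpoint name, input fields, and stand-in scoring function are assumptions invented for illustration; they are not part of CPCE or any Army system.

```python
# pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/route-risk", methods=["POST"])
def route_risk():
    # Take data in (a route as a list of waypoints), perform one function
    # (score it), and return the result through an application interface
    # (JSON over HTTP).
    waypoints = request.get_json().get("waypoints", [])
    score = min(1.0, 0.1 * len(waypoints))  # stand-in for a real risk model
    return jsonify({"waypoints": len(waypoints), "risk_score": score})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the service owns exactly one function behind a stable interface, another service or a mission command client can compose it with others through simple HTTP calls, which is what makes the shared-and-combined vision practical.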

There are also now tools that allow soldiers to combine data sets and produce novel analytics without advanced programming skills. Such tools also build data fluency, giving soldiers greater transparency into how an AI agent arrived at a particular recommendation and improving trust in the result.

AI certainly has advantages in speed, but also in countering human biases, such as the egocentric bias that leads people to consider mainly the resources under their direct control. Thus, AI decision aids can be used to broaden the decision space for a commander, while still leaving the decision to the commander.

One promising approach is to eschew the pursuit of an all-encompassing AI solution and instead apply focused techniques to specific subsets of the mission command challenges. For example, the Army C5ISR Center has developed a genetic algorithm–based capability that works as a plug-in to CPCE and serves to optimize the timing of activities within the synchronization matrix.
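To show the flavor of such a focused technique, below is a toy genetic algorithm in Python that searches for start times for a handful of activities under a simple ordering constraint. The activities, planning horizon, and fitness function are invented for illustration and bear no relation to the actual C5ISR Center plug-in.

```python
import random

ACTIVITIES = ["recon", "breach", "assault", "consolidate"]
HORIZON = 24  # planning horizon in hours

def fitness(start_times):
    # Reward schedules whose activities start in order and finish early;
    # heavily penalize ordering violations.
    penalty = sum(max(0, a - b) for a, b in zip(start_times, start_times[1:]))
    return -(penalty * 10 + max(start_times))

def mutate(schedule):
    child = list(schedule)
    child[random.randrange(len(child))] = random.randrange(HORIZON)
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=50):
    pop = [[random.randrange(HORIZON) for _ in ACTIVITIES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # keep the fittest half
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

print(dict(zip(ACTIVITIES, evolve())))
```

The appeal of the approach for synchronization problems is that the fitness function can encode doctrinal constraints and commander's priorities explicitly, while the search itself requires no hand-coded planning logic.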

The last decade’s advancements in reinforcement learning techniques are already delivering successes on a number of business and industrial problems. Research by the Army Research Laboratory shows that, integrated with fast military simulations, reinforcement learning techniques can automatically learn how to fight. This is different from AI systems based on rules, logic, or game solving, which require expensive, laborious, manual coding of rules, permissible moves, and their effects. Self-learning, on the other hand, eliminates the bulk of those development and maintenance costs.
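For a sense of what learning from simulation means mechanically, here is a minimal tabular Q-learning sketch in Python. The one-dimensional "battlefield," reward values, and hyperparameters are toy assumptions, not ARL's simulations; the point is that the policy emerges from simulated trial and error rather than hand-coded rules.

```python
import random

N_STATES, GOAL = 10, 9
ACTIONS = [-1, +1]  # fall back, advance
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else -0.01  # small step cost, goal bonus
        # Standard Q-learning update toward the bootstrapped target.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy should be "advance" in every state.
print(["advance" if q[1] >= q[0] else "fall back" for q in Q])
```

Nothing in the script tells the agent that advancing is good; it discovers that from rewards alone, which is the property that lets such systems sidestep the manual coding of moves and effects.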

Similarly, DARPA is pursuing the use of recent advances in machine learning (among other approaches) in multiple programs, such as the Adapting Cross-domain Kill-webs (ACK) program, the Constructive Machine-learning Battles with Adversary Tactics (COMBAT) program, the System-of-Systems Enhanced Small Unit (SESU) program, and the Strategic Chaos Engine for Planning, Tactics, Experimentation, and Resiliency (SCEPTER) program.

Meanwhile, commercial products are maturing and demonstrating the robustness of AI techniques such as game solving. The Marine Corps has selected a game-solving system that originated in an earlier DARPA program (managed in collaboration with the Army) and is now becoming a centerpiece of the ambitious new Marine Corps Wargaming and Analysis Center.

The high speed and complexity of future multi-domain warfare make AI support in mission command necessary. The dramatic advances of AI in the last ten years, along with painful lessons learned earlier, make it feasible. The Army can succeed in bringing commanders and staffs fast, easy-to-use tools that will analyze, coordinate, recommend courses of action, and generate the necessary FRAGORD products, in seconds if needed, even as the staff is dispersed and on the move.

Industry should be engaged to seek near-term technologies. Insertion of available capabilities—even if partial—should be accelerated. Adoption of such capabilities will be successful only if supported by corresponding experimentation, leading to changes in training and in tactics, techniques, and procedures. Meanwhile, the Army science and technology community must continue to close knowledge gaps to enable the next iteration of capabilities of AI in mission command.

Thom Hawkins is a project officer for artificial intelligence and data strategy with the US Army Project Manager Mission Command. He has written articles and papers focusing on issues related to AI, data, and technology adoption.

Alexander Kott, PhD, is the chief scientist of the DEVCOM Army Research Laboratory, a component of the US Army Futures Command. Earlier he served as a program manager at DARPA. He has authored over one hundred technical papers, and has edited and coauthored twelve books.

The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense, or that of any organization the authors are affiliated with.

Image credit: Mike MacKenzie (adapted by MWI)