During an exercise in the California desert in October 2021, a special operations forces team hit the jackpot. Beneath the team's observation post, almost a hundred enemy vehicles were rolling through a refueling point. The team had eyes on target and fires on call. It should have been a decisive moment in the exercise, the kind of opportunity that so much modern doctrine strives to capitalize upon. Alas, it was not to be.
Unlike the decades-long wars following 9/11, in which NATO forces fought in small formations with few constraints imposed by enemy fires threatening supporting infrastructure and little interference from electronic warfare, this force-on-force exercise replicated a congested battlespace and a contested electromagnetic spectrum. With too few nodes in the force's communications network, saturated headquarters, and enemy jamming to contend with, the kill chain for the fire mission took four hours to complete. It killed some enemy logisticians, but the opportunity had long since passed.
Today, staff officers the world over are heralding the dawn of an interconnected battlefield in which data can move seamlessly between air, land, maritime, space, and cyber forces in real time. PowerPoint and CGI presentations promise commanders continual access to pervasive and perpetually relevant situational awareness. Senior officers lap it up because it is what they have always dreamed of. The ability to access the data from any battlefield sensor across a force and share it with the most appropriate shooter holds out the prospect of maximizing a force’s lethality and efficiency while denying the enemy the opportunity to achieve surprise.
But for the technicians trying to build these architectures and the soldiers, sailors, aviators, guardians, and marines trying to maintain and use them in the field, the gap between theory and practice remains wide—and risks becoming wider still. The problem is not that an interconnected battlefield is impossible, or that it isn’t advantageous. The problem is that so much of the conceptual bloviation on the subject evades any serious appreciation for the friction involved, which is all too often dismissed with the claim that artificial intelligence will function as a cure-all. By pursuing the goal of connecting everything, all of the time, military leaders are avoiding hard choices about what data to prioritize and who on the battlefield should have the most assured access to it under pressure. Policymakers need to start thinking harder about these decisions if the pursuit of connectivity is to bear fruit.
The Limits of Convergence
The conventional narrative of a connected battlefield is of an any-sensor-to-any-shooter network. In this vision of combat, data is transferred seamlessly between units, command posts have real-time situational awareness from every available source, artificial intelligence rapidly generates optimized courses of action, and the fog of war dissipates. To realize this vision, sensors from all of the military services must be connected, with data able to flow through any available path.
The ability to transfer data between units from different services, so that aircraft, ships, and armored vehicles can communicate with each other, will improve the effectiveness of the joint force. But there are limits to this vision of an interconnected command-and-control system spanning every domain of war. This is because of fundamental differences between the domains. Ships and aircraft tend to have access to a great deal of power and large directional antennas, and they operate in formations comprising a comparatively small number of nodes with which they must exchange data. What's more, they tend to operate within line of sight of one another. The result is that, through free-space optical links and other high-bandwidth transmissions, naval and air forces can transfer large volumes of data in real time.
These conditions do not pertain to land forces. Where a naval task force might comprise up to a dozen vessels, a division consists of thousands of vehicles, each of which is highly constrained in its available power and can generally only carry a small antenna without sacrificing mobility. Moreover, sensors and shooters will rarely be in line of sight of one another, so passing data around the force often requires transiting key bottlenecks in a network. Nodes that are elevated or dedicated transmitters with more energy will stand out in the electromagnetic spectrum and risk being targeted. There is therefore a practical and tactical emphasis on minimizing signature to maximize survivability.
Given these disparities, the seamless transmission of data between domains along any available route must have one of two consequences. Either it will restrict air and naval forces to sharing only data packets of a size that land forces can support, or air forces in particular will perpetually saturate the network available to the land forces beneath them. The first approach would massively restrict the performance of key naval and air systems, including cooperative engagement capabilities. The second would suppress land forces' access to their own communications.
There are examples of multi-domain networks that are often touted as proof that the concept works. Many are in Israel; others sit on testing sites in various NATO countries. The reports that emerge from these sites often describe single engagements in which one platform in one domain passes data to a platform in another. These tests suffer from two problems. First, many of the testing sites (and Israel) have access to a huge amount of fixed infrastructure, including the civilian internet, that bypasses military networks. The United States and its allies are unlikely to have access to such infrastructure in a future confrontation with a peer adversary. Second, single tests of multi-domain data transfer fail to replicate what happens when large formations are trying to utilize a network. Simply avoiding fratricide with one's own communications becomes difficult given the number of nodes trying to share data simultaneously, even without the effects of enemy jamming.
The Bandwidth Bottleneck
The challenge is getting harder, not easier. To be sure, bandwidth across military networks has steadily improved over the past three decades. The exact capacity of military systems is classified, but transfer rates have advanced generation by generation. Unfortunately, with the exception of some niche and not universally usable systems, the gap between available bandwidth and the data to be moved is widening as file sizes and the number of transmitters increase geometrically.
A high-resolution image likely comprises several megabytes of data. A multispectral image comprising electro-optical and thermal layers, radar overlays, and topographical information becomes orders of magnitude larger. Military sensors have massively improved in their fidelity over recent years. The result is that platforms now hoover up terabytes of information. Further exacerbating the pressure on networks is that as sophisticated sensors are added to more and more platforms there is also a higher volume of high-fidelity, multispectral data points, all competing for bandwidth.
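The arithmetic behind this bottleneck is easy to sketch. The Python snippet below uses purely illustrative numbers (the file sizes and link rates are assumptions for demonstration, not real system specifications) to show how a fused multispectral product that moves in seconds over a high-bandwidth air link can take the better part of an hour over a constrained tactical radio:

```python
# Back-of-the-envelope transfer-time arithmetic. All figures below are
# illustrative assumptions, not real platform or link specifications.

def transfer_time_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move size_mb megabytes over a link of link_mbps megabits/s."""
    return (size_mb * 8) / link_mbps

# A single-layer high-resolution image vs. a fused multispectral product.
single_image_mb = 5.0        # assumed: several megabytes
multispectral_mb = 500.0     # assumed: orders of magnitude larger

tactical_link_mbps = 2.0     # assumed: constrained land-forces radio link
air_link_mbps = 250.0        # assumed: high-bandwidth air/naval data link

print(f"Single image, tactical link: {transfer_time_seconds(single_image_mb, tactical_link_mbps):.0f} s")
print(f"Fused product, tactical link: {transfer_time_seconds(multispectral_mb, tactical_link_mbps) / 60:.0f} min")
print(f"Fused product, air link: {transfer_time_seconds(multispectral_mb, air_link_mbps):.0f} s")
```

Under these assumed figures the fused product clears the air link in sixteen seconds but occupies the tactical link for over half an hour, which is the disparity between domains described above.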
The Department of Defense has heralded space-based communications as a way of circumventing the constraints imposed by a lack of line of sight between units. The problem with space-based communications is that the infrastructure is exceedingly expensive, often visible to the enemy and therefore able to be suppressed, and in any case imposes significant delays on the network. Since most satellites move in orbits, they can only receive data while above a unit wishing to transmit and cannot then push the data down until they are above the desired receiving base station. Sharing data between satellites is possible, but every additional link in the network imposes more delays between transmission and receipt.
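The store-and-forward delay can be modeled crudely. In this sketch every orbital parameter is an assumed value: a single non-geostationary satellite must physically travel along its ground track from above the transmitting unit to above the receiving station before it can deliver the data it is carrying:

```python
# Simplified store-and-forward latency for a single non-geostationary
# satellite. The orbital period and positions are illustrative assumptions.

ORBIT_PERIOD_MIN = 95.0  # assumed low-Earth-orbit period, in minutes

def store_and_forward_delay_min(sender_deg: float, receiver_deg: float) -> float:
    """Minutes the satellite needs to travel from above the sender to above
    the receiver, measured as an angle along its orbital ground track."""
    arc = (receiver_deg - sender_deg) % 360.0
    return ORBIT_PERIOD_MIN * arc / 360.0

# Data collected over a unit at 10 degrees along the track cannot reach a
# base station at 190 degrees until the satellite arrives overhead:
# half an orbit of pure transit delay, before any relay hops are added.
print(f"{store_and_forward_delay_min(10.0, 190.0):.1f} minutes")
```

Each additional inter-satellite relay hop would add its own transit and processing delay on top of this baseline.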
As a result, there is no conceivable way in which all of this data can be accumulated in real time. Aircraft and other systems can plug in and download what they have captured upon landing, but as more and more data is generated it will take ever longer to sift and disseminate; the tempo of distribution will remain a long way from the promised panopticon. Indeed, the volume of data gathered vastly exceeds what the crews collecting it can monitor, meaning there is little effective means of identifying incidental detections and manually prioritizing their transfer to interested parties. Bandwidth constraints are not the only limit on networks; the sheer volume of data is saturating human capacity to monitor it, let alone analyze and understand what is being captured.
AI Is No Panacea
It is at this point that the phrase artificial intelligence inevitably enters the discussion. All too often it is with this ritual incantation that the discussion ends. Humans may not be able to work their way through the data, but the computer can, and by only selecting what is relevant the computer will thereby only transmit what is needed, alleviating the pressure on the network.
This is true, insofar as artificial intelligence, when integrated into the platforms (often described as being at the “edge” of the force to distinguish it from AI analyzing data in a central headquarters), will allow systems to identify specific kinds of return within the vast quantity of data they are collecting. The relevance of what these systems find, however, will depend entirely on what they have been programmed to look for, and so long as there is a constraint upon bandwidth there are only so many returns that a system can offload. The key question, therefore, becomes defining what is relevant.
The problem with the vision of a commander's data-driven panopticon is that it conveys the aspiration that commanders and analysts do not need to prioritize. Although mission data files can be updated periodically, the reality is that edge-based processing systems will deploy with a fixed set of mission data files. Those files will include the priority stack, the preprogrammed order in which information is transmitted, which in a system with limited available bandwidth will determine what gets through and what does not. Further mission data files at each node within a network will need to sort incoming data and prioritize what to pass on when there is too much to be transmitted immediately.
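In software terms, a priority stack behaves like a fixed transmit queue. The sketch below is a minimal illustration, with assumed detection categories, file sizes, and bandwidth budget (none drawn from real mission data); note how the lowest-priority detection becomes this cycle's blind spot:

```python
import heapq

# A minimal sketch of a preprogrammed priority stack governing what a
# bandwidth-constrained node transmits first. Categories, sizes, and the
# bandwidth budget are all illustrative assumptions.

PRIORITY_STACK = {            # lower number = transmitted first
    "ballistic_missile_track": 0,
    "enemy_radar_position": 1,
    "artillery_fires": 2,
    "vehicle_sighting": 3,
}

def plan_transmissions(detections, budget_mb):
    """Order detections by the priority stack and return (sent, deferred):
    what fits within this cycle's bandwidth budget, and what must wait."""
    heap = [(PRIORITY_STACK[kind], idx, kind, size)
            for idx, (kind, size) in enumerate(detections)]
    heapq.heapify(heap)
    sent, deferred = [], []
    remaining = budget_mb
    while heap:
        _, _, kind, size = heapq.heappop(heap)
        if size <= remaining:
            sent.append(kind)
            remaining -= size
        else:
            deferred.append(kind)
    return sent, deferred

detections = [
    ("vehicle_sighting", 4.0),
    ("ballistic_missile_track", 1.0),
    ("artillery_fires", 3.0),
    ("enemy_radar_position", 2.0),
]
sent, deferred = plan_transmissions(detections, budget_mb=6.0)
print("sent this cycle:", sent)
print("deferred (blind spot):", deferred)
```

With a six-megabyte budget the missile track, radar position, and artillery fires all get through, while the vehicle sighting waits: a blind spot that the commander chose, knowingly or not, when the stack was programmed.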
The building of priority stacks is therefore the fundamental prerequisite for moving the desired data quickly around the multi-domain battlespace. This requires commanders to determine what is important, when it is important, and to whom it is relevant. To understand this, it is necessary to understand how the force wants to fight, where it seeks advantage, and where it will accept vulnerability. Commanders need to understand the vulnerabilities generated by the priorities—and therefore blind spots—they have programmed into their systems and develop training, tactics, and procedures for how to mitigate these inbuilt risks.
Some priorities are easy. Ballistic missile track data is likely to be high on anyone’s list. But when analysts start to consider the trade-off between an F-35 prioritizing the transmission of detected artillery fires versus the position of an enemy tactical radar, they run into very different risks and rewards between the services, and questions regarding who is dependent upon the F-35 as opposed to the other assets at their disposal. The priority stack therefore drives where the force needs resilient or redundant deployed capability. It literally shapes force design.
If an interconnected battlefield is going to be realized, commanders must accept that while the aspiration is any sensor to any shooter, the reality in the field will always be some sensors to some shooters, some of the time. If commanders refuse to accept this, they will avoid the critical decisions that need to be made to deliver genuine advances in capability, and the interconnected battlefield will remain little more than a mirage.
Dr. Jack Watling is research fellow for land warfare at the Royal United Services Institute in London.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Staff Sgt. Clay Lancaster, US Air Force