Last month, David Sanger and William Broad’s article in the New York Times, “How U.S. Intelligence Agencies Underestimated North Korea,” ranked the failure to predict the recent breakout pace of North Korea’s nuclear program as “among America’s most significant intelligence failures.”

As two career military intelligence officers, we appreciate Sanger and Broad’s tough critique of US intelligence agencies, especially considering that one of us is a former student of Sanger’s. However, the tone and tenor that Sanger and Broad adopt are exaggerated and counterproductive to informing the public about the roles and capabilities of intelligence. More fundamentally, calling US intelligence on North Korea an “intelligence failure” is simply wrong.

Sanger and Broad imply that the public should evaluate intelligence as if it were capable of being all-knowing, which fails to acknowledge the inherent ambiguity of intelligence work. Furthermore, their indictment of the intelligence community lacks historical accountability and falls short of the burden of proof such a charge demands.

Dr. Mark Lowenthal, a former assistant director of the Central Intelligence Agency, suggests that true intelligence failure is when the intelligence community does not adequately explain to the public its roles and limitations. So as members of the intelligence community, we thought it appropriate to consider the question of intelligence failure and the North Korean threat.

Defining Intelligence Failure

Historical precedent must be considered when establishing a framework with which to judge intelligence failure. The strategic surprise of the attack on Pearl Harbor, the terror attacks on 9/11, and the weapons of mass destruction (WMD) assessment that purportedly contributed to the Iraq War—these are among the most notable examples from America’s recent history, and they share two fundamental attributes: negligence and consequence.

Negligence appears in several forms. It can be traced to intelligence omissions, such as the failure to properly report indicators of an imminent Japanese attack at Pearl Harbor in 1941. It can also mean getting the intelligence fundamentally wrong. Referring to the Iraq WMD failure, former CIA Director Michael Hayden remarked, “It was our intelligence estimates, we were wrong. It was a clean swing and a miss. It was our fault.” In either case, a significant intelligence failure requires getting it significantly wrong, not merely missing by a matter of degree.

Intelligence failures also have grave consequences. In the Pearl Harbor attack, 2,335 US service members were killed and much of the US Pacific Fleet was destroyed. The 9/11 attacks killed 2,977 people, and the ensuing war on terror imposed tremendous economic costs on the nation. In the case of the Iraq WMD assessments, the United States is still dealing with the consequences of what many consider a strategic blunder.

If we accept negligent analysis and grave national consequence as the criteria for judging intelligence failure, how do the intelligence community’s assessments on North Korea measure up?

Consequence is hard to fully assess in the midst of a crisis. Historically, however, significant intelligence failures have involved large-scale loss of life, loss of resources, or military defeat. As of today, none of these has occurred. And while some may argue that the surprising pace of North Korea’s nuclear and ballistic missile development has narrowed America’s strategic options, that still does not meet the threshold of “significant failure.” For that, one must show substantial harm to national interests.

With regard to negligence, Sanger and Broad’s argument is that the intelligence community knew the Kim regime had active nuclear weapons and ballistic missile programs, but grossly underestimated the time it would take the regime to attain a capability that could threaten the United States. While the assessments may have underestimated the pace of North Korean development, they correctly predicted the regime’s intent to build nuclear weapons and ballistic missiles; this can hardly be called negligence. Fundamentally, it is problematic for the public to expect intelligence to provide precise timing. That expectation reflects a popular but false assumption about the intelligence community: omniscience.

This distinction leads to an important point: as good as the US intelligence community may be, it is constrained by finite resources and competing priorities, like every other government agency. Prioritizing resources always introduces an element of risk. The risk of missing something does not in itself constitute failure; it is a core characteristic of collecting and analyzing intelligence. During much of the last fifteen years, intelligence resources shifted toward counterterrorism and the wars in Iraq and Afghanistan, reducing the funding and analytical hours spent examining North Korea.

Furthermore, even if additional intelligence resources had been devoted to North Korea, that would not have guaranteed greater precision in understanding the threat. In secretive states such as North Korea, the intelligence community may never have perfect vision no matter the resources dedicated. This is not a criticism of our intelligence collection but rather a realistic, and vital, acknowledgement of the challenges that some intelligence targets present. Intelligence analysis enables informed decisions by clearly articulating what we know, what we think, and, perhaps most importantly, what we don’t know.

Accepting That Intelligence Is Imperfect

More broadly, expectations of intelligence about North Korea have important implications for the strategy adopted. Rather than a strategy that relies on knowing the exact timing of North Korea’s weapons program, a better one accepts that we may never know these details with certainty.

Nassim Taleb, author of Antifragile: Things That Gain from Disorder, writes that any strategy requiring precise information is fragile, especially in environments marked by variability and uncertainty. Strategies that do not require perfect information, and that adapt flexibly to changing conditions, are far better suited to the opaque environment of North Korea. By contrast, any strategy that depends on precisely predicting North Korea’s next weapons breakthrough is inherently vulnerable and prone to failure.

An example of an antifragile strategy for North Korea would be to approach it from a traditional nuclear deterrence stance, much like the one the United States took with the Soviet Union throughout the Cold War. Such a strategy would not require the intelligence community to be precise; it would instead require the community to identify intent, and the willingness to act on that intent, in order to better inform policy and diplomacy.

Despite criticism, the intelligence community will continue to analyze our adversaries’ intentions and ask hard, introspective questions to improve its forecasts. The public should expect this. Nevertheless, observers should be careful about labeling unanswered or imprecisely answered questions as failures. That label must be reserved for truly negligent analyses that lead to grave consequences. To do otherwise sets false expectations for the public and glosses over the fundamental nature of intelligence analysis: that it is informed forecasting shaped by the prioritization of resources.

Dr. Lowenthal explains in his book Intelligence: From Secrets to Policy that, “the foremost goal of any intelligence community must be to keep track of threats, forces, events, and developments that are capable of endangering the nation’s existence.” Under Lowenthal’s definition, the intelligence community lived up to its mandate with respect to the North Korean nuclear program.

To assume that intelligence can get it perfectly right discounts the nature of the world we operate in. To disproportionately assign blame to intelligence ignores the constraints on resources and the strategies that underpin our intelligence mission. By understanding what intelligence can and cannot provide, our leaders are better equipped to deal with surprise and create strategies that are more adept, resilient, and antifragile.

William Denn and Kevin Ryan are career military intelligence officers. The opinions in this article are their own and do not represent those of the US government or Department of Defense.

Image credit: Tormod Sandtorv
