Author’s note: This is the fourth in a series of articles about the profession of arms. Over the series, I will chart the modern development of our profession in the nineteenth and twentieth centuries, examining that development through the lens of four themes that have driven and influenced it: events, technology, ideas, and institutions. I will then examine how change in the strategic environment will drive continued evolution in the profession of arms. Importantly, I will propose areas where we, as members of this profession, must lead change and ensure our military institutions remain effective—at every level—into the twenty-first century.

You can also read the previous article in the series here.

The Cold War strategic competition between the United States and the Soviet Union, and the massive buildup in the number of deployed nuclear weapons, drove changes in how members of the profession of arms thought about large-scale wars and how they interacted with national leaders and policymakers. Major interstate war now held the potential to result in conflagrations that could extinguish human civilization. Many civilian and military leaders in the early post–World War II era hoped or believed that new and advanced technologies would deter large-scale conflicts and deliver enduring strategic advantages. However, the protracted and indecisive war in Korea and the American experience in Vietnam disabused many of these utopians of the idea that technology would be a silver bullet against surprise challenges and the scourge of high-intensity wars. As Williamson Murray and MacGregor Knox write in The Dynamics of Military Revolution, 1300–2050, this was an old affliction. They describe how “Clausewitz had utter contempt for those of his contemporaries who suffered from similar delusions. . . . No technological marvels can alter war’s unpredictable nature as a ‘paradoxical trinity’ composed of ‘primordial violence,’ politics and chance.”

New technologies (examined in this series’ previous article) demanded new ideas to realize their potential. New institutions would also be required to wield high-tech weapon systems, to protect against the enemy use of these same weapons, and to train and educate military personnel. New ideas and institutions went hand in hand. For example, new approaches to strategy required new institutions such as the National Security Council to coordinate US strategy. The evolution of joint operations was as much about joint headquarters and staff as it was about theories of joint integration. Flawed initially, these institutions evolved over time to better integrate policy and better formulate strategy. The aim of this article, therefore, is to explore the new ideas and new institutions that emerged to maximize the impact of new technologies, and deal with Cold War events, in the decades after World War II.

While a series of different theories and structures emerged over the decades between the 1950s and the turn of the century, four stand above all others: new theories about strategy and their accompanying institutions; the ongoing development of joint concepts and organizations; the idea of revolutions in military affairs; and developments in the theory of military professionalism, including civil-military relations.

Strategy for the Digital Age

As the Digital Revolution matured after World War II, strategic theory and practice continued to evolve. The strategy adopted by the United States in the immediate postwar period was shaped by a 1946 telegram from George Kennan, an American foreign service officer serving in the US embassy in Moscow. Dispatched on February 22, 1946, the now famous eight-thousand-word telegram outlined the reasons for difficulties in the US-Soviet relationship. Kennan described Soviet intransigence and hostility as the outcome of internal pressures and the need to justify the dictatorship that ruled the Soviet empire. He summarized the Soviet approach thus: “We have here a political force committed fanatically to the belief that with US [sic] there can be no permanent modus vivendi, that it is desirable and necessary that the internal harmony of our society be disrupted, our traditional way of life be destroyed, the international authority of our state be broken, if Soviet power is to be secure.” Kennan also proposed that the Soviets would not change their strategy until they had experienced sufficient failure to convince them otherwise.

In a follow-up article in Foreign Affairs in 1947, Kennan wrote that “Soviet pressure against the free institutions of the Western world is something that can be contained by the adroit and vigilant application of counterforce at a series of constantly shifting geographical and political points. . . . This would of itself warrant the United States entering with reasonable confidence upon a policy of firm containment, designed to confront the Russians with unalterable counterforce.” These ideas, as well as alliance building, forward basing of US forces, and the buildup of military power, were subsequently included in United States Objectives and Programs for National Security, also known as NSC-68, in 1950. This new form of strategy—containment at every point where Soviet actions encroached on a stable world—would guide the conduct of the United States for the remainder of the Cold War.

Concurrent with these developments, strategy was also evolving to take account of the impact of nuclear weapons and their delivery systems. At first, when only a handful of atomic bombs existed, few beyond military planners thought in depth about the impact of these weapons. US policy embraced the notion that an American monopoly on atomic weapons would counter Soviet conventional superiority (particularly in Europe). US war plans such as Half Moon, developed in May 1948, emphasized an early atomic offensive to blunt enemy offensives while destroying economic targets to compel surrender.

In August 1949, the Soviets tested their own atomic weapon. The US monopoly on atomic bombs was over. It would take some time before the Soviets would be able to translate this into a deployable weapon system, but as Lawrence Freedman notes, “The eventual Soviet accumulation of such a stockpile was virtually inevitable. This development had a paradoxical effect. While it discouraged doctrines based upon atomic weapons as a uniquely American advantage, it also locked the United States into a nuclear strategy.”

The election of Dwight D. Eisenhower as president in 1952 resulted in another shift in nuclear strategy. Viewing nuclear weapons as an opportunity to reduce expenditure on large, standing conventional forces, the Eisenhower administration continued to invest in building its nuclear arsenal. In January 1954, US Secretary of State John Foster Dulles announced a new strategy of “massive retaliation.” Interpreted by some as the United States threatening nuclear attack in response to conventional aggression anywhere in the world, it was widely criticized. The new strategy did not fully consider potential Soviet strengths, and criticism grew in the wake of the launch of the Soviet satellite Sputnik in 1957 and the Soviet test of the world’s first intercontinental ballistic missile the same year.

Through the 1950s, the development of nuclear strategy shifted from an almost entirely military undertaking into the civilian policy and analysis arena. New analytical techniques developed at the RAND Corporation emerged, and analysts such as Albert Wohlstetter began to make significant contributions to the developing theory of nuclear strategy. In a 1958 paper called “The Delicate Balance of Terror,” Wohlstetter challenged the conventional notion that if both sides had nuclear weapons, there would be a stalemate (or deterrent effect) that would minimize the risk of thermonuclear war and ensure strategic stability. Instead, Wohlstetter found that “we must expect a vast increase in the weight of attack which the Soviets can deliver with little warning, and the growth of a significant Russian capability for an essentially warningless attack. As a result, strategic deterrence, while feasible, will be extremely difficult to achieve.” This undermined arguments for massive retaliation and the idea that a carefully planned surprise attack could easily be countered by the United States.

Eventually, theorists proposed the idea of a secure second-strike capability. This would address Wohlstetter’s concerns about the delicate balance. It was enabled by advances in missile range and accuracy and saw the deployment of nuclear triads (weapons deployed by aircraft, missiles, and submarines) in the United States and the Soviet Union. The acceptance of the need to develop this secure second-strike capability and continuing improvements in strategic reconnaissance and missile capabilities led to a new idea in the 1960s: mutually assured destruction.

The incoming administration of John F. Kennedy in 1961 embraced a strategy of flexible response, which would allow it to meet “military threats symmetrically rather than automatically escalating to the use of nuclear weapons.” At the same time, the administration was rethinking nuclear strategy, a process that received a massive stimulus from the Cuban Missile Crisis. As John Gaddis writes, “What kept the war from breaking out, in the fall of 1962, was the irrationality, on both sides, of sheer terror.” Both sides had sought to de-escalate nuclear tensions, with a realization dawning that any nuclear exchange would be a catastrophe for all sides.

The response was a strategy of assured destruction, codified by Secretary of Defense Robert McNamara in 1964. While initially described as assured retaliation, the term assured destruction, and eventually mutually assured destruction (MAD), came to underpin nuclear strategy throughout the remainder of the Cold War. In the wake of Cuba, both sides understood the costs, and the 1962 crisis was followed by a series of superpower agreements on nuclear weapons. These included the 1963 Limited Test Ban Treaty, the 1968 Nuclear Non-Proliferation Treaty, and the 1972 Strategic Arms Limitation Interim Agreement.

Assured destruction proved to be a remarkably durable concept. Despite being “like scorpions in a bottle,” the two superpowers were able to avoid any exchange of these destructive weapons, and the Cold War ended with more of a whimper than a bang. We are fortunate that it was so. To use the words of Thomas Schelling from his classic Arms and Influence, we are the beneficiaries of a “balance of prudence.”

But developments in strategic theory were not restricted to nuclear weapons, nor just to the United States. In the immediate aftermath of World War II, most countries raised and trained conventional forces as if nothing had changed in the wake of the invention of atomic bombs. As Michael Carver writes in Makers of Modern Strategy, the war’s victors expected future wars to feature “lengthy major campaigns on land, at sea, and in the air, conducted on the same lines as those they had experienced between 1941 and 1945.”

Initially, the United States and Britain planned to use tactical nuclear weapons to offset their shortfalls in conventional forces in any war with the Soviets in Europe. Strategist Basil Liddell Hart called this approach “nothing better than a despairing acceptance of suicide in the event of any major aggression.” Liddell Hart went further, describing how “the mutual possession of nuclear weapons tends to nullify the value of possessing them.” This was a view also encouraged by Maxwell Taylor in his 1960 book, The Uncertain Trumpet. Western strategy continued to evolve and embraced an approach where conventional warfare would be limited beneath the threshold at which nuclear weapons were employed. The Korean War demonstrated that the possession of nuclear weapons would not always determine the outcomes of future wars.

In France, military officer and strategist André Beaufre was also developing his ideas on strategies for Western nations to confront the Soviet Union. For Beaufre, strategy was “the art of the dialectic of two opposing wills using force to resolve their dispute.” In his 1963 book, An Introduction to Strategy, Beaufre proposed that the West pursue a “total strategy,” which would encompass every element of political, economic, military, and diplomatic endeavor. He further differentiated between his total strategy and something he termed “overall strategy,” which governed the conduct of war. Perhaps his most important idea, according to Michael Carver, was that “no one strategy is applicable to all situations: alternative strategies should be chosen according to the circumstances of the case.”

It would be impossible to cover here the full range of contributors to strategic theory in the post–World War II era. Military officers such as J.C. Wylie made significant contributions to the profession and to strategic thinking. At the same time, new generations of civilian strategists, including Bernard Brodie, Herman Kahn, Thomas Schelling, Colin Gray, Coral Bell, Lawrence Freedman, and Andrew Marshall, offered new and evolved theories on strategy in the nuclear age.

Two institutional developments arose from this expanded strategic thinking. The first was the continued establishment of war colleges in nations around the world. In a previous article in this series, we examined the establishment of the first war colleges during the interwar period. The experiences of World War II drove an explosion in the number of these institutions in the decades after 1945. In the 1940s, the United States, Canada, Brazil, France, Italy, Romania, Yugoslavia, and Norway established higher defense colleges. The following decades saw steady growth, so that by 1980 thirty-five nations possessed these centers of higher learning for senior military officers and civil servants. The number and quality of these colleges has continued to expand since. Not only have they filled an important role in preparing military officers for strategic command and leadership roles, but they have also assumed a central part in military diplomacy and international engagement and have built networks across several generations of senior officers from different nations. War colleges now comprise an essential step in developing members of the profession of arms.

A second institutional development in the post–World War II era was the establishment of the National Security Council in the United States in 1947, and of similar bodies in other nations. As Ivo Daalder and I.M. Destler write in In the Shadow of the Oval Office, the modern National Security Council dates back to the Eisenhower-Kennedy transition. While not strictly a military institution, the genesis of the National Security Council was the recognition by the president of the United States that the post–World War II era presented multiple strategic challenges demanding a more integrated consideration of national security issues, and advice to the president beyond the remit of the military staff at the Pentagon. As the National Security Act of 1947 notes, it was designed to coordinate “the activities of the National Military Establishment with other departments and agencies of the Government concerned with the national security.”

As Charles Stevenson notes, it was also a compromise. The new body balanced “advocates and opponents of a highly centralized military establishment, between supporters of a regularized process for interagency policymaking and defenders of Presidential prerogatives.” These tensions remain. And while each successive president has contributed his own changes to this advisory body, it has survived as an effective function of the US national security apparatus to this day. It has also been replicated in nations such as the United Kingdom and Australia.

The Rise of “Joint”

In some form, joint operations have been around for hundreds of years. Examples include the cooperation of General Ulysses S. Grant and Admiral David Dixon Porter during the 1863 Vicksburg Campaign, and the cooperation of the Australian Army and the Royal Australian Navy to capture German New Guinea in 1914. Based on their wartime experiences, including the Gallipoli disaster, the British formed the Chiefs of Staff Committee in 1923 to exercise corporate responsibility for the command and strategic direction of Britain’s armed forces and to provide advice to government. The United States also formed a joint board to coordinate war plans between the Army and the Navy.

During World War II, modern joint operations emerged and were honed through hard experience. But it took several years. Williamson Murray has written that “joint operations proved quite dubious in the early years for both the British and the Americans.” Operations against the Germans drove the need for better joint coordination. For the British, their failed campaign in France and the Low Countries, as well as the Dieppe disaster, presented them with no choice but to think seriously about interservice cooperation. The Americans encountered similar learning opportunities in their planning and execution of the landings in North Africa, Salerno, and Anzio, which drove institutional developments in joint cooperation and operations.

The formation of the Combined Chiefs of Staff by America and Britain in 1942 was an important development. The effective functioning of the Combined Chiefs of Staff required each nation to have a coordinated position prior to their meetings. The US response was the formation of its own Joint Chiefs of Staff, an equivalent of the British Chiefs of Staff Committee, in February 1942.

By the end of World War II, the military institutions of the United States and the United Kingdom had developed high levels of tactical and operational excellence, underpinned by joint collaboration. But it was the Americans who would make the largest investment in joint operations through the creation of a permanent joint staff for the postwar era. Building on the lessons of the just-concluded war, the 1947 National Security Act (sections 211 and 212) formally established the Joint Chiefs of Staff and the Joint Staff.

The pace of technological change in the 1950s, particularly in missile systems, was transforming the speed at which war might be conducted. The faster, longer-range missiles being developed and deployed by the Soviet Union meant that military forces, and their chain of command up to the president, needed to be more responsive.

In April 1958, President Eisenhower revealed a reorganization of the United States Department of Defense to allow its structure, command arrangements, and capabilities to better keep pace with the policy objectives of the United States in its competition with the Soviets. He was clear about the driver. “In recent years a revolution has been taking place in the techniques of war,” he told the US Congress. “Warning times are vanishing.” Because of this revolution, Eisenhower reasoned that the United States “cannot allow differing service viewpoints to determine the character of our defenses. . . . Our country’s security requirements must not be subordinated to outmoded or single-service concepts of war.”

During the 1960s, counterinsurgency operations and the Vietnam War drove adaptation in the conduct of joint operations, particularly in unconventional operations and support to indigenous forces. On the other side of the world, joint integration activities between the United States and its European allies occurred through the framework of NATO’s defense of Western Europe against the Warsaw Pact. Joint theory also evolved to include an operational level of war that sought to provide the intellectual (and organizational) link between military strategy and tactics. This operational level of war was borrowed from Soviet theory on operational art, developed in the interwar period and applied against the Germans during World War II.

The basics of operational art were, in theory, simple. Soviet theorists, led by senior officers such as Mikhail Tukhachevsky, rejected a focus on pursuing victory in war through the single decisive battle. This theoretical work resulted in a concept where achieving strategic objectives was only possible through the cumulative accomplishment of successive, orchestrated operations. This connective tissue between strategy and tactics was a new area of military science that the Soviets called operativnoe iskusstvo—operational art. While many of those who had a hand in developing this military theory were purged in the lead-up to World War II, the theory was included in Soviet military doctrine. Operational art had a significant influence on the conduct of Soviet forces’ operations during their advance into Germany.

In their desire to elevate thinking beyond the tactical realm of NATO defensive operations against the Warsaw Pact, the US military rediscovered Soviet operational art theory. Through the efforts of officers such as Don Holder and Huba Wass de Czege, operational art (and the operational level of war) was gradually encoded into the doctrine of the US services and that of its allies. It was a significant theoretical innovation in the development of joint theory, and continues to influence the command, control, training, and operating mindset of Western military institutions.

In the 1980s and 1990s, the theory and practice of joint operations continued to evolve. An important development during this period occurred in the United States: the Goldwater-Nichols Department of Defense Reorganization Act of October 1986, a significant reform of joint organizations that provided additional powers to the chairman of the Joint Chiefs of Staff and the geographic combatant commanders. Importantly, it also reinforced the role of the civilian secretary of defense, and it established new incentives for military personnel to serve in joint appointments during their careers.

Outside of the United States, other nations were also intensifying their reforms to become more joint. In the United Kingdom, a small team commenced work on the formation of a new permanent joint headquarters, and in April 1996 the Permanent Joint Headquarters was established at Northwood, northwest of London. Its mission, then as now, is to exercise operational command of joint and multinational operations.

By the late 1990s, military organizations around the world were coming to terms with the effects of globalization, new and evolved types of threats, and new technologies. These institutions were the beneficiaries of decades of development in joint operations theory and the establishment of joint organizations. Not only was this a more effective and integrated way to employ military power, but it would also provide an important foundation for the kinds of joint and coalition military activities that would be required in the wake of the September 11, 2001 terrorist attacks on the United States.

Revolutions in Military Affairs

In 1956, British historian Michael Roberts used the term “military revolution” in a lecture exploring Swedish military innovation between 1560 and 1660. Roberts argued that during this period a series of innovations in strategy, tactics, and sociopolitical institutions combined to produce a revolution in the waging of war.

In 1959, German historian Fritz Sternberg wrote of a revolution in military technique, driven mainly by atomic weapons. Sternberg also spent much of his book relating this military revolution to the nascent Third Industrial Revolution. This revolution, according to Sternberg, was likely to result in “a long armed truce, not only because [of] the enormous destructive power of the new weapons. . . but also because military strengths of the two world Powers [were] approaching parity.”

Sternberg also described how the potential for utter devastation in a world war meant that military establishments had to invest more in thinking through the implications of their doctrine well in advance, and had to possess the essential organizations and industry before war began. Nuclear war might unfold so rapidly that the starting capabilities of the protagonists could prove decisive; there would be no time for mobilization once a war began. Finally, and quite prophetically, Sternberg believed that nuclear parity would make “small wars” around the world more likely.

In the 1970s, work undertaken by the US military provided further intellectual foundations for what would later be known as RMAs. In the wake of the 1973 Arab-Israeli War, the US military undertook detailed studies of new technologies. The use of electronic warfare and antitank weapons, and the new lethality of infantry in particular, were of interest given the likelihood of large-scale armored warfare in any war against the Warsaw Pact.

At almost the same time, Soviet military leaders and theorists were examining the impact of these new technologies on warfare. From the late 1970s, Soviet theorists and academics wrote extensively about a military technological revolution. This was defined as occurring “when the application of new technologies into military systems combines with innovative operational concepts and organizational adaptation to alter fundamentally the character and conduct of military operations.” The Soviets reached the conclusion that the impact of information and long-range precision weapon systems would see quality, rather than quantity, become the most important aspect of military operations.

Soviet theorists hypothesized that this would revolutionize warfare. In the Soviet system, a key advocate for these studies on the impact of new technologies was Marshal Nikolai Ogarkov, chief of the General Staff from 1977 to 1984. The Soviet focus was a result of their “anxiety of watching a more technologically advanced United States develop new technologies, and move to incorporate them into new military systems (e.g., the U.S. Assault Breaker defense concept in the 1970s).” The capacity of the United States to exploit long-range precision weapons was a frightening prospect. As Murray and Knox have described, this threatened core elements of Soviet operational art, particularly the use of echeloned, mass armored formations.

The Soviets were right to be concerned. A series of strategic research and development investments from the 1970s onwards had resulted in the foundations of what was known as an “offset strategy” to counter the Soviets’ conventional superiority. As Robert Tomes describes: “Working closely with Service counterparts—and drawing on studies like the 1976 Defense Science Board summer study, and Joe Braddock’s analysis of Soviet weaknesses and how to defeat them—the research and development community benefited from insights into crucial operational requirements. Intelligence analysts informed the process from the beginning by tailoring their assessments to inform defense acquisition planning and Service doctrine. . . . By the mid-1980s, a combination of programs and initiatives that integrated operational concepts, doctrine, and a new ‘systems-of-systems’ approach cohered, leading to disruptive innovation.”

In 1991, Andrew Marshall, the head of the Pentagon’s Office of Net Assessment, commissioned a report on whether Soviet theorists were correct in their belief that technological advances would result in significant change to warfare. The resulting report noted: “A strong consensus exists that a military-technical revolution is underway. . . But while new technologies are the ultimate cause of a military-technical revolution, they are not themselves the revolution. The revolution is fully realized only when innovative operational concepts are perfected to exploit systems based on new technologies, and when organizations are created to execute the new operations effectively. . . We are probably in the early stages of a transition to a new era of warfare.”

In the years after the stunning US-led coalition victory over the Iraqi military in 1991, this was a compelling idea. Witnessing the achievements of stealth aircraft, precision weapons, computer connectivity, and rapid ground maneuver, many defense analysts proposed that military institutions were at the dawn of a new revolution in military affairs. The term, quickly made into the acronym RMA, gained rapid acceptance, at least in the United States and other Western nations. Military institutions, academia, and the defense industry explored historical cases of military revolutions in the hope that doing so might underpin a conceptual, organizational, or technological edge in future warfare.

The term RMA is now rarely used, having largely fallen out of favor. In a 2016 retrospective reviewing four decades of RMA hypothesizing, Jeffrey Collins and Andrew Futter noted that “the label of a revolution is too strong for the changes experienced over the past two decades. Perhaps the biggest reason for this is the inherently inward-looking and ethnocentric nature of the RMA concept—it was essentially based on an idealised type of war that militaries wanted to fight, and therefore focussed rather less on the enemy and how they might respond.” And military innovation expert Michael O’Hanlon, assessing changes in military operations over the past two decades, has written that “there has been a great deal of innovation since 2000, but it would be hard to describe most of it as revolutionary.”

In reality, the RMA debate—whether it was in the Soviet Union, Britain, the United States, or beyond—forced military institutions and members of the profession of arms to think hard about their hypotheses for successful future operations. The key idea that emerged, that technology alone is not a silver bullet and that military institutions must combine new concepts and new organizations with new technologies to generate new and disruptive capabilities, remains an influential part of the debate on the future of military institutions. And, as Williamson Murray has written in The Dynamics of Military Revolution, 1300–2050, RMAs are not a substitute for strategy.

Military Professionalism and Civil-Military Relations

The study of the profession of arms was rekindled in the wake of World War II. Two key elements of this were a redefinition of the profession driven by new technologies, and the examination of civil-military relations.

One of the most influential modern examinations of the military profession in this postwar era is Samuel Huntington’s The Soldier and the State. In a 2015 article, William Rapp described Huntington’s 1957 book as defining “civil-military relations for generations of military professionals.” Huntington identified what he believed were the central elements of the profession. His work provided an important characterization of the post–World War II profession of arms that has been adapted by different nations according to their national, strategic, and military cultures. Huntington named three core aspects of a profession in general, and of the profession of arms specifically: expertise, responsibility, and corporateness.

Huntington examined the centrality of expertise in the profession of arms. He wrote that “the skill of the officer is neither a craft . . . nor an art. . . . It is instead an extraordinarily complex intellectual skill requiring comprehensive study and training. . . . The management of violence is not a skill which can be mastered simply by learning existing techniques. It is in a continuous process of development.” This is a useful description of the demands placed on military leaders. It also highlights why future military leaders need to be attuned to change, be capable of adapting, and continuously update their knowledge.

Importantly, Huntington also described a theory of civil-military relations that addressed the interactions between political elites and senior military leaders. Huntington called this “objective control,” and it required clear divisions in responsibilities between civilian and military leaders. Civilian political leaders would retain control of policy decision making and partisan politics while respecting the military’s autonomy during war. Military leaders would focus on building expertise in the management of violence while concurrently respecting those responsibilities that were the preserve of civilian leaders. While this has been an influential model in American civil-military relations for many decades, it has recently been subject to reexamination. Scholars such as Risa Brooks and Eliot Cohen have offered critiques of the contemporary relevance of Huntington’s work. We will return to these critiques during our examination of twenty-first-century challenges later in this series.

Another important contributor to theories of the modern profession of arms was Morris Janowitz. He established the study of the profession of arms and society as a subfield within sociology and authored numerous studies and articles on the topic. His classic study, The Professional Soldier, published in 1960, remains a landmark both in defining the profession and in the study of civil-military relations. He also examined military professionalism through the lens of various models of political-military elites.

Janowitz also wrote that military activities, given their growing technical complexity, had passed from the domain of drafted citizens to become the preserve of highly trained professionals. War making in the future would rely on a highly professionalized and specialized occupation: the professional soldier. Because of the breadth and pace of technological change, professional military personnel would require more formal training and education to develop professional mastery, and large, temporary military organizations raised from society would become less relevant in national security affairs.

Janowitz even proposed that the profession of arms was undergoing a fundamental transition to what he called a constabulary model. The profession would increasingly resemble a police force, organizing and applying violence in tightly controlled and limited circumstances while retaining close links to the society it served.

On the other side of the Atlantic, the British were also studying the development of the profession in the nuclear age. One such scholar was the late Professor Martin Edmonds. An expert in defense studies, he published Armed Services and Society in 1988. In it, Edmonds explored whether the armed services comprised a professional body, as well as how they interacted with society and government. Edmonds proposed that military organizations were a profession, noting their organizational foundation on systemic theory, possession of a code of ethics, ability to reward and sanction members, and ongoing development of a professional culture. Edmonds also examined key elements of the profession, including organizational structures, the need for leadership, and “unlimited liability”—the obligation to respond to outside aggression even if doing so means injury or death.

Edmonds also explored civil-military relations. He wrote that the ultimate objective of civil-military relations was the harmonization of beliefs and values, agreement on security policy and the resources provided for national security, and the development of a consensus on the place of the military in society. Edmonds developed a model—the national security system—that had the government at its center, linking the effectiveness of military forces with the environments in which they operated. This system was predicated on trust. As Edmonds writes, “Fundamentally, it is a matter of trust between the public at large, the government, the security services and intelligence agencies.” The parallels here with Carl von Clausewitz’s work are clear.

Sir John Hackett was another influential British contributor in this field. In 1962, Hackett presented a series of lectures at Trinity College, Cambridge, that charted the development of the military profession. While there are histories and records of warfare going back millennia, Hackett noted in his lectures that it was not until the nineteenth century that true professionalism emerged.

In the 1980s, Hackett continued his examination of the profession of arms. Building on his Trinity College lectures, he wrote The Profession of Arms, proposing that the military profession needed to recognize the blurring of concepts such as “war” and “peace” and arguing for a more sophisticated understanding of how war-like capabilities might be applied across a spectrum of circumstances. Hackett also discussed the continuing inevitability of warfare and argued that in future military activities both leadership and management would be indispensable. He also provided brief comment on the continuing need to distinguish between officers and noncommissioned personnel.

Importantly, Hackett focused on developing military personnel as professionals. He advocated robust, progressive education, while appreciating the need for educational support for those on different career pathways. The ongoing intellectual development of officers as members of a profession was also of interest to Hackett. He made clear why when he wrote that “the social results of inadequacy in the management of violence in two world wars have already been enormous and remain incalculable. Since war became total, we have acquired weapons which in total war can destroy mankind. The penalty of inadequacy was high before. It could now be final.”

During this period, several other academics and senior military officers from the United States made important contributions to the development of the profession of arms in the late twentieth century. One of the more prominent was Charles Moskos. A sociologist by training, Moskos wrote on the composition of the United States armed forces, including areas such as race, gender, and sexual orientation. A key contribution by Moskos was his examination of the military as an institution rather than an occupation, and of the imperatives of institutionalism in military organizations. He wrote of an institutional model consisting of three elements: institutional leaders who are deeply involved in the organization and care about it; a clear vision, clearly articulated, of what the organization is about and how its distinct elements relate back to that core vision; and members who are values driven.

An Australian contribution was a 1980 article by Air Commodore Ray Funnell, “The Professional Military Officer in Australia.” Like his overseas counterparts, Funnell identified new technologies, strategic change, and organizational change as key drivers of change in the profession of arms. For Australia, the British withdrawal from east of Suez and the US withdrawal from Vietnam had resulted in a fundamentally different strategic environment. Higher defense organization and command arrangements had changed in the preceding years as a result of several reviews and the efforts of the senior civilian bureaucrat Arthur Tange. Finally, technological change was significant, with Funnell describing it as “a treadmill where, it seems, no matter how hard you try you never get ahead.”

Funnell focused largely on the civil-military aspects of the profession. He noted that “a combat orientation is primary and predominant but it is not an end in itself. . . . The profession must accommodate to political and organizational realities.” Importantly, he did not believe that Huntington’s approach, which he argued describes an “absolutist profession in which the military isolates itself professionally from society,” would meet the needs of the Australian profession of arms. Instead, he advocated a more pragmatic approach in which greater emphasis would be placed on the political and bureaucratic skills of senior military leaders so that their professional military advice would be given greater weight in government deliberations.

The work of Huntington, Janowitz, Moskos, Hackett, and others stands out in the theoretical examination of the profession in the twentieth century. This is not to say that the work of other scholars and military officers was not important or influential. But these four in particular laid the theoretical foundations for the profession of arms that influenced generations of military leaders in the second half of the twentieth century. However, as the Cold War ended and a new century dawned, new challenges would arise that demanded a new look at some of the core ideas associated with the profession of arms.

The theories of civil-military relations are an important element of our profession. At heart, the idea of civil preeminence shapes how military force is employed. This in turn influences the organization, resourcing, equipping, education, training, and support institutions of the different military services. The effective and trusted interplay between the military, the government, and the people—an idea first expressed in its modern form by Clausewitz—is fundamental to contemporary and future civil-military relations. It underpins the effectiveness and reputation of the profession of arms in all democratic societies.

This concludes our examination of the profession of arms in the twentieth century. Commencing with the whirlwind of technological changes of the Second Industrial Revolution and concluding with the proclaimed “end of history,” the profession of arms had a tumultuous century. Two world wars saw the development of total war, while wars of decolonization drew conventional military forces into forms of combat and influence with which they were often deeply uncomfortable.

The development of the twentieth-century profession of arms was driven by new and evolved technologies and by global events such as the world wars, the Cold War, and the challenges of the immediate post–Cold War era. New and evolved ideas about war and new institutions that emerged as a result have enriched the profession and provided it with the sure footing required to transition to the operations spawned by the events of 9/11.

The developments of the post–World War II era produced a better-trained, better-educated, and more technologically sophisticated profession. The advent of highly destructive nuclear weapons drove the need for a new and evolved approach to civil-military relations, and a strengthening (at least in Western nations) of civilian control of the military. These developments shaped the profession that would fight and compete in the first two decades of the 2000s. The next article begins our examination of the perils and opportunities that await the profession of arms as it navigates the potential disruptions of the twenty-first century.

Maj. Gen. Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University School of Advanced International Studies and the USMC Command and Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. He has commanded at platoon, squadron, regiment, task force, and brigade level, and is a science fiction fan, a cricket tragic, a terrible gardener, and an aspiring writer. In January 2018, he assumed command of the Australian Defence College in Canberra, Australia. He is an adjunct scholar at the Modern War Institute, and tweets under the handle @WarInTheFuture.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Mass Communication Specialist 1st Class Ronald Gutridge, US Navy