The After Action Review. Like morning PT and rendering salutes, the AAR has become just something we do. And for good reason. It has served its purpose remarkably well—praised as “arguably one of the most successful organizational learning methods yet devised” by Peter Senge, author of The Fifth Discipline: The Art and Practice of the Learning Organization. But is it optimized for the future battlefield?

While AARs use event-based feedback for evolutionary performance improvement, they are not designed for the discovery learning and experimentation central to multi-domain operations. Overcoming this limitation and harnessing tactical innovation requires creating new tools that complement the incremental performance benefits of AARs with practices from data science and a framework called human-centered design.

Employing the right learning strategy requires first mapping problems along a continuum—something that reflects the concept of “known knowns,” “known unknowns,” and “unknown unknowns” that Donald Rumsfeld famously articulated. While AARs are ideally suited to the “known known” domain of established challenges and solutions, they are fundamentally unable to address the ambiguity of the “unknown unknown” domain surrounding emerging warfare concepts. This transition from addressing the “known known” domain to effectively engaging with the “unknown unknown” domain is described by Greg Galle of Solve Next as moving from the “predictable path”—with its consistent outcomes, “right” answers, and linear chains of causation—to the “bold path” and the unprecedented frontiers of innovation.

AARs cannot make this transition in their current format because, when confronted with ambiguity, they ask the wrong people the wrong questions in pursuit of the wrong outcomes.

The wrong question: “What should have happened?”

When AARs were introduced after Vietnam, their use of ground-level feedback was transformational. They enhanced training quality and led to objective evaluation benchmarks like the “task, conditions, and standards” format. This success led to widespread adoption, starting with combat training centers and then across the Army as a primary driver of organizational learning.

This mass adoption, however, bred overreliance on a tool designed for the “false realities” of training environments instead of the operational complexity of the real world, as explored by Maj. Ben E. Zweibelson. This reality gap keeps widening as the pace and technological complexity of modern battlefields increase.

The military has continually acknowledged changing operational environments—from the late-1990s VUCA acronym (volatility, uncertainty, complexity, and ambiguity) to the more recent idea of wicked problems with “no definable problem statement, no objectively correct answer, and layers of uncertainty and unpredictability.” Yet, our learning tools lag decades behind.

The jump from learning in training environments to complex operational challenges requires identifying and examining a system’s underlying assumptions. This shift can be understood as moving from single-loop learning (assessing whether an outcome is achieved) and double-loop learning (evaluating the validity of the metric) to triple-loop learning (evaluating the metacognitive processes that develop our systems and asking whether we have the right targets).

Central to triple-loop learning is challenging fundamental assumptions and unveiling cognitive biases, which requires the use of open-ended questions like “how might we” execute a task or conduct an operation. This line of inquiry prizes divergent thought, stimulates experimentation, and identifies blind spots by focusing on first principles. This is not feasible with current AARs.

These metacognitive challenges aside, AARs have become underleveraged vestiges of innovation in the eyes of most military leaders. They are less a source of inspiration than final scripted box checks in the eight-step training model. Making matters worse, their inability to drive discovery learning has triggered ever-decreasing subordinate buy-in, resulting in a culture more likely to accept the status quo than challenge norms.

The wrong people: “What did you learn?”

Outside of combat training centers and some regional training institutes with dedicated observer coach/trainers, traditional AAR audience polling and information collection methods are insufficient to identify the areas requiring the most organizational attention and energy. This shortcoming is especially common in AARs at the battalion level and below, which often descend into shallow, formal procedures lacking engaged dialogue or intellectual discourse.

When debate occurs, it often prioritizes key-leader inputs and misses the most valuable input of all—user feedback from the primary training audience. For example, AARs at the division and corps levels often focus on senior-leader dialogue despite much of the work being done by subordinate staffs and command teams who may not be present. In cases where subordinate staffs do conduct AARs, they are often informal and not shared with the rest of the organization (often due to knowledge-management challenges). This lack of sharing amplifies gaps in awareness and shortcomings in collective learning.

Although leader-centric AARs offer forums for decision makers to publicly issue guidance, they risk awareness gaps since leaders rarely participate in every part of training given competing requirements. While leaders can partially mitigate this shortcoming through battlefield circulation, these efforts are generally ad hoc, informal, and rarely capture sufficiently diverse feedback to form a representative sample of all stakeholders.

These shortcomings are compounded by time constraints, as AARs are often executed amid competing leadership requirements, limiting the context available to inform effective decision making. Strategies to gather further context during AARs, like guiding discussion with subordinate-generated talking points, still encounter time constraints. The outcome is an environment where the people who know what needs to change are not asked, and those who are asked do not know what to change.

The wrong outcomes: “What should we have done differently?”

Another AAR shortcoming is associating successful execution with outcomes like definitive guidance from key leaders on how to “fix” issues. Expecting key leaders to issue guidance is dangerous in experimental learning environments for precisely the reason it is powerful in familiar contexts: their responses are experience based.

According to Gary Klein’s research, experienced leaders primarily employ recognition-primed decision making (RPD), in which mental models are generated rapidly and intuitively from prior experience and micro-adjusted to current requirements. Because RPD leverages prior experience, unprecedented threats exploit key leaders’ cognitive vulnerabilities, often resulting in ineffectual courses of action. The Maginot Line is a classic example of learning the wrong lesson from key-leader experience: built from the lessons of the First World War at enormous cost to the French government, it was simply bypassed by the German invasion in 1940.

Decision-making challenges aside, our institutional knowledge-management practices often conflate organizational learning with filling vast and soon-forgotten shared-drive folders with AAR documents. While units may have terabytes of “lessons observed,” the process of turning these challenges into refined solutions is often ad hoc, preventing them from becoming “lessons learned.” While individual leaders may remember these experiences, their knowledge is lost during personnel turnover, creating endless cycles of underperformance.

Innovative learning will require the Army to embrace new, fundamental principles.

1: Focus on outcomes, not outputs. The best private-sector product-development teams have shifted from obsessing over output features (like a computer’s size) toward a focus on outcome benefits. They prioritize value creation for the consumer instead of incremental performance upgrades. This mindset focuses leaders on delivering value for users (soldiers) and measures success with key performance indicators tied to training effect, instead of relying on measures of performance (number of ranges conducted) or omnipresent yet superficial commander’s intent statements that lack the specificity to gauge achievement.

2: Leverage data analytics. Since military leaders rely on pattern recognition in decision making, enhancing their situational awareness is an imperative. Free survey tools like SurveyMonkey or Google Forms can collect and quantify feedback to reveal trends and extract insights in real time at no cost—granting unprecedented levels of understanding and accelerating operational tempo.
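As a minimal sketch of what this could look like in practice, the snippet below (Python with the pandas library) aggregates survey responses exported to a CSV file. The file name and column names ("company," "task," "confidence") are illustrative assumptions, since every survey tool's export format differs.

```python
# Minimal sketch: aggregate exported survey results to surface training trends.
# The file name and column names are illustrative assumptions.
import pandas as pd

responses = pd.read_csv("responses.csv")  # hypothetical export from a survey tool

# Average self-assessed confidence (1-5 scale) per task, lowest first: a quick
# signal of where the organization should focus its next training cycle.
by_task = (
    responses.groupby("task")["confidence"]
    .agg(["mean", "count"])
    .sort_values("mean")
)
print(by_task.head(10))

# Flag companies whose average falls well below the unit-wide mean.
unit_mean = responses["confidence"].mean()
by_company = responses.groupby("company")["confidence"].mean()
print(by_company[by_company < unit_mean - 0.5])
```

Run after every training event, a simple script like this turns raw responses into a ranked list of focus areas before the next planning cycle begins.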

At a deeper level, the Army must cultivate leaders with the confidence to understand and leverage data in the context of multi-domain operations—especially given the expanding role of artificial intelligence. While the Army’s combat training centers and deployment archives offer vast data sources, the overwhelming majority exists as dark data—unstructured data that cannot generate insights in its current form and is consequently underleveraged. Training digital literacy and critical-thinking skills during professional military education is a vital and immediate step in building a force capable of dominating future rivals.

3: Embrace crowdsourcing. The power of collective intelligence, popularized in works like The Wisdom of Crowds, has been underleveraged by the Army. Strategies for the behavioral and structural changes needed to create a soldier-centric innovation-sourcing funnel can be found in case studies like Turn the Ship Around!, from the Navy’s nuclear submarine program.

An immediately actionable idea from that case study is fostering ownership by giving subordinates control over organizational direction, which Army commanders can implement by forming innovation teams. These teams could focus on key commander priorities and become clusters of excellence by identifying existing (and often underutilized) experts within formations. This would allow highly capable subordinates to influence decisions and shape implementation—bringing the right people into the discussion and pacing organizational progress off the speed of its top talent instead of bottlenecking on the experience of its leaders.

4: Create a culture of decentralized experimentation instead of compliance. While the Army has embraced mission command in tactical operations, leaders often slow experiments by requiring approval for each phase of testing.

A more powerful paradigm focuses on eliminating barriers for subordinates instead of adding layers of approval—accepting that risk is the cost of opportunity. Relinquishing control should not be done blindly. Leaders must learn to craft initial guidance clearly enough to empower decentralized experimentation, so commanders can focus on synchronizing lines of effort.

Even with decentralized guidance in place, leaders must create a culture where failure does not trigger micromanagement. A Google study found the company’s highest-performing teams also had the highest levels of psychological safety, the “belief that you won’t be punished when you make a mistake . . . [or for] speaking your mind.” Further research from Harvard showed leaders can create these environments through deliberate behavioral changes—for example, personally admitting mistakes or consistently holding learning conversations that encourage subordinates to share their failures. A guiding principle in these command climates is not to mistake disagreement for disloyalty, since doing so triggers risk aversion and bureaucratic stagnation.

5: Embrace hackathons, design sprints, and design blitzes. Hackathons, popularized in software development, bring diverse stakeholders and experts together to develop solutions in a compressed time period. While the military has used hackathons at the enterprise level, like SOFWERX events and Hack the Marine Corps, the strategy can be applied to challenges at the installation or unit level to solve problems and develop leadership teams.

As solution-generation tools, hackathons uniquely combine the power of crowdsourcing with users (soldiers), leaders (hackathon sponsors and problem owners), and experts on an accelerated timeline. As team-development tools, hackathons introduce controlled chaos to train collaboration, communication, and design-thinking skills while racing against deadlines. Hackathons can also amplify community outreach by fostering civil-military partnerships—especially with local universities.

Army Futures Command, building on the efforts of commanders like Col. John Cogbill of the 3rd Brigade, 101st Airborne Division, has eliminated coordination hurdles by developing Education Partnership Agreements, which enable decentralized military-academic partnerships. These will eventually “extend across other universities and other units to help with emergent, disruptive technologies.”

At a lower level, leaders can employ “design sprints,” five-day processes developed by Google Ventures to rapidly and systematically vet concepts during a standard work week: Monday, understand (curate the problem); Tuesday, diverge (brainstorm solutions); Wednesday, decide (converge on one hypothesis); Thursday, prototype (develop a test plan); Friday, validate (gather user feedback to determine success).

Even shorter than design sprints are design blitzes, popularized by Solve Next: a series of design-thinking drills sequenced around specific outcome goals and time periods (some as short as a few hours). These tools give leaders the structure required to use blitzes at scale while still enabling the divergent thinking and nontraditional problem solving that make these approaches so powerful.

6: Leverage innovative curriculum development systems. While the Army design methodology is taught at the Command and General Staff College, it rarely permeates to the tactical level. An accessible way to mitigate that shortcoming is to import the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation).

ADDIE enhances outcome quality (defined with key performance indicators by the commander at the onset of training) via hypothesis-based training that makes the best use of training resources and time, pivoting from executing training for the sake of checking blocks towards achieving clearly defined outcomes. In an Army training context, it might look like the following (a short illustrative sketch follows the list):

  • Analysis – Create awareness of current performance using doctrine, older AARs, and soldier interviews.
  • Design – Systematically determine training outcome goals, evaluation metrics, and fundamental assumptions underlying training structure (ideal for issuing refined commander’s intent).
  • Development – Solidify concepts via rapid testing cycles with little bets (mini-experiments checking assumptions), pre-mortems that proactively hunt for causes of failure by identifying in advance what can go wrong, and user-feedback strategies from the Agile Manifesto.
  • Implementation – Train instructors (like the eight-step training model’s “Train the Trainer”).
  • Evaluation – Collect feedback for subsequent iterations.
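To make the pattern concrete, here is a minimal sketch of how a unit might capture one hypothesis-based training iteration as structured data. The class name, fields, and example values are hypothetical illustrations for this article, not doctrine or an existing Army system.

```python
# Minimal sketch: one hypothesis-based training iteration in the ADDIE pattern.
# Field names and example values are illustrative assumptions, not doctrine.
from dataclasses import dataclass, field

@dataclass
class TrainingIteration:
    task: str
    hypothesis: str               # Design: what we believe will improve the outcome
    kpis: dict                    # Design: commander-defined targets
    assumptions: list = field(default_factory=list)  # feed little bets and pre-mortems
    results: dict = field(default_factory=dict)      # Evaluation: measured outcomes

    def unmet_kpis(self):
        """Evaluation: list KPIs whose measured result missed the target."""
        return [k for k, target in self.kpis.items()
                if self.results.get(k, 0.0) < target]

# Example: a squad live-fire iteration.
iteration = TrainingIteration(
    task="squad attack",
    hypothesis="Terrain-model rehearsals raise first-run GO rates",
    kpis={"first_run_go_rate": 0.80},
    assumptions=["OPFOR is available", "range time is not cut"],
)
iteration.results["first_run_go_rate"] = 0.65
print(iteration.unmet_kpis())  # unmet KPIs feed the next Analysis phase
```

Captured this way, each iteration’s hypothesis, assumptions, and results remain searchable inputs for the next Analysis phase instead of evaporating with personnel turnover.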

Crucially, integrating ADDIE (or other systems) cannot become an excuse for lethargy and bureaucratic delays; instead, the disciplined speed of design sprints and blitzes must be leveraged for high output over short time periods.

7: Make knowledge management strategies self-sustaining. Since knowledge is expensive to generate and wasted if it is not captured and employed, leaders must create systems whose simplicity incentivizes use by saving their teams time. Ray Dalio’s Principles is a case study in exceptional knowledge management: he developed the civilian equivalent of battle drills from his team’s experiences, simplifying future decisions and creating enormous competitive advantage through speed.

Since standard operating procedures for most units are unclassified, storing them on tools like Google Drive enables real-time group access, updates, and keyword search—supporting a Kaizen-inspired approach to continuous refinement and rapid onboarding during leader transitions. This would prevent SOPs from becoming lost in shared drives and losing their relevance.
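Where a shared drive’s built-in search falls short, even a few lines of code can approximate keyword search over exported SOPs. The sketch below assumes a hypothetical folder of SOPs saved as plain-text files; the folder name and query string are illustrative.

```python
# Minimal sketch: keyword search across a folder of SOPs exported as plain
# text. The folder name, file layout, and query are illustrative assumptions.
from pathlib import Path

def search_sops(folder: str, keyword: str) -> None:
    """Print every SOP line containing the keyword, with file and line number."""
    for path in sorted(Path(folder).glob("*.txt")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if keyword.lower() in line.lower():
                print(f"{path.name}:{lineno}: {line.strip()}")

search_sops("sops", "range safety")
```

The point is not the particular tool but the principle: if finding the relevant SOP paragraph takes seconds rather than a shared-drive expedition, the system saves time and sustains itself.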

The need to challenge how we learn has become an imperative, underscored by military intellectuals like retired Gen. Jim Mattis. While he famously demanded his subordinate leaders study history in a viral email, emerging technologies led him to admit that “I’m certainly questioning my original premise that the fundamental nature of war will not change. You’ve got to question that now.”

While critics can argue they lack the time for aggressive and innovative tactical learning, Gen. Mattis reminds us that apathy is inexcusable “in our line of work where the consequences of incompetence are so final for young men. . . . ‘Winging it’ and filling body bags as we sort out what works reminds us of the moral dictates and the cost of incompetence in our profession.”

As military leaders engage with modernization challenges, we must design appropriate learning frameworks, both procedurally and culturally. While the procedural frameworks can be imported from innovative civilian industries, they must be complemented by cultural initiatives.

One strategy is exposing the Army’s junior leaders to civilian-sector innovation techniques. For example, the Army could train disruptive thinkers to use innovation strategies during professional military education by assigning them as problem sponsors in the Hacking for Defense program. Army students could use problems confronting their branch and associated Army Futures Command cross-functional teams to drive learning, while gaining experience applying design-thinking and lean-startup methods.

Taking these steps would help close capability gaps while also enriching our intellectual capital and fostering cultural change as students take these solutions to future assignments. It would also provide the Army with trained disruptive thinkers who embrace discovery learning and can diverge from the status quo. Most importantly, it would position the Army as a force prepared for the challenges of tomorrow’s battlefield.


Capt. James Long is an Army infantry officer, MD5 Innovation Fellow, and experienced tactical innovator. He currently serves as an operations officer with United Nations Command Security Battalion–Joint Security Area. The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.


Image credit: Sgt. 1st Class Alexander Burnett, US Army