Abstract
Stop wasting training time: two practices will improve your team’s performance. The measurement of learning, planned before and conducted during the activity, is critical to understanding and improving performance during training activities. Implementing change from experience is hard. This paper argues that two issues are critical and not well addressed: the lack of detailed measurement before the activity starts, and the motivation to enact identified changes. It details why each matters and offers practical solutions to remedy both.
INTRODUCTION
The Australian Army conducts hundreds of After Action Reviews (AAR) each year across a myriad of activities, teams, and functions. Many exercise AARs appear to follow the cultural norm of conducting the AAR process: admiring the problems, identifying solutions, and then continuing with business as usual rather than adapting based upon those experiences. While following the doctrinal ER2TA (Engage, Review, Reflect, Transition, Act) process[1] will support team learning after experiences, a review of our experiences at the Combat Training Centre (CTC) has identified two key areas for ensuring team feedback activity success: ‘measurement of performance’ and ‘motivation to change’.
As a central part of the CTC experiential learning and performance improvement process, we’ve spent several years closely observing, reviewing, researching, assessing, and discussing AARs. To reduce the expectation confusion associated with AARs, at CTC, we’ve introduced a standard team and individual feedback process called the Team Reflective Activity (TRA), using the ER2TA process. This feedback activity is conducted in support of the commander’s focus and without the process limiting the topics addressed or the techniques utilised within the feedback activity itself.
ISSUES
The language of the training world has often not kept pace, conceptually or in practice, with developments in knowledge and understanding. One lens through which a commander or professional training practitioner can view this space is the descriptions of training terms in Table 1.
Table 1 – Training Terms

| TERM | FOCUS |
|---|---|
| TRAINING | The environment and setting, created to facilitate learning |
| LEARNING | What individuals absorb, and can demonstrate |
| CAPABILITY | Consistent team improvement (measured) |
Individual training sits clearly with the individual soldier, as described in the Personnel Fundamental Input to Capability (FIC)[2], whereas Collective Training and/or preparedness stands as an independent FIC. From an Advanced Collective Training (ACT) perspective, two key issues limit performance improvement: ‘measurement of performance’ and ‘motivation to change’.
Performance Measurement.
Measurement is critical to improving performance. There are myriad ways to measure, assess, or evaluate team performance outcomes; they can be qualitative or quantitative, and can be conducted internally or externally by outside people or systems. Honest self-assessment is critical in all training. Key measurements include the current level of team performance, the intended level of improvement, and a final assessment after the training intervention. Whichever measures you choose, make sure you are measuring the important thing, not what is easy or convenient to measure.
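The three measurements named above (current level, intended improvement, final assessment) can be recorded very simply. The sketch below, in Python, is purely illustrative: the task name and the 1-10 ratings are hypothetical, not CTC data or tooling.

```python
# Illustrative sketch: record a baseline, an intended improvement target,
# and a final post-intervention assessment for a collective task.
# Task name and 1-10 ratings are hypothetical examples.

baseline = {"breach drill": 4}   # current level of team performance
target = {"breach drill": 7}     # intended improvement level
final = {"breach drill": 6}      # final assessment after the intervention

for task in baseline:
    met = final[task] >= target[task]
    print(f"{task}: baseline {baseline[task]}, target {target[task]}, "
          f"final {final[task]} -> target {'met' if met else 'not met'}")
```

Even this crude record forces the team to state, in advance, what success looks like, which is the point.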
The adoption of the doctrinal Army Collective Training Management Framework[3] has helped Army improve its performance measurement system. However, while it is useful for broad planning and assurance requirements, it remains a blunt instrument for actively improving team performance. Focussing simply on selecting tactical actions without appropriate normalised conditions and standards limits the opportunity to develop high-performing teams. Having tactical actions is important to support the design of training activities but they are, at best, an exemplar of key performance criteria. It is unlikely that the first tactical task you will undertake on operations will be the one you spent so much time on while on exercise. What that prior, or pre-operational training did was help you work as a team, to an end state, in a confusing, uncertain, challenging and threatening environment. Confirming the conditions and standards under which the task will be conducted during operations and how you are replicating them in training are important.
Measuring performance in collective training environments where processes and outcomes are uncertain can appear daunting. You could start with the two key functions of PLANNING and CONDUCT[4]: ask yourselves, ‘How well did we do?’, in those two areas. To improve focus and reduce useless measurement activities, it is worth thinking about the key areas of friction in the tactical tasks you are undertaking and then measuring these frictions. Self-assessment remains critical, and by determining key measurements in advance, you will be able to more effectively confirm how you performed. As you get better at designing and developing training activities, you will be able to become more focussed on what and how you will measure performance.
At CTC, as part of the exercise Team Reflection Activity program of feedback events, we’ve introduced the Before Action Review (BAR) to help focus teams at the start of advanced collective training exercises. This has proved useful in maximising opportunities at the start of an exercise and in reinforcing the utility of self-assessment.
To help you focus on measurement, consider the possible options below, across the range of tactical tasks.
In PLANNING, how effectively did we:

- Apply the DMPP?
- Issue key WNGO?
- Integrate deception?
- Target orders to the task?
- Leave adequate time for ‘targeted’ rehearsals?

In CONDUCT, how effectively did we:

- Follow and adapt the plan as required?
- Optimise our resources?
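Questions like those above can be captured as a simple self-assessment form and averaged by function. The Python sketch below is a minimal illustration; the 1-10 scale, the example scores, and the function name are assumptions, not an official CTC tool.

```python
# Minimal sketch of a self-assessment record for PLANNING questions.
# The 1-10 scale, example scores, and helper name are illustrative
# assumptions, not CTC tooling.

PLANNING_QUESTIONS = [
    "Apply the DMPP?",
    "Issue key WNGO?",
    "Integrate deception?",
    "Target orders to the task?",
    "Leave adequate time for 'targeted' rehearsals?",
]

def summarise(scores: dict) -> float:
    """Average a question -> score (1-10) mapping."""
    return sum(scores.values()) / len(scores)

# Example: the team rates each planning question 6 out of 10.
planning_scores = {q: 6 for q in PLANNING_QUESTIONS}
print(f"Planning average: {summarise(planning_scores):.1f}")  # Planning average: 6.0
```

The same structure applies to the CONDUCT questions; the value is less in the arithmetic than in having the team score itself against the same questions before and after the activity.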
Enact Changes.
The TRAs conducted during CTC exercises strive to support self-assessment. With guidance, teams generally remain focussed on what is in the control of the participants to achieve immediately after the TRA, before the ‘war’ starts up again. As a result, CTC has observed numerous effective changes to behaviours within exercises.
The biggest failure in learning from training experiences is the inability to enact the identified changes, based upon the TRA fixes, upon return to the barracks environment. The final TRA is arguably the least useful of the feedback sessions in advanced collective training, but is still seen by immature training designers as the most important for collective performance improvement. It was also identified in research as the one where participants are least engaged. The remedial actions (FIXES) identified in the final AAR/TRA generally relate to some form of rewrite of SOPs, which are rarely completed and, if they are written, are not practiced until months later when the new team arrives in the following year. Based upon our observations over time, the implementation of collective training learnings after larger exercises appears to be limited and inconsistent.
CTC identified this challenge and planned to support training partners by formally helping them review their learnings four to six weeks after exercises. To date, this remediation support has been limited in execution. Reviewing and repetition can be quite difficult in the complex environment of military training – nevertheless, it is essential in becoming better.
SOLUTIONS
There are simple solutions to these challenges. These can be implemented now and subsequently developed further over time in different contexts.
Measurement.
When you decide what you are going to train, confirm, preferably in writing, what success looks like for this training iteration, noting the conditions and standards under which you want tasks to be performed. In conversation, we are all able to say whether things went well or not. After an activity, you will generally be able to identify what went well and what didn’t. That is measuring.
However, it is useful to be a little more specific and robust in your measurement than that “after activity feeling”. Looking at key task measures like mission success, time, risks, casualties, and team interaction can be useful when considering performance and what to change. Be careful of trying to measure too much: the number of variables that can be measured in team training is colossal, so focus is needed. Not everything has to be measured for performance to improve.
The team learning experience will also bring up emergent issues that can be more important to address for team development than the initial measures you established at the start. This emergence doesn’t reduce the impact of the measurement you have already undertaken, as it will inform these emergent issues, especially over time.
Plan Evaluation Before the Activity. This doesn’t need to mean creating extensive checklists as they’re usually poorly designed and employed, with lots of boxes to be ticked as the key measure of their effectiveness. We often use interesting but irrelevant quantitative measures. Some of these quantitative measures like time, ammunition, and casualties are very important but – given our limited ability to mimic real combat – only useful as general indicators in many instances.
If one soldier uses all their ammunition in the assault, before the break in, this may or may not be bad. In reality, their shooting may have been the fire that allowed the break in, or they may simply have been shooting indiscriminately. It is very hard to measure effectively in large scale training, even with simulation and instrumentation systems. Of perhaps more use for the team, in this scenario, might be to measure how reactive and adaptable they were to the unexpected situation and how effective they were in redistributing ammunition – potentially a qualitative self-assessment.
Measure Overall and Key Frictions. Rather than a checklist that measures what you can measure, pick the important things overall. Don’t try to measure everything. Why? Firstly, it is pretty much impossible to do and, secondly, you can’t adjust to everything at once even if you did have the data. Like getting good at a golf swing, look at the performance overall and then pick the couple of key friction points from your past experiences, make a pre-assessment of your performance and measure these key friction points again after the activity. Feedback is essential to ensure you don’t get good at a bad golf swing! Reflecting on this measured performance over time is how you will get better. If you’re not sure, then make assumptions, write them down, and adjust after your first activity. You can use a scale with numbers (1-10) or words (Very Poor-Very Good) as you like. This isn’t an imposed test! It’s about you getting better with your team.
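The before-and-after friction-point comparison described above can be sketched in a few lines. In the Python example below, the friction names and 1-10 scores are hypothetical assumptions for illustration only.

```python
# Illustrative sketch: pre-assess a couple of key friction points,
# re-score them after the activity, and report the change.
# Friction names and the 1-10 scale are hypothetical assumptions.

before = {"orders to task": 4, "targeted rehearsals": 3}  # pre-assessment
after = {"orders to task": 6, "targeted rehearsals": 5}   # post-activity

for friction, pre in before.items():
    post = after[friction]
    print(f"{friction}: {pre} -> {post} ({post - pre:+d})")
```

Whether the scale is 1-10 or Very Poor to Very Good matters far less than comparing the same few frictions across repeated activities.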
Implementation.
Change isn’t easy. Change is hard. Change is necessary to get better. Linking change to self-identified improvements supports the motivation to change. The concept of ‘learning agility’[6] talks about the readiness and ability to learn from experience, and the motivation to implement those learnings after a reflective experience. Priming people to reflectively assess the right things and then be prepared to change and run another set of actions to embed the change is a key element of effective training.
Commanders set the scene for change, but they can’t be everywhere, and our experience shows that it’s the team that beds down changes into culture that is most effective. One of the big mistakes people make in TRAs is to have a big laundry list of little things to fix. Not only do these long lists rarely get completed, they are often not the key issues for teams to address if they want to improve.
For example, writing or revising SOPs often isn’t the problem. A better question to ask is: “Why did we come on this exercise without preparing?” That is, why didn’t we practice our SOPs/drills in the calm of the barracks? If our SOPs were lacking, why didn’t we revise them before we deployed so we could practice them without extra pressure? Learning new processes while under pressure rarely works. Rather than spend time rewriting SOPs (a job often conducted by a stalwart Brigade Major, in the middle of the night, on their own) you are better off identifying the problems and fixes and then repeating the task quickly to judge the effectiveness of your changes.
CONCLUSION
The more focussed the development and conduct of training activities, including measuring performance, the more opportunities there are to improve the outcome. Concentrating on measuring accomplishment across time will lead to better team and individual performance. Focussing on the key areas for improvement and self-measuring team performance will supercharge team development.
End Notes
[1] Army Training Manual, Part 1, Chapter 5, The Army Collective Training Management Framework, p. 21, para 39.
[2] Fundamental Inputs to Capability (FIC), ADF-P-0 Preparedness and Mobilisation. The ten fundamental inputs to capability are: organisation, command and management, personnel, collective training, major systems, facilities and training areas, supplies, support, industry and data.
[3] The Army Collective Training Management Framework (2023), Army Training Manual, Part 1, Chapter 5.
[4] Ibid.
[5] Storr, J. (2022), Something Rotten: Land Command in the 21st Century, Havant, UK: Howgate Publishing Limited.
[6] DeRue, D. Scott, Ashford, Susan J., and Myers, Christopher G. (2012), ‘Learning Agility: In Search of Conceptual Clarity and Theoretical Grounding’, Industrial and Organizational Psychology, 5, 258–279.