What is the question? Well, let’s first consider how much data our Army, and indeed the Australian Defence Force, is collecting on a daily basis and what sense is being made of that data. Frankly, there is no point giving you numbers and statistics: it’s just a lot, in many different data silos, disparate and fulfilling specific purposes, but not joined up to create intelligent cross-silo pictures. The things that can and do join up these data repositories are questions; intelligent questions, incisive questions, the sort of questions only data, or in fact targeted data analysis, can answer. The data is there, but the questions are absent. Or are they? What questions should we be asking? How to be better, faster, safer, more agile, more efficient, flexible, smarter – perhaps all of these? What questions are being asked of, and answered by, data right now, and how can we get better at asking them?

There is no doubt a bigger data problem (or opportunity), but there are already efforts afoot to deal with a smaller data problem, one of importance, one that could start on a dark night, with low ambient illumination levels due to thick cloud, the sort of night that night vision devices still struggle with. Heavy showers forecast, cloud down to the ground, poor visibility, an apparently essential mission, what could go wrong?

It’s every commander’s worst nightmare. A late night phone call, the frantic disembodied voice on the other end of the line - 'Ma’am, there’s been an aircraft accident...'

The ensuing hours, days and weeks invariably and frustratingly furnish more questions than answers.  But we crack on, we keep analysing the evidence and asking the questions until we have the answer; an answer to the question of ‘why and how did it happen’, or as close as we can get to it. 

We’re good at it. Very good. Gauges are placed under microscopes to ascertain the position of the needle when it struck the face of the instrument, light bulbs receive the same treatment to determine their state of illumination at impact. Every element of human participation is examined in staggering detail. Then we find it. Burnt, half buried under wreckage and earth, damaged but still a faithful guardian of its precious cargo. The Flight Data Recorder (FDR).

The FDR is the holy grail of accident investigation. A virtual treasure trove of data contained within its memory banks. Now, the mechanism of the accident can be distilled to mathematics and physics, with just the ‘why’ remaining to be established. While there is still a tremendous amount of work to be done, being in possession of the FDR data is a huge advantage for the investigators. It is capable of answering many questions.

Months, sometimes years pass and the investigation is complete. Findings and recommendations are handed down, all with the sole intent of preventing recurrence.

Prevention, Proaction, Prediction – Aiming to Eliminate

Prevention, yes, what if there were a way... a way to get to the preventing without suffering the indescribable anguish of having the accident. If only there were a way to perform all the analysis, the number crunching and the inquiries into the human behaviour - all without the accident.  After all, FDRs are significantly easier to find and access prior to the accident. Let’s say you did access and download FDR data in the absence of an event such as an accident - what then do you do with such data?  What do you look for? What is your focus for the examination?  What questions do you want answered? What questions should you be asking? What outcomes do you want to achieve?

What if there was a way of testing our understanding of our risk environment using data, and our subsequent approach to that environment through risk management? Risk management is an oft-overused and misunderstood term. In aviation, risk management is relatively well known and used, and has reached a high level of refinement. But it can always be better, and surely this is not just an aviation thing. Our people are injured and killed doing things other than flying, aren’t they?

What forms does risk management come in?  In simple terms, it has historically been either reactive or proactive.  Reactive systems respond to events and seek to make change to avoid future occurrences of the same or similar events.  This approach relies on data from incidents and accident reports, and suffers from the obvious failing of only addressing what has happened.  Proactive systems attempt to address hazards/risks that have been assessed as having the potential to cause harm before this harm has occurred.  In addition to accident and incident report data, proactive systems utilise data from inspections/audits as well as voluntary reporting and surveys.  Proactive systems seek to prevent, rather than explain and mitigate, but the aforementioned data sources have their limitations. They look at what is on show and what people say they do, and voluntarily report, but they do not measure or assess what is actually done.

So how can we move beyond being proactive and into the realm of predictive risk management? In practice this involves moving to the left not only of potential causes for accidents, but also potential causes for incidents, to analysis of pre-incident trends. If we can get ahead of incidents actually occurring, and address equipment or training shortfalls before they lead to unsafe acts, we are attacking the accident vector before it gains velocity. Predictive risk management is precisely what the name suggests – it endeavours to predict what risks are not yet obvious through seeking to identify trends and allows operators to track changing risks that they currently believe are being managed. 

Achieving predictive risk management is all about asking questions, and data analysis… back to the FDR and its treasure trove. Modern FDRs record thousands of parameters, but how do we analyse that massive amount of data to give us what we want – a predictive approach to risk management? One highly effective methodology is termed Military Flight Operations Quality Assurance (MFOQA).


MFOQA, also known as Flight Data Monitoring, is the practice of obtaining and analysing flight data to enhance the Aviation Safety Management System. The accepted ADF definition is:

‘Through the routine collection, compilation and analysis of parametric flight data MFOQA programs provide insight into the flight operations environment. This allows the identification of trends and the efficient allocation of resources to reduce operational risks and improve capability’[1]

MFOQA is not new in aviation; the first such program had its origins in the UK in the early 1960s. Early programs simply sought to check compliance and conformance with rules and procedures. But to be truly predictive, modern MFOQA programs need to do far more. The big shift in modern platforms is the richness and detail of the data available. As an example, the ARH Tiger, our first new generation digital aircraft, is capable of recording a combination of 1553 data bus traffic, video and audio, essentially enough data to completely reconstruct a flight from what the aircraft did, which switch was selected when, and what was said. With all recorders turned on, this equates to 12 GB of (incredibly rich) data capture per aircraft per hour, all there for the taking, waiting for the questions.

Army Aviation’s MFOQA program has been in place for close to a decade, and in this time has:

  • allowed the identification of non-compliance with flight manual limits, and other rules and procedures
  • allowed monitoring of trends in flight parameters associated with known higher risk flight profiles – allowing us to actively evolve our flying techniques
  • helped us discover previously unknown (latent) hazards in our operations
  • provided a basis to better quantify risks already addressed by the Safety Management System (SMS)
  • provided efficiencies in maintenance.

There are three main stages from data capture through to having an actionable product, with the decision to act being the fourth stage, as shown in Figure 1 below.

Figure 1 – AAAvn MFOQA Process Model


  • Stage 1 - Data collection – This involves the physical recording of the aircraft parameters and the method of getting the data out of the aircraft, into a storage repository and then onto a medium to permit analysis. The aircraft download can be via live telemetry, manual download (post flight) or Wi-Fi data transmission. While this would appear to be a somewhat simple task, this element can be technically complex and provide significant challenges in the establishment and ongoing maintenance of the program. Importantly, a minimum data capture target of 95% is a key measure of an effective program that is truly looking across an entire fleet. The system needs to seek 100% data capture, but due to recorder failures and data corruptions, 95% is realistic. Ideally, knowing the value of data capture and subsequent monitoring, we would build not only data capture, but also the required analysis system, into the acquisition of new platforms. We would ask the questions up front, and ensure that by using a data analysis system, the data could answer them.


  • Stage 2 - Data Interrogation – Machine (no context) – Second, once the data is at hand, it is compared to a predetermined set of parameters (or ‘event set’, also known as ‘the questions’) to ascertain if an event has occurred. While some of the events are relatively simple (e.g. exceeding a bank angle of 60°), others require input from numerous sensors to establish if an event has occurred (e.g. greater than 20° of pitch below 50 ft above ground).
  • In the establishment of event sets, multiple levels of event can, and should, be created. This allows for objective assessment of the hazard and associated risk level of the event against objective parameters. The quality of event sets develops over time, and parameters are refined until the data is providing meaningful information about aircraft operation.


  • Stage 3 - Data Analysis – Human (with context) – Once the data has been reduced by applying the ‘event set’ to the ‘data set’ in the machine, the events of interest are analysed by (human) environmental experts and context is added. At this stage, corrupted data is identified (i.e. parameters that cannot coexist are identified and discarded from consideration) and events of significance requiring action are prioritised for presentation at the MFOQA review board where action decisions will be made. At this point, the information provided by the analysis of the data is used to achieve the main goal of the program – training improvement or behavioural change to enhance the system safety.
  • In Army Aviation, the MFOQA program is used to enable and inform the chain-of-command in order to enhance safety and is thus intended to be non-punitive in nature. In cases of deliberate repetitive violations or potential gross neglect, only the existence of the event of interest would be provided, and the Commanding Officer would choose the next step. This approach works because experience has shown that events of this type are almost never isolated, and effective supervisors know, even without individual identification, where to look to solve these problems or apply increased supervision. MFOQA is a ‘race-to-the-heights’ of aviation safety, and it seeks to improve and inspire rather than frighten and punish. This philosophical vector is important.


  • Stage 4 – Act and Review – The review board is held routinely to ensure identified trends are acted upon quickly before incident/accident vectors develop. Though not yet at this frequency, Army Aviation has a desire, and arguably a need, to review MFOQA trends monthly and implement changes regularly. This will stop ‘trend admiration’ and ensure that the legislative requirement to continuously review risk to ensure controls are effective is executed.


The first stage, data collection, should reach steady state operation fairly quickly once implemented with changes only required for identified equipment or process improvements.  The next three stages are subject to a continuous evolution cycle to allow us to adapt and learn from our evolving environment. 
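To make the Stage 2 mechanics concrete, the machine interrogation step can be sketched as a set of predicate functions applied to each recorded sample. The parameter names, thresholds and severity levels below are illustrative assumptions only, not the actual Army Aviation event set; the two example events mirror the simple (single-parameter) and compound (multi-sensor) cases described above:

```python
# Illustrative sketch of Stage 2 event detection. Parameter names,
# thresholds and severity levels are hypothetical, not the real event set.

def detect_events(samples, event_set):
    """Compare each recorded sample against the event set; return triggers."""
    triggers = []
    for sample in samples:
        for event in event_set:
            if event["predicate"](sample):
                triggers.append({"time": sample["time"],
                                 "event": event["name"],
                                 "level": event["level"]})
    return triggers

EVENT_SET = [
    # Simple single-parameter event: bank angle beyond 60 degrees.
    {"name": "bank_gt_60", "level": 2,
     "predicate": lambda s: abs(s["bank_deg"]) > 60},
    # Compound event needing two sensors (pitch attitude and radar altitude):
    # more than 20 degrees of pitch below 50 ft above ground.
    {"name": "pitch_gt_20_below_50ft", "level": 3,
     "predicate": lambda s: abs(s["pitch_deg"]) > 20 and s["radalt_ft"] < 50},
]

flight = [
    {"time": 0.0, "bank_deg": 15, "pitch_deg": 2, "radalt_ft": 500},
    {"time": 1.0, "bank_deg": 65, "pitch_deg": 5, "radalt_ft": 500},
    {"time": 2.0, "bank_deg": 10, "pitch_deg": 25, "radalt_ft": 40},
]

for t in detect_events(flight, EVENT_SET):
    print(t["time"], t["event"], "level", t["level"])
```

Each event is just a question asked of the data; the multi-level structure is what later lets the review board prioritise events objectively before human context is added in Stage 3.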



A contemporary example from Army Aviation highlights how MFOQA can assist in identification of hazards and the management of the associated risk[2].

The flight data (Figure 3) from an S70 Black Hawk revealed what appeared to be an unsafe flight manoeuvre, which required further investigation to establish the context of the event.

Figure 3 – Flight data trace from an S70 Black Hawk showing bank angle over 90°


Contact with the crew of this sortie allowed us to learn that the flight was conducted for the purpose of flight testing an aircraft component, and the crew were flying a prescribed flight manoeuvre.  However, had the procedure associated with this test flight been appropriately risk managed?  In this case, the profile had been authorised, but not optimised for the safest outcome. As a result of this event, the procedure followed for this test flight element was modified and the appropriateness of the risk managed flight profile confirmed.  Had this not been the case, and the event had occurred in the course of a flight other than a test flight, the event might have given insight into an adverse trend emerging in our flight operations.

The following table (Table 1) shows some of the MFOQA ‘event set’ for the S70 Black Hawk in Australian service over recent years. The trends in the occurrence of trigger events are obvious, and the safety implications of keeping the aircraft well clear of actual or monitored limits are undoubtedly positive. Just looking at this table, even the unfamiliar observer can immediately see trends, both apparently positive and negative, and understand which elements might warrant further attention and which seem to be tracking well. This ability to ‘see’ results intuitively is an indicator of a good set of questions, in this case refined over time. In this data set specifically, adding context significantly reduced the events of interest to single figures. An important effect here is operators’ understanding of what the event set triggers are, and the thought process they will engage in before triggering them through a specific manoeuvre. This doesn’t prevent aircrew from safely and appropriately manoeuvring the aircraft within specified limits, but it does ensure triggering events are both necessary and justifiable, because we may ask the question to confirm this during the review process.

Table 1 – S70 Black Hawk ‘Event Set’ (selective) 2012-2018


Surely this is not just an aviation thing?

Yes, surely there are applications of this methodology outside of aviation? Well, of course there are!

While data monitoring of this nature has its origins in aviation, there is an obvious and logical conclusion that the principles are just as applicable to the operation of any vehicle involving a human-machine interface.  Most major ground transport companies in Australia have implemented some kind of data monitoring (In Vehicle Monitoring Systems or IVMS).  While the majority of these systems work on a limited number of parameters, usually transmitted via a mobile telemetry system, the system can also function effectively with a fixed recording system that requires a manual download. 

In the case of a ground vehicle, it comes back to - yes, you guessed it - the questions. If, for example, we wished to track pre-cursor events to vehicle rollovers through data, then we would need some detail about what the pre-cursor conditions were so we could establish the ‘event set’ and then compare the data to it. The question might be, ‘How often have we nearly rolled over a G-Wagon on a dirt road in the last 12 months?’ Assuming of course there was a program of routine data collection in place, we could get the actual ‘event set’ for a rollover from an accident that has already occurred. It could include numerous parameters, some on vehicle, some off vehicle, such as:

  • Body critical rollover angle
  • Corner radius
  • Steering wheel position and use
  • Activation of vehicle safety functions like traction control and ABS
  • Vehicle lateral accelerations (i.e. is the vehicle drifting?)
  • Speed
  • Type of road
  • Ambient weather and road conditions

Once we know what a rollover ‘event set’ looks like in data, we can set a predictive ‘event set’ with thresholds short of an actual rollover but providing clear indicators of the potential for one, go to the vehicle fleet data, and pass it through the filter. Importantly, many of these measurements relating to vehicle dynamics may not be detected by a dashboard-mounted system that isn’t in some way integrated with, or at least ‘sniffing’, on-board systems for information. Also, some of the information may not be directly sensed by the vehicle but could be obtained in other ways. For example, how might we establish the class of road?
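As a sketch of that filtering step (every parameter name and number below is hypothetical; a real precursor event set would be derived from actual rollover data and vehicle dynamics analysis), precursor thresholds could be set at a margin below the critical values and then applied to fleet data:

```python
# Hypothetical sketch: derive a predictive (precursor) event set from the
# parameter signature of an actual rollover by setting trigger thresholds
# at a margin below the critical values. All figures are illustrative.

CRITICAL = {"body_roll_deg": 38.0, "lateral_g": 0.55, "speed_kph": 85.0}
MARGIN = 0.8  # trigger at 80% of each critical value

PRECURSOR = {param: value * MARGIN for param, value in CRITICAL.items()}

def near_rollover(sample):
    """True if every monitored parameter exceeds its precursor threshold."""
    return all(sample[p] >= threshold for p, threshold in PRECURSOR.items())

fleet_data = [
    {"body_roll_deg": 12.0, "lateral_g": 0.2, "speed_kph": 60.0},  # benign
    {"body_roll_deg": 33.0, "lateral_g": 0.5, "speed_kph": 75.0},  # near miss
]
events = [s for s in fleet_data if near_rollover(s)]
```

Setting the margin is itself a question for the data: too tight and genuine precursors are missed, too loose and the review board drowns in nuisance triggers.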

In a study conducted by DSTO[3] where Health and Usage Monitoring Systems (HUMS) were fitted to M113AS4, Bushmaster Protected Mobility Vehicle (PMV) and Australian Light Armoured Vehicle (ASLAV), vehicle speed and vertical acceleration inputs were used to establish whether the vehicle was travelling on a first or second class road or moving cross country.  Once the type of road or terrain is established, context can be provided to search for triggers from other parameters of interest[4].  Of course there is GPS positioning available to confirm the location, while further information may be gleaned from satellite imagery for events of interest.
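A crude sketch of that idea (the thresholds below are invented for illustration and are not taken from the DSTO study) might classify terrain from mean speed and vertical acceleration alone:

```python
# Illustrative terrain classifier in the spirit of the DSTO HUMS study:
# high speed with low vibration suggests a sealed road, low speed with
# high vibration suggests cross country. Thresholds are invented.

def classify_terrain(mean_speed_kph, rms_vert_accel_g):
    """Classify road/terrain type from two HUMS-style parameters."""
    if mean_speed_kph > 70 and rms_vert_accel_g < 0.15:
        return "first class road"
    if mean_speed_kph > 40 and rms_vert_accel_g < 0.35:
        return "second class road"
    return "cross country"

print(classify_terrain(90, 0.10))  # smooth and fast
print(classify_terrain(25, 0.60))  # slow and rough
```

Once a terrain class is attached to each data segment, the same trigger event (say, a given lateral acceleration) can be assessed differently on a sealed road than cross country.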

The important point here is to ensure we are ‘question led’, rather than ‘data led’. If we simply start with the data provided, and work out what it will give us, we will never go past it. But if we start with the questions we want answered, then we will understand what the current data collection systems can and can’t provide, and what needs to be done to collect or indeed combine the data sets we need to answer them.

How safe is safe enough?

The US Air Force says it very well:

‘Apply the principle that there is no limit to the amount of effort justified to prevent the recurrence of one aircraft accident or the loss of one life.’[5]

With this sage advice in mind, the vision of the Army Aviation MFOQA program is simple. 

We seek to capture and analyse the data generated by an aircraft in operation and apply the lessons from this analysis to help find new ways to improve flight safety and increase overall operational efficiency.

But this need not be limited to the aviation environment. Every life is precious, and we have a legislated responsibility to continuously do our best to ensure we are applying the effort necessary to protect not only those it is our duty to protect but, most importantly, the men and women either side of us - our mates. In many cases, we have the data available and could be using it to enhance safety; often no one has deliberately decided not to use it, we simply have not started, so we need to get after it.

Safety-focused, safe operations (both domestically and deployed) underwrite capability delivery. Certainly for aircraft operations, over the last 30 years far more aircraft have collided with the ground or other things in theatres of operation than have been successfully engaged by threat weapon systems of all types. We must first survive peacetime domestic training operations to be ready to deploy, alive and intact, when the call comes, and then apply those same practices when deployed. The statistics do not lie, and a review of vehicle accidents over the last 30 years would likely yield similar findings to aviation.

MFOQA, and military vehicle operations quality assurance (MVOQA), to coin a new term, gives us the processes and procedures to wade through massive amounts of data and create something meaningful and noble from it. Trends in our routine training operations that could cause harm need no longer remain a mystery. We must only decide and act to progress, and ask questions.

This article was co-authored by Richard Armitage.