For the classical utilitarian Jeremy Bentham, 'utility' was a simple concept that could be defined as a measure of pleasure or happiness. Of course, this is the same man who decided it would be a good idea to have his corpse preserved and insisted in his will that his body be present at University College London meetings. The point is that times have changed, and utility is now generally viewed in a much more complex way.

For most theorists, utility is a bundled measure of advantage, happiness, safety, self-perception, pleasure, prosperity, and so on. It can most simply be described in terms of states, acts, and outcomes. If I am on a firing range and it is loud (state), I can decide to put in hearing protection (act) in order to prevent myself from going deaf (outcome). I can also decide not to put in hearing protection, or to use it improperly so that I can better hear what the range controller is saying. Depending on how often I go to the range and how severe the exposure is, I will be able to hear what is said, but I may end up with permanent hearing damage.

The matrix of this decision looks like this:

  Act                          Outcome (loud range)
  ---------------------------  ------------------------------------------------
  Wear hearing protection      Hearing preserved; voice commands harder to hear
  Wear protection improperly   Commands heard; partial exposure, possible damage
  No hearing protection        Commands heard clearly; risk of permanent damage

There are, of course, several issues with this example (for instance, hearing protection on the range does not, in reality, stop the user from hearing voice commands), but this is beside the point: individuals calculate utility based entirely on what they perceive to be beneficial or harmful, and that perception is not necessarily tethered to objective reality.

From this matrix we can derive a formal logic proposition[1]; all that is then required is a model that assigns numerical values to each element, and we can measure expected utility. The idea is that target populations or individuals will choose the highest-utility action available in response to the stimuli we provide to them.
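The step from matrix to numbers can be sketched in a few lines. This is a minimal illustration, not any fielded model: the state probabilities and utility values below are invented for the hearing-protection example, on an arbitrary scale where negative numbers represent harm.

```python
# Probability that each state obtains (here: how loud the range is).
# These probabilities and the utility values below are illustrative
# assumptions, not figures from any real model.
state_probs = {"loud": 0.8, "quiet": 0.2}

# Utility of each (act, state) pair on an arbitrary -10..10 scale.
utilities = {
    ("wear_protection", "loud"): 8,    # hearing preserved, commands muffled
    ("wear_protection", "quiet"): 6,   # mild inconvenience only
    ("no_protection", "loud"): -9,     # risk of permanent hearing damage
    ("no_protection", "quiet"): 5,     # commands heard clearly, no harm
}

def expected_utility(act: str) -> float:
    """Sum utility over states, weighted by each state's probability."""
    return sum(p * utilities[(act, s)] for s, p in state_probs.items())

def predicted_act() -> str:
    """The model predicts the act with the highest expected utility."""
    acts = {a for a, _ in utilities}
    return max(acts, key=expected_utility)
```

With these numbers, wearing protection scores 7.6 against -6.2 for going without, so the model predicts the actor will wear it; changing the perceived payoffs changes the prediction, which is the whole point of the perception-based calculation above.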

Expected utility models are controversial in some sectors, and in many cases people do not choose the highest-utility option. For military applications, however, where the use or threat of force is involved, such models have been found to be up to 90% accurate[2] in predicting target population behaviours and reactions to policy decisions.

On our last activity, Polygon Wood, we applied a limited expected utility model to the effect of various actions on the human terrain within the scenario. We did not roll out a fully realised mathematical model, but we did use expected utility matrices like the one above, which allowed us to apply some rigour to the often highly subjective evaluation of how actions affect host populations.

What enabled this was the human terrain as developed in the DATE[3] construct within the Training Adversary System Support Cell (TASSC) campaign plan, Operation Steel Sentinel. The complex ethnography and PMESII-PT[4] analysis contained within these products allowed the ION[5] Team to create a simple expected utility model and then use it to reward or punish training audience actions or inaction.
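The reward-or-punish mechanism can be sketched as a lookup from action to scripted population response. Everything here is hypothetical: the action names, utility values, and thresholds are invented for illustration and are not drawn from the Operation Steel Sentinel products.

```python
# Perceived utility of each training-audience action from the host
# population's point of view (positive = population benefits).
# All names and values below are invented for illustration.
population_utility = {
    "cordon_and_search": -4,     # disruptive, erodes trust
    "key_leader_engagement": 6,  # builds relationships
    "medical_outreach": 8,       # direct, visible benefit
    "inaction": -2,              # perceived neglect
}

def population_response(action: str) -> str:
    """Map an action's perceived utility to a scripted reaction,
    so the exercise cell can reward or punish consistently."""
    u = population_utility.get(action, 0)
    if u >= 5:
        return "cooperation increases"
    if u >= 0:
        return "no change"
    return "cooperation decreases"
```

The design point is consistency: because each response follows mechanically from the model's utility values, the training audience is rewarded or punished by the same logic every time, rather than by an assessor's mood on the day.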

It was our firm belief that any future-ready training system must include the assimilation of models and theories like expected utility and, most crucially, their adaptation and application within training activities.