Defence rightly wants to prepare its members for ethical conduct – on the field, in the lines, and in the office. From sexual violence to workplace bullying to war crimes, strategies for prevention are critical, and a key component must be to strengthen our members’ moral decision-making. But, given that prevention occurs when people are not in the midst of acute ethical challenges, how can it be done effectively?
A common approach is to have people meet in groups to discuss what they would do in hypothetical or decontextualised scenarios. The facilitator might ask, ‘What would you do if you saw a higher-ranked member mistreating a civilian in such-and-such a setting?’ Group members then discuss how they would react and what they might do to ensure ethical conduct. But does this sort of hypothetical moral decision-making work? Do people’s answers to hypothetical or decontextualised ethical scenarios cohere with the moral decisions they actually make in real situations? The answer from experimental research is surprising.
An article from the journal Cognition describes experiments that compared hypothetical moral assertions with actual moral decision-making. The article finds that ‘the proscription to not harm others – predicted by our survey sample to be a powerful force in real life moral decisions – in fact has surprisingly little influence when potential significant personal gain is at stake’. An article in the journal Psychological Science finds that ‘responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision’. A pair of articles in the Journal of Choice Modelling describes ‘hypothetical bias’ as a significant obstacle to relating hypothetical decision-making to real life. The second article argues that the gap between ‘hypothetical choice data’ and ‘behavioural realism’ means that a range of mitigation strategies is needed whenever hypothetical decision-making is undertaken. An article in Social and Personality Psychology Compass argues that hypothetical dilemmas ‘suffer from low external validity’ – in other words, they do a poor job of predicting how people will act in real life. Among other reasons, the authors point to the problem that such scenarios are ‘unrepresentative of the moral situations people encounter in the real world’, and that hypothetical problems ‘do not elicit the same psychological processes as other moral situations’.
We could imagine how this might play out in the group discussion mentioned above. The facilitator asks, ‘What would you do if you saw a higher-ranked member mistreating a civilian?’ The members of the group know that the correct response is to say that they would reject and resist the unacceptable behaviour of the higher-ranked member. They know that providing this answer will earn the approval of the facilitator and confirm for themselves a sense of relative moral preparedness. Let’s focus on one imagined member of this discussion group: CPL Smith concurs that he would refuse to condone the unacceptable behaviour, receiving an affirmation from the facilitator. After the discussion group is over, CPL Smith walks through the building and takes a shortcut through a corridor. He catches the tail end of an interaction between a field-grade officer and a private soldier: the officer gently shoves the soldier through a door with the words, ‘Incompetent idiot!’ The anonymous soldier leaves. Other officers stand around, paying no attention. CPL Smith winces at the interaction, but then thinks, ‘I don’t know the situation – was that friendly banter? Did I actually hear things correctly? No one else seems to think anything significant happened. And the soldier didn’t say anything. In the past I’ve got in trouble for impulsively jumping to conclusions about things I didn’t understand. I don’t want to look stupid for misinterpreting something I barely saw.’ CPL Smith continues on his way, resuming his pleasant feelings about doing well in the moral decision-making exercise earlier.
Real-life moral decision-making often takes place in a fog of uncertainty, amid tangled impulses towards self-preservation, keeping the peace, and peer approval. So how can we effectively prepare our members for these circumstances, rather than giving them (and the ADF) a false sense of moral preparedness?
The articles referred to earlier suggest a number of mitigation strategies. Two in particular seem like a good start.
First: increase contextual realism. One article found that ‘as more contextual information – like the specific nature of the reward and harm contingencies of the decision – became available to subjects, hypothetical moral probes better predicted real moral decisions’. So hypothetical scenarios should include reference to painfully realistic risks and rewards. For example: ‘If you speak up and you are wrong, it is likely that the group will shun you, and that your word will hold less weight with the commander in the future.’
Second: reward honesty. It should be clear to discussion participants that the purpose of the discussion is to explore factors impacting decision-making in difficult situations, rather than gaining an assurance that everyone can say the right thing. For example, it should be permissible for someone to say, ‘Actually, I don’t think I would call out the higher-ranked member – it would feel safer to shut up.’ An honest response like this provides an opportunity for the group to consider strategies to deal with pressures toward self-preservation.
Is our ethical training making us less ethical? A great deal of excellent ethical training already happens in Army. At the same time, we should be aware of the danger of giving people the impression that a ‘correct’ answer in a hypothetical scenario is the same as being prepared for the complexity of tackling real-life moral dilemmas.