Innovation and Adaptation

i-Can Help You

By Samuel C. Duckett White October 20, 2019


The benefits of Artificial Intelligence (AI) in warfighting are increasingly being covered in academic literature.[1] This paper examines how the adoption of machine learning, a subset of AI, could support and enhance[2] a range of functions within the ADF: promoting consistency in decision-making; sentencing under military justice and imposing administrative sanctions; and informing the promotions and postings functions within the Career Management Agency. This is not intended to be an exhaustive list, but one to promote and stimulate debate.

AI and Machine Learning

Intelligence can be defined as wisdom and ability. AI covers a variety of human intelligent behaviours, such as perception, memory, emotion, judgment, reasoning, proof, recognition, understanding, communication, design, thinking, learning, forgetting and creating, which can be realised artificially by a machine, system or network.[3] Does a system need to exhibit all of these behaviours before we can take advantage of developments in AI? The field is in its infancy, but that does not mean the ADF cannot use some of its developments in decision-making. AI research has produced a number of methods already in wide use across many industries, and machine learning (ML) is one of them.

ML is a subset of AI that uses statistical methods to enable computers to improve with experience. It is well suited to narrow, well-defined tasks. ML relies on algorithms that produce better outputs over time as they are exposed to more and more data. During development, a large amount of data is used to teach the algorithm to reach a decision while minimising the likelihood of false positives and false negatives. A core development method is supervised learning,[4] in which historical data that already contains a decision shows the algorithm the desired outcome for various inputs, and the machine learns to correctly identify the outcome types. This is very useful for organisations that produce and collect large amounts of data and require consistent outcomes, such as the ADF.
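As an illustration, supervised learning can be sketched in a few lines. The example below (all data invented for illustration) trains a simple nearest-centroid classifier on historical, already-labelled cases, then applies the learned rule to new, unseen inputs:

```python
# A minimal sketch of supervised learning: the algorithm is shown
# historical inputs paired with known outcomes ("labels"), and learns
# a rule it can apply to new, unseen inputs.

def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the input."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Historical, already-decided cases: (features, recorded outcome).
history = [
    ([1.0, 0.9], "A"), ([0.9, 1.1], "A"),
    ([5.0, 4.8], "B"), ([5.2, 5.1], "B"),
]
model = train(history)
print(predict(model, [1.1, 1.0]))   # "A" -- close to the first cluster
print(predict(model, [5.1, 4.9]))   # "B" -- close to the second cluster
```

The more labelled history the model sees, the more representative its centroids become, which is why the technique rewards organisations that collect large, consistent data sets.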

Assisting decision-making

Automating systems can assist administrative decision-making in a number of ways. For example, they can:

  • make the decision
  • recommend a decision to the decision-maker
  • guide a user through relevant facts, legislation and policy
  • provide useful commentary as a decision-support system

Additionally, they can help identify:

  • the correct question(s) for the decision-maker to consider prior to making a determination, including the relevant decision-making criteria and any relevant or irrelevant considerations
  • whether any procedures or matters which are necessary preconditions to the exercise of the power have been met or exist
  • whether there exists any evidence in respect of each of the matters on which the decision-maker must be satisfied, and
  • particular issues which require the decision-maker’s consideration and evaluation.[5]

It is not posited that ML has reached a stage where it would be advisable for an automated system to make decisions for the ADF. Yet it could very readily assist a decision-maker, either through recommendations or by acting as a guide to policy and law. Limiting ML to this supporting role reduces the risk that decision-makers will blindly rely on it because they do not want to take (or do not have) the time to ensure the decision is correct. There may even be circumstances where the decision-maker cannot interrogate the output yet is bound by it, raising the question of whether that person can fairly be held to account for the decision.

Consistency in decision-making is an issue not only in the ADF, but more broadly in public and administrative law.[6] By standardising the tests used for administrative sanctions, sentencing, or postings and promotions, each use of the algorithm generates more data, which in turn refines it. The Ryan Review states that a cognitive edge must be developed: we must know more, think faster and act smarter,[7] yet the human mind has limits of capacity. Taking the intent of the Ryan Review, utilising ML could deliver quicker, more consistent decisions across the ADF.

However, there are distinct drawbacks in these early stages of machine learning. One issue is the transparency and consistency of the data being used, which can make decisions hard to challenge, either within the ADF or in a civilian setting.[8] Equally, bias in ML is a significant issue that can have long-term effects on the organisation.[9] An algorithm is developed with a particular outcome in mind, but the bias of those who develop it, both in the design process and in how the computer is 'trained', can affect the outputs it produces. If an algorithm is trained only on data indicating that soldiers who commit assault and have served for three years are not promoted, it will be biased against any soldier who meets those criteria, regardless of other variables.
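The promotion example can be made concrete. In the toy sketch below (all records invented), a learner simply reproduces the majority historical outcome for each profile; because every historical record matching one profile ended in "not promoted", the model denies promotion to anyone with that profile, whatever their other merits:

```python
# A toy illustration of training-data bias: a learner that reproduces
# historical outcomes inherits whatever blanket patterns the history
# contains. Profiles here are (prior_assault, years_of_service).

from collections import Counter

def train_majority(records):
    """Learn the most common outcome for each profile."""
    tallies = {}
    for profile, outcome in records:
        tallies.setdefault(profile, Counter())[outcome] += 1
    return {profile: c.most_common(1)[0][0] for profile, c in tallies.items()}

# Historical outcomes -- note the blanket pattern for (True, 3).
history = [
    ((True, 3), "not promoted"),
    ((True, 3), "not promoted"),
    ((False, 3), "promoted"),
    ((False, 5), "promoted"),
]
model = train_majority(history)

# Even an outstanding performer matching the biased profile is refused:
print(model[(True, 3)])   # "not promoted" -- other merits never considered
```

The remedy is not in the algorithm but in the data and design: the training set must include the variables that should matter, and outcomes that should not be determinative must not dominate it.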

Military Justice System

Klein and Kahneman developed a theory of environment validity: the ability to intuitively predict an outcome in an environment depends on the regularity of its variables.[10] Applying this idea to ML raises the question: can we ensure that all the variables a human would consider can be fed into an algorithm to give us the best decision?

Sentencing during military disciplinary proceedings consists of relatively few variables. Under the Defence Force Discipline Act 1982 (Cth), a Service Tribunal, in sentencing, must have regard to civilian sentencing principles[11] and the need to maintain service discipline.[12] With respect to the former, amongst other things, the following must be considered:

(a)  the person's rank, age and maturity;

(b)  the person's physical and mental condition;

(c)  the person's personal history;

(d)  the absence or existence in the person's case of previous convictions for service offences, civil court offences and overseas offences;

(e)  if the service offence involves a victim, then the person's relationship with the victim;

(f)  the person's behaviour before, during and after the commission of the service offence; and

(g)  any consequential effects of the person's conviction or proposed punishment.

These are all data points with which an ML algorithm could assist a Service Tribunal. This has been the case in the United States of America since 2013.[13] In at least 10 states, these tools are a formal part of the sentencing process; elsewhere, judges informally refer to them for guidance.[14] Utilising an ADF-wide risk assessment algorithm as an aid to Summary Authorities would help promote consistency, transparency and accountability across the Defence Force. The data could be captured through the current practice of completing an E1 – Pre-sentence Report, which outlines financial mitigating circumstances. Moreover, it could easily take into account service records, age, rank, time in rank, qualifications, previous convictions, spent convictions and dependants. Importantly, this would still allow the decision-maker to consider variables that were not entered into the algorithm.
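As a sketch of how those factors might be structured for a decision-support tool, the fragment below encodes a subset of them as typed inputs and returns an illustrative recommendation band. The field names, scoring and bands are entirely hypothetical, not real sentencing policy, and the Service Tribunal remains free to weigh factors outside the model:

```python
# Hypothetical encoding of some s 70 factors and E1 pre-sentence data
# as structured inputs for a sentencing decision-support tool. The tool
# recommends only; the tribunal decides.

from dataclasses import dataclass

@dataclass
class SentencingInputs:
    rank: str
    age: int
    years_of_service: int
    prior_service_convictions: int
    prior_civil_convictions: int
    offence_involved_victim: bool
    cooperative_after_offence: bool

def recommend_band(inputs: SentencingInputs) -> str:
    """Illustrative rule-of-thumb bands -- NOT real sentencing policy."""
    score = inputs.prior_service_convictions + inputs.prior_civil_convictions
    if inputs.offence_involved_victim:
        score += 2
    if inputs.cooperative_after_offence:
        score -= 1
    if score <= 0:
        return "reprimand range"
    if score <= 2:
        return "fine range"
    return "refer for detailed consideration"

case = SentencingInputs("PTE", 24, 3, 0, 0, False, True)
print(recommend_band(case))   # "reprimand range"
```

In a real system the hand-written rules would be replaced by a model trained on past sentencing outcomes, but the input structure would look much the same.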

Also part of the military law system are administrative sanctions.[15] What might become harder for ML is assessing what is ‘in the interests of the Defence Force’ – a determination often used as the basis for the termination of an ADF member’s service.[16] It may be that a certain threshold needs to be met in order to trigger this provision under an algorithm. This threshold could include multiple layers:

  • absolute (such as sexual related criminal convictions, prohibited substance possession, high range driving under the influence), or
  • strict (such as whether or not the convictions amounted to a year’s imprisonment[17] or a substantiated complaint of domestic violence).

Here, ML could provide that when certain triggers are met, as defined by policy and law, decision-makers are notified of the appropriate administrative sanction that should be taken. This could assist commanders in navigating the complex and esoteric maze of uncertainties within ADF policies.  
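The layered-threshold idea could be sketched as a simple rule set. In the fragment below (trigger names and thresholds are illustrative, not policy), absolute triggers flag a case immediately, while strict triggers flag only when a defined bar, such as twelve months' imprisonment, is met; the output is a notification to the decision-maker, not a decision:

```python
# A sketch of layered administrative-sanction triggers. Absolute
# triggers flag on occurrence; strict triggers flag only past a
# defined threshold. Categories invented for illustration.

ABSOLUTE_TRIGGERS = {
    "sexual_offence_conviction",
    "prohibited_substance_possession",
    "high_range_dui",
}

def administrative_flags(record):
    """Return reasons the case should be raised with a decision-maker."""
    reasons = [t for t in record.get("events", []) if t in ABSOLUTE_TRIGGERS]
    # Strict-layer triggers: conditional on a threshold being met.
    if record.get("imprisonment_months", 0) >= 12:
        reasons.append("conviction attracting 12+ months imprisonment")
    if record.get("substantiated_dv_complaint", False):
        reasons.append("substantiated domestic violence complaint")
    return reasons

member = {"events": ["high_range_dui"], "imprisonment_months": 0}
print(administrative_flags(member))   # ['high_range_dui']
```

Codifying the triggers in one place also makes them auditable: when policy changes, the threshold changes once, rather than in each commander's head.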

ML support to administrative functions is not a foreign idea: the Australian Department of Veterans' Affairs has established an automated compensation claims processing system to handle certain aspects of its assessment and determination of compensation claims from veterans and their families.[18] The system guides decision-makers in applying over 2,000 pages of legislation and over 9,700 different rules. The efficiency gains have been substantial. The Department now determines 30% more claims annually using 30% fewer human resources in substantially less time, resulting in departmental savings of approximately $6 million each year.[19]

Career Management

Career management is a highly complex system, juggling the needs of the organisation and the desires of the individual to ensure the ADF has a highly effective workforce. The complexities of Other Ranks' performance appraisal have been covered elsewhere;[20] some of them may be manageable by ML algorithms.

A particular concern of ADF members is the consistency and transparency of how posting plots are developed. This is made complex by the constantly changing desires of each individual member relative to the priorities of the ADF. Many of these concerns are consistent across Defence: a member with a family seeks stability, those who have not deployed seek a trip overseas, certain people prefer certain locations, and so on. The Royal Australian Navy has attempted to digitise and, to an extent, solve these issues through the adoption of ATHENA, a dynamically reconfigurable decision support tool.[21] These elements could be turned into data points and overlaid with the organisational plots to create an algorithm that learns to place people in the best locations for both Defence and the member, helping to reduce dissatisfaction with career management. This would create more consistency in how people are moved around, and could be used to show, in a predictive model, where a person is likely to go based on their posting history, potential and organisational needs.
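At its simplest, posting-plot matching can be framed as an assignment problem. In the sketch below (members, locations, preference scores and organisational weights are all invented), each member scores each location, organisational priorities are overlaid as weights, and the tool proposes the pairing that maximises total satisfaction; the toy case is small enough to solve by exhaustive search, where a real plot would need a proper assignment algorithm:

```python
# A minimal sketch of posting-plot matching as an assignment problem.
# Member preferences are combined with organisational weights and the
# best total pairing is found by brute force over all assignments.

from itertools import permutations

members = ["Smith", "Jones", "Lee"]
locations = ["Darwin", "Brisbane", "Canberra"]

# Member preference scores (higher = more preferred).
preference = {
    "Smith": {"Darwin": 1, "Brisbane": 3, "Canberra": 2},
    "Jones": {"Darwin": 3, "Brisbane": 1, "Canberra": 2},
    "Lee":   {"Darwin": 2, "Brisbane": 2, "Canberra": 3},
}
# Organisational priority for filling each location.
org_weight = {"Darwin": 2, "Brisbane": 1, "Canberra": 1}

def plot_score(assignment):
    """Total weighted satisfaction for a member->location assignment."""
    return sum(preference[m][loc] * org_weight[loc]
               for m, loc in assignment.items())

best = max(
    (dict(zip(members, perm)) for perm in permutations(locations)),
    key=plot_score,
)
print(best)   # Smith -> Brisbane, Jones -> Darwin, Lee -> Canberra
```

Because every preference and weight is explicit, a member can be shown exactly why a proposed posting scored as it did, which is precisely the transparency the current process is criticised for lacking.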

This would also help manage expectations. Individuals would know that their preferences are fed into a machine, reducing the element of human bias that can occur when people make the analysis. It would not replace the ultimate decision by a person, and there will always be someone unhappy with their posting, but ML could reduce the resources required to manage the plot and the likelihood of negative responses to a posting.

Conclusion

Machine learning has great potential to enhance ADF decision-making by reducing the cognitive clutter an individual must sift through to reach an informed decision. A number of issues must be considered when looking at how ML could support decision-making, as it could have detrimental or unintended consequences. If implemented correctly, and with due consideration of the potential pitfalls, the use of algorithms to help synthesise information in various administrative and disciplinary functions could create more efficient, transparent and fairer systems for the ADF.

This article was co-authored by Lincoln Sudholz.

End Notes:

[1] Daniel Lee, ‘An interdisciplinary approach to Army’s intellectual preparation for artificial intelligence and autonomous systems.’ 24 Sep 18, accessed from The Cove at https://cove.army.gov.au/article/interdisciplinary-approach-armys-intell... Kieran Galea, ‘Enhancing Army’s Robotic and Autonomous System Strategy’ 03 Jun 19, accessed from The Cove at https://cove.army.gov.au/article/enhancing-armys-robotic-and-autonomous-... see more generally the ICRC, ‘Artificial intelligence and machine learning in armed conflict: a human-centred approach’ 06 Jun 19, accessed from https://www.icrc.org/en/document/artificial-intelligence-and-machine-lea... see Tess Bridgeman, ‘The viability of data-reliant predictive systems in armed conflict detention’ 8 Apr 19, accessed from https://blogs.icrc.org/law-and-policy/2019/04/08/viability-data-reliant-... see Lorna McGregor, ‘The need for clear governance frameworks on predictive algorithms in military settings’ 28 Mar 19 from https://blogs.icrc.org/law-and-policy/2019/03/28/need-clear-governance-f... see Ashley Deeks, ‘Detaining by algorithm’ 25 Mar 19 accessed from https://blogs.icrc.org/law-and-policy/2019/03/25/detaining-by-algorithm/.

[2] See LTCOL Greg Colton, ‘More than Just a Hashtag’ https://theforge.defence.gov.au/publications/more-just-hashtag-criticali...

[3] Deyi Li & Yi Du, Artificial intelligence with Uncertainty (2017, CRC Press).

[4] M. De Choudhury, S. Counts, E. Horvitz, A. Hoff, in Proceedings of International Conference on Weblogs and Social Media [Association for the Advancement of Artificial Intelligence (AAAI), Palo Alto, CA, 2014]

[5] Dominique Hogan-Doran, ‘Computer says ‘no’: algorithms and artificial intelligence in Government decision-making’ (2017) 13 The Judicial Review 1 – 39.

[6] See S Lohr, “If algorithms know all, how much should humans help?” The New York Times, 6 April 2015, at https://nyti.ms/1MXHcMW, accessed 23 August 2017; see also D Schartum, “Law and algorithms in the public domain” (2016) Nordic Journal of Applied Ethics, 15–26, at http://dx.doi.org/10.5324/eip.v10i1.1973, accessed 16 August 2017.

[7] Brigadier Mick Ryan, The Ryan Review: A study of Army’s education, training and doctrine needs for the future (2016), 88.

[8] See Automated Assistance in Administrative Decision Making – 2004 – Australian Administrative Review Council. See further, M Perry and A Smith, “iDecide: the legal implications of automated decision-making” [2014] Federal Judicial Scholarship 17, at www.austlii.edu.au/au/journals/FedJSchol/2014/17.html, accessed 9 Oct 19.

[9] Tackling Bias in Machine Learning (2019) Insight Data Science, https://blog.insightdatascience.com/tackling-discrimination-in-machine-learning-5c95fde95e95

[10] Kahneman and Klein on Expertise (2013) Judgement and decision making https://j-dm.org/archives/793

[11] Defence Force Discipline Act 1982 (Cth), s 70(1)(a).

[12] Ibid, s 70(1)(b).

[13] State of Wisconsin v Loomis 881 N.W.2d 749 (Wis. 2016)

[14] Jordan Hyatt and Steven L Chanenson, ‘The Use of Risk Assessment at Sentencing: Implications for Research and Policy’

[15] See, for example, David Letts and Rob McLaughlin, ‘Intersection of Military Law and Civil Law’ in Robin Creyke, Dale Stephens and Peter Sutherland (eds) Military Law in Australia (The Federation Press, 2019) 100.

[16] Defence Regulations 2016 (Cth), s 24(1)(c).

[17] Similar to the test under Migration Act 1958 (Cth), s 501.

[18] Department of Human Services, 2012-13 Annual Report (2013) 68, 69

[19] John McMillan 'Automated assistance to administrative decision-making: Launch of the better practice guide' (Paper presented at seminar of the Institute of Public Administration of Australia, Canberra, 23 April 2007), 10.

[20] Ben Taylor, ‘Other Ranks’ Performance Appraisal: Yet to Evolve’ 2 Oct 19 from https://cove.army.gov.au/article/other-ranks-performance-appraisal-yet-e...

[21] Department of Defence – Science and Technology, accessed from https://www.dst.defence.gov.au/research-facility/training-systems-and-ma...



Biography

Samuel C. Duckett White

Legal Officer, Australian Army.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Australian Army, the Department of Defence or the Australian Government.
