The benefits of Artificial Intelligence (AI) in warfighting are increasingly being covered in academic literature.[1] This paper examines how the adoption of machine learning, a subset of AI, could support and enhance[2] a spectrum of areas within the ADF: promoting consistency in decision-making; informing sentencing under military justice and the imposition of administrative sanctions; and supporting the promotions and postings functions within the Career Management Agency. This is not intended to be an exhaustive list of areas, but one to promote and stimulate debate.
AI and Machine Learning
Intelligence can be defined as wisdom and ability. AI encompasses a variety of human intelligent behaviours, such as perception, memory, emotion, judgment, reasoning, proof, recognition, understanding, communication, design, thinking, learning, forgetting and creating, which can be realised artificially by a machine, system, or network.[3] Do we need to meet all of these criteria to take advantage of the developments in AI? The field is in its infancy, but this does not mean the ADF cannot use some of its developments in decision-making. AI research has produced a number of methods that are already in wide use across many industries, and machine learning (ML) is one of those subsets.
ML is a subset of AI that uses statistical methods to enable computers to improve with experience. It is well suited to narrow, specific tasks. ML relies on algorithms that produce better outputs over time as they are exposed to more data. During development, a large amount of data is used to teach the algorithm how to reach a decision while minimising the likelihood of false positives or false negatives. A core development method is supervised learning[4], in which historical data containing known outcomes shows the algorithm the desired result for various inputs, and the machine learns to identify those outcome types correctly. This is very useful for organisations that produce and collect large amounts of data and require consistent outcomes, such as the ADF.
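To make the supervised-learning idea concrete, the sketch below trains a simple classifier on a handful of invented historical records using the open-source scikit-learn library. The features, labels and data are purely illustrative assumptions, not drawn from any ADF system.

```python
# Minimal supervised-learning sketch: a classifier learns a past decision
# pattern from labelled historical records. All fields and data are invented.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical historical records: [years_of_service, prior_offences, rank_level]
X = [
    [2, 0, 1], [5, 1, 2], [8, 0, 3], [3, 2, 1],
    [10, 0, 4], [1, 1, 1], [6, 0, 2], [4, 3, 1],
]
# Outcomes already decided by humans (1 = adverse finding recorded, 0 = none)
y = [0, 1, 0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)  # "experience": learn from the labelled history

# The trained model can then recommend an outcome for a new, unseen case,
# and its consistency can be checked against held-out past decisions.
print(model.predict([[7, 1, 2]]))
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In practice the value comes from exposure to far more records than this toy set, which is the point made above about organisations that collect large amounts of data.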
Assisting decision-making
Automating systems can assist administrative decision-making in a number of ways. For example, they can:
- make the decision
- recommend a decision to the decision-maker
- guide a user through relevant facts, legislation and policy
- provide useful commentary as a decision-support system
Additionally, they can help identify:
- the correct question(s) for the decision-maker to consider prior to making a determination, including the relevant decision-making criteria and any relevant or irrelevant considerations
- whether any procedures or matters which are necessary preconditions to the exercise of the power have been met or exist
- whether there exists any evidence in respect of each of the matters on which the decision-maker must be satisfied, and
- particular issues which require the decision-maker’s consideration and evaluation.[5]
It is not posited that ML has reached a stage where it would be advisable for an automated system to make decisions for the ADF. It could, however, readily assist a decision-maker, either by providing recommendations or by acting as a guide to policy and law. Keeping ML in this assistive role reduces the risk that decision-makers will rely on it blindly because they do not want to take (or do not have) the time to ensure the decision is correct. There may even be circumstances where a decision-maker cannot interrogate the recommendation yet is held to it, raising the question of whether that person can fairly be held accountable for the decision.
Consistency in decision-making is an issue not only in the ADF, but more broadly in public and administrative law.[6] Standardising the tests used for administrative sanctions, sentencing, or postings and promotions would allow an algorithm to be applied across the organisation, and the more the algorithm is used, the more data becomes available to refine it. The Ryan Review states that a cognitive edge must be developed; we must know more, think faster, act smarter[7] – but the human mind has limits to its capacity. Taking the intent of the Ryan Review, utilising ML could result in quicker, more consistent decisions across the ADF.
However, there are distinct drawbacks at this early stage of machine learning. One issue is the transparency and consistency of the data being used, which can make a resulting decision hard to challenge, either within the ADF or in a civilian setting.[8] Equally, bias in ML is a significant issue that can have long-term effects on the organisation.[9] An algorithm is developed with a particular outcome in mind, but the biases of those who design it, and the way the computer is 'trained', can affect the outputs it produces. If the algorithm is only trained on data in which soldiers who committed assault and had served for three years were never promoted, it will be biased against soldiers who meet those criteria regardless of other variables.
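As a rough illustration of how such a bias propagates, the hypothetical sketch below trains a model on invented records in which one attribute is perfectly confounded with the outcome; the feature names and data are assumptions made for the example only.

```python
# Sketch of training-data bias. All data is synthetic and the features
# (assault_conviction, years_of_service, performance_score) are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Every record with assault_conviction == 1 was labelled "not promoted" (0),
# so the label is perfectly confounded with that single feature.
# Columns: [assault_conviction, years_of_service, performance_score]
X_train = [
    [1, 3, 9], [1, 3, 8], [1, 3, 7],   # all labelled 0 (not promoted)
    [0, 3, 9], [0, 5, 6], [0, 8, 8],   # all labelled 1 (promoted)
]
y_train = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A new member with an outstanding performance score is still predicted 0:
# the model has learned the confounded feature, not the whole picture.
print(model.predict([[1, 3, 10]]))   # -> [0]
```

The output reflects the skew in the training records rather than a considered weighing of all the variables, which is exactly the risk described above.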
Military Justice System
Klein and Kahneman developed a theory on the validity of the environment: intuitive prediction of an outcome is only reliable where the environment's variables behave with sufficient regularity.[10] Applying this idea to ML raises the question: can we ensure that all the variables a human would consider can be fed into an algorithm to give us the best decision?
Sentencing during military disciplinary proceedings consists of relatively few variables. Under the Defence Force Discipline Act 1982 (Cth), a Service Tribunal, in sentencing, must have consideration to civilian sentencing principles[11] and the need to maintain service discipline.[12] With respect to the former, amongst other things, the following must be considered:
(a) the person's rank, age and maturity;
(b) the person's physical and mental condition;
(c) the person's personal history;
(d) the absence or existence in the person's case of previous convictions for service offences, civil court offences and overseas offences;
(e) if the service offence involves a victim, then the person's relationship with the victim;
(f) the person's behaviour before, during and after the commission of the service offence; and
(g) any consequential effects of the person's conviction or proposed punishment.
These are all data points with which an ML algorithm could assist a Service Tribunal. This has been the case in the United States of America since 2013.[13] In at least 10 states, these tools are a formal part of the sentencing process; elsewhere, judges informally refer to them for guidance.[14] Utilising an ADF-wide risk assessment algorithm as an aid to Summary Authorities would help promote consistency, transparency and accountability across the Defence Force. The data could be captured through the current practice of completing an E1 – Pre-sentence report, which outlines financial mitigating circumstances. Moreover, the algorithm could easily take into account service records, age, rank, time in rank, qualifications, previous convictions, spent convictions, and dependants. Importantly, this would still allow the decision-maker to consider variables that were not entered into the algorithm.
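One hedged way such an aid might work, sketched below with invented field names, encodings and data, is to encode the considerations listed above as a feature vector and retrieve the most comparable past cases and the punishments imposed, rather than predicting a sentence outright.

```python
# Sketch: encode sentencing considerations as features and retrieve
# comparable past cases to support consistency. All values are invented
# and the encoding is not drawn from the E1 Pre-sentence report.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Columns: [rank_level, age, prior_service_convictions,
#           prior_civil_convictions, victim_involved, consequence_score]
past_cases = np.array([
    [1, 22, 0, 0, 0, 1],
    [2, 27, 1, 0, 1, 3],
    [3, 35, 0, 1, 0, 2],
    [1, 24, 2, 0, 1, 4],
])
past_punishments = ["fine", "restriction of privileges",
                    "fine", "reduction in rank"]

# In practice the features would be scaled so no single column dominates.
index = NearestNeighbors(n_neighbors=2).fit(past_cases)

# A Summary Authority could be shown the most similar historical cases
# alongside the punishments previously imposed in those cases.
new_case = np.array([[2, 26, 1, 0, 1, 3]])
_, idx = index.kneighbors(new_case)
for i in idx[0]:
    print(past_cases[i], "->", past_punishments[i])
```

Presenting comparable prior outcomes, rather than a single recommended punishment, keeps the Tribunal free to weigh factors that were never entered into the system.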
Also part of the military law system are administrative sanctions.[15] What might become harder for ML is assessing what is ‘in the interests of the Defence Force’ – a determination often used as the basis for the termination of an ADF member’s service.[16] It may be that a certain threshold needs to be met in order to trigger this provision under an algorithm. This threshold could include multiple layers:
- absolute (such as convictions for sexual offences, prohibited substance possession, or high-range driving under the influence), or
- strict (such as whether or not the convictions amounted to a year’s imprisonment[17] or a substantiated complaint of domestic violence).
Here, ML could ensure that when certain triggers, as defined by policy and law, are met, decision-makers are notified of the appropriate administrative sanction to be considered. This could assist commanders in navigating the complex and esoteric maze of uncertainties within ADF policies.
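A minimal sketch of this layered-trigger idea is shown below. It is a plain rules check rather than a learned model, and the trigger names, thresholds and notification text are illustrative assumptions only, not actual ADF policy.

```python
# Sketch of layered thresholds: absolute triggers notify immediately,
# strict triggers require a further condition. Names and thresholds are
# invented for illustration and do not reflect real ADF policy.
from dataclasses import dataclass
from typing import Optional

ABSOLUTE_TRIGGERS = {
    "sexual_offence_conviction",
    "prohibited_substance_possession",
    "high_range_dui",
}

@dataclass
class MemberRecord:
    convictions: set
    total_imprisonment_months: int
    substantiated_dv_complaint: bool

def sanction_notification(record: MemberRecord) -> Optional[str]:
    """Return a notification for the decision-maker if a trigger is met."""
    if record.convictions & ABSOLUTE_TRIGGERS:
        return "Absolute trigger met: consider termination action."
    if (record.total_imprisonment_months >= 12
            or record.substantiated_dv_complaint):
        return "Strict trigger met: consider administrative sanction."
    return None  # no trigger met; no notification generated

print(sanction_notification(MemberRecord(
    convictions={"high_range_dui"},
    total_imprisonment_months=0,
    substantiated_dv_complaint=False,
)))
```

Encoding the triggers explicitly in this way also keeps the basis for each notification visible, which speaks to the transparency concerns raised earlier.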
ML support to administrative functions is not a foreign idea: the Australian Department of Veterans' Affairs has established an automated compensation claims processing system to automate certain aspects of its assessment and determination of compensation claims from veterans and their families.[18] The system guides decision-makers in applying over 2,000 pages of legislation and over 9,700 different rules. The efficiency gains have been substantial: the Department now determines 30% more claims annually using 30% fewer human resources in substantially less time, resulting in departmental savings of approximately $6 million each year.[19]
Career Management
Career management is a highly complex system that juggles the needs of the organisation and the desires of the individual to ensure the ADF has a highly effective workforce. The complexities of Other Ranks' performance appraisal have been covered elsewhere;[20] some of them may be manageable with ML algorithms.
A particular concern of ADF members is the consistency and transparency of how posting plots are developed. This is complicated by the constantly changing desires of each individual member set against the priorities of the ADF. Many of these concerns are consistent across Defence: a member with a family seeks stability, those who have not deployed seek a trip overseas, certain people prefer certain locations, and so on. The Royal Australian Navy has attempted to digitise and partly solve these issues through the adoption of ATHENA – a dynamically reconfigurable decision support tool.[21] These elements could be turned into data points and overlaid with the organisational plot to create an algorithm that learns to place people in the best locations for both Defence and the member, helping to reduce dissatisfaction with career management. This would create more consistency in how people are moved around, and could be used in a predictive model to show where a person is likely to go based on their posting history, potential and organisational needs.
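As an illustration only (and not a description of ATHENA), the sketch below frames posting allocation as an assignment problem: invented preference and priority scores are combined, and a standard solver pairs members with locations to maximise the total score.

```python
# Illustrative sketch of posting allocation as an assignment problem.
# Members, locations and all scores are invented for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

members = ["Member A", "Member B", "Member C"]
locations = ["Darwin", "Townsville", "Canberra"]

# Rows: members, columns: locations. Higher = better fit for that pairing.
preference = np.array([
    [3, 1, 2],   # Member A prefers Darwin
    [1, 3, 2],   # Member B prefers Townsville
    [2, 2, 3],   # Member C prefers Canberra
])
org_priority = np.array([
    [2, 3, 1],   # where the organisation most needs each member
    [3, 1, 2],
    [1, 2, 3],
])

score = preference + org_priority            # simple combined score
row, col = linear_sum_assignment(score, maximize=True)

for m, l in zip(row, col):
    print(f"{members[m]} -> {locations[l]} (score {score[m, l]})")
```

The weighting between member preference and organisational need is the design choice that matters here; an ML layer could be used to learn those weights from historical postings and their outcomes, while the final allocation would remain a human decision.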
This would also help manage expectations. Individuals would know that their preferences are fed into a consistent process, removing the element of human bias that can occur when people make the analysis. This would not replace the ultimate decision by a person, and there will always be someone who is not happy with their given posting, but ML could reduce the resources required to manage the plot and the likelihood of negative responses to a posting.
Conclusion
Machine learning has considerable potential to enhance decision-making in the ADF by reducing the cognitive clutter an individual must sift through to reach an informed decision. There are, however, a number of issues that must be considered when using ML to support decision-making, as it could otherwise have detrimental or unintended consequences. If implemented correctly, and with due consideration of the potential pitfalls, the use of algorithms to help synthesise information in various administrative and disciplinary functions could create more efficient, transparent and fairer systems for the ADF.
This article was co-authored by Lincoln Sudholz.