Artificial intelligence (‘AI’) and its latest iteration, Generative Artificial Intelligence, have already caused disruption to modern society.[1] In terms of potential military applications, AI has been viewed as having the ability to ‘…field better weapons, make better decisions in battle, and unleash better tactics.’[2]

This article examines AI’s potential to create ‘better tactics’, in particular how its implementation could take one of two paths:

  1. fully autonomous AI that makes its own tactical decisions; or
  2. AI as an assistant to decision-making processes such as the Individual Military Appreciation Process (IMAP).

In other words, AI could act like ‘Skynet’ from the Terminator film series, exercising complete control; or like ‘Cortana’ in the video game series Halo, acting as an assistant to the main protagonist. While this sounds like science fiction, at the current pace of AI development, something similar is possible.

Artificial Intelligence in Defence

The international community has been alive to the possibilities of AI, as is evident from the introduction of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.[3] The declaration is a commitment that AI will be used within the bounds of international law, safely, and without bias.[4] In the context of this article, the declaration affirms that any military AI must:

…be accountable, including through such use during military operations within a responsible human chain of command and control.[5]

The potential scope of AI within defence is so broad that its use in a country’s nuclear capability has been discussed. President Joe Biden of the United States and President Xi Jinping of China were reportedly poised to pledge a ban on the use of AI in the control and deployment of nuclear warheads. The ban did not eventuate; however, both countries agreed to continue discussing advanced AI systems.[6] In May 2024, the United States affirmed its commitment not to defer to AI in decisions regarding nuclear weapons and encouraged the Russian Federation and China to follow suit.[7]

Various countries are studying the implementation of AI in the military. Broadly, there are two methods of implementation: enabling AI to run autonomously; or using AI as an assistant on the battlefield, for example by conducting key analysis for the decision-maker to speed up processes such as the IMAP.

Autonomous AI in the battlespace

The use of autonomous AI systems is a real possibility for the Army. They have the potential to provide additional assets that do not require a human to manage them, only to direct them to their task. Considering the Army’s size relative to the potential threats it faces, the implementation of autonomous AI could make up those shortfalls and act as a solution to the problem.

The Boeing MQ-28 Ghost Bat, being implemented for the Royal Australian Air Force, is an example of an autonomous system. Ghost Bat is a pathfinder for the integration of autonomous systems and artificial intelligence.[8] The system works in partnership with the pilot, creating human-machine teams.[9] The MQ-28 is a key example of AI technology and humans working together to help our forces achieve their mission.

Still, these systems raise questions: for instance, if an autonomous system breaks international law, who is responsible for the breach?

AI with Tactical and Strategic Decision-Making

The use of AI in an assistant role for combat leaders is a real possibility. There are situations where an appropriately trained large language model could save the commander time and provide tactical insight or options for problems faced on an exercise or deployment.

Some of the ongoing ‘friction points’ in tactical decision-making during the IMAP include:

  1. providing subordinates enough time to consider and draft their own plans;
  2. ensuring that all relevant considerations are taken into account when making a decision; and
  3. mitigating the risk of the decision-maker’s bias, i.e. when conducting FASD on possible courses of action.

AI could be a solution to some of these challenges. For instance, AI could be trained to provide probabilistic data and supporting analysis on external factors such as the current enemy picture (drawing on data from the commander and other sources) and terrain, and to offer a potential plan for consideration. In this way, it could assist the combat leader in developing a tactical plan by providing that analysis and potential approaches. Because the AI is only assisting, it would not displace innovation or unforeseen plans: the final decision rests with the combat leader.
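To make the idea concrete, the sketch below shows one way the commander’s inputs might be structured and passed to a language model for course-of-action suggestions. It is a minimal illustration only: the TacticalPicture structure, its field names, and the stubbed generate function are assumptions, and a fielded system would call an accredited, appropriately trained model rather than the placeholder used here.

```python
from dataclasses import dataclass, field

@dataclass
class TacticalPicture:
    """Hypothetical container for the inputs a combat leader might feed the assistant."""
    mission: str
    enemy_assessment: str          # current enemy analysis from the commander and other sources
    terrain_notes: str             # key terrain, obstacles, avenues of approach
    own_forces: str
    constraints: list[str] = field(default_factory=list)

def build_prompt(picture: TacticalPicture) -> str:
    """Assemble a single prompt asking the model for candidate courses of action."""
    constraints = "\n".join(f"- {c}" for c in picture.constraints) or "- none stated"
    return (
        "You are assisting a combat leader with the IMAP.\n"
        f"Mission: {picture.mission}\n"
        f"Enemy assessment: {picture.enemy_assessment}\n"
        f"Terrain: {picture.terrain_notes}\n"
        f"Own forces: {picture.own_forces}\n"
        f"Constraints:\n{constraints}\n"
        "Propose two distinct courses of action with the key risks of each. "
        "These are options for the commander's consideration only."
    )

def generate(prompt: str) -> str:
    """Placeholder for a call to an accredited, appropriately trained model."""
    return "[model output would appear here]"

if __name__ == "__main__":
    picture = TacticalPicture(
        mission="Delay the enemy advance along Route BLUE for 24 hours",
        enemy_assessment="Mechanised company, likely axis north-south",
        terrain_notes="Restricted terrain east of the river; single bridge crossing",
        own_forces="Combat team with one troop of armour",
        constraints=["No movement west of the river", "Preserve the bridge"],
    )
    print(generate(build_prompt(picture)))
```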

In practical terms, the large language model would need to rely on quality data. The initial data would be doctrine; thereafter, the approach would no doubt need to include:

  1. Guardrails – limits imposed on the model, such as ensuring that any proposed tactical action would be acceptable through the lens of the Army; and
  2. Semantic search – whereby only key or relevant information is provided to the model; for example, when asked for tactical moves that achieve surprise, it could draw on key manoeuvres that have achieved it.[10] (A minimal sketch of both controls follows this list.)
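The sketch below illustrates both controls in simplified form: a keyword guardrail that rejects proposals containing disallowed elements, and a semantic search that ranks doctrine extracts by similarity to the query so that only the most relevant passages reach the model. The disallowed-term list, the doctrine snippets, and the toy embed function are assumptions for illustration; a real system would use a proper embedding model and guardrails approved by the chain of command.

```python
import math

# --- Guardrail: reject proposals outside the bounds set by the chain of command ---
DISALLOWED = ["target civilian", "use of prohibited munitions"]   # illustrative only

def passes_guardrail(proposed_action: str) -> bool:
    """Return False if the proposed tactical action contains a disallowed element."""
    text = proposed_action.lower()
    return not any(term in text for term in DISALLOWED)

# --- Semantic search: surface only the doctrine extracts most relevant to the query ---
def embed(text: str) -> list[float]:
    """Stand-in embedding: a real system would call an embedding model here."""
    # Crude bag-of-letters vector, purely so the example runs end to end.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank doctrine passages by similarity to the query and keep only the top k."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    doctrine = [
        "Surprise is achieved by striking where the enemy least expects it.",
        "A feint draws the enemy's attention away from the main effort.",
        "Logistic planning considerations for extended lines of communication.",
    ]
    print(top_k("manoeuvres that achieve surprise", doctrine))
    print(passes_guardrail("Feint east, then attack the depth position from the north"))
```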

As the language model is acting as an assistant, the ultimate control measure is the user. However, the above control measures are still needed to ensure that the large language model receives appropriate training.

Psychological studies have established that human decision-making is affected by various biases, such as bandwagon effects, confirmation bias, and implicit associations.[11] These biases can help by enabling us to make faster decisions; however, they can also lead to errors in judgement.[12] Unlike a human, AI has the ability to avoid these biases and to work through the issues or material at a faster pace.[13]

As discussed in my previous article,[14] Generative AI could draft the initial ‘first cut’ of orders. On the face of it, Generative AI could act as an assistant to the commander. The benefit, or ‘knock-on effect’, is that it reduces the time spent on the battlefield considering and implementing a plan, giving subordinates more time to conduct their own appreciation.

However, such usage can create its own bias: automation bias, in which users who have relied on the program for an extended period place all their trust in it and do not question its results.[15] The issue is further compounded by the ‘downstream’ risk of de-skilling the humans within the system, whose analytical skills may atrophy due to the semi-automation of some tasks or to automation bias itself.[16]

In addition, there is a risk that the Generative AI could take on the bias of its trainer. Like any initiative, it will carry risk, and steps should be taken to minimise that risk as much as possible. Any model could therefore include control measures to mitigate ‘trainer bias’. This can be a complex problem because there is a cultural context to consider: the trainer could find an approach appropriate within their own culture, yet it may not suit the circumstances or sit completely within the boundaries given by the chain of command. It is therefore important to consider values-based approaches.[17]

To mitigate such biases, control measures could be based on principles such as the following (a simple sketch of the first, the ‘balance test’, appears after the list):

  1. Data integrity governance via a ‘balance test’ – ensuring that the data utilised by the Generative AI comes not from one source or trainer but from several, in turn reducing the risk of a skewed or biased output because the data has been fairly balanced; and
  2. Values-based test – a broad control measure, but key considerations could be ethical principles, legal obligations, and security interests (discussed later in this article), as well as clear rules on transparency; for example, the Generative AI program highlighting when it is relying on only one set of data from a sole or small number of sources.
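As a rough illustration, the ‘balance test’ could be expressed as a check on the distribution of training data across sources before the model is updated, with a transparency notice when one source dominates. The 50 per cent threshold and the source labels below are assumptions chosen for the example, not settled policy.

```python
from collections import Counter

# Illustrative threshold: no single source or trainer may contribute more than
# this share of the training data. The figure is an assumption, not policy.
MAX_SHARE = 0.5

def balance_test(source_labels: list[str]) -> tuple[bool, dict[str, float]]:
    """Return whether the corpus passes the balance test, plus each source's share."""
    counts = Counter(source_labels)
    total = sum(counts.values())
    shares = {src: n / total for src, n in counts.items()}
    passed = len(shares) > 1 and max(shares.values()) <= MAX_SHARE
    return passed, shares

def transparency_notice(shares: dict[str, float]) -> str | None:
    """Flag to the user when the output is effectively resting on a single source."""
    dominant = max(shares, key=shares.get)
    if shares[dominant] > MAX_SHARE:
        return f"Notice: {shares[dominant]:.0%} of the underlying data comes from '{dominant}'."
    return None

if __name__ == "__main__":
    labels = ["trainer_A", "trainer_A", "trainer_A", "trainer_A", "doctrine", "exercise_reports"]
    ok, shares = balance_test(labels)
    print(ok, shares)               # fails: one trainer supplies two-thirds of the data
    print(transparency_notice(shares))
```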

Governance and Risk Management of Autonomous and Assisting AI

If implementing autonomous or assisting AI on the battlefield is being considered, various governance measures should be in place to maintain the integrity of the decision-making process. For autonomous AI, the key risk is that the human decision-maker is entrusting AI with responsibilities and the ability to make decisions on the battlefield.

The following principles should be considered:

  1. Delegation of Tasks – Reviewing the delegation of tasks to autonomous AI from a risk perspective would be important: how we delegate responsibilities down the chain of command, which tasks the responsible person is willing to delegate to AI to run on its own, and which tasks should remain in an assisting role to the commander.
  2. Liability – If the autonomous AI breaches international or national law, does responsibility fall on the direct line of command, despite the AI having been given full autonomy? And is it legally acceptable for such a task to be delegated to autonomous AI at all?
  3. Tactically sound – In the circumstances, is it appropriate to delegate the task to autonomous AI, or should the AI instead assist the human element?

AI in an assisting role is likely to carry the least risk, as a human remains in control. Even with human control at the centre, there should still be consideration of how it is implemented, including:

  1. Testing AI in the Classroom – Generative AI could be tested in the classroom during IMAP assessments, where a Generative AI program could assist in conducting the various steps within the CMAP and above. Such testing could help find faults or limits of the program that need to be rectified.[18]
  2. Ethical principles, national security interests, and legal obligations – the Generative AI would need to operate within limitations and boundaries built into the system, and these boundaries could also apply to autonomous AI.[19]
  3. Training on implementation, avoiding ‘automation bias’ and, for Generative AI, asking the right questions – As noted, while commander bias can be avoided, heavy reliance on the technology is a serious risk. Training should be the first step in ensuring that the commander is self-aware of the bias risk, maintains control, and uses AI in the appropriate manner.
  4. Foundational Training – The foundational training currently taught in tactics and strategy should be maintained. While the technology is useful, it has the potential to fail; to mitigate its absence, members should remain trained to make decisions without it.
  5. Formulaic Approach to War Fighting – If Generative AI or similar is used for tactics, it could repeatedly produce the same or a similar approach. As a control measure to prevent templating, and to prevent the opposing force from discerning a pattern in Australian forces’ actions, a ‘guardrail’ or notice could tell the user that a manoeuvre has already been used in the current exercise/operation.
  6. Maintaining the integrity of Manoeuvre Warfare – There may be a concern that the Generative AI could fall into an ‘attritional’ approach to tactics. A potential solution is a guardrail that incorporates combat ratios and highlights to the combat leader when a proposal sits beyond acceptable risk; the ‘acceptable risk’ could be sourced from doctrine or higher command. (A simple sketch of these last two guardrails follows this list.)
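The last two controls lend themselves to simple, rule-based guardrails layered on top of the model rather than anything the model learns. The sketch below illustrates both: a notice when a proposed manoeuvre has already been used in the current exercise or operation, and a combat-ratio check against a planning threshold that in practice would be drawn from doctrine or set by higher command. The 3:1 figure used here is an assumption for illustration only.

```python
# Illustrative planning ratio for a deliberate attack; in practice this threshold
# would be sourced from doctrine or set by higher command.
MIN_ATTACK_RATIO = 3.0

def combat_ratio_check(friendly_strength: float, enemy_strength: float) -> str:
    """Warn the combat leader when the proposed action sits beyond acceptable risk."""
    ratio = friendly_strength / enemy_strength
    if ratio < MIN_ATTACK_RATIO:
        return (f"Warning: combat ratio {ratio:.1f}:1 is below the {MIN_ATTACK_RATIO:.0f}:1 "
                "planning threshold. Refer to higher command before accepting this risk.")
    return f"Combat ratio {ratio:.1f}:1 is within the planning threshold."

def templating_notice(proposed_manoeuvre: str, manoeuvres_used: list[str]) -> str | None:
    """Flag a manoeuvre that has already been used in this exercise/operation."""
    if proposed_manoeuvre in manoeuvres_used:
        return (f"Notice: '{proposed_manoeuvre}' has been used before in this "
                "exercise/operation; repeated use may set a pattern for the opposing force.")
    return None

if __name__ == "__main__":
    print(combat_ratio_check(friendly_strength=2.0, enemy_strength=1.0))
    print(templating_notice("left flanking", ["left flanking", "feint and envelop"]))
```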

Whichever form of AI is chosen, the data used will be key to either system being deployed. In particular, the data must be of high quality and kept secure, to prevent it being tainted by the opposing force.

Conclusion

AI will most likely be introduced onto the battlefield in a complex and comprehensive way. Given the foreseeable benefits, before any such implementation there should be consideration of the key underlying principles of its introduction and of the control measures that should be in place to avoid any loss of integrity in the decision-making process.

End Notes

[1] Walden, S, (2024), ‘Does the Rise of AI Compare to the Industrial Revolution? ‘Almost’, Research Suggests’, [online], https://business.columbia.edu/research-brief/research-brief/ai-industrial-revolution, Accessed on 11 November 2024.

[2] Erskine, T, Miller, S, (2024), ‘AI and the decision to go to war: future risks and opportunities’, Australian Journal of International Affairs, Vol.78, No.2, pages 135–147, at page 136.

[3] Ibid at 136; United States Department of State, 2023, [online], ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy-2/, Accessed on 11 November 2024.

[4] Ibid. 

[5] Ibid. 

[6] Ibid at 137.

[7] Ibid at 137. 

[8] Australian Air Force, ‘Ghost Bat’, [online], https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat, Accessed on 11 November 2024; Boeing, ‘MQ-28’, [online], https://www.boeing.com/defense/mq28#overview, Accessed on 11 November 2024. 

[9] Ibid. 

[10] Anderson, D, 2023, ‘Training a Large Language Model on your content’, [online], https://medium.com/barnacle-labs/training-a-large-language-model-on-your-content-f366ae50305b, Accessed on 14 November 2024; and Multimodal, 2023, ‘The Ultimate Guide to Building Large Language Models’, [online], https://www.multimodal.dev/post/the-ultimate-guide-to-building-large-language-models, Accessed on 14 November 2024

[11] Vold, Karina, 2024, ‘Human-AI cognitive teaming: using AI to support state-level decision making on the resort to force’, Australian Journal of International Affairs, Vol.78, No.2, pages 229–235, at page 232.

[12] Ibid. 

[13] Ibid. 

[14] Santelises, Aaron, 2024, ‘Generative Artificial Intelligence – Easing the Administrative Burden for Army’, [online], https://cove.army.gov.au/article/generative-artificial-intelligence-easing-administrative-burden-army, Accessed on 11 November 2024

[15] Vold, Karina, 2024, ‘Human-AI cognitive teaming: using AI to support state-level decision making on the resort to force’, Australian Journal of International Affairs, Vol.78, No.2, pages 229–235, at page 233.

[16] Ibid at page 233. 

[17] University of Kansas, ‘Helping students to understand the biases in generative AI’, [online], https://cte.ku.edu/addressing-bias-ai, Accessed on 11 November 2024.

[19] Etzioni, A, Etzioni, O, 2017, ‘Pros and Cons of Autonomous Weapons Systems’, [online], https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/pros-and-cons-of-autonomous-weapons-systems.pdf, Accessed on 11 November 2024.