It is time to start discussing Generative Artificial Intelligence (‘Generative AI’) and how it could be used in the Army.
The recent rise of Generative AI tools such as ChatGPT shows, at every turn, that this technology will make a significant impact on our lives. The way we work and how we interact with organisations will all need to adapt.[1] It has even been suggested that the introduction of Generative AI could be almost as transformative as the Industrial Revolution.[2]
In the context of the Army, it is important to consider Generative AI’s effect not only in the battlespace but also in the barracks environment, and how it could improve or ease the work of soldiers, non-commissioned officers, and officers. In particular, this article considers how Generative AI could assist with the administrative side of the job.
What is Generative AI?
Generative AI refers to artificial intelligence that can create new content such as text, images, music, audio, and video.[3] It is built on foundation models that can be adapted for targeted use cases with data ‘fed’ to them by a user such as the Army.[4]
Some current tools that have burst onto the scene from major companies include Microsoft’s Copilot[5] and Google’s Gemini.[6] These tools give organisations the capability to draft documents and to find relevant information an employee is seeking within the organisation.
How could Generative AI be applied in Army?
Generative AI could make a positive impact on the administrative side of the Army by reducing the time spent on work or tasks that could be classed as ‘toil’ or repetitive, and by redirecting members’ efforts to more productive tasks within a project.
Examples of where Generative AI could be applied include:
- Doctrinal and compliance searches – a member could ask the program a question about a particular element of doctrine or policy and, in a shorter time, receive the relevant answer with references. This saves time for the member and allows them to focus on other aspects of the task, such as the overall purpose or intent of the training (a minimal sketch of such a search appears after this list).
- Drafting an Administrative Instruction – the program could help ensure the document complies with the Defence Writing Manual, and could assist in writing a Schedule of Events by providing timings for each day that the member can then modify.
- Battlespace assistance for Combat Team Headquarters and above – for example, Generative AI could assist in drafting Task Orders and radio messages at the HQ, and extract information from previous orders and Task Orders. The benefit of this drafting and analysis assistance is that it saves time for HQ staff and allows them to devote more time to the tactics of the battlespace.
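To make the doctrinal and compliance search example concrete, the short Python sketch below shows the retrieval step such a tool might perform: ranking stored passages against a member’s question and returning them together with their references. Everything here is a hypothetical placeholder (the corpus, the publication names, and the simple word-overlap scoring); a fielded system would run on the Defence network and pass the retrieved passages to a generative model to draft the answer.

```python
import re

# Minimal sketch of a doctrine/policy search: word-overlap retrieval over an
# in-memory corpus. Every document and reference below is a hypothetical
# placeholder, not real doctrine.
DOCTRINE_CORPUS = [
    {"ref": "Hypothetical Pub 1, para 2.3",
     "text": "Range practices require a qualified range safety officer."},
    {"ref": "Hypothetical Pub 2, para 4.1",
     "text": "Administrative instructions must comply with the Defence Writing Manual."},
]

def tokenise(text):
    """Lowercase and strip punctuation so 'practices?' matches 'practices'."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def search_doctrine(question, top_n=3):
    """Rank passages by how many of the question's words they share."""
    query = tokenise(question)
    scored = [(len(query & tokenise(doc["text"])), doc) for doc in DOCTRINE_CORPUS]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_n] if score > 0]

if __name__ == "__main__":
    # A generative model would draft the answer from these passages;
    # the member still opens the cited reference to verify it.
    for hit in search_doctrine("Who must be present at range practices?"):
        print(f"{hit['ref']}: {hit['text']}")
```

A real system would use far more capable indexing, but even this toy version shows why the reference should come back with the answer: the member can open the cited paragraph and verify it.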
Overall, these examples show that Generative AI has the potential to reduce the time spent on the ‘mundane’ or repetitive side of a project or task, and to shift the saved time to the parts that enhance the quality of the training or exercise.
Generative AI – a solution to Army’s staffing challenge?
In addition to creating efficiencies, Generative AI could help Army deal with the effects of the enlistment shortfalls being felt across the force.[7]
The Government is taking steps to address the shortfalls, such as allowing recruits from foreign nations[8] and adjusting entry standards.[9] However, these policy changes will take time to have an impact on the Army, as new recruits must complete the intake and foundational training processes before they reach the level of a trained workforce (and, in due course, fill the higher ranks).
As an additional measure, Generative AI could be implemented in a shorter time frame, potentially freeing up capacity for current members of the Army. Those members could then devote more time to tasks that are currently going undone, alleviating the pressure on the administrative side of the Army.
Governance of Generative AI
As with the introduction of any new piece of software within an organisation, the governance of Generative AI must be considered. Key considerations include:
- A closed-loop system – so that doctrine, policies, and any other relevant Defence material are kept purely within the Defence network.
- Policies regarding usage – guidelines will be important in outlining when and how Generative AI can be used by a member. For example, any document produced by Generative AI should be appropriately reviewed by the member who requested it, including checking the document’s references. This matters because Generative AI is still prone to what the industry calls ‘hallucinations’, where the AI produces an incorrect answer;[10] review by the requesting member mitigates that risk (a sketch of a simple reference check appears after this list).
- Training – training will be required to ensure users understand how to use the tool and are aware of what it can be used for in Army.
- Implementation – the rollout of Generative AI should be phased: an awareness presentation, then training, then gradual deployment across the respective corps and units. Gradual implementation would allow a smooth introduction and give time for any bugs or issues to be resolved.
- Cybersecurity – especially if Generative AI is deployed in the battlespace, the importance of maintaining cybersecurity is compounded by the information the system may hold and the assistance it provides to the Army on the ground.
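As one illustration of the usage-policy point above, the sketch below flags citations in an AI-drafted document that do not appear in an approved reference index, giving the reviewing member a list of what to check by hand first. The citation format and the index are invented for this example, and such a check supplements, rather than replaces, the human review described above.

```python
import re

# Hypothetical index of references a closed-loop system is allowed to cite.
APPROVED_REFERENCES = {
    "Hypothetical Pub 1, para 2.3",
    "Defence Writing Manual, ch 4",
}

# Invented citation convention for this sketch: references in square brackets.
CITATION_PATTERN = re.compile(r"\[(.+?)\]")

def flag_unverified_citations(draft):
    """Return citations in the draft that are not in the approved index.

    An empty result does not mean the draft is correct; the requesting
    member still reviews the content, as the governance policy requires.
    """
    cited = CITATION_PATTERN.findall(draft)
    return [ref for ref in cited if ref not in APPROVED_REFERENCES]

if __name__ == "__main__":
    draft = ("Range practices require a safety officer "
             "[Hypothetical Pub 1, para 2.3]. Timings are flexible "
             "[Made-Up Pamphlet, para 9].")
    for ref in flag_unverified_citations(draft):
        print(f"Check by hand - no such approved reference: {ref}")
```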
The deployment of Generative AI is still maturing; however, it can proceed with the key governance measures outlined above. There is no doubt it will become a key capability for organisations such as the Army.
Overall
The implementation of Generative AI should be considered by Army. Understandably, the risk of hallucinations still limits how and when it could be used. However, Generative AI is a technology the Army must not ignore and should implement, as it will only improve over time. It has tangible benefits that would ease the burden on a ‘strapped workforce’ by ‘giving back time’ to members – reducing the ‘mundane’ and allowing them to focus on more important tasks. As Microsoft Copilot has informed me...
[G]enerative AI can be harnessed for good or misused. While we’re not quite at the Skynet stage, responsible development and ethical usage remain critical as we explore the possibilities of this fascinating field.[11]
End Notes
[1] Clark, Elijah, ‘Unveiling The Dark Side of Artificial Intelligence In The Job Market’, https://www.forbes.com/sites/elijahclark/2023/08/18/unveiling-the-dark-…
[2] Walden, Stephanie, ‘Does the Rise of AI Compare to the Industrial Revolution? ‘Almost’, Research Suggests’, https://business.columbia.edu/research-brief/research-brief/ai-industri…
[3] Google, ‘Generative AI use cases’, https://cloud.google.com/use-cases/generative-ai#overview; McKinsey & Company, ‘What is generative AI?’, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai, April 2024.
[4] Ibid.
[5] Microsoft, ‘Microsoft AI’, https://www.microsoft.com/en-us/ai?ef_id=_k_EAIaIQobChMIqsvaosSGhwMVqcJ…
[6] Google, ‘Google AI’, https://ai.google/gemini-ecosystem
[7] Turnbull, Tiffanie, ‘Australian army to allow recruits from foreign nations’, 4 June 2024, https://www.bbc.com/news/articles/cv22v0wg8v3o
[8] Ibid.
[9] Sky News Australia, ‘ADF to axe 14 health entry requirements to boost number of recruits’, 25 May 2024, https://www.skynews.com.au/australia-news/defence-and-foreign-affairs/a…
[10] Google Cloud, ‘What are AI hallucinations’, https://cloud.google.com/discover/what-are-hallucinations
[11] Microsoft Copilot, question asked on 11 July 2024 – ‘write me a quote about generative artificial intelligence and Skynet’.
Though, as you mentioned, ‘hallucinations’ are quite common even with closed-loop systems trained on relevant doctrine or policy. At present, mitigation would require the user to verify all the information, as the AI is prone to creating false sources and quotes, as well as providing false confirmation if questioned.
Granted, each successive GPT model has reduced the number of hallucinations, with GPT-4 being the most stable so far.
At present, I agree that the best method of implementation would be doctrine and compliance searches to locate where the information can be found, or ‘plug and play’ information, still requiring human verification.
There is certainly a risk that users could utilise it for assessments or guidance. Do you think it would be best to address that through training, or by attempting to limit the output of the AI?