Artificial Intelligence (AI) is a rapidly evolving technology that has the potential to transform the way we live and work. In recent years, AI has made significant advances in fields such as computer vision, speech recognition, and decision making, and these advances are beginning to have an impact on a wide range of industries, including the military. The Australian Army is no exception, and it is facing both opportunities and challenges as it prepares for the future of warfare in an era of rapid technological change.
The potential benefits of AI for the Australian Army are substantial. AI can help soldiers make better decisions in complex and rapidly evolving situations by providing them with real-time information and insights. For example, AI algorithms can analyse vast amounts of data to identify patterns and trends, providing soldiers and officers with a better understanding of the battlefield. AI can also automate routine tasks such as logistics, freeing up soldiers and officers to focus on more critical tasks.
Another key advantage of AI is its ability to augment human decision-making. In a combat scenario, AI algorithms can help soldiers quickly identify threats and respond to changing conditions, reducing the risk of casualties, and improving operational efficiency. This could lead to a more effective and agile Australian Army that is better equipped to meet the challenges of modern warfare.
However, there are also significant risks associated with AI, and the Australian Army must be careful to consider these when developing its AI strategy. One of the biggest challenges is ensuring that AI algorithms are trustworthy and unbiased. AI systems can learn from large amounts of data, but the data they learn from may be biased or incomplete. This can lead to unfair or inaccurate decision-making, with potentially serious consequences for soldiers and the overall mission.
Another concern is the risk of AI systems becoming autonomous and making decisions without human oversight. This is particularly relevant in the military context, where the consequences of autonomous systems making the wrong decision could be severe. The Australian Army must ensure that AI systems are designed with robust human-in-the-loop processes, and that soldiers and officers have the training and skills to understand and control these systems.
Finally, the Australian Army must consider the ethical implications of AI and ensure that its use of AI is consistent with the values and principles of the Australian Defence Force. This includes ensuring that AI systems are used in a manner that is consistent with international human rights law and that soldiers and officers are trained to understand and apply ethical considerations in the use of AI.
The future of the Australian Army is likely to be shaped by the development of AI. This technology has the potential to revolutionise the way soldiers and officers operate, providing them with new capabilities and making them more effective and efficient. However, it is important that the Australian Army approaches the use of AI with caution and that it carefully considers the opportunities and risks associated with this technology. This will require a clear strategy for the development and use of AI, along with robust processes for ensuring that AI algorithms are trustworthy, that soldiers and officers have the skills and training to use these systems, and that the use of AI is consistent with the values and principles of the Australian Defence Force.
Author's note
What did you think of this article? Did you agree with its points? What if I told you that this article wasn’t written by me, but rather by ChatGPT, an AI-powered language model developed by OpenAI? The only guidance I gave to ChatGPT was:
‘Write me an 800-word blog article on the implications of artificial intelligence for the future of the Australian Army. The article should be provocative highlighting both the opportunities and threats that soldiers and officers of all levels should consider.’
Followed by:
‘Great, now give me a clever title for this article.’
On reflection, you may notice that there is nothing particularly original or ground-breaking about the points described in this article. Yet the implications of even this simple use of such technology are limitless for our profession, and for the world. What applications can you already imagine?
There are situations where that output will be superficially passable on a cursory inspection or skimming. But just like its graphical equivalents, extra limbs and elongated, six-fingered hands will often be present, at least metaphorically. It is often the written-word version of the uncanny valley.
In the short term, it will probably help people churn out better-quality spam emails, and maybe the occasional (barely) passing-grade Staff College essay at the last minute.
Current examples typically involve exposing your data to an external entity, so probably avoid using it for your next range instruction or OPORD.
It is likely similar to a lot of other technology in that it occasionally pulls some great but shallow stunts, but is actually most useful as a more nuanced part of human-machine teaming. Perhaps when the models are refined enough to run locally, this kind of capability can be built into a word processor, a bit like a spell checker or a locally running Grammarly-like engine, helping people tailor the form and style of the semantics they have articulated. There are even techniques like private federated learning that may help deal with the data leakage associated with centralised learning models.
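The core idea behind federated learning can be sketched in a few lines: each participant trains on its own private data locally, and only model parameters, never the raw data, are shared and averaged by a coordinator. The toy linear model and all function names below are illustrative assumptions, not any real library's API; production systems add encryption, secure aggregation, and differential privacy on top of this basic loop.

```python
# Minimal illustrative sketch of federated averaging (FedAvg).
# Each client fits y = w * x on its own private samples; only the
# weight w (never the data) leaves the client each round.

def local_update(w, data, lr=0.01):
    """One gradient-descent step on a client's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, client_datasets, rounds=50):
    """Each round: every client trains locally, the server averages weights."""
    for _ in range(rounds):
        local_ws = [local_update(w, d) for d in client_datasets]
        w = sum(local_ws) / len(local_ws)  # only parameters are aggregated
    return w

# Three clients each hold private samples of the same relationship y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
    [(5.0, 15.0)],
]
w = federated_average(0.0, clients)  # converges towards w = 3.0
```

The point of the sketch is simply that a shared model can be improved without any client ever disclosing its underlying records, which is the property that makes the approach interesting for sensitive data sets.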
Fitting that kind of data set locally is not far-fetched; after all, the entire English-language content of Wikipedia fits in the storage of an Apple Watch these days.
Just as we did not end up with robot butlers but with washing machines and dishwashers, technology, to be truly useful, often ends up being more pedestrian and less visible than the early hype suggests.