Future Operating Environment
An Interdisciplinary Approach to Army’s Intellectual Preparation for Artificial Intelligence and Autonomous Systems
By Daniel Lee, August 5, 2020
“I’m certainly questioning my original premise that the fundamental nature of war will not change. You’ve got to question that now. I just don’t have the answers yet.”
US Secretary of Defense Jim Mattis, on the impact of intelligent machines
The Chief of Army’s Intent Statement, Army in Motion, names offset technologies as part of the Technology and Innovation theme of Army’s future warfighting philosophy, Accelerated Warfare. Artificial intelligence (AI) and autonomous systems are two technologies which may be developed to provide an asymmetric advantage to an Australian Army which is relatively small in terms of equipment and personnel. Current discussion on these topics is often clouded by hype, emotion and references to popular culture; moreover, most discourse is framed from separate perspectives, such as technology, strategy or ethics. In order to intellectually prepare for the debate around the development and employment of these technologies, Army, as well as the broader Australian Defence Force (ADF), needs to build an interdisciplinary understanding of at least three fields of study.
Framing the Conversation
Firstly, the technology in question must be understood. This involves answering the question of "what" AI and autonomous systems are, and what they can do. A common lexicon is necessary if people are to talk to each other (rather than past or at each other) about this contentious and nascent field of study. Understanding some of the various definitions of AI and autonomy, what some machine learning methods are, and what the technology can do now and will be able to do in the future forms the basis of investigating potential uses.

The second question is one of strategy: answering "how" and “why” we intend to use AI and autonomous systems. Current technology might allow for the development of some exquisite AI-enabled autonomous systems; without an end state or objective in mind, however, procuring these systems may produce tactically excellent solutions which are not integrated into a long-term strategic plan. For example, autonomous systems may be used to mitigate the relatively small size of the Australian Army in comparison to a potential adversary’s land force. Once it is understood why Army needs these capabilities and how they might be used, Army can then explore whether or not they should be used at all.

The third question Army should ask about the use of AI and autonomous systems is one of ethics: even if there is a valid strategy which would benefit from the use of these systems, "should" they be pursued? A precursor to answering this question is a basic understanding of the ethical frameworks that may be used to assess the use of emergent technologies by the military in war and peacetime. A broad understanding of the technological, strategic and ethical principles relevant to the potential employment of AI and autonomous systems will set the foundations for an informed discussion not only within Army, but also within the wider ADF, society and Parliament.
Only when Army understands “what” AI and autonomous systems are, and “why” and “how” they may be used, can we begin to discuss if they “should” be used in a military context at all.
Technology - what are AI and Autonomous Systems?
Army needs to develop a broad understanding in its officers and soldiers of what AI and autonomous systems are and what they can do. Most of what the general population understands about AI and autonomy comes from the media or from popular culture; both may have contributed to the current hype and emotion around the topic. As a result, frequently asked questions include "how do we stop the machines from attacking us?" and "what’s to stop the machines from deciding we aren’t necessary and killing us all?” A basic introduction to the technology and how it works may help allay these fears, and would allow Army to engage in informed, logical discussions on the issue.

Definitions are important; it is hard to debate the utility and ethics of the military use of AI and autonomous systems if there is no common understanding of what is being debated. To illustrate the point, the US Department of Defense’s Directive 3000.09 defines an autonomous weapon system as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.” Conversely, the Campaign to Stop Killer Robots, composed mainly of non-governmental organisations, refers to “…fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention…” (emphasis added by author). Notably, this definition does not encompass systems which operate within the cyber domain, such as an autonomous Stuxnet-style program which could select and attack computer systems without human intervention, producing effects in the physical world such as the destruction of a nuclear power plant. In most cases, machines with any level of autonomy need to be enabled by AI. Definitions of AI are varied, with no one universally accepted; in general, it can be understood to involve computer systems performing tasks which would normally require human intelligence.
The machine may learn to do this via a variety of methods. Supervised learning involves a human training the AI by showing it correct outputs for a given input; the mathematical equation at the heart of the program, or algorithm, then determines which mix of variable inputs produces a given output. For example, rather than trying to code in instructions on how to recognise a main battle tank, a human would train the AI by showing it many pictures of tanks (input), and telling the machine that they are tanks (output). After sufficient training, the system should be able to correctly classify a given object as “tank” or “not tank”. Unsupervised learning involves the use of plain data without associated outputs; it can be used to detect patterns or group objects by similarity. Reinforcement learning involves the setting of a “reward” system for the algorithm, where the AI learns through replicating behaviours which maximise the reward received. In a combat simulator, this could involve scoring 10 points for destroying enemy vehicles and deducting 1000 points for every simulated collateral damage incident; the algorithm would attempt to find ways to maximise combat effectiveness while minimising non-combatant casualties to achieve the best outcome. Importantly, each type of machine learning allows the human to dictate to the AI what success looks like, and what outputs and behaviours are and are not desired.

It is unlikely that robots or AI would completely replace individual human roles. A more nuanced examination of the technology reveals a number of alternative possibilities for the future human/intelligent robot relationship. Major General Ryan’s Human Machine Teaming for Future Ground Forces lists three key endeavours where machines might assist rather than replace humans.
These include Human/Robot teaming, perhaps employing something like Boston Dynamics’ LS3 AlphaDog to carry a section’s equipment; Human/AI teaming, for example AI assisting humans in detecting cybersecurity threats; and Human Augmentation, such as exoskeletons which enhance soldiers’ physical and cognitive abilities. A Deloitte Review article by David Schatsky and Jeff Schwartz, Redesigning Work in an Era of Cognitive Technologies, outlines ways in which AI and autonomy may assist human workers: these systems may replace humans completely (but then allow them to be redeployed to tasks which only humans can do, such as career counselling); automate processes, such as travel acquittal; relieve humans of dull, dangerous or dirty tasks, like picking up brass after a range practice; or empower people to do things they could not do before, perhaps singlehandedly commanding an Armoured Fighting Vehicle Troop using a mix of inhabited and uninhabited ground vehicles.
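The reinforcement learning idea described above can be sketched in a few lines of code. This is a minimal illustration only, using the hypothetical scoring from the combat simulator example (10 points per enemy vehicle destroyed, minus 1000 points per collateral damage incident); the candidate courses of action and their outcomes are invented for the purpose of the sketch.

```python
# Minimal sketch of reward shaping: the human dictates what "success" looks
# like by defining the scoring, and a reward-maximising agent adopts the
# behaviour that scores highest. Figures match the simulator example above.

def reward(vehicles_destroyed: int, collateral_incidents: int) -> int:
    """Score an outcome: +10 per enemy vehicle, -1000 per collateral incident."""
    return 10 * vehicles_destroyed - 1000 * collateral_incidents

# Hypothetical outcomes of three candidate courses of action (invented data).
courses_of_action = {
    "saturation_strike": {"vehicles_destroyed": 8, "collateral_incidents": 2},
    "precision_strike":  {"vehicles_destroyed": 5, "collateral_incidents": 0},
    "hold_fire":         {"vehicles_destroyed": 0, "collateral_incidents": 0},
}

# A reward-maximising learner converges on the highest-scoring behaviour.
best = max(courses_of_action, key=lambda coa: reward(**courses_of_action[coa]))
print(best)  # → precision_strike
```

Note how the heavy collateral penalty makes the saturation option score worst (-1920) despite destroying the most vehicles; the human's choice of reward weights, not the machine, determines which behaviour is "best".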
Strategy - how and why would Army use AI and Autonomous Systems?
Once there is a general understanding of “what” AI and autonomous systems may include, the conditions are set to explore “why” and “how” they may be employed. In terms of strategy, this may involve determining the “ways” in which we might use our “means” or resources (such as the ADF, funding, AI or autonomous systems) to achieve our “ends”: deterring and defeating armed attacks on Australia. The first part of the question is “to what end?” A programmer could design an autonomous reconnaissance helicopter today, but without an end state or objective in mind, producing expensive and exquisite AI-enabled autonomous weapon systems may have the same result as Hitler’s reliance on Wunderwaffen in lieu of strategy. A possible strategic imperative for AI and autonomous systems would be as part of an Offset Strategy; that is, as a means of asymmetrically compensating for a disadvantage, particularly in a military competition. Even though there is an understandable swell of interest in semi-autonomous and autonomous tanks and autonomous ground combat vehicles, this new technology is not just about AI-enabled autonomous “killer robots”. The procurement of an autonomous tank might have high military value, but add little value to Australia’s other instruments of national power (diplomatic, informational and economic). If the strategic end state sought is to strengthen total national power, then it may make more sense to use Australia’s means (AI and autonomous systems) in ways other than combat systems which offset the disadvantage of a relatively small population. In this instance, driverless trucks could relieve or replace soldiers as drivers and free them up for combat duties; this technology would also have dual applicability, allowing Australian commercial logistics companies to produce more with a given workforce, thus building national power through military and economic development.
Diplomatic and informational effects could be generated by gifting these systems to regional partners to assist with development. AI could also have non-combat applications across all the warfighting domains, in roles such as cybersecurity, tactical simulations for training, and operational planning assistants. Already, many systems enabled by AI and autonomy are employed by militaries around the world; Lockheed Martin’s Aegis combat system, the Patriot missile system, and reactive armour are all able to engage targets with limited human intervention. Army, however, must beware of being so preoccupied with whether it can develop working autonomous systems that it fails to consider whether or not it should.
Ethics - should Army use AI and Autonomous Systems?
As well as discussing what constitutes AI and autonomous systems and how they are able to be used, Army will need to investigate what is legally allowable, and whether the technology ethically “should” be used for military purposes. Answering this requires an understanding of at least some of the ethical frameworks which could be used. One framework from which to judge whether we should use AI and autonomous systems in warfare is the Deontological ethical perspective, which in essence assesses actions against a set of rules or morals (such as the Ten Commandments) in order to judge them as right or wrong. The Campaign To Stop Killer Robots states that “allowing life or death decisions to be made by machines crosses a fundamental moral line.” In his book Wired for War, Peter Singer discusses how this line may have already been crossed; regardless, insisting that humans maintain control over machine decision making may be necessary for the accountability, explainability and transparency of that decision making. Another perspective from which to assess the potential military use of autonomous systems would be Utilitarianism, which essentially aims to achieve the greatest good for the greatest number. Under this framework, Army would be ethically obligated to use an AI-enabled autonomous weapons system if it was shown to reliably cause less collateral damage than weapons systems involving human decision makers. Further, the lack of emotion in machine decision making may reduce the incidence of war crimes. Just War Theory outlines a number of principles which should guide ethical conduct in war, or Jus in Bello. These include discrimination, to avoid targeting non-combatants; proportionality, in the assessment of collateral damage; and the prohibition on the use of means malum in se, or those which are universally deemed evil, such as mass rape, torture and biological agents.
Paul Scharre’s book Army of None discusses the possibility that machines could be programmed to adhere to these principles, and that their lack of emotion would remove any motivation to violate the rules. Arguments against the use of AI and autonomous systems on the basis of their potential use by malicious actors have tended to be Jus ad Bellum concerns, and could extend to the use of any weapon system for nefarious purposes.
The way ahead
There are a number of ways in which Army can begin to develop a broad interdisciplinary understanding of these AI and autonomy-related areas. Unit-level professional development programs and ad hoc professional development events such as the ASPI AI Masterclass provide good introductory discussion opportunities. However, these must be underpinned by structured education, such as Long Term Schooling serials and military curricula through either corps or non-corps training, to enable officers and warrant officers to intelligently lead or coordinate such activities. Understanding the technological, strategic and ethical aspects around the use of AI and autonomous systems within Army will allow for a more informed discussion about what the military might use, how it could use it, and whether it should be used at all.