Future Operating Environment
War machines: Can AI for war be ethical?
By Aaron Wright, June 29, 2021
Since the iconic opening of Terminator 2: Judgment Day, in which an army of machines controlled by an artificial intelligence (AI) called Skynet wages a war of extermination against mankind, the idea of robots as dangerous tools of conflict has lingered in the public consciousness. Modern battlefields are increasingly populated by artificial combatants, from the machine-assisted USAF Predator UAV that ruled the skies above Iraq and Afghanistan (Li et al., 2018) to the more modern MQ-9 Reaper drones that autonomously conduct search-and-destroy missions, relying on human operators only to authorise the final kill (Sharkey, 2008).
With real-life firepower catching up to the fictional threat, the US Department of Defense (DoD) announced a set of ethical guidelines on 24 February 2020 (US Dept of Defense, 2020) to assure the public that, as it continues to develop AI for battlefield use, that AI will be designed and deployed in an ethical manner. The guidelines are: Responsibility, Equitability, Traceability, Reliability and Governability. It is through the adoption of these standards that the DoD hopes to convince the public that the continued use of battlefield AI is ethical. Setting aside whether warfare itself is inherently unethical, let us consider how four schools of ethical thought might react to these battlefield machines, and whether the DoD's argument would convince them.
Deontologists are chiefly concerned with following the 'moral law'. Most widely recognised in the form developed by Immanuel Kant in the late 18th century (Burton et al., 2017), deontology demands obedience to a strict set of moral laws that dictate the righteousness inherent in an action, without regard for that action's consequences (Waller, 2011). A prime example is a Japanese samurai committing hara-kiri (ritual suicide) for his lord upon defeat because his code of honour dictates it to be ethical, without concern for the future consequences his death may inflict.
A deontologist's opinion of AI will be shaped by the moral law or duty of the culture and country in which they were raised. The American tech company Google, for example, took a deontological approach to battlefield AI in its reaction to Project Maven. When it was revealed that Google's Maven image-recognition software could be used for lethal battlefield applications, Google cancelled all ongoing projects with the DoD, stating that "Google should not be in the business of war" (Global News, 2018). Google went so far as to later disavow all lethal applications of its technology in its own statement of AI principles (Google, 2018). This approach rests on the assumption that AI in war is inherently unethical, and therefore any action taken in support of it is also unethical. It does not consider the consequences of such a decision, such as the DoD being forced to develop its own, perhaps inferior, battlefield AI that could result in greater collateral damage or overall loss of life (Sewell, 2018).
Followers of deontology are unlikely to be swayed or reassured by the DoD's ethical guidelines if their existing moral code has already classified the use of AI in war as unethical. Rather, deontologists will use the guidelines to determine whether the DoD's stance aligns with their pre-existing position, and act accordingly.
Utilitarians are the opposite side of the coin to deontologists, judging actions by their consequences rather than by the inherent ethics of the action itself. They are concerned with creating the greatest balance of good over evil (Frankena, 1963). In the film 'I, Robot', the AI character VIKI enacts a plan that restricts the freedoms of humanity and causes many immediate deaths because, by utilitarian logic, the end result would be a greater number of humans living in safety than under the current system (Grau, 2006). The 'bad' outcomes of people dying and freedoms being lost are outweighed here by the endgame of a safer, happier planet for the majority.
A utilitarian would weigh each option, in this case to use or not to use AI for war, by how much well-being its outcome contains (Bykvist, 2010). A utilitarian might determine that the precise targeting systems of fully autonomous AI can be assured to exercise lethal force only in very specific instances, with significantly less collateral damage than a human operator (Guarino, 2013), creating a net positive and therefore making the use of AI in war ethical. Conversely, they may determine that the risk of military commanders coming to consider wars 'risk-free', with AI systems removing the fear of friendly casualties (Sharkey, 2008), may result in a greater number of conflicts overall, erasing any benefit of more precise AI weaponry.
Utilitarians would be reassured by the DoD's guidelines of reliability, traceability and governability. These allow AI decision logic to be checked frequently, to verify that the AI is still acting in a way that creates the greatest balance of good. Governability provides the failsafe that if the AI ever upsets the utilitarian calculus by making incorrect determinations or operating in ways unforeseen by its designers, it can be deactivated until a more appropriate use is found. Furthermore, machine thinking is inherently utilitarian, weighing the probabilities of different outcomes based on existing data sets, and so fits well with a utilitarian world view (Burton et al., 2017).
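The probability-weighted reasoning described above can be sketched as a toy calculation. Every name and number here is hypothetical, invented purely for illustration; this is not a model of any real targeting system:

```python
def expected_utility(outcomes):
    """Score one action: sum of probability * well-being over its possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical outcome models for two candidate actions:
# lists of (probability, utility) pairs, in arbitrary well-being units.
strike = [(0.9, 10), (0.1, -50)]  # likely precise hit, small chance of collateral harm
hold = [(1.0, 0)]                 # take no action, no change in well-being

# The utilitarian choice is simply the action with the higher expected utility.
actions = {"strike": strike, "hold": hold}
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)
```

On these assumed numbers the calculus favours the strike (0.9·10 − 0.1·50 = 4 versus 0), but raise the collateral-harm probability to 0.3 and the same machinery favours holding fire. This sensitivity to the inputs is precisely why reliability and traceability, which let humans audit the numbers the machine is weighing, matter to a utilitarian.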
Virtue as a framework for ethics differs from considerations of 'duty' or calculations of consequence in its focus on good character (Neubert & Montanez, 2020). Where utilitarians define virtue as whatever yields good consequences, and deontologists as the fulfilment of duty or a moral code, virtue ethicists resist defining virtue in terms of some other concept, asserting instead that virtuous behaviour stems from undefinable, innate characteristics of human consciousness (Stanford Encyclopedia of Philosophy, 2016).
Virtue ethics' reliance on an innately human 'essence' raises the question of whether, as machines move from weak AI to stronger, more robust AI, they will approach the human consciousness required for virtue ethics or remain nothing more than a complex amalgamation of millions of utilitarian consequence comparisons (Guarino, 2013). Virtue ethicists may argue that machines, lacking true consciousness, will never be able to act ethically and as such should always be restricted to a subservient role (Li et al., 2018), simply assisting humans, who can determine whether the choices made are ethically virtuous. Machines, acting only on pre-programmed rules, should never operate on the battlefield autonomously: when a human makes the critical decision, that human can be held ethically accountable, while a robot cannot (Sharkey, 2008).
For example, in 2017 Google Translate, a machine-learning AI, associated female pronouns with being lazy and male pronouns with being hardworking when translating phrases from Turkish (Sonnad, 2017). It reached this association through its own analysis of data. A virtue ethicist would argue that a human would innately understand the ethical problems of this association, whereas the machine, simply following a set of algorithmic rules, cannot make that determination. Thus, by transferring battlefield agency to AI incapable of virtue ethics, humans become mere targets, data, objectives and objects, ultimately deprived of their dignity and worth (Gómez de Ágreda, 2020).
The DoD's ethical guidelines for battlefield AI would likely convince a virtue ethicist to a degree, but not completely. They would be especially encouraged by the focus on governability, which grants humans ultimate control; however, they would likely require further assurance that battlefield AI could not make operational choices, such as engaging targets or taking human life, without direct human approval before being entirely convinced. They would also express concern that, as human commanders become overwhelmed by competing battlefield priorities, AI, crunching the data, would ultimately steer human decision-making (Li et al., 2018), rather than the other way around.
Contract theory, popularised by Thomas Hobbes, postulates that no person is naturally so strong as to be free from fear of another, and no one so weak as to pose no threat. It is therefore logical to form networks of mutual obligations for ongoing survival: we surrender some freedoms to the 'state', which in return enforces rules to guarantee and protect every person's rights (Ethics Centre, 2016). These mutual agreements between individuals and collectives as to what is proper are known as the social contract.
Contract theorists will approach the ethics of battlefield AI from two directions. First, nation states, as part of their contract with citizens, have an obligation to do their best to defend those citizens' security and rights. In war, whoever has the best, most innovative technology has an advantage, and all major world powers are currently engaged in an AI arms race with one another (Williams, 2018). The state is thus required to maintain AI parity, lest it be defeated in the next major conflict and fail in its social contract. Second, contract theorists would argue that AI policy must align with the general desire of the populace. John Locke, a famous contract theorist, noted that if people did not agree with the significant decisions of a ruling government, they should be allowed to form a new social contract and create a new government (Ethics Centre, 2016). An example is the US involvement in the Vietnam War: once public support turned against the war, the government's social contract with its people obligated it to withdraw.
The DoD's ethical guidelines are well suited to appease a contract theorist. The focus on responsible and equitable design helps ensure that AI will be developed in a way that aligns with the current will of the populace and will be applied only when appropriate. If the populace feel it is no longer being used in such a manner, the AI's governability allows it to be shut down. A contract theorist would thus have no inherent ethical problem with the use of battlefield AI, so long as it is utilised in a way that aligns with the current desire of the citizenry. The DoD's focus on traceability and transparency in the operations of battlefield AI will also allow the citizenry to make an informed decision about its ongoing application.
The use of AI on the battlefield is a complex ethical question, further complicated by how little we ultimately understand of what AI will become. AI has been the subject of constant speculation, with outlandish claims made, and often disproven, over the last fifty years (Sharkey, 2008). We are only now beginning to understand the broad-reaching implications of AI in our civilian lives, let alone on the battlefield. Often the impact of a weapon of war can never be fully understood until the weapon itself is employed. I am confident the participants in the Manhattan Project felt their work was ethical at the time, but one wonders what their opinions were upon witnessing the devastation of Hiroshima and Nagasaki, or during the Cuban missile crisis.
Ultimately, whether one considers the use of AI in war to be ethical, and whether the DoD's new guidelines help make it so, will depend on one's inherent optimism towards the field of AI. One may take a utilitarian approach and count the lives saved by precise, calculated robot strikes conducted without the loss of human soldiers, or take a virtue ethics approach and lament killer robots erasing humans from existence based on how a sequence of numbers arranges itself inside their internal algorithms.
AI will be, and is being, used for war, in both active and supporting roles. Lieutenant General Jack Shanahan, responding to Google's withdrawal from Project Maven, said that DoD research into battlefield AI would be continuing regardless (Shanahan, 2019). As the use of AI on the battlefield seemingly cannot be prevented, observers of every philosophical stripe can agree that careful and rigorous standards, like those put forward by the DoD, are a required step towards making AI for war ethical, though not a guarantee that it will be.
Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., & Walsh, T. (2017). Ethical Considerations in Artificial Intelligence Courses. AI Magazine (Volume 38: Issue 2). https://doi.org/10.1609/aimag.v38i2.2731
Bykvist, K. (2010). Utilitarianism: A Guide for the Perplexed (1st ed.). Continuum.
Frankena, W. (1963). Ethics. Prentice-Hall.
Global News. (2018). What is Project Maven? The Pentagon AI project Google employees want out of. Global News. https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/
Gómez de Ágreda, A. (2020). Ethics of autonomous weapons systems and its applicability to any AI systems. Telecommunications Policy: Elsevier (Volume 44: Issue 6). https://ideas.repec.org/a/eee/telpol/v44y2020i6s0308596120300458.html
Google. (2018). Artificial Intelligence at Google: Our Principles. https://ai.google/principles/
Grau, C. (2006). There is no "I" in "Robot": Robots and Utilitarianism. IEEE Intelligent Systems (Volume 21: Issue 4). https://ieeexplore.ieee.org/document/1667954
Guarino, A. (2013). Autonomous Intelligent Agents in Cyber Offence. 2013 5th International Conference on Cyber Conflict. Tallinn, Estonia. https://ieeexplore.ieee.org/document/6568388
Li, S., Wang, Y., & Chen, Z. (2018). Artificial Intelligence and Unmanned Warfare. 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems. Nanjing, China. https://ieeexplore.ieee.org/document/8691248
Neubert, M. & Montanez, G. (2020). Virtue as a framework for the design and use of artificial intelligence. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0007681319301545
Sewell, S. (2018). Google was working on two ethically questionable projects. It quit the wrong one. The Washington Post. https://www.washingtonpost.com/opinions/google-was-working-on-two-ethically-questionable-projects-it-quit-the-wrong-one/2018/09/07/f95ee7b4-a639-11e8-97ce-cc9042272f07_story.html
Shanahan (Lt. Gen), J. (2019). Lt.Gen. Jack Shanahan Media Briefing on A.I.-Related Initiatives within the Department of Defense. U.S. Dept of Defense. https://www.defense.gov/Newsroom/Transcripts/Transcript/Article/1949362/lt-gen-jack-shanahan-media-briefing-on-ai-related-initiatives-within-the-depart/
Sharkey, N. (2008). Cassandra or False Prophet of Doom: AI Robots and War. IEEE Intelligent Systems (Volume 23: Issue 4). https://ieeexplore.ieee.org/document/4580539
Sonnad, N. (2017). Google Translate’s gender bias pairs “he” with “hardworking” and “she” with “lazy” and other examples. Quartz. https://qz.com/1141122/google-translates-gender-bias-pairs-he-with-hardworking-and-she-with-lazy-and-other-examples/
Stanford Encyclopedia of Philosophy. (2016). Virtue Ethics. https://plato.stanford.edu/entries/ethics-virtue/
The Ethics Centre. (2016). Ethics Explainer: Social Contract. https://ethics.org.au/ethics-explainer-social-contract/
United States Department of Defense. (2020). DOD Adopts Ethical Principles for Artificial Intelligence. Department of Defense. https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
Waller, B. (2011). Consider Ethics: Theory, Readings and Contemporary Issues (3rd ed.). Pearson.
Williams, L. (2018). Artificial Intelligence: REAL WAR. Engineering and Technology. https://eandt.theiet.org/content/articles/2018/11/arti%EF%AC%81cial-intelligence-real-war/