Warfare is consistently shaped by forces beyond the battlefield, creating complexities for military operations and challenges for the application of International Humanitarian Law (IHL), the body of law that regulates armed conflict.

As warfare enters its third revolution, the enhancement of Autonomous Weapon Systems (AWS) with machine learning capabilities and Artificial Intelligence (AI) could produce highly effective weapons to replace human combatants and conventional armaments. As such technology is implemented in warfare, the legal, moral and ethical considerations of its application in the battlespace present a considerable chicken-and-egg conundrum: should IHL adapt in anticipation of the development of AWS and AI technology, or must such technology comply with the pre-existing principles of IHL?

Adhering to International Humanitarian Law

As AWS and AI technology develops, it is critical to begin discussing how it might interfere with or change IHL. Military minds are at a crossroads as to whether IHL should adapt to this new approach to warfare or whether AWS and AI must comply with IHL as it stands. The four principles of IHL (distinction, humanity, proportionality and military necessity) are the measuring stick for determining the legality of new technologies in the battlespace.

Distinction

Whilst AWS and AI do offer the prospect of ‘bloodless’ battles through the exclusion of humans from the battlefield, many claim that AWS and AI are not yet capable of distinguishing between combatants and non-combatants. The Campaign to Stop Killer Robots argues that ‘as machines, they lack the inherent human characteristics that are necessary to make ethical choices’. Moreover, the absence of a clear definition of what inherently differentiates a combatant from a non-combatant leaves programmers without a workable standard to integrate into the technology. However, it may still be possible to utilise AWS in compliance with the principle of distinction: Kenneth Anderson and Matthew Waxman argue that AWS can be used against other mechanised weapons or in situations where non-combatant populations are scarce.

Humanity

The second of the four principles of IHL, humanity, prohibits the infliction of suffering, injury or destruction not necessary for achieving the legitimate purpose of a conflict. Under this principle, combatants are obligated to refrain from inflicting further suffering once military goals have been met. Arguably, this principle could be the easiest for AWS and AI to comply with. An investigation into the utilisation of drones noted a considerable drop in civilian deaths alongside increased safety for drone operators, seemingly removing one of the greatest deterrents to combat.

However, Goose and Wareham believe ‘AWS can also undermine human dignity as they could not comprehend or respect the value of life, yet are afforded the power to take it away’. Ozlem Ulgen agrees that the use of AWS and AI ‘will treat humans as disposable inanimate objects rather than with intrinsic value’. Therefore, those deploying AWS and AI in warfare may face considerable difficulty certifying that such systems’ acts are fundamentally humane and comply with the principle of humanity.

Proportionality

Under IHL, the principle of proportionality requires that the collateral damage of an attack not be excessive in relation to the anticipated military advantage. Collateral damage includes non-combatant casualties, so an attack that causes an excessive number of civilian casualties in pursuit of a military objective would be unlawful. As proportionality is fundamentally context-specific, however, it raises a problem for the legality of conflicts utilising AWS and AI: how can a weapon system be programmed to comply with this principle? No metric currently exists to objectively quantify excessive destruction, and programmers are uncertain whether AWS and AI can be pre-programmed with responses to every situation that arises in warfare. Soldiers are fallible in determining proportionality, given the subjective nature of each situation encountered in conflict; so too is a decision made by AI.
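
To see why the principle resists pre-programming, consider a minimal, purely hypothetical sketch in Python. Every name and quantity in it is an assumption introduced for illustration: IHL provides no numeric scale for military advantage, no common unit for civilian harm and no threshold for ‘excessive’, which are precisely the three inputs such a function would need.

```python
from dataclasses import dataclass

# Hypothetical sketch only: none of these quantities has an agreed
# definition, unit or estimation method under IHL.
@dataclass
class StrikeAssessment:
    expected_civilian_harm: float   # no accepted way to estimate or scale this
    anticipated_advantage: float    # context-specific; no numeric measure in IHL
    excess_threshold: float         # 'excessive' is undefined, so this is arbitrary

def is_proportionate(assessment: StrikeAssessment) -> bool:
    # Collapses a context-specific legal judgment into a single comparison,
    # which is exactly the reduction the principle does not permit.
    return (assessment.expected_civilian_harm
            <= assessment.excess_threshold * assessment.anticipated_advantage)
```

Whatever values were chosen for these inputs would embed one programmer’s subjective judgment of proportionality into every engagement, rather than the case-by-case assessment the principle demands.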

Military necessity

Finally, military necessity permits measures that are necessary to weaken the military capacity of the parties to a conflict with minimum expenditure of life or resources. Analysis of the use of force must therefore precede employment, and the force used must be shown to adhere fully to this principle. Like proportionality, adherence to military necessity requires a subjective judgment of a situation. For example, without sufficient human involvement AWS may find it difficult to determine, and are unlikely to be better than humans at determining, whether an enemy has become hors de combat. A discussion of military necessity is also pertinent because the use of AWS and AI could itself become a military necessity, as such systems could prove far superior to any other type of weapon. However, a country’s perceived ‘necessity’ to utilise AWS and AI does not meet the standard required to legally justify their unrestricted use; the country must still show that the proposed use of AWS and AI will accomplish its objectives with minimum expenditure of resources or detriment to life.

Conclusion

Ultimately, the use of AWS and AI in the battlespace presents a unique and complex ethical dilemma, exacerbated and complicated by how little we know about what such technology will become. We are only now starting to comprehend the implications that AWS and AI will have for warfare, let alone for civilian lives. Although it remains unclear whether AWS and AI can comply with IHL, the contrary has not been proven either; indeed, the uncertainty may suggest that the laws of war need to adapt to accommodate such technological advances. AWS and AI will be utilised for war, and rigorous standards and guidelines are therefore necessary to make the use of such technology ethical, though there can never be a guarantee that it will be.