In November 2017, the Future of Life Institute released its Slaughterbots video, which featured swarms of miniature, autonomous drones killing U.S. senators on Capitol Hill and students on a university campus. Hyperbolic, scary, and depicting drones beyond today’s capabilities, the video is part of the Institute’s campaign to ban autonomous weapons systems. The Institute, which seeks a ban on all autonomous weapons, represents one end of the debate over these new technologies: it treats autonomous weapons as fundamentally new and different weapons.

In fact, only the general public’s awareness of autonomous weapons systems is new. As a Brookings report notes, autonomous weapons have existed and been in use in a variety of settings for decades. Admittedly, due to technical limitations, the application of autonomy has not been widespread but rather limited to specific operational problems. However, recent advances in task-specific or limited artificial intelligence mean that autonomy can now be applied to a very wide range of weapons. Therefore, it is important that we explore ways to exploit autonomy for national defense while addressing the ethical, legal, operational, strategic, and political issues it raises. Perhaps the question generating the most discussion is whether an autonomous weapon should be allowed to kill a human being.

Human Rights Watch notes that the level of autonomy granted to weapons systems can vary greatly and its categorization is worth quoting at length:

Robotic weapons, which are unmanned, are often divided into three categories based on the amount of human involvement in their actions:

Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;

Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and

Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.

The first, human-in-the-loop, provides the greatest level of human supervision of the decision to kill. The human is literally in the decision loop, and thus the weapons system cannot complete the kill cycle until the human takes positive action to authorize it. In a human-in-the-loop system, machines analyze the information and present it to the operator. That operator must then take time to evaluate the information provided and take action to authorize engagement. While this theoretically provides the tightest control, in practice humans will be too slow to keep up during time-critical engagements. Even if the operator simply accepts the machine’s recommendation, he or she will inevitably slow the response. In an environment of multiple inbound missiles, some traveling faster than the speed of sound, a human cannot process the information fast enough to defend the unit. The Navy recognized this fact 30 years ago when it developed the autonomous mode for the Aegis Combat System, as well as the Close-In Weapon System, to defend the fleet against missile attacks. But technological advances will soon expand the number of engagements that need to be conducted at machine speed.

In the second approach, human-on-the-loop, a human monitors the autonomous system and intervenes only when he or she determines the system is making a mistake. The advantage is that it allows the system to operate at the machine speed necessary to defend the unit while still attempting to provide human supervision. The obvious problem is the human will simply be too slow to analyze all of the system’s actions in a high-tempo engagement. Thus, the human will often be too late in attempts to intervene.
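
To make the timing argument concrete, consider a minimal, purely illustrative Python sketch. The decision times and missile numbers below are assumptions chosen for illustration, not measured values from any real system:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    track_id: str
    time_to_impact: float  # seconds until the inbound missile reaches the defended unit

# Illustrative assumptions: the machine commits a fire solution in well under a
# second, while a human needs several seconds to evaluate each recommendation.
MACHINE_DECISION_TIME = 0.5
HUMAN_DECISION_TIME = 8.0

def serviced(threats: list[Threat], seconds_per_decision: float) -> int:
    """Count how many inbound threats are engaged before they arrive when every
    engagement must wait its turn for a decision taking `seconds_per_decision`."""
    clock, engaged = 0.0, 0
    for threat in sorted(threats, key=lambda t: t.time_to_impact):
        clock += seconds_per_decision
        if clock <= threat.time_to_impact:
            engaged += 1
    return engaged

if __name__ == "__main__":
    salvo = [Threat(f"M{i}", 20.0) for i in range(1, 7)]  # six missiles, 20 seconds out
    print("human-in-the-loop engages:", serviced(salvo, HUMAN_DECISION_TIME))    # 2 of 6
    print("machine-speed mode engages:", serviced(salvo, MACHINE_DECISION_TIME)) # 6 of 6
```

Human-on-the-loop lets the system run at the machine’s pace, but an operator who needs several seconds to assess each track can, at best, veto a decision after the weapon is already away.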

The third, human-out-of-the-loop, is frankly a poor definition. Until artificial intelligence gains the ability to design, build, program, and position weapons, humans will both provide input to and interact with autonomous systems. At a minimum, humans will set the initial conditions defining the actions of the weapon after its activation. Even something as simple as a land mine requires human input: a human designs the system to detonate under particular conditions, and other humans select where it will be planted to kill the desired targets. Thus, contrary to Human Rights Watch’s definition, the fact that a weapon does not have a human on or in the loop does not mean it requires no human input.

Rather than 'human-out-of-the-loop,' this third category is really 'human-starts-the-loop.' Autonomous systems do not deliver force 'without any human input or interaction.' In fact, autonomous weapons require humans to set engagement parameters, in the form of algorithms programmed into the system, before employment. And they will not function until a human activates them, or 'starts the loop.' Thus, even fully 'autonomous' systems include a great deal of human input; it is just provided before the weapon is employed. This is the approach actually in use today for smart sea mines, the Patriot missile system, the Aegis Combat System in its Auto-Special (autonomous) mode, advanced torpedoes, Harpy drones, and the Close-In Weapon System. In fact, for decades a variety of nations have owned and used systems that operate on the concept of 'human-starts-the-loop.'
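
As a rough illustration of what 'setting engagement parameters before employment' could look like in software, consider the Python sketch below. The field names and values are invented for illustration; real systems encode such limits in classified doctrine and code, not in a few lines like these:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EngagementParameters:
    """Human-set bounds loaded before the weapon is activated (hypothetical fields)."""
    area: tuple[float, float, float, float]   # lat_min, lat_max, lon_min, lon_max
    permitted_classes: frozenset[str]         # e.g. {"surface_combatant"}
    min_confidence: float                     # required classifier confidence
    active_hours: float                       # self-deactivate after this many hours

@dataclass
class Contact:
    lat: float
    lon: float
    predicted_class: str
    confidence: float
    hours_since_activation: float

def within_parameters(p: EngagementParameters, c: Contact) -> bool:
    """The weapon's only 'decision' is whether the human-set conditions are all met."""
    lat_min, lat_max, lon_min, lon_max = p.area
    return (lat_min <= c.lat <= lat_max
            and lon_min <= c.lon <= lon_max
            and c.predicted_class in p.permitted_classes
            and c.confidence >= p.min_confidence
            and c.hours_since_activation <= p.active_hours)

# A human "starts the loop" by choosing these notional values and activating the weapon.
params = EngagementParameters(area=(26.0, 27.0, 51.0, 52.0),
                              permitted_classes=frozenset({"surface_combatant"}),
                              min_confidence=0.9,
                              active_hours=72.0)
```

The specific fields are hypothetical; the point is that every bound the weapon later enforces is chosen, reviewed, and loaded by humans before activation.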

Unfortunately, much of the discussion today focuses on the first two approaches, human-in-the-loop and human-on-the-loop. We already know that neither really works in time-critical engagements. If we limit the discussion to these two approaches, we have to either accept the risk of a person responding too slowly to protect his or her own forces or accept the risk that the system will get ahead of the human and attack a target it shouldn’t. The literature on these two approaches, from books to research papers to articles, is deep and rich, but it does not thoroughly examine how humans will deal with the increasing requirement to operate at machine speed. Rather than trying to manage this new reality with the fundamental flaws of the first two approaches, human-starts-the-loop accepts the reality that in modern, time-critical combat humans simply cannot keep up. Of course, human-starts-the-loop is necessary only for such time-critical operations. For operations where speed of decision is not a key element, like today’s drone strikes, the only acceptable approach remains human-in-the-loop. In these situations, operators have minutes to hours to decide whether to fire, more than sufficient time for a human to make the decision. Similarly, human-on-the-loop will continue to be used in some situations.

That said, since humans are too slow to effectively employ either human-in-the-loop or human-on-the-loop in time-critical engagements, it is more productive to accept reality and focus our research, debates, and experimentation on how to thoughtfully implement human-starts-the-loop for time-critical engagements where humans simply can’t keep up.

Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979, when it fielded the Captor anti-submarine mine, an encapsulated torpedo moored to the seabed that launched when onboard sensors confirmed a designated target was in range. Today, the United States holds a significant inventory of smart sea mines in the form of Quickstrike mines, Mk 80-series bombs equipped with a Target Detection Device. It also operates torpedoes that become autonomous when the operator cuts the wire. At least six nations operate the Israeli-developed Harpy, a fully autonomous drone that is programmed before launch to fly to a specified area and then hunt for specified targets using electromagnetic sensors. The follow-on system, the Harop, adds visual and infrared sensors as well as an option for human-in-the-loop control. Given that the Skydio R1 commercial drone ($2,499) uses onboard cameras to autonomously recognize and follow humans while avoiding obstacles like trees, buildings, and other people, it is prudent to assume a Harop could be programmed to use its visual and infrared sensors to identify targets.

And of course victim-initiated mines (the kind you step on or run into), both land and sea, have been around for well over 100 years. These mines are essentially autonomous: they are unattended weapons that kill humans without another human making that decision. But even these primitive weapons are really human-starts-the-loop weapons. A human designed the detonator to require a certain amount of weight to activate the mine. A human selects where to place them based on an estimation of the likelihood they will kill or maim the right humans. But once they are in place, they are fully autonomous. Thus, much like current autonomous weapons, a human sets the initial conditions and then allows the weapon to function automatically. The key difference between a traditional automatic mine and a smart, autonomous mine, like the Quickstrike with a Target Detection Device, is that the smart mine attempts to discriminate between combatants and non-combatants. Dumb mines don’t care. Thus, smart mines are inherently less likely to hurt non-combatants than older mines.

Fortunately, the discussion today is starting to shift to how 'human-starts-the-loop' can minimize both types of risk. The United States is already expanding its inventory of smart sea mines. They comply with current DoD policy, which states that 'autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.'

By adopting human-starts-the-loop, we can both deal with the operational problem of human limitations and ensure autonomous weapons meet legal and ethical standards. The key element in the success of an autonomous system is setting the parameters for the system. These can range from the simple step of setting the trigger weight for an anti-tank mine high enough that only heavy vehicles will set it off to the sophisticated programming of a smart sea mine to select a target based on the acoustic, magnetic, and pressure signatures unique to a certain type of target. Today’s task-specific or limited artificial intelligence is already capable of making much finer distinctions and cross checking a higher number of variables.
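
A notional sketch of that kind of multi-signature discrimination might look like the following Python fragment. The field names and thresholds are invented for illustration; the actual logic of a Target Detection Device is classified:

```python
from dataclasses import dataclass

@dataclass
class SensedSignature:
    acoustic_band_hz: float     # dominant tonal picked up by the hydrophone
    magnetic_anomaly_nt: float  # magnetometer reading, in nanotesla
    pressure_delta_pa: float    # pressure change from the passing hull

@dataclass
class TargetProfile:
    """Human-programmed description of the one target class the mine may attack."""
    acoustic_range: tuple[float, float]
    min_magnetic_anomaly: float
    min_pressure_delta: float

def matches_profile(sig: SensedSignature, profile: TargetProfile) -> bool:
    """Detonate only if all three independent signatures agree with the programmed profile."""
    lo, hi = profile.acoustic_range
    return (lo <= sig.acoustic_band_hz <= hi
            and sig.magnetic_anomaly_nt >= profile.min_magnetic_anomaly
            and sig.pressure_delta_pa >= profile.min_pressure_delta)
```

Requiring all three signatures to agree is exactly the kind of human-chosen parameter that makes a smart mine more discriminating than a simple pressure plate.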

Commercial firms are already deploying autonomous air taxis and ground vehicles built around ever more capable, precise, and cheaper sensors, and those sensors have obvious applications in improving the hunting capability of autonomous drones. Aerialtronics has put on sale a new AI-driven camera that measures 4 inches by 4 inches by 3 inches, weighs only 1.5 pounds, and pairs a 30x-magnification HD camera with an integrated forward-looking infrared camera; it can fuse the two images to provide better target identification. In March 2018, researchers announced they had developed a 3D-printed hyperspectral imager light enough to mount on a small drone, for only $700. Hyperspectral imagery can be used to characterize the objects in a scene with great precision and detail. Google has released its MobileNets family of lightweight computer vision models that can identify objects, faces, and landmarks. Each of these technologies can be applied to improve autonomous targeting.
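
To illustrate how accessible such models are, the short Python sketch below classifies a single image with Google’s publicly available, ImageNet-pretrained MobileNetV2. It assumes TensorFlow is installed, downloads the pretrained weights on first use, and uses a hypothetical file name; it recognizes everyday ImageNet categories, not military targets, but it shows how little code and compute lightweight computer vision now requires:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing.image import img_to_array, load_img

# ImageNet-pretrained MobileNetV2: a few million parameters, small enough to run
# on embedded or airborne hardware.
model = MobileNetV2(weights="imagenet")

def classify(image_path: str, top: int = 3):
    """Return the top ImageNet labels and confidences for one image."""
    frame = load_img(image_path, target_size=(224, 224))
    batch = preprocess_input(np.expand_dims(img_to_array(frame), axis=0))
    predictions = model.predict(batch)
    return decode_predictions(predictions, top=top)[0]  # [(class_id, label, score), ...]

if __name__ == "__main__":
    for _, label, score in classify("frame_0001.jpg"):  # hypothetical captured frame
        print(f"{label}: {score:.2f}")
```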

It is too late to argue about whether weapons should be autonomous. Further, rapid technological advances mean their widespread employment on the battlefield is inevitable. With some exceptions in the area of weapons of mass destruction, if weapons are practical, affordable, and advance a nation’s interest, they are adopted. Even a papal bull couldn’t stop the spread of the crossbow.

If we really want to fulfill our ethical responsibilities concerning these weapons, we must expand the discussion beyond on-the-loop and in-the-loop. Rather than continuing to debate the merits of those approaches, which do not effectively deal with time-critical engagements, it is essential that we focus on establishing procedures and parameters that maximize the probability that autonomous systems will act in accordance with moral and legal constraints as well as the user’s intent. These weapons are already present and proliferating rapidly.

Each conflict will present a unique set of terrain, weather, opponents, political conditions, rules of engagement, and strategic objectives. Therefore, the guidance for each must be carefully considered and tested in simulations and exercises. For instance, the threat to naval forces in the Persian Gulf is vastly different from the threat in a conflict with China. Thus, an Aegis-equipped ship operating in the confines of the Persian Gulf will set different air and sea engagement parameters than a ship operating far out in the Second Island Chain against a potential Chinese threat. Similarly, the guidance for that same ship will be different if it operates closer to China’s shore. As engagement times decrease, autonomous defensive systems will become more important.
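
In software terms, that could be as simple as a set of pre-approved parameter profiles, one per theater, selected and briefed before deployment. The profiles below are notional, with invented names and values, and are meant only to show the shape of such guidance:

```python
# Notional, invented values: the same ship would load a different pre-approved
# parameter set depending on the theater and the expected threat.
ENGAGEMENT_PROFILES = {
    "persian_gulf_littoral": {
        "auto_engage_air": False,        # crowded airspace: keep a human in the loop
        "max_engagement_range_nm": 30,
        "min_track_confidence": 0.95,
        "emitter_library": "gulf_v3",    # hypothetical threat-emitter library
    },
    "second_island_chain": {
        "auto_engage_air": True,         # saturation missile salvos expected
        "max_engagement_range_nm": 120,
        "min_track_confidence": 0.85,
        "emitter_library": "westpac_v7",
    },
}

def load_profile(theater: str) -> dict:
    """Crews would select, review, and brief one profile before activation."""
    return ENGAGEMENT_PROFILES[theater]
```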

Nor should the discussion be limited to defensive weapons. Offensive systems under development will have capabilities far beyond the decades-old Harpy. Because these systems will operate in communications-denied environments, careful consideration must be given to the minimum sensor correlations necessary to confirm the target before the weapon decides to attack, and the parameters for different types of targets in different situations will have to be thought through. While some will argue that we should not allow autonomous offensive weapons, the increasing range and capability of new weapons make it impossible to define a weapon as purely offensive. For instance, if an anti-air or anti-ship missile is considered defensive when it engages a target 100 miles away, why is it offensive when a smart, autonomous weapon kills the same target at its home airfield or in port?
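
One way to express a 'minimum sensor correlation' rule is to require that several independent sensors agree on the target class, each above a confidence floor, before the weapon may commit. The Python sketch below is notional; the sensor names, thresholds, and class labels are assumptions, not the logic of any fielded system:

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor: str          # e.g. "radar", "infrared", "electro_optical", "emitter"
    declared_class: str  # what this sensor thinks the object is
    confidence: float    # that sensor's confidence in its declaration

def target_confirmed(reports: list[SensorReport],
                     expected_class: str,
                     min_independent_sensors: int = 2,
                     min_confidence: float = 0.9) -> bool:
    """Allow an attack only if at least N independent sensors agree on the expected
    target class, each at or above the required confidence."""
    agreeing = {r.sensor for r in reports
                if r.declared_class == expected_class and r.confidence >= min_confidence}
    return len(agreeing) >= min_independent_sensors
```

Raising the number of required sensors or the confidence floor is precisely the kind of human-set parameter that would differ between, say, a ship at sea and a target near civilian infrastructure.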

Aegis crews already go through the process of establishing parameters before they deploy. The process they use can serve as an initial template for developing guidance for each autonomous weapons system as it is fielded. And like Aegis guidance, that guidance must be updated regularly based on experience and new system capabilities. But it needs to extend beyond that. Even in its automatic mode, the Aegis system is monitored by the crew, who will intervene if they perceive it is malfunctioning. Yet they know they may not be fast enough to interrupt the kill chain and thus work very hard at getting the processes and programs right before they activate the system. We have to assume tomorrow’s autonomous weapons will often operate under conditions that prohibit human oversight after launch.

Just as important as getting the weapon’s AI code correct is training our operators on the decision process for putting the weapon in an autonomous mode. What criteria allow an operator to shift the system to fully autonomous operation? When does he or she either change the autonomy guidance provided to the weapon or take it out of autonomous mode? What are the key indicators that the tactical situation is changing and may require a change in the concept of employment? How do these weapons change the commander’s responsibility for the actions they take after launch?

The fact is autonomous weapons are being fielded around the world. We are no longer dealing with a theoretical question. Rather than continuing to debate whether autonomous systems will be employed or what level of human supervision they will receive after launch, we need to focus our intellectual energies on how we will refine the guidance provided to operators and systems prior to launch. Only through careful study, experimentation, and testing will we have reasonable confidence that, once launched, our autonomous systems will engage within the ethical, legal, operational, strategic, and political parameters we desire. It’s time to get on with it.