Future Operating Environment

Enhancing Army’s Robotic and Autonomous System Strategy

By Kieran Galea June 3, 2019


In October 2018, the Army released its Robotic and Autonomous Systems (RAS) Strategy.[1] While this is an important step forward, the pace of technological advances means the Army needs to explore and develop complementary strategies for adversarial machine learning, combating enemy RAS, and improvised RAS capabilities.

Adversarial machine learning

Adversarial machine learning involves experimentally feeding inputs into an algorithm to reveal the information it has been trained on, or distorting inputs in a way that causes the system to misbehave.[2] By flooding an autonomous targeting system with thousands of inputs, it is possible to reverse-engineer its function. This allows an attacker to peer inside the “black box” of the neural network, providing an understanding that can then be used to determine how to defeat the system.
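As an illustration only, the following toy sketch shows how repeated queries against a black-box classifier can recover its hidden decision rule. This is not any fielded system; the function names, threshold, and values are entirely hypothetical, and a real neural network has a far more complex boundary, but the attacker's workflow (query, observe, estimate, evade) is the same in principle.

```python
import random

# Hypothetical "black box" targeting classifier: internally it flags
# inputs whose (hidden) feature score exceeds a threshold. The attacker
# can only observe inputs and outputs, never the threshold itself.
HIDDEN_THRESHOLD = 0.62

def black_box(x: float) -> int:
    """Victim system: returns 1 ('target') or 0 ('non-target')."""
    return 1 if x > HIDDEN_THRESHOLD else 0

def extract_threshold(queries: int = 10_000) -> float:
    """Estimate the hidden decision boundary purely from query responses."""
    random.seed(0)  # deterministic for the illustration
    positives, negatives = [], []
    for _ in range(queries):
        x = random.random()
        (positives if black_box(x) else negatives).append(x)
    # The boundary lies between the largest rejected input
    # and the smallest accepted input.
    return (max(negatives) + min(positives)) / 2

estimate = extract_threshold()
# The attacker now knows roughly where the decision flips, and can
# craft inputs that sit just below the boundary to evade detection.
assert black_box(estimate - 0.05) == 0
```

With enough queries the estimate converges tightly on the hidden threshold, which is why rate-limiting and monitoring of query patterns are common defensive measures for deployed models.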

To employ this technique an adversary requires access to the autonomous system. This access could be achieved through the capture of a friendly weapon or vehicle platform, or through a cyber-attack enabling extraction of the software and its supporting neural network.

Examples of this strategy can already be seen in commercial semi-autonomous systems. Keen Labs, a Chinese security research group, successfully deceived the Tesla Autopilot system by placing stickers on the road, causing the vehicle to move into the wrong lane.[3] This has limited impact in a human-in-the-loop system, where the driver can correct the error. However, the same methodology could be used to deceive autonomous weapon systems into failing to recognise appropriate targets, or into being redirected onto unsuitable ones.

A proof of concept for a more robust adversarial attack has been conducted at the University of Washington. A team of researchers proposed a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.[4] The algorithm was able to generate stickers that, when placed on a stop sign, caused a self-driving neural network to believe it was approaching a speed limit sign. While this capability cannot yet be used in a real-world situation, it demonstrates potential weaknesses in current and future autonomous systems.
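To make the underlying idea concrete, here is a minimal sketch of a gradient-sign perturbation against a toy linear classifier. This is not RP2 itself, and the classifier stands in for a neural network purely for illustration: for a linear model the gradient of the score with respect to the input is simply the weight vector, so the attack steps every input component against the sign of that gradient. All names and values are hypothetical.

```python
import numpy as np

# Toy linear classifier standing in for a neural network:
# label = +1 if w.x + b > 0, else -1.
rng = np.random.default_rng(1)
w = rng.normal(size=32)
b = 0.0

def classify(x):
    return 1 if float(w @ x) + b > 0 else -1

x = rng.normal(size=32)
if classify(x) == -1:
    x = -x  # start from an input labelled +1 ("stop sign")

# Gradient-sign perturbation: for this model the input gradient is w,
# so we nudge each component by epsilon against sign(w). Here epsilon
# is chosen as the smallest uniform step guaranteed to flip the label.
score = float(w @ x) + b
epsilon = (score + 0.1) / float(np.abs(w).sum())
x_adv = x - epsilon * np.sign(w)

assert classify(x) == 1 and classify(x_adv) == -1
```

The perturbation is small and spread across every component, which is why such changes can be imperceptible to a human observer while completely reversing the classifier's decision.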

The Army should therefore develop a strategy to mitigate adversarial machine learning. Such a strategy would benefit from additional studies, similar to the development of RP2, conducted by Australian universities in partnership with Defence.

Enemy RAS

Countries around the world are pushing the envelope in autonomy.[5] It can be expected that Australian soldiers will be engaged in direct combat against autonomous systems in the next conflict. Within current joint operations, we can expect to encounter autonomous sub-surface or aerial surveillance systems in the near future. It is therefore essential that the Army develops a strategy to defeat enemy robotic and autonomous systems.

This strategy may combine existing and enhanced deception techniques with the employment of our own adversarial machine learning methods, as described above. It will require inputs from the intelligence community and from surveillance and target acquisition subject matter experts, and should be reflected in the capability statements of vehicles and equipment under development.

We also need to consider whether employing our own autonomous systems is the only method of combatting enemy RAS; enhancing human abilities is an alternative pathway to increased decision-making efficiency. Augmentation can increase the capability of humans, giving us the ability to become faster, stronger, and smarter.[6] However, we would also need to understand the implications of enhancing humans to think and learn faster, as these will be essential capabilities if we are to augment humans sufficiently to compete with autonomous systems. The Army should consider options to develop a suitable strategy with appropriate ethical consideration. It is likely too early in the advent of this technology to discuss physical or chemical changes to the human brain; however, we can consider improvements to our training and education systems. If the most effective ways of improving our soldiers’ cognitive abilities can be determined, we would have a viable alternative to reliance on autonomous systems.

Improvised RAS capabilities

We are entering a threat environment where the technology to build lethal autonomous weapons is available not only to nation-states but also to individual non-state actors.[7] Cheap electronics kits (Raspberry Pi and Arduino), open-source machine learning tools and libraries (such as TensorFlow), and the proliferation of affordable drones have democratised technology to the point where the production of RAS is within reach of anyone with an internet connection and relatively few resources.

How can the Australian Army prepare to face these threats within the current operating environment? This question needs to be answered before events force an answer upon us. Social media has already been used as a state propaganda tool and to influence election outcomes; the Army needs to prepare for open-source machine learning technologies to be used in similar ways.

Improvised RAS capability also provides the Army with opportunities. These open-source tools have made bottom-up, lean innovation more accessible than ever before. How can the Army use this capability to innovate? Can we leverage the computer science and engineering programs at the Australian Defence Force Academy to develop corporate knowledge of robotics and automation? Answering these questions will enable the Army to unlock the full potential of the next technological revolution.

Conclusion

RAS technology presents both threats and opportunities. The current RAS Strategy outlines a vision for the Army to leverage this rapidly advancing field of technology in the generation of land power. Augmenting this strategy with guidance on adversarial machine learning, RAS threat defeat mechanisms, counter-improvised-RAS methodology, and concepts for training and education will further develop the Army’s intellectual capital. It will enable the Army to participate fully as a key component of a future joint force, enabled by RAS to be more lethal, precise, efficient and effective.

 

[1] Robotics and Autonomous Systems Strategy, Future Land Warfare, Oct 2018

[2] Knight, W. How malevolent machine learning could derail AI, MIT Technology Review 25 Mar 2019

[3] Brewster, T. Hackers Use Little Stickers To Trick Tesla Autopilot Into The Wrong Lane, Forbes, 1 Apr 2019.

[4] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D. Robust Physical-World Attacks on Deep Learning Models. arXiv preprint arXiv:1707.08945 (2017).

[5] Scharre, P. Army of None: Autonomous Weapons and the Future of War. New York, W.W. Norton & Company, 2019, p 102.

[6] Ryan, M. Human-Machine Teaming for Future Ground Forces. Center for Strategic and Budgetary Assessments, Apr 2018, p 25.

[7] Scharre, P. Army of None: Autonomous Weapons and the Future of War. New York, W.W. Norton & Company, 2019, p 134.



Biography

Kieran Galea

Kieran Galea is a Project Manager within LAND 53 PHASE 1BR. He has a keen interest in the technical and practical aspects of implementing Artificial Intelligence within Army.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Australian Army, the Department of Defence or the Australian Government.


