Metacognitive Dragons: An Argument in Favour of Human-Centric Information Operations

By Christopher Lake, July 21, 2020
Deep-coded into human consciousness are stories of dragons, monsters and magic swords. The way it usually goes is that some new enemy, mythical and unknowable, begins a reign of terror until some kind of special being, equipped with some sort of magic weapon, slays it.
This can be understood in several ways, but the most relevant for our purposes is that we as humans will always, in the first instance, couch a new or emergent threat in the semiotics of myth. Take a look at any typical cache of stock images of hackers, for example, and a startling consistency in visual language reveals itself. In almost every case, the hacker is represented as either a cartoonish exaggeration of a thief, or as some kind of dark wizard, complete with hood and glowing magical weapon – usually some hyperbolically bedazzled, indeterminate computing “device”.
It’s worth paying attention to these unrealities in the way we represent the world and especially those parts of the world which threaten us. Not for some obscure philosophical end (worthy as this author believes them to be), but as a forensic metacognitive exercise to assess just how valid our current epistemologies actually are and therefore how likely they are to enable future capability. Or, in other words, it’s good to sanity check our thinking and determine whether there’s any chance of it leading us to concrete, productive action.
The broad trend in responses to the information domain of warfare, especially since the massive and ongoing disruption represented by the development of internet technology, has been a more or less classic example of this human tendency to frame the unknown in mythos. In this case, the ‘dragon’ has been the complex synergy of discrete layers and elements of the information threat, and the magic sword has been technology, both hardware- and software-based. In many fields, both civilian and military, the uniform and somewhat disappointing tendency is to baulk at the cognitive requirements of a detailed response and instead rely on automated systems, bots, and other technological tools. The predictable result has, of course, been an arms race between artificial and real intelligence; one which real intelligence is realistically set to keep winning for quite some time.
The idea we need to socialise, especially within the ADF, is that there is no single, mono-definable threat represented by current and emergent information capabilities. The information space is far better characterised by ‘mosaic’ or ‘cloud’ metaphors, and even these are somewhat awkward and will, I’m confident, be superseded in the near future when our digital ontologies improve. This is a domain defined by its lack of stable or singular morphology, rather like the vague threats symbolised by our mythical monster. As a result, any attempt to model a clean, single threat picture always produces some kind of implausible chimera. Some of these chimeras are currently in active employment, not just in public understanding (or misunderstanding), but also in industry and government.
This is the point – reflexive mythopoeism in human thought has led many of us down research and development paths which necessarily lead nowhere. Understanding information warfare, and especially influence, disinformation and disruption, as a numinous collection of ‘dark arts’ impervious to standard military measures of effect or impact has produced a dearth of practical answers. This is natural enough, given that the process so often commences with an unanswerable problem set.
Some excellent work is being done here and elsewhere to reverse this trend and bring the ecosystem of ‘soft’ or ‘sharp power’ information threats firmly into cold-eyed reality. Dr Gary Buck, in a contribution to the NATO Strategic Communications Journal, outlined a plan to refine measurement of effect in the information space by creating a system in alignment with the OODA loop. Dr James Giordano at Georgetown University Bioethics speaks elegantly and persuasively about properly defining problems and threats within emergent and ethically dubious spaces. And there is, of course, the emergent Information Operations and Information Warfare capability here in Australia.
Given that some of our own approach to information threats in general is still in its formative stages, there exists an opportunity to avoid the pitfalls of mythification and to recognise from the outset that no magical set of tools or bots is going to match the central thinking engine of warfare – namely, the soldier.
This recognition is why the Australian Army is developing a training methodology which emphasises the skills and behaviours of humans – a human-centric doctrine of Information Domain Operations. Since the most valuable real estate to be won and lost in so much of this domain actually exists within the human psyche, it seems the most rational approach is to focus on the human operators when training to meet these threats.
This will require the creation of a training concept, targeted at the human and grounded in the ways humans interact and respond, especially within the framework of the human as warrior, law enforcement officer or intelligence operative. This strikes Army as being the most rational approach for two reasons. The first is that this is the exact vector via which threats and effects travel in this space. The second, and no less important, reason is the unavoidable future nexus of human and machine.
The Training Adversary System Support Cell (TASSC), more commonly known as the DATE Team, is working on exactly that. By synthesising the Decisive Action Training Environment (DATE) with the Information Operations Network (ION), we are working with various domestic and international partners and academics to create the ION Human Network. This will be a scalable, re-deployable network of thousands of human actors, complete with detailed, realistic and richly narrative network and real-world behaviours. Thanks to technology, we are able to generate much of this group, but it’s the ‘manual labour’ of hand-crafting the core human network which makes it capable of supporting better detection and open source intelligence (OSINT) training. The logic is that this will eventually produce better information operations/information warfare (IO/IW) and OSINT doctrine. Because if we don’t make significant efforts to change the way we train and act in this space across the largest segments of Army, we will have lost the next contest before we even begin to fight.
The digital/informational universe must not exist in parallel or outside of the training environment, but as a closely woven element of it. Traditional narrativisation methodologies should be used in non-traditional ways to create a rich and meaningful, live and constructive training space. Just as in games, social networks and life as it is experienced on the ground, the centre of every digitally enabled reality is the human user. And given this, no number of robots or automatic generative entities, however clever or cleverly constructed, can or should replace the human element. Machines, after all, are not yet able to slay our metacognitive dragons.