An analyst sits jacked into cyberspace, cables plugged into his temples, drifting in his mind’s eye through a four-dimensional mosaic of data. Bored, he finds a woman he’s been watching in his spare time, just as a hobby, and to keep his hand in. And possibly also because he is lonely. He looks at her purchase history and location data. Three times in the last week she has purchased rope. Today she’s bought razor blades, sleeping pills, and wine. Apart from her commute to work, she hasn’t left her apartment for months. The analyst marks her cluster of information as a significant node and notifies emergency services, as he assesses that she’s about to take her own life.
If this sounds like science fiction, it’s because it is. But somewhat more disturbingly, if elements of it sound plausible, or even familiar, it’s because they are. The idea of casually stalking a stranger online is now passé (though still very wrong and potentially illegal). And using marketplace and maps data to get a picture of not only their habits but also their likely future behaviour is now the core business of some of the world’s largest and richest companies. Take away some of the more fanciful literary elements and what we have is an eerily accurate picture of today’s information environment.
The author of this particular story is one William Gibson, a futurist cyberpunk science fiction writer who won many plaudits and prizes for his seminal work, Neuromancer, published in 1984. Gibson is often credited with defining the entire genre of cyberpunk. Many also credit him with predicting the internet, hacking, the rise of tech super-corporations, the information market, and a host of other contemporary staples which, for many authors of his period, were unthinkable at the time. Gibson’s seemingly uncanny knack for prediction raises the question of how he was able to be so accurate. Accurate predictions about the near future are important in a whole raft of fields, and especially in strategic and military planning.
There is a popular misconception that creatives conjure their ‘somethings’ out of nothing, but this is not only untrue, it is logically impossible. A productive creative process always involves keen observation of that which already is, and the synthesis of those observations with other, seemingly unrelated phenomena, as well as with the human experience. This is how Gibson, using imagination and observation, was able to extrapolate so much of our modern information landscape, whereas someone like Malthus, using mathematics, was unable to imagine a future in which humanity did not starve. Gibson’s achievement is attributable to the particular mode of thinking he used, which is not only exemplary, but also raises the issue of metacognition and its role in effective thought, analysis, ideation, and a raft of other cognitive activities.
Metacognition, or ‘thinking about thinking’, is a vital part of the cognitive process, enabling us to hold our own mental processes to account and, perhaps more importantly, opening us to alternative ways to process available data and information. When dealing with known quantities and deterministic outcomes, deductive reasoning and formal, systematic, and linear modes of cognition are not only beneficial, they are essential. When, however, the imponderable encroaches upon us, our bias towards deductive thought can become counterproductive. Bertrand Russell, one of the most important thinkers and mathematicians of the 20th century, waxed lyrical about the “curse” of our overvaluation and misuse of deductive, systematic reasoning[1], which he argues dates back to the mediaeval “fetishization” of Aristotle.
As we can see from the example of William Gibson’s uncannily accurate vision of the future as opposed to Malthus’ carefully worked out and completely wrong prediction, systematic logical thought does not always reign supreme. In many circumstances where what we are pondering is hyper-complex, partly unknowable, and above all, human, it’s important to practise metacognition in order to determine whether we are in the right cognitive mode. Are we attempting to think linearly in a mosaic problem space? Are we attempting to think deductively through a problem set which can yield nothing better than probabilities and which would yield more readily to induction? Are our a priori assumptions, like Malthus’[2], hopelessly wrong? And perhaps most importantly, are we being sufficiently imaginative in our thinking?
Dr Martha Whitesmith, Research Fellow at the Department of War Studies, King’s College London, has pointed out manifold problems with thinking which is inappropriately structured[3]. In particular she discusses analysis of competing hypotheses (ACH), a method of thinking which is much favoured by intelligence agencies the world over. ACH is almost purely deductive in both the literal and technical senses of that word, and begins with the extrapolation of a range of hypotheses. An evidence ranking system is then chosen or created and applied to each hypothesis in turn until either all have been disproved, at which point the analyst starts again, or one remains. This one remaining hypothesis is then considered to be true[4]. This raises a raft of problems, not least of which is what Dr Whitesmith refers to as “a false impression of rigour”[5], an issue which can lead to a form of myopia where analysts cease to see possibilities outside the framework of their hypothetical and evidentiary structures[6].
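The elimination loop at the heart of ACH, as described above, can be sketched in a few lines of Python. This is a toy illustration only; the hypotheses, the evidence items, and the inconsistency test are all invented here, and no agency’s actual tooling is implied.

```python
# Toy sketch of the ACH elimination loop: every hypothesis contradicted
# by any piece of evidence is struck out; ACH treats a sole survivor as
# 'true'. All names and data below are hypothetical.

def ach_eliminate(hypotheses, evidence, is_inconsistent):
    """Return the hypotheses that survive every piece of evidence."""
    survivors = list(hypotheses)
    for item in evidence:
        survivors = [h for h in survivors if not is_inconsistent(h, item)]
    return survivors

# Hypothetical example: which port will the fleet visit?
hypotheses = ["Port A", "Port B", "Port C"]
evidence = [
    {"rules_out": {"Port C"}},  # e.g. Port C is iced over
    {"rules_out": {"Port A"}},  # e.g. fuel contract signed elsewhere
]

def is_inconsistent(hypothesis, item):
    return hypothesis in item["rules_out"]

print(ach_eliminate(hypotheses, evidence, is_inconsistent))  # ['Port B']
```

Note how the sketch makes the myopia visible: once a hypothesis is struck out, no later evidence can revive it, and nothing outside the initial list can ever enter consideration.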
Which brings us back to the process of imagining. Imaginative thought is at least adjacent, if not identical, to subjective inductive thought processes. It requires the careful comparative measurement of belief; a comprehensive rather than a targeted collection strategy; and an end-state which yields not a single, clear ‘truth’ or course of action, but rather a collection of more or less probable conclusions – the ranking of which requires heuristic and experiential measures to be applied. This mode of thought, anathema to most structured thinkers, is the one which has proven most effective when it comes to large imponderables such as visualisation of the future, as we can see from the unusually prophetic works of Gibson and a few others. Given this, it seems necessary in our chaotic and complex times to put some imaginative thought towards developing this kind of thinking as an analytical capability.
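The contrast with the eliminative ACH loop can also be sketched in code. In the spirit of the subjective-logic adaptation mentioned in end note [6], the sketch below replaces elimination with a crude Bayesian-style update: each piece of evidence nudges a subjective weight on every hypothesis, and the analyst ends with a ranking rather than a verdict. The weights and likelihood values are invented for illustration and are not drawn from any published method.

```python
# Crude sketch of the 'ranked conclusions' alternative: evidence adjusts
# subjective weights instead of striking hypotheses out, so no possibility
# is ever fully discarded. All numbers below are hypothetical.

def rank_hypotheses(priors, likelihoods_per_evidence):
    """Multiply subjective likelihoods onto priors, normalise, and rank.

    priors: {hypothesis: prior weight}
    likelihoods_per_evidence: list of {hypothesis: likelihood} dicts;
    a missing entry means the evidence is neutral for that hypothesis.
    Returns (hypothesis, posterior) pairs, most probable first.
    """
    weights = dict(priors)
    for likelihoods in likelihoods_per_evidence:
        for h in weights:
            weights[h] *= likelihoods.get(h, 1.0)
    total = sum(weights.values())
    posteriors = {h: w / total for h, w in weights.items()}
    return sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)

priors = {"Port A": 1.0, "Port B": 1.0, "Port C": 1.0}
evidence = [
    {"Port C": 0.1},                 # icing makes C unlikely, not impossible
    {"Port A": 0.3, "Port B": 0.9},  # fuel contract favours B
]
for hypothesis, p in rank_hypotheses(priors, evidence):
    print(f"{hypothesis}: {p:.2f}")
```

The design point is the one the paragraph above makes: the output is a graded spread of belief across all hypotheses, whose final ordering still invites heuristic and experiential judgment, rather than a single survivor declared true.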
End Notes
[1] Bertrand Russell, History of Western Philosophy, ch. XIX–XXI
[2] Malthus is generally acknowledged to be a mathematical genius, making it perhaps unfortunate that his failed prediction is what he is mostly remembered for.
[3] Jane’s World of Intelligence Podcast, https://www.youtube.com/watch?v=IzSxofKBXf8
[4] Within a given value of ‘true’, depending on a variety of predicates and even personal beliefs.
[5] Ibid.
[6] Analysis of Competing Hypotheses Using Subjective Logic, Pope & Jøsang is a fascinating look at how to adapt inductive/abductive thought to ACH.
“William Gibson and ‘Meta Cognition’ - Australian Army Philosopher Conflates Imagination With Reflection, Rejects Deduction, Abduction and Intuition”
At the Chief of Army's history conference the week before last, one of the speakers drew attention to the amazing work that was done in intelligence/signals in WWII, as people were talent-spotted for their intuitive abilities. One was an expert in Greek vases - on the face of it an unusual choice, but crucial in developing our ability to crack codes.
In the sphere of higher education it has been interesting to see that as the US has moved away from liberal arts and towards STEM, there has been a concurrent movement in Asia towards the inclusion of liberal arts style components, as they have recognised that a purely 'scientific' approach is insufficient to create tomorrow's leaders. I've seen Chinese universities advertising positions for 'professor of imagination' and 'professor of creativity'!
One problem, however, with a move towards fostering imaginative thought, is that it is hard to measure whether one is doing it well, and it is hard to measure whether it does indeed produce people who are better equipped to creatively solve problems as 'future ready' soldiers. I think this will cause some to hesitate to embrace such 'subjective' modes of thought.