This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original author and source are credited.

Introduction

Technology is commonly seen as the servant of war. However, interrogating the assumptions we make in the everyday practice of using that technology transforms it into a more ambiguous figure, with more agency than we believe. What the defence community decides are the right and wrong kinds of data for understanding the battlefield leads technology to be mobilised in ways that arguably undermine effective strategy.

At a simple level, war should be thought of as a tool used by state and non-state actors to a political end. The military philosopher Carl von Clausewitz made this point almost two centuries ago, and it remains an axiom taught to military leaders. Returning to the present day, the ways in which the ‘West’ – meaning NATO and its allies – use data arguably undermine this purpose.

Warfare as we practise it leans heavily on a rapid decision-making cycle, enabled by advanced information technology that generates digital representations of the battlefield from which targets can be extracted. However, this practice is founded on beliefs about technology and about the nature of knowledge itself which are left largely unchallenged. These beliefs have drawn us down an expensive and potentially dangerous tactical rabbit-hole; worse, they prevent us from challenging implicit assumptions about war and its purpose.

In this article I will have to drag you, the reader, into philosophical territory, but I hope that by doing so I can explain why our current understanding of technology fails us and is leading the practice of warfare into a dead end.

The positivist paradigm and its implications for our understanding of conflict

Technologies of all kinds mobilised for war are understood through terms such as 'enabler' or 'force multiplier': labels implying factors that materially improve the performance of fighting forces on the battlefield, regardless of setting. For over a century, the West has explicitly written its strategic defence doctrines and concepts around technology that is perceived to create detailed knowledge of the battlefield and its contents. Collective adoption of incrementally advancing technology is linked to improved accuracy and lethality, and to the efficiency and speed with which forces can be brought to bear in battle.

However, this institutional determinist attitude toward technology – the normative belief that technology is a principal factor in battlefield success or failure – has hidden consequences. Acknowledging how determinism obscures the consequences of our technological choices allows a more honest appraisal of the utility of technology in battle. Before exploring how determinism frames a collective understanding of the effects of technology while hiding its social component, it is first necessary to address how we collectively understand knowledge as a phenomenon.

Positivism is a branch of epistemology – the study of the form of knowledge itself – which privileges knowledge derived from sensory experience and logic at the expense of other forms. It is founded on Enlightenment rationality and the belief that objective knowledge of the world is possible. Positivism is the principal lens through which Western military forces understand knowledge.

Digital representations are created in software principally through observation, drawing on data from sources including surveillance platforms, signals intelligence and open-source intelligence. This is not the totality of knowledge that can exist about war and battle, but our institutional positivist preferences mean we instinctively exclude some data and information, and prioritise some ways of understanding over others. Some forms of knowledge about the battlefield are therefore not possible. I will argue that these two underlying assumptions – about technology and about knowledge – become problematic where tactical operations connect to wider questions of strategy.

The filters we place on how knowledge is understood shape what data is used and what is not, and how it is processed, and so constrain understanding of the battlefield. Complex unstructured data, such as sociocultural context, is thereby excluded from our decision support tools, including those used for targeting. Warfare is conducted on the basis of digital representations created by our software tools and shaped by these unacknowledged filters. The result is arguably a reductive analysis, because of everything excluded by the highly structured data on which we end up relying.
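
To make this concrete, the sketch below imagines, in Python, a simplified ingestion step of the kind a decision support pipeline might contain. Everything in it – the schema, the field names, the report – is hypothetical and invented purely for illustration; it is not drawn from any real system. The point is that a rigid, positivist schema silently discards exactly the unstructured sociocultural context described above, before any downstream analysis can see it.

```python
from dataclasses import dataclass

# Hypothetical, illustrative schema: only quantifiable, structured
# fields survive ingestion. All names here are invented for this sketch.
@dataclass
class TrackRecord:
    track_id: str
    latitude: float
    longitude: float
    velocity_mps: float
    classification: str  # e.g. 'vehicle', 'person', 'unknown'

STRUCTURED_FIELDS = {"track_id", "latitude", "longitude",
                     "velocity_mps", "classification"}

def ingest(raw_report: dict) -> TrackRecord:
    """Keep only the fields the schema can represent.

    Anything unstructured -- an analyst's narrative note, local
    sociocultural context, why a group might be moving -- is dropped
    here, before any downstream analysis ever sees it.
    """
    structured = {k: v for k, v in raw_report.items()
                  if k in STRUCTURED_FIELDS}
    return TrackRecord(**structured)

report = {
    "track_id": "T-0042",
    "latitude": 34.52,
    "longitude": 69.18,
    "velocity_mps": 12.4,
    "classification": "vehicle",
    # The filter never asks about this field; it simply vanishes.
    "analyst_note": "Convoy coincides with a local festival; movement "
                    "may be civilian rather than hostile.",
}

record = ingest(report)
print(record)  # No trace of the analyst's note remains.
```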

Put simply, positivism seeks rational and objective structure in a domain which is inherently subjective and founded on human experience. Epistemological preferences – the ways we understand and construct knowledge – are therefore more important than is commonly acknowledged. Hard-to-quantify knowledge that supports intuitive decision-making (such as why people fight, and persist in fighting, a war) is excluded from the algorithms at the heart of decision support tools because it is not directly compatible with the epistemological foundation of the technology.

In environments where these tools sit at the heart of processes resulting in violence on battlefields this matters, and the potential for harm is demonstrably real. This is also why philosophy matters: the use of technology is as much a question of epistemology as of technical development. The tools we use to construct digital representations in battle arguably provide only narrow situational awareness, based on epistemological preferences for certain data types rather than the totality of possible battlefield knowledge. The digital representation of the enemy constructed under this paradigm offers a limited understanding of what the tactical enemy is doing in the moment, rather than why and how the moment arrived. Strategic analysis rarely lends itself to quantification, and so epistemological and technological preferences keep war focused on the tactical battle.

How narrow situational awareness limits our strategic understanding

All of this points to a future where technical processes reliant on narrow situational awareness move the practice of war ever further from a wider strategic purpose, towards the management of individual tactical engagements. Narrow situational awareness enforces a reductive attitude which rewards tactical success over strategic progress, reinforced by the foregrounding of technology driven by our determinist preferences. The tools which support digital representations of battle do not produce forward-thinking, intuitive and strategic understanding of the purpose of war; they convey only that war is happening and must be fought.

Thus, uncritical consumption of positivist analysis, built on a foundation of a determinist relationship with technology, presents a very real danger of war being forever treated as a servant to process. This is exactly because the purpose of decision support software and its algorithms, including those used for targeting, is constantly to reconstruct a digital representation of the battlefield. The combination of tools, data and epistemological attitudes rewards the management and delivery of tactical process, which then tacitly becomes the principal purpose of war. In other words, the ever-increasing technological focus on supporting the tactical battle draws attention and resources inexorably toward it, and away from questions of strategy, in ways that are hard for us to collectively resist.

As discussed previously, a determinist view of technology also limits how this problem, if it exists, can be understood. Systemic, process-based, datafied warfare is widely understood as the naturally evolved apex form of warfare. It has been at the core of military thought amongst NATO allies for decades, but it is difficult for those involved to acknowledge its limitations.

This is precisely because common understanding of the system of which they are part is shaped by founding assumptions so fundamental that they are not seen, let alone addressed. Institutional understandings of technology and epistemology dictate what problems can be known, because they shape how technology is understood and what knowledge is deemed right and wrong.

Frameworks of epistemology shape what measures of success and failure look like, how the results of technology are judged, and therefore how successful the digital representations produced by software are understood to be. For example, in contemporary conflicts the destruction of a target is often confirmed through observation, but our epistemological frameworks mean its wider consequences are left unexplored, because the social cost is judged unknowable. Interrogating the epistemological consequences of datafied warfare would likely highlight how tactical efficiency is undermined by the social and economic costs of strategic failure.
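
A toy example may help. The metric below is hypothetical and not drawn from any real doctrine or system: it shows how a measure of effectiveness built only from what is observable and countable has, by construction, no term for the social or strategic costs argued above to go unmeasured.

```python
def tactical_effectiveness(targets_destroyed: int, sorties_flown: int) -> float:
    """A hypothetical, purely positivist measure of effectiveness:
    everything in it is observable and countable."""
    return targets_destroyed / sorties_flown

# What is counted: 18 confirmed destructions across 24 sorties.
score = tactical_effectiveness(targets_destroyed=18, sorties_flown=24)
print(f"Tactical effectiveness: {score:.2f}")  # 0.75 -- reads as success.

# What is not counted (and so cannot lower the score): displacement,
# radicalisation, erosion of local support. No variable exists for
# them, so by construction they can never register as failure.
```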

The way forward: Reversing deterministic assumptions 

To address this impact on strategic understanding, two key intersecting issues must be addressed. The first is that the current paradigm relies, as we have already seen, on digital representations of the battlefield constrained by epistemological preferences. That this limits what can be understood about battle itself is especially problematic when the human terrain is the subject of the representation, as in counterinsurgency or counterterrorism. To recognise, for example, that a digitally constructed view of battle cannot explain why people fight is to recognise in turn that narrow situational awareness can only tell us what the underlying data and algorithms are constructed to report. It cannot grasp the purpose for which war is being fought, or where targets fit into wider questions of strategy and the political purpose of war.

The premise of our technological reliance must be challenged in order to bring a greater measure of subjectivity and unstructured understanding back into the analysis of battle. This article argues that our frameworks for understanding technology prioritise the optimisation of efficiency at the expense of efficacy, which in turn means tactical battles are prioritised over the strategic and political ends of war. The result is a framework for understanding war which produces violence but contributes little to overall strategic success.

The second issue that must be tackled is the consequence of datafication for force structures. The determinist foregrounding of technology to support datafied warfare comes at an enormous cost in skills and imposes a heavy burden on the armed forces. Expertise in data must be procured, often at the expense of other trades and skills.

The narrative of technological efficiency and optimisation is familiar from the many private sector industries which look to data science for competitive advantage in crowded markets. It is a narrative in which more must be done with less, and in resource-constrained armed forces this creates conditions which drive further optimisation. More optimisation around datafication and narrow situational awareness consumes more of those limited resources, necessitating further efficiencies and causing the whole structure to eat itself in pursuit of adoption.

Thus, one question for defence should be: to what extent does optimisation represent a strategic goal of warfare? It must also ask to what extent a model of adoption drawn from Silicon Valley suits a domain which is fundamentally not driven by profit-making ends.

Conclusion

Ultimately, this article points to a phenomenon in which attitudes toward technology abstract war into process. Abstraction withdraws our gaze from the violence of fighting into the comfort of process management, and simultaneously narrows warfare into a question of resource optimisation centred on the efficiency of tactical combat.

Warfare is arguably not a question of optimisation but a political activity, driven by culture and belief, which benefits from the judicious use of technology. It is now inevitable that the future of war will be dominated by data, and the experience of combatants mediated by technology. However, the extent to which our current relationship with technology allows the strategic dominance in war promised by its champions is far from certain.

The focus of technological support and interest on optimising tactical combat is a vicious circle which is abrading our ability to think strategically. Escaping it requires those of us involved in war to take a more reflexive attitude towards the framing of technology, and to interrogate properly where reliance on it is harmful. This should start by looking at what data is excluded from analysis because it does not suit current epistemological preferences. Arguably, such exclusion opens space for harm and makes technology a vehicle for more, rather than less, indiscriminate violence.

The consequences of the limits imposed by the data we exclude from analysis, and of the ruthless pursuit of data-led efficiency, must be better acknowledged. By doing this we can relegate technology to its role as a servant of lawful, and ultimately strategic and political, ends in war, and extract ourselves from the doctrinal dead end in which we now find ourselves.


The views expressed in this article are those of the author, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.

Citation information

Rupert Barrett-Taylor, "The Limits of Digital Representations of the Battlefield," CETaS Expert Analysis (June 2024).