The Alan Turing Institute

Human-Machine Teaming in Intelligence Analysis

Research Report

Anna Knack, Dr Richard Carter, Alexander Babuta

Abstract

This report presents the findings of a CETaS research project examining the use of machine learning (ML) for intelligence analysis within the UK national security context. The findings are based on in-depth interviews and focus groups with national security practitioners, policymakers, academics and legal experts.

The aim of the research was to understand the technical and policy considerations arising from the use of ML within an intelligence analysis context. Specifically, the research explored how to calibrate the appropriate level of trust users should have in machine-generated insights, and best practice for integrating ML capabilities into the decision-making process of an analyst.

Intelligence analysts working in national security face a major challenge in coping with massive volumes of data that may yield crucial insights into current and future events. The ongoing global expansion of data presents both risks (that a crucial ‘needle in the haystack’ is missed) and opportunities (more ‘haystacks’ in which to search for new ‘needles’ and gain deeper insights). The use of ML offers real potential both to reduce these risks and to pursue these opportunities.

There are important considerations when deploying ML to support a human decision-making process, including (i) the challenge of explaining and understanding why, and how, a model functions as it does, and (ii) the risk of harm to society and citizens if ML capabilities are used inappropriately. It is recognised that clear guidance on the safe and effective use of ML is required before its widescale adoption in high-stakes contexts such as national security.

ML explainability is multifaceted and can refer either to technical properties of model performance, such as expected precision and recall rates at different thresholds (sometimes described as ‘global explanations’); or to the specific factors the model took into account to arrive at a particular prediction (sometimes described as ‘local explanations’). This study sought to examine intelligence analysts’ requirements and priorities regarding both global and local model explanations.
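
To make the distinction concrete, the following is a minimal sketch (not drawn from the report) of how the two kinds of explanation might look for a simple classifier: precision and recall at different decision thresholds serve as a ‘global’ explanation of overall model behaviour, while per-feature contributions for a single case serve as a ‘local’ explanation. The synthetic data, logistic regression model and feature labels are illustrative assumptions, not the systems discussed in this report.

```python
# Minimal sketch (illustrative only): "global" vs "local" explanations
# for a simple binary classifier on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve

# Synthetic data standing in for an analyst's triage task.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# "Global" explanation: expected precision and recall at different
# decision thresholds, describing the model's overall behaviour.
precision, recall, thresholds = precision_recall_curve(y, scores)
for t, p, r in list(zip(thresholds, precision, recall))[::200]:
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")

# "Local" explanation: which features drove one particular prediction.
# For a linear model, coefficient * feature value gives each feature's
# contribution to the log-odds for that single case.
case = X[0]
contributions = model.coef_[0] * case
for i, c in enumerate(contributions):
    print(f"feature_{i}: contribution to log-odds = {c:+.3f}")
```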

Authors

Citation information

Anna Knack, Richard Carter and Alexander Babuta, "Human-Machine Teaming in Intelligence Analysis: Requirements for developing trust in machine learning systems," CETaS Research Reports (December 2022).