Abstract

Emerging technologies are transforming national security data processing, as many analytical tasks can now be automated – including through machine learning and artificial intelligence. However, despite much discourse on the opportunities and risks presented by these technologies, there is a lack of empirical research assessing the UK public’s attitudes to data processing for national security. This report fills this gap, presenting new evidence from a citizens’ panel and representative survey of more than 3,000 UK adults. The research shows that there is strong overall support for national security data processing, but that support is significantly lower among young adults and vulnerable adults, and for non-operational use cases. The findings suggest that the public do not perceive automated data processing as inherently less intrusive than human processing, and they care just as much about accuracy and fairness as they do about privacy intrusion. Robust oversight is considered a top priority across all data processing contexts, but there is very limited public understanding of existing national security oversight structures. Improving public awareness of the existing oversight regime is likely to significantly increase trust in national security data processing, and should be prioritised in future public engagement.

For the full dataset gathered in the Savanta poll, see here.

Executive Summary

Emerging technologies are transforming national security data processing, as many analytical tasks can now be automated – including through machine learning (ML). This presents national security analysts with an opportunity to tackle ever-expanding volumes of data more efficiently. But the adoption of these technologies also creates new risks, especially around individual privacy and the fairness of data processing.

Given the challenging trade-offs involved, with potential impacts on citizens’ rights, decisions around whether and how to automate data processing should be informed by independent public attitudes research, to ensure that trust is maintained in national security agencies’ uses of data.

This study examines UK public attitudes towards automated data processing across a range of national security contexts, focusing on questions around the perceived privacy intrusion that occurs when people’s data is analysed by automated systems instead of humans. The findings here should inform future policy and strategic decision-making concerning the use of automated data processing (including AI) within a UK national security context.

The study gathered UK public perspectives through:

  1. A nationally representative survey eliciting 3,035 responses from the public, plus a further 519 responses from young adults (18–24).
  2. A three-part deliberative citizens’ panel with 33 participants, using a nationally reflective sample.

Key findings across the survey and citizens’ panel are as follows:

Knowledge of and support for national security data processing

Key finding 1: The UK public self-report having limited understanding of national security agencies’ work and low awareness of their data processing powers. 

Key finding 2: The definition and scope of ‘national security’ remain unclear to the public, and they want more clarity from Government on this – including on, for instance, the distinction between covert and overt activities, and between national security and law enforcement.

 

Key finding 3: Generally, there are high levels of support for national security data processing. However, there is a wide variety of views, with a small proportion of participants demonstrating blanket distrust of national security institutions in all contexts the study explored. 

Key finding 4: Young adults and vulnerable adults are less supportive of national security data processing compared to other groups; this difference is statistically significant.

Key factors influencing support

Key finding 5: Levels of support for national security data processing vary across datasets, with higher levels of support for agencies analysing public data such as openly available social media posts compared with private data such as text messages.

Key finding 6: Public support is highly dependent on context. The public are very supportive of national security agencies processing personal data in operational scenarios (e.g. when tackling terrorism or serious crime), but less supportive of the processing of personal data for non-operational purposes (e.g. to shape long-term strategies or to develop new technology).

Key finding 7: There is very limited public understanding of existing oversight mechanisms for UK national security, with many assuming there is little oversight in place. Panel members were reassured to learn about the oversight that is in place – especially from the independent Investigatory Powers Commissioner’s Office (IPCO) – and supported efforts to raise awareness about this.

Key finding 8: Panel members expressed enthusiasm for having more of a voice in national security oversight discussions, arguing that diverse input is essential for democratic legitimacy.

Key finding 9: Not all safeguards are viewed as equally important. Independent oversight, secure data storage and regular data deletion are most important to the public, while government ministers’ role in approving warrants is seen as less important.

Human and machine intrusion

Key finding 10: Survey evidence suggests the public find both human and automated processing to be similarly intrusive. Furthermore, the citizens’ panel reveals that many people find data collection without consent to always be intrusive, even if the data processing pipeline is fully automated and no human ever sees the data.

Key finding 11: There is public support for national security agencies to engage in both human and automated data processing despite this intrusion, provided certain conditions are met. Panel members place just as much importance on the accuracy and fairness of data processing, and the implementation of oversight, as they do on privacy intrusion. 

Key finding 12: Panel members support the combined use of automated, ML-based and human data processing, to avoid overreliance on one single approach.

Key finding 13: Panel members perceive humans and machines to be suited to distinct tasks within intelligence analysis, meaning each method can compensate for the weaknesses of the other. 

Public expectations of automation and machine learning

Key finding 14: The public expect innovation and agility from national security agencies in the face of an ever-changing threat landscape.

Key finding 15: Accuracy and fairness (in addition to independent oversight) were seen by panel participants as the key determinants of trust in any data processing activity. They view automation as potentially beneficial if it helps improve the accuracy or fairness of data processing. 

Key finding 16: In relation to automation and ML, the public appear most concerned with adequate quality assurance to ensure the accuracy and fairness of algorithms, and the presence of independent oversight to maintain sufficient human accountability. 

Perhaps most importantly for future policy decisions, while the public view fully automated data processing as involving a degree of privacy intrusion, they also prioritise other factors alongside privacy and, therefore, still support the use of new technologies. When introducing automation and ML in a national security context, it will be essential to balance protections against privacy intrusion with the public’s other two main priorities: that data processing must be as accurate and fair as possible, and that independent oversight bodies should continue to hold agencies to account for their use of data-driven technologies.

1. Introduction

1.1 The context: Data processing for national security

National security agencies are responsible for protecting people across the UK from the most severe threats, including terrorism, cybercrime, serious and organised crime, and espionage. To do this, they process data – including personal data about people in the UK – both in the context of urgent investigations and to shape long-term strategies. Given the privacy intrusion involved, oversight structures are essential to ensure this data processing is necessary and proportionate in the interests of national security or crime prevention, and to ensure this activity is in the public interest. 

In the UK, the main piece of legislation that governs covert data collection is the Investigatory Powers Act (IPA), introduced in 2016 and updated in 2024 with the Investigatory Powers (Amendment) Act. This legislation sets out how national security agencies and other authorities, including the police, may use investigatory powers to collect digital data, and the authorisation they need to obtain before using these powers.[1]

Three key independent oversight structures are closely involved with ensuring that national security data processing is lawful, proportionate and in the public interest: 

  • The Investigatory Powers Commissioner’s Office (IPCO) oversees the use of covert investigatory powers, reviews warrant applications and conducts inspections of national security data processing. IPCO’s Technology Advisory Panel (TAP) provides advice on how IPCO should respond to the evolving technology landscape.
  • The Independent Reviewer of Terrorism and State Threats Legislation independently reviews the operation of legislation relating to national security.
  • The Intelligence and Security Committee of Parliament (ISC) oversees the policies and operations of national security agencies, conducts inquiries and publishes annual reports.

1.2 The challenge: Updating data processing for the 21st century

National security data processing cannot remain static. The challenges facing security agencies are evolving rapidly, as the volume of digital information now available makes it impossible for human analysts to manually sift through all relevant data.[2] Simultaneously, new capabilities have emerged for automating data processing, potentially enabling more efficient analysis of data. 

This report focuses on two ways technology is already transforming national security data processing: 

  • Automated analytics: Automating aspects of the data analysis workflow that would ordinarily be manual, thereby reducing human involvement.
  • ML: A subset of AI that involves training a machine to identify patterns in data to make inferences, conclusions or predictions at a speed and scale greater than human decision-making allows. This includes models that identify patterns in data to classify content, but also generative models that produce content themselves (e.g. text, images, code).
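
For readers less familiar with this distinction, the short sketch below illustrates what “identifying patterns in data to make inferences” can look like in practice: a classifier is trained on labelled examples rather than programmed with explicit rules. It is a minimal, purely illustrative example – the messages, labels and task are invented and do not represent any real national security system or dataset.

```python
# Purely illustrative (invented data): a toy text classifier that learns
# patterns from labelled examples instead of following hand-written rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = unwanted message, 0 = ordinary message.
texts = [
    "you have won a free prize, click the link now",
    "are we still meeting for lunch tomorrow",
    "urgent: verify your account details immediately",
    "thanks for sending over the meeting notes",
]
labels = [1, 0, 1, 0]

# The pipeline converts text into numerical features and fits a classifier;
# the model generalises from the examples rather than applying fixed rules.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can then score new, unseen messages.
print(model.predict_proba(["claim your prize before friday"]))
```

Generative models, by contrast, produce new content rather than classifying existing content, but both fall within the description of ML given above.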

Automation and ML both present clear opportunities for national security agencies. These techniques could speed up the process of identifying salient information[3] and enable agencies to develop new capabilities for target identification.[4] But there are significant risks. For example, biased training data could result in inaccurate targeting, and large volumes of data may be needed to train new models, risking further privacy intrusion.[5] 

ML models are often more complex than other forms of automation and can learn from data without explicit programming. Consequently, ML applications have additional benefits and risks. For example, by learning subtle indicators, ML could help spot patterns in data that human analysts might miss. However, opaque ML models could make it harder to explain the rationale for decisions, thereby increasing concerns about accountability.[6]

As data processing evolves, it is essential to assess levels of public trust and confidence in the use of these new technologies for national security purposes.

1.3 The motivation: Why we need public voices 

Currently, policymakers and other key stakeholders do not have a clear understanding of public priorities and concerns in this area. As a result, they cannot determine whether future changes to national security data processing would maintain public trust and legitimacy. This gap in knowledge makes it difficult for public bodies to make decisions about whether and how to automate different aspects of national security data processing. 

For example, previous CETaS research has identified disagreements among experts about whether an increased use of automation would significantly affect the degree of privacy intrusion involved in national security data processing. On one side, there are arguments that increased automation can enhance privacy, as irrelevant data can be automatically flagged and deleted before it is ever seen by a human. On the other side, it has been suggested that privacy intrusion has already occurred at the point of data collection, which makes the process intrusive regardless of whether it is automated. Furthermore, some argue that automation is likely to have a negative impact on privacy, by enabling faster data processing that ultimately leads to the analysis of more data.[7] 

While policymakers recognise that public expectations of privacy are highly dependent on contextual factors, very little evidence currently exists around what these contextual factors are with respect to national security. For instance, some experts suggest that analysing certain datasets would be seen as ‘inherently deeply intrusive’ while others would not, and that the public’s expectations of privacy are likely to change over time as algorithms become increasingly embedded in daily life.[8] So far, these hypotheses, including around the perceived relative intrusion of human versus machine data processing, remain largely untested.

A commitment to provide transparency, build consensus and welcome public views is a key principle that distinguishes democratic governments from authoritarian regimes. Ensuring active public dialogue on the state’s use of data and technology will reinforce the UK national security community’s democratic licence to operate. It will set a global example of how the promotion of public engagement increases trust, providing UK agencies with the support they need to continue operating effectively in a rapidly evolving threat landscape. 

This sentiment has been recognised in numerous reviews of UK investigatory powers over the last decade, following the Snowden allegations of 2013.[9] In his independent review ‘A Question of Trust’, Lord Anderson drew on survey research examining the UK public’s attitudes towards privacy.[10] Similarly, the independent RUSI review ‘A Democratic Licence to Operate’ drew on polls examining how the UK public weigh issues such as national security against the protection of their personal information.[11]

Despite this recognition of the importance of public trust to national security, current understandings of how the UK public feel about national security remain “fragmented.”[12] Recent polling provides high-level insights – showing that, for example, support for the intelligence agencies remains fairly strong, with 58% of Britons saying that they trust these organisations.[13] Some insights can also be gained around international attitudes to new technologies in national security, with a recent European Centre for Not-for-Profit Law study finding that 55% of EU citizens were ‘concerned’ about the use of AI in national security and defence.[14] 

But in-depth, deliberative research is sorely lacking. As pointed out by YouGov’s academic director, there are “limits to what single, closed-response questions can capture on attitudes towards the detailed specifics or implications of investigatory powers, and more deliberative forms of enquiry would doubtless paint a more variegated picture.”[15] This gap persists despite there being a wealth of publicly available, unclassified information about the agencies’ use of data, which could form the basis of in-depth public engagement. 

Some recent cross-sector studies have produced broader insights into public perceptions of AI.[16] The Department for Science, Innovation and Technology found that, as the public’s understanding and experience of AI grows, they are becoming more pessimistic about its societal impact.[17] The Office for National Statistics reports that 72% of the public believe that AI could negatively impact their lives by using personal data without their consent,[18] while the Ada Lovelace Institute and Alan Turing Institute identify strong public support for the protection of fundamental rights, including the right to privacy,[19] and support for AI regulation.[20] Although people are supportive of specific uses of AI to improve public services, they also express distrust that most public bodies (apart from the NHS) will protect the privacy of personal data.[21] 

These general concerns are likely to somewhat shape public attitudes towards national security. However, research has repeatedly found that the UK public’s attitude towards privacy varies significantly by context, to the point where “there is no one public opinion on data privacy.”[22] We must, therefore, be cautious about extrapolating from existing research on AI and build an evidence base that is specifically applicable to the national security context. That is the focus of the current study. 

2. Research Methodology and Limitations

Data for this study was collected in July–November 2024. The key research phases are outlined in Figure 1.

Figure 1. Overview of research methodology


Below, we summarise the key features of the survey and citizens’ panel that inform this research. (Further detail on these methods is included in the Annex.) 

2.1 Survey methodology

Sample: The survey was commissioned through Savanta, a member of the British Polling Council.[23] 3,554 UK adults completed the survey online between 31 October and 25 November 2024. The base sample (n=3,035) was nationally representative and included a boost of vulnerable adults to ensure these groups were adequately represented. Vulnerable adults were defined based on positive responses to a series of questions focusing on participants’ economic and social situations.[24] A further boost of young adults (18–24) was sampled (n=519), providing an additional dataset for analysis.

Survey design: The survey was designed by the CETaS team and reviewed by external academics and Savanta experts. Multiple choice questions focused on: public awareness of national security; public attitudes towards a range of prompts describing how national security agencies might process data; and public responses to a series of comparative scenarios involving automated versus human data processing. Participants were also invited to offer their views on the risks and opportunities of automation in a national security context. 

Analysis: Savanta analysed the data to produce summary charts of key trends, identifying statistically significant findings and demographic variations at a 95% confidence level. On this basis, we generalise from the nationally representative sample to the “UK public” throughout this report.
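
To make the reported demographic comparisons more concrete, the sketch below shows one standard way a difference between two subgroups can be tested for statistical significance at the 95% level: a two-proportion z-test. The exact procedures Savanta applied are not documented in this report, so this is an illustrative assumption, and the counts used are placeholders rather than survey figures.

```python
# Illustrative sketch (placeholder numbers, not survey data): testing whether
# the share of supporters differs significantly between two subgroups.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: number supporting, and subgroup size, for two groups.
support_counts = [420, 465]
sample_sizes = [1000, 1000]

z_stat, p_value = proportions_ztest(support_counts, sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A p-value below 0.05 corresponds to a difference that is statistically
# significant at the 95% confidence level.
```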

2.2 Citizens’ panel methodology

Sample: The citizens’ panel was commissioned through Hopkins Van Mil, a leading public deliberation consultancy.[25] 33 members of the public from across the UK participated in three sessions. Participants were recruited through the Sortition Foundation[26] to reflect the national population. (A more detailed recruitment methodology and demographic breakdown are included in the Annex.)

Citizens’ panel design: Workshops were collaboratively designed between CETaS and Hopkins Van Mil. Session 1 focused on the national security agencies, the legal context of investigatory powers, and the privacy and human rights implications of national security data processing. Session 2 focused on automation and ML. Session 3 focused on the future of data processing for national security. Participants saw presentations by academics on these topics, followed by Q&A with the speakers. All sessions were facilitated by independent and experienced facilitators from Hopkins Van Mil. 

Analysis: Anonymised transcripts were analysed in NVivo to identify key themes. Further insights were gained through whiteboard notes taken during the sessions, and from online polls conducted via Menti.com.[27]

Throughout this report, we integrate insights from the survey and citizens' panel, clearly indicating the source of each finding. This mixed-methods approach enables a deeper exploration of the research questions, as each method helps offset the limitations of the other.

The survey reveals the UK public’s instinctive reactions to a range of prompts, and sheds light on how these reactions vary across the population. Surveys are effective in measuring public opinion because, with a sufficient sample size, it is possible to make population-level generalisations and demographic comparisons.[28] But surveys have inherent limitations. In the absence of any pre-existing views on a topic, participants will often base responses to specific questions on more general preferences, making it challenging to disaggregate the aspects of a question that have primarily shaped the response.[29] Furthermore, information gleaned through surveys is highly sensitive to questionnaire design.[30] This means that one must exercise caution when generalising findings beyond the specific prompts used.

Given these weaknesses, there are limits to how much nuance a survey can reveal. Yet a mixed-methods approach helps provide further evidence where survey findings are insufficient. Citizens’ panels are a more deliberative approach, enabling in-depth discussions and creating an opportunity to share contextual details with panel participants.[31] However, what citizens’ panels gain in detail and consideration, they sacrifice in generalisability due to the small sample size.

While this study benefits from a mixed-methods approach, including both a large, nationally representative survey and in-depth citizen discussions, some limitations remain: 

  1. Limited range of scenarios explored: The scenarios tested, though grounded in unclassified literature, are not exhaustive. One should not generalise conclusions beyond these scenarios.

     

  2. Acquiescence bias in survey responses: There is a tendency in survey research for respondents to agree with statements regardless of their true preferences.[32] Steps were taken during survey design to mitigate this effect – by, for example, using balanced scales and setting up mirrored scales, so that half the respondents saw answer options from positive to negative, and the other half saw them from negative to positive. Nevertheless, there may be a residual effect whereby the public would be biased towards supporting the data processing scenarios explored. (Further detail on how this effect was minimised through survey design is included in the Annex.)

     

  3. Sampling constraints and biases: Though we used both online and offline methods, we may not have fully succeeded in reaching those who are digitally excluded. We used sortition, a random lottery recruitment method, to ensure citizens’ panel participants were nationally reflective. However, we acknowledge that certain groups – such as those who are more politically engaged – may have been more likely to participate.[33] 

     

  4. Access to information: Presentations during the citizens’ panel were delivered by academics, and participants did not have direct access to national security practitioners to ask first-hand questions about real-world uses of data for national security.

     

  5. Overcoming knowledge asymmetries: A key challenge was ensuring the views of researchers and presenters did not introduce bias into the findings of the citizens’ panel during the workshop design, the deliberations or the analysis. These risks were mitigated through project design choices: co-designing all workshop activities with public deliberation experts, taking a structured approach to analysis, and using independent, trained facilitators to chair discussions.

     

Despite these limitations, this study makes a substantial contribution to our understanding of the key factors that shape the UK public’s attitudes towards national security data processing. The remainder of this report lays out research findings across four key areas:

  1. Overall knowledge of and support for national security data processing: How familiar are the public with national security data processing powers? And how is support for or opposition to this data processing spread across the UK public?

     

  2. Key factors influencing public support: How do the public feel about national security agencies processing data in a range of contexts? What shapes their level of support? And what role, if any, does oversight play in influencing public trust?

     

  3. Human and machine intrusion: What determines public perceptions of privacy intrusion? Do the public find automated processing to be more or less intrusive than human processing? Do they prefer one over the other? 

     

  4. Public expectations of automation and ML: Where automation and ML are used, what can be done to ensure these technologies serve the public interest? What specific safeguards might be needed to ensure trust in these technologies?

3. Knowledge of and Support for National Security Data Processing

3.1 Knowledge of national security

Key Finding 1: Public awareness of national security data processing powers is limited

To understand public awareness of national security, survey participants were asked to self-report on how much they understood national security agencies and their work, and whether they were aware that national security agencies have powers to collect information about people in the UK without their knowledge. 

This process revealed that the UK public’s self-reported awareness of national security agencies’ work is low. The majority (61%) report that they understand the work of the agencies “slightly” or “not at all,” with just 7% feeling that they understand the national security agencies’ work “a lot” (Figure 2). This finding suggests that, despite agencies’ efforts to engage more directly with the public in recent years, there is still a significant gap in public understanding.[34]

Figure 2. Public understanding of the work of national security agencies 


Survey question: Overall, how well do you feel you understand the work of the UK’s national security agencies? (n=3,035)

The majority of the UK public (64%) are at least somewhat aware that national security agencies have powers to collect information about people in the UK, without their knowledge (Figure 3). 

Figure 3. Public awareness of national security agencies’ data collection powers


Survey question: National security agencies have legal powers to collect information about people in the UK, and their activities, without those people’s knowledge. Before today, to what extent were you aware of this? (n=3,035)

Nevertheless, there is a wide variation, with almost half of participants (49%) self-reporting as “somewhat aware.” This suggests that while the majority know national security agencies have some powers to collect information about people in the UK, few people feel they fully understand these powers.

Awareness of these data collection powers is not evenly spread across the population. People in the youngest age group (18–24) are significantly more likely than the oldest (55+) to say they are not at all or not very aware of the powers (40% versus 34%). Vulnerable respondents are significantly more likely than others to say they are not at all or not very aware of the powers (39% versus 34%). It is possible that the variation across age groups is partly shaped by lower exposure to the investigatory powers debates following the 2013 Snowden allegations, which provoked the most widespread public discussion of this topic in the UK for decades. 

The citizens’ panel reinforced the finding that public awareness of national security agencies is low. Several panel members were surprised by the data collection powers, commenting, for example, “I didn’t realise how much information they had”[35] and “I probably didn’t know how much data could be collected on me. I don’t think that’s just generally common knowledge.”[36] Panel members expressed particular surprise regarding the UK’s international intelligence sharing agreements, the range of collection powers available and, most of all, the range of approvals agencies must obtain before collecting data in the first place.[37]

Respondents’ limited understanding of national security agencies partially explains the high number of neutral responses throughout the survey, in which they felt unable to form an opinion either way. This is consistent with prior studies on national security, which are characterised by large numbers of “don’t know” responses.[38] 

Key Finding 2: Interpretations of national security are diverse

In advance of the sessions, each panel member shared an image to represent their understanding of the topic of data and national security. Panel participants then reflected on what these images revealed about their attitudes towards national security uses of data. The key themes which emerged during those discussions are summarised in Figure 4.

Figure 4. Illustration depicting panel members’ first impressions of the topic of discussion, “data and national security.” This illustration is based on images panel members shared in advance of the workshops and their associated themes. (Designed by Jonny Lighthands.)


These discussions suggest, in contrast to previous studies,[39] that people’s preconceptions of national security and data are not primarily shaped by James Bond-style fiction but instead by their real-life experiences and stories they have seen in the media. Previous scandals or incidents play a disproportionately large role in shaping panel members’ perspectives, due to the lack of other publicly available sources of information. 

Given the public’s low exposure to national security data processing, their preconceptions are often shaped by their interactions with the private sector or experiences of cybercrime: during initial discussions, panel members focused heavily on data selling, misleading privacy policies and targeted advertising, despite these concerns not being directly relevant to national security agencies’ uses of data.[40] 

While privacy was the most common theme, several panel members put forward images representing worries about biometric technologies, others were primarily concerned with online data security, and a couple were concerned about new ways criminals could leverage technology to their advantage.[41]

Discussions during the citizens’ panel demonstrate that the public do not have a single definition of national security but do recognise its breadth. Two definitions from panel members illustrate this broad interpretation:

“National security impacts all the different aspects – whether it be army, police, health, data.”[42]

“That is what national security means for me: things like diplomacy, protecting our people abroad, intelligence, information-sharing and our defence, like our army and stuff like that.”[43]

However, for several panel members, the lack of a clear definition of national security was a concern. Panel members repeatedly raised this with expert speakers. For example, one asked: “things are done under the banner of national security, but who is determining this as national security?”[44] The distinction between national security, policing and public safety was particularly blurred – likely due to the different powers and oversight regimes that exist for national security agencies compared with law enforcement agencies. 

3.2 Support for national security data processing

Key Finding 3: Support for national security data processing is widespread but not universal

Survey participants were asked to indicate their support or opposition to a range of data sharing/processing activities involving either a national security agency or a regional police force. 

Results indicate generally high support for national security data processing among the UK public. Across all the datasets tested, there is more support than opposition to a national security agency processing that data, even for sensitive datasets such as identifiable medical data. There is also generally high support for police uses of data, although support is slightly lower for regional police forces than for national security agencies (Figure 5). 

Figure 5. Public support for different organisations’ collecting and processing data 


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose this public sector organisation collecting and processing the following information about a person of interest, without their knowledge? (n=3,035) Arrows indicate statistically significant differences between policing and national security (95% confidence interval).

However, strong support for national security data processing is not universal – with a sizeable minority still opposed. For each dataset tested, more than 20% of the public are opposed to national security processing, and more than 25% are opposed to police processing. It is also notable that, across the datasets tested, between 19% and 22% “neither support nor oppose” a national security agency processing this data. This group should not be ignored, as many of them are likely unsure about how they would feel due to a lack of further context (even if their responses could indicate that they do not care).

One should be cautious in interpreting these results, given the limited number of data processing activities tested. These datasets have only been tested with reference to data about a “person of interest.” Therefore, the results do not indicate how the public would feel about people’s data being processed if they were not of interest to the intelligence services (or in respect of their own data). Nor is it clear how the public define a “person of interest.” Indeed, during the citizens’ panel, facilitators found that panel members used the term synonymously with “suspect” and often assumed this person was guilty of a crime. 

Discussions during the citizens’ panel further reveal a broad range of views. A small number of panel members were strongly distrustful of national security data processing across the board, largely based on pre-existing concerns around the institutions involved.[45] A similarly small number were consistently supportive of national security interests to the point where this outweighed almost all other concerns.[46] For many, the factors which shaped their support for national security data processing were more complex and, as a result, their support varied significantly depending on the context of data processing and the safeguards in place. 

Key Finding 4: Support varies across the population, particularly by age 

Young adults are less supportive than older adults of national security agencies processing a range of datasets (Figure 6). This difference is statistically significant for all datasets other than identifiable medical data.

Figure 6. Public support for national security agencies collecting and processing data without a person’s knowledge, by age


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose a national security agency collecting and processing the following information about a person of interest, without their knowledge? (18–34, n=831; 55+, n=861.) Arrows indicate a statistically significant difference (95% confidence interval).

Additionally, vulnerable adults are less supportive than non-vulnerable adults of national security agencies’ data processing across a range of datasets (Figure 7). This difference is statistically significant for all datasets other than identifiable medical data.[47] 

Figure 7. Public support for national security agencies collecting and processing data without a person’s knowledge, by vulnerability


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose this public sector organisation collecting and processing the following information about a person of interest, without their knowledge? (Vulnerable, n=1,463; not vulnerable, n=1,572.) Arrows indicate a statistically significant difference (95% confidence interval).

People who report a greater understanding of the work of national security agencies are more likely to support data processing across the datasets tested (Figure 8). One interpretation of this finding is that efforts to educate people about why and how national security agencies process data could improve public awareness of the issue and, accordingly, increase support. However, further research is needed to explore this, given that the study measured self-reported awareness – which may not reflect the public’s actual knowledge of how national security agencies work.

Figure 8. Public support for national security agencies collecting and processing data, by understanding of their work


Survey questions: Overall, how well do you feel you understand the work of the UK’s national security agencies?; If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose a national security agency collecting and processing the following information about a person of interest, without their knowledge? (n=3,035) Arrows indicate a statistically significant difference (95% confidence interval).

4. Key Factors Influencing Public Support

4.1 How context shapes support

Key Finding 5: Support for data processing differs across datasets

National security agencies analyse a variety of private and public datasets, ranging from location data to the content of messages and public posts on social media. To minimise privacy intrusion, agencies must go through warrant application processes to access datasets that are deemed to have a higher expectation of privacy.[48] 

Survey participants were asked whether they support or oppose the processing of a range of datasets by a national security agency. The findings show that public support for national security data processing varies by dataset (Figure 9). The public are most supportive of public social media posts being processed, while they are most opposed to the content of text messages and identifiable medical data being processed. The public are also significantly more supportive of communications data being processed than of the content of messages on a person’s phone – which is consistent with existing policy.[49]

Figure 9. Public support for national security agencies analysing data, by dataset


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose a national security agency collecting and processing the following information about a person of interest, without their knowledge? (n=3,035)

Biometric data was one of the dataset categories in which support for national security processing was highest. This may be indicative of the public’s high level of familiarity with biometric technologies such as facial recognition – in contrast to their limited awareness of agencies’ covert analysis of information such as text messages and location data. 

While there is broad public support for national security agencies processing the public posts of a person of interest, 23% of the public oppose it – indicating that a proportion of the population may still have some expectation that this information will remain private. This aligns with academic commentary recognising that an individual may make something ‘public’ – in the sense that it can be viewed by anyone – without losing all expectations of privacy.[50] 

The citizens’ panel offered further evidence that many find it less intrusive for national security agencies to process public posts than private messages. For example, several panel members argued that, given it’s their “choice to share it with everybody else, it’s not intrusive”[51] and “privacy wouldn’t really be much of a factor if it’s based on public posts.”[52]

However, a couple of panel members did still find the processing of public posts to be intrusive. One panel member summed this up by arguing, “there's still some level of intrusion, even if it's a public post. Most of the time, even with public things, you only look at things that relate to you or someone you know. You wouldn't just go out there just to search for data.”[53]

Findings suggest that while the public have different privacy expectations about different datasets, these are not easy to predict. Instead, the public’s privacy judgments are highly dependent on the reason each dataset is being processed.

Key Finding 6: Perceptions are shaped by the context in which data is processed

Survey participants were asked to indicate their support for, or opposition to, national security agencies processing personal data for a range of purposes.[54] The results show that the public care a lot about why their data is being processed, further illustrating that context is key. 

The public are far more supportive of personal data being used in operational scenarios than non-operational scenarios, with these being some of the largest variations in public support observed throughout the survey (Figure 10). 

Figure 10. Public support for national security agencies processing personal data for operational purposes


Survey question: If each request for information was authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by a national security agency in the following ways? (n=3,035)

In contrast, the public would be opposed to national security agencies sharing personal data with political parties or commercial organisations (Figure 11).

Figure 11. Public support for national security agencies processing personal data, by context


Survey question: If each request for information was authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by a national security agency in the following ways? (n=3,035) 

Only 52% of respondents support national security agencies’ use of personal data to shape their long-term strategies, and 42% their use of it to create new automated tools that predict future behaviour. However, around 30% of the public answered that they neither support nor oppose this data processing. This unusually large proportion of neutral responses may indicate a high degree of public uncertainty about these use cases. 

The survey also tested comparisons between a national security agency, a regional police force and the UK Health Security Agency (UKHSA) for all datasets and all purposes for data processing. The UKHSA was included to provide a public sector comparator from outside the national security context. 

Survey responses further reveal that context predicts support, as the public are far more likely to support data processing if it is consistent with their understanding of the organisation’s role. As a result, medical data is the only dataset for which the public are more supportive of processing by the UKHSA than by a national security agency (Figure 12). And, while the public are significantly more supportive of national security agencies and police forces processing data for operational purposes, they come to similar conclusions about each of these organisations using personal data to “shape long-term strategies” or “create a new automated tool” (Figure 13).

Figure 12. Public support for national security agencies processing personal data, by dataset


Survey question: If each request for information was authorised / not authorised by an independent oversight body in advance, to what extent would you support or oppose this public sector organisation collecting and processing the following information about a person of interest, without their knowledge? (n=3,035) Bold text indicates a statistically significant difference between the organisations (95% confidence interval).

Figure 13. Public support for national security agencies processing personal data, by context


Survey question: If each request for information was authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by this public sector organisation in the following ways? (n=3,035) Bold text indicates a statistically significant difference between the organisations (95% confidence interval).

Members of the citizens’ panel came to similar conclusions, reaching a clear consensus that they would be most supportive of national security agencies processing data to tackle terrorism and serious crime, detect foreign governments’ spies and investigate an individual with a suspected connection to a crime – even if they were that individual. There was much more disagreement around whether national security agencies should use data for the other purposes mentioned: many participants argued that they would need more context to come to a view, such as more information on the safeguards in place and the public benefit of such data processing.[55] Further research could explore the boundaries of public support for data processing outside operational contexts.

4.2 The role of safeguards and oversight

Key Finding 7: Oversight builds trust when it is understood

In 2016, as part of the IPA, a new oversight regime was introduced to ensure the existing surveillance powers were used in the public interest. Our findings suggest that while there is substantial potential for this oversight regime to bolster trust, it is not yet doing so. This is simply because many people are unaware that any such safeguards exist. 

The role of oversight was challenging to test through the survey alone. For survey questions involving national security agencies processing a range of datasets, half of participants saw option one, where data processing had been ‘authorised by an independent oversight body’. The other half saw option two, where data processing had ‘not been authorised by any independent oversight body’. Results indicate that support tends to be higher when the request is authorised, and that this is statistically significant for biometric and location data (Figure 14). 

Figure 14. Public support for national security data processing, by approval of an independent oversight body


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose this public sector organisation collecting and processing the following information about a person of interest, without their knowledge? (n=3,035) Arrows indicate a statistically significant difference (95% confidence interval).

However, for the remaining datasets there is no significant difference in support depending on whether the request for information has been authorised by an oversight body. The absence of any difference is likely caused in part by high background levels of trust in the agencies, which leads to a reasonable amount of public support for their activities without independent oversight. However, the constraints of the survey methodology may be more important here, given that it prevents participants from being fully briefed on the nature of independent oversight. 

The significant role that oversight can play in building public trust came through much more clearly in the citizens’ panel. Many panel members were surprised to discover the existence of the oversight regime. Some panel members had imagined there was simply a “free for all” regarding how national security agencies process data.[56] Panel members commented they had “thought that the Government had far more freedom to just take data,”[57] that there are “a lot more layers [agencies] have to go through to get approval than I realised” and that this “is not well known.”[58] In response to the closing questionnaire, 25 out of 33 panel members said that the key lesson across the sessions was that there was more oversight than they would have expected.[59] 

For most panel members, greater awareness of these safeguards engendered greater trust, with panel members stating they felt “reassured,” “consoled” and “pleasantly surprised.” In contrast, they said that “ignorance” bred “conspiracy theories.”[60] 

Several members of the panel called explicitly for more education from the Government, arguing that “if the public have more knowledge, proper education, then some of the scepticisms and all that will go down because you’ll know why it should be done,”[61] and that the UK needs an “education campaign” – perhaps one built into the “school curriculum.”[62]

Another participant noted that the information the public currently hear from agencies is not focused on the topics that are most relevant for them, arguing that agencies tend to share “the glamourous stuff” while the public “just want to know what it means to live as a citizen in the UK.”

This call for greater education was summed up by another panel member, who said that “the assumption is that we don’t – or don't need to – know. What we’re saying is: actually, a lot of the public have more capability than we’re being given credit for, and we do need to know.”[63]

Key Finding 8: Public engagement could improve confidence in oversight

While panel members were reassured by the IPCO oversight mechanisms and the existence of the ISC, many felt that oversight could be improved. Panel members’ suggestions focused on the need for oversight bodies and agencies to diversify the perspectives feeding into decisions and to open up to public consultation.

Several panel members were concerned about diversity within oversight organisations. The issue they raised repeatedly was the inclusion of young people. Panel members argued: “you’ve got a new generation coming up underneath, you’ve got a new understanding, and they’re not going to fully understand what this new generation are wanting or needing,”[64] and that we need to “change the people who oversee and regulate, getting new views more generationally.”[65] This call for generational diversity is especially notable given the survey findings that young adults are less supportive of national security data processing.

For some, calls for diverse perspectives went further, with members of the panel calling on national security agencies and oversight bodies to involve the public more directly in decision-making. When discussing who should set the rules for data processing, one panel member argued “it’s got to be everyone; it’s got to be citizens – those with experience” while another said, “most of us feel that we don’t have any input … it’s frustrating.” Some suggested introducing “citizens’ juries” through which representative members of the public would be regularly consulted on key decisions relating to automated data processing by national security agencies. 

Given the enthusiasm of the members of this initial citizens’ panel (which was oversubscribed due to widespread interest), it would be easy to recruit panel members for another such exercise in future. This would ensure that their perspectives regularly informed policy decisions, as part of an ongoing citizens’ jury for national security.[66] 

Comments from panel members suggest such a consultation exercise would be greatly valued. For example, “we’re here because we’re interested. We didn’t know a lot of this stuff; it’s been really interesting – a bit of an eye opener” and “that’s refreshing to hear but I wouldn’t have heard it unless I was here.” 

However, this increase in public deliberation and education will only be valuable if the public can see they are being listened to. As one panel member noted, it is difficult for the public to trust government to act on their input:

“For groups like this [citizens’ panel] to come together and create opinions and then that to be passed to the government, and there is evidence of them listening to it. So yes, I'm reassured that the stuff we've talked about here will get fed back. Whether it actually affects it, I don't know.”[67]

This would require a fundamental shift in national security agencies’ and oversight bodies’ approach to public engagement, but much could be done in the short term to initiate this. For example, the IPCO or the Home Office could commission or run their own citizens’ panels, to pre-empt potential public responses to proposed policy changes. Agencies themselves could invest more in public deliberation and could communicate the ways in which they value the public’s perspective. 

Adopting public deliberation methods in national security would not come without challenges, especially given the limited information available in the public domain. Nevertheless, in a context where the choices of national security decision-makers are highly contested and depend on public trust for long-term legitimacy, trials of these methods on selected policy questions could reveal promising areas for larger-scale projects.[68] 

Key Finding 9: Not all safeguards are perceived as equally important

The survey results show that not all safeguards are equally important to the public, based on their ranking of seven different potential measures (Figure 15).

Figure 15. Ranking of important safeguards


Survey question: Please rank the following safeguards from most important to least important in a context where national security agencies are collecting and processing data about you. (n=3,035)
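
Ranking responses such as these can be summarised in several ways. As a purely illustrative sketch, the example below aggregates hypothetical rankings by mean rank; the report does not specify the aggregation method used, and the safeguard names and rankings shown are invented placeholders.

```python
# Illustrative only: summarising ranking responses by mean rank, where each
# respondent ranks items from 1 (most important) downwards. Invented data.
from statistics import mean

safeguards = [
    "Independent oversight",
    "Secure storage and regular deletion",
    "Judicial approval",
    "Ministerial approval",
]

# Each inner list is one hypothetical respondent's rank for each safeguard,
# aligned positionally with the list above.
respondent_ranks = [
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
]

mean_ranks = {
    name: mean(ranks[i] for ranks in respondent_ranks)
    for i, name in enumerate(safeguards)
}

# A lower mean rank indicates a higher priority for respondents.
for name, rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{name}: mean rank {rank:.2f}")
```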

These safeguards were discussed in more depth in the citizens’ panel, revealing a variegated picture of why the public value these safeguards (see table below).

Table 1. Safeguards

Safeguard

Citizens’ panel findings

All personal data (which could include name, address, phone records, location, contacts and other activities) will be stored securely and deleted as soon as possible. 

Panel members see data security as essential and have significant concerns around breaches, arguing that “history has proven time and again” that data leaks cannot be controlled. Their perceptions of this are shaped by examples from outside national security, and there is potential here for education on national security data security practices to resolve many concerns.

Many see data deletion as a top priority, arguing that stringent safeguards should be in place. Some also identify a tension, noting that “we do have rather contradictory feelings ourselves, don’t we? We get very upset when data that could’ve been used hasn’t been and something awful happens.” 

An independent oversight body will regularly conduct inspections of all the instances in which a national security agency has collected and/or processed personal data. 

For many, independent oversight is a top priority. Panel members are reassured by regular inspections and emphasise the importance of ensuring this oversight body continues to be independent. 

 

The data processing activity must first be approved by a senior judge. Judicial approval is a top priority for many, with widespread trust among panel members in the expertise of the judiciary. However, concerns were raised around the weight of decisions being placed on individuals. 
Initiatives are put in place to increase transparency around methods of data processing (for example, sharing with the public information about what measures are in place to protect privacy).

Transparency is important for panel members, but there are varying views on how this should be achieved. 

Some panel members are more absolute, arguing the public needs the ability to query what information agencies hold on them.

For many, the priorities are A) transparency about the rules national security agencies follow; and B) transparency around any past mistakes. 

A senior person who works for the national security agency will regularly assess whether all data collection is proportionate to the scale of the threat to national security and/or public safety. 

For a couple of panel members, internal assessments are especially important as internal reviewers are perceived to “have more information on the actual threat” than the independent oversight body, which will focus “on the procedures being followed.”

However, concerns were also raised around agencies “marking their own homework.”

An independent court is available to respond to any complaints about the collection and analysis of data by national security agencies.

While independent legal redress is important to panel members, it is unclear to many how such a court could serve the public’s interests if the public do not know how their data is being processed. The relevance of this court depends on panel members trusting that an independent oversight body will raise issues on their behalf.
The data processing activity must first be approved by a government minister.

Ministerial approval is not viewed as very important, as panel members do not see government ministers as having the relevant expertise to assess the proportionality of data processing activities. That perception may be affected by levels of public trust in the government of the day.

Panel members understand that ministers receive expert advice, but still see this safeguard as less of a priority than the others. 

Proportionality

Beyond these specific safeguards, panel members value the central principle of proportionality in shaping the timing and method of data processing. But several of them are concerned that proportionality is a highly subjective concept. 

One panel member summarised this, saying: “the trouble with this word ‘proportionate’ is that it sounds great, but it’s a judgement. So, I suppose the question is: should there be more of an open set of criteria?” This need for a more specific articulation of proportionality has been explored by previous CETaS research on privacy intrusion and national security.[69]

 

5. Human and Machine Intrusion

5.1 The false dichotomy of human versus machine intrusion

Key Finding 10: Human and machine processing are perceived as similarly intrusive

The relative privacy intrusion associated with human versus machine-based data processing is the subject of ongoing debate in national security. As Alexander Babuta, Marion Oswald and Ardi Janjeva noted in 2020:

“The use of AI arguably has the potential to reduce intrusion, both in terms of minimising the volume of personal data that needs to be reviewed by a human operator, and by resulting in more precise and efficient targeting, thus minimising the risk of collateral intrusion. However, it has also been argued that the degree of intrusion is equivalent regardless of whether data is processed by an algorithm or a human operator. According to this view, the source of intrusion lies in the collection, storage and processing of data. The methods by which this is achieved – whether automated or manual – are immaterial.”[70]

In national security, it is imperative to minimise the privacy intrusion involved in data processing. Therefore, if there were a clear public consensus on the relative intrusiveness of human and machine processing, this would have numerous policy implications.

Yet, so far, there is no such consensus. Existing academic research outside national security has found that automated recommendations in combination with a final human decider are perceived to be as fair as decisions made by a single human decider, and fairer than decisions made only by an algorithm. For example, Christoph Kern and his co-authors found that a degree of human involvement or oversight was linked to increased comfort levels for high-stakes decisions involving punitive consequences.[71] Furthermore, Henrietta Lyons, Tim Miller and Eduardo Velloso have found that a desire for control in connection with a high-impact decision influences the preference for human review.[72] For mechanical tasks (such as work scheduling), machine and human decisions were seen as equally fair and trustworthy, contrasting with human tasks (such as hiring) where algorithmic decisions were seen as less fair.[73]

Until now, this comparison of human versus machine processing had not been tested in a national security context, nor had it been examined with respect to the relative degree of privacy intrusion, as opposed to fairness or trustworthiness. 

To assess public perspectives on this topic in a law enforcement context (a live terrorism investigation), each survey participant was asked to respond to two scenarios randomly selected from four possible ones (Figure 16). They rated whether each scenario was fair or unfair and intrusive or not intrusive, and whether they supported or opposed a law enforcement agency processing data in this way.

Figure 16. Data processing scenarios shown to participants


The responses to these scenario-based questions reveal that while the public find it more intrusive for national security agencies to analyse confidential messages than public posts, they do not differentiate significantly between the privacy intrusion resulting from human versus machine analysis (Figure 17).

Figure 17. Perceived intrusiveness of data processing practices, by scenario tested


Survey question: How would you feel about the law enforcement agency processing data in this way? (n=1,502-1,534) Arrow indicates a statistically significant difference between scenarios involving “confidential messages” versus “public posts” (95% confidence interval).

Similarly, there is no consistent, significant difference between the perceived fairness of human and machine data processing (Figure 18). 

Figure 18. Perceived fairness of data processing practices, by scenario tested


Survey question: How would you feel about the law enforcement agency processing data in this way? (n=1,502-1,534) Arrows indicate a statistically significant difference between scenarios involving automated processing of “confidential messages” versus human processing of “public posts” (95% confidence interval).

Finally, a large majority of the population support data processing in all four scenarios, with no significant patterns distinguishing between their responses to human processing and automated processing (Figure 19). 

Figure 19. Support for data processing practices, by scenario tested


Survey question: How would you feel about the law enforcement agency processing data in this way? (n=1,502-1,534) Arrow indicates a statistically significant difference from the other categories.

Together, these results suggest that in the context of a live terrorism investigation, the public do not distinguish in a significant way between the fairness or intrusiveness of human compared to machine data sifting, nor do they support one option more than the other. 

These results should be interpreted with some caution given that the case study involved a live terrorism investigation – which immediately confronts participants with an urgent and high-risk scenario. It is, therefore, unsurprising that public support was high across the board. It also involved data being processed about “a person of interest” – which may not give a clear indication of how participants might feel if their own data was processed.

Based on this evidence, there is no indication that the public find automated data processing to be inherently more or less intrusive than human processing. 

Findings from the citizens’ panel suggest a more nuanced picture. Discussions support the conclusion that both human and automated data processing are, to some extent, intrusive. During the workshops, panel members were presented with the data processing workflow below and worked together to discuss which elements they would find intrusive and why. Results indicate that while the automated processing itself may be perceived as less intrusive than human review, privacy intrusion has nevertheless occurred at the point of data collection (Figure 20). 

Figure 20. Perceptions of privacy intrusion at each stage of data processing. (Votes indicate that panel members found this stage to be intrusive. Panel members were able to vote for multiple stages.) 


Some panel members commented that data collection is inherently intrusive because consent is taken “implicitly,” and nobody has “opted in”. For several panel members, including the automatic data filtering stage reduced intrusion, but this conclusion was not universal, especially if the algorithm was perceived to be inaccurate or biased. 

The citizens’ panel expressed a wide spread of views on whether human or machine processing was inherently more intrusive. Some found human processing to be less intrusive; a larger number found machine processing to be less intrusive. Many argued that it depends on context or that there is no meaningful difference between the two. 

Those who found machine processing less intrusive gave two main reasons: machines were seen as lacking “judgement”, and as lacking “memory” – irrelevant data could simply be deleted without living on in a human mind. 

For those who saw human processing as less intrusive, the main rationale was the potential scale of machine processing. These panel members were concerned that because algorithms can process more data faster, it would be easy to slip into a world where “everyone would just get reviewed” because it is so easy to do.

Key Finding 11: Perceived intrusion does not determine support for data processing

Perspectives raised at the citizens’ panel suggest that it would be misguided to focus too heavily on the relative privacy intrusion incurred by different data processing methods, as this was not the most important factor in determining whether panel members supported data processing. Instead, the application of relevant oversight mechanisms, and the fairness and accuracy of data processing, were as important as – or more important than – the degree of privacy intrusion. 

This view was summarised by a panel member who argued:

“Of course, it’s intrusive but that, to me, isn’t really the issue. The issue is: has it gone through the hoops? Has it been safeguarded? This is a person. This is an investigation going on. Any investigation is intrusive for anything, but if it’s gone through the hoops, then I would feel pretty confident in it happening.”[74]

Another agreed, arguing that, “for me, the privacy issue was addressed with the safeguards,”[75] and that “all of this is intrusive, and so the context in which this is all put into place is really important.”[76]

In discussions of privacy intrusion, panel members often steered the debate to focus on the most effective method. For example, they argued that “there’s a difference between intrusive and effective, isn’t there? It’s not two sides of the same coin,”[77] and “I’d probably just go with whichever is more effective, but how do you decide that?”[78]

For many, privacy was still a key priority. But for others, this went towards the bottom of the agenda, with one participant stating:

“I know the question that was being asked is: how intrusive do you find these messages? Extremely intrusive. I wouldn’t want anyone reading mine. But at the end of the day, this is a suspected terrorist. Don’t care.”[79] 

5.2 The role of human–machine teaming

Key Finding 12: Human–machine teaming is favoured

Wholly separating human and machine analysis is an oversimplification that fails to reflect the reality of modern intelligence analysis.[80] The importance of integrating human and machine processing came through in the citizens’ panel, many members of which agreed that “you’ve got to use both. You’ve got to keep up to date with technology, to use automation, but you also need the human intervention as well.”[81]

Panel members saw humans and machines as having distinct weaknesses which could be mitigated by using the two together. Some perceived an innate human ability to consider emotion, display empathy and recognise mitigating factors – things machines would be poorly placed to do. Others were concerned about human biases in comparison to machines’ perceived neutrality. Given these contrasting weaknesses, many therefore advocated combining the more ‘objective’ assessment offered by machines with the human analyst’s familiarity with human emotion.

Panel members raised concerns about potential overreliance on any one method – human or automated processing. For example, some were worried about an overreliance on automation due to cost and time constraints.[82] Others suggested that an overreliance on human decisions could lead to delays.[83] Several panel members raised doubts about blind trust in automated decisions. They referenced the Post Office Horizon scandal – in which more than 900 sub-postmasters were prosecuted over 16 years for theft and false accounting based on evidence produced by a flawed computerised system.[84] Again, many panel members concluded that human and machine analysis needs to operate in tandem.

Key Finding 13: Humans, automation and ML are perceived as complementing each other and should have distinct roles

Panel members also had in-depth discussions on how the tools at national security agencies’ disposal could be deployed for the best results. Context was seen as crucial to explaining whether humans, machines, or both are needed. 

One example of this concerned the severity of the crime that is being investigated – the premise being that the more severe the offence, the more important it will be to have both a machine and a human analysing the data. One participant said, “I think if it was something like this [terrorism investigation] then you would want to be 100% accurate and you would have the algorithm and the human. If it was something like a misdemeanour, then I think accuracy matters less and you would just have the algorithm – like, for me, they’d both depend on what the level of threat is.”[85]

Participants saw not just the severity of the offence but also its urgency as crucial to informing a more machine-based approach. One commented, “the automated version seems more efficient and urgent if it’s more of a time-based risk.”[86]

Several panel members provided more detail about the specific stages of the investigative process in which they thought automation should play a role. With regards to intrusion, one insight here was a preference for the deployment of automated tools at an earlier stage: “if they have some inkling that something’s going on, using an algorithm to have a look into it feels less intrusive, and then if they get to a more concrete stage, then a human stepping in to double-check or look in more depth feels right to me.”[87]

Another panel member considered this from the perspective of fairness and efficiency: “I’m going to go back and agree with [Participant 2] on liking the bot first or the machine. First, looking at it, partly because of fairness, partly because of efficiency. It can go and do it without investing the human resources and flag if a human needs to look at it, and then the human can go back over and look for false negatives or false positives. I see it working hand in hand, but the machine doing the hard bit – and I wouldn’t find it intrusive.”[88] While not all panel members expressed a preference for automating a specific stage, there was a consensus that humans, automation and ML complemented each other. 

Finally, these views depended on the task designated to an ML tool, especially in relation to whether the technology was being used to classify data, make predictions or generate content. Some participants expressed greater comfort with using ML at earlier stages of the investigative process if its job was to classify certain objects or humans rather than to make predictions or generate new content.[89]

In the same vein, panel members expressed concerns about using ML’s predictive capabilities, with one suggesting that “the predicting of future behaviours is really dystopian” and another that “to create new automated tools for predicting future behaviours. That is a really Big Brother-style look at things.” 

Indeed, when tested through the survey, 30% of respondents reported that they would “neither support nor oppose” national security agencies creating a new automated tool to predict future behaviours (Figure 21). These neutral responses likely mask a high degree of uncertainty among respondents, and may be explained by the highly contextual nature of public preferences in this area: the public are keen for automated tools to be used, but only in certain contexts and in tandem with human expertise, and they are more hesitant about prediction. 

Figure 21. Public support for national security agencies collecting and processing personal information to create a new automated tool for predicting future behaviours


Survey question: If each request for information was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by a national security agency to create a new automated tool for predicting future behaviours? (n=3,035)

6. Public Expectations of Automation and Machine Learning

6.1 Priorities for trust in automation and ML 

Key Finding 14: There is a strong appetite for technological innovation within UK national security 

To explore perspectives on automation and machine learning, panel members saw presentations on A) automation and machine learning and B) responsible innovation, which walked them through how these technologies work and some of the associated risks.

Discussions reveal that, despite risks, there is strong appetite for national security agency innovation. Panel members commented that they wanted agencies to ‘continue to invest, develop and learn’ and ‘never reach good enough’,[90] and that national security agencies must consider, ‘is it fair to not use the tools at their disposal?’[91] 

So long as the emerging technologies meet the criteria discussed below, there is high public demand for the agencies to adopt them. 

Key Finding 15: Accuracy and fairness are the key determinants of support for automation

Panel members explored the risks they were most concerned about in the use of automation and ML for national security. They repeatedly referred to accuracy and fairness as their biggest concerns.

For many, effectively tackling national security threats was the top priority. Hence, the accuracy of ML tools was essential – and there was some concern that automation might lead to a reduction in accuracy and loss of a nuanced understanding of data. For some, this concern for accuracy came with a willingness for personal data to be used to train ML systems, reducing the likelihood of errors: “if there isn’t a large enough dataset, the model only learns what it’s given. If you give it a very small subset, it will have errors in it. If it’s going to go live anyway, I’d rather it went live with less errors, so I would sacrifice my personal information for a public good.”[92]

However, this expression of support for using data to train ML should be viewed with caution, as the survey indicated that the public are hesitant about the prospect of national security agencies collecting data for the creation of an automated predictive tool, with only four in ten supportive of this (Figure 21).

Panel members also recognised that the use of automation and ML could allow for analysis of data that might otherwise go unreviewed due to resource constraints, thereby contributing to the goal of leaving “no stone unturned”[93] and to an improvement in analytic capabilities. As one participant put it: “I think with breadth, we might also add depth because I think there is a joining together of all the data in ways at a speed and efficiency that [is] just impossible for humans.”[94]

Beyond accuracy, fairness was a significant priority for many, as they equated it with a system that is transparent, reliable and operates with privacy in mind:

“We have talked about whether it’s fair, haven't we? We’ve said that clearly, this investigation has got to the point where there is really, really good evidence there’s something was seriously amiss here. That’s what makes it fair. Nothing else would. The fact that it’s gone through what [Participant 5] calls the hoops, that confirms that the evidence is weighty. That’s what makes it fair. It’s an appropriate response to the information you have.”[95]

Some were concerned that biases would be “amplified” through algorithms and automation, causing information on certain groups of people to be targeted “disproportionately” without the public becoming aware that this was happening.[96] Many raised concerns that those designing automated systems would themselves “have a bias” and that “biased data” would be used to train AI models.[97] Some panel members also raised concerns about how the security services define “fairness,” referring to prior instances in which particular groups have been targeted disproportionately.

Many believed that whether data processing was human or automated, bias couldn’t be eliminated entirely. As a result, safeguards including independent oversight that incorporates diverse perspectives would be essential, as well as further efforts to ensure adequate and ongoing testing of systems for bias. 

6.2 Safeguards for automation and ML

Key Finding 16: Quality assurance and human oversight are viewed as essential

Throughout the three workshops, panel members continued to discuss the importance of existing safeguards, raising concerns that these checks might be bypassed in the use of automated analytics:

“Before some data can be looked for and collected, there’s a certain number of things that have to be gone through. Is that built into the algorithms? Are those who are writing those algorithms subject to that? Have they got those in mind? It’s that, really. My fear is that they’re not. My fear is that there might be all kinds of data collected that might not be necessary and might intrude into privacy or used in the wrong way.”[98]

Panel members also considered the need for additional safeguards that were specifically relevant to automation and ML. One of the most common suggestions was auditing. Panel members argued that auditing could help “reduce the amount of bias and ensure effectiveness” of the automated systems in use,[99] while ensuring they were not “outdated.”[100] 

Panel members also argued that an algorithmic audit function could improve transparency, with one suggesting:

“I do think there should be clear kind of auditing on what they’re doing with data. So, who has access to it, who’s gone in and done what. And that’s transparency. If something does go wrong, they can look back and they can say, oh, they looked at that data, they made that decision.”[101]

Some argued that this audit function should not focus just on accuracy but also on ethics, with one panel member asking, “is there a standard or guiding document for how to produce these algorithms in line with ethics?”

Many of these checks are already in place within national security agencies and are publicly documented – as seen in, for instance, GCHQ’s AI Ethics publication[102] and the Bailo model card framework.[103]

However, some panel members made proposals that would mark a more significant departure from current practices. One suggested that “all of our concerns are around who is running this”, asking: “who is going to approve it? Do you approve your own work?”[104] Another suggested that there should be an “independent audit authority”[105] to ensure checks were robust and agencies were not marking their own homework. 

Overall, such discussions reveal that while many of the existing quality assurance processes for algorithms are in line with public priorities, the UK Government should pay attention to the issue of independence – and whether there should be greater separation between those who design and deploy automated systems, and those who are responsible for reviewing their efficacy. The enhancement of current quality assurance processes, together with efforts to educate citizens about the existence of these processes, may increase public confidence in questions of oversight and responsibility. 

Members of the citizens’ panel also prioritised human oversight of automated decisions as an additional safeguard. However, there were differing views of what this should look like. Some argued that “there should always be human oversight. Because I don’t trust machines.”[106] Others were comfortable with more intermittent human oversight, suggesting that national security agencies could “just do a little spot check to make sure that the decision you made at the end was fair.”[107]

Despite this variation, many agreed that there always needed to be human checks in place prior to decision-making. For example, one participant argued that “somewhere down the line, you will always have to have someone to make that decision at the end of the day, rather than a machine,”[108] and that “there needs to always be a human that’s responsible at the end of any decision so that it does away with the ambiguity of complex use.”[109] In line with previous research,[110] it is essential that the people performing such checks are empowered, and given the training they need, to question machine recommendations.

From an oversight perspective, many panel members were concerned that ML and automation could lead to an erosion in accountability. As one put it, there is a concern that organisations try to excuse their mistakes by saying, “AI did it.”[111]

This problem was seen as particularly concerning in the context of opaque AI systems, with one panel member saying:

“I am terrified of black ops [sic] decisions. I don't even trust statistical software to tell me what’s right. I want to write the code because I want to be able to say to someone at some point, ‘this is why it’s because of this thing; I can justify this.’ So, rather than not knowing what’s gone on to that system, and even not knowing what it’s not included and what it has included, I can’t interrogate the answers, and I can’t justify it because I don’t have enough information about what's happened. So, I’m just highly mistrustful.”[112]

Overall, panel members favour a high degree of human oversight for any use of automation by national security agencies. The public prioritises explainable systems and more regular checks, even if this might have an incrementally intrusive impact as more human operators become privy to sensitive data. 

7. Conclusion 

This report provides new insights into how the UK public feel about national security data processing. It does so at a time when policymakers are actively considering the future of investigatory powers, and how new technologies might be used to enhance investigative capabilities. In light of this, our primary recommendation for policymakers is to ensure public priorities and concerns shape decisions around future automated data processing within UK national security, to maintain public trust and confidence. 

The evidence presented within this report will be useful for a range of audiences, including:

  • The UK’s national security agencies in shaping whether and how data processing activities are automated, and in understanding how public trust can be measured and maintained throughout this transition.
  • The Home Office in scoping the long-term future of investigatory powers legislation within a rapidly changing technology landscape.
  • IPCO in determining their approach to overseeing automated and ML-driven data analysis by national security agencies.

Each organisation will draw distinct but overlapping conclusions from this research and should combine the evidence presented here with internal priorities to reach more granular recommendations. Nevertheless, we recommend policymakers pay particular attention to three of our key findings: 

  1. While the proportionality test in the legal context focuses on finding the least intrusive option, this research reminds us that intrusion is not a consideration that exists in a vacuum. Instead, it links strongly to the context and purpose for which data is being analysed, and importantly to how data processing is overseen and regularly reviewed.

     

  2. There is public support for both human and automated data processing by national security agencies despite both resulting in privacy intrusion. However, this support depends on certain conditions. The public place just as much importance on the accuracy and fairness of data processing, and the implementation of oversight, as they do on privacy intrusion. National security agencies and oversight organisations should, therefore, be steadfast in ensuring that quality assurance and oversight procedures are robust enough to remain effective in a rapidly changing technological landscape.

     

  3. There is very limited public understanding of existing oversight mechanisms for UK national security, with many assuming there is little oversight in place. Panel members repeatedly questioned why government does so little to raise awareness of the role of national security oversight, suggesting there are opportunities for government to fill these knowledge gaps in future.

Annex: Methodologies Explained

Survey methodology

Survey design

Survey questions were designed by the CETaS team, informed by a literature review exploring the key trade-offs shaping decisions around national security data processing and existing public attitudes research on national security and on automation and ML. Questions were reviewed by external peer reviewers with methodological and subject matter expertise, followed by QA and final adjustments with the Savanta research team. 

The survey was designed to test public attitudes towards a range of data processing scenarios, including those which may be used in future, and including relevant comparators. As a result, all prompts were designed to be hypothetical, and survey participants were told that questions did not necessarily reflect any real-world or planned future use cases. 

At the outset, participants were provided with further context on the survey through the following disclaimer, in addition to standardised information from Savanta about how the survey data would be processed:

“This survey is being conducted by the Alan Turing Institute, the UK’s National Institute for AI and Data Science. It focuses on public attitudes towards data processing for UK national security.”

Participants were then guided through a series of multiple choice questions, as follows:

  1. Understanding of national security agencies:

    1. Participants were asked to self-report on how well they understand the work of the UK’s national security agencies and on how aware they are of the powers national security agencies have to collect data about people in the UK without their knowledge.

       

  2. Support for processing a range of datasets:

    1. Participants were asked: “If each of the following requests was authorised/not authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by this public sector organisation in the following ways?”

    2. This question was repeated for three public sector organisations: a national security agency, a regional police force and the UK Health Security Agency.

    3. Half of participants saw option 1, in which the question noted that the request was “authorised” by an independent oversight body in advance, while the other half saw option 2, “not authorised.” 

    4. Participants were given the following definitions:

      1. National security agency: government organisations that collect, analyse and exploit information to protect UK national security. They work both openly and in secret to support law enforcement, military, public safety and foreign policy objectives. 

      2. UK Health Security Agency: A government organisation that provides public health services and advice in England, such as advising on disease prevention.

    5. For each organisation, the following datasets were tested and presented in a random order for each participant:

      1. Information about their location when travelling in public. 

      2. Identifiable medical data, including their GP records.

      3. Data on which phone numbers they have been in contact with and when.

      4. The content of their text messages from their phone.

      5. Biometric data (for example, fingerprints or facial scans).

      6. Their public posts on a social media site.

         

  3. Support for data processing for a range of purposes:

    1. Participants were asked the following question: “If each request for information was authorised by an independent oversight body in advance, to what extent would you support or oppose personal data (which could include names, addresses, phone records, locations, contacts and other activities) being used by this public sector organisation in the following ways?”

    2. This was repeated for the same public sector organisations as above, with the same definitions given.

    3. For each organisation, the following reasons for data processing were tested:

      1. To investigate terrorism and serious crime.

      2. To detect foreign governments’ spies.

      3. To investigate a crime to which you are suspected to be connected.

      4. To shape long-term strategies and policies of the organisation.

      5. To create a new automated tool for predicting future behaviours.

      6. To monitor political views and preferences and share with political parties.

      7. To share with commercial organisations.

         

  4. Ranking of safeguards:

    1. Participants were asked to rank seven safeguards from most important to least important in the context where their data was being processed by a national security agency.

    2. The seven safeguards listed were as follows:

      1. All personal data (which could include name, address, phone records, location, contacts and other activities) will be stored securely and deleted as soon as possible. 

      2. An independent oversight body will regularly conduct inspections on all the instances where a national security agency has collected and/or processed personal data. 

      3. The action must first be approved by a senior judge. 

      4. Initiatives are put in place to increase transparency around how data gets processed (for example, sharing information with the public around what measures are in place to protect privacy)

      5. An independent court is available to respond to any complaints made surrounding the collection and analysis of data by national security agencies. 

      6. A senior person who works for the national security agency will regularly assess whether all data collection is proportionate to the scale of the threat to national security and/or public safety. 

      7. The action must first be approved by a government minister.

         

  5. Responses to scenarios:

    1. Participants were randomly presented with two out of four variations on a hypothetical scenario involving a live terrorism investigation. Two versions involved private messages being processed, while the other two versions involved public social media posts being processed. Two versions involved a human operator sifting data, while the other two involved an automated algorithm sifting data (Figure 22).

    2. For each scenario, participants were asked how they felt about the scenario, and self-reported on this using three Likert scales: Fairness, Privacy intrusion and Support. 

Figure 22. Data processing scenarios shown to participants

  6. Open-ended reflections

    1. At the end of the survey, an open-text question invited participants to feed in their views: “When considering the use of automated data processing techniques by national security agencies, what do you think are the opportunities and/or risks that this presents?”

Acquiescence bias is the tendency for research participants to agree with research statements, regardless of their underlying preferences. Savanta took several steps to mitigate any impact of acquiescence bias, including:

  • Avoiding ambiguous, unclear or complicated questions.

  • Varying scale options so they are adapted to the item being tested (e.g. agree/disagree, important/unimportant, fair/unfair).

  • Ensuring answer options present the full range of possible perspectives (rather than just yes/no).

  • Using a balanced scale that includes equal numbers of positive and negative options.

  • Setting up scale questions to be mirrored, so that half of respondents see answer options from positive to negative, and the other half see them from negative to positive.

Savanta also monitored fieldwork continuously to identify speeders, flatliners and nonsensical responses to open questions, and employed “trap” questions to further highlight poor-quality respondents. Such respondents were removed from the dataset and replaced with fresh samples. These checks were repeated throughout the fieldwork until a good-quality final dataset was acquired.

Sample recruitment and data collection

Data collection for this study occurred between 31 October and 25 November 2024. The sample was drawn from Savanta’s research panel. This is a standing panel of people who have been recruited to take part in surveys using random sampling methods. 

Prior to going live, a pilot was conducted with over 500 participants. As small adjustments to the questions were made following this pilot, the pilot results are not included in the final analysis. 

Quotas were applied to the online sample to ensure it was representative of the UK adult population based on age, gender, occupation, ethnicity and region. Quotas are based on the most up-to-date data from the Office for National Statistics. 

The online sample was weighted based on official statistics on age, gender, ethnicity, region and occupation in the UK to correct any imbalances between the survey sample and the population, further ensuring it is nationally representative. 
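
The report does not set out the weighting procedure in detail, but the approach described is analogous to standard post-stratification or “raking”. The sketch below is a minimal, illustrative implementation in Python; the column names, target margins and convergence settings are hypothetical and do not reflect Savanta’s actual method.

```python
# Illustrative sketch only: a generic post-stratification ("raking") routine.
# Column names and target margins are hypothetical, not Savanta's.
import pandas as pd

def rake(df, margins, max_iter=100, tol=1e-6):
    """Iteratively adjust a 'weight' column until weighted shares match target margins."""
    df = df.copy()
    df["weight"] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for col, targets in margins.items():
            totals = df.groupby(col)["weight"].sum()
            shares = totals / totals.sum()
            factors = {cat: tgt / shares[cat] for cat, tgt in targets.items()
                       if shares.get(cat, 0) > 0}
            df["weight"] *= df[col].map(factors).fillna(1.0)
            if factors:
                max_shift = max(max_shift, max(abs(f - 1) for f in factors.values()))
        if max_shift < tol:
            break
    df["weight"] *= len(df) / df["weight"].sum()  # normalise so weights average to 1
    return df

# Hypothetical toy sample weighted to illustrative age and gender margins.
sample = pd.DataFrame({
    "age_band": ["18-34", "35-54", "55+", "18-34", "55+", "35-54"],
    "gender":   ["F", "M", "F", "M", "M", "F"],
})
targets = {
    "age_band": {"18-34": 0.27, "35-54": 0.33, "55+": 0.40},
    "gender":   {"F": 0.51, "M": 0.49},
}
print(rake(sample, targets)[["age_band", "gender", "weight"]])
```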

Within this nationally representative sample, we ran an additional boost to recruit vulnerable adults, ensuring that the overall sample contained a representative proportion of vulnerable groups. 

Savanta’s methodology was used to define vulnerable adults as participants who either:

  1. Rated themselves between zero and three out of ten in response to a question on their confidence in using the internet without support, OR

  2. Selected one of the following statements:

  • I struggle to manage my finances; it is difficult for me to keep in control of my money.

  • My income is unpredictable; sometimes my income does not cover my cost of living.       

  • I struggle to keep up with the money I owe to different companies and organisations.

  • I sometimes feel left out or treated unfairly due to my race, ethnicity, gender or sexual orientation.

  • I experience addiction to substances or behaviours.

  • My living conditions sometimes lack safety and stability.

  • Where I live, it is difficult to access the things to support my basic needs.

  • I am an unpaid carer for someone who relies on my support.

  • I struggle to communicate in English much of the time.

  • I am not confident using the internet without support.

Finally, a boost was run to recruit additional young adults, aged 18–24. This boost was weighted by age, gender, region, ethnicity and occupation to be nationally representative of 18–24-year-olds in the UK. 

Consequently, for this survey, two datasets were available for analysis:

  1. UK adults (ensuring a representative proportion of vulnerable audiences). Sample size: n=3,035.

  2. 18–24-year-olds in the UK. Sample size: n=519.

Due to demographic analysis and split sampling, where participants were randomly shown different versions of the questions to allow comparison, the sample size available for analysis is smaller in some instances. This is relevant for the scenario-based questions, where around 1,500 participants were shown each version of the scenario. For clarity, the sample size is included in all relevant figure captions. 

Analysis

Data was analysed by Savanta and the CETaS team via the AllVue platform and through data tables to reveal key trends in participant responses. For demographic analysis, the team focused especially on patterns among young adults and vulnerable groups, with the potential to conduct further analysis of our data in future.

Throughout, a 95% confidence interval (p<0.05) is used to indicate statistically significant trends in the data and arrows are used in some charts to indicate significant differences. To test whether the results were statistically significant, Savanta used a Z-test at the 95% confidence level. Tests were performed on columns within the same break; e.g. male versus female.

Z-tests are based on the normal distribution, which provides a reliable way to make inferences about population parameters. They are particularly useful when dealing with large sample sizes (n>30), as is the case in this survey. Z-tests allow comparison between groups – for example, between demographic groups, or between participants shown different scenario variants. Using Z-tests in this way indicates whether observed differences between groups are statistically significant, rather than attributable to sampling variation, so that findings can be used with confidence to inform policy decisions.
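
As a concrete illustration of the test described above, the sketch below shows a two-proportion Z-test in Python. The counts are hypothetical and Savanta’s exact implementation is not published; the sketch simply shows how a difference in the share of supportive responses between two independent groups would be assessed at the 95% confidence level.

```python
# Minimal sketch of a two-proportion Z-test; counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 70% net support in one scenario group vs 65% in another.
z, p = two_proportion_z_test(1064, 1520, 988, 1520)
print(f"z = {z:.2f}, p = {p:.4f}, significant at the 95% level: {p < 0.05}")
```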

For some charts, Likert-scale data has been aggregated to group together “strongly support” and “somewhat support” into “net support,” and to group together “strongly oppose” and “somewhat oppose” into “net oppose.” 
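
As an illustration, the short sketch below shows how individual Likert responses can be collapsed into these net categories; the response data and labels are hypothetical.

```python
# Minimal sketch of aggregating Likert responses into net categories (hypothetical data).
import pandas as pd

responses = pd.Series([
    "Strongly support", "Somewhat support", "Neither support nor oppose",
    "Somewhat oppose", "Strongly oppose", "Somewhat support",
])

net = responses.replace({
    "Strongly support": "Net support", "Somewhat support": "Net support",
    "Strongly oppose": "Net oppose", "Somewhat oppose": "Net oppose",
})

print(net.value_counts(normalize=True))  # shares of net support / net oppose / neither
```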

To ensure transparency and enable further analysis on this topic, data tables for this survey have been published and can be accessed via Savanta’s website.

Citizens’ panel methodology

What is a citizens’ panel?

A citizens’ panel, or mini-public, is a deliberative process for engaging citizens on a wide range of policy issues. Citizens’ panels consist of a representative group of citizens, randomly selected to deliberate on a particular issue and provide recommendations to inform public policy, placing members of the public at the heart of the policymaking process. 

The recommendations formed by the citizens’ panel are intended to deepen societal understanding of public views and help policymakers, decision-makers and wider civil society better understand public perspectives on the topic. They are particularly effective in exploring value-laden and controversial questions, where knowledge is contested and there are important ethical and social repercussions. 

This citizens’ panel had a number of key features, including:

  • The deliberative process: panel members went through a three-stage process of learning, discussion and decision-making.

  • Independent facilitation: to support the panel and ensure the deliberations are independent from the commissioning body. 

  • Evidence and information: panel members were presented with relevant and accurate evidence during the learning phase.

Citizens’ panel design

Workshop activities were designed by the CETaS team in collaboration with Hopkins Van Mil, informed by the literature review and by initial survey findings. A three-part format with two online, evening workshops and one in-person, full-day workshop was selected to ensure there was enough time for panel members to fully interrogate the topic at hand, while also minimising the time commitment from them and thereby maximising the diversity of people able to join. 

Panel members took part in: 

  • Small group sessions to identify key questions of interest to the panel. 

  • Q&A sessions as a whole group with speakers.

  • Small group reflections on the expert witness and speaker presentations.

  • Small and whole group discussions on areas of interest and importance.

  • Prioritisation exercises to provide a clear record of what was seen as most important to panel members. 

  • An anonymous questionnaire to gather individual perspectives on the topics of discussion.

Each session focused on a distinct topic, with academic speakers providing presentations to inform discussions:

  • Session 1 (online, evening) – Introduction to national security data processing

    • Presentation 1: An introduction to national security

    • Presentation 2: Oversight of national security data processing

    • Presentation 3: The importance of human rights

  • Session 2 (online, evening) – Introduction to automation and ML

    • Presentation 1: Introduction to algorithms and ML

    • Presentation 2: Responsible innovation

  • Session 3 (in-person, daytime) – The future of national security data processing

The activities were designed with the research questions in mind. “Process plans,” which provide a detailed rundown of all activities and presentations included in these sessions, are available on request. 

Sample recruitment

This citizens’ panel on data processing for national security comprised 33 members who were recruited using a stratified sampling method which creates a mini-public broadly representative of the national population of the UK. This is a civic lottery method called ‘sortition’. The process was delivered by the Sortition Foundation.

The recruitment process had three stages: 

  • Stage 1: The Sortition Foundation randomly selected 9,600 addresses from across the UK, each of which received a letter in the post. This invited those aged 18 years or older, living at an address that received a letter to register their interest in participating in the citizens’ panel. 192 people responded to this invitation to express an interest in taking part in the citizens’ panel. 

  • Stage 2: As part of the sign-up procedure, all potential panel members were required to share responses to a small number of demographic and attitudinal questions. This was needed to ensure that the final makeup of the citizens’ panel was broadly representative of the UK population. 

  • Stage 3: This information was then used as input into a ‘sortition algorithm’ which randomly selected 34 panel members, over-recruiting to allow for dropouts and ensure a final panel of at least 30 members. In fact, 33 people took part in the panel process.

Through this recruitment process, several demographic targets were set to ensure the group reflected the UK’s national population. An additional benchmark was added to ensure the group’s attitudes towards national security were reflective of the UK population. Specifically, the target was for the group to match national responses to the question “Do you think the security agencies are spying on you personally in any way?”: I think they are (21%), I think they are not (46%), and don’t know (33%).[113] Figure 23 summarises the demographics of the final group. 

Figure 23. Final demographic breakdown of citizens’ panel participants

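To make the selection step more concrete, the following is a minimal, illustrative sketch of quota-constrained random selection. It is not the Sortition Foundation’s actual sortition algorithm, and all field names, registrant data and targets shown are hypothetical.

```python
# Illustrative sketch of quota-constrained random selection ("sortition").
# NOT the Sortition Foundation's algorithm; all data and targets are hypothetical.
import random

def select_panel(registrants, targets, panel_size, seed=1):
    """Randomly pick up to panel_size people without exceeding any category target."""
    rng = random.Random(seed)
    pool = registrants[:]
    rng.shuffle(pool)
    counts = {field: {} for field in targets}
    panel = []
    for person in pool:
        if len(panel) == panel_size:
            break
        # Accept only if this person does not push any category past its seat target.
        fits = all(
            counts[field].get(person[field], 0) < quota.get(person[field], 0)
            for field, quota in targets.items()
        )
        if fits:
            panel.append(person)
            for field in targets:
                cat = person[field]
                counts[field][cat] = counts[field].get(cat, 0) + 1
    return panel

# Hypothetical pool of 192 registrants and seat targets for a 34-person panel.
registrants = [
    {"id": i,
     "age_band": random.choice(["18-34", "35-54", "55+"]),
     "gender": random.choice(["F", "M"])}
    for i in range(192)
]
targets = {
    "age_band": {"18-34": 10, "35-54": 12, "55+": 12},
    "gender": {"F": 17, "M": 17},
}
panel = select_panel(registrants, targets, panel_size=34)
print(f"{len(panel)} panel members selected")
```

A greedy draw like this can fall short of the target panel size if the randomly ordered pool is unlucky; sortition algorithms used in practice satisfy quotas more systematically.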

An inclusive process

Panel members were given a handbook in advance of their participation. This was designed to provide information so that people could understand their role in the deliberation and take part fully in the process. 

The citizens’ panel was designed to be as accessible as possible. Support included: 

  • Providing support for participation, including one-to-one phone calls and online introductory sessions.

  • Paying all jury members in recognition of the time and commitment given to taking part.

  • Lending those who did not have access to a suitable device an internet-enabled tablet.

  • Lending those who did not have access to a reliable internet connection a portable Wi-Fi hotspot device.

  • Lending additional equipment such as webcams and headsets with microphones as required.

  • Holding in-person jury sessions at a fully accessible venue.

  • Providing access to a prayer room and a quiet space during the in-person workshops.

  • Providing any additional support – for example, translation or childcare – where needed. 

Data collection and analysis

Data collection occurred over three sessions which took place on:

  • 20 November, 17:00–21:00, online.

  • 28 November, 17:00–21:00, online.

  • 30 November, 10:00–16:00, in person.

The data collected included anonymised transcripts of the sessions; facilitator notes taken on either online whiteboards, flip charts or Post-it notes using panel members’ own words; results from online polls, conducted via Menti; and completed questionnaires.

Thematic analysis was conducted via NVivo to group insights into key themes and identify quotes for inclusion in this report. NVivo analysis was based on transcripts and questionnaires, while summative whiteboard notes and summary reports from Hopkins Van Mil also fed into the thematic analysis. 

Combining quantitative and qualitative insights

To ensure the robust integration of quantitative and qualitative data, the project team conducted a detailed comparison of the themes emerging from the survey and the citizens’ panel, identifying areas of alignment and divergence. A findings workshop was also held with experts from both the quantitative team at Savanta and the deliberative team at Hopkins Van Mil to stress-test areas of divergence between the methods and ensure findings were accurately incorporated through thematic analysis.

References

[1] National security agencies were the primary focus of this study and were defined for survey participants as “government organisations that collect, analyse and exploit information to protect UK national security. They work both openly and in secret to support law enforcement, military, public safety and foreign policy objectives.” However, the public’s own definitions of national security and related law enforcement activities and bodies are broad and blurred (as discussed below). This means that our findings will have some applicability to law enforcement more generally. Wherever possible, we distinguish between these contexts while acknowledging that it is impossible to do so completely. 

[2] Anna Knack, Richard Carter and Alexander Babuta, “Human-Machine Teaming in Intelligence Analysis,” CETaS Research Reports (13 December 2022), https://cetas.turing.ac.uk/publications/human-machine-teaming-intelligence-analysis.

[3] Ibid.

[4] Alexander Babuta, Marion Oswald and Ardi Janjeva, “Artificial Intelligence and UK National Security: Policy Considerations,” RUSI, 27 April 2020, https://rusi.org/explore-our-research/publications/occasional-papers/artificial-intelligence-and-uk-national-security-policy-considerations.

[5] Ardi Janjeva, Muffy Calder and Marion Oswald, “Privacy Intrusion and National Security in the Age of AI,” CETaS Research Reports (May 2023), https://cetas.turing.ac.uk/publications/privacy-intrusion-and-national-security-age-ai.

[6] “Explaining decisions made with AI,” Information Commissioner’s Office and the Alan Turing Institute, 17 October 2022, https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf.

[7] CETaS proportionality.

[8] Ardi Janjeva, Muffy Calder and Marion Oswald, “Privacy Intrusion and National Security in the Age of AI,” CETaS Research Reports (May 2023), https://cetas.turing.ac.uk/publications/privacy-intrusion-and-national-security-age-ai.

[9] In 2013, Edward Snowden, a former employee of the US National Security Agency, leaked classified documents to share details of several secret surveillance programmes allegedly run by the US, the UK and others, prompting widespread public and political debate around the norms of surveillance. 

[10] David Anderson, “A Question of Trust: Report of the Investigatory Powers Review,” June 2015, https://assets.publishing.service.gov.uk/media/5a7f9b66ed915d74e622b7ca/IPR-Report-Web-Accessible1.pdf.

[11] RUSI, “A Democratic Licence to Operate: Report of the Independent Surveillance Review,” July 2015, https://static.rusi.org/20150714_whr_2-15_a_democratic_licence_to_operate.pdf.

[12] Daniel Lomas and Steven Ward, “Public Perceptions of UK Intelligence: Still in the Dark?,” RUSI Journal 161(2), 2022, https://www.tandfonline.com/doi/epdf/10.1080/03071847.2022.2090426?needAccess=true.

[13] Milan Dinic, “The YouGov Spying Study Part Four: Trust in UK Intelligence and Security Agencies,” YouGov, 30 September 2021, https://yougov.co.uk/topics/politics/articles-reports/2021/09/30/part-four-trust-uk-intelligence-and-security-agenc.

[14] European Center for Not-for-Profit Law, “New Poll: Public Fears over Government Use of Artificial Intelligence,” 14 November 2022, https://ecnl.org/news/new-poll-public-fears-over-government-use-artificial-intelligence.

[15] Joel Rogers de Waal, “Security Trumps Privacy in British Attitudes to Cyber-surveillance,” RUSI, 5 June 2017, https://rusi.org/explore-our-research/publications/commentary/security-trumps-privacy-british-attitudes-cyber-surveillance.

[16] The Alan Turing Institute and Ada Lovelace Institute, “How do people feel about AI?,” June 2023, https://www.turing.ac.uk/news/publications/how-do-people-feel-about-ai; DSIT, “Public Attitudes to Data and AI: Tracker survey (Wave 4) Report,” 16 December 2024, https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-4/public-attitudes-to-data-and-ai-tracker-survey-wave-4-report; Office for National Statistics, “Public awareness, opinions and expectations about artificial intelligence: July to October 2023,” 30 October 2023, https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/publicawarenessopinionsandexpectationsaboutartificialintelligence/julytooctober2023; Jonathan Dupont et al., “What Does the Public Think about AI?,” Public First, July 2024, https://ai.publicfirst.co.uk/.

[18] Office for National Statistics, “Public Awareness, Opinions and Expectations about Artificial Intelligence: July to October 2023,” 30 October 2023, https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/publicawarenessopinionsandexpectationsaboutartificialintelligence/julytooctober2023.

[19] The Alan Turing Institute and Ada Lovelace Institute, “How do people feel about AI?.” 

[20] The Alan Turing Institute, “7 in 10 say laws and regulations would increase their comfort with AI amid rising public concerns, national survey finds”, 25 March 2025, https://www.turing.ac.uk/news/7-10-say-laws-and-regulations-would-increase-their-comfort-ai-amid-rising-public-concerns.

[21] Emily Middleton and Kirsty Innes, “AI-pocalypse? No,” Labour Together, 1 November 2023, https://www.labourtogether.uk/all-reports/ai-pocalypse-no.

[22] RUSI, “A Democratic Licence to Operate: Report of the Independent Surveillance Review,” July 2015, https://static.rusi.org/20150714_whr_2-15_a_democratic_licence_to_operate.pdf.

[23] Savanta is a market research company and a member of the British Polling Council, with expertise in conducting public opinion polling. Find out more at: https://savanta.com/about/.

[24] Full details of how vulnerable adults were defined within the survey sample can be found in the Annex.

[25] Hopkins Van Mil is an independent public deliberation consultancy, working with researchers to bring people together to inform policy futures. Find out more at: http://www.hopkinsvanmil.co.uk/what-we-do.

[26] Sortition is a random lottery process for choosing research participants. Find out more at: https://www.sortitionfoundation.org.

[27] NVivo is a software tool for qualitative data analysis, enabling unstructured data such as workshop transcripts to be organised thematically. Menti.com is a software tool enabling panel members to take part in live polls.

[28] Anthony Masters, “Sampling Fractions and Populations,” Medium, 22 September 2020, https://medium.com/swlh/sampling-fractions-and-populations-dc48bc482187.

[29] John Zaller and Stanley Feldman, “A Simple Theory of the Survey Response: Answering Questions versus Revealing Preferences,” American Journal of Political Science, 36 (3), August 1992, 579–616, https://www.jstor.org/stable/2111583.

[30] John Burn-Murdoch, “Poll-driven Politics Does Nobody Any Favours,” Financial Times, 29 September 2023, https://www.ft.com/content/4ee301ce-e6ba-45ab-8fd5-0cbe3e5c5980.

[31] Hopkins Van Mil, “Our Tools,” http://www.hopkinsvanmil.co.uk/our-tools.

[33] Adela Gąsiorowska, “Sortition and its Principles: Evaluation of the Selection Processes of Citizens’ Assemblies,” Journal of Deliberative Democracy, 19(1), 2023, https://delibdemjournal.org/article/id/1310/.

[34] Lomas and Ward, “Public Perceptions of UK Intelligence.”

[35] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[36] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[37] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[38] Lomas and Ward, “Public Perceptions of UK Intelligence.” 

[39] Dan Lomas, “Forget James Bond? Public Perceptions of UK Intelligence,” RUSI, 12 October 2021, https://rusi.org/explore-our-research/publications/commentary/forget-james-bond-public-perceptions-uk-intelligence.

[40] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[41] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[42] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[43] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[44] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[45] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[46] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[47] Further, subtler patterns in our survey data may also be possible to analyse in future – in relation to, for example, ethnicity and political leaning.

[48] Janjeva, Calder and Oswald, “Privacy Intrusion and National Security in the Age of AI.”

[49] Communications data is defined by the Home Office as “information about communications: the ‘who’, ‘where’, ‘how’ and ‘with whom’ of a communication but not the content of the communications”. Home Office, “Investigatory Powers (Amendment) Bill: Communications Data and Internet Connection Records,” 26 April 2024, https://www.gov.uk/government/publications/investigatory-powers-amendment-bill-factsheets/investigatory-powers-amendment-bill-communications-data-and-internet-connection-records.

[50] Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford University Press (2009); Marion Oswald, “Jordan’s Dilemma: Can Large Parties Still Be Intimate? Redefining Public, Private and the Misuse of the Digital Person,” Information & Communications Technology Law, 26 (1), 20 January 2017, https://www.tandfonline.com/doi/full/10.1080/13600834.2017.1269870#d1e103.

[51] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[52] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[53] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[54] Personal data were defined in the survey as data that could include names, addresses, phone records, locations, contacts and other activities.

[55] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[56] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[57] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[58] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[59] CETaS Citizens’ Panel, Closing Questionnaire, 30 November 2024.

[60] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[61] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[62] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[63] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[64] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[65] CETaS Citizens’ Panel, Closing Questionnaire, 30 November 2024.

[66] More information on the value of ‘citizens’ juries’ can be found via Involve. See: https://www.involve.org.uk/resource/citizens-jury.

[67] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[68] Miriam Levin et al., “Citizens’ White Paper,” July 2024, https://demos.co.uk/wp-content/uploads/2024/07/Citizens-White-Paper-July-2024_final.pdf.

[69] Janjeva, Calder and Oswald, “Privacy Intrusion and National Security in the Age of AI.” 

[70] Babuta, Oswald and Janjeva, “Artificial Intelligence and UK National Security.” 

[71] Christoph Kern et al., “Humans Versus Machines: Who Is Perceived to Decide Fairer? Experimental Evidence on Attitudes toward Automated Decision-making,” Patterns, 29 September 2022, https://pubmed.ncbi.nlm.nih.gov/36277823/.

[72] Henrietta Lyons, Tim Miller and Eduardo Velloso, “Algorithmic Decisions, Desire for Control, and the Preference for Human Review over Algorithmic Review,” FAccT ’23, 12 June 2023, https://dl.acm.org/doi/10.1145/3593013.3594041.

[73] Ibid.

[74] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[75] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[76] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[77] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[78] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[79] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[80] Knack, Carter and Babuta, “Human-Machine Teaming in Intelligence Analysis.”

[81] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[82] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[83] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[84] “Post Office Horizon Scandal: Why Hundreds Were Wrongly Prosecuted,” BBC News, 30 July 2024, https://www.bbc.co.uk/news/business-56718036.

[85] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[86] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[87] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[88] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[89] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[90] CETaS Citizens’ Panel, Closing Questionnaire, 30 November 2024. 

[91] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[92] CETaS Citizens’ Panel, Workshop One, 20 November 2024.

[93] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[94] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[95] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[96] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[97] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[98] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[99] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[100] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[101] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[102] GCHQ, “Pioneering a New National Security: The Ethics of AI,” 2021, https://www.gchq.gov.uk/artificialintelligence/index.html.

[103] GCHQ/Bailo, “Bailo – Managing the Lifecycle of Machine Learning to Support Scalability, Impact, Collaboration, Compliance and Sharing,” GitHub, https://github.com/gchq/Bailo.

[104] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[105] CETaS Citizens’ Panel, Closing Questionnaire, 30 November 2024.

[106] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[107] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[108] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[109] CETaS Citizens’ Panel, Workshop Two, 28 November 2024.

[110] Knack, Carter and Babuta, “Human-Machine Teaming in Intelligence Analysis.” 

[111] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[112] CETaS Citizens’ Panel, Workshop Three, 30 November 2024.

[113] Dinic, “The YouGov Spying Study Part Four.” 

Citation information

Rosamund Powell, Marion Oswald and Ardi Janjeva, "UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion," CETaS Research Reports (April 2025).