CETaS conducts interdisciplinary research on a range of issues relating to emerging technology and national security policy. The ongoing projects below are expected to be completed in 2024-25.
To be the first to read our upcoming publications, sign up to join the CETaS Network. To view our completed projects, read our Reports and Briefing Papers.
If you would like to get involved in any of these projects, please contact the team at [email protected].
AI-Enabled Influence Operations and Election Security
With more than 60 national elections due to take place around the world this year, concerns have been raised that new AI developments may amplify the impact of hostile information campaigns. These threats range from highly realistic deepfake content and sophisticated hack-and-leak operations to tailored voter suppression efforts and hyper-targeted influence campaigns.
This project will provide a comprehensive evidence base on how AI-enabled disinformation has been used in recent election campaigns, the new threats AI poses to election security, and the technical and policy tools that may help mitigate them. It will also explore new frameworks for measuring the impact of AI-enabled election interference operations. The project published a Briefing Paper in May 2024 – the first of three CETaS publications on AI and election security. An interim Briefing Paper in September 2024 will provide an overview of AI election threats in the UK and Europe, and a final Research Report in November 2024 will offer similar analysis of the US election alongside longer-term recommendations for protecting the integrity of democratic processes.
Public Attitudes to Data Processing for National Security
Emerging technologies are already transforming the ways in which national security and law enforcement agencies use their investigatory powers, as AI and other data processing methods offer growing scope to automate parts of this covert information-gathering process. The impact of this on privacy intrusion for the UK public is uncertain. Some argue that emerging technologies could worsen privacy intrusion, for example if an AI system led to personal data being incorrectly flagged as of interest to national security decision-makers. Others argue that they could improve privacy, for example by reducing the volume of data that human operators need to review, since AI can filter out most of it as irrelevant.
Despite this ongoing debate in academic, policy and legal circles, little has been done to consult the public on what they think about human versus machine intrusion in national security. For example, are the public concerned that data-driven technologies might lead to more of their data being collected by national security agencies in the long term? Would the public perceive their privacy to be better protected by automated methods that reduce human involvement in the processing of their data? And do the public think the current UK regime for investigatory powers oversight supports a level of privacy intrusion that is proportionate to the national security threat? This project addresses these questions by consulting the public directly, to understand what they really think about AI, privacy and national security.
Securing the UK’s AI Ecosystem
The topics of trusted research and security awareness are increasingly being discussed in public by key national security figures in the UK and overseas. In April 2024, the UK security services briefed vice-chancellors of Russell Group universities on the threat posed by foreign states targeting academic research. The Deputy Prime Minister also announced a consultation on measures to protect UK universities, in particular those conducting dual-use academic research.
This project will focus on the threats posed to academic AI research by hostile state actors (HSAs). The research will identify the distinct national security risks associated with the theft of potentially dual-use research data, algorithmic research, and trained models, and will set out best-practice guidelines for protecting academic AI research alongside policy recommendations for strengthening the resilience of the UK’s AI ecosystem to HSA activity.
AI for Strategic Warning
Predicting political change, both stabilisation and escalation, has long been an important function of the intelligence community (IC), which has traditionally relied on the deep expertise of human analysts to make qualitative predictions about the likelihood of belligerent activity and conciliatory responses. Human analysts have been particularly important for understanding human behaviour in assessments of leadership decision-making.
Current conflict modelling data is relatively static: it can identify persistent hotspots around the world but cannot yet predict new outbreaks or escalations in violence or political instability in real time. While analysts have a growing number of data sources available to them, the data picture is fragmented and inconsistent, and there is limited ability to reliably forecast flashpoints and the escalation or de-escalation of instability (e.g. as a result of political transitions). Industry and open-source tools predominantly model intra-state rather than inter-state conflict, and do not incorporate multi-domain conflict or important contested spaces such as space and international waters. Moreover, many factors represented in the academic literature on social complexity (e.g. dissent, infighting, collective memory, public opinion, realistic information/disinformation flows) are not incorporated in most existing conflict modelling tools. Data gaps are therefore a significant challenge for building AI-based conflict modelling tools, and developing the data infrastructure and data-sharing practices needed to enable a performant tool would be an enormous, complex and expensive undertaking.
This Special Competitive Studies Project (SCSP)-CETaS study aims to develop a deeper understanding of the next frontier for AI in conflict modelling, and of whether AI should be adopted for this purpose.