CETaS conducts interdisciplinary research on a range of issues relating to emerging technology and national security policy. The ongoing projects below are expected to be completed in 2024-25. 

To be the first to read our upcoming publications, sign up to join the CETaS Network. To view our completed projects, read our Reports and Briefing Papers.

If you would like to get involved in any of these projects, please contact the team at [email protected].

AI-Enabled Influence Operations and Election Security

With more than 60 national elections due to take place across the world this year, concerns have been raised over how new AI developments may further enhance the impact of hostile information campaigns. These threats range from highly realistic deepfake content and sophisticated hack-and-leak operations, to tailored voter suppression efforts and hyper-targeted influence campaigns. 

This project will provide a comprehensive evidence base on how AI-enabled disinformation has been used in recent election campaigns, the new threats AI poses to election security, and what technical or policy tools may help to mitigate them. It will also explore new frameworks to measure the impact of AI-enabled election interference operations. This project published a Briefing Paper in May 2024 – the first of three CETaS publications on AI and election security. Our interim Briefing Paper in September 2024 contained an overview of AI election threats in the UK and Europe, while our final Research Report in November 2024 provided similar analysis on the US election and longer-term recommendations for protecting the integrity of democratic processes.

Public Attitudes to Data Processing for National Security

Emerging technologies are already transforming the ways in which national security and law enforcement agencies use their investigatory powers, as AI and other data processing methods offer increasing options to automate parts of this covert information-gathering process. The impact this is having on privacy intrusion for the UK public is uncertain. Some argue emerging technologies could worsen privacy intrusion, for example if an AI system led to personal data being incorrectly flagged as being of interest to national security decision-makers. Others argue emerging technologies could improve privacy, for example by lessening the volume of data that human operators need to process, because AI can filter out most of it as irrelevant.

Despite this ongoing debate in academic, policy and legal circles, little has been done to consult the public on what they think about human versus machine intrusion in national security. For example, are the public concerned that data-driven technologies might lead to more data being collected by national security agencies in the long term? Would the public perceive their privacy to be protected by automated methods which reduce human involvement in the processing of their data? And do the public think the current UK regime for investigatory powers oversight supports a level of privacy intrusion that is proportionate to the national security threat? This project addresses these questions by consulting the public directly, to understand what they really think about AI, privacy and national security.

Securing the UK’s AI Ecosystem

The topics of trusted research and security awareness are increasingly being publicly discussed by key national security figures in the UK and overseas. In April 2024, the UK security services briefed vice-chancellors of Russell Group universities on the threat posed by foreign states targeting academic research. The Deputy Prime Minister also announced a consultation on measures to protect UK universities, in particular those developing dual-use academic research.

This project will focus on the threats posed to academic AI research by hostile state actors (HSAs). The research will look to identify the national security risks unique to the theft of potential dual-use research data, algorithm research, and trained models. It will also identify best practice guidelines for protecting academic AI research, as well as policy recommendations for strengthening the resilience of the UK’s AI ecosystem to HSA activity.

AI for Strategic Warning

Predicting political change, both stabilisations and escalations, has been an important function of the intelligence community (IC), which has traditionally relied on the deep expertise of human analysts to make qualitative predictions about the likelihood of belligerent activities and conciliatory responses. The use of human analysts has been particularly important in understanding human behaviour for assessments of leadership decision-making.

Current conflict modelling data is relatively static: it can identify the persistent hot spots in the world, but it cannot yet predict new outbreaks or escalations in violence or political instability in real time. While analysts have a growing number of data sources available to them, the data picture is fragmented and inconsistent, and there is limited ability to reliably forecast flashpoints and the escalation or de-escalation of instability (e.g. as a result of political transitions). Industry and open-source tools predominantly model intra-state rather than inter-state conflict, and do not incorporate multi-domain conflict or important contested spaces such as space and international waters. Moreover, the academic literature on social complexity identifies innumerable factors (e.g. dissent, infighting, collective memory, public opinion, realistic information/disinformation flows) that are not incorporated in the majority of existing conflict modelling tools. Data gaps are a significant challenge for building AI-based conflict modelling tools, and developing the data infrastructure and data sharing practices needed to enable a performant AI-based conflict modelling tool is an enormous, complex and expensive undertaking.

This Special Competitive Studies Project (SCSP)-CETaS study aims to develop a deeper understanding of the next frontier for AI in conflict modelling, and whether AI should be adopted for this purpose.

National Security Implications of International AI Regulation

Around the world, new initiatives to regulate AI are coming into force. In this explainer series, CETaS will provide concise analysis of the implications of these regulatory frameworks, focusing on how the national security community should prepare for AI regulation. So far, this project has covered the EU AI Act, the Council of Europe Convention on AI and the US Executive Order on AI, with further explainers planned for the months ahead.

AI and Online Criminality

The landscape of online criminality is constantly evolving in response to new technological developments. The recent explosion in popularity of generative AI systems has lowered the barriers to experimentation for online criminals, as well as for the public. Yet there remains much that is unclear about how criminal tradecraft reflects the pace of change in the AI space.

It is important for researchers and the security/law enforcement community to understand the evidence on whether AI tools are significantly empowering online criminals, and to forecast trends in such activity over the next five years. They need a detailed understanding not only of whether and how AI tools have become more integral to practices such as cyber reconnaissance, the creation of malware, phishing and the generation of child sexual abuse material, but also of the roles that audio, text, image and video play in this activity. They also need to consider how criminals could increasingly use AI in areas where it has not yet reached its full potential, and how the adaptation or jailbreaking of industry AI tools could accelerate these processes. Finally, it is crucial to understand how malicious actors will commit new types of crime in response to changes in economic incentives brought about by the development of AI.

In this operating environment, the security/law enforcement community also needs a better understanding of measures to effectively counter AI-enabled online criminality, now and in the future. It will need to identify barriers to delivering such countermeasures in order to stay on top of the threat.

By focusing on online criminality, this project will build on CETaS research into harms created by malicious actors’ uses of AI. It will produce evidence-based analysis of how AI tools are transforming and empowering the types of criminality that the public are most likely to experience on a day-to-day basis, and will provide actionable suggestions for how law enforcement can more effectively counter the threat.

AI Safety and Generative AI Evaluation

CETaS has an ongoing programme of work on AI safety and generative AI evaluation. In August 2023, in the run-up to the Bletchley AI Safety Summit, CETaS co-published with the Centre for Long-Term Resilience a briefing paper titled 'Strengthening Resilience to AI Risk: A guide for UK policymakers'. This paper informed various CETaS contributions to the November 2023 AI Safety Summit, garnering extensive engagement across the UK technology and security policy community. The paper preceded a longer-form research report titled 'The Rapid Rise of Generative AI: Assessing risks to security and safety.' The most comprehensive UK-based study of the national security implications of generative AI, the report is based on extensive engagement with more than 50 experts across government, academia, industry and civil society. The report laid the foundations for follow-on papers that focused on 'Generative AI in Cybersecurity' and 'Evaluation of Malicious Generative AI Capabilities'. These outputs have been supported by various expert workshops that convened world-leading thinkers in AI safety and generative AI evaluation, forming the basis of several CETaS briefings to policymakers and presentations at international conferences.

Privacy-Preserving Moderation of Illegal Online Content

With the passage of the Online Safety Act (OSA) in 2023, online platforms now have a legal requirement to actively monitor and remove illegal content. However, while platforms need to implement comprehensive strategies and sophisticated tools capable of identifying such content, there is also a desire to reduce the impact of these processes on user privacy. This is particularly the case on services which use end-to-end encryption protocols.

As current content moderation techniques continue to suffer from limitations in their effectiveness, efficiency and impact on user privacy, it is vital to understand the range of nascent and future methods in this space which could improve the detection and removal of illegal online content.

This project will focus on analysing nascent and future content moderation methods, including AI-based and privacy-enhancing technologies, to assist online platforms in fulfilling their new legal duties under the OSA. The research will look to understand what metrics can be used to assess content moderation methods, as well as explore the feasibility of effectively implementing any promising capabilities identified.