Foreword
By Dr Marsha Quallo-Wright (Director of Technology Futures, GCHQ) and Dr Paul Killworth (Deputy Chief Scientific Adviser for National Security)
Never before has so much attention been focused on the risks and opportunities of emerging technology.
The last year has ushered in a new era of scientific progress, a new geopolitical landscape and a new phase in the UK’s approach to AI security and governance.
2024 saw more people vote in democratic elections worldwide than ever before, driving debate around the potential impact of AI on political discourse and electoral processes. CETaS’ extensive research on AI-enabled influence operations found no conclusive evidence that AI-generated disinformation had a meaningful impact on the results of Western elections. In most cases, CETaS argued, AI-enabled disinformation served to reinforce pre-existing political beliefs rather than sway undecided voters.
While these findings are reassuring, AI-generated content will nevertheless continue to play an important role in shaping election discourse – inflaming political debates and exacerbating polarisation. Coordinated action will be needed between the UK and its international partners to ensure that powerful new AI systems do not threaten the integrity of political discourse, the cornerstone of our open democratic societies.
The start of this year was marked by the launch of the Government’s AI Opportunities Action Plan – setting out the UK’s ambitions to shape the AI revolution on principles of shared economic prosperity, improved public services and increased personal opportunities. The plan prioritises cross-sector AI adoption and maximising the UK’s role in frontier AI, with 50 recommendations for how this should be achieved in practice.
Safety and national security remain central to the UK’s approach to AI governance. The AI Safety Institute (AISI) has evolved to become the AI Security Institute – emphasising its focus on the most serious AI security risks with the potential to cause real-world harm. AISI will work closely with the UK national security community to advance our understanding of these risks, helping to keep the country safe from emerging threats.
Alongside the work of AISI, November 2024 saw the launch of the government-funded Lab for AI Security Research (LASR) – a world-leading partnership between The Alan Turing Institute, the University of Oxford, Queen’s University Belfast and Plexal. LASR is already working closely with the UK national security community to understand emerging threats to AI security and to develop novel mitigation measures. It is a leading example of cross-sector collaboration at the intersection of AI and national security.
These partnerships will be crucially important for the UK in the years ahead, against a backdrop of fierce international competition. The recent release of DeepSeek-V3 has challenged many previously held assumptions in the market. The Chinese AI model was reportedly trained in two months for less than $6 million, around 2% of the cost of comparable models. Analysts argued that, rather than relying on massive volumes of training data and computing power, DeepSeek had shown how to achieve high performance with significantly fewer resources.
Our partnership with The Alan Turing Institute and CETaS is a leading example of the UK national security community’s collaborative approach to science and technology development. We recognise the critical importance of drawing on diverse networks of multidisciplinary expertise in the UK and internationally, and are grateful to all those who provide their valuable time to support our work.
This CETaS Expert Analysis Compendium provides the latest evidence-based analysis of key developments in technology and security, drawing on expertise from across academia, industry and government. We hope you enjoy reading it, and thank you again for making this work possible.
Citation information
Centre for Emerging Technology and Security, "CETaS: Expert Analysis in Technology and Security," CETaS Expert Analysis (April 2025).