This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited.

Introduction

2024 was another hectic year in tech. Leading artificial intelligence (AI) companies continued to release groundbreaking new products at a rapid rate, with highly capable multimodal models firmly establishing themselves in the generative AI landscape. Market leaders such as OpenAI are becoming increasingly embedded in the defence sector – a new partnership with Anduril announced in December signals a turnaround from OpenAI’s previous position that its models must not be used for “weapons development” or “military and warfare.” On the regulatory front, the EU AI Act came into effect after years of wrangling, the Council of Europe Convention on AI was formally adopted and the US Executive Order on AI made great strides, only to be left hanging in the balance with the incoming Trump administration. And while AI appears to have had a limited role in undermining democratic elections, hostile state activity continues to threaten international security and democracy both offline and online – the discovery of a TikTok-based foreign influence operation recently triggered a rerun of the Romanian presidential election.

The emerging technology landscape will continue to evolve at pace throughout 2025, with significant implications for national and international security. While these future developments are impossible to accurately predict, this article outlines the five emerging trends CETaS researchers are most concerned about for the year ahead, and the five trends we are most excited about.

Five concerning trends 

  1. A decommissioned US AI safety apparatus. The incoming Trump administration has articulated a desire to roll back some of the keystone AI initiatives of the Biden administration – including through the repeal of the Executive Order on AI, which appointed chief AI officers across the US Government and created various AI guardrails in the absence of concrete legislation. This could put the US approach at odds with the nascent global AI safety coalition that has emerged since the first AI Safety Summit in November 2023. And it poses significant questions about the future of the US AI Safety Institute. Considering US companies’ dominance of the AI innovation landscape, there is considerable concern about losing the heft of the US Government in global AI safety. A more protectionist approach to the US AI sector under the Trump administration could also undermine AI companies’ continued cooperation with foreign AI Safety Institutes, potentially affecting safety testing agreements.
     
  2. The role of AI in fuelling public disorder. The UK riots of summer 2024 demonstrated people’s increased vulnerability and receptivity to online disinformation in the aftermath of a serious security incident – where emotions are heightened and individuals are more likely to suspend reason, critical judgment and reference to evidence. Many social media accounts spread inflammatory or incorrect details about the Southport murder suspect, and there were signs of how AI and other automated tools could be weaponised to sow division and incite violence. These events highlighted a growing intersection between easily available AI tools, extremist online networks and mobilisation to physical action and violence. Researchers need to dissect how AI-fuelled disinformation interacts with the many other complex ingredients that motivate public disorder. They should develop an analytical framework that equips decision-makers with the tools required to understand how content is represented, circulated, amplified and transmitted through interpersonal networks.
     
  3. Indirect prompt injection. A recent CETaS article authored by Advai researchers highlighted how prompt injection is one of the most urgent issues facing state-of-the-art generative AI models. This refers to the manipulation of a large language model (LLM) through crafted inputs, causing the model to inadvertently carry out an attacker’s commands – such as to interfere with the system’s decision-making, distribute disinformation to the user, disclose sensitive information, orchestrate intricate phishing attacks or execute malicious code. These are prominent risks with the increased adoption of retrieval augmented generation (RAG), a technique for enhancing the accuracy and reliability of generative AI models with data drawn from external sources. In both the public and private sectors, organisations looking to adopt LLMs will need to be cognisant of security best practice and the importance of appropriate user training.
     
  4. Generative AI with socially aware characteristics. The interim International Scientific Report on the Safety of Advanced AI described “the use of conversational general-purpose AI systems to persuade users over multiple turns of dialogue” as an “underexplored frontier.” If generative AI architectures develop higher levels of social awareness and persuasiveness, this will create new opportunities for malicious actors. A CETaS Briefing Paper published in July 2024 highlighted the potential risk of terrorist and violent extremist (TVE) groups leveraging generative AI for radicalisation purposes. The paper weighed the evidence of extremist groups experimenting with generative AI systems, with prominent examples relating to the glorification of propaganda content rather than the direct one-to-one persuasion of individuals. Potential inflection points to be concerned about include TVE groups developing enhanced RAG capabilities with datasets tailored for radicalisation; generative AI gaining the ability to compile accurate information about potential radicalisation targets; and autonomous propaganda campaigns integrating with social media accounts through open-source implementation.
     
  5. Digital forensics and evidential considerations in the age of AI. As the general population becomes more familiar with the use of AI tools, this poses new challenges for investigative capability and criminal justice processes. For example, the proliferation of deepfakes may cast doubt on the authenticity and provenance of CCTV, audio recordings and other common evidential sources, as well as forensic standards and evidence management systems. Until now, concerns around AI and criminal justice have focused on the use of AI to carry out crimes, such as through the automation of fraud. However, not enough attention has been paid to how AI could help criminals evade justice by, for instance, tampering with evidence, faking alibis or manipulating media content.

Five exciting trends

  1. UK AI policy and industrial strategy. The 2024 UK general election ushered in a new government, and the public’s expectations of delivery will begin to ramp up in 2025. In the technology domain, we hope to see greater clarity on plans for AI legislation and the UK’s industrial strategy. Peter Kyle, secretary of state for science, innovation and technology, said in November that the government aims to implement a legal framework for AI and strengthen the infrastructure required to promote the sector’s development. The legislation is expected to focus exclusively on “frontier models,” which is consistent with the remit of the AI Safety Institute to date. But, given the uncertainty around the future trajectory of the AI industry, there is a risk that the legislation will be incomplete. Regarding industrial strategy, the government published in October a green paper outlining its rationale, and we hope to see an evolution of this document into more concrete proposals in the coming months. As with any industrial strategy, success will depend on how quickly and effectively the government can direct funds to the right places, and the extent to which the private sector is primed to come on board with the government’s plans and invest commensurately.
     
  2. Small language models (SLMs). Businesses around the world are weighing up how they can most efficiently capitalise on the opportunities presented by language models. Many are realising that LLMs are not necessarily the most economical choice, and that SLMs with parameters in the millions or low billions will provide more customised outputs at a fraction of the computing and energy cost. Part of this stems from SLMs’ ability to run on-site or on a device, removing the need for masses of expensive cloud processing. This will appeal to many organisations and government departments that demand greater control over their data. While it is unlikely that SLMs will replace the demand for LLMs, their unique advantages could open the door to deployment in a wider range of sectors.
     
  3. Small modular reactors (SMRs). 2024 saw ever-increasing interest in nuclear energy, both as a means of meeting global climate and energy targets and of servicing the demand of an insatiable AI sector. Google landed the world’s first corporate agreement to buy power from Kairos SMRs to meet its electricity demand for AI. Amazon and Microsoft have struck deals for nuclear-powered data centres and nuclear plants respectively. Google’s deal has sent a market signal that it is making a long-term investment to accelerate the development of SMRs – meaning that we can expect further activity in this space in 2025. At the national level, as a signatory to the Declaration to Triple Nuclear Energy, the UK is expected to build up its network of SMRs by 2050. This year, the UK Government is likely to announce the two approved contractors for its first SMRs. Alongside the recent designation of data centres as critical national infrastructure and the forthcoming UK Cyber Security and Resilience Bill, SMRs form part of the foundations of creative solutions to both the climate crisis and AI bottlenecks that are safe, secure and trustworthy. (A forthcoming CETaS article will explore these governance imperatives in more detail.)
     
  4. Tactical training data for future AI-based decision support. As explored in a recent CETaS article, a persistent challenge for conflict modelling is that developers only have access to training datasets that are fragmented, incomplete and difficult to share between public-private, inter-agency and intergovernmental partners. Since Russia’s full-scale invasion of Ukraine, the rapid development and deployment of open-source intelligence tools have generated considerable amounts of data that could be used to better understand evolving military concepts and doctrine. In turn, this could help the defence and security community train and build better AI. Hundreds of thousands of hours of drone footage and space imagery are generating vast quantities of AI training data for intelligence and tactical decision-support systems. Nonetheless, the ability of AI to predict strategic shocks is still very nascent.
     
  5. AI and robotics. The global robotics market is currently valued at around $78bn and is projected to reach $165bn by the end of 2029. The AI boom of the last couple of years has made many investors wonder whether robotics is primed for a similar trajectory. Nvidia, one of the world’s most valuable companies, is betting on robotics as its next big driver of growth as competition in the AI chipmaking business continues to intensify. Nvidia will launch its latest generation of compact computers for humanoid robots in the first half of 2025, with the aim of leading the pack in the event of a robotics revolution. If we are indeed approaching an inflection point in the robotics market, hardware could quickly overtake LLMs as the biggest arena of competition. The overlap between AI and robotics is exemplified by how frontier models have helped train robots using simulated environments, ensuring they can operate more effectively in the real world. The need to manage unpredictable settings and varied conditions has frustrated the most advanced robot prototypes to date. AI could play a crucial role in accelerating robots’ learning process and adaptation to different environments, overcoming years of over-promising and under-delivering. While there may be a long way to go before robots truly sense their surroundings and autonomously ‘think’ about their next moves, the coming year may yield some critical innovations that bring this closer.

There will undoubtedly be many surprises along the way in 2025. But the trends summarised here should give policymakers, analysts and market watchers a sense of the key developments to expect this year, based on current indicators and the findings from recent and ongoing CETaS research. We look forward to engaging with our network of partners and collaborators throughout the year to help the UK and its allies maximise the benefits of these developments – and to strengthen our security and prosperity while defending against emerging risks.

The views expressed in this article are those of the authors, and do not necessarily represent the views of The Alan Turing Institute.

For new insights into developments at the intersection of emerging technology and security, sign up to the CETaS Network here.

Authors

Citation information

Ardi Janjeva et al., "CETaS Outlook: Emerging Technology Trends to Watch in 2025," CETaS Expert Analysis (January 2025).