Abstract

AI proliferation is reshaping serious online criminality. While the use of AI by criminals remains at an early stage, there is growing evidence of a substantial acceleration in AI-enabled crime, particularly in areas such as financial crime, child sexual abuse material, phishing and romance scams. Criminal groups benefit from AI’s ability to automate and rapidly scale the volume of their activities, augment existing online crime types and exploit people’s psychological vulnerabilities. This report aims to equip the UK national security and law enforcement communities with the tools to plan and better position themselves to respond to novel threats over the next five years. That process will require more effective coordination and targeting of resources, and more rapid adoption of AI itself. It should start with the creation of a new AI Crime Taskforce within the National Crime Agency – which would collate data across UK law enforcement to monitor and log criminal groups’ use of AI, working with national security and industry partners on strategies to raise barriers to criminal adoption.

This work is licensed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license, which permits use provided the original authors and source are credited. The license is available at: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.

Executive Summary

This report presents the findings of a CETaS research project on the role of AI in serious online criminality. The aim of the report is to provide an evidence-based understanding of how the proliferation of AI systems is reshaping the landscape of serious online criminality. This research aims to equip policymakers and security practitioners to plan for emerging criminal use cases and better position themselves to respond to novel threats now and over the next five years. This time horizon is an operational priority for the UK security and law enforcement community, while also being the current government’s window for action until the next general election.

The key findings of the report are as follows:

  1. There is considerable evidence emerging of a substantial acceleration in AI-enabled crime. This includes evidence of the use of AI for financial crime, phishing, distributed denial of service (DDoS), child sexual abuse material (CSAM) and romance scams. In all these areas, criminal use of AI is already augmenting revenue generation and exacerbating financial and personal harms.

  2. This acceleration in AI-enabled crime is driven by several factors, including: AI’s ability to automate and rapidly scale the volume of criminal activity; the use of AI to augment existing online crime types; the diffusion of AI to criminal groups (from peers, the state and the private sector); criminal innovation in the development and use of AI; and the symmetry between AI’s deceptive capability and widespread human, psychological and cognitive vulnerabilities.

  3. Chinese innovation in frontier AI is beginning to have a significant impact on the threat landscape, with online criminals increasingly exploiting open-weight systems with fewer guardrails to carry out more advanced tasks – as seen in, for example, the CSAM domain. This demonstrates the ramifications of the geopolitical AI race for criminal tactics and innovation.

  4. Criminal attacks targeting AI systems and models are becoming more common, including jailbreaking, the removal of guardrails, the abuse of commercial models and the development of bespoke criminal large language models (LLMs). The exploitation of AI systems by criminal groups is leading to more effective human–machine teaming to achieve illegal objectives.

  5. Criminal market dynamics are being altered by increasing AI proliferation, with crime areas such as fraud and CSAM becoming particularly lucrative and encouraging newer entrants. However, criminal groups still face numerous adoption bottlenecks that law enforcement must target, such as technical expertise, compute, operational security and the reliability of AI systems.

  6. UK law enforcement is not adequately equipped to prevent, disrupt or investigate AI-enabled crime. While legislation may help to deter criminal behaviour in the long term, a more robust and direct approach is needed, centred around proactive deployment of AI systems in law enforcement. This should include: the use of disruptive cyber capability against criminal groups; the rapid scaling of AI capability throughout the UK law enforcement ecosystem, including through education, training and the procurement of advanced AI systems to counter the threat; and the mainstreaming of AI security into the UK’s approach to international cooperation on cybercrime.

We recommend the following actions to accelerate and scale capability to counter AI-enabled crime, in the categories of “proactive disruption” and “AI testing and guidance.”

Proactive disruption

  1. A new AI Crime Taskforce should be established within the National Crime Agency’s National Cyber Crime Unit to coordinate the national response to AI-enabled crime. A Single Point of Contact should assume overall responsibility for overseeing this Taskforce, with dedicated funding provided by the Home Office. The Taskforce should collate data from across UK law enforcement to identify modes, tools, trends and geographical diffusion, map bottlenecks in the criminal adoption of AI tools, and work with national security and industry partners on strategies to raise barriers to criminal adoption.
  2. Law enforcement must rapidly adopt AI tools to counter criminals’ use of AI, embracing opportunities for proactive disruption. To adequately respond to the speed of AI-enabled crimes, law enforcement must leverage technical countermeasures for disruptive activity, learning lessons from national security partners.
  3. UK law enforcement should work with European and international law enforcement partners to ensure compatibility in approaches to deterring, disrupting and pursuing criminal groups leveraging AI. A working group should be established within Europol’s European Cybercrime Taskforce (EUCTF) specifically focused on AI-enabled crime. 
  4. As well as mapping AI tools in policing, law enforcement should ensure it is systematically logging the AI tools that are misused for criminal purposes. The proposed AI Crime Taskforce should maintain a new central database for this purpose, working closely with international partners.

AI testing and guidance

  1. The NCA (on behalf of UK law enforcement) should produce regular intelligence assessments of trends in criminal misuse of AI, to inform strategic decision-making and future AI evaluations by the AI Security Institute (AISI).
  2. AISI’s criminal misuse testing should treat the fraud domain as an immediate priority, with a focus on developing methods to reduce criminal compliance (i.e. AI models complying with criminal requests).

We are already seeing a major shift in AI-enabled criminality. There are many examples of the integration of AI into online crime and criminal behaviour, and growing evidence that such integration results in serious financial losses and personal harms. 

This threat could accelerate at an even faster rate in the next five years if we do not rapidly adopt countermeasures to mitigate risks – investing in AI to counter AI crime, and building law enforcement capacity to respond to the threat. The recommendations outlined here should form the first step in a new multi-agency approach to the UK’s response to AI-enabled criminal offending.

1. Introduction

In May 2024, it was confirmed that the British multinational engineering firm Arup was the victim of an AI scam at its Hong Kong office, in which a worker was deceived into transferring HK$200m (£20m) to fraudsters.[1] The victim was duped by a realistic AI deepfake of the company’s chief financial officer, weaving together synthetic video and audio content. This was not the first incident in which AI-based deception led to a major financial loss, but it is one of the costliest reported to date – and it raises serious questions about the use of GenAI and deepfake content in online crime. 

Despite increasing evidence of AI-enabled crimes, several big questions remain unanswered. Will AI-enabled crime supplant more traditional forms of online crime and/or become a dominant part of the criminal business model? Will AI fundamentally change criminal behaviour, including the intent behind criminal actions and opportunities to commit crime? And, if such changes are likely, how should governments respond to the evolving threat and develop knowledge and capabilities in law enforcement?

In addressing these questions, this report seeks to provide a robust and wide-ranging analysis of the emergence of AI-enabled online crime, its evolution, impact and possible responses to the threat. The report focuses on a five-year time horizon (2025–2030), aiming to provide an evidence-based understanding of how the proliferation of AI systems is reshaping the landscape of serious online criminality. 

The report is structured around four main sections. The first section demonstrates that we have already entered a period of significant change in how criminal groups use AI in a range of crime types, including financial fraud, the dissemination of CSAM, phishing attacks, DDoS attacks, and the reverse engineering of malware, as well as in areas such as romance fraud and other deceptive campaigns and operations. While the impacts and losses are at a nascent stage, there is growing evidence of the emergence of highly lucrative criminal markets in these areas, exploratory use of AI systems by criminal groups, and widespread innovation by criminal organisations featuring AI.

The second section of the report addresses attacks against AI systems and criminal exploitation of those systems, including the emerging practice of jailbreaking and prompt manipulation. There has been widespread coverage in the technical literature of emerging tools and techniques to attack AI systems, but this work has not been contextualised and linked to cybercrime and online crime more specifically.

The third section delves into the shifting market dynamics that undergird the emergence of AI-enabled criminality. It focuses on the ways in which new crime patterns are affecting revenue streams; the nature of openly available or open-weight AI models from countries such as China, which are accelerating offending in certain domains; and various AI adoption bottlenecks that criminal groups will need to overcome to fully capitalise on the opportunities.

The final section of the report is focused on law enforcement responses to AI crime, and how wider developments in AI regulation may be harnessed to address criminal use of AI technologies. 

1.1 Methodology

This report was compiled in a three-stage process. The first involved a comprehensive review of the literature on AI and online criminality. This literature review formed the basis of the wider analysis in this project, and was derived from an extensive scoping of literature across different disciplines – including computer science, criminology, crime science, economics and political science – and an examination of the literature on AI and serious online criminality relating to the research questions that guide the project. These are: 

RQ1: How has the widespread adoption of AI tools affected cybercriminal tradecraft and the efficacy of online criminality? What trends might we expect to see in the next five years? 

  1. How integral have AI tools become to areas such as cyber reconnaissance, malware generation, phishing, the generation of CSAM, and vulnerability exploitation?
  2. Which modalities (audio, text, image, video) or combination of modalities have proven to be easiest for cybercriminal networks to leverage using AI and most effective in improving cybercriminal efficacy?

RQ2: How are online criminals adapting, exploiting and attacking industry AI tools?

RQ3: What new financial incentives exist for serious online criminality (such as CSAM distribution) and what are the global implications of this? In which criminal growth areas has AI not yet reached its full potential?

RQ4: What reforms should government and law enforcement implement to more effectively counter AI-enabled online criminality? What are the barriers to success?

After the literature review phase was completed, the project team conducted a series of semi-structured interviews and focus groups with 22 experts from government, academia, industry and law enforcement, including regional police forces. Initial questions from the literature review phase were refined for the interviews based on gaps in knowledge and the literature. The final stage of the methodological approach involved an online workshop in which the preliminary findings were presented to these experts. The findings were refined based on the feedback received. 

In parallel to the literature review and research interviews, the authors ran a data science project studying the role of AI in automating and industrialising online romance scams. This integrated both traditional natural language processing techniques and LLMs to analyse, replicate and assess the procedural structures of romance fraud. Crime script analysis was conducted using unsupervised learning methods[2] to identify the recurring language patterns and key scammer personas in victim reports. Different prompting strategies – including zero-shot, one-shot and few-shot prompting – were applied to produce scam messages at various stages of the romance fraud lifecycle. The outputs were benchmarked against real-world scam messages using BERTScore, allowing for an empirical comparison of the linguistic similarity between AI-generated and human-generated scam content. 
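To illustrate the first of these steps, the sketch below shows one minimal way of surfacing recurring language patterns from victim reports using TF-IDF and Latent Dirichlet Allocation (the techniques referenced in footnote [2]). It is illustrative only: the example reports are invented placeholders, and the project’s actual pipeline was more extensive.

```python
# Minimal sketch of crime script analysis with TF-IDF and LDA.
# The victim reports below are invented placeholders, not project data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

victim_reports = [
    "He said he was a surgeon working overseas and needed money for flights",
    "She asked me to buy gift cards after weeks of daily loving messages",
    "The engineer on the oil rig could never video call but wanted a loan",
]

# TF-IDF highlights the most distinctive term in each report.
tfidf = TfidfVectorizer(stop_words="english")
tfidf_matrix = tfidf.fit_transform(victim_reports)
top_terms = np.asarray(tfidf_matrix.argmax(axis=1)).ravel()
for report, idx in zip(victim_reports, top_terms):
    print("Most distinctive term:", tfidf.get_feature_names_out()[idx])

# LDA (fit on raw term counts) groups reports into recurring themes,
# a crude proxy for the scammer personas described above.
counts = CountVectorizer(stop_words="english")
count_matrix = counts.fit_transform(victim_reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(count_matrix)
vocab = counts.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_id}: {', '.join(top)}")
```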

Additionally, controlled multi-agent chat simulations were conducted using Llama 3.2, GPT-3.5 Turbo and GPT-4o mini, each tested as an AI-generated scammer persona engaging with a simulated victim. The quality of the AI-generated scam interactions was evaluated using an LLM-as-a-judge approach, in which LLMs were tasked with assessing outputs based on coherence, consistency, emotional persuasiveness and deception sustainability. 

The full results from this parallel data science project are available in the CETaS Briefing Paper “Automating Deception: AI’s evolving role in romance fraud,” published shortly after this Research Report.

2. The Transformative Potential of AI-Enabled Crime

In this report, we use the term “AI-enabled crime” to refer to crime that is committed using frontier AI technologies, tools and processes.[3] Currently, the most serious concern in relation to AI and online crime has been the proliferation of GenAI capabilities and criminal use of LLMs. GenAI uses automated processes such as deep learning to generate content, including text, speech, audio, images and video.[4] Distinguishing between these different modalities, and their relative impact, is an important focus of this project and is discussed further in a subsequent section of the report. Our primary focus on frontier AI systems rather than criminal use of automated tools more generally was intended to give this research a forward-facing disposition, filling a gap in the literature on current and emerging tradecraft.

2.1 Transition towards AI-enabled criminality

Concerns over the intersection of AI and online crime have grown since the early 2020s, accelerated by the 2022 release of OpenAI’s ChatGPT and the subsequent release of a wide range of commercial competitors, including Google’s Gemini, Microsoft’s Copilot, xAI’s Grok and, more recently, DeepSeek’s V3 and R1 models. The concerns expressed soon after the release of the first wave of GenAI tools related to the manipulation of prompts to induce AI systems to produce content that could be used for criminal purposes; the removal of guardrails within systems that prohibit the generation of undesirable or illicit content; and the use of LLMs to spread disinformation and, potentially, reverse engineer malware.[5] AI technologies continue to evolve with the market entry of more sophisticated forms of agentic AI, which have higher levels of autonomy and contextual awareness, as well as the ability to solve more complex problems.[6]

This raises the question of whether we are experiencing a fundamental transition towards criminal markets dominated by AI, or whether AI systems will only affect online criminality in an incremental or peripheral way. Presently, while most uses of GenAI for criminal purposes are still in a nascent, exploratory and experimental phase, many criminal organisations are ramping up innovation in this space. AI already clearly features in, and is having an impact on, many of the core online crime use cases, including:

  • Phishing. 

  • DDoS.

  • Ransomware.

  • Child sexual exploitation, including the generation of CSAM.

  • Online fraud, including financial fraud, identity theft, banking fraud and romance scams.

  • Malware generation.

AI models and tools appear to be useful for malign actors who are already engaged in cybercrime and who are now experimenting with the functions that such technology can provide, while also lowering barriers to new entrants.

This research has found growing evidence of an acceleration in many types of AI-enabled crime, and a high proportion of interviewees expected a dramatic shift in the volume, scale, and severity of activity in the coming five years and even in the immediate year ahead.[7]

2.2 Drivers of growth

The first reason for this expected growth is that AI systems allow for the automation and scaling of many types of malicious behaviour. Law enforcement interviewees noted the Danish-led Operation Cumberland (spanning 19 countries), in which one individual was arrested in possession of 400,000 AI-generated images of child sexual abuse. Police are struggling to keep up with the volume of activity in this area.[8]

The second reason is the augmentation of certain crime types with AI, and the efficiencies and improvements AI can bring to the criminal business model. The ability to combine more realistic content with automated delivery at scale is extending the frontier of this area.[9]

The quality of text outputs, and the consequent ability to persuade targets to click on malicious links, has been a major factor in the prevalence and scope of successful attacks. The augmentation that GenAI provides is visible across many use cases. In one interview, it was noted that AI could be used at various stages of the lifecycle of a ransomware attack – from network reconnaissance and smart payload delivery, to augmenting negotiations between the target and the ransomware gang, to laundering illicit gains through automated cryptocurrency trading.[10] Moreover, despite some scepticism regarding AI’s malware development capabilities,[11] the delivery of malware at scale is a definite possibility; as one interviewee noted, with malware the pivotal factor is “scale rather than sophistication.”[12]

A third reason for this transition towards AI-enabled criminality relates to the nature of criminal organisations themselves and technology diffusion patterns. According to one interviewee, it is a rule of thumb that new capabilities developed by state actors will start to appear in organised crime within four to five years: “anything that’s useful is being done.”[13] Criminal adoption of new technologies can happen through deliberate and active attempts by criminal groups to weaponise technologies, or through more incremental experimentation that builds on uses in other domains.

Well-organised cybercrime groups in this area can be highly innovative in adopting new technologies for criminal purposes, with some criminal leaders – including those involved in some of the prominent ransomware gangs – exercising highly strategic and often visionary thinking about crime and revenue generation, including from AI.[14] Innovation in criminal groups is also far less constrained by ethical principles or regulation compared to the private and public sectors.[15] One interviewee noted that this area has seen the emergence of an “arms race” between attackers and defenders, in which criminals are trying to acquire high-value assets by stealing data from institutions, tech companies and governments.[16]

AI-enabled crime is also expected to undergo a major change in the coming years based on the capability to use generated content to exploit human vulnerabilities as well as online systems. In this sense, there is a match between the capacity of AI to produce manipulative content and human psychological frailty, which will create and sustain widespread deceptive criminal practices.

Figure 1. Drivers of transformation in the scope, severity and impact of AI-enabled crime

2.3 Threats to AI

With AI-enabled serious online crime likely to increase in scope, severity and impact in the coming years, it is becoming clear that criminal threats to AI systems themselves are also growing.

2.3.1 Removal of guardrails and prompt manipulation

There is already a wide-ranging scientific literature that focuses on areas such as developing typologies and categories of attacks; exploring the attack surface of AI systems and models; and understanding the emerging practices of jailbreaking, manipulating guardrails and data poisoning, whereby the datasets used to train AI are interfered with in ways that affect the integrity of the data.[17] Recent research suggests that jailbreaking attacks against large reasoning models (versions of LLMs that go beyond text generation to provide a more complex reasoning capability) have led to a fall in malicious prompt rejection rates from 98% to 2% in OpenAI’s o1/o3 model series, and to Google’s Gemini 2.0 Flash Thinking “eagerly providing harmful responses,” including those relating to terrorism and criminal strategies.[18]

A prominent fear in this research concerned the removal of guardrails, allowing models to provide information that facilitates crime planning. Interviewees noted cases where popular LLMs were used to elicit information about how to effectively “seduce a child.”[19] Anecdotal evidence from interviews[20] suggests that criminal groups are developing their own LLMs with the guardrails removed, something that is supported by recent reporting.[21] In this sense, LLMs are acting as partners in crime for such groups. 

Other interviewees noted that, in some cases, it was not necessary to remove guardrails to facilitate crime – with prompt manipulation techniques enabling more convincing phishing emails, for example. While there is a lack of data on this practice, and while it is difficult to distinguish between human and AI-crafted phishing text, interviewees noted that this could be a big change in what has been one of the most common and pervasive forms of cybercrime.[22] 

Criminal groups operating in non-Western and non-English language jurisdictions may also reap benefits from this simple functionality. A recent report on Google’s Gemini LLM links such activity to state-sponsored advanced persistent threats (APTs), including APT42, an Iranian group: 

“APT42 used the text generation and editing capabilities of Gemini to craft material for phishing campaigns, including generating content with cybersecurity themes and tailoring the output to a US defense organization. APT42 also utilized Gemini for translation including localization, or tailoring content for a local audience. This includes content tailored to local culture and local language, such as asking for translations to be in fluent English.”[23]

There is growing evidence of various criminal modifications of LLMs, including a generative AI tool called WormGPT – which is designed to help craft convincing business email compromise communications, and which is for sale on dark web forums.[24] The malicious group behind WormGPT also allegedly created FraudGPT – which is for sale on the Telegram platform, and which supports several attack vectors, including spear-phishing, cracking and carding (illicit use of credit cards).[25] This emerging field of criminal activity provides evidence linking the illicit hacking and manipulation of AI models to criminal organisations’ use of AI to enhance and amplify cyberattack methods and vectors. In all these areas, it should be noted that there are barriers to criminal attacks or exploitation of AI systems – and that these were noted widely by interviewees for the project. AI developers are also constantly refining models to prevent prompt manipulation attacks, and best practice regarding risk mitigation processes is continuously evolving.

2.4 Threats across the AI lifecycle

Noteworthy for the emergence of criminal markets in attacking AI models and systems are the wider processes surrounding AI development cycles. Attacks are possible in the development phase, the testing phase and during the rollout of AI systems (the period immediately after the model becomes commercially available). GPT-4 was hacked by a security researcher within hours of its release, and a ‘universal’ jailbreak that worked against LLM-based chatbots including Bing Chat, Bard and Claude was released shortly thereafter.[26] Vulnerabilities also arise from the growing trend towards buying or licensing pre-trained AI models – in which pre-existing vulnerabilities may not be known or fully understood by AI security teams that are less mature in their structure, training and resources.[27] Arguably, these wider sociotechnical factors are as important for the emergence of criminal markets in AI attacks as the software vulnerabilities themselves.

3. AI Use Cases and Case Studies

While conclusive figures on the impact and acceleration of AI-enabled crime remain elusive, the transition to a world of increasingly pervasive AI-enabled criminality is underway. In this section of the report, we look at the evidence across three different AI and serious online crime use cases.

3.1 Case Study 1: Hong Kong heist – the impact of AI on financial fraud

The case mentioned at the start of this report relates to a £20m loss by a British multinational in Hong Kong following a scam based on synthetically generated content. The case demonstrates many of the anticipated transformational effects of AI on online crime:

  • Criminal innovation. Multimodal sophistication in the integration of both synthetic video and audio content, involving numerous employees of the firm.

  • Augmentation of existing criminal practices. In this case, layering synthetic content onto fraudulent text/written requests for payments. 

  • Preying on human vulnerabilities by masquerading as trusted entities in company settings. The incident was classified as “obtaining property by deception.”

This case indicates that combinations of modalities (text, video and audio) may be a significant feature of the emerging AI fraud market. The incident also suggests that AI-generated media may be combined with more conventional cyberattack methods to achieve criminal impact. In this case, the employee responsible for making the transfer appeared to have received an email from the company’s chief financial officer, pointing to a combination of AI techniques with phishing attacks.

The banking sector is expecting a rapid growth in AI-enabled financial fraud in the years ahead. According to one recent projection by Deloitte, this area is expected to emerge as “the biggest threat to the industry, potentially enabling fraud losses to reach $40bn in the US by 2027, up from $12.3bn in 2023.”[28]

Figure 2. Generative AI’s projected effect on fraud losses

Sources: FBI; Deloitte
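
As a back-of-envelope check (our calculation, not Deloitte’s), the compound annual growth rate implied by those two endpoints can be worked out directly:

```python
# Compound annual growth rate implied by the cited endpoints:
# US fraud losses of $12.3bn in 2023 rising to a projected $40bn in 2027.
losses_2023, losses_2027 = 12.3, 40.0   # USD billions, figures as cited above
years = 2027 - 2023
cagr = (losses_2027 / losses_2023) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 34% per year
```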

These projected figures should certainly be treated with caution, and financial losses from AI-enabled fraud will need to be disaggregated from those caused by non-AI activity. Nonetheless, AI-enabled crime is increasingly perceived in the banking sector as one of the most serious risks, with one survey suggesting that 37% of banks viewed it as their biggest challenge – higher than the figures for account takeover and money laundering.[29] These concerns are also beginning to be reflected in actual losses, with one survey reporting that more than half (51%) of organisations lost between $5 million and $25 million to AI-based or AI-driven threats in 2023.[30] Furthermore, the evolution of financial crime tactics – from localised schemes based on handwritten letters to large-scale AI-enabled operations using automated call centres – demonstrates the increasing scalability of these threats. Interviewees expected AI-enabled crime to affect financial fraud significantly in the years ahead, with one interviewee noting: “fraud is the obvious area, because it’s relatively easy to do and low risk for the threat actors.”[31]

3.2 Case Study 2: AI and child sexual exploitation

According to a recent report by the Internet Watch Foundation,[32] there has been significant growth in the use of AI in child sexual exploitation and the creation of CSAM. The report finds that more than 3,500 AI-generated images of child sexual abuse on one dark web forum were realistic enough to be assessed under the same laws as real CSAM.[33] The current state of GenAI in this area suggests that the technical capability now exists to create imagery depicting more severe categories of abuse.

One of the few substantive analyses of the growing threat to children and women from AI-based extreme pornography highlights the creation of ‘Sweetie’, a computer-generated virtual child that attracted 1,000 adults from 71 countries, who offered to pay the child for simulated sex acts.[34] This portends an increase in webcam-driven sexual abuse using AI systems, and further opportunities for criminal offending created by AI. Law enforcement is seeing a concerning growth in this area, with one interviewee noting that this was “the most transnational bit of policing I’ve ever done.”[35] Investigations of AI-generated CSAM consume a lot of police time, with some cases taking more than a year.

AI crime committed through online environments is likely to persist because it is more easily replicable. Once developed, AI crime tools and methods can be easily shared and commercialised.[36] Police are seeing 3D visualisation of CSAM being created using more sophisticated methods and tools (e.g. Daz 3D) and, according to research interviews, there is evidence of learning within and between offenders, who are sharing images and the knowledge to produce them.

As one interviewee noted, “the communities are very open; they will upskill each other – sell tools and models. They are about developing the tactic as a collective.”[37] This approach to learning is a common factor in the growth of many forms of online crime. The revenue generated from criminal activity in the area does not appear to be undergoing a sharp increase at this time, while the image market for CSAM still attracts small but highly frequent payments for individual images.

Interviewees also noted concerns about children being groomed by chatbots, the use of gaming platforms such as Minecraft to groom children, datasets being used to train AI image generation already containing CSAM, and the automated dissemination of material on popular messaging apps such as Snapchat.[38]

This problem is likely to be compounded by the accessibility of online AI-driven platforms to youth gangs. According to recent reports, groups of teenagers have been involved in online extortion, dissemination of illicit images and the use of online platforms to spread new ideologies, including incel beliefs and extreme right-wing views.[39]

3.3 Case Study 3: Romance fraud – AI-assisted deception in the relationship-building phase

As AI tools advance, fraudsters are increasingly leveraging LLMs and deepfake technologies to enhance deception in online scams. While AI-driven systems can efficiently scale and personalise outreach efforts, the process of relationship-building, a crucial precursor to financial exploitation, still relies on a blend of automation and human oversight.

Figure 3. AI’s advantages in romance scams

Beyond text-based tools, fully autonomous deepfake interactions remain technically challenging. Yet fraudsters are increasingly using deepfake media to maintain the deception and build trust with their victims.[40]

Figure 4. Capabilities of AI-powered deepfakes

As AI capabilities continue to advance, fraudsters will likely refine their use of LLMs and deepfake media, blending automated deception with strategic human oversight. Future scams may leverage increasingly convincing real-time AI interactions, reducing the need for direct human involvement. The evolution of AI-driven relationship-building tactics underscores the growing challenge of distinguishing between authentic and manipulated digital identities.

4. Market Dynamics and AI Crime

This section explores how the proliferation of AI systems is affecting the market dynamics of online criminality, potentially encouraging criminals to enter areas of criminality that are new to them. There is evidence to suggest that open-source or open-weight AI tools are helping criminals develop new specialisms, reshaping their revenue models in several areas. This may also present challenges to how we currently categorise different crime types[41] – as different ecosystems come together in novel ways, our understanding of criminal motives must become more cross-cutting and less siloed.

4.1 Crime patterns and revenues

As described in the case studies above, the combination of AI-generated media with more conventional cyberattack methods will become a prominent attack vector for criminals. Interestingly, this analysis suggests that as cyber defence hardens and becomes more effective, criminal actors will increasingly see AI threats that manipulate people and human cognition as an easier target – suggesting that they will engage in the kind of significant trade-offs that are common to other types of crime.

Interviewees agreed that the profitability of AI-driven fraud is a significant concern, propelled by the low barriers to entry and easily downloadable starter packs. These would feature high-quality voice-cloning functionality and features such as background noise to make it appear that the fraudster is in a legitimate call centre.[42] There was concern among interviewees that AI could take “social engineering to another level,” with criminals tactically exploiting stretched resources in policing – which would make it difficult to seriously investigate cases involving the theft of less than £30,000.[43] With the growing number of high-profile data breaches in recent years, information such as credit card numbers and dates of birth is increasingly at risk of being hoovered up by maliciously tasked AI tools. Such information can be combined with biometric data such as voice samples to produce highly sophisticated attacks on an individual.[44]

Another particularly lucrative development will involve the ability to conduct extortion on a significant scale. By compromising a cloud provider, criminal groups could infiltrate hundreds of different entities. If an LLM can be leveraged to conduct extortion negotiations, the impact of this type of crime would become more immediate and more severe. This could lead to a potential dichotomy of “human-speed defenders versus AI-speed attackers,” giving criminals an edge in the event of a significant increase in supply-chain attacks.[45] As well as taking place at the enterprise level, AI-based extortion scams are becoming more common at the individual level.[46]

Finally, the creation of CSAM images is also highly profitable (albeit through smaller transactions) and is further incentivised by the fact that in many jurisdictions, the use of AI to generate CSAM is not illegal.[47] In some cases, AI may act as a bridge into new criminal markets, and as a moat to protect and strengthen criminal groups’ established revenue streams.

4.2 Transnational effects

AI will reinforce the transnational nature of online criminality. Microsoft’s 2024 assessments conclude that LLMs are being used by a variety of state actors in exploratory ways, including to enhance spear-phishing operations; develop knowledge about known cybersecurity vulnerabilities; craft more convincing social engineering text, including by targeting NGOs and academics to gain knowledge about geopolitics; obtain deeper access when a system is compromised; and generate code to bypass security controls.[48] While this observation reflects the use of AI to refine and streamline criminal processes and to achieve greater productivity, the technology does not appear to be drastically changing the revenue, reach or impact of these actors at this stage.

A large proportion of cybercrimes targeting victims in the UK are committed by overseas offenders. With the financial backing that state actors benefit from, such offenders can be expected to acquire their own sandboxes to develop and deploy AI-enhanced tools. The more sophisticated criminal groups are likely trying to follow this path; notably, malware developers have become increasingly sought after by organised crime groups.[49] Further research could explore the possibility that AI is changing the dynamics between the state apparatus in countries such as Russia, Iran and North Korea and the organised crime groups operating out of those nations. Moreover, although some countries or regions have long held greater ‘market share’ in certain crime types (e.g. West Africa and romance fraud or Russia and CSAM),[50] more research is still needed on whether AI is reshaping some of these regional specialisms, and what the knock-on effects could be for public safety.

A particularly important development in recent months has been the impact of increasingly capable and easily accessible frontier AI systems emanating from China. The release of DeepSeek’s V3 and R1 models in December 2024 and January 2025 respectively caused Western governments, developers and commentators to panic that Chinese AI innovation had finally caught up to Silicon Valley and might be set to outpace it.[51] But what has not received as much attention are the implications of this for online criminality: namely, the fact that state-of-the-art AI tools from China – which Western governments have little leverage over – are easily exploitable by criminals targeting victims in places such as the UK.[52] A workshop participant in this project highlighted the “crash to bang with Chinese AI in the space of two months” and the “wholesale use of Chinese models for child sexual exploitation … which are leading the way in terms of CSAM video creation.”[53] Criminal exploitation of these tools may be indirect rather than direct, leveraging more specialised systems that are built on top of the larger, more capable foundation models. Yet the fact that this step appears to have become even easier with the proliferation of Chinese models will give UK law enforcement great cause for concern.

4.3 Adoption bottlenecks

Much like the public, criminals are tracking AI trends with a view to how the technology can make their lives easier. The wider integration of AI into everyday life will be reflected in criminal tradecraft.[54] But a consistent theme of this research has been the need, when assessing the effects of AI, to focus on criminal groups’ readiness to adopt powerful AI tools. A variety of different variables could be at play here:

  • Technical expertise. Different criminal groups will have varying degrees of access to the expertise in areas such as data science and programming that they need to leverage AI to the best of its ability. While developments in the quality of AI-generated code are rapid, skilled humans are still required to check and refine outputs.
  • Compute. Running advanced AI systems demands significant compute, which is costly and difficult to secure covertly, while customised AI systems can be expensive to maintain. Similarly, it is challenging to access cloud-based tools while avoiding the attention of law enforcement.
  • Operational security. Criminal groups often rely on interpersonal trust and secrecy, amplifying their concerns about the digital footprints they may leave behind and the risk of AI systems compromising operational control. The fear that certain deployments of AI will be traced back to them may deter criminal groups that do not possess the technical capacity to anonymise their operations effectively.
  • Reliability. For criminal groups that have spent years honing a specialism that provides reliable revenues, there is always a risk that introducing an inherently probabilistic tool into the equation will upset a careful equilibrium, particularly in high-stakes criminal operations. This may be especially true for the jailbroken AI systems that criminal groups are likely to be leveraging, which are not necessarily rigorously audited for their performance in the same way as legitimate tools – and may, therefore, introduce misinformation or other inaccuracies into criminal processes.

Partly because of these variables, it appears that AI is not yet integral to cybercrime operations and the cybercrime market is still dominated by the types of attacks that do not necessarily include, use or require AI to achieve effects. The accelerating growth of the ransomware market, for example, suggests that financial revenue will continue to be generated regardless of the integration of AI into ransomware attacks. As one interviewee for this project noted, there is also a “novelty bias” associated with GenAI, where the hype surrounding the transformative potential of frontier AI tools does not map perfectly onto the complex structures, networks and incentives that make up a criminal organisation. Further, the scientific literature and evidence base in this domain remains immature: one interviewee expressed a concern that “we’re operating still in the realm of conjecture and opinion.”

These are examples of adoption bottlenecks that the law enforcement community, in partnership with national security and industry stakeholders, needs to be more front-footed in targeting. The more these bottlenecks are mapped and understood, the easier it will be to develop bespoke approaches to disruption in each case, raising the barrier to entry for criminal groups.

Nonetheless, it is still worth noting that criminal groups are unlikely to be limited by many of the bottlenecks that affect society at large. They still have time, resources and illicit financial motivations that most members of the public do not.[55] This means that it is uncertain how long the bottlenecks will remain, particularly if the rewards of integrating AI are seen to be growing exponentially.

4.4 Offence–defence battle

A further factor in establishing the relationship between modality, persistence and potential revenue generated by AI crime relates to the balance between offence and defence. In some use cases, attackers are accruing advantages over defenders, but defensive measures are evolving and will likely mitigate some of these advantages over time. The changing balance of advantage between attackers and defenders is well-studied in the defence, cyber security and strategic studies literature, and could be extrapolated to AI crime.

For example, Rebecca Slayton argues that it has been traditionally assumed that attackers will have the advantage in cyberspace (due to well-known factors such as anonymity and the large attack surface provided by computing and data) but that this is far more context-dependent than assumed.[56] The extent of the organisational resources, skills, knowledge, maturity and efficiency of various actors may drastically decrease attacker advantage. This has been observed in other security domains, including in signals intelligence and communications, where governments have accrued advantage over time. Bruce Schneier makes a similar point regarding defensive technologies, arguing that the incorporation of machine learning into software development processes could significantly narrow attacker opportunities by patching vulnerabilities before the software is released.[57] ‘Security by design’ could be greatly impacted by the defensive application of machine learning. Analysis in this area suggests that AI-enabled cybercrime may persist and generate more revenue when directed at vulnerable targets (which is likely why AI-enabled romance scams are scaling). 

Taking all this together, the following variables and factors influencing modality, persistence and revenue can be identified:

  • Cost to attacker versus cost to defender.
  • Resources required for attack versus resources for defence.
  • Complexity versus simplicity of modality.
  • Level of public awareness and resilience.
  • Extent of embeddedness in computational environments.
  • Qualitative versus quantitative nature of new AI capability.
  • Deployment of AI and ML in software development processes.

These factors will shape the level of risk that AI-enabled crime presents and will determine whether AI tools offer increasing or decreasing financial incentives for online criminality. 

Figure 5. Market dynamics that accelerate or constrain AI-enabled crime

5. Reforms, Responses and Barriers to Action

While criminals are likely to have the advantage in the short term, the issue of what more can be done to counter the threat from AI-enabled crime is at the top of policymakers’ minds. There is also increasing discussion of how AI should be used in this endeavour, along with ongoing debates about the role and efficacy of regulation in countering criminal behaviour – as one interviewee said quite bluntly, “criminals don’t care about regulation.”[58]

While it is beyond the scope of this report to go deeper into regulatory debates, interviewees for the project noted a wide array of challenges and opportunities in countering AI-enabled crime, which are distilled briefly here. Consistent with previous CETaS research on the security risks posed by GenAI, many conveyed concerns about the exploitation of open-source AI for crime.

There are opportunities to plug related gaps in legal frameworks. One approach could be to extend criminal provisions covering child sexual exploitation to individuals who request images of children – not just those using AI models to create CSAM. This would address the demand side of the market. Interviewees also called for banning certain tools, including ‘nudifying’ applications, especially where the companies producing them are registered in the UK.[59]

However, there was also awareness of the limits to regulation. These included the slow pace of progress and the constant bargaining between political and commercial stakeholders. Some were sceptical that regulation would have a specific impact on crime, as opposed to more general societal use and impact. As one interviewee argued, “until countries can regulate effectively in AI and quickly, and rapidly adapt to the evolvement of AI, then effectively we have no regulation.” It was further noted that introducing standards is “a pain” for companies and will push criminals into other countries and jurisdictions, creating “a big whack-a-mole.”[60]

5.1 AI versus AI

The implication of the evidence presented in this report is that offensive or criminal AI has the advantage over defensive AI. However, it is likely that the defensive use of AI will be critical in addressing the rise in AI-enabled online crime. There was some optimism in this area from interviewees, with one industry analyst saying there was promise in “moving from manual firefighting to highly automated and highly sophisticated network defence, threat detection applications.”[61]

Progress is already being made in this area, with AI systems increasingly being used to identify malicious actors in real time;[62] enhance investigative tools and support analysis of large volumes of text data;[63] and counter AI-generated deepfakes, phishing scams and misinformation campaigns.[64] 

Tools to verify content authenticity are useful in combating these threats,[65] and some detection models have achieved high accuracy rates, enabling proactive mitigation of AI-enhanced phishing attacks.[66] Efforts to create a secure-by-design market in AI may not be a panacea for AI-enabled crime – and would cut against commercial incentives for the quick release of software and models – but they could nonetheless be important in the fight against such crime.
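
To make the defensive pattern concrete, the sketch below shows a minimal phishing-text classifier of the kind referenced above. The training examples are invented placeholders; operational systems described in the literature rely on large labelled corpora and richer features (headers, URLs, sender reputation) rather than a handful of sentences.

```python
# Minimal sketch of a defensive phishing-email classifier.
# Training examples are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid losing access",
    "Minutes from yesterday's project meeting are attached",
    "Lunch on Thursday to discuss the quarterly report?",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(emails, labels)

# Probability that a new message is phishing.
print(model.predict_proba(
    ["Please verify your password to keep your account active"]
)[:, 1])
```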

Given that AI is vulnerable to activities such as data poisoning and integrity attacks, technical measures focused on enhancing data integrity and ensuring model robustness will also be important. For example, IBM’s “Framework for Securing Generative AI”[67] encourages organisations to harden AI environments and thereby defend against supply-chain attacks, such as those that target open-source pre-trained models with malicious intent.
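
One simple control in that spirit – our illustration, not a prescription drawn from the IBM framework itself – is to verify a downloaded pre-trained model artefact against a pinned hash before it is ever loaded, so that a tampered file in the supply chain is rejected:

```python
# Illustrative supply-chain check: refuse to load a pre-trained model file
# unless it matches a SHA-256 digest pinned at procurement/review time.
# The path and digest below are placeholders.
import hashlib
from pathlib import Path

MODEL_PATH = Path("models/classifier-v1.bin")
EXPECTED_SHA256 = "0123456789abcdef..."  # digest published by the trusted source

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Model artefact does not match its pinned hash; refusing to load.")
```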

5.2 Barriers and opportunities in law enforcement

As technical solutions evolve to counter AI-enabled threats, there is an equally urgent need for law enforcement to develop expertise in using AI effectively. This was highlighted in interviews, with one interviewee arguing there is an “enormous gap between the technical capability of law enforcement in the UK and the nature of the problem.”[68] Another noted they were “very concerned about the police’s ability to understand what is out there, deal with it and utilise AI itself.”[69] 

Clearly, the successful integration of AI into law enforcement requires comprehensive action across training, collaboration, regulation and public engagement. This will be challenging, given resource pressures and the push for more visible policing and safer streets. Nonetheless, obligatory and structured training on AI will likely be required, especially as police forces adopt the technology. Specialised training will enable law enforcement to keep pace with sophisticated transnational criminal networks, and leveraging AI will be increasingly important as the threat diffuses.[70]

Enhanced training should not only address ethical and legal considerations but also equip the police with the skills to deploy advanced technologies to handle large data volumes efficiently and responsibly: as one interviewee stated, “if the problem is tech-driven, the solution has to be tech-driven as well.” [71] 

This is a key motivating factor behind the recommendation to establish a new AI Crime Taskforce within the NCA’s National Cyber Crime Unit to coordinate the national response to AI-enabled crime. The collation of data from across law enforcement to monitor and log criminal groups’ use of AI, and the mapping of bottlenecks in criminal adoption of AI tools to raise barriers to adoption, would be crucial to developing law enforcement’s tradecraft in response to an evolving AI threat landscape. 

References

[1] Heather Chen and Kathleen Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” CNN, 4 February 2024, https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.

[2] Term Frequency–Inverse Document Frequency (TF-IDF) and Latent Dirichlet Allocation (LDA).

[3] Frontier AI refers to highly capable general-purpose systems that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced systems.

[4] Kim Martineau, “What is Generative AI,” IBM, 20 April 2023, https://research.ibm.com/blog/what-is-generative-AI.

[5] David C and Paul J, “ChatGPT and large language models: what’s the risk?,” National Cyber Security Centre, https://www.ncsc.gov.uk/blog-post/chatgpt-and-large-language-models-whats-the-risk.

[6] Mark Purdy, “What Is Agentic AI, and How Will It Change Work?,” Harvard Business Review, 12 December 2024, https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work.

[7] Author focus group with government participants, 27 January 2025.

[8] Author interview with government participant, 20 February 2025.

[9] Marc Schmitt and Ivan Flechais, “Digital deception: generative artificial intelligence in social engineering and phishing,” The Artificial Intelligence Review 57(12), 12 October 2024, 324, https://doi.org/10.1007/s10462-024-10973-2.

[10] Author interview with academic participant, 15 January 2025.

[11] Sarah Mercer and Tim Watson, “Generative AI in Cybersecurity: Assessing impact on current and future malicious software,” CETaS Briefing Papers (June 2024), https://cetas.turing.ac.uk/publications/generative-ai-cybersecurity. This was also noted in interviews.

[12] Author interview with industry participant, 9 January 2025.

[13] Author interview with academic participant, 15 January 2025.

[14] Author interview with academic participant (2), 15 January 2025.

[15] Author interview with academic participant, 15 January 2025.

[16] Author interview with academic participant, 15 January 2025.

[17] Zhen Xiang, David Miller and George Kesidis, “A Benchmark Study of Backdoor Data Poisoning Defenses for Deep Neural Network Classifiers and A Novel Defense,” 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing, Pittsburgh, US, 2019, 1–6.

[18] Martin Kuo et al., “H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to Jailbreak Large Reasoning Models, Including OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Thinking,” arXiv preprint, 2025.

[19] Author interview with government participant, 9 January 2025.

[20] Ibid.

[21] Matt Burgess, “The Hacking of ChatGPT Is Just Getting Started,” Wired, 13 April 2023, https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/.

[22] Author interview with government participant, 20 January 2025.

[23] Google Threat Intelligence Group, “Adversarial Misuse of Generative AI,” 2025, https://services.google.com/fh/files/misc/adversarial-misuse-generative-ai.pdf.

[24] Andrew Blake, “Crimeware tool WormGPT: AI for BEC attacks,” SC Media, 13 July 2023, https://www.scworld.com/news/crimeware-tool-wormgpt-ai-bec.

[25] Steve Zurier, “New AI phishing tool FraudGPT tied to same group behind WormGPT,” SC Media, 25 July 2023, https://www.scworld.com/news/new-ai-phishing-tool-fraudgpt-tied-to-same-group-behind-wormgpt.

[26] Burgess, “The Hacking of ChatGPT Is Just Getting Started.”

[27] Simona Soare, “AISEC Final Research Report,” Northwest Partnership for Security and Trust, 2024 (unpublished).

[28] Liz Lumley, “Deepfake fraud directed at banks on the rise,” The Banker, 12 June 2024, https://www.thebanker.com/Deepfake-fraud-directed-at-banks-on-the-rise-1718178559.

[29] Mitek, “Identity Intelligence Index,” Mitek Systems, https://www.miteksystems.com/files/docs/Mitek__REPORT10.pdf.

[30] Biocatch, “AI, Fraud, and Financial Crime Survey,” 2024, https://www.biocatch.com/ai-fraud-financial-crime-survey.

[31] Author interview with academic participant, 15 January 2025.

[32] Internet Watch Foundation, “What has changed in the AI CSAM landscape?,” 2024, https://www.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf.

[33] Ibid, p.7.

[34] Claudia Ratner, “When ‘sweetie’ is not so sweet: Artificial intelligence and its implications for child pornography,” Family Court Review 59(2), 386–401, 29 April 2021, https://doi.org/10.1111/fcre.12576.

[35] Author interview with government participant, 15 January 2025.

[36] Matthew Caldwell et al., “AI-enabled future crime,” Crime Science 9 (14), 2020, 6, https://doi.org/10.1186/s40163-020-00123-8.

[37] Author interview with government participant, 20 January 2025.

[38] Author interview with government participant, 15 January 2025.

[39] Rachel Hall, “Blackmailing girls and encouraging suicide: the young British men in online gangs,” The Guardian, 25 March 2025, https://www.theguardian.com/uk-news/2025/mar/25/young-british-men-convicted-for-crimes-as-online-gang-members-two-case-studies.

[40] Cassandra Cross, “Using Artificial Intelligence (AI) and Deepfakes to Deceive Victims: The Need to Rethink Current Romance Fraud Prevention Messaging,” Crime Prevention and Community Safety 24 (1), 30–41, 4 January 2022, https://doi.org/10.1057/s41300-021-00134-w.

[41] Author interview with academic participant, 14 January 2025.

[42] Author interview with government participant, 14 January 2025.

[43] Author interview with government participant, 14 January 2025.

[44] Author interview with government participant, 9 January 2025.

[45] Author interview with industry participant, 19 January 2025.

[46] Justin Greene and Allie Weintraub, “Experts warn of rise in scammers using AI to mimic voices of loved ones in distress,” ABC News, 7 July 2023, https://abcnews.go.com/Technology/experts-warn-rise-scammers-ai-mimic-voices-loved/story?id=100769857.

[47] Author interview with government participant, 9 January 2025.

[48] Microsoft Threat Intelligence, “Staying ahead of threat actors in the age of AI,” Microsoft Threat Intelligence, 14 February 2024, https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/.

[49] Author interview with government participant, 20 January 2025.

[50] Author interview with government participant, 20 January 2025.

[51] Sarah Mercer, Samuel Spillard and Daniel Martin, "China’s AI Evolution: DeepSeek and National Security," Alan Turing Institute Expert Analysis (February 2025).

[52] Laura Dubois, “Criminals use AI in ‘proxy’ attacks for hostile powers, warns Europol,” Financial Times, 18 March 2025, https://www.ft.com/content/755593c8-8614-4953-a4b2-09a0d2794684.

[53] CETaS workshop, 18 March 2025.

[54] Author interview with academic participant, 20 January 2025.

[55] Author interview with academic participant, 9 January 2025.

[56] Rebecca Slayton, “What Is the Cyber Offense-Defense Balance? Conceptions, Causes, and Assessment,” International Security 41(3), Winter 2016–2017, 72–109, https://www.jstor.org/stable/26777791.

[57] Bruce Schneier, “Machine Learning to Detect Software Vulnerabilities,” Schneier on Security, 2019, https://www.schneier.com/blog/archives/2019/01/machine_learnin.html.

[58] Author interview with government participant, 24 January 2025.

[59] CETaS workshop, 18 March 2025.

[60] Author interview with government participant, 24 January 2025.

[61] Author focus group with government participants, 27 January 2025.

[62] Microsoft Threat Intelligence, “Staying ahead of threat actors in the age of AI.” 

[63] Interpol, “ChatGPT impacts on Law Enforcement,” August 2023, https://www.interpol.int/content/download/20035/file/ChatGPT-Impacts%20on%20Law%20Enorcement-%20August%202023.pdf.

[64] Europol, “AI and Policing,” 23 September 2024, 49, https://www.europol.europa.eu/publication-events/main-reports/ai-and-policing.

[65] UNICRI, “Not Just Another Tool: Public Perceptions on Police Use of Artificial Intelligence,” November 2024, https://unicri.org/sites/default/files/2024-11/Public-Perceptions-Police-Use-Artificial-Intelligence.pdf.

[66] Chibuike Samuel Eze and Lior Shamir, “Analysis and prevention of AI-based phishing email attacks,” arXiv, 8 May 2024, https://arxiv.org/abs/2405.05435.

[67] IBM, “Introducing the IBM Framework for securing Generative AI,” 25 January 2024, https://www.ibm.com/products/tutorials/ibm-framework-for-securing-generative-ai.

[68] Author interview with academic participant, 15 January 2025.

[69] Author interview with academic participant, 20 January 2025.

[70] UNODC, “Responsible AI Innovation in Law Enforcement: Understanding Risks and Opportunities,” 14 November 2024, https://www.unodc.org/unodc/en/human-trafficking/glo-act6/Countries/responsible-ai-innovation-in-law-enforcement_-understanding-risks-and-opportunities.html.

[71] Author interview with government participant, 20 January 2025.

Citation information

Joe Burton, Ardi Janjeva, Simon Moseley and Alice, "AI and Serious Online Crime," CETaS Research Reports (March 2025).
