Abstract
This CETaS Briefing Paper provides an evidence-based analysis of hostile influence operations enabled by artificial intelligence (AI) during the UK, European Union (EU) and French elections of June and July 2024. It is the second in a series on AI and election security, following our May 2024 publication that highlighted AI threats across 19 recent elections. This new study analysed 16 viral AI-enabled disinformation cases in the UK election and 11 in the EU and French elections, and found no evidence that AI meaningfully affected the results, as most exposure was concentrated among voters whose political beliefs already aligned with the content. However, deeper concerns include deepfakes inciting hate against political candidates, confusion over whether AI content is authentic, and unethical AI use in campaigns. Emerging risks include AI-labelled satire misinforming voters, and deepfake pornographic smears of political figures. Both domestic actors and hostile states, including Russia-linked groups, played a role in disseminating AI-enabled disinformation, though traditional tactics remained more influential. The paper calls for careful consideration of how to address misleading AI-generated content while protecting free speech and enhancing democratic engagement through AI. A final Research Report in November 2024 will provide similar analysis of the US election and longer-term recommendations for protecting the integrity of democratic processes.
This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited.
Executive Summary
This CETaS Briefing Paper provides an evidence-based analysis of hostile influence operations enabled by artificial intelligence (AI) during UK, European Union (EU) and French elections throughout June and July 2024. It is the second of three CETaS publications on AI and election security.
Our first Briefing Paper in May 2024 presented evidence from 19 recent elections, detailing how AI was used maliciously and the subsequent impact of these threats. Our final Research Report to be published in November 2024 will provide an analysis of AI threats from the US election cycle, and present long-term policy and technical recommendations for protecting the integrity of democratic processes.
Key findings from the UK and European elections are as follows:
- We identified just 16 confirmed viral cases of AI-enabled disinformation or deepfakes during the UK general election, while only 11 viral cases were identified in the EU and French elections combined. Echoing findings from our previous research, this volume was far lower than many had feared ahead of these important campaign periods.
- As with other recent elections, there is no evidence that AI-enabled disinformation or deepfakes meaningfully impacted UK or European election results. This is because most exposure was concentrated among a minority of users with political beliefs already aligned to the ideological narratives embedded within such content. In this sense, it was more likely to consolidate those with similar pre-existing perspectives rather than sway undecided voters.
- However, the aftermath of these cases suggests wider damage to the integrity of the democratic system. We identified evidence of:
- Deepfakes inciting online hate against political figures, potentially threatening their personal safety;
- Confusion over whether AI-generated content is real, damaging trust in online sources; and
- Politicians using AI in campaign ads without clear labels, incentivising unethical election behaviour moving forward.
- While the types of AI threats identified in the UK and European elections were broadly consistent with global trends observed in other recent elections, new specific types of misuse were identified which are resulting in additional risks. This includes:
- Realistic deepfakes labelled as parody or satire containing disinformation that users interpret as factual, misleading voters over election issues;
- Media content of political candidates wrongly accused of being AI-generated, eroding public confidence in the online information environment; and
- Politicians targeted with deepfake pornographic smears, resulting in harm to their wellbeing and professional reputation.
- Tackling deceptive content (such as deepfakes circulated under the guise of satire) requires a careful balance between countering disinformation while also protecting free speech and recognising the benefits some satirical content can provide to political discourse. However, regulators and others face significant challenges in responding to viral cases of such content.
- Creation and circulation of AI deepfakes and disinformation can be attributed both to domestic actors and hostile state-sponsored groups.
- The ease with which users can now create sophisticated synthetic content using generative AI has created additional sources of risk. For example, many content creators of viral UK election deepfakes were members of the public who expressed no explicit intention to erode election integrity.
- While we identified only one instance of a UK political candidate directly sharing AI-generated content without clear labels, this behaviour was far more prevalent in the European elections.
- Alongside domestic actors, all three elections revealed signs of hostile interference linked to Russia, albeit with minimal impact.
- Generative AI played less of a role in boosting the virality of disinformation compared to traditional interference methods and human influencers. Malicious actors continued to rely on well-established disinformation tactics, such as flooding comments sections on social media with bot accounts (astroturfing), and exploiting politically-aligned social media influencers to spread disinformation, thereby enhancing its perceived authenticity (information laundering). Where generative AI was used, this was primarily to re-write news articles with additional strategic narratives, or enhance the scale of online disinformation activities.
- Our research also identified beneficial uses of generative AI during the election period. The UK election in particular revealed how generative AI can be used to:
- Amplify important campaign issues such as climate change through AI-generated parodies;
- Provide a platform to improve the link between voters and political candidates through synthetic online personas; and
- Assist fact-checkers in triaging misleading claims made by political candidates to work out the most urgent ones to debunk first.
While our research has concluded that AI played only a minor role in recent UK and European election campaigns, clearer guidance is needed for regulators on responding to misleading AI-generated parody content. This guidance should strike a careful balance between protecting democracy and upholding freedom of speech by helping to debunk harmful falsehoods made under the guise of satire, while also recognising the benefits AI can offer for improving public engagement with electoral processes.
Introduction
When the UK and France both announced snap elections for July 2024 (the month after the planned European Parliament election), concerns were raised over the potential role of generative AI in interfering with voting processes. Since late 2022, advances in generative AI have enabled users to create increasingly realistic but synthetic content based on human inputs.[1] With more than 60 countries holding elections during 2024, the ease of creating malicious AI content has led to continued speculation over what impact generative AI will have on election processes.[2]
Seeking to inform these debates with evidence-based research, CETaS published a Briefing Paper in May 2024 which highlighted how – up to that point – there was no clear evidence internationally of AI meaningfully influencing a specific election result.[3] Since then however, several important elections have taken place in the UK and Europe which reveal novel insights into the role of AI-based disinformation and deepfakes. These voting contests also had significant geopolitical implications for ongoing crises in Ukraine and the Middle East, increasing fears that hostile actors would have strong incentives to shape the results.
Although there have been articles and reports analysing AI election threats in recent UK and European elections, these investigations have often focused on individual viral disinformation cases. This risks leaving the topic open to speculation, potentially amplifying public anxiety and only serving to benefit those seeking to undermine trust in democracy.[4]
Research methodology
This study builds on our earlier research exploring the global evidence base of AI misuse in election contexts, to understand how AI was used during the recent UK, EU and French elections. In particular, it aims to synthesise disparate emerging analyses of these election campaigns in order to draw out key themes and compare them with our previous findings.
Data collection for this study was conducted over a four-month period from May to August 2024, and involved a literature review covering public reports and news articles on AI misuse in the three elections. The search strategy ensured that only cases which attracted a sufficient level of virality to be reported by journalists or researchers were analysed, as these pose the most significant risk to election security. An inherent limitation of this approach is that it would not identify AI-related disinformation campaigns which fell below that virality threshold.
1. AI-Enabled Election Threats
This section explores the evidence base of AI misuse in the UK, EU and French elections. It presents a breakdown of cases observed by threat category and the actors responsible, their likely intentions and methods, as well as any corresponding impact from these cases.
1.1 Smear campaigns
Table 1: AI-enabled smear campaigns identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[5] | Impact |
| --- | --- | --- | --- |
| UK general election | AI-generated videos,[6] voice clones,[7] images[8] and pornographic content[9] of political candidates making false controversial statements or being depicted in fabricated activities. | 5 | Uncertainty among users over whether content is authentic damages trust in the integrity of online sources.[10] |
| French legislative and EU parliamentary elections | AI-generated images and videos of political candidates making controversial statements[13] or implicated in fabricated activities.[14] | 3 | High user engagement on fake content amplified the disinformation.[15] |
1.1.1 Actors, motives and tradecraft
As Table 1 shows, the UK election witnessed a relatively small number of viral smear campaigns (which we previously labelled ‘character assassinations’) compared to the ‘tsunami of AI fakes’ some commentators had predicted.[18] This relatively low volume was primarily due to a combination of the snap announcement, the short campaigning period and the relative confidence in the outcome suggested by opinion polls, all of which reduced the incentives for malicious interference.[19]
In contrast to our previous research exploring the evidence base of AI misuse in recent elections globally, smear campaign efforts were less diversified in the UK context; primarily implicating politicians in controversial activities or remarks, rather than candidates fictitiously dropping out of the election race or seeking to rig ballots before polling day.[20] This included one example where AI may have been used to manipulate an audio clip of a Labour Party candidate making controversial comments over the Israel-Hamas conflict.[21] Different AI threat categories also emerged which we had not identified before. This included the circulation of deepfake pornographic smears targeting female UK politicians who did not consent to having their images used.[22]
Tech companies observed even fewer instances of viral AI smear campaigns in the EU and French elections than in the UK.[23] There were two high-profile deepfakes implicating candidates in controversial but fake remarks and activities during the French campaign, as well as a Greek politician falsely depicted nude ahead of the EU election.[24] However, this is not to suggest that disinformation incorporating other, non-AI techniques did not proliferate in parallel. Indeed, the dissemination of deceptive political content reached its highest level across the EU bloc since tracking began in 2023, including through old content incorrectly captioned as present-day developments and inflammatory political ads.[25]
1.1.2 Impact
Similar to other elections we analysed in our first Briefing Paper, the UK, French and EU elections demonstrated no evidence that AI content affected specific election results.[26] Yet as with previous voting processes, there were worrying early signs of AI-enabled disinformation damaging the integrity of the wider democratic system in all three contexts.[27]
Smear campaigns that implicated politicians in statements they did not make led to spikes in online harassment, including racist remarks and even death threats.[28] Repeated attacks of this kind may threaten the personal security of candidates and could have a ‘chilling’ effect on their willingness to participate in future elections, owing to fears over their safety. Indeed, those targeted with pornographic deepfakes in the UK said the incidents left them distressed and feeling ‘sick’, reflecting the psychological damage such cases have already caused.[29]
1.2 Deceptive political advertising
Table 2: AI-generated deceptive political ads identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[30] | Impact |
| --- | --- | --- | --- |
| UK general election | Political candidate posting a campaign ad on social media incorporating an AI-generated image.[31] | 1 | High user engagement on fake content amplified the disinformation.[32] |
| French legislative and EU parliamentary elections | Political candidates posting campaign ads on social media incorporating AI-generated images.[35] | 4 | High user engagement on fake content amplified the disinformation.[36] |
1.2.1 Actors, motives and tradecraft
Despite reported instances in recent elections of political candidates using generative AI to create deceptive campaign adverts, only one case was identified during the UK election. This involved an independent Scottish candidate posting an AI-generated picture of bearded men wearing headbands resembling those used by the terrorist organisation Hamas, with a caption containing xenophobic language.[39] An unofficial local branch of the Reform UK political party also shared similar AI-generated anti-immigration content on Facebook.[40] Such behaviour reinforces our previous assertion that political parties must do more to discourage AI ad misuse by not only official party members but also their wider support base.[41]
While politicians themselves did not routinely circulate AI-generated content in the UK context, several harmful deepfakes were created by members of the public who stated that they had no deliberate intention of undermining election integrity.[42] Instead, such users often said they wanted to ‘troll’ others by derailing discussions and provoking reactions, despite causing indirect damage to the online information space.[43] Other AI content creators appeared to be more politically active individuals seeking to push their preferred party’s ideas through fake accounts, which nevertheless confused voters over manifesto policies.[44] These observations reflect those from other recent elections: the ability of non-technical and unskilled users to create realistic synthetic content significantly lowers the barriers to spreading disinformation and opens up new sources of domestic risk.[45]
On the other hand, there were a handful of high-profile uses of deceptive AI-generated ads by political candidates in the EU and French elections. Prior to the EU elections, all European Parliament parties signed the non-binding Elections Code of Conduct, where they pledged not to produce or disseminate deceptive AI-generated content.[46] Yet despite this commitment, some parties in Ireland, France and Italy violated the agreement by posting AI-generated content without clear labels in both elections.[47] The majority of this content was imagery which sought to amplify specific narratives around heated campaign issues such as immigration and globalisation, using intentionally misleading, visually compelling and emotionally charged content.[48]
1.2.2 Impact
Mirroring concerns from other recent elections, a handful of European users faced challenges in determining whether campaign ad content they were viewing was AI-generated, with some even resharing deepfakes (whether intentionally or not) without notifying others that the content was synthetic.[49] Political candidates during the EU and French elections worsened the situation by not clearly labelling some of their campaign ads as synthetic, which risks incentivising unethical election behaviour in the future. If these trends persist, this will only serve to gradually erode the integrity of the online information space.
Analysis of user engagement with deceptive AI-generated adverts posted by political candidates also revealed that many users engaged emotionally with narratives that frequently depicted visually exaggerated anti-immigration perspectives.[50] However, the content was once again primarily designed to create conflict between different political groups and entrench voter divisions, rather than to shift large swathes of voter attitudes towards supporting the parties circulating the ads.[51]
1.3 Voter targeting
Table 3: AI-enabled voter targeting efforts identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[52] | Impact |
| --- | --- | --- | --- |
| UK general election | Automated bot accounts spamming social media election posts with comments in support of a political party.[53] | 3 | Genuine users politically aligned with the bot account comments felt emboldened to express their support, amplifying the content.[55] |
| French legislative and EU parliamentary elections | AI-generated videos and images spreading disinformation on campaign issues.[57] | 2 | |
1.3.1 Actors, motives and tradecraft
A prominent category of AI-enabled disinformation which emerged in all three election contexts, and aligns with our previous observations, centred on voter targeting efforts designed to amplify deceptive content on social media. Yet instead of integrating novel AI tools, these activities relied on the well-established method of automated bot accounts ‘astroturfing’ social media platforms.[59]
Roughly one month before polling day in the UK, social media comments sections were spammed by several profiles across different platforms which had hallmarks of inauthentic behaviour.[60] The messages called on users to vote for Reform UK, in an attempt to distort the information environment and exaggerate the perceived popularity of the party.
In some of the voter targeting activities which did emerge, there were also hallmarks of hostile foreign state-sponsored activity. Following a live TV debate between candidates, several accounts exhibiting bot-like behaviour posted comments on TikTok clips of the show promoting support for Reform UK.[61] All of these accounts supposedly originated in the UK, but several had a disproportionate number of followers linked to Nigeria. Although the accounts were not confirmed to be connected with any bot farms or hostile states, their characteristics are consistent with previous Russia-led bot farms that sought to influence the 2020 US election.[62] An additional voter targeting effort with hallmarks of foreign interference involved the circulation of disinformation linked to campaign issues around Russia’s war in Ukraine.[63] These bot-like accounts often posted hate speech and conspiracy theories, and advocated support for Russia.[64]
During the EU election campaign, an investigation also found that more than 40 TikTok accounts used AI text-to-speech software to spread political disinformation at scale, including that the parliamentary elections were ‘rigged’, and taking pro-Kremlin stances over the war in Ukraine.[65] Many of the accounts re-used identical scripts for different videos while also using different AI voices, indicating possible coordination despite the investigators’ inability to trace the source(s) responsible.[66]
1.3.2 Impact
Although much of this malicious or misleading bot content attained a high level of user engagement and extended the reach of disinformation to more online communities, it is unclear to what extent this led to changes in voter attitudes or behaviour. Such malicious content represents a very low percentage of a user’s overall online media consumption, given the diversity of media and sources available.[67] Instead, most actual exposure tends to be concentrated among a minority of users who share the content’s political leanings or have strong motivations to seek out this information in the first place.[68] For example, bot accounts seeking to amplify support for Reform UK primarily emboldened users already aligned with the party to express their views.[69] This correlates with our previous findings that AI threats amplify existing narratives across likeminded communities and thus exacerbate wider societal issues, including political polarisation and digital echo chambers.[70]
1.4 Parody and satire content
Table 4: AI-developed parody and satire disinformation identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[71] | Impact |
| --- | --- | --- | --- |
| UK general election | AI content satirising and parodying political candidates over campaign issues,[72] activities,[73] live debate discussions[74] and exit polls.[75] | 4 | Users confused over whether misleading claims about campaign issues described as satirical were factual, damaging trust in the integrity of online sources.[76] |
1.4.1 Actors, motives and tradecraft
As with deepfake pornographic smears, parody and satire constituted a novel AI content type we had not observed before, and one which received high user engagement in the UK election context. While AI-generated political satire can have benign use cases (discussed further in Section 2), some deepfakes labelled as such also contained disinformation on campaign issues and exit polls.[77] Despite being presented as satirical, the realism of this deepfake material can confuse voters over which aspects of the content are factual. This is particularly effective when combined with other trends observed during the UK election cycle, such as a handful of users who sought to ‘troll’ others by commenting on deepfake videos and claiming them to be authentic.[78]
1.4.2 Impact
Evidence from the UK election revealed how these concerns were realised, with some users mistakenly interpreting satirical claims made in parody deepfakes about campaign issues as legitimate. This included one video referencing Conservative Party pledges to re-introduce national service, where the integration of former Prime Minister Rishi Sunak’s voice and likeness enhanced its perceived legitimacy.[79] Echoing trends we identified previously, confusion over whether digital content is authentic or synthetic blurs the lines between fact and fiction, while damaging trust in the integrity of online sources.[80]
Parody deepfakes containing harmful falsehoods also represent a particular challenge for regulators such as Ofcom and fact-checkers when responding to viral instances, given the delicate balance between needing to counter disinformation, protecting free speech, and effective debunking. Clearer guidance for these stakeholders on response strategies would therefore be valuable in helping to constrain the spread of deceptive claims made in parody deepfakes, while equally recognising that other uses of AI-generated satire can have beneficial applications (see Section 2).
1.5 AI-generated knowledge sources
Table 5: AI-generated knowledge sources identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[81] | Impact |
| --- | --- | --- | --- |
| UK general election | Kremlin-affiliated social media network spreading AI-enhanced disinformation on campaign issues using fake UK news sources.[82] | 2 | Social media page of unofficial political party branch shared xenophobic AI-generated content, incentivising unethical election behaviour.[84] |
| French legislative and EU parliamentary elections | Kremlin-affiliated social media network spreading AI-enhanced disinformation on campaign issues using fake news sources and bot accounts.[86] | 2 | High user engagement on fake content amplified the disinformation, with one fabricated article being referenced by mainstream media outlets.[87] |
1.5.1 Actors, motives and tradecraft
Alongside election campaign threats primarily designed to manipulate the behaviour or attitudes of voters towards specific political candidates or policy perspectives, election information threats were also identified across UK, EU and French contexts.[88] These latter threats are designed to undermine the quality of the information environment surrounding elections, to confuse voters and damage the integrity of electoral outcomes.
A Kremlin-affiliated disinformation network known as ‘Doppelganger’ utilised AI-generated fake news sources in an operation called ‘CopyCop’.[89] Masquerading as legitimate media outlets through fake websites such as ‘The London Crier’ in the UK, the sites combined fabricated articles with real news stories pasted into AI chatbots, which were then re-written to include pro-Kremlin narratives.[90]
European voters were also targeted through CopyCop with high volumes of AI-generated and manually-written pro-Kremlin narratives on campaign issues, as well as content depicting French political candidates who voiced opposition to Russia in a negative light.[91] In all three elections, however, the articles which achieved virality were often either human-crafted or recirculated by social media influencers, reflecting how AI did not play a prominent role compared with other, non-AI methods.[92]
Another investigation monitored five coordinated Facebook pages spreading pro-Kremlin political positions in the UK, and found that all of them promoted conspiracy theories, with some integrating anti-immigrant AI-generated images.[93] As with one of the voter targeting efforts mentioned above, analysis of location data linked the page creators to Nigeria, with articles shared from Kremlin-controlled media and pro-Russian websites.[94] The operation targeted its ads at elderly British men, seeking to stoke confusion and emotional reactions over campaign issues such as Russia’s war in Ukraine and immigration.[95] As with the other cases showing hallmarks of foreign involvement, the campaign focused on exacerbating pre-existing domestic debates to sow political division, as opposed to crafting entirely new narratives.
1.5.2 Impact
A recurring impact throughout the UK election campaign was that some users were unable to distinguish between authentic and AI-generated content, which aligns closely with observations from Nigerian and South African elections.[96] This challenge was compounded by some of the tactics used with AI-generated fake news stories, where a minority of users sought to enhance the perceived authenticity of the misleading content. For instance, one fake article disseminated by CopyCop was shared by the official social media account of the Russian embassy in South Africa, fictitiously attributing the source to ‘British Media’.[97] In some social media posts, individuals also expressed confusion over what information they could trust online due to the realism of deepfakes.[98] If these sentiments grow among a wider proportion of the public, they risk fostering a pessimistic assumption that synthetic material is so prevalent that no one can believe anything they see online.[99] In turn, the integrity of digital content and the wider information environment will be eroded.
Unlike in the UK, one fake article created as part of the CopyCop operation managed to reach the top of Internet search results, containing a fictitious scandal involving France and the Ukrainian President.[100] The disinformation was circulated by an inauthentic French news outlet and then significantly amplified by pro-Kremlin accounts with millions of social media followers. English-language websites then began reporting on the story, citing social media posts and the fake article, resulting in it becoming the leading narrative.[101] In several other posts from the same fake French source, AI prompts remained visible at the top of stories, even though they did not appear in the actual story content.[102]
1.6 AI misattribution
Table 6: AI-generated misattribution cases identified in the UK, EU and French elections
| Election(s) | Summary | Instances reported[103] | Impact |
| --- | --- | --- | --- |
| UK general election | Political candidate wrongly accused of being AI-generated due to use of synthetic campaign image.[104] | 1 | Conflicting media reports on the accusations fuelled conspiracy theories and confused the public over the facts.[105] |
1.6.1 Actors, motives and tradecraft
One final new category of AI-related election disinformation which emerged during the UK election involved candidates on the ballot being wrongly accused by some media outlets of being AI-generated.[107] Shortly after polling day, a candidate from Reform UK was accused of being fake after failing to attend voter hustings and the election count.[108] These allegations spread further online after an image of the candidate was used on a campaign leaflet which appeared AI-generated.[109] However, it was later reported that the accusations were incorrect, with the individual in question explaining that he had been ill during the various electoral events.[110] While the candidate’s profile picture on social media platforms had been edited with AI tools to change the colour of his tie to the Reform UK party colours, it was based on a genuine photo.[111]
1.6.2 Impact
In our previous Briefing Paper, we raised concerns that existing public discourse and reporting on threats from AI tools in upcoming elections is often based on significant speculation without supporting evidence.[112] The misattribution of a political candidate as AI-generated during the UK election underscores how the persistent hyperbole which proliferated earlier in the year has created an environment in which such false claims can thrive, only serving to confuse the public over the facts.[113]
At the same time, it also reveals early signs of how these threats are damaging the wider democratic system itself. As users lose confidence in their ability to discern between authentic and synthetic content, it becomes easier to conclude that any digital source which appears suspicious must automatically be AI-generated fakery. This not only undermines the integrity of the information environment but can also have consequences for those implicated in such accusations. Indeed, the Reform UK candidate said he received online abuse following the false claims that he was not a real person.[114]
2. Beneficial AI Election Use Cases
This section highlights areas where new generative AI tools were used for beneficial purposes in the UK election context, offering insights into how political candidates, parties and civil society could positively integrate these capabilities into future election campaigns.
2.1 AI parody and satire content
While some parody clips confused users over the truth, other satirical content helped to bring important campaign issues into the public discourse. For example, climate campaigners used deepfake content to highlight the lack of questions concerning the environment during election debates.[115] In the video, a deepfake iteration of a BBC presenter urged viewers to email the organisation and call for an emergency episode of Question Time focused on climate change, while clearly labelling the clip as AI-generated.[116]
2.2 AI campaign platforms
In addition to content applications, one political candidate also experimented with an AI chatbot as their campaign platform. Called ‘AI Steve’, the chatbot recorded questions from the public and, with the help of volunteers, generated manifesto policy proposals.[117] Validators from the local area then voted on these proposals, with those receiving more than 50% approval being adopted by the candidate.[118] Though the candidate performed poorly on the ballot, the campaign actively sought to improve the link between voters and candidates and encourage grassroots involvement.[119]
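The adoption rule described above can be sketched in a few lines. This is an illustrative sketch only, assuming a simple majority-threshold workflow; the actual implementation details of the ‘AI Steve’ platform are not public, and the function and sample proposals below are hypothetical.

```python
# Hypothetical sketch of the validator-voting step: a proposal is adopted
# only if strictly more than 50% of local validators approve it.

def adopted_policies(votes: dict[str, list[bool]]) -> list[str]:
    """Return the proposals where more than half of validator votes are approvals."""
    return [
        proposal for proposal, ballots in votes.items()
        if ballots and sum(ballots) / len(ballots) > 0.5
    ]

# Example ballots (invented for illustration).
sample_votes = {
    "improve local bus services": [True, True, True, False],  # 75% approve -> adopted
    "build a new ring road": [True, False, False, False],     # 25% approve -> rejected
}
result = adopted_policies(sample_votes)  # ['improve local bus services']
```

The strict `> 0.5` comparison reflects the paper’s “more than 50%” wording, so an exact 50/50 split would not pass.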
2.3 AI fact-checking initiatives
Finally, fact-checkers used novel AI systems to enhance their debunking efforts during a busy period of the election cycle. These systems triaged misleading claims made by political candidates to identify the most urgent ones to fact-check first, and compared new political statements against similar, previously fact-checked ones in order to debunk repeated false claims faster.[120]
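The claim-matching step described above can be illustrated with a minimal sketch. This is not Full Fact’s actual system: the archive contents, similarity measure (simple string similarity via `difflib`) and threshold are all assumptions made for illustration; production systems would typically use semantic embeddings rather than character-level matching.

```python
# Illustrative sketch: matching a new political claim against an archive of
# previously fact-checked claims, so repeated false claims can be debunked faster.
from difflib import SequenceMatcher

# Hypothetical archive of claims that have already been fact-checked.
FACT_CHECKED = {
    "crime has doubled in the last five years":
        "False: recorded crime fell over the period.",
    "the country has the highest taxes in europe":
        "Misleading: several countries have higher tax burdens.",
}

def match_claim(new_claim: str, threshold: float = 0.75):
    """Return (archived_claim, verdict) for the closest prior fact check,
    or None if nothing in the archive is similar enough."""
    best, best_score = None, 0.0
    for old_claim, verdict in FACT_CHECKED.items():
        score = SequenceMatcher(None, new_claim.lower(), old_claim).ratio()
        if score > best_score:
            best, best_score = (old_claim, verdict), score
    return best if best_score >= threshold else None

# A lightly reworded repeat of an archived claim still matches.
result = match_claim("Crime has doubled in the past five years")
```

A claim with no close match returns `None`, which in this sketch would route it to the triage queue for a fresh fact check instead.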
Taken together, these examples reflect the need to take a balanced policy approach to the role of novel AI capabilities in election contexts, recognising that such tools offer both benefits and risks depending on the intentions of those using them. Failure to do so may lead to a lack of public appetite for the positive integration of AI tools in future elections.
3. Conclusion
Our analysis of AI misuse in recent UK and European elections reveals no clear evidence that such threats measurably influenced large-scale voter attitudes or election results. Indeed, only a handful of viral cases were identified (see Table 7 below).
Table 7: Overview of total viral cases of AI-enabled election threats in UK, EU and French elections
| Threat type | Viral cases identified in the UK election | Viral cases identified in EU and French elections |
| --- | --- | --- |
| Smear campaigns | 5 | 3 |
| Deceptive political advertisements | 1 | 4 |
| Voter targeting efforts | 3 | 2 |
| Parody and satire | 4 | 0 |
| AI-generated knowledge sources | 2 | 2 |
| AI misattribution | 1 | 0 |
However, reinforcing previous CETaS research, the viral deepfakes, AI fake news operations and social bot activities that did emerge during these campaign periods produced consequences similar to those identified in previous elections. These include:
- Online harassment and distress caused by deepfakes targeting political candidates;
- Users increasingly being confused over whether the content they are viewing is authentic, undermining trust in the information environment; and
- Politicians using AI in campaign ads without clear labels, incentivising unethical election behaviour in the future.
Echoing concerns raised in our first Briefing Paper, these impacts affect not just election processes but the wider democratic system itself, feeding into pre-existing societal harms such as entrenched political polarisation, echo chambers and truth decay.
The volume of AI-generated content used in hostile influence operations across the UK, EU and French elections is likely higher than we identified, as content that failed to go viral would have escaped detection. Nevertheless, as we previously emphasised, it is vital that a careful balance is struck in reporting on these cases. While improving awareness of the threat landscape through evidence-based research is essential, so too is avoiding speculation or exaggeration of the impact of AI in election contexts.
Although several key elections in the UK and Europe have now concluded, one of the most significant elections this year is yet to take place. As the US campaign period enters full swing ahead of the November vote, incentives will be high for malicious interference, not least because of the impact the outcome will have on international politics. Our third and final Research Report in November will contain an analysis of AI threats during the US election campaign, alongside recommendations for strengthening election resilience measures ahead of future national, regional and local elections across democratic countries.
References
[1] Ardi Janjeva et al., “The Rapid Rise of Generative AI: Assessing risks to safety and security,” CETaS Research Reports (December 2023), 11.
[2] Mekela Panditharatne, “The Huge Risks From AI In an Election Year,” TIME Magazine, 10 April 2024, https://time.com/6965299/risks-ai-elections/.
[3] Sam Stockwell et al., “AI-Enabled Influence Operations: The Threat to the UK General Election,” CETaS Briefing Papers (May 2024), 3, https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-threat-uk-general-election.
[4] Stockwell et al. (2024), 39.
[5] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[6] Marianna Spring (a), “TikTok users being fed misleading election news, BBC finds,” BBC News, 2 June 2024, https://www.bbc.co.uk/news/articles/c1ww6vz1l81o; Marianna Spring (b), “Labour’s Wes Streeting among victims of deepfake smear network on X,” BBC News, 7 June 2024, https://www.bbc.co.uk/news/articles/cg33x9jm02ko; Sophie Church, “Were Fears Of Disinformation During The Election Exaggerated?,” Politics Home, 22 June 2024, https://www.politicshome.com/news/article/fears-disinformation-election-exaggerated.
[7] Stephen Wood, “Fact check: Social media user ‘sworn at by Streeting’ has history of fake videos,” The Independent, 4 July 2024, https://www.independent.co.uk/news/uk/wes-streeting-labour-palestinians-youtube-jimmy-savile-b2573911.html.
[8] Arron Williams, “AI-generated image of Keir Starmer shared to suggest Labour no longer represents the working class,” Logically Facts, 17 June 2024, https://www.logicallyfacts.com/en/fact-check/fake-ai-generated-image-of-keir-starmer-shared-to-suggest-labour-no-longer-represents-the-working-class.
[9] Cathy Newman, “Exclusive: Top UK politicians victims of deepfake pornography,” Channel 4 News, 1 July 2024, https://www.channel4.com/news/exclusive-top-uk-politicians-victims-of-deepfake-pornography.
[10] Spring (2024b); Marianna Spring (c), “This wasn't the social media election everyone expected,” BBC News, 8 July 2024, https://www.bbc.co.uk/news/articles/cj50qjy9g7ro.
[11] Spring (2024a).
[12] Newman (2024).
[13] Théophane Hartmann, “Viral deepfake videos of Le Pen family reminder that content moderation is still not up to par ahead of EU elections,” Euractiv, 16 April 2024, https://www.euractiv.com/section/artificial-intelligence/news/viral-deepfake-videos-of-le-pen-family-reminder-that-content-moderation-is-still-not-up-to-par-ahead-of-eu-elections/.
[14] European Digital Media Observatory (EDMO), EU-Related Disinformation Peaks in April (EDMO: May 2024), 7, https://edmo.eu/wp-content/uploads/2024/05/EDMO-35-Horizontal.pdf; Thanos Sitistas Epachtitis, “Made with AI software, the photo that "shows" S. Kasselakis and T. Macbeth naked on a beach,” Greece Fact Check, 3 April 2024, https://www.factchecker.gr/2024/04/03/ai-generated-image-of-kasselakis-and-tyler-naked-on-a-beach/; Full Fact (a), “Video of French president dancing in nightclub is a deepfake,” 26 March 2024, https://fullfact.org/online/macron-clubbing-video-deepfake/.
[15] Hartmann (2024).
[16] Full Fact (2024a).
[17] Epachtitis (2024).
[18] Spring (2024c).
[19] Church (2024).
[20] Stockwell et al. (2024), 36.
[21] Full Fact (b), “No evidence audio clip supposedly of Wes Streeting comments about Palestinian deaths is genuine,” 3 July 2024, https://fullfact.org/election-2024/wes-streeting-audio-clip-palestine/.
[22] Newman (2024).
[23] Supantha Mukherjee, “Few AI deepfakes identified in EU elections, Microsoft president says,” Reuters, 3 June 2024, https://www.reuters.com/technology/few-ai-deepfakes-identified-eu-elections-microsoft-president-says-2024-06-03/.
[24] Hartmann (2024); Epachtitis (2024); Full Fact (2024a).
[25] Tiffany Hsu, “‘Convergence of Anger’ Drives Disinformation Around E.U. Elections,” The New York Times, 7 June 2024, https://www.nytimes.com/2024/06/07/business/media/eu-elections-disinformation.html; EDMO (2024), 7.
[26] Mukherjee (2024).
[27] Stockwell et al. (2024), 3.
[28] Spring (2024b).
[29] Newman (2024).
[30] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[31] Petra Matijevic, “Independent candidate posts nearly 200 Facebook election ads including AI anti-migrant fakes,” The Ferret, 25 June 2024, https://theferret.scot/tommy-macpherson-facebook-ads-ai-anti-migrant-fakes/.
[32] Ibid.
[33] Ibid.
[34] Ibid.
[35] Valentin Châtelet, “Far-right parties employed generative AI ahead of European Parliament elections,” DFR Lab, 11 June 2024, https://dfrlab.org/2024/06/11/far-right-parties-employed-generative-ai-ahead-of-european-parliament-elections/; Saman Nazari and Claudia De Sessa, “Salvini’s electoral campaign uses non-watermarked AI images,” Alliance4Europe Substack, 6 June 2024, https://alliance4europe.substack.com/p/salvinis-electoral-campaign-uses; Shane Murphy, “The Irish People's Campaign Graphics,” FuJo News, 4 June 2024, https://fujomedia.eu/news/the-irish-people-and-ai/; Miazia Schueler et al., Artificial Elections: Exposing the Use of Generative AI Imagery in the Political Campaigns of the 2024 French Elections (AI Forensics: July 2024), https://aiforensics.org/uploads/Report_Artificial_Elections_81d14977e9.pdf.
[36] Châtelet (2024); Nazari and Sessa (2024).
[37] Murphy (2024).
[38] Châtelet (2024); Nazari and Sessa (2024); Murphy (2024).
[39] Matijevic (2024).
[40] Michael Workman and Kevin Nguyen, “UK Conservatives say ABC analysis that points to foreign interference operation 'highly alarming',” ABC News, 29 June 2024, https://www.abc.net.au/news/2024-06-29/uk-election-pro-russian-facebook-pages-coordinating/104038246.
[41] Stockwell et al. (2024), 24.
[42] Spring (2024b).
[43] Ibid.
[44] Spring (2024c).
[45] Stockwell et al. (2024), 13.
[46] Clothilde Goujard, “EU political parties promise to steer clear of deepfakes ahead of election,” POLITICO, 10 April 2024, https://www.politico.eu/article/eu-political-parties-promise-to-steer-clear-of-deepfakes-ahead-of-election/.
[47] Châtelet (2024); Nazari and Sessa (2024); Murphy (2024).
[48] Schueler et al. (2024), 15-16; Mark Scott, “Finally: Someone used generative AI in a Western election,” POLITICO, 4 July 2024, https://www.politico.eu/newsletter/digital-bridge/finally-someone-used-gen-ai-in-a-western-election/.
[49] Murphy (2024); Full Fact (2024a).
[50] Murphy (2024); Schueler et al. (2024), 3.
[51] Nazari and Sessa (2024).
[52] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[53] Marianna Spring (d), “Bot or not: Are fake accounts swaying voters towards Reform UK?,” BBC News, 14 June 2024, https://www.bbc.co.uk/news/articles/c1335nj316lo; George Hancorn, “Suspicious accounts 'with Nigerian following' being used to push pro-Reform UK content on TikTok,” ITV News, 14 June 2024, https://www.itv.com/news/2024-06-14/suspicious-accounts-being-used-to-push-pro-reform-uk-content-on-tiktok.
[54] Global Witness, “Investigation reveals content posted by bot-like accounts on X has been seen 150 million times ahead of the UK elections,” 2 July 2024, https://www.globalwitness.org/en/campaigns/digital-threats/investigation-reveals-content-posted-bot-accounts-x-has-been-seen-150-million-times-ahead-uk-elections/.
[55] Spring (2024d).
[56] Ibid.
[57] Coalter Palmer and Natalie Huet, “TikTok Content Farms Use AI Voiceovers to Mass-Produce Political Misinformation,” NewsGuard, 11 July 2024, https://www.newsguardtech.com/special-reports/tiktok-content-farms-use-ai-voiceovers-to-mass-produce-political-misinformation/; Evita Purina, “Old cars, immigrants and war – how EU related misinformation is spread in the Baltics?,” Re:Baltica, 6 June 2024, https://en.rebaltica.lv/2024/06/old-cars-immigrants-and-war-onward-to-the-european-parliament-through-scaremongering-and-lies/.
[58] Palmer and Huet (2024).
[59] David Schoch et al., “Coordination patterns reveal online political astroturfing across the world,” Scientific Reports 12, no. 4572 (March 2022), https://www.nature.com/articles/s41598-022-08404-9.
[60] Spring (2024d).
[61] Hancorn (2024).
[62] Ibid.
[63] Global Witness (2024).
[64] Ibid.
[65] Palmer and Huet (2024).
[66] Ibid.
[67] Ceren Budak et al., “Misunderstanding the harms of online misinformation,” Nature 630 (June 2024), https://www.nature.com/articles/s41586-024-07417-w.
[68] Ibid.
[69] Spring (2024d).
[70] Stockwell et al. (2024), 18, 26.
[71] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[72] Spring (2024a).
[73] Jim Waterson, “Deepfake video of Nigel Farage playing Minecraft ‘of course’ not real, party says,” The Guardian, 18 June 2024, https://www.theguardian.com/technology/article/2024/jun/18/deepfake-video-of-nigel-farage-playing-minecraft-of-course-not-real-party-says.
[74] James Murray, “AI 'deepfake' calls for Question Time 'Climate Showdown' election special,” BusinessGreen, 21 June 2024, https://www.businessgreen.com/news/4325378/ai-deepfake-calls-question-climate-showdown-election-special.
[75] August Graham, “Fact check: Video from election night 2015 has been manipulated,” AOL News, 19 June 2024, https://www.aol.co.uk/news/fact-check-video-election-night-110735891.html.
[76] Spring (2024a).
[77] Spring (2024a); Graham (2024).
[78] Spring (2024b).
[79] Spring (2024a).
[80] Spring (2024a).
[81] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[82] Recorded Future (a), Russia-Linked CopyCop Uses LLMs to Weaponize Influence Content at Scale (Insikt Group: May 2024), https://go.recordedfuture.com/hubfs/reports/cta-2024-0509.pdf.
[83] Workman and Nguyen (2024).
[84] Ibid.
[85] Recorded Future (2024a), 17.
[86] Recorded Future (2024a); European External Action Service (EEAS), Doppelganger strikes back: FIMI activities in the context of the EE24 (EEAS: June 2024), https://euvsdisinfo.eu/uploads/2024/06/EEAS-TechnicalReport-DoppelgangerEE24_June2024.pdf.
[87] EEAS (2024); David Gilbert, “How Disinformation From a Russian AI Spam Farm Ended up on Top of Google Search Results,” WIRED, 8 July 2024, https://www.wired.com/story/ai-generated-russian-disinformation-zelensky-bugatti/.
[88] Stockwell et al. (2024), 4-5.
[89] Recorded Future (2024a).
[90] Ibid., 5-6.
[91] Recorded Future (b), Russia-Linked CopyCop Expands to Cover US Elections, Target Political Leaders (Insikt Group: June 2024), 21, https://www.recordedfuture.com/research/copycop-expands-to-cover-us-elections-target-political-leaders; EEAS (2024).
[92] Ibid., 17.
[93] Workman and Nguyen (2024).
[94] Ibid.
[95] Ibid.
[96] Stockwell et al. (2024), 17, 21.
[97] Caolan Magee, “Fake UK news websites with links to Russia targeting British voters,” iNews, 9 June 2024, https://inews.co.uk/news/world/russia-target-tory-voters-uk-fake-new-websites-3093594.
[98] Spring (2024c).
[99] Rehan Mirza, “How AI deepfakes threaten the 2024 elections,” The Journalist’s Resource, 16 February 2024, https://journalistsresource.org/home/how-ai-deepfakes-threaten-the-2024-elections/.
[100] Gilbert (2024).
[101] Ibid.
[102] Ibid.
[103] Based on cited examples in news articles and public reports between 22 May and 30 August 2024.
[104] Joe Pike and Phil Kemp, “Reform fake candidates conspiracy theories debunked,” BBC News, 11 July 2024, https://www.bbc.co.uk/news/articles/ckvgl9kzwzjo.
[105] Danny Rigg, “Were Reform UK’s candidates even real?,” Metro News, 8 July 2024, https://metro.co.uk/2024/07/08/reform-uks-candidates-even-real-21188964/; Full Fact (c), “Reform UK candidate who stood in London was not ‘AI-generated’,” 9 July 2024, https://fullfact.org/online/reform-uk-candidate-AI/.
[106] Pike and Kemp (2024).
[107] Pike and Kemp (2024).
[108] Barney Davis, “‘I am not AI’: Reform UK candidate accused of being bot speaks out,” The Independent, 9 July 2024, https://www.independent.co.uk/news/uk/politics/reform-uk-ai-candidate-mark-matlock-b2576101.html.
[109] Full Fact (2024c).
[110] Pike and Kemp (2024).
[111] Davis (2024).
[112] Stockwell et al. (2024), 39.
[113] Stockwell et al. (2024), 17, 21.
[114] Pike and Kemp (2024).
[115] Murray (2024).
[116] Scarlett Sherriff, “Fiona Bruce deepfake stars in spoof Question Time campaign,” Marketing Beat, 21 June 2024, https://www.marketing-beat.co.uk/2024/06/21/fiona-bruce-question-time/.
[117] Anna Desmarais, “Steve for PM? Meet the AI candidate standing for election as an MP in the UK,” Euronews, 13 July 2024, https://www.euronews.com/next/2024/06/13/meet-ai-steve-the-uks-avatar-election-candidate.
[118] Barney Davis, “Meet AI Steve: The bot-driven politician using artificial intelligence on the campaign trail,” The Independent, 11 June 2024, https://www.independent.co.uk/news/uk/politics/election-politics-uk-ai-steve-brighton-b2559777.html.
[119] Desmarais (2024).
[120] David Corney et al., “The AI Election: How Full Fact is Leveraging New Technology for UK General Election Fact Checking,” Full Fact Blog, 14 June 2024, https://fullfact.org/blog/2024/jun/the-ai-election-how-full-fact-is-leveraging-new-technology-for-uk-general-election-fact-checking/.
Authors
Citation information
Sam Stockwell, "AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections," CETaS Briefing Papers (September 2024).