Abstract
This CETaS Briefing Paper provides an evidence-based analysis of AI-enabled influence operations that have the potential to undermine the upcoming UK general election, as well as other forthcoming democratic elections. The research finds that the current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system. This includes an increasingly degraded and polarised information space, as well as online harassment of deepfake targets, creating risks beyond the election process itself. The election threats identified in the research are not new or specific to AI but have the potential to be enhanced by AI at different stages of the election cycle, across three categories: campaign threats, information threats and infrastructure threats. The paper provides short-term policy mitigations for UK stakeholders to enhance election resilience and is the first of three CETaS publications on AI and election security. Our interim Briefing Paper in September 2024 will contain an overview of AI election threats in the UK and Europe, while a final Research Report in November 2024 will provide similar analysis on the US election and longer-term recommendations for protecting the integrity of democratic processes.
This work is licensed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which permits use provided the original authors and source are credited. The license is available at: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.
Executive Summary
This CETaS Briefing Paper provides an evidence-based analysis of AI-enabled influence operations that have the potential to undermine the upcoming UK general election in July, as well as other upcoming democratic elections. The paper also provides short-term policy mitigations for UK stakeholders to enhance election resilience.
This is the first of three CETaS publications on AI and election security. Our interim Briefing Paper in September 2024 will contain an overview of AI election threats in the UK and Europe, while a final Research Report in November 2024 will provide similar analysis on the US election and longer-term recommendations for protecting the integrity of democratic processes.
Key findings are as follows:
- The current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system. Of 112 national elections that have either taken place since January 2023 or are forthcoming in 2024, our research identified evidence of AI interference in just 19. As of May 2024, there are no clear signs of significant changes in election results relative to the expected performance of political candidates in polling data.
- However, whether intentionally or otherwise, there is evidence of impacts linked to an increasingly degraded and polarised information space, which create second-order risks beyond the election process itself. These include: confusion over whether AI-generated content is real, damaging trust in online sources; deepfakes inciting online hate against political figures, threatening their personal safety; and politicians exploiting AI disinformation for potential electoral gain. The long-term consequences of these dynamics remain uncertain.
- Existing examples of AI misuse in elections are scarce, and identifying and corroborating them is difficult. The few occurrences that are confirmed are often magnified through repeated citations in media reporting. This risks stoking public anxieties and inflating the perceived threat of AI to electoral processes.
Figure 1: With 64 countries set to vote in 2024, an estimated 2.8 billion people will be going to the polls
Sources: International IDEA; The Guardian; Eurostat.
The election threats identified in the research are not new or specific to AI, but have the potential to be enhanced by AI at different stages of the election cycle. Phishing emails, cyber intrusions and fake news sources are all problems found in previous elections. Based on our analysis, we have identified three categories of current election security threats, noting that these threats often overlap in intent and/or impact:
- Election campaign threats: designed to manipulate the behaviour or attitudes of voters towards specific political candidates or particular views on political issues. Some of these threats may originate from hostile actors, while others originate from political parties themselves – suggesting a need for clearer guidance and restrictions on the fair use of AI during election campaigning.
- Election information threats: designed to undermine the quality of the information environment surrounding elections, to confuse voters and damage the integrity of electoral outcomes. Some of these threats pose challenges for defining the limits of legitimate electoral practices, for instance candidates dismissing potentially accurate allegations as AI-generated to avoid accountability.
- Election infrastructure threats: designed to target the systems and individuals responsible for securing the integrity of election processes, with the aim of manipulating election outcomes or eroding confidence in election results. This includes ‘hack and leak’ operations and AI-generated phishing emails against election officials.
Risks can be compounded by the timing of the activity, different intentions of malicious actors and the overlap between the three categories outlined above. Well-timed and coordinated combinations of these techniques by those seeking to either disrupt the election, acquire financial gain, or influence communities greatly expand the threat surface. This therefore requires a coordinated, whole-of-government approach to building election resilience.
Most urgently, the research has found that ambiguous electoral laws on AI use during elections are currently resulting in misuse. These legal gaps may be exploited by both domestic and foreign actors. For instance, political parties could undermine the election process by exploiting ambiguity in guidelines on AI use – such as with fabricated campaign endorsements from politicians (see Figure 2).
The next month will be critical for UK regulators and government departments to safeguard against the AI threats observed in other countries, which could impact the upcoming general election. With the UK general election announced for 4 July 2024, there is very limited time to make significant changes to election security protections or electoral law. Nevertheless, much can still be done to enhance short-term resilience against AI-based election threats. Most urgently, new guidance is needed to set clearer expectations for political parties regarding fair use of AI in the election period, and for media organisations on AI incident reporting. The recent UK local elections in May also offer a unique opportunity to understand how AI threats were deployed ahead of the general election. Data from these elections should therefore be closely analysed to inform contingency planning.
Figure 2: Misuse of AI systems during elections will involve a diversity of actors and intentions, complicating countermeasures
Source: Adapted from Ardi Janjeva, Alexander Harris, Sarah Mercer, Alexander Kasprzyk and Anna Gausen, “The Rapid Rise of Generative AI: Assessing risks to safety and security,” CETaS Research Reports (December 2023).
Recommendations
The recommendations below aim to articulate clear red lines for AI use during the upcoming July general election, and propose mitigation strategies for unacceptable practices. While effective election security requires a multistakeholder approach and wider societal initiatives, such as media literacy campaigns, our scope here is focused on regulators and government departments specifically tasked with election security oversight. Owing to the very limited window of opportunity, these recommendations should be implemented urgently.
Enhance strategic communications and public guidance
Regular and accessible government-led communications are required, setting out unambiguous guidance for political parties, the media and the UK electorate, to reduce the volume and impact of malicious or misleading AI-generated content.
1) ‘Fair AI use’ guidelines and voluntary agreements for political parties and their campaign content producers:
The Electoral Commission and Ofcom should issue joint guidance and request voluntary agreements on the fair use of AI by political parties in election campaigning, akin to the Irish Electoral Commission’s recent framework on deceptive AI content. Such guidance should require AI-generated election material to be clearly labelled and embedded with content provenance features. These commitments should extend to third parties involved in producing content on behalf of political parties.
2) AI threat reporting support for the media:
The Electoral Commission should work with Ofcom and the Independent Press Standards Organisation (IPSO) to publish new guidance for media reporting on content which is either alleged or confirmed to be AI-generated. The guidance should provide a list of certified deepfake detection tools which media organisations can use to assist in verifying content, as well as reporting mechanisms for AI-generated fake news sources to support regulators in taking down prominent sites.
3) Clarifying AI incident reporting on polling day:
Clarification statements should be made in relation to Section Six of Ofcom’s Broadcasting Code and IPSO’s Editors’ Code of Practice on AI incident reporting during polling day. This will help to avoid confusion over how to address any AI threats which emerge during a critical period of the election cycle.
4) Public awareness campaigns over AI election threats:
The Electoral Commission should ensure any forthcoming voter information includes guidance on how individuals can remain vigilant to AI-based election threats. This guidance should include: how voters can spot signs of AI-generated content; trusted authorities they can contact for advice; and the risks of using AI chatbots for polling information. The guidance should also be published in the most common languages other than English, including Welsh, Polish, Romanian and Punjabi, since minority demographics may be explicitly targeted.
Red lines and legal considerations
Clarity is needed on how existing electoral law applies to the use of AI systems during elections, to increase accountability and reduce incentives for the exploitation of these systems for political gain.
5) Clarifying the status of defamation legislation, in relation to falsely ascribing policy positions to individuals for political gain:
The Ministry of Justice should publish guidance for political parties and the general public on the use of AI to create fabricated election endorsements from individuals, and how this may engage existing defamation legislation.
6) Mandatory certification of digital campaign assets, to incentivise political parties to expose AI threats targeting other candidates:
The Electoral Commission should require political parties to officially register all legitimate party-affiliated websites and use content provenance techniques to sign their digital materials, to incentivise all political candidates to expose AI-generated content – regardless of whom it targets.
Election preparedness and evidence gathering
More systematic data collection and analysis is needed on the use of AI for political campaigning, to strengthen contingency plans for protecting the upcoming general election.
7) Conducting simulations of AI-enabled influence operations to improve preparedness for potential threats:
The UK Government’s Defending Democracy Task Force (DDTF) and the Joint Election Security and Preparedness Unit (JESP) should coordinate red-teaming exercises with local election officials, media outlets and social media platforms on AI-enabled election interference to improve contingency planning. Drawing on recent simulations by the US state of Arizona, these exercises should include an array of AI threats across the election cycle, including on polling day, and require participants to make decision-making trade-offs between different countermeasures.
8) AI election material repository for trend monitoring:
To improve the evidence base and prepare for the upcoming general election, the DDTF should create a live repository of AI-generated material from recent and upcoming elections – including the local UK elections in May. This central repository will support trend monitoring to inform future public information campaigns. Similar to Taiwan's approach to recent AI election threats, collected material should be drawn from governments, research organisations, social media platforms and journalists.
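As a purely illustrative sketch, the snippet below shows one possible record structure for such a repository, written in Python. The field names and example values are hypothetical assumptions rather than a prescribed standard, but they reflect the attributes discussed in this paper: election stage, threat category, content format and verification status.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical record schema for a repository of AI-generated election
# material. Field names are illustrative, not a prescribed standard.
@dataclass
class AIElectionIncident:
    incident_id: str          # unique identifier for the item
    date_observed: date       # when the content was first observed
    country: str              # election affected, e.g. "UK"
    election_stage: str       # "pre-election", "polling period" or "post-election"
    threat_category: str      # "campaign", "information" or "infrastructure"
    content_format: str       # "audio", "video", "image" or "text"
    source_platform: str      # where the content circulated
    verification_status: str  # "alleged", "confirmed" or "debunked"
    submitted_by: str         # government, researcher, platform or journalist
    tags: list[str] = field(default_factory=list)

# Example entry, serialised as JSON for exchange between stakeholders.
entry = AIElectionIncident(
    incident_id="uk-2024-0001",
    date_observed=date(2024, 4, 29),
    country="UK",
    election_stage="pre-election",
    threat_category="campaign",
    content_format="text",
    source_platform="X",
    verification_status="confirmed",
    submitted_by="research organisation",
    tags=["voter-fraud-narrative", "bot-amplification"],
)
print(json.dumps(asdict(entry), default=str, indent=2))
```

A shared schema of this kind would allow material submitted by different stakeholders to be aggregated and queried consistently for trend monitoring.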
Timeline of AI Election Threats
CETaS has developed the following timeline (Figure 3) which maps potential AI election threats and corresponding countermeasures. The timeline shows when different threats will likely emerge and what outcomes they aim to achieve, across the following three categories:
- Pre-election (distrust): AI-enabled influence operations at earlier stages of the election period focus on undermining the reputation of targeted political candidates, or shaping voter attitudes on specific campaign issues.
- Polling period (disrupt): activities closer to polling day focus on polluting and congesting the information space, to confuse voters over specific elements of the election campaign or the voting process.
- Post-election (discredit): after the polls close, operations are designed to erode confidence in the integrity of the election outcome, for instance via allegations of electoral fraud. This also undermines longer-term public trust in democratic processes.
Similarly, the timeline shows when the aforementioned recommendations should be implemented and what outcomes they aim to achieve, across the following three categories:
- Pre-election (inform): setting out clear expectations for political parties and the media on AI, as well as informing voters of AI threats.
- Polling period (intercept): prior evidence gathering and simulation of threat scenarios enhance mitigation strategies to deal with incidents.
- Post-election (insulate): reinforcing red lines and expectations around AI incidents and usage to protect against efforts to undermine electoral integrity.
The timeline is informed by existing evidence of when AI threats have materialised in recent elections, and insights from academic literature. The length of the dotted horizontal lines next to each election stage corresponds to the estimated time window for interventions to mitigate against these threats, taking into account the likely capacity of election officials to implement them.
Figure 3: AI threats will emerge at several stages of the election cycle with different intentions.
Introduction
This CETaS Briefing Paper explores the threat to the upcoming UK general election from AI-enabled influence operations. The paper incorporates expert insight from two CETaS workshops (12 and 18 January 2024) involving 36 experts from across the UK, US, Europe and Australia, and primary analysis of AI-based election threats from 19 case studies identified through a literature review of academic publications, news articles and public reports. Together, this analysis informed short-term recommendations for strengthening the integrity of the upcoming UK general election, which should also inform other countries’ election security measures.
The global security context
UK citizens will be among nearly half of the world's population eligible to vote this year.[1] Since the last UK general election five years ago, the global information security landscape has changed significantly. Even in 2019, there were already concerns that at least 70 countries had been subject to influence operations.[2] These campaigns involve a combination of intelligence gathering, the intentional spreading of falsehoods (disinformation) and cyberspace activities to influence audiences.[3] Over the subsequent years, this type of interference has become more complex and difficult to detect, with increased involvement of both domestic and foreign actors.[4]
The technological context
Novel AI technology adds another layer of complexity to protecting election security. New generative AI systems which have proliferated since late 2022 allow users to generate increasingly realistic synthetic images, audio, video, and text based on human inputs.[5]
Large language models such as GPT-4 and image generators like Midjourney offer attractive benefits to under-resourced political campaigns, and could assist in election management processes.[6] For instance, chatbots could help to improve the connection between political candidates and voters, owing to their ability to translate outputs into several languages.[7] Nevertheless, the same innovations have lowered the barriers to entry for less technically capable actors with malicious intentions, providing capabilities that were previously the preserve of state adversaries.[8] At the same time, generative AI improves the speed, scale, realism and personalisation of activities designed to influence audiences.[9]
Methodology and structure
Within this context, the paper addresses the following research questions:
- RQ1: How has AI been used to monitor, influence or control public opinion in the lead-up to elections since the release of the current generation of large language models in November 2022?
- RQ2: What are the new threats to election security created by AI (including both malicious and non-malicious uses) and how can these be mitigated or prevented through policy interventions?
- RQ3: What are the optimal timelines for applying such interventions to different AI-enabled election threats?
While analysis has been conducted to identify signs of both direct and indirect impacts on elections, there are challenges with evaluating influence operations which must be borne in mind throughout this Briefing Paper. These include distinguishing correlation from causation when assessing the effect of individual factors, such as deepfakes, on election outcomes, as well as the lack of standardised evaluation frameworks for measuring the impact of influence operations.[10] In some case studies identified, there was also insufficient open-source evidence on the subsequent impacts of malicious uses of AI in elections. Several of these issues will be explored and addressed in our final Research Report.
This Briefing Paper is divided into three sections, corresponding to the following three threat categories: election campaign threats; election information threats; and election infrastructure threats. Each of these sections includes: an overview of the specific threat; case study evidence of where and how these risks have materialised, and their observable impact; and specific considerations for the upcoming UK general election.
1. Election Campaign Threats
This section explores applications of AI designed to manipulate the behaviour or attitudes of voters in relation to political candidates or particular political issues. Some of these threats raise questions about whether political parties can be trusted to use AI systems fairly in the absence of appropriate red lines (e.g. with deceptive AI political advertising).
Table 1: Overview of AI-generated election campaign threats
Election campaign threat types | Character assassinations; deceptive political advertising; voter targeting. |
Recent countries affected[11] | Argentina; Bangladesh; Belarus; Colombia; France; India; Indonesia; Moldova; Nigeria; Pakistan; Poland; Slovakia; South Africa; South Korea; Taiwan; Turkey; UK; US. |
Overall impact | Evidence indicates no clear impact on election results relative to the candidates expected to win from polling data. Instead, the main impact has been damage to the wider democratic system. This includes: online harassment incited against targeted individuals, threatening their personal safety; confusion over the authenticity of content, damaging trust in the integrity of online sources; and political candidates seizing on deepfakes for potential electoral gain. |
1.1 Character assassinations
Table 2: Examples of character assassinations
Threat example | Countries affected | Impact |
AI-generated voice clones, false images or videos of political candidates making controversial statements or depicting candidates in contentious activities. | Argentina.[12] Indonesia.[13] Poland.[14] South Korea.[15] Turkey.[16] UK.[17] US.[18] | Politicians implicated in fabricated content were targeted with online harassment, threatening their personal safety;[19] high user engagement on fake content amplified the disinformation;[20] allegations shared by extremist groups and rival political candidates incentivised unethical election behaviour.[21] |
AI-generated voice clones, false images or videos of political candidates purporting to withdraw from the election race or endorse other candidates. | Bangladesh.[22] Moldova.[23] Taiwan.[24] | Uncertainty among users over the authenticity of content damaged trust in the integrity of online sources;[25] high user engagement on the fake content amplified the disinformation.[26] |
AI-generated voice clones of political candidates purportedly demonstrating ballot rigging. | Colombia.[27] Nigeria.[28] Slovakia.[29] | Uncertainty among users over the authenticity of content damaged trust in the integrity of online sources;[30] allegations shared by rival political candidates incentivised unethical election behaviour.[31] |
The threat
Highly realistic AI-generated content – also known as 'deepfakes' – containing false allegations about a political candidate could theoretically be used to affect voter behaviour during elections, undermine confidence in the quality of the information environment and incite online or real-world harassment against targets.[32]
Evidence and impact
The majority of character assassinations identified (see Table 2 above) were circulated by domestic actors, such as rival political opponents – as seen recently in Poland, India, Turkey and the US.[33] Although voters remain free to choose how they cast their vote, these tactics undermine appropriate election behaviour by lending credibility to disinformation when it serves the personal interests of candidates.[34] Other cases show signs of possible foreign interference from state adversaries seeking to advance their strategic interests. For instance, in both Moldova and Slovakia, the targets of deepfakes were candidates campaigning against parties favouring closer ties with Russia.[35]
Close examination of AI-generated deepfakes suggests no clear impact on election results. In elections already held in Argentina, Indonesia, Poland, Slovakia and Taiwan, political candidates expected to win from polling data still did so, even after the circulation of deepfakes that went viral.[36]
Instead, the negative effects have concentrated on undermining the quality of the information environment and deepening political polarisation, as reflected by the online outrage sparked in Bangladesh by a fabricated image of a politician wearing an outfit which upset religious communities.[37] The long-term implications of these threats will likely include repercussions for the personal safety of political candidates, a chilling effect on future election participation, and an erosion of public trust in the integrity of online sources.
Some deepfake formats are more challenging to detect than others. The most recent examples of deepfake use in elections show a preference for audio and voice cloning deepfakes, where flaws can be masked with background noise or muffled music, in contrast to the more obvious indicators of manipulation in videos or images, such as unnatural bodily movements.[38] Based on the evidence reviewed for this report, character assassinations involving voice cloning techniques have also had the greatest impact – such as in the UK (see Figure 4 below).
Malicious actors will also seek to perpetrate these attacks at different stages of the election cycle, in order to stretch the capacity of election security officials or create distractions for other attacks. Some will occur close to polling day in a so-called AI-generated 'October Surprise', where fake scandals are circulated within a timeframe too narrow for effective fact checking.[39] For example, in Slovakia a deepfake emerged just two days before the polls opened for the 2023 election, accusing a political candidate of ballot rigging.[40]
Considerations for the UK general election
The UK information environment has already been impacted by character assassinations, as with the deepfake audio clip of London Mayor Sadiq Khan in November 2023 (see Figure 4). Shared by far-right political groups, the clip was rapidly amplified, and the Mayor subsequently received online harassment.[41] Concerns are also still being raised within the Labour Party over the refusal of social media platform X to take down deepfake audio clips of party leader Sir Keir Starmer from October 2023. With some clips garnering 1.5 million views, the issue appears to remain a prominent topic in Labour shadow cabinet briefings in 2024.[42]
Figure 4: Transcript excerpt from Sadiq Khan deepfake audio clip shows attempt to exacerbate existing political divisions in the UK
Source: Sabbagh (2023).
Given the number of examples listed which received high user engagement, the media will face dilemmas over whether to report on deepfakes, owing to the risk of amplifying the disinformation. Although standards exist setting out the conduct of media coverage during elections (e.g. Ofcom's Broadcasting Code and IPSO's Editors' Code of Practice), they do not incorporate the risks from new AI systems.[43] Providing new guidance for the media will therefore be essential in preventing the unintentional amplification of character assassinations. This includes providing a list of certified deepfake detection tools and clarifying how media organisations should deal with these threats arising during the critical polling day period, in light of broadcasting restrictions on election coverage. Ofcom should also engage with the Irish broadcasting regulator Coimisiún na Meán, which is publishing a review in October 2024 on the appropriateness of broadcast moratoriums in handling deepfake content during upcoming elections.[44]
1.2 Deceptive political advertising
Table 3: Examples of deceptive political advertising
Threat example | Countries affected | Impact |
Fabricated endorsements of political candidates from living or deceased politicians. | India.[45] Indonesia.[46] Pakistan.[47] South Africa.[48] | High user engagement amplified the disinformation;[49] uncertainty among users over the authenticity of content damaged trust in the integrity of online sources.[50] |
Political candidates claiming election victory ahead of ballot results being officially certified. | Pakistan.[51] | Protests over alleged vote rigging led to two deaths in Pakistan.[52] |
Fabricated content targeting specific demographic groups. | US.[53] | Targeted voters shown fabricated content assumed it to be authentic, damaging trust in the integrity of online sources.[54] |
Fabricated AI avatar used as a political candidate and aesthetic 'makeovers' of real-life candidates. | Belarus.[55] France.[56] | The artificially generated candidate in Belarus was not formally nominated for the election;[57] the French candidate's AI makeover failed to win over voters.[58] |
The threat
New AI systems can significantly enhance the realism, quality and personalisation of campaign material.[59] Although this could benefit under-resourced political parties, candidates could inadvertently create threats through AI-generated advertisements, flooding voters with content that blurs fact and fiction.[60] Deceptive AI-generated political adverts are typically designed by candidates themselves or their supporters, rather than by political opponents or foreign malicious actors.[61]
Evidence and impact
In Indonesia and India, popular deceased politicians have been digitally 'resurrected' in AI videos showing fabricated endorsements of 2024 election candidates throughout the election cycle.[62] In some ways this is an old campaign tactic, given that images of historical figures have regularly been used by political parties to improve support. However, actively ascribing policy positions to a famous deceased person (or a living one, as in South Africa) through AI raises new ethical and legal questions about the right to use the voices or likenesses of such individuals.[63]
Equally concerning is the way that AI has been used by political candidates in Pakistan to claim victory in an election race before the ballots had been officially certified.[64] The content was circulated at a time when there were already protests over allegations of a rigged election, reflecting how this type of AI material can be coordinated to exacerbate political divisions and unrest within society.[65]
Considerations for the UK general election
Existing defamation laws make it a criminal offence to make false statements about the 'personal character or conduct' of a political candidate,[66] but this legislation predates the emergence of AI-generated content. For instance, defamation focuses on protecting individuals against false statements – rather than the images or videos which are central to deceptive AI adverts.[67] There is a need to eliminate this legal ambiguity, which could be exploited by political candidates or their supporters, by clarifying how existing defamation legislation applies to AI-generated campaign endorsements.
Although there is no evidence of widespread AI use by UK political parties, there is still a worrying lack of formal guidance ahead of the general election.[68] Establishing clear red lines for those involved in election campaigning will therefore be critical to reduce the potential for misuse. Setting out fair use requirements – such as encouraging the use of clear labelling (‘watermarking’) and digital imprint requirements on AI-generated campaign materials[69] – will help to reduce confusion over whether campaign materials are authentic. Any new guidance could draw on the Irish Electoral Commission’s recent framework on AI election use, which calls on political parties to not knowingly produce deceptive AI content.[70]
Outside of political parties, election security officials should also be mindful of how loose networks of party supporters or extremists could leverage similar techniques in more nefarious ways. This will include AI-generated cartoons which do not involve explicit claims of real-life events or candidates but may exacerbate prejudicial stereotypes. Given this, political parties should be encouraged to apply the same standards outlined above to affiliate organisations producing campaign content on their behalf.
1.3 Voter targeting
Table 4: Examples of voter targeting
Threat example | Countries affected | Impact |
AI-generated avatars and images circulated on social media platforms mimicking voters and promoting false narratives about political candidates, campaign issues or election integrity. | Taiwan.[71] UK.[72] US.[73] | Pro-Beijing candidate promoted by AI avatars did not win the Taiwanese election;[74] online users reposted the fabricated US-targeted images on social media, amplifying the disinformation;[75] social media bots spreading voter fraud claims prior to London mayoral elections were amplified by verified accounts.[76] |
AI-generated voice calls and avatars addressing voters with personalised election material. | India.[77] Indonesia.[78] | Indian political candidate was able to reach roughly 1.2 million voters through AI personalised calls;[79] AI-powered voter targeting app services were allegedly sold to 700 Indonesian legislative candidates.[80] |
The threat
New AI systems will enhance the impact of well-established influence operation tactics, such as the use of 'bot' or inauthentic accounts, designed to mimic real users, which circulate and amplify disinformation on social media. Deploying masses of these fake accounts with an aligned political narrative can manipulate information environments to create the illusion of broad social consensus on certain electoral issues.[81] Additionally, generative AI could assist in micro-targeting voters with tailored disinformation content, as the latest large language models exhibit more coherent and personalised responses to user interactions.[82]
Evidence and impact
Since the release of the current generation of generative AI models in late 2022, AI-based voter targeting has been used for different purposes. In both Taiwan and the US, such efforts sought to influence the electorate towards supporting pro-Beijing candidates or political positions from early in the election cycle.[83] In contrast, the focus in India and Indonesia was on making candidates more appealing to voters by providing content which addressed individuals by name or in their preferred language.[84]
Given that Indonesian political candidates purchased AI voter targeting services from a commercial provider,[85] there are concerns that financial incentives will arise for more nefarious forms of personalised engagement which integrate disinformation.[86] Nevertheless, evidence from Taiwan suggests that these attempts have so far failed to change voter attitudes, likely because individuals select their own information sources and often use a variety of media.[87] As such, AI bots will be more effective in reinforcing existing echo chambers within online fora, where people are only exposed to information that confirms their existing perspectives.[88] This is a concern in itself, since the growth of these echo chambers across different online platforms makes voters more susceptible to malicious influence.[89]
Considerations for the UK general election
The London mayoral elections were recently targeted by automated bots circulating false narratives of voter fraud in the run-up to polling day.[90] Although no evidence was identified showing this undermined public confidence in the eventual election results, hashtag analysis of when the disinformation was trending on social media platforms shows distinct spikes in amplification (see Figure 5).[91] The first peak in virality, midway through 29 April, correlated with the automated bot activity. After rapidly dropping off, a second peak occurred later that day, when a popular verified account posted content incorporating the hashtag.[92]
Figure 5: AI voter targeting efforts during the 2024 London mayoral elections saw brief viral engagement but no evidence of affecting election results
Source: TalkWalker analysis of #LondonVoterFraud from 27 April – 14 May 2024.
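The two-peak pattern in Figure 5 is the kind of signal that simple volume analysis can surface. As a minimal sketch, assuming only a list of post timestamps for a hashtag (rather than any specific platform API), the following Python function flags hours whose post volume spikes well above the running baseline; the thresholds are illustrative assumptions, not calibrated values:

```python
from collections import Counter
from datetime import datetime

def detect_spikes(timestamps: list[datetime],
                  factor: float = 3.0,
                  min_posts: int = 50) -> list[tuple[datetime, int, float]]:
    """Flag hours whose hashtag volume exceeds the running average of all
    preceding active hours by `factor`, ignoring hours below `min_posts`."""
    # Bucket posts into hourly counts.
    hourly = Counter(ts.replace(minute=0, second=0, microsecond=0)
                     for ts in timestamps)
    hours = sorted(hourly)
    spikes = []
    for i, hour in enumerate(hours[1:], start=1):
        # Baseline: mean volume across all earlier active hours.
        baseline = sum(hourly[h] for h in hours[:i]) / i
        if hourly[hour] >= min_posts and hourly[hour] > factor * baseline:
            spikes.append((hour, hourly[hour], round(baseline, 1)))
    return spikes
```

Applied to data such as the #LondonVoterFraud posts, an approach like this would flag both the initial bot-driven burst and the later surge triggered by the verified account, prompting analysts to inspect which accounts drove each peak.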
With many sensitive campaign issues at stake in the upcoming general election, masses of bots directed to exploit divisions in society could deploy more articulate disinformation content to deepen the UK's already polarised political environment.[93] Combinations of AI systems – including generative AI, computer vision and AI-enabled sentiment analysis – could also be used in the near future to analyse election preferences, and quickly generate and disseminate hyper-personalised disinformation at scale to misinform voters over electoral issues.[94]
Given the interplay of these different AI threats, systematic monitoring and analysis of adversarial trends will be required, as in Taiwan, where fact-checking organisations were able to support government officials in detecting and countering malicious AI election activity.[95] Relevant election security teams such as the DDTF should create a repository of viral AI threat material from recent and upcoming elections, including the UK local elections in May, to inform future resilience measures.
2. Election Information Threats
This section explores AI use cases designed to undermine the quality of the information environment surrounding elections, to confuse voters and damage the integrity of electoral outcomes. Some of these threats pose challenges for defining the limits of legitimate electoral practices, for instance candidates dismissing potentially accurate allegations as AI-generated to avoid accountability.
Table 5: Overview of AI-based election information threats
Election information threat types | AI-generated knowledge sources; the 'Liar's Dividend'; AI-supported polling disinformation. |
Recent countries affected[96] | India; Mexico; Pakistan; Turkey; US. |
Overall impact | Limited evidence indicates political candidates seizing on disinformation to undermine the credibility of their rivals, and failed efforts to deter individuals from voting in elections. |
2.1 AI-generated knowledge sources
Table 6: Examples of AI-generated knowledge sources
Threat example | Countries affected | Impact |
AI-generated news sites spreading disinformation against political candidates. | US.[97] | Lack of evidence showing sites received high levels of public engagement. |
AI-generated news anchors replicating real individuals and discussing political issues. | US.[98] | A video of a fabricated American news anchor swearing about a former US President went viral on social media platforms, amplifying the fake content.[99] |
The threat
When combined, different generative AI formats can create highly realistic false online articles, websites, and fabricated news anchors.
Evidence and impact
As of April 2024, the number of websites hosting false AI-generated articles had increased from 49 to more than 800, spanning several languages including English, Arabic and Chinese – though the actual number is likely to be much higher.[100] The small number of recent election-related AI news sources we identified appeared only in the US, with fake allegations of US President Biden's death or of US Republican primary candidate Nikki Haley wanting the US to accept Gazan refugees.[101] However, we found no evidence that these particular sites attained widespread attention from the US public. While the fabricated US news anchor went viral on some social media platforms, this occurred at a very early stage of the election cycle and is thus unlikely to have had a significant impact on voter behaviour.[102]
Considerations for the UK general election
Given recent experiments showing the challenges online users face in distinguishing AI-generated material, the proliferation of these information sources will make it increasingly difficult for individuals to determine the legitimacy of a particular source.[103] This is especially true if the site integrates genuine stories alongside false ones, or individuals exploit the financial gains made from luring companies into paying for professional ads on attention-grabbing AI news sites.[104] UK political parties should therefore be required to officially register all legitimate party-affiliated websites and use content provenance techniques to sign their digital materials, in order to address any potential uncertainties.
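In practice, content provenance for campaign materials would likely build on an emerging standard such as C2PA, with keys certified to each registered party. As a minimal sketch of the underlying mechanism only, the example below signs a digital asset with an Ed25519 key using the open-source `cryptography` library; anyone holding the public key published on a party's registered website could then verify that an asset is authentic and unaltered.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Sketch only: a real deployment would use a provenance standard such as
# C2PA with certified keys and embedded metadata, not bare signatures.
private_key = Ed25519PrivateKey.generate()  # held securely by the party
public_key = private_key.public_key()       # published on the registered party website

asset = b"...campaign leaflet bytes..."     # placeholder for a digital campaign asset
signature = private_key.sign(asset)         # distributed alongside the asset

# Verification by a journalist, platform or regulator:
try:
    public_key.verify(signature, asset)
    print("Asset verified against the party's published key.")
except InvalidSignature:
    print("Verification failed: asset altered or not issued by the party.")
```

Any AI-generated fake site or doctored asset would fail this check, giving candidates a fast, neutral way to expose content falsely attributed to them.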
Fake news sources created closer to the UK election cycle could also emerge and seek to use more nuanced disinformation claims than the US-based examples identified, at a time when the wider information ecosystem will likely be highly congested. This could increase the risk of such content being disseminated by mainstream sources, thereby giving the disinformation more credence (also known as ‘information laundering’) and degrading trust in the information space.[105] To combat this, journalists should be provided with reporting mechanisms to help regulators identify and take down any prominent fake sites. This could be designed in a similar way to how the organisation NewsGuard offers regular alerts on AI-generated fake news sources via a ‘Misinformation Monitor’ distribution list.[106]
2.2 The ‘Liar’s Dividend’
Table 7: Examples of the ‘Liar’s Dividend’
Threat example | Countries affected | Impact |
Political candidates dismissing audio recordings showing evidence of corruption as deepfakes, despite disputed verification by content detection tools. | India.[107] Mexico.[108] | Audio clips were circulated by a rival political opponent in India and claimed to be authentic, incentivising unethical election behaviour;[109] individuals implicated in the Mexican case study were targeted with online harassment, threatening their personal safety.[110] |
Political candidate circulating fake AI-edited video of an opponent linked to terrorist organisation. | Turkey.[111] | The video was shown at a political rally to voters. Despite fact-checking organisations exposing the content as AI-edited, the political candidate sought to escape accountability by asserting that claims made within the video against his opponent were still true.[112] |
The threat
The ‘Liar’s Dividend’ is a phenomenon where an individual or organisation dismisses evidence of unethical or unlawful actions as fake content, to escape accountability.[113]
Evidence and impact
Identifying clear examples of the Liar's Dividend can be difficult, as deepfake detection tools vary in the thresholds or indicators used to determine whether samples are authentic.[114] Since January 2023, three case studies have been identified as involving a Liar's Dividend dispute.[115] Although we found no evidence of any clear impact on the reputation of the political candidates implicated in these incidents, opposition candidates in Turkey and India circulated the content claiming it to be authentic – seeking to exploit this uncertainty for electoral gain.[116]
Considerations for the UK general election
With deepfakes becoming increasingly realistic and difficult to discern from authentic content, the threshold of evidence required to prove that incriminating content is genuine is rising. In turn, political candidates can leverage claims that damaging evidence is actually deepfake material and thus protect their reputation.[117] In doing so, the notion that 'deepfakes are everywhere' will gain further resonance among the public, undermining trust in genuine sources of information.[118]
2.3 AI-supported polling disinformation
Table 8: Examples of AI-supported polling disinformation
Threat example | Countries affected | Impact |
Voice cloning fabrication of government officials encouraging voters to refrain from going to the polls. | Pakistan.[119] US.[120] | US voice cloning fabrication was circulated to between 5,000 and 25,000 voters, but did not show clear signs of leading to disenfranchisement.[121] |
Chatbots generating confusion on voting requirements and promoting invalid polling information. | US.[122] | Observed in a controlled experiment, meaning no impact on actual voter turnout. |
The threat
Although generative AI tools offer benefits for simplifying instructions for voters, the same capabilities are being used to spread disinformation about registration processes – as well as the time, manner, and place of voting.[123]
Evidence and impact
In both Pakistan and the US, AI-generated voice recordings were circulated to members of the public calling on them to boycott upcoming elections.[124] In Pakistan, this was framed around allegations of the vote being rigged by the authorities, while in the US, the message framed voting in the primary elections as detracting attention from the key presidential race in November 2024. Despite the possibility of disenfranchisement, evidence collected in the aftermath of these threats once again reveals no clear signs of changed voter behaviour. For instance, despite the number of individuals who received the US President Biden robocall close to polling day (see Figure 6), those affected were more focused on bringing litigation against the perpetrators.[125]
Figure 6: Transcript excerpt from US President voter suppression deepfake reflects attempts to add personalised elements attributed to Joe Biden
Source: Seitz-Wald and Memoli (2024).
In some cases, this threat will materialise in a non-malicious way, such as through chatbots providing incorrect information to voters following amendments to voting requirements or changes to polling locations. Leading commercial chatbot models tested on potential US voter queries were found to perform poorly on accuracy, with more than one-third of 130 responses being incomplete or potentially harmful.[126] While voters are unlikely to rely solely on chatbots for their polling information, the inaccuracy of responses will contribute to confusion over facts on the ground – particularly for first-time voters.
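To illustrate how such accuracy testing might be structured, the sketch below outlines a small evaluation harness in Python. The query function, sample questions and keyword-based grading are all hypothetical stand-ins: the cited study relied on expert review, and a real harness would need officially verified reference answers.

```python
from typing import Callable

# Hypothetical voter queries paired with facts a correct answer must
# mention. Real reference answers should come from official sources.
VOTER_QUERIES: list[tuple[str, list[str]]] = [
    ("What photo ID can I use to vote at a UK polling station?",
     ["passport", "driving licence"]),
    ("How late can I apply for a postal vote before polling day?",
     ["deadline"]),
]

def grade(answer: str, required: list[str]) -> bool:
    """Crude keyword check standing in for expert review of responses."""
    answer = answer.lower()
    return all(keyword in answer for keyword in required)

def run_evaluation(query_chatbot: Callable[[str], str]) -> float:
    """Return the share of queries the chatbot answers acceptably.
    `query_chatbot` is any callable wrapping the model API under test."""
    correct = sum(grade(query_chatbot(q), req) for q, req in VOTER_QUERIES)
    return correct / len(VOTER_QUERIES)
```

Running a harness like this against UK-specific queries, before and during the election period, would give regulators a repeatable measure of whether chatbot answers on registration, voter ID and polling logistics are drifting from the facts.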
Considerations for the UK general election
Within the UK context, AI-supported polling disinformation could be targeted particularly at spreading falsehoods about the UK's recent voter ID laws. Misleading individuals over either the registration process or the forms of ID they are allowed to show at polling stations would create confusion over whether they are still eligible to vote.[127] Recent surveys show that knowledge of the new rules is particularly limited among minority demographics.[128] Polling disinformation could therefore be more effective if disseminated in different languages through chatbot services. It is subsequently vital that clear guidance is provided for voters, particularly those with English as a second language, encouraging vigilance against voter suppression efforts and other AI election threats.
Alongside dissuading people from voting, such content could also micro-target voters in marginal seats to confuse them over polling locations and times.[129] This could involve fictitious rumours of widespread travel disruption to deter attendance at polling stations.[130] Relevant government election security teams such as the DDTF and the JESP should therefore coordinate simulation exercises with other stakeholders, to understand the potential impact of these threats and clarify contingency plans. The US state of Arizona has already conducted similar tabletop activities, with election officials testing scenarios spanning the six months leading up to polling day, in which AI-generated robocalls, phishing emails and polling disinformation threats emerged.[131] The same exercises also required officials to make trade-offs between different countermeasures, factoring in financial constraints.
3. Electoral Infrastructure Threats
This section explores malicious AI use cases designed to target the systems and individuals responsible for securing the integrity of election processes, with the aim of manipulating election outcomes or eroding confidence in election results. This includes ‘hack and leak’ operations and AI-generated phishing emails against election officials.
Table 9: Overview of AI-based electoral infrastructure threats
Electoral infrastructure threat types | AI-based cyber intrusions. |
Recent countries affected[132] | None identified. |
Overall impact | Speculative concerns that new AI systems will lower the barriers to entry for malicious actors seeking to infiltrate election infrastructure, such as voter databases, to interfere with electoral processes. |
3.1 AI-based cyber intrusions
The threat
Democratic nations are increasingly transferring parts of their electoral processes to digital infrastructure – such as databases containing voter registration details and ballot counting machines – to help reduce electoral fraud and improve accessibility.[133] New AI developments could also help to enhance election resilience by detecting cyber intrusions faster or providing additional verification of mail-in or absentee ballots.[134]
However, in doing so they also become attractive targets for malicious actors. These include actors trying to interfere with electoral outcomes, damage public confidence in the electoral process, or gain financially from personal data leaks.[135] New generative AI systems can bolster the capabilities of malign actors to find vulnerabilities in computer networks or conduct sophisticated cyberespionage operations. The rise of commercially available interfaces like 'WormGPT', which guide hackers in developing highly persuasive phishing attacks at scale, demonstrates how these tools are already readily accessible.[136]
Evidence and impact
While we could find no recent evidence of AI being used to facilitate election-related cyber intrusions, the current focus on election-related deepfakes could mean that examples are underreported. Indeed, (non-AI related) voter data leaks were reported in several countries recently – including Indonesia, Taiwan, Turkey, and Israel.[137] Between 2023 and early 2024 there was also a 100% increase in election-related cyber incidents, with experts expressing concern over future cases materialising with the support of AI systems.[138]
The UK was itself subject to a recent cyberattack, in which unauthorised access was gained to the Electoral Commission's databases, exposing the names and addresses of all voters registered between 2014 and 2022.[139] Nevertheless, there is uncertainty over whether these intrusions were motivated more by financial incentives than by any explicit desire to undermine election integrity. Political parties may also be targeted in a variety of ways to allow adversaries to gain sensitive information covertly, such as with the 2021 targeting of UK parliamentarians' emails by a Chinese state-affiliated organisation.[140]
Considerations for the UK general election
In more extreme scenarios, customised AI malware could result in voting systems being manipulated or misreporting votes, potentially forcing re-runs and damaging trust in democratic processes.[141] However, the UK will be more insulated against these types of threats than countries such as the US, owing to its continued use of paper-based voting and human ballot counting.[142]
Beyond cyber intrusions into electoral infrastructure, the UK Government must also remain cognisant of how AI-generated phishing activities could target election officials in the upcoming general election. For instance, voice cloning techniques could be used against polling staff to acquire personal data (such as residential addresses) for exerting coercive control over election officials.[143]
Conclusion
The success of AI-enabled election threats relies on a multitude of factors. Influence operations are complex and multifaceted psychosocial activities, the effects of which can be difficult to isolate to any one element, such as AI. Although this Briefing Paper has highlighted how new AI systems could enhance electoral interference, this does not necessarily mean the public will be more susceptible to such manipulation than before. It is therefore vital that further research informs balanced media coverage and government strategies, avoiding the risk of over-sensationalising the threat from AI.
The window to proactively safeguard the UK general election from AI threats is rapidly closing, but there are still opportunities to enhance election security. In particular, by publishing guidance for political parties on fair use of AI in the election period, and for media outlets on how they should report on incidents, clearer accountability will be placed upon these organisations to play their respective roles in preventing these risks from materialising.
Any short-term measures will need to be accompanied by longer-term interventions designed to improve societal resilience against the threat of influence operations. In addition to the potential threats to upcoming elections identified in this paper, the second-order impacts on trust in the information environment, government institutions and the democratic system itself cannot be neglected. The final CETaS Research Report on this topic will investigate these issues in more detail.
References
[1] Koh Ewe, “The Ultimate Election Year: All the Elections Around the World in 2024,” TIME, 28 December 2023, https://time.com/6550920/world-elections-2024/.
[2] Davey Alba and Adam Satariano, “At Least 70 Countries Have Had Disinformation Campaigns, Study Finds,” The New York Times, 26 September 2019, https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html.
[3] Matthew J. Feteau, “Understanding Information Operations & Information Warfare,” Global Security Review, 7 January 2019, https://globalsecurityreview.com/understanding-information-operations-information-warfare/.
[4] Alistair Somerville and Jonas Heering, “The Disinformation Shift: From Foreign to Domestic,” Georgetown Journal of International Affairs, https://gjia.georgetown.edu/2020/11/28/the-disinformation-shift-from-foreign-to-domestic/.
[5] Ardi Janjeva et al., “The Rapid Rise of Generative AI: Assessing risks to safety and security,” CETaS Research Reports (December 2023): 11.
[6] Ethan Bueno de Mesquita et al., Preparing for Generative AI in the 2024 Election: Recommendations and Best Practices Based on Academic Research (University of Chicago Harris School of Public Policy and the Stanford Graduate School of Business: 2024), 11, https://harris.uchicago.edu/files/ai_and_elections_best_practices_no_embargo.pdf.
[7] Mekela Panditharante et al., “Artificial Intelligence, Participatory Democracy, and Responsive Government,” Brennan Center for Justice, 3 November 2023, https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-participatory-democracy-and-responsive-government.
[8] Di Cooke, “Synthetic Media and Election Integrity: Defending our Democracies,” CETaS Expert Analysis (August 2023): 10.
[9] Ibid.
[10] CETaS workshop, 12 January 2024.
[11] Based on the number of AI threat instances identified in elections that occurred or are forthcoming between January 2023 and May 2024. This list does not factor in multiple incidents within the countries cited. Local elections where AI interference has occurred have been included to ensure these threats are analysed (e.g. Colombia and France).
[12] Jack Nicas and Lucía Cholakian Herrera, “Is Argentina the First A.I. Election?,” The New York Times, 15 November 2023, https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html.
[13] Ali Swenson and Kelvin Chan, “Election disinformation takes a big leap with AI being used to deceive worldwide,” AP News, 14 March 2024, https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd.
[14] Krzysztof Mularczyk, “Row over deepfake of Polish PM in opposition-party broadcast,” Brussels Signal, 25 August 2023, https://brusselssignal.eu/2023/08/row-over-deepfake-of-polish-pm-in-opposition-party-broadcast/.
[15] Julian Ryall, “South Korea battling deepfakes ahead of key election,” Deutsche Welle, 1 April 2024, https://www.dw.com/en/south-korea-battling-deepfakes-ahead-of-key-election/a-68712855.
[16] Aylin Elci, “AI content is meddling in Turkey’s election. Experts warn it’s just the beginning,” Euronews, 12 May 2023, https://www.euronews.com/next/2023/05/12/ai-content-deepfakes-meddling-in-turkey-elections-experts-warn-its-just-the-beginning.
[17] Marianna Spring, “Sadiq Khan says fake AI audio of him nearly led to serious disorder,” BBC News, 13 February 2024, https://www.bbc.co.uk/news/uk-68146053; Ailbhe Rea, “Battle With X Over Starmer Deepfake Highlights UK Election Worry,” Bloomberg, 15 March 2024, https://www.bloomberg.com/news/articles/2024-03-15/battle-with-x-over-starmer-deepfake-highlights-uk-election-worry.
[18] Arianna Johnson, “Republicans Share An Apocalyptic AI-Powered Attack Ad Against Biden: Here’s How To Spot A Deepfake,” Forbes, https://www.forbes.com/sites/ariannajohnson/2024/05/07/ai-generated-met-gala-images-of-katy-perry-rihanna-went-viral-heres-how-to-spot-a-deepfake/.
[19] Spring (2024); Swenson and Chan (2024).
[20] Yoav Arad Pinkas, Beyond Imagining – How AI is Actively Used in Election Campaigns Around The World, (Check Point Research: April 2024), https://research.checkpoint.com/2024/beyond-imagining-how-ai-is-actively-used-in-election-campaigns-around-the-world/; GOP (@GOP), “Beat Biden,” YouTube video, 25 April 2023, https://www.youtube.com/watch?v=kLMMxgtxQ1Y.
[21] Dan Sabbagh, “Faked audio of Sadiq Khan dismissing Armistice Day shared among far-right groups,” The Guardian, 10 November 2023, https://www.theguardian.com/politics/2023/nov/10/faked-audio-sadiq-khan-armistice-day-shared-among-far-right; Mularczyk (2023); Elci (2023).
[22] Swenson and Chan (2024).
[23] “A new deepfake with Maia Sandu on social networks. Statements by the Presidency,” Radio Moldova, 29 December 2023, https://radiomoldova.md/p/27669/a-new-deepfake-with-maia-sandu-on-social-networks-statements-by-the-presidency.
[24] Microsoft, “Same targets, new playbooks: East Asia threat actors employ unique methods,” Microsoft Security Insider, 4 April 2024, https://www.microsoft.com/en-us/security/business/security-insider/reports/east-asia-threat-actors-employ-unique-methods/.
[25] Aman (2024).
[26] Radio Moldova (2023).
[27] Pinkas (2024).
[28] Allie Funk, Adrian Shahbaz and Kian Vesteinsson, Freedom on the Net 2023: The Repressive Power of Artificial Intelligence (Freedom House: 2024), https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence.
[29] Morgan Meaker, “Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy,” WIRED, 3 October 2023, https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/.
[30] PRNigeria, “Fact-Check: How Deepfake Audio was used to Frame Atiku, Okowa, Others in 2023 Elections,” PRNigeria News, 24 February 2023, https://prnigeria.com/2023/02/24/atiku-okowa-election/.
[31] Pinkas (2024).
[32] Putri Rizkika Bahri S.H. et al., “Artificial Intelligence (AI)-Based Campaign in the Implementation of General elections,” International Journal of Multidisciplinary 9, no. 2 (February 2024): 119–120, https://doi.org/10.31305/rrijm.2024.v09.n02.012.
[33] Mularczyk (2023); Nilesh Christopher, “An Indian politician says scandalous audio clips are AI deepfakes. We had them tested,” Rest of World, 5 July 2023, https://restofworld.org/2023/indian-politician-leaked-audio-ai-deepfake/; Elci (2023); GOP (2023).
[34] Jack Kenny, “Advanced artificial intelligence techniques and the principle of non-intervention in the context of electoral interference,” in Artificial Intelligence and International Conflict in Cyberspace, ed. Fabio Cristiano et al. (Oxon: Routledge, 2023), 236.
[35] Radio Moldova (2023); Meaker (2023).
[36] Manuela Tobias, “Argentina’s Milei Has Narrow Lead Over Massa Ahead of Runoff,” Bloomberg UK, 3 November 2023, https://www.bloomberg.com/news/articles/2023-11-03/argentina-election-milei-holds-narrow-lead-over-massa-in-poll-ahead-of-runoff; “Prabowo Subianto will be the next president of Indonesia,” The Economist, 25 April 2024, https://www.economist.com/interactive/2024-indonesia-election-tracker; “Poll of Polls – Polish polls, trends and election news,” POLITICO, https://www.politico.eu/europe-poll-of-polls/poland/; “Poll of Polls – Slovakian polls, trends and election news,” POLITICO, https://www.politico.eu/europe-poll-of-polls/slovakia/; “Lai Ching-te will be the next president of Taiwan,” The Economist, 13 January 2024, https://www.economist.com/interactive/2024-taiwan-election.
[37] Swenson and Chan (2024).
[38] Hannah Murphy, “Audio deepfakes emerge as weapon of choice in election disinformation,” Financial Times, 23 January 2024, https://www.ft.com/content/bd75b678-044f-409e-b987-8704d6a704ea.
[39] Bueno de Mesquita et al. (2024), 5; Rami Mubarak et al., “A Survey on the Detection and Impacts of Deepfakes in Visual, Audio, and Textual Formats,” IEEE Access 11 (December 2023): 144504, https://doi.org/10.1109/ACCESS.2023.3344653; Kalina Bontcheva, Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities (February 2024), https://edmo.eu/wp-content/uploads/2023/12/Generative-AI-and-Disinformation_-White-Paper-v8.pdf.
[40] Meaker (2023).
[41] Sabbagh (2023).
[42] Rea (2024).
[43] Ofcom, Guidance Notes – Section Six: Elections and Referendums (Ofcom: March 2017), https://www.ofcom.org.uk/__data/assets/pdf_file/0034/99178/broadcast-code-guidance-section-6-march-2017.pdf; Vikki Julian, “IPSO Blog: Election reporting,” IPSO Blog, 8 November 2019, https://www.ipso.co.uk/news-press-releases/blog/ipso-blog-election-reporting/.
[44] Martin Wall, “Broadcast moratorium before elections is ‘ripe for manipulation’ by bad faith actors, says Darragh O’Brien,” The Irish Times, 7 May 2024, https://www.irishtimes.com/politics/2024/05/07/broadcast-moratorium-before-elections-is-ripe-for-manipulation-by-bad-faith-actors-says-darragh-obrien/.
[45] Amrit Dhillon, “Why dead leaders are hitting the campaign trail in India’s election,” The Times, 5 April 2024, https://www.thetimes.co.uk/article/india-elections-2024-ai-campaigning-6mhfljf2w.
[46] Swenson and Chan (2024).
[47] Victoria Turk and Russell Brandom, “2024 AI Election Tracker,” Rest of World, https://restofworld.org/2024/elections-ai-tracker/.
[48] Phumzile van Damme, “Disinformation, governance and the South African election,” ISS African Futures Blog, 19 March 2024, https://futures.issafrica.org/blog/2024/Disinformation-governance-and-the-South-African-election.
[49] Nilesh Christopher, “How AI is resurrecting dead Indian politicians as election looms,” Al Jazeera, 12 February 2024, https://www.aljazeera.com/economy/2024/2/12/how-ai-is-used-to-resurrect-dead-indian-politicians-as-elections-loom; Nilesh Christopher, “Before India election, Instagram boosts Modi AI images that violate rules,” Al Jazeera, 12 April 2024, https://www.aljazeera.com/economy/2024/4/12/before-india-election-instagram-boosts-modi-ai-images-that-violate-rules; Sejal Sharma, “Using AI, political parties are bringing back their dead leaders,” Interesting Engineering, 14 February 2024, https://interestingengineering.com/culture/political-parties-resurrect-dead-ai.
[50] Van Damme (2024).
[51] Hannah Ellis-Petersen and Shah Meer Baloch, “Imran Khan allies claim shock victory in Pakistan election despite crackdown,” The Guardian, 9 February 2024, https://www.theguardian.com/world/2024/feb/09/pakistan-election-2024-results-delays.
[52] Ibid.
[53] Marianna Spring, “Trump supporters target black voters with faked AI images,” BBC News, 4 March 2024, https://www.bbc.co.uk/news/world-us-canada-68440150.
[54] Ibid.
[55] Joshua Stein, “Belarus Dissidents Turn to AI Deep Fakes,” Center for European Policy Analysis, 13 March 2024, https://cepa.org/article/belarus-dissidents-turn-to-ai-deep-fakes/.
[56] Adam Sage, “French election candidate’s AI makeover fails to win over voters,” The Times, 15 September 2023, https://www.thetimes.co.uk/article/french-election-candidate-s-posters-get-an-ai-makeover-xp2z6g09x.
[57] Mathias Hammer, “Belarusian opposition endorses AI candidate in parliamentary elections,” Semafor, 23 February 2024, https://www.semafor.com/article/02/23/2024/belarusian-opposition-endorses-ai-candidate.
[58] Sage (2023).
[59] Janjeva et al. (2023), 4.
[60] Bueno de Mesquita et al. (2024), 11.
[61] Rumman Chowdhury, “AI-fuelled election campaigns are here – where are the rules?,” Nature, 9 April 2024, https://www.nature.com/articles/d41586-024-00995-9.
[62] Dhillon (2024); Swenson and Chan (2024).
[63] Christopher (2024); Van Damme (2024).
[64] Ellis-Petersen and Baloch (2024).
[65] Ibid.
[66] “Claims made in online political ads,” The Electoral Commission, https://www.electoralcommission.org.uk/voting-and-elections/campaigning-election/online-campaigning/claims-made-online-political-ads.
[67] Sarah H. Jodka, “Manipulating reality: the intersection of deepfakes and the law,” Reuters, 1 February 2024, https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01/.
[68] House of Commons Science, Innovation and Technology Committee, The governance of artificial intelligence: interim report (2023), 30, https://committees.parliament.uk/publications/41130/documents/205611/default/.
[69] “Introducing digital imprints,” The Electoral Commission, https://www.electoralcommission.org.uk/news-and-views/elections-act/introducing-digital-imprints.
[70] “An Coimisiún Toghcháin publishes Framework on Online Electoral Process information, Political Advertising and Deceptive AI Content in advance of June elections,” An Coimisiún Toghcháin – The Electoral Commission News Release, 24 April 2024, https://www.electoralcommission.ie/media-release/an-coimisiun-toghchain-publishes-framework-on-online-electoral-process-information-political-advertising-and-deceptive-ai-content-in-advance-of-june-elections/.
[71] Albert Zhang, “As Taiwan voted, Beijing spammed AI avatars, faked paternity tests and ‘leaked’ documents,” Australian Strategic Policy Institute, The Strategist, 18 January 2024, https://www.aspistrategist.org.au/as-taiwan-voted-beijing-spammed-ai-avatars-faked-paternity-tests-and-leaked-fake-documents/.
[72] Yasmin Rufo, “London mayor election: Bots, misleading URLs cause voter confusion,” BBC News, 1 May 2024, https://www.bbc.co.uk/news/uk-england-london-68923015.
[73] Sean Lyngaas, “Suspected Chinese operatives using AI generated images to spread disinformation among US voters, Microsoft says,” CNN, 7 September 2023, https://edition.cnn.com/2023/09/07/politics/chinese-operatives-ai-images-social-media/index.html.
[74] “Lai Ching-te will be the next president of Taiwan,” The Economist, 13 January 2024, https://www.economist.com/interactive/2024-taiwan-election.
[75] Lyngaas (2023).
[76] Trend analysis of #LondonVoterFraud from 30 April – 1 May 2024 using Talkwalker.
[77] Suhasini Raj, “How A.I. Tools Could Change India’s Elections,” The New York Times, 18 April 2024, https://www.nytimes.com/2024/04/18/world/asia/india-election-ai.html.
[78] Kate Lamb, Fanny Potkin and Ananda Teresia, “Generative AI faces major test as Indonesia holds largest election since boom,” Reuters, 8 February 2024, https://www.reuters.com/technology/generative-ai-faces-major-test-indonesia-holds-largest-election-since-boom-2024-02-08/.
[79] Raj (2024).
[80] Lamb et al. (2024).
[81] Angela Köckritz, In a savvy disinformation offensive, China takes aim at Taiwan election (Mercator Institute for China Studies: December 2023), 7, https://merics.org/sites/default/files/2023-12/MERICS%20Report%20Disinformation%20Taiwan%20December%202023.pdf; Kenny (2023), 235.
[82] CETaS Workshop, 18 January 2024.
[83] Zhang (2024); Lyngaas (2023).
[84] Raj (2024); Lamb et al. (2024).
[85] Lamb et al. (2024).
[86] Tommy Shaffer Shane, The near-term impact of AI on disinformation (Centre for Long-Term Resilience: May 2024), 10, https://www.longtermresilience.org/post/the-near-term-impact-of-ai-on-disinformation.
[87] Ofcom, News consumption in the UK: 2023 (Ofcom: July 2023), https://www.ofcom.org.uk/__data/assets/pdf_file/0024/264651/news-consumption-2023.pdf.
[88] Hawes et al. (2023), 3.
[89] Ibid.
[90] Rufo (2024).
[91] Trend analysis of #LondonVoterFraud from 27 April – 14 May 2024 using Talkwalker.
[92] Ibid.
[93] CETaS Workshop, 12 January 2024.
[94] CETaS Workshop, 18 January 2024.
[95] Zhang (2024).
[96] Based on the number of AI threat instances identified in elections that occurred, or are forthcoming, between January 2023 and May 2024. This count does not factor in multiple incidents within the countries cited. Local elections where AI interference has occurred have been included to ensure these threats are analysed (e.g. Mexico).
[97] Robin Guess, “Analysts Warn of Spread of AI-Generated News Sites,” VOA News, 21 February 2024, https://www.voanews.com/a/analysts-warn-of-spread-of-ai-generated-news-sites-/7497011.html.
[98] Alexandra S. Levine, “In A New Era Of Deepfakes, AI Makes Real News Anchors Report Fake Stories,” Forbes, 12 October 2023, https://www.forbes.com/sites/alexandralevine/2023/10/12/in-a-new-era-of-deepfakes-ai-makes-real-news-anchors-report-fake-stories/.
[99] Ibid.
[100] McKenzie Sadeghi et al., “Tracking AI-enabled Misinformation: 802 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools,” NewsGuard, 15 April 2024, https://www.newsguardtech.com/special-reports/ai-tracking-center/; Pranshu Verma, “The rise of AI fake news is creating a ‘misinformation superspreader’,” The Washington Post, 17 December 2023, https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/.
[101] Guess (2024).
[102] Levine (2023).
[103] Andreea Pocol et al., “Seeing is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” in Advances in Computer Graphics (CGI 2023). Lecture Notes in Computer Science, vol 14496 (Springer, 2023): 427–440, https://doi.org/10.1007/978-3-031-50072-5_34.
[104] Ibid.; Tate Ryan-Mosley, “Junk websites filled with AI-generated text are pulling in money from programmatic ads,” MIT Technology Review, 26 June 2023, https://www.technologyreview.com/2023/06/26/1075504/junk-websites-filled-with-ai-generated-text-are-pulling-in-money-from-programmatic-ads/.
[105] CETaS Workshop, 12 January 2024.
[106] Sadeghi et al. (2024).
[107] Nilesh Christopher, “An Indian politician says scandalous audio clips are AI deepfakes. We had them tested,” Rest of World, 5 July 2023, https://restofworld.org/2023/indian-politician-leaked-audio-ai-deepfake/.
[108] “The Mexican mayor and a deepfake scandal,” BBC Sounds – Trending, 27 January 2024, https://www.bbc.co.uk/sounds/play/w3ct5d9g.
[109] Christopher (2023).
[110] BBC Sounds (2024).
[111] Rehan Mirza, “How AI deepfakes threaten the 2024 elections,” The Journalist’s Resource, 16 February 2024, https://journalistsresource.org/home/how-ai-deepfakes-threaten-the-2024-elections/; Funk et al. (2024).
[112] Pelin Ünker and Thomas Sparrow, “Fact check: Turkey's Erdogan shows false Kilicdaroglu video,” Deutsche Welle, 24 May 2023, https://www.dw.com/en/fact-check-turkeys-erdogan-shows-false-kilicdaroglu-video/a-65554034.
[113] Josh A. Goldstein and Andrew Lohn, Deepfakes, Elections, and Shrinking the Liar’s Dividend (Brennan Center for Justice: January 2024), https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend.
[114] BBC Sounds (2024).
[115] Christopher (2023); BBC Sounds (2024); Ünker and Sparrow (2023).
[116] Ibid.
[117] Goldstein and Lohn (2024).
[118] Mirza (2024).
[119] Nilofar Mughal, “Deepfakes, Internet Access Cuts Make Election Coverage Hard, Journalists Say,” VOA News, 22 February 2024, https://www.voanews.com/a/deepfakes-internet-access-cuts-make-election-coverage-hard-journalists-say-/7498917.html.
[120] Alex Seitz-Wald and Mike Memoli, “Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday,” NBC News, 22 January 2024, https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984.
[121] Michael S. Garrity, “Voter Suppression AI Robocall Investigation Update,” New Hampshire Department of Justice News Release, 6 February 2024, https://www.doj.nh.gov/news/2024/20240206-voter-robocall-update.html.
[122] Julia Angwin, Alondra Nelson and Rina Palta, “Seeking Reliable Election Information? Don’t Trust AI,” Proof, 27 February 2024, https://www.proofnews.org/seeking-election-information-dont-trust-ai/.
[123] CISA, Risk in Focus: Generative A.I. and the 2024 Election Cycle (CISA: January 2024), https://www.cisa.gov/sites/default/files/2024-01/Consolidated_Risk_in_Focus_Gen_AI_Elections_508c.pdf.
[124] Mughal (2024); Seitz-Wald and Memoli (2024).
[125] Matt McClain, “New Hampshire voters sue Biden deepfake robocall creators,” NBC News, 16 March 2024, https://www.nbcnews.com/politics/2024-election/new-hampshire-voters-sue-biden-deepfake-robocall-creators-rcna143662.
[126] Angwin et al. (2024).
[127] CETaS Workshop, 12 January 2024.
[128] Peter Walker, “‘Perilous and chaotic’: why officials are nervy before a likely UK election in 2024,” The Guardian, 2 January 2024, https://www.theguardian.com/world/2024/jan/02/perilous-and-chaotic-election-officials-nervous-as-sunak-prepares-to-name-date.
[129] R. Michael Alvarez, Frederick Eberhardt and Mitchell Linegar, Generative AI and the Future of Elections (Center for Science, Society, and Public Policy: July 2023), 7, https://lindeinstitute.caltech.edu/documents/25475/CSSPP_white_paper.pdf.
[130] Ben Hawes, Wendy Hall, and Matt Ryan, Can artificial intelligence be used to undermine elections? (University of Southampton: September 2023), 3, https://eprints.soton.ac.uk/484562/; CETaS Workshop, 18 January 2024.
[131] Issie Lapowsky, “How ready are we for AI-powered election deception? Arizona just found out,” Fast Company, 14 May 2024, https://www.fastcompany.com/91124372/ai-powered-election-deception-arizona-just-found-out.
[132] Based on the number of AI threat instances identified in elections occurred or forthcoming between January 2023 and May 2024.
[133] USAID, International Foundation for Electoral Systems and DAI, Understanding Cybersecurity Throughout the Electoral Process: A Reference Document (USAID: January 2023), 1, https://www.usaid.gov/sites/default/files/2023-01/Understanding-Cybersecurity-Throughout-the-Electoral-Process_1.pdf.
[134] Edgardo Cortés et al., Safeguards for Using Artificial Intelligence in Election Administration (Brennan Center for Justice: December 2023), https://www.brennancenter.org/our-work/research-reports/safeguards-using-artificial-intelligence-election-administration.
[135] Hawes et al. (2023), 3; James Shires, “Hack-and-Leak Operations and US Cyber Policy,” War on the Rocks, 14 August 2020, https://warontherocks.com/2020/08/the-simulation-of-scandal/.
[136] Janjeva et al. (2024), 28; Arthi Nachiappan, “WormGPT: AI tool designed to help cybercriminals will let hackers develop attacks on large scale, experts warn,” Sky News, 18 September 2023, https://news.sky.com/story/wormgpt-ai-tool-designed-to-help-cybercriminals-will-let-hackers-develop-attacks-on-large-scale-experts-warn-12964220.
[137] “Global Malicious Activity Targeting Elections Is Skyrocketing,” Resecurity, 12 February 2024, https://www.resecurity.com/blog/article/global-malicious-activity-targeting-elections-is-skyrocketing.
[138] Ibid.; CETaS Workshop, 12 January 2024.
[139] Dan Milmo, “Hacked UK voter data could be used to target disinformation, warn experts,” The Guardian, 9 August 2023, https://www.theguardian.com/politics/2023/aug/09/hacked-uk-electoral-commission-data-target-voter-disinformation-warn-expert.
[140] National Cyber Security Centre, “UK calls out China state-affiliated actors for malicious cyber targeting of UK democratic institutions and parliamentarians,” NCSC News, 25 March 2024, https://www.ncsc.gov.uk/news/china-state-affiliated-actors-target-uk-democratic-institutions-parliamentarians.
[141] Hawes et al. (2023), 3.
[142] Milmo (2023).
[143] CISA (2024); Lindsey Gorman and David Levine, The ASD AI Election Security Handbook (The German Marshall Fund of the United States: February 2024), 5–6; CETaS Workshop, 12 January 2024.
In the News
AI's impact on elections is being overblown
"While Meta has a vested interest in minimizing AI’s alleged impact on elections, it is not alone. Similar findings were also reported by the UK’s respected Alan Turing Institute in May. Researchers there studied more than 100 national elections held since 2023 and found “just 19 were identified to show AI interference.” Furthermore, the evidence did not demonstrate any “clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”"
- MIT Technology Review, 3 September 2024.
UK elections and AI misinformation (in Arabic)
- BBC World Service, 4 July 2024.
Is AI a threat to the UK general election?
- The Science or Fiction Podcast (KCL), 30 June 2024.
How much is AI meddling in elections?
- Reuters, 27 June 2024.
Were Fears Of Disinformation During The Election Exaggerated?
- PoliticsHome, 22 June 2024.
This election is a maze of confusing policies, but here's how AI could help
- Evening Standard, 21 June 2024.
How Worried Should We Actually Be About Election Interference?
- The Huffington Post, 19 June 2024.
Technology: legal gaps expose UK election to disinformation threat
- International Bar Association, 17 June 2024.
OpenAI is very smug after thwarting five ineffective AI covert influence ops
"OpenAI's determination that these AI-powered covert influence campaigns were ineffective was echoed in a May 2024 report on UK election interference by The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.
"The current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system," the CETaS report found, noting that of 112 national elections that have either taken place since January 2023 or will occur in 2024, AI-based meddling was detected in just 19 and there's no data yet to suggest election results were materially swayed by AI.
That said, the CETaS report argues that AI content creates second-order risks, such as sowing distrust and inciting hate, that are difficult to measure and have uncertain consequences."
- The Register, 30 May 2024.
Electoral Commission to warn voters of online disinformation amid foreign interference election fears
"A new hub will be created on the Commission’s website and include information urging voters to think critically about information they may see or hear online, particularly on social media.
It comes after The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, warned about the potential threats of artificial intelligence (AI) during the election campaign.
A new study from the institute said there was little evidence that AI had directly impacted election results. There have, however, been early signs of the damage the technology had caused to democratic systems more broadly through a “polarised information space”.
This included confusion over whether AI-generated content is real, damaging trust in online sources; deepfakes inciting online hate against political figures, threatening their personal safety; and politicians exploiting AI disinformation for potential electoral gain."
- The Independent, 30 May 2024.
General Election 2024: How to spot misinformation and fakes
"This all comes as the Alan Turing Institute warned that action was needed to protect the UK election from AI disinformation, calling on the Electoral Commission and Ofcom to create guidelines and get agreements from political parties on how AI might be used in campaigning.
However, the study did find that there was "limited evidence" that AI would impact election results, but that it could still be used to incite hate and spread disinformation online."
- ITV, 30 May 2024.
More than 90% of UK public have encountered misinformation online, study says
- Yahoo!News, 30 May 2024.
Election risks, safety summits and Scarlett Johansson: the week in AI – podcast
- The Guardian Science Weekly Podcast, 30 May 2024.
Deepfakes and AI 'unlikely to swing the election result'
- The Times, 29 May 2024 [print].
"The use of deepfakes and AI to spread misinformation is not likely to affect the outcome of the general election, experts have concluded, but could still "erode trust in democracy". The Alan Turing Institute analysed 112 elections around the world and found attempts to use AI to trick voters. A deepfake is a video or audio clip that has been manipulated using AI tools to replicate a person's face or voice. However, the researchers said "to date there is limited evidence that AI has prevented a candidate from winning compared to the expected result". The study said: "The current impact of AI on specific election results is limited but these threats show signs of damaging the broader democratic system.""
Tories fail to dent support for Labour
- The Times, 29 May 2024 [print].
Poll briefing: AI 'threat' to election
- Daily Mail, 29 May 2024 [print].
Warning over political deepfakes
- the i, 29 May 2024 [print].
AI institute raises alarm about election deepfakes
- The Daily Telegraph, 29 May 2024 [print].
AI disinformation 'could disrupt election'
- Western Daily Press, 29 May 2024 [print].
Warning on use of AI to mislead voters
- Yorkshire Post, 29 May 2024 [print].
BBC Radio 5 Live - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
BBC Radio - Good Morning Scotland - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
BBC Radio 4 - Today - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
LBC News with Martin Stanford - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election [segment: 30:55–36:38].
Action needed to protect election from AI disinformation, study says
- Evening Standard, 29 May 2024.
"In its study, CETaS said it had created a timeline of how AI could be used in the run-up to an election, suggesting it could be used to undermine the reputation of candidates, falsely claim that they have withdrawn or use disinformation to shape voter attitudes on a particular issue.
The study also said misinformation around how, when or where to vote could be used to undermine the electoral process.
Sam Stockwell, research associate at the Alan Turing Institute and the study’s lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information. That’s why it’s so important for regulators to act quickly before it’s too late.”
Dr Alexander Babuta, director of CETaS, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face. Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”"
AI-generated photos and videos pose threat to General Election as 'deep-fake' images could be used to attack politicians' characters, spread hate, and erode trust in democracy
- Daily Mail, 29 May 2024.
UK Government Urged to Publish Guidance for Electoral AI
- Bank InfoSecurity, 29 May 2024.
Alan Turing Institute warns of AI threats to general election
- UK Authority, 29 May 2024.
AI deepfakes and election misinformation
- Science Media Centre, 29 May 2024.
Time running out for regulators to tackle AI threat ahead of general election, researchers warn
- Sky News, 29 May 2024.
"Sam Stockwell, research associate at the Alan Turing Institute and lead author of the report, said online harassment of public figures who are subject to deepfake attacks could push some to avoid engaging in online forums.
He said: "The challenge in discerning between AI-generated and authentic content poses all sorts of issues down the line… It allows bad actors to exploit that uncertainty by dismissing deepfake content as allegations, there's fake news, it poses problems with fact-checking, and all of these things are detrimental to the fundamental principles of democracy."
The report called for Ofcom, the media regulator, and the Electoral Commission, to issue joint guidance and request voluntary agreements on the fair use of AI by political parties in election campaigning.
It also recommended guidance for the media on reporting about AI-generated fake content and making sure voter information includes guidance on how to spot AI-generated content and where to go for advice."
Authors
Sam Stockwell, Megan Hughes, Phil Swatton and Katie Bishop
Citation information
Sam Stockwell, Megan Hughes, Phil Swatton and Katie Bishop, "AI-Enabled Influence Operations: The Threat to the UK General Election," CETaS Briefing Papers (May 2024).