This publication is licensed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, provided the original author and source are credited.
Introduction
On the eve of an extremely close US presidential election, there are widespread fears that malicious actors will interfere with the results or cast doubt on their validity. Several hostile operations linked to Russia, China and Iran seeking to manipulate voting intentions have already been exposed. US politicians, celebrities and even ordinary users are further polluting the online information space with conspiracy theories and disinformation.
The period stretching from the final days of campaigning to voting and the subsequent count is particularly important for countering nefarious activities. Because these stages follow one another so quickly, election security officials face capacity constraints in addressing the variety of threats that emerge.
By early 2024, many observers had new concerns about the intersection between key elections and the rise of generative AI models that allow users to create realistic but fake content at scale. As the skill and access needed to use these tools declined, observers grew anxious that election results would be disrupted and trust in democratic processes eroded.
Recognising that much of this speculation lacked evidence-based analysis, CETaS has monitored elections throughout 2024 to understand the impact of these threats. We have repeatedly cautioned against predictions of the worst-case scenario, based on our evidence that such content is affecting elections in a far more limited way than feared.
Given the prospect of last-minute hostile interference, the following analysis provides a checklist to help US voters, journalists and others protect themselves and the integrity of the election from AI-enabled threats, both during and after election day. Our forthcoming Research Report will be published in the week after the election, and will set out longer-term recommendations for securing the integrity of future democratic processes.
Guidance for voters
- Avoid relying solely on social media for election content, given the prevalence of AI-generated disinformation on these platforms. You should use a variety of trustworthy news sources and credible fact-checking websites to cross-reference any stories or content.
- Be suspicious of any election content that is emotionally charged, sensationalist or unsupported by evidence – or that singles out particular demographic groups. If in doubt, you should check other credible information sources for verification.
- Exercise caution when sharing political content that may be generated by AI or not independently verified, since this can amplify potential disinformation to others. While AI detection tools can be helpful, they should be used in tandem with fact-checking websites, due to the challenges in interpreting results from these systems.
- Avoid relying on generative AI chatbots for polling information, since they are prone to errors. You should go to authoritative sources, such as .gov websites, for details on how, when and where you can vote.
- Develop an awareness of the telltale signs of AI-generated content – such as deepfakes – through digital literacy quizzes.
Guidance for journalists and fact-checkers
- Carefully report and contextualise any AI-enabled election interference, to avoid exaggerating its impact. Failure to do so could misinform the public and erode trust in the integrity of the election.
- When reporting on these cases, refrain from linking to the original content or referencing the content creator (unless they are in a position of power or connected to hostile foreign states). This prevents inadvertent dissemination of disinformation to a broader audience.
- Add clear visual signs that AI-generated content is misleading (rather than just sharing the original content) and ensure that headlines closely match the content of the article, avoiding sensationalism.
- Cultivate authoritative sources on the election and maintain clear channels of communication with election officials to rapidly verify or rebut any claims that have gone viral.
Guidance for election security officials
- Carefully consider whether public announcements about AI-enabled interference efforts are justified. This assessment should be based on whether the threat in question poses significant risks to the wider public or election process.
- When the threat is severe enough, swiftly attribute responsibility – particularly if the culprit is linked to a hostile state. Any information provided to the public should focus strictly on the evidence and what voters can do to protect themselves.
- Ensure that voters receive consistent polling information, while debunking viral disinformation through trusted offline and online channels.
- Adopt a variety of cybersecurity practices. These include multifactor authentication for user accounts to reduce unauthorised intrusions, time-sensitive codewords before exchanging sensitive information on phone calls and technical controls on election websites to restrict AI-generated inauthentic requests, such as CAPTCHA tests (see the sketch after this list).
- Debunk false information quickly by enlisting the help of trusted local voices, including religious leaders and local radio and television stations.
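The last of these controls can be illustrated in code. The sketch below is a minimal, hypothetical example of server-side CAPTCHA verification on an election information endpoint, assuming a Flask application and Google's reCAPTCHA `siteverify` API; the route name, field names and configuration are illustrative placeholders rather than a recommended implementation.

```python
# Minimal sketch: rejecting automated (potentially AI-driven) requests
# to an election information endpoint via server-side CAPTCHA checks.
# Route and variable names are illustrative, not prescriptive.
import os

import requests
from flask import Flask, abort, request

app = Flask(__name__)
RECAPTCHA_SECRET = os.environ.get("RECAPTCHA_SECRET", "")  # placeholder config
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def captcha_passed(token: str, client_ip: str) -> bool:
    """Ask the CAPTCHA provider whether this client solved the challenge."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": RECAPTCHA_SECRET, "response": token, "remoteip": client_ip},
        timeout=5,
    )
    return resp.ok and resp.json().get("success", False)

@app.route("/polling-info", methods=["POST"])
def polling_info():
    token = request.form.get("g-recaptcha-response", "")
    if not captcha_passed(token, request.remote_addr):
        abort(403)  # likely automated or inauthentic traffic
    return {"status": "ok", "message": "Authoritative polling details here."}
```

The other measures in the list – multifactor authentication and time-sensitive codewords – are organisational controls enforced through identity providers and operational procedure rather than application code.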
Guidance for social media platforms and AI companies
- Social media platforms should ensure that users have consistent access to authoritative election information during and after election day. They should amplify this through in-feed recommendations or prominent notices on in-app election hubs.
- AI companies should direct users to authoritative election sources in response to election-related prompts, while barring models from answering election queries for which they cannot provide accurate information.
- Social media platforms should label or hide posts that suppress voter participation, mislead people over the voting process or spread baseless allegations of voter fraud. They should provide election information not just in English but also in other languages to increase its accessibility.
- Social media platforms should coordinate among themselves when viral AI-generated disinformation emerges on one platform, to reduce the risk of it spreading to others (a sketch of one coordination mechanism follows this list).
- Social media platforms should prioritise resources according to the level of risk each threat poses, especially those that may lead to voter suppression or physical violence.
- Given that there are various barriers to data access, social media moderation teams should maintain transparent and secure communication channels with third-party fact-checkers, journalists, election officials and researchers to ensure that threats can be flagged quickly.
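One way the cross-platform coordination above is often implemented is by sharing perceptual hashes of flagged media rather than the media itself, so that other platforms can catch near-duplicate uploads without redistributing the content. The sketch below is a hypothetical illustration using the open-source imagehash library; the shared hash list and distance threshold are invented for the example, and production systems typically use dedicated schemes such as Meta's open-source PDQ hashes.

```python
# Illustrative sketch: checking new uploads against perceptual hashes of
# known viral disinformation shared across platforms. Hash values and the
# matching threshold here are placeholders, not real flagged content.
import imagehash
from PIL import Image

# Hashes a platform might receive through a cross-industry sharing channel
# (64-bit perceptual hashes encoded as hex strings).
SHARED_FLAGGED_HASHES = [
    imagehash.hex_to_hash("d1c4b2a390f8e871"),  # placeholder value
]
MATCH_THRESHOLD = 8  # max Hamming distance to count as a near-duplicate

def is_known_disinformation(image_path: str) -> bool:
    """Return True if the image is a near-duplicate of flagged content."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - flagged <= MATCH_THRESHOLD
               for flagged in SHARED_FLAGGED_HASHES)

if __name__ == "__main__":
    print(is_known_disinformation("upload.jpg"))
```

Because only hashes are exchanged, platforms can cooperate without sharing user data or amplifying the flagged content itself.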
Guidance for political candidates, parties and campaigners
- Political candidates and campaigners should refrain from creating or sharing deceptive AI-generated content targeting the opposition, as the realism of these outputs can leave voters uncertain of the truth.
- Political parties should denounce AI-generated disinformation designed to disrupt the voting process or undermine the integrity of the election, once authorised to do so by election security officials.
- When responding to malicious content such as deepfakes linked to other political parties, candidates should first report the incident to their party and election security officials before deciding on a response.
- Political parties should provide additional resources for down-ballot contests, such as congressional or state elections, given that there are fewer fact-checking organisations monitoring these contests for disinformation. If local candidates are targeted, they should report such cases to their party’s leadership and election security officials before taking action.
- To reduce the likelihood of being targeted, political candidates should always share party documents or other communications using official devices with adequate cybersecurity protections.
Conclusion
Following hostile interference attempts in the 2016 and 2020 presidential elections, the US introduced a range of measures that have increased the resilience of voting processes against subversion. But neither of these contests took place in the era of generative AI capabilities, which pose new threats to elections.
Voters, journalists and others can all play their roles in safeguarding the outcome of the 2024 election against AI-generated disinformation. They should adopt appropriate media- and cyber-hygiene practices, facilitate effective coordination between one another and carefully consider the framing of public communications.
Yet while short-term solutions are important during this crunch period for the election, the US Government and other countries should not allow complacency to creep into their long-term policymaking. The forthcoming CETaS Research Report will propose a range of long-term technical and policy measures to protect future democratic processes from AI-enabled interference.
The views expressed in this article are those of the author, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.
Authors
Sam Stockwell