AI could make misinformation campaigns more credible and persuasive, according to Ardi Janjeva, a research associate at the UK-based Centre for Emerging Technology and Security. “A deepfake video on its own might be easy to disprove, if fact-checkers can provide an article which states that the prime minister – or whoever – was somewhere else at the time. But if you’ve got the video plus a fake article saying that they were there, a voice recording of an interview he did at the same location, and some still images thrown in as well, you add to the picture of convincingness,” he tells Dazed.

Whether it’s deepfakes or bot campaigns, AI is already making it easier to spread misinformation – not just for foreign states or domestic political actors, but for private groups and individuals, whether motivated by the desire to cause mischief or by a serious political agenda. The Centre for Emerging Technology and Security is currently exploring how these problems might be mitigated: proposed solutions include watermarking technology, which would label content as AI-generated, and new legal frameworks for regulating its use. The recent Taiwan election is an example of what happens when countries tackle the problem head-on: while there were reportedly attempts by the CCP to influence voters, the impact was ultimately judged to be minimal, which, as Janjeva suggests, could indicate that the effort to raise awareness and mitigate the risks was successful.

- James Greig