This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original author and source are credited.
Acknowledgement
The author is very grateful to Dr John Gallacher, Founder of Alethio, for his feedback on an earlier version of this article.
Introduction
This article examines how the use of generative AI could enhance extremist Internet memes and make them a more potent tactic for radicalisation. It starts by summarising the historical use of memes by extremist groups, before analysing emerging evidence of AI-based extremist meme usage. It finishes with a series of recommendations for protecting online communities against these threats.
Internet memes are a popular medium for disseminating ideas or messages online: in 2020, at least one million posts mentioning the word ‘meme’ were shared on Instagram every day. Most of these are harmless, designed to elicit humour from friends, family, or the wider Internet, with references ranging from popular culture and animals to politics and history.
First coined by evolutionary biologist Richard Dawkins in 1976, memes are the cultural equivalent of human genes: ‘melodies, ideas, or catchphrases[…] that leap from brain to brain through imitation’. Since the advent of social media platforms, memes have taken on a new dimension which reflects the way that digital content spreads across online communities. Specifically, three characteristics distinguish digital memes from other content:
Consistent ideas or ideologies embedded in the content;
Consistent multimodal content format (e.g. text and image composition);
Created with awareness of other content of the same kind and designed to be circulated, replicated and/or edited by other users.
While memetic content can be beneficial in providing social support and raising awareness of societal issues, it also has the potential to be exploited by extremist groups, for instance to assist in radicalisation or in normalising hateful ideologies.
Extremism is defined here according to the UK’s recently updated definition, which covers ideologies based on ‘violence, hatred or intolerance’ that seek to either undermine fundamental rights or overturn the UK’s democratic system. Evidence shows that the use of memes for extremist purposes has primarily involved either the extreme right (including white supremacists, antisemites, and anti-LGBTQ and anti-Muslim groups) or those subscribing to an extreme militant interpretation of Islam (hereafter ‘Islamists’).
Recent advancements in generative AI may also offer new opportunities for extremist groups, for instance with the use of personalised AI chatbots to improve online recruitment tactics. However, there is a lack of research specifically examining the use of generative AI to enhance extremist memes, hence the focus of this article.
The Historical Role of Memes in Radicalisation
Memes are designed to be easily transformed by online users through standardised templates, encouraging the viral spread of different iterations. However, while editing the content can be straightforward, attaining widespread circulation is not: memes require cultural knowledge (e.g. awareness of what a certain audience finds funny or relevant at a given time) to be deemed worthy of re-sharing. The ability to discern these nuances therefore plays a role in shaping the boundaries of digital group identities, including those that coalesce around extremist ideologies.
Indeed, terrorists linked to extreme-right beliefs are typically not members of formal groups, but are instead affiliated with loose digital hate communities. The use of specialised in-jokes or jargon not only helps to further isolate radicalised individuals from wider society, but can also inspire some to carry out real-world violence. Memes have been used to push anti-government and anti-law enforcement narratives across social media platforms, and prominent extreme-right terrorists such as the Christchurch shooter Brenton Tarrant were directly inspired by meme content.
To many users, extreme-right memes can be dismissed as politically incorrect or ‘edgy satire’ rather than potentially radicalising content. However, this is often the intention. While memes themselves are unlikely to turn someone into an extremist, clever integration of irony, humour and evocative messaging can desensitise those with milder political beliefs to the underlying narrative. Including recognisable symbols (e.g. Pepe the Frog) and references to pop culture acts as a further hook to lure viewers, putting them at risk of greater exposure to those already radicalised.
Islamists have been observed to leverage memes for radicalisation purposes to a lesser extent than the extreme right. However, younger generations of Salafist radicals – who support the direct implementation of Sharia law – are showing increasing signs of going against this trend. In recent examples, this has included appropriation of extreme right symbols (e.g. Wojak) to either attack progressive Muslims and LGBTQ+ communities, or support militant jihadist organisations like ISIS.
Generative AI and Extremist Memes
Since the emergence of generative AI platforms such as ChatGPT in late 2022, there has been growing concern about the range of harms which could arise from the use of these systems by extremist groups or individuals. Recent CETaS research highlighted how the increased accessibility of generative AI interfaces, alongside the enhanced speed, scale, and personalisation of content, may benefit recruitment methods and propaganda content.
So far, there is limited evidence showing the application of generative AI systems to extremist memes. Where this has been identified, however, it has come primarily from extreme-right online communities. For instance, there are repeated references on fringe online channels such as 4chan to instruction manuals guiding extreme-right users on how to use AI image generators to create ‘fun’ propaganda memes. Such engagements may suggest that these groups are prioritising new AI systems as a way to bypass pre-existing content moderation policies on social media platforms and achieve wider circulation.
Nevertheless, the content of extremist memes is also being transformed by generative AI. The types of visuals and ideological narratives have (thus far) remained consistent with traditional methods targeting minority groups – particularly Jewish and black ethnic groups – albeit with less cartoonish iterations of popular symbols like Moon Man. In a few examples, the perceived bias of certain AI systems against ethnic majorities has been weaponised through AI memes. Users on 4chan have posted memes showing how some image generators fail to produce outputs of white individuals despite explicit prompt requests. Such failures are characterised as part of a wider conspiracy to use AI as a propaganda machine against the white race.
Equally concerning is the way that generative AI systems are enabling extremists to embed prejudicial images or words inside memes, creating optical illusions. For instance, users are being encouraged to embellish memes with subliminal references to the antisemitic Happy Merchant caricature. A subtle but harmful meme of this kind on a mainstream social media platform could reach a wider audience than a more explicit version. It would also be more difficult for content moderation techniques to detect, particularly automated methods that rely on clear visual indicators.
‘Memetic Warfare’
There have also been changes to meme circulation tactics through AI. Given the speed with which generative AI systems can produce multiple outputs, extremists have sought to ‘flood the zone’ with large quantities of AI memes. Although seeking to dominate the information space with a particular message is not a new tactic, it has more often been associated with disinformation campaigns than with extremist material. And while extremists may see this spamming as an opportunity to further exacerbate social divisions, memes have traditionally required careful manual creation to determine what is likely to make the content go viral.
In contrast to these traditional methods of manual creation, some 4chan users have been experimenting with mass-produced AI memes related to the Israel-Hamas conflict. Forty-three different memes of this type were found to have reached a combined 2.2 million views on X between 5 October and 16 November 2023, as well as being shared on other platforms such as TikTok and YouTube. It is hard to discern any clear impact of these specific memes on online public opinion around the conflict. Yet the fact that they were able to spread across different digital ecosystems suggests that mainstream users perceived the underlying hateful narrative as sufficiently normalised to warrant sharing with others.
Generative AI has also made it possible to turn effectively any image into a video, a capability already being used in mainstream circles to transform popular meme templates. Despite the lack of evidence of extremist usage in this respect so far, the contemporary popularity of video-based memes on platforms such as TikTok will offer new opportunities for malign actors. Non-AI extreme-right memes in the form of video reels celebrating terrorists like the Christchurch shooter have already circulated on TikTok. Based on the extremist tactics around the Israel-Hamas conflict referenced previously, the mass dissemination of AI-generated videos could quickly overwhelm content moderation processes, particularly given the additional challenges that video formats pose for automated tools in discerning whether content is harmful.
Limitations of Generative AI for Extremist Content
The previous section highlighted different examples of generative AI transforming extremist meme content. Nevertheless, several considerations must be taken into account when evaluating these systems in relation to radicalisation pathways. An individual’s transition from initially embracing an extremist ideology to carrying out violence in the name of those beliefs is a complex, multifaceted phenomenon, influenced by several processes at the individual and societal level. Above all else, it is the combination of personalised and offline connections which makes the difference in this transition, particularly for Islamists who go on to conduct terrorist attacks.
Furthermore, generative AI systems struggle to interpret human emotions or to convey concepts such as irony and humour, both vital to any meme going viral. Memes also draw heavily on life experiences and on what is seen as popular at a given moment, whereas an AI system may understand them as nothing more than ‘a bunch of text and images’.
Separately, inauthenticity may be another reason why extremist users will be cautious about integrating generative AI into their meme strategies. The extreme right in particular relies on crude and cartoonish symbols (e.g. Pepe the Frog), not least because these provide a layer of plausible deniability as to who may have circulated the content. However, users cannot entirely control the artistic output of content generated by AI, and the realistic graphical style of many AI image generators can make the cartoon aesthetic difficult to replicate. Within the aforementioned AI instruction manuals for creating extremist memes, authors stressed the importance of combining image generation systems with human editing to maximise effectiveness. If in-group members fail to achieve the level of authenticity expected in their memes, this may damage their reputation among the rest of the community and put them off using these systems.
Taking these considerations together, the main benefits for extremists from generative AI will likely be twofold. First, the ability to automate the circulation of pre-made non-AI memes through so-called
Countering AI-generated Extremist Memes
Just as memes themselves are complex entities, the measures to protect online users against their malicious use require a multi-faceted approach, drawing on a combination of interventions across different sectors.
Firstly, there is a need to continuously monitor trends and changes in online extremist behaviours. Utilising open-source intelligence from social media platforms and known digital hate communities (e.g. 4chan) will help to inform content moderation strategies or AI system constraints that could limit the creation and spread of hateful meme content. For example, many image generators now block prompts requesting Nazi-related content, as sketched below.
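The snippet below is a minimal, hypothetical illustration of such a prompt-level constraint, written in Python. The blocklist entries and matching logic are placeholders of the author's own devising; real image generation platforms typically pair trained text classifiers with human review rather than relying on keyword lists alone.

```python
# Minimal sketch of prompt-level screening for an image generator.
# The blocklist below is a hypothetical placeholder; production systems
# pair trained classifiers with human review, not keyword lists alone.
import re

BLOCKED_TERMS = {"swastika", "third reich"}  # illustrative entries only

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    normalised = prompt.lower()
    # Word-boundary matching avoids flagging benign substrings
    # (e.g. a blocked term appearing inside an unrelated word).
    return any(
        re.search(rf"\b{re.escape(term)}\b", normalised)
        for term in BLOCKED_TERMS
    )

if __name__ == "__main__":
    print(screen_prompt("a portrait in the style of the Third Reich"))  # True: refuse
    print(screen_prompt("a portrait of a cat in a red armband"))        # False: allow
```

Filters of this kind are easy to evade with misspellings or coded language, which is precisely why the continuous monitoring described above matters: blocklists and classifiers are only as current as the intelligence feeding them.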
While content watermarking and detection techniques are important components of removing extremist content, memes pose unique challenges. Not only do machine learning systems struggle to detect concepts like irony and sarcasm embedded within memes when determining whether they might be harmful, but meme strategies also rely less on authenticity for success – meaning watermarking will make little difference. Despite these limitations, researchers should continue to improve hateful meme classification and detection systems. This could include exploring how systems can extract as much information as possible, both directly and indirectly, from a meme – such as from wider user engagements; a minimal sketch of one common approach follows.
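The sketch below fuses image and overlay-text representations so that a classifier can flag memes whose image and text are individually benign but hateful in combination, in the spirit of published baselines for hateful meme detection. It is a hedged sketch under stated assumptions: the choice of CLIP as the encoder is the author's illustration, and the linear head is untrained, so this demonstrates the architecture rather than a working detector.

```python
# Sketch of a multimodal hateful-meme classifier: CLIP image and text
# embeddings are concatenated and passed to a linear head. The head is
# untrained here, so this illustrates the architecture only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # projection dimension 512
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)
# In practice this head would be trained on labelled meme datasets.
classifier = torch.nn.Linear(512 + 512, 2)

def score_meme(image: Image.Image, overlay_text: str) -> torch.Tensor:
    """Return (benign, hateful) logits for an image/caption pair."""
    inputs = processor(text=[overlay_text], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
        )
    # Fusing both modalities lets the head flag memes where neither the
    # image nor the text is harmful on its own, only their combination.
    return classifier(torch.cat([img_emb, txt_emb], dim=-1))
```

Even so, such classifiers inherit the limitations described above: irony, in-group references and subliminal embeddings leave few clear visual indicators, which is why signals beyond the meme itself – such as wider user engagements – are a promising additional source of features.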
Social media platforms should also work more closely with AI companies, relevant government agencies and other stakeholders on ‘red team’ exercises, taking inspiration from NATO’s recent initiative. Simulating scenarios in which extremist memes spread virally across a particular digital ecosystem would enable vulnerabilities to be identified in advance and encourage the development or refinement of countermeasures.
Finally, it must be stressed that technical systems form only part of the solution. In addition, AI literacy programmes would ensure that citizens are made aware of the methods extremists use through Internet memes to normalise their ideologies. The UK could also take lessons from other countries in promoting more accessible fact-checking mechanisms. For instance, Taiwan’s FactCheck Center seeks to advance digital literacy through a chatbot integrated into Taiwan’s most popular messaging app (LINE), allowing users to submit questionable content for verification and receive responses backed by a list of source material.
Conclusion
There are worrying early signals that extremist actors are tapping into generative AI systems to enhance their ‘memetic warfare’ campaigns. From embedding hateful imagery within memes to overcome content moderation restrictions, to flooding the digital information environment with large volumes of AI-generated memes, there is uncertainty over what long-term impact generative AI will have in this space. The recent viral spread of antisemitic AI memes across multiple social media platforms suggests these systems are already having a dangerous effect in further normalising underlying extremist narratives. However, offline influences still play a vital role in radicalisation processes. Perhaps more significantly, memes are a distinctive type of content, requiring an understanding of cultural context, human emotion and group identities. Given these factors, generative AI will likely pose just as many challenges as opportunities for extremist usage.
The views expressed in this article are those of the author, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.
Citation information
Sam Stockwell, "Propaganda by Meme: The impact of generative AI on extremist memes," CETaS Expert Analysis (May 2024).