ISIS’s Adoption of Generative AI Tools
By Carlotta Kozlowskyj | 9 September 2025
Summary
Terrorist organisations such as the Islamic State of Iraq and Syria (ISIS) are exploiting cyberspace resources, particularly generative AI (GenAI), to spread their message globally at low cost.
ISIS increasingly relies on GenAI for propaganda, recruitment, social media exploitation and cyber operations, while also expressing concerns about anonymity.
Due to legal loopholes and limited technological resources, governments worldwide face difficulties regulating GenAI.
Over the last two decades, with the creation of social media platforms such as Facebook, Instagram and Twitter, terrorist and violent extremist groups have demonstrated a capacity to adopt and exploit new developments in cyberspace. For instance, between 2019 and 2020, Facebook removed 26 million pieces of extremist content. Terrorist organisations evidently no longer rely solely on traditional activities but also exploit emerging technologies, notably GenAI, to achieve their goals. GenAI is a type of AI that can create original content, including images or videos, by learning patterns from existing data. For ISIS, cyberspace represents an opportunity to disseminate its propaganda to a large audience, owing to the increasing accessibility of AI tools in recent years.
ISIS perceives GenAI as an opportunity to project power and promote its ideology following the decline in its influence after its territorial defeats in Iraq (2017) and Syria (2019). On 17 August 2023, a pro-ISIS technology support group published a guide titled “How to protect your privacy when using [the AI content generator]”, demonstrating the organisation’s engagement with AI technology as well as its awareness of the risks of using it.
Propaganda
GenAI can be used to generate and distribute propaganda messages more efficiently. ISIS utilises GenAI to enhance the appeal and persuasiveness of its content and to mass-produce propaganda at low cost.
Following the March 2024 ISIS attack on Crocus City Hall in Moscow, a pro-ISIS user claimed to have used an AI-based automatic speech recognition system to translate an Arabic ISIS propaganda message against Russia. This was one of the first known uses of AI transcription for propaganda, enabling ISIS to reach a wider international audience without relying on human translators (see the illustrative sketch at the end of this section).
ISIS-affiliated groups such as Halummu and Al-Azaim are leveraging GenAI for visual and linguistic propaganda.
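The technical barrier to this kind of transcription and translation is low. As a hedged illustration only (the specific tool used in the incident above is not known), the sketch below shows how an openly available speech recognition model such as Whisper can translate an Arabic-language audio clip into English text in a few lines of Python; the model size and file name are assumptions.

```python
# Minimal sketch: speech-to-English translation with the open-source Whisper model.
# Assumes `pip install openai-whisper`, ffmpeg installed, and a local Arabic-language
# audio file "clip.mp3" (hypothetical placeholder).
import whisper

model = whisper.load_model("small")                        # small multilingual checkpoint
result = model.transcribe("clip.mp3", task="translate")    # translate speech directly into English text
print(result["text"])
```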
Recruitment
ISIS is starting to use GenAI to profile candidates and identify individuals susceptible to radicalisation, such as those who frequently seek out “violent content online”. GenAI enables tailored recruitment, with targeted and personalised messages sent to potential recruits based on their interests and beliefs.
The United Nations (UN) has also expressed concern that Open AI tools will be used by terrorist organisations such as ISIS in “micro-profiling and micro-targeting, generating automatic text for recruitment purposes”.
Social Media Exploitation
ISIS uses GenAI as a tool to manipulate digital platforms to spread propaganda and recruit followers.
GenAI helps ISIS bypass content moderation controls on social media platforms. In February 2024, a study by Meili Criezis, a PhD student in Justice, Law, and Criminology at American University, found that ISIS used AI-generated material to create blurred images of its flags and guns to evade automated filters on Instagram and Facebook (see the sketch at the end of this section).
ISIS’s use of GenAI complicates traditional counterterrorism monitoring, making propaganda increasingly difficult to detect.
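The evasion technique Criezis describes can be understood through perceptual hashing, the kind of image fingerprint that many automated matching systems compare against databases of known extremist imagery. The sketch below is illustrative only and does not represent any platform's actual moderation pipeline; the file name and blur radius are assumptions.

```python
# Illustrative sketch: why light blurring degrades hash-based image matching.
# Assumes `pip install pillow imagehash` and a local image file "flag.png" (hypothetical).
from PIL import Image, ImageFilter
import imagehash

original = Image.open("flag.png")
blurred = original.filter(ImageFilter.GaussianBlur(radius=3))

h_orig = imagehash.phash(original)   # perceptual hash of the known image
h_blur = imagehash.phash(blurred)    # hash of the lightly blurred variant

# Hamming distance between the two hashes: the larger it is, the less likely the
# blurred copy is to match the database entry for the original image.
print(h_orig - h_blur)
```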
Counterterrorism and the Risks of the Use of AI
ISIS displays significant apprehension about the inherent dangers of AI technology. On 15 April 2025, one of ISIS’s most prominent media arms, the Qiman Electronic Foundation (QEF), released a bilingual guide in English and Arabic titled: “A Guide to AI Tools and Their Dangers”. The QEF guide expressed deep concerns about privacy and information security associated with AI-enabled data collection.
Regulation gaps and legal loopholes allow extremist groups like ISIS to exploit GenAI and major platforms for their own gain. Governments are beginning to recognise the need to monitor and regulate AI platforms, with the European Union agreeing on an AI Act in December 2023. One study found a roughly 50% success rate in prompting AI tools to give detailed responses to requests such as how to convince an audience to donate funds to a terrorist organisation. In 2021, the United Nations Office of Counter-Terrorism released a special report reviewing the prospects AI offers for combating online terrorism. However, AI capabilities have yet to be widely applied to counter-radicalisation and prevention programmes. Western governments’ options in cyberspace are also limited by their democratic values, as counterterrorism AI must respect privacy and freedom of expression.
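As a rough sketch of the simplest layer of such automated monitoring, and of why it sits uneasily with privacy and free-expression constraints, the example below flags posts containing watchlisted phrases. The terms and posts are hypothetical, and real systems pair this with machine-learning classifiers and human review precisely because keyword matching alone over-flags legitimate reporting and discussion.

```python
# Minimal sketch of keyword-based flagging, the simplest layer of automated content monitoring.
# Watchlist terms and example posts are hypothetical placeholders.
import re

WATCHLIST = ["join the caliphate", "martyrdom operation"]   # hypothetical terms
PATTERN = re.compile("|".join(re.escape(term) for term in WATCHLIST), re.IGNORECASE)

def flag(posts):
    """Return posts that contain a watchlisted phrase, for human review."""
    return [post for post in posts if PATTERN.search(post)]

posts = [
    "News report quoting a martyrdom operation claim",   # likely a false positive
    "Recipe blog: weekend baking ideas",
]
print(flag(posts))  # keyword matching alone cannot distinguish reporting from incitement
```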
Forecast
Short-term (Now - 3 months)
It is highly likely that ISIS and its affiliated groups will continue to experiment with, and rely on, GenAI for organisational and operational purposes, including propaganda and interactive recruitment.
It is likely that ISIS will increasingly exploit smaller platforms with weaker AI regulations and moderation capacity.
It is unlikely that governments and content moderators on major platforms will prevent the circulation of GenAI propaganda by terrorist groups such as ISIS.
Long-term (>1 year)
It is likely that ISIS and other terrorist organisations will use GenAI as a disinformation tool, creating deepfakes to undermine authorities.
It is unlikely, despite the increasing risk, that ISIS will directly weaponise AI in automated attacks, because of the high costs and technical barriers involved.
It is likely that governments and international agencies will develop new GenAI counterterrorism tools, such as automated detection systems that flag extremist keywords or suspicious user behaviour.
It is likely that there will be a “GenAI arms race” between ISIS and state counterterrorism agencies, as ISIS adapts and attempts to bypass new security systems.