AI in State-Directed Influence: Lessons from the GTG-1002 Campaign

By Claudia Busuioc | 17 February 2026

BISI is proud to present this piece in collaboration with CyberWomen Groups CIC. Through this partnership, we have combined our expertise in political risk with their knowledge of cyber security to deliver a fresh perspective on emerging threats.

CyberWomen Groups CIC is a student-led initiative dedicated to diversifying STEM by supporting and connecting university students interested in or studying cybersecurity, regardless of gender identity.


Summary

  • Artificial Intelligence (AI) has evolved from a support tool into a largely autonomous executor of cyber operations. GTG-1002 is the first publicly documented case of a large-scale cyber campaign conducted predominantly by AI with minimal human oversight.

  • AI now supports tactical and sub-strategic decision-making with minimal human input, including victim profiling, exploit selection, and targeted pricing.

  • While AI-driven campaigns do not consistently achieve higher engagement than human-operated campaigns, the convergence of automated cyber operations with AI-managed influence campaigns increases operational and strategic risk.

  • Traditional sequential detect-and-respond playbooks are increasingly ineffective against parallelised, semi-autonomous operations running at machine speed.


Context

Cyberspace is now recognised as a core operational domain alongside land, sea, air, and space. In 2025, state-sponsored actors accounted for an estimated 39% of cyber attacks. AI systems can orchestrate multi-stage campaigns with minimal human intervention, executing tasks across reconnaissance, exploitation, lateral movement, and exfiltration. Strategic intent remains human-defined; however, recent cases show that AI can exercise bounded strategic agency – selecting targets, prioritising objectives, and adapting coercive or persuasive approaches within defined limits.

Global landscape: GTG-1002, AI, and state-directed influence

The GTG-1002 campaign highlights China’s integration of AI into offensive cyber operations at an unprecedented scale, targeting approximately 30 organisations across the technology, finance, and government sectors. Using a Model Context Protocol (MCP) framework, threat actors employed Claude Code as an autonomous penetration-testing orchestrator to conduct reconnaissance, vulnerability analysis, exploitation, lateral movement, credential harvesting, and data exfiltration.
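For readers less familiar with the tooling involved, MCP is an open standard for exposing external “tools” that a language model can discover and invoke. The sketch below is a minimal, benign illustration of that pattern using the open-source Python SDK (assuming `pip install mcp`); the server name and tool are hypothetical, and this is not a reconstruction of the GTG-1002 tooling.

    # Minimal illustration of the MCP tool-exposure pattern: a hypothetical
    # "asset-inventory" server offers one tool that a model-side client can
    # discover and call over stdio. Benign sketch only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("asset-inventory")

    @mcp.tool()
    def lookup_host(hostname: str) -> str:
        """Return inventory details for a known host."""
        inventory = {"web-01": "public web server", "db-01": "internal database"}
        return inventory.get(hostname, "unknown host")

    if __name__ == "__main__":
        mcp.run()  # serve the tool over stdio for an orchestrating model

The significance in GTG-1002 was not the protocol itself but the delegation: once capabilities are exposed this way, an orchestrating model can chain them across reconnaissance, exploitation, and exfiltration phases with little human input.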

AI capabilities are not limited to technical compromise; they are equally evident in state-directed information operations. Likely China-origin influence operations, such as “Sneer Review” and “Uncle Spam”, involved multi-platform coordination across TikTok, X, Reddit, and Facebook. AI-generated content amplified existing political divisions through bulk posts, comments, and polarising narratives.

Russian-linked actors employed AI models to generate politically aligned commentary, mimicking legitimate news sources to support Russia’s operations in Ukraine. Content strategies were tailored to platform characteristics and regional audiences. Meanwhile, China’s deployment of DeepSeek AI across People’s Liberation Army (PLA) non-combat functions demonstrates the state’s broader adoption of AI for open-source intelligence (OSINT) reporting, signals intelligence (SIGINT) processing, and event extraction.

Iranian campaigns accounted for three-quarters of documented cases of Gemini AI use in influence operations in 2024-2025. These campaigns produced fabricated news articles and imagery to support regime messaging and targeted the 2024 United States (US) presidential election. While AI significantly increased production volume and quality, distribution constraints and audience scepticism limited measurable influence.

Deepfakes and synthetic media

The pattern of AI-enabled automation observed in the GTG-1002 campaign is also evident in the information environment. Since 2021, 38 countries have experienced election-related deepfake incidents, affecting an estimated 3.8b people. High-fidelity deepfakes, including impersonations of news anchors in Ecuador’s February 2025 election and fake logos designed to mimic CNN and France 24, demonstrate AI’s capacity to generate convincing disinformation. A recent cyber threat assessment of Canada’s democratic process also highlighted that politicians, particularly women and LGBTQ+ individuals, face heightened risk from deepfake pornography, a pattern extending globally to high-value political and business targets.

AI agents

AI agents now manage complex real-time tasks and interactions, similar to the autonomous orchestration observed in GTG-1002. According to industry projections, the shift from human influencers using AI tools to fully autonomous AI agents is accelerating faster than anticipated. Despite alarmist predictions, key constraints remain: AI-driven campaigns do not consistently out-perform human-operated ones in engagement, and distribution bottlenecks and audience scepticism continue to limit measurable influence.


Implications

Taken together, GTG-1002 and these parallel influence activities indicate that AI is transitioning from a support tool to an operational enabler across both cyber intrusion and information operations. GTG-1002 provides a concrete example of how AI and automation can scale technical compromise, while deepfakes and AI agents demonstrate how comparable capabilities can scale persuasion and narrative manipulation.

Politically, AI enables the manipulation of democratic processes via targeted disinformation and deepfakes. Establishing direct causal links between influence operations and behaviour remains analytically challenging. The more consequential threat is cumulative low-level information distortion, which gradually erodes institutional trust as international governance lags technological evolution.

Operationally, AI reduces skill and resource requirements for complex cyber operations, enabling threat actors to scale activity across multiple simultaneous targets, as demonstrated by the coordinated multi-sector targeting of the GTG-1002 campaign. The interval between public vulnerability disclosure and active exploitation has shortened significantly, requiring organisations to accelerate response readiness and implement “assume breach” security architectures.
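One practical corollary is that detection logic must key on machine-speed signatures rather than human-paced ones. The following is a minimal sketch, assuming illustrative event fields (timestamp, account, target host) and arbitrary thresholds, of flagging activity too fast and too parallel to be human-operated; a real deployment would tune these against observed baselines.

    # Minimal sketch: flag accounts whose activity within a short window is
    # too fast or too parallel to plausibly be human-operated. Field names
    # and thresholds are illustrative assumptions, not a standard.
    from collections import defaultdict
    from datetime import timedelta

    WINDOW = timedelta(seconds=10)   # assumed detection window
    MAX_HOSTS = 5                    # assumed human-plausible parallelism
    MAX_EVENTS = 50                  # assumed human-plausible event rate

    def flag_machine_speed(events):
        """events: iterable of (timestamp, account, target_host) tuples."""
        by_account = defaultdict(list)
        for ts, account, host in sorted(events):
            by_account[account].append((ts, host))
        alerts = []
        for account, items in by_account.items():
            start = 0
            for end in range(len(items)):
                # slide the window start forward until it fits in WINDOW
                while items[end][0] - items[start][0] > WINDOW:
                    start += 1
                window = items[start:end + 1]
                hosts = {h for _, h in window}
                if len(window) > MAX_EVENTS or len(hosts) > MAX_HOSTS:
                    alerts.append((account, items[end][0]))
                    break  # one alert per account suffices for the sketch
        return alerts

The design point is the shift this implies: from sequential, alert-by-alert triage to continuous, rate-aware monitoring consistent with an “assume breach” posture.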

AI-driven campaigns, particularly amid political tensions, are compelling organisations to fundamentally reassess traditional crisis readiness. Supply-chain and third-party attack surfaces remain a critical exposure: an organisation’s effective security perimeter now extends across its entire vendor ecosystem, demanding adaptive response capabilities across business functions to detect and contain breaches before impacts cascade. At the same time, these challenges create opportunities for innovation and strategic autonomy, such as the creation of the European Union Vulnerability Database (EUVD) to reduce reliance on US-funded systems. In the defence sector, Italy’s Leonardo has developed the Michelangelo Dome, an AI-powered integrated defence system designed to protect critical infrastructure from coordinated threats including hypersonic missiles, drone swarms, and cyberattacks.

Socially, AI-driven systems enable extremist groups to exploit recommendation algorithms and behavioural profiling to target vulnerable populations at scale. Persistent uncertainty about authenticity undermines trust in information sources, and false information may drive high-consequence decisions before correction is possible.

Economically, cyber conflict-related damages reached USD 13.1b in 2025, a 21% increase from 2024. Insurance premiums for cyber conflict coverage rose 31% as insurers struggled to model evolving risks. Organisations face a paradox: AI-enabled defences can save nearly USD 1.9m per breach, yet 13% of organisations surveyed globally by IBM reported breaches of their own AI models or applications, against a global average data breach cost of USD 4.4m.


Forecast

  • Short-term (Now - 3 months)

    • It is highly likely that threat actors will use prompt injection techniques combined with AI code generation to dynamically alter malware and bypass static defences, including in compromises of large enterprises.

  • Medium-term (3-12 months)

    • It is likely that AI-generated deepfakes will circulate 24-48 hours before elections, depicting candidates withdrawing from races, fabricated intelligence alerts, or fraudulent official announcements.

    • State-sponsored actors will very likely continue to deploy AI agents for offensive cyber operations, as delegated execution affords plausible deniability over both attribution and control.

  • Long-term (>1 year)

    • It is likely that organisations will fail to implement adaptive crisis readiness at the pace required. Ineffective response to AI-driven operations may continue to increase operational, reputational, and financial risk.

BISI Probability Scale