Deepfake Regulation Accelerates After Grok Controversy
By Martyna Chmura | 16 February 2026
Summary
The Grok controversy has triggered coordinated regulatory scrutiny across the European Union (EU), United Kingdom (UK), and multiple other jurisdictions, signalling growing global focus on generative deepfakes as a platform governance and child-safety issue.
Fragmented enforcement and rapidly improving deepfake capabilities are heightening security risks to vulnerable groups and undermining democratic resilience, while creating uneven legal standards that platforms can exploit through regulatory arbitrage.
Deepfake regulation is likely to shift from ad hoc enforcement toward formal transparency, detection, and accountability requirements, with longer-term movement toward shared liability frameworks and election-security integration.
Context
On 26 January 2026, the European Commission opened formal proceedings against X under the Digital Services Act (DSA), expanding an existing investigation into the platform’s recommender systems to cover the deployment of its Grok AI chatbot. The Commission will assess whether X carried out required systemic‑risk assessments before integrating Grok, and whether it adequately mitigated the spread of manipulated sexually explicit content, including child‑like deepfakes. The move followed research by the Center for Countering Digital Hate, which found that Grok generated more than three million sexualised images in under two weeks, including over 23,000 that appeared to depict children, and separate analysis by The New York Times estimating that Grok posted roughly 4.4 million images in nine days, at least 1.8 million of which were likely sexualised images of women.
Similar concerns have prompted a wave of national actions. In the United Kingdom, Ofcom launched a formal investigation under the Online Safety Act on 12 January 2026, examining whether X complied with duties to prevent the creation and dissemination of illegal content such as child sexual abuse material and non‑consensual intimate imagery when it rolled out Grok. Australia’s eSafety Commissioner has requested information from X on safeguards against Grok’s misuse, building on earlier transparency notices about child‑exploitation material. In Canada, the federal Privacy Commissioner has expanded an ongoing probe into X to determine whether Grok was used to create explicit deepfakes without valid consent and to assess compliance with federal private‑sector privacy law.

In India, the Ministry of Electronics and Information Technology issued a warning after flagging serious failures in curbing Grok‑generated explicit images; X blocked thousands of items and hundreds of accounts, but officials signalled that safe‑harbour protections could be at risk if policy gaps persist. Malaysia and Indonesia temporarily blocked access to Grok, lifting or reconsidering restrictions only after X committed to additional safeguards and registration obligations, while Brazilian authorities have given xAI 30 days to halt fake sexualised images or face administrative and legal consequences.
Implications
The Grok incident highlights how generative AI systems can stress‑test platform liability and child‑protection regimes that were not designed with large‑scale, cross‑border image generation in mind. Regulators are using whatever levers their domestic laws provide – DSA systemic‑risk provisions in the EU, online‑safety duties in the UK, privacy enforcement in Canada, intermediary‑liability rules in India, and consumer‑protection and data‑protection powers in Brazil – but these tools differ in scope, thresholds and sanctions. For X and other platforms, this means parallel but unaligned investigations that complicate compliance planning and risk management, as each jurisdiction demands distinct impact assessments, reporting and technical safeguards. The lack of shared procedures or joint case handling allows platforms to calibrate responses to the most demanding regulators while treating others as lower‑priority, increasing the risk of regulatory arbitrage.
These dynamics have wider geopolitical implications for AI governance. Democracies are converging on the view that non‑consensual deepfakes and AI‑generated child sexual abuse material are intolerable harms that justify criminalisation and strict platform duties, but they are doing so at different speeds and via divergent legal architectures. Some governments are moving to criminalise deepfake creation itself in certain contexts, while others focus on distribution or platform‑level responsibilities. This asymmetry can undermine deterrence: perpetrators may exploit jurisdictional gaps, and victims’ access to remedies depends heavily on where they reside and where a platform is headquartered. Over time, uneven enforcement is likely to widen regulatory divergence between jurisdictions, complicating international cooperation on deepfake investigations and increasing the risk that states adopt incompatible standards for online content governance.
The security risks extend beyond platform compliance. The rapid improvement and accessibility of generative AI are lowering the cost and skill required to produce convincing synthetic media, including impersonation content and non-consensual sexual imagery. Europol has assessed that generative AI is likely to increase the scale and efficiency of criminal activity online, including deception, fraud, and harassment, due to the speed and volume at which synthetic content can be generated. This trend disproportionately affects vulnerable groups such as children and women, and aligns with warnings from the United Nations Children’s Fund that AI-enabled harms can increase exploitation risks and weaken child protection online.
In political contexts, deepfakes are an increasing concern due to their potential to spread disinformation during elections and crises. The World Economic Forum identified AI-driven misinformation and disinformation as a leading short-term global risk, reflecting growing concern that synthetic media will undermine trust in institutions and information integrity. Moreover, research from the Alan Turing Institute shows how deepfake scams and poisoned chatbot outputs can be used to mislead users, manipulate beliefs, and undermine trust, including in high-stakes environments where automated influence can shape perceptions and behaviours. With cross-border enforcement limited and detection inconsistent, deepfakes are increasingly likely to be used for intimidation, reputational attacks, and crisis disinformation.
Forecast
Short-term (0-3 months)
Coordinating bodies and like‑minded regulators are highly likely to formalise soft standards around deepfake labelling, watermarking, and age assurance, without yet achieving binding multilateral rules.
Major platforms are likely to expand their use of watermarking and provenance signals for AI‑generated images and videos, especially where such content intersects with election‑integrity or child‑safety rules.
Medium-term (3-12 months)
It is likely that at least a few jurisdictions will make deepfake detection and periodic prevalence reporting mandatory for large platforms, and will require certain high‑risk AI tools to maintain logs that can be shared with law enforcement under defined conditions.
A realistic possibility is that emerging “fingerprint” or provenance standards become procurement conditions for governments and media organisations, creating a de facto global baseline even without a treaty.
Long-term (>1 year)
It is likely that regulation will shift from platform-only duties toward shared accountability frameworks that cover developers and deployers of high-risk generative tools, including requirements for safety testing, access controls, and traceability.
It is likely that deepfake risks will become a standing element of election-security and crisis-management planning across democratic states, increasing pressure for cross-border cooperation on attribution, incident response, and evidence standards.