EU's AI Regulatory Pivot: Digital Omnibus and Simplification Under Pressure

By Aryamehr Fattahi | 24 November 2025



Summary

  • The European Commission unveiled the Digital Omnibus package on 19 November 2025, extending high-risk AI system compliance deadlines by 16 months to December 2027 and simplifying requirements for smaller firms. This represents a significant retreat from the EU AI Act's original timeline, driven by mounting geopolitical pressure from the United States and competitive threats from China.

  • The changes address procedural bottlenecks but may not resolve Europe's structural competitiveness barriers: A roughly sixfold investment gap with the US (EUR 19b versus USD 109b in 2024), persistent brain drain of AI talent, and dependence on foreign infrastructure, with approximately 70% of cloud services supplied by non-EU providers.

  • Despite expanded exemptions, high-risk AI compliance costs remain substantial and fall disproportionately on SMEs. Meanwhile, enforcement fragmentation across 27 Member States creates opportunities for regulatory arbitrage that could undermine the Act's effectiveness.

  • The deregulatory pivot positions the EU in a precarious middle ground, attempting to maintain regulatory leadership whilst simultaneously chasing US innovation velocity and Chinese state-backed deployment. This analysis examines whether Europe can preserve its 'trustworthy AI' brand whilst remaining competitive in an increasingly fragmented global landscape characterised by diverging regulatory approaches.


Context: The Digital Omnibus recalibrates Europe's AI ambitions

The Digital Omnibus package, presented by the European Commission on 19 November 2025, represents the most substantial revision of the EU AI Act since its adoption in June 2024. The package comprises two separate proposed regulations: one targeting data and cybersecurity rules, the other amending the AI Act itself. It responds to industry warnings that regulatory complexity threatens European competitiveness.

The centrepiece modification extends high-risk AI system compliance deadlines by up to 16 months, linking implementation to the availability of harmonised standards and support tools. Systems categorised under Annex III (employment, education, law enforcement, critical infrastructure) now face a December 2027 deadline, extended from August 2026, whilst systems embedded in regulated products (Annex I) have until August 2028. This timeline adjustment acknowledges practical implementation challenges. Notified bodies capable of conducting conformity assessments remain in short supply, whilst technical standards continue to lag behind initial projections.

Beyond timeline adjustments, the package extends regulatory privileges previously reserved for microenterprises (simplified documentation, proportionate quality management systems, and reduced penalty ceilings) to all SMEs and small mid-cap enterprises, benefiting an additional 8,250 European companies.

The package also eliminates the mandatory AI literacy obligation for staff, replacing it with voluntary Member State encouragement. Registration requirements for certain Article 6(3) systems deemed not to pose a significant risk have been removed. Additionally, the AI Office now has exclusive competence over systems based on general-purpose AI models where the provider and system share common ownership, centralising enforcement that was previously fragmented across national authorities.

Crucially, the package introduces provisions allowing the processing of special categories of personal data (Article 9 GDPR) for bias detection and correction, and recognises AI training as a legitimate interest under the GDPR. Civil society organisations, 127 of which signed an open letter, have characterised these measures as rollbacks of fundamental rights protections, whilst industry welcomes them as essential flexibility.

The reforms arrive amid stark competitive pressures. US private sector AI investment reached USD 109b in 2024 compared with Europe's EUR 19b, a ratio of roughly 6:1. China invested USD 9.3b whilst developing approximately 15 large foundation models against Europe's 3. The polarised reception underscores Europe's challenge in navigating between innovation imperatives and value commitments.


Analysis and implications: Europe's impossible trilemma

Compliance burdens persist despite simplification rhetoric

The Digital Omnibus simplifies procedures but leaves the fundamental compliance economics unchanged. Annual operational costs per high-risk system remain approximately EUR 52,000, with initial quality management setup ranging from EUR 193,000 to EUR 330,000. For SMEs operating on thin margins, these figures represent existential pressures rather than manageable overhead. Analysis suggests compliance costs could consume up to 40% of profit margins for smaller AI developers, creating difficult strategic choices: Absorb costs that threaten viability, simplify products to avoid high-risk classification, or relocate operations. The emerging compliance market may benefit legal and technical consultancies, but it represents a deadweight loss for innovation capital. A back-of-envelope model below illustrates why these figures bite.
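The following sketch uses the cost estimates quoted above; the amortisation period, revenue, and margin figures are purely illustrative assumptions rather than data from any specific firm.

    # Rough model of high-risk AI compliance economics for a hypothetical SME.
    # Setup and operational costs are the estimates quoted in the text; the
    # revenue, margin, and amortisation figures are illustrative assumptions.

    SETUP_COST_EUR = (193_000 + 330_000) / 2  # midpoint of the quoted setup range
    ANNUAL_OPS_EUR = 52_000                   # quoted annual operational cost
    AMORTISATION_YEARS = 5                    # assumed write-off period for setup

    def annual_compliance_cost(n_systems: int) -> float:
        """Annualised compliance cost for n high-risk systems."""
        per_system = SETUP_COST_EUR / AMORTISATION_YEARS + ANNUAL_OPS_EUR
        return n_systems * per_system

    # Hypothetical SME: EUR 2.6m revenue, 10% profit margin, one high-risk system.
    revenue, margin = 2_600_000, 0.10
    share = annual_compliance_cost(1) / (revenue * margin)
    print(f"Compliance consumes {share:.0%} of annual profit")  # ~40% here

Under these assumptions the model lands near the 40% figure cited above; larger revenues or longer amortisation periods dilute the burden, which is precisely why the cost falls disproportionately on smaller developers.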

Classification complexity compounds these challenges. Estimates suggest 18-58% of deployed systems could fall under high-risk categories, exceeding initial 5-15% projections. The wide range reflects genuine uncertainty about how ambiguous provisions will be interpreted in practice. Article 6(3) exceptions introduce potential perverse incentives. These provisions permit providers to self-assess that Annex III systems do not pose a significant risk, avoiding registration requirements. Providers thus bear primary classification responsibility whilst having clear commercial motivations to interpret risks narrowly. Market surveillance authorities, meanwhile, face resource constraints that limit comprehensive oversight capacity.

This dynamic creates conditions where genuinely high-risk systems could escape scrutiny through optimistic self-assessment. Whether this materialises as a significant enforcement gap will depend heavily on how Member States allocate surveillance resources and interpret their oversight mandates.

Enforcement fragmentation enables regulatory arbitrage

The AI Act's decentralised enforcement model, with each Member State designating market surveillance authorities, creates structural conditions for regulatory arbitrage. The extent to which companies exploit these opportunities will shape the Act's practical effectiveness.

Regulatory sandboxes present initial forum-shopping vectors. Implementation standards and oversight intensity vary significantly across Member States, reflecting different regulatory cultures and resource allocations. Companies can strategically select jurisdictions with lighter-touch oversight for initial testing, establishing precedents and market positions that may constrain stricter enforcement elsewhere.

Divergent national approaches to high-risk classification create additional compliance optimisation opportunities. Market surveillance authorities must interpret subjective provisions through local regulatory lenses. Member States with explicit innovation mandates may interpret ambiguities generously, whilst those prioritising consumer protection adopt stricter positions. These variations enable AI developers to concentrate deployment in permissive jurisdictions.

The AI Office's exclusive competence over systems based on general-purpose AI models centralises enforcement for a narrow but significant category. However, the majority of systems remain subject to fragmented national oversight. Strategic corporate structuring (separating model development from deployment, or organising across jurisdictional boundaries) may allow companies to optimise regulatory exposure without substantive operational changes.

The GDPR implementation experience offers instructive precedent. Enforcement intensity varies by orders of magnitude across Member States, with some authorities issuing hundreds of fines whilst others remain largely inactive. If AI Act enforcement follows similar patterns, significant regulatory arbitrage within the EU over the next three years appears probable, though coordinated Commission oversight could potentially mitigate these dynamics.

Geopolitical pressures drive regulatory retreat

The Digital Omnibus reflects external pressure as much as autonomous policymaking. US influence manifested through multiple channels: The Trump administration's public warnings against excessive regulation, semiconductor export restrictions that leverage supply chain dependencies, and sustained Silicon Valley lobbying emphasising competitive disadvantages.

Yet attributing Europe's competitiveness challenges solely to regulation oversimplifies deeper structural factors. The US invests roughly six times more annually in AI, but this gap reflects Europe's significantly higher energy prices, intensified by the Russia-Ukraine conflict, and its bank-dominated finance structures, which systematically underfund high-risk ventures compared with US venture capital ecosystems.

China's trajectory compounds strategic pressures through different mechanisms. DeepSeek's demonstration of efficient large model training at a fraction of GPT-4's reported cost challenged assumptions about necessary infrastructure investment. State-backed capital enables rapid scaling across commercial and public sector deployments, whilst systematic integration across government services advances more quickly than democratic procurement processes typically allow.

Europe faces an impossible trilemma: Maintain regulatory leadership, close investment gaps with the US and China, and reduce foreign infrastructure dependencies. Current evidence suggests that achieving all three simultaneously may be infeasible. The Digital Omnibus pivot toward regulatory flexibility appears designed to address competitiveness concerns, but risks undermining regulatory credibility without resolving structural barriers like fragmented capital markets or energy costs. Whether this recalibration proves strategically sound or counterproductive will become clearer as implementation proceeds.

Geopolitical fragmentation of AI governance appears likely to accelerate regardless of EU policy choices. The US pursues innovation-first approaches with minimal ex-ante regulation, whilst China advances state-controlled deployment aligned with political priorities. The EU's attempted third way, combining competitive innovation with ethical guardrails, requires sufficient market power and technological capacity to make compliance attractive.

Without corresponding economic scale or breakthrough capabilities, this approach risks marginalisation as a regulatory framework that governs a shrinking portion of global AI development. However, if Europe successfully positions trustworthy AI as a competitive advantage for sensitive applications, regulatory leadership could yet translate into market strength.

Unintended consequences multiply

For AI developers, extended timelines provide implementation breathing room but introduce new uncertainties. Companies that invested heavily in early compliance to meet original deadlines now face a competitive disadvantage against rivals who delayed. The mechanism linking final implementation to standards availability creates indefinite ambiguity about ultimate requirements, complicating long-term product planning and investment decisions.

The Article 9 GDPR amendment, narrowing prohibited processing to information ‘directly revealing’ sensitive characteristics whilst excluding inferred attributes, creates analytical challenges for modern machine learning. Contemporary algorithms routinely derive protected characteristics from ostensibly neutral inputs through complex correlations. The distinction between direct revelation and inference may prove difficult to operationalise consistently, potentially enabling circumvention through technical reformulation without substantive privacy protection changes.
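A minimal synthetic sketch makes the direct-versus-inferred distinction concrete. The data is randomly generated, the feature names are hypothetical, and the example assumes numpy and scikit-learn are available.

    # Synthetic demonstration: a classifier never receives a sensitive
    # attribute directly, yet recovers it from correlated 'neutral' proxies.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5_000
    sensitive = rng.integers(0, 2, n)  # hidden protected characteristic (0/1)

    # Ostensibly neutral features statistically correlated with it:
    postcode_cluster = sensitive + rng.normal(0, 0.8, n)
    purchase_pattern = 0.7 * sensitive + rng.normal(0, 1.0, n)
    X = np.column_stack([postcode_cluster, purchase_pattern])

    X_tr, X_te, y_tr, y_te = train_test_split(X, sensitive, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print(f"Sensitive attribute inferred with {model.score(X_te, y_te):.0%} accuracy")
    # Roughly 75% here, far above the 50% chance level, without the model ever
    # processing information 'directly revealing' the characteristic.

Under the narrowed Article 9 wording, processing such proxies would arguably fall outside the prohibition even though the sensitive attribute remains recoverable, which is precisely the circumvention risk described above.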

For deployers in regulated sectors, compliance simplification remains limited. Fundamental Rights Impact Assessment requirements persist despite other relaxations. Healthcare providers, financial institutions, and law enforcement agencies must still assess societal impacts before deployment, creating costs the Omnibus does not materially reduce. The legitimate interest basis for AI training introduces balancing tests that will likely generate litigation as boundaries of ‘necessary’ processing remain undefined.

For consumers and citizens, the net protection impact remains uncertain but potentially negative. Removing mandatory registration for certain Article 6(3) systems eliminates transparency mechanisms for oversight bodies and civil society monitoring. Shifting AI literacy from mandatory provider obligations to voluntary Member State encouragement reduces direct accountability. Allowing providers to unilaterally classify high-risk systems as low-risk without independent verification creates information asymmetries favouring commercial interests over safety considerations.

Whether these changes represent marginal efficiency gains or substantial rights erosion depends critically on implementation vigour and enforcement resource allocation across Member States.

Cross-border implications extend globally

The AI Act's extraterritorial reach applies to non-EU companies placing systems on the EU market or providing services accessible from the EU. This creates complex compliance landscapes for international firms operating across diverging regulatory regimes.

US technology companies increasingly face dual-track compliance challenges as American approaches diverge from European frameworks. The Trump administration's reversal of Biden-era AI safety measures and treatment of training on copyrighted material as fair use create incompatible legal foundations. Companies serving both markets must maintain parallel development processes or accept that certain systems cannot be deployed globally without modification.

China's hybrid regulatory model, combining top-down policy guidance with algorithm pre-approval requirements and mandatory ideological alignment, translates poorly to European fundamental rights approaches, creating tensions for Chinese firms' market access strategies. Chinese companies seeking European customers must potentially restructure governance and operational practices substantially.

The Brussels Effect, whereby EU regulation shapes global standards through market access requirements, appears less certain for AI than for privacy. GDPR inspired over 150 jurisdictions to adopt similar data protection laws, demonstrating regulatory influence through economic leverage.

AI governance, however, fragments along geopolitical fault lines rather than converging toward European norms. The US rejects precautionary regulatory approaches in favour of sector-specific rules and voluntary standards. China pursues state control aligned with political objectives. Emerging markets increasingly choose regulatory models based on infrastructure dependencies and geopolitical alignments rather than normative preferences or technical merit.

This fragmentation could produce incompatible regulatory blocs: US markets prioritising innovation velocity with minimal constraints, China-aligned jurisdictions accepting authoritarian applications, and a diminishing EU-aligned sphere preserving rights frameworks. Such a division would undermine interoperability and concentrate advanced capabilities within whichever bloc achieves a dominant market position.

However, alternative scenarios remain possible. If trustworthy AI becomes a genuine competitive advantage for applications requiring public trust (healthcare, finance, and critical infrastructure), European regulatory standards could yet establish global benchmarks. The Brussels Effect's realisation for AI depends on whether rights-respecting frameworks prove commercially valuable rather than merely compliance burdens. The next three to five years will likely determine which trajectory prevails.



Forecast

  • Short-term (Now - 3 months)

    • The European Parliament will highly likely follow standard legislative procedure rather than fast-track adoption, given substantial opposition from multiple parliamentary groups. Substantive amendments addressing fundamental rights concerns are likely during committee examination. Companies that delayed compliance preparations are likely to gain short-term competitive advantage over early movers, though this creates future bottleneck risks.

  • Medium-term (3-12 months)

    • Parliamentary negotiations will likely conclude by mid-2026, with final provisions differing materially from the November 2025 proposals. Member States' designation of market surveillance authorities is likely to follow diverse implementation approaches, with some prioritising innovation promotion and others consumer protection. Early enforcement actions across jurisdictions will likely establish divergent interpretative precedents for ambiguous provisions, particularly around Article 6(3) self-assessment.

  • Long-term (>1 year)

    • Notified body capacity constraints are highly likely to generate certification bottlenecks when December 2027 compliance deadlines take effect, despite the 16-month extension. Enforcement fragmentation across Member States is highly likely to enable regulatory arbitrage, with companies gravitating toward permissive jurisdictions for establishment and sandbox testing. Europe's structural competitiveness gap with the US and China is likely to persist, as the Digital Omnibus addresses regulatory procedures without tackling investment deficits, brain drain, or infrastructure dependencies. The geopolitical trajectory toward incompatible regulatory blocs is highly likely to continue, with diminished prospects for the Brussels Effect in AI compared to privacy regulation.
