Artificial Intelligence Power Dynamics: Do 'We' Control AI, or Does 'It' Control Us?
By Aryamehr Fattahi | 11 August 2025
Summary
The proliferation of Large Language Models (LLMs), including GPT, Claude, and Gemini, has intensified debates about AI governance and human agency.
Despite widespread consensus amongst experts that humans must retain power and control over AI systems, evidence suggests a growing dependency that undermines this doctrine.
This analysis examines 3 key areas where AI exerts influence over human behaviour: emotional dependency, capability substitution, and unchecked commercial expansion.
The power dynamic will likely continue favouring AI unless humans reassert control through emotional intelligence, unpredictability, and authentic engagement - Areas where AI remains fundamentally limited.
Where AI Exerts Power and Control
Contemporary human-AI interactions reveal 3 critical areas where control has shifted towards artificial systems, fundamentally altering decision-making processes and personal agency.
Emotional Dependency: AI and Human Attachment
Emotional dependency on AI has reached concerning levels, with mental health applications demonstrating the example of human vulnerability. For example, YouGov polling has found that 31% of 18-24-year-olds feel ‘comfortable’ discussing their mental health concerns with AI chatbots in the United Kingdom (UK); meanwhile, over one-third of adults favour sharing problems with AI over loved ones. This dependency creates a false sense of comfort through AI's tendency to provide agreeable responses rather than genuine therapeutic intervention. Not to mention that this phenomenon disconnects individuals from authentic human relationships whilst exposing them to systems with unresolved ethical frameworks, biased training data, and a limited understanding of complex emotional contexts.
Capability Substitution: When AI Replaces Human Skills
Capability substitution represents another avenue through which AI assumes power and control over human functions. Academic integrity is compromised as some scientists have embedded AI prompts in their research papers as a means to secure favourable peer reviews. Moreover, professionals increasingly rely on AI to perform tasks beyond their competence or speed, resulting in a ‘fake it till you make it’ bubble in which individuals appear capable in areas such as coding or analysis, but remain fundamentally uninformed. More importantly, the deteriorating effectiveness of AI detection systems compounds this issue, enabling widespread academic and professional dishonesty. As a result, expectations have also risen, where people are now expected to complete work faster, creating more room for errors, especially when AI systems may be unaware of certain information or events that may be important.
Commercial Expansion: Market Forces and Rushed Deployments
Commercial expansion of AI systems is also occurring without adequate safety measures, driven by competitive pressures and profit motives. The UK Government has found that AI developers often release models before fully understanding their risks or functions, prioritising market positioning over safety considerations. This rushed approach has generated environmental costs through massive energy consumption, while creating opportunities for malicious actors to exploit vulnerabilities for misinformation campaigns and the creation of harmful content. Furthermore, the mainstream media's AI obsession has allowed technology companies to dominate narrative frameworks, minimising discussion of physical and digital risks. This, in turn, has allowed AI to infiltrate various aspects and spaces of life, creating vulnerabilities in power and control due to its overuse in services such as operating systems, phones, and even art.
Alexander Sinn/Unsplash
Implications for AI Power Dynamics and Control
Individual Autonomy: Cognitive and Emotional Impacts
Individual autonomy faces significant erosion as AI dependency and control deepens across cognitive, emotional, and professional domains. The substitution of human judgment with algorithmic recommendations reduces critical thinking capacity, creating populations increasingly unable to navigate complex decisions independently. This cognitive atrophy particularly affects younger generations who integrate AI tools during formative learning periods, potentially compromising their ability to develop authentic expertise and creative problem-solving skills. The psychological implications also include diminished self-efficacy, reduced tolerance for uncertainty, and weakened resilience in facing challenges without technological assistance.
Social Cohesion: Human Ties and Empathy
Social cohesion also deteriorates as AI intermediates and controls human relationships and communication. The preference for AI counselling over human connection signals a fundamental shift away from community-based support systems towards individualised, technology-mediated solutions. This trend threatens the development of empathy, emotional intelligence, and interpersonal skills that form the foundation of societies. Professional environments can also become increasingly characterised by artificial competence rather than genuine capability, undermining trust, collaboration, and authentic mentorship relationships. Not to mention that educational institutions face the challenge of distinguishing between authentic learning and AI-assisted performance, which could compromise the entire knowledge validation system.
Economic Structures: Skills, Labour Markets and False Competence
Economic structures further risk fundamental disruption as AI capability substitution masks genuine skills gaps, creating a false confidence in workforce readiness. As a result, labour markets may experience severe misalignment between apparent and actual capabilities, leading to systemic productivity failures and economic instability. The concentration of AI development within major technology corporations also causes unprecedented centralisation of power, enabling these entities to influence behaviour, shape discourse, and extract economic value from human dependency relationships. This dynamic represents a form of digital colonialism in which human agency is subordinated to corporate algorithmic objectives.
Security Vulnerabilities: Disinformation, Threats and Cascading Risks
Security vulnerabilities likewise multiply as AI systems proliferate and grow in power without adequate oversight or safety frameworks. The rushed deployment of AI models creates exploitable weaknesses that malicious actors can leverage for disinformation campaigns, social manipulation, and cyber attacks. This also has national security implications, including the risk of foreign interference in democratic processes, economic espionage through AI-mediated data collection, and the potential weaponisation of AI systems against civilian populations. Especially given the interconnected nature of AI deployments, vulnerabilities in one system can consequently cascade across multiple domains, creating systemic risks that exceed traditional threat models.
Regulatory Capture
Regulatory capture also becomes increasingly likely as AI companies accumulate power and influence over policymaking processes through lobbying, research funding, and revolving door employment practices. The complexity of AI technologies creates information asymmetries that favour industry perspectives over public interest considerations. In a similar vein, democratic governance faces challenges in maintaining effective oversight of rapidly evolving technologies whilst balancing innovation benefits against societal risks. International coordination challenges further enable regulatory arbitrage, where companies relocate operations to jurisdictions with weaker oversight frameworks.
Cultural Homogenisation: Loss of Diversity
Cultural homogenisation can also emerge as AI systems trained on specific datasets propagate particular worldviews, values, and problem-solving approaches across diverse populations. The loss of cultural diversity in thinking patterns, creative expression, and knowledge systems represents an existential threat to human civilisation's adaptive capacity. Local knowledge traditions, alternative epistemologies, and minority perspectives therefore risk marginalisation as AI systems promote standardised approaches to complex problems that require contextual understanding and cultural sensitivity.
Where Humans Can Retain Power and Control
Human advantages over AI systems remain substantial in specific domains that leverage uniquely biological and experiential capabilities, providing pathways for reasserting control over technological relationships.
AI May Understand Emotions, But It Will Never Act On Them.
Emotional authenticity represents humanity's most significant advantage, as AI systems can recognise and simulate emotional patterns but cannot genuinely experience or authentically respond to emotions. Significantly, human emotional intelligence encompasses contextual interpretation, cultural sensitivity, and genuine empathy that emerges from shared biological experiences. Therefore, humans can leverage this advantage by prioritising face-to-face interactions, cultivating emotional skills, and choosing human guidance for complex personal decisions. Therapeutic relationships, creative endeavours, and leadership roles particularly benefit from this authentic emotional engagement, which AI simply cannot replicate. This is particularly important since humans can act out of emotions like anger or sadness, something that AI will struggle with due to restrictions and code limitations.
Human Unpredictability
Unpredictability and creative impulse constitute another area where humans maintain decisive advantages over pattern-based AI systems. Human consciousness generates novel approaches through intuitive leaps, illogical associations, and creative insights that emerge from subconscious processing. This unpredictability enables humans to solve problems through unconventional methods, adapt to unprecedented situations, and create truly original content. Cultivating creativity, embracing uncertainty, and maintaining intellectual curiosity allow humans to operate beyond AI's predictive capabilities - particularly within crises or conflicts where random problems may arise or new solutions may be needed.
Critical Thinking, Authenticity and Uniqueness
Contextual wisdom and cultural understanding also provide humans with interpretive frameworks that AI systems struggle to replicate authentically. Human knowledge integrates personal experience, cultural background, and situational awareness in ways that enable nuanced decision-making appropriate to specific contexts. This advantage becomes particularly relevant in complex social situations, ethical dilemmas, and cross-cultural interactions where algorithmic approaches may miss crucial subtleties or perpetuate harmful biases.
Igor Omilaev/Unsplash
Forecast
Medium-term (3-12 months)
AI dependency will likely accelerate as more organisations integrate AI tools without adequate training or oversight frameworks.
Educational institutions will likely struggle to distinguish between authentic and AI-assisted work, especially as researchers insert white text to avoid being called out.
There is a realistic possibility that mental health applications will expand despite unresolved safety concerns.
Professional competency crises will likely become apparent as AI-dependent workers face situations requiring genuine expertise they lack.
It is likely that regulatory responses will emerge as AI-related incidents increase, though enforcement mechanisms will, with a realistic possibility, remain inadequate.
Long-term (>1 year)
Human societies will likely divide between populations maintaining authentic capabilities and those becoming increasingly AI-dependent.
It is highly likely that educational and professional systems will implement new frameworks for validating genuine human competence, whilst international coordination on AI governance will, with a realistic possibility, improve following major security incidents or economic disruptions.
Generational divides will highly likely emerge between AI-native populations and those retaining pre-AI capabilities, creating distinct socioeconomic classes.
Democratic institutions will, with a realistic possibility, face legitimacy crises as AI-mediated information environments undermine shared reality frameworks.
Economic restructuring will likely occur as genuine human expertise becomes a premium commodity, whilst AI-dependent sectors experience productivity collapses during system failures.