Agentic AI: The Future and Governance of Autonomous Systems
By Martyna Chmura | 1 January 2026
Summary
On 22 January 2026, Singapore launched the Model AI Governance Framework for Agentic AI (MGF), the first state-backed framework to guide the deployment of autonomous Artificial Intelligence (AI) agents that can act on real-world systems without constant human input.
The framework is already relevant for businesses and public bodies that use agentic AI to automate complex workflows, but it leaves open questions about security, accountability and systemic risks beyond single organisations.
It is a realistic possibility that agentic AI becomes core digital infrastructure within the next few years, with high impact on economic efficiency and democratic resilience, making stronger technical and legal controls over autonomy likely in the medium term.
Context
On 22 January 2026, Singapore’s Infocomm Media Development Authority (IMDA) published the Model AI Governance Framework for Agentic AI (MGF) at the World Economic Forum in Davos, describing it as the first governance template focused specifically on AI agents. The framework sets out four dimensions of governance, emphasising that humans must remain accountable for decisions and actions taken by autonomous systems.
Agentic AI refers to systems capable of autonomous goal-directed behaviour, planning and taking sequences of actions in dynamic environments with limited human intervention. This contrasts with individual AI agents, which are task-specific components that operate within predefined boundaries and require tighter human direction. The shift to agentic AI marks a step change: from narrow automation to systems that can adapt, prioritise and act across workflows.
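The distinction can be made concrete with a short sketch. The code below is purely illustrative, with hypothetical planner, tool and goal-check functions standing in for real components: what makes the system agentic is the bounded plan-act-observe loop, which keeps choosing and executing actions until the goal is met, rather than performing a single predefined task.

```python
from dataclasses import dataclass

# Illustrative sketch only: the planner, tool and goal check are toy stand-ins,
# not any vendor's API and not something defined in the MGF.

@dataclass
class Step:
    tool: str
    arguments: dict

def plan_next_step(goal: str, history: list) -> Step:
    # A real system would call a model here; this stub simply escalates once.
    if not history:
        return Step("search_orders", {"query": goal})
    return Step("draft_reply", {"context": history[-1][1]})

def execute_tool(tool: str, arguments: dict) -> str:
    return f"result of {tool}"          # placeholder for a real tool call

def goal_met(history: list) -> bool:
    return any(step.tool == "draft_reply" for step, _ in history)

def run_agentic_system(goal: str, max_steps: int = 10) -> list:
    """Plan, act, observe, re-plan: the loop that distinguishes agentic AI
    from a single-shot, task-specific agent."""
    history = []
    for _ in range(max_steps):          # bounded iterations as a basic control
        step = plan_next_step(goal, history)
        result = execute_tool(step.tool, step.arguments)
        history.append((step, result))
        if goal_met(history):
            break
    return history

print(run_agentic_system("handle refund request for order 1234"))
```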
Market trends reflect rapid adoption. The global market for agentic AI technologies was estimated at around USD 7.3 billion in 2025, with projections of about USD 139.2 billion by 2034, implying annual growth of roughly 40%. Related forecasts suggest that 40% of enterprise applications will include AI agent functionalities by 2026, indicating broad movement from experimentation to core infrastructure. These developments illustrate both the scale of agentic AI adoption and the growing need for governance of security, accountability and societal risks.
Implications
The MGF acknowledges that once AI systems can act, not just advise, the main problem becomes controlling their impact on operational systems and people. Bounding autonomy and tool access, enforcing traceability to human decision-makers and building in checkpoints are important steps, but they rely on organisations correctly identifying high-risk actions and maintaining strong oversight as agents spread across functions. In practice, complex structures and cost pressures can lead to superficial compliance: staff may approve large volumes of agent actions without detailed review, and managers may gradually loosen controls to gain speed and savings. This weakens the promise of 'meaningful human oversight' just as agents gain deeper access to sensitive data and systems.
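The checkpoint and traceability controls described above can be sketched as a thin gate around tool calls. The example below is an assumption-laden illustration rather than anything mandated by the MGF: the tool names, risk tiers and approval routine are invented, but the pattern of an allowlist, a default-deny human checkpoint for high-risk actions and an audit trail tied to a named owner reflects the controls the framework emphasises.

```python
import logging
from datetime import datetime, timezone

# Hypothetical control layer: tool names, the approval routine and the audit
# logger are illustrative assumptions, not terminology from the MGF itself.
ALLOWED_TOOLS = {"read_ledger", "draft_report", "issue_refund"}
HIGH_RISK_TOOLS = {"issue_refund"}        # actions that need a named approver

audit_log = logging.getLogger("agent_audit")

def request_human_approval(tool: str, arguments: dict, owner: str) -> bool:
    # Placeholder: a real deployment would route this to a review queue.
    print(f"Approval requested from {owner}: {tool}({arguments})")
    return False                          # default-deny until a human acts

def gated_execute(tool: str, arguments: dict, owner: str):
    """Run a tool call only if it is allowlisted and, for high-risk actions,
    explicitly approved by an accountable human, logging every decision."""
    if tool not in ALLOWED_TOOLS:
        audit_log.warning("blocked tool %s for agent owned by %s", tool, owner)
        raise PermissionError(f"tool {tool!r} is outside the agent's mandate")
    if tool in HIGH_RISK_TOOLS and not request_human_approval(tool, arguments, owner):
        audit_log.info("high-risk tool %s held for review (owner: %s)", tool, owner)
        return None
    audit_log.info("%s ran %s at %s", owner, tool,
                   datetime.now(timezone.utc).isoformat())
    return f"executed {tool}"             # placeholder for the real tool call

gated_execute("issue_refund", {"order": "1234", "amount": 50}, owner="ops-team-lead")
```

The default-deny behaviour is deliberate: if the review step fails or is skipped, the high-risk action does not proceed, which keeps the human owner accountable for anything that does.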
For businesses, the benefits are clear. Agentic AI can handle entire customer journeys, monitor transactions and initiate first-line responses in fraud cases, adjust inventory and logistics in near real time, and help compile compliance reports. These systems can cut handling times and labour costs, and surveys suggest that 88% of senior executives plan to increase AI budgets because of agentic AI opportunities. However, the same autonomy creates new risks: agents often need broad permissions, and techniques such as prompt injection and memory poisoning can manipulate goals or corrupt internal state, leading to unauthorised actions, cascading hallucinations and large-scale errors. A single compromised agent can exfiltrate data or trigger harmful transactions at machine speed, and automation bias makes it less likely that humans will intervene early.
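Typical mitigations pair least-privilege scoping with limits on how fast an agent can act. The sketch below uses assumed scope names, limits and agent labels, not terminology from the MGF or any vendor: an injected instruction cannot widen an agent's mandate because authorisation is checked against a fixed scope, and a rate budget caps how much damage a compromised agent can do per minute.

```python
import time
from collections import deque

# Hypothetical defence-in-depth layer: scope names, limits and agent labels
# are illustrative assumptions, not drawn from the MGF or any vendor API.
AGENT_SCOPES = {
    "support_agent": {"read_tickets", "draft_reply"},        # no payment scope
    "fraud_agent": {"read_transactions", "flag_account"},
}

class RateLimiter:
    """Allow at most `limit` sensitive actions per `window` seconds."""
    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit, self.window = limit, window
        self.calls = deque()                  # timestamps of recent actions

    def allow(self) -> bool:
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            return False
        self.calls.append(now)
        return True

def authorise(agent: str, action: str, limiter: RateLimiter) -> bool:
    """Deny any action outside the agent's scope or above its rate budget."""
    if action not in AGENT_SCOPES.get(agent, set()):
        return False          # injected instructions cannot widen the scope
    if not limiter.allow():
        return False          # caps the blast radius of a compromised agent
    return True

limiter = RateLimiter(limit=5, window=60.0)
print(authorise("support_agent", "flag_account", limiter))    # False: out of scope
print(authorise("fraud_agent", "flag_account", limiter))      # True: within scope
```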
At the societal level, agentic architectures also pose wider risks beyond single organisations. Research on malicious AI swarms shows how coordinated agents can maintain persistent identities, adapt to engagement, infiltrate communities and generate synthetic consensus, threatening democratic debate and trust in institutions. These dynamics extend beyond the operational harms described in enterprise frameworks and raise questions about how societies can distinguish between legitimate and inauthentic participation in online spaces.
Forecast
Short-term (Now - 3 months)
It is likely that major firms in Singapore and the wider region cite the MGF in internal policies and risk assessments, particularly in finance, logistics and digital public services, with medium impact on the pace but not the direction of agentic AI adoption.
Medium-term (3-12 months)
It is likely that regulators in sectors such as banking and critical infrastructure introduce agent‑specific rules on autonomy limits, audit trails and incident reporting.
There is a realistic possibility that at least one notable incident involving misconfigured or compromised agents triggers public and political calls for stricter enforcement, with high impact on compliance costs and deployment strategies.
Long-term (>1 year)
It is likely that agentic AI becomes embedded as core organisational infrastructure and that governments in several jurisdictions move from soft‑law guidance to binding constraints on agent autonomy and shared liability between developers and operators.
It is a realistic possibility that this shift will change how businesses design, test and monitor autonomous systems, and how democracies manage information risks from coordinated agentic activity.