Research Report · 2025 Edition

Proposed Framework for Governing Agentic AI Systems

Singapore IMDA's governance framework for autonomous AI agents in enterprise deployment

Published January 1, 2025 · 2 min read

Executive Summary

Singapore's Infocomm Media Development Authority (IMDA) has proposed a governance framework for agentic AI systems. The report examines the unique risks of autonomous AI agents, including loss of human oversight, cascading errors in multi-agent systems, and accountability gaps, and proposes principle-based guardrails for deploying AI agents in enterprise settings.

Agentic AI systems—those capable of autonomous goal pursuit, environmental interaction, and multi-step planning without continuous human oversight—present governance challenges that existing AI regulatory frameworks are ill-equipped to address. This research proposes a comprehensive governance framework that accounts for the distinctive properties of agentic systems including emergent behaviour, delegated authority, and cascading consequence chains. The framework introduces novel concepts such as autonomy boundaries that define permissible action spaces, intervention protocols that ensure meaningful human override capability, and accountability attribution mechanisms that distribute responsibility across developers, deployers, and operators of agentic systems. By addressing governance gaps before agentic AI achieves widespread commercial deployment, the framework provides a proactive foundation for regulatory development across industries and jurisdictions.

Published by Singapore IMDA (2025)

Key Findings

Autonomous agent governance requires fundamentally different oversight mechanisms than traditional supervised AI systems: 86% of the existing AI governance frameworks assessed lacked provisions for autonomous multi-step task execution, delegation chains, and emergent agent behaviours in complex environments.

Human-on-the-loop escalation protocols proved more practical than human-in-the-loop controls for high-frequency agentic operations: the median decision latency for autonomous agents executing routine operations was 12 ms, making synchronous human approval impractical and necessitating asynchronous oversight with exception-based escalation.

Capability bounding through constrained action spaces reduced unintended-consequence risk without eliminating agent utility: 74% of tested agentic tasks completed successfully within bounded action spaces, demonstrating that meaningful autonomy can coexist with safety constraints when boundaries are properly calibrated.

Audit trail completeness standards for agentic systems exceeded traditional logging requirements due to multi-step reasoning chains: agentic systems generated 5.8x more log entries per task completion than single-step AI tools, requiring purpose-built observability infrastructure for effective post-hoc review.
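To illustrate why agentic audit trails grow well beyond single-step logging, the sketch below records one structured entry per reasoning step, tool call, and outcome within a task, the kind of purpose-built, per-step observability the finding points to. The schema, field names, and example entries are illustrative assumptions, not part of the framework.

```python
import json, time, uuid
from dataclasses import dataclass, asdict, field

@dataclass
class AuditEntry:
    task_id: str
    step: int
    kind: str          # "reasoning", "tool_call", or "outcome" (assumed categories)
    detail: str
    timestamp: float = field(default_factory=time.time)

class AgentAuditTrail:
    """Per-step audit trail for a multi-step agentic task (illustrative sketch)."""
    def __init__(self) -> None:
        self.task_id = str(uuid.uuid4())
        self.entries: list[AuditEntry] = []
        self._step = 0

    def record(self, kind: str, detail: str) -> None:
        self._step += 1
        self.entries.append(AuditEntry(self.task_id, self._step, kind, detail))

    def export(self) -> str:
        # JSON-lines output suitable for post-hoc review tooling
        return "\n".join(json.dumps(asdict(e)) for e in self.entries)

trail = AgentAuditTrail()
trail.record("reasoning", "Invoice total disagrees with purchase order; plan: re-fetch both and compare line items.")
trail.record("tool_call", "erp.get_invoice(id=INV-1042)")        # hypothetical tool call
trail.record("tool_call", "erp.get_purchase_order(id=PO-0977)")  # hypothetical tool call
trail.record("outcome", "Discrepancy flagged for human review; no correction applied autonomously.")
print(trail.export())
```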


About This Research

Publisher: Singapore IMDA
Year: 2025
Type: Governance Framework

Source: Proposed Framework for Governing Agentic AI Systems

Relevance

Industries: Cross-Industry
Pillars: AI Compliance & Regulation, AI Governance & Risk Management, Board & Executive Oversight
Use Cases: AI Agents & Autonomous Systems
Regions: Singapore

Autonomy Boundaries and Action Spaces

The framework introduces the concept of formally defined autonomy boundaries that constrain agentic AI systems to pre-approved action spaces. These boundaries operate at multiple levels: hard constraints that cannot be overridden regardless of the agent's objective assessment, soft constraints that can be escalated to human supervisors for exception approval, and contextual constraints that adjust dynamically based on environmental risk levels. This layered architecture enables productive autonomy while maintaining meaningful guardrails against catastrophic or irreversible actions.
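To make the layered boundary idea concrete, here is a minimal sketch assuming a policy check runs before every agent action: hard constraints block outright, soft constraints escalate to a human supervisor, and a contextual limit tightens with environmental risk. The class names, thresholds, and risk labels are illustrative assumptions rather than anything prescribed by the framework.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    ALLOW = "allow"        # action is inside the approved action space
    ESCALATE = "escalate"  # soft or contextual constraint hit: route to a human supervisor
    BLOCK = "block"        # hard constraint hit: never executed, cannot be overridden

@dataclass
class Action:
    name: str
    irreversible: bool
    value_at_risk: float   # e.g. monetary exposure of the action

@dataclass
class AutonomyBoundary:
    """Illustrative layered boundary: hard, soft, and contextual constraints."""
    hard_constraints: list[Callable[[Action], bool]]   # True => violation, always blocked
    soft_constraints: list[Callable[[Action], bool]]   # True => violation, human may approve an exception
    contextual_limit: Callable[[str], float]           # risk level -> max value_at_risk allowed

    def evaluate(self, action: Action, risk_level: str) -> Decision:
        if any(rule(action) for rule in self.hard_constraints):
            return Decision.BLOCK
        if action.value_at_risk > self.contextual_limit(risk_level):
            return Decision.ESCALATE
        if any(rule(action) for rule in self.soft_constraints):
            return Decision.ESCALATE
        return Decision.ALLOW

# Example boundary: irreversible actions are always blocked; large exposures escalate,
# with the threshold tightening when environmental risk is elevated.
boundary = AutonomyBoundary(
    hard_constraints=[lambda a: a.irreversible],
    soft_constraints=[lambda a: a.value_at_risk > 10_000],
    contextual_limit=lambda risk: 50_000 if risk == "normal" else 5_000,
)

print(boundary.evaluate(Action("refund_customer", False, 2_000), "normal"))        # Decision.ALLOW
print(boundary.evaluate(Action("bulk_price_update", False, 20_000), "elevated"))   # Decision.ESCALATE
print(boundary.evaluate(Action("delete_production_data", True, 0), "normal"))      # Decision.BLOCK
```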

Accountability Attribution in Multi-Agent Environments

As agentic systems increasingly operate in environments where multiple AI agents interact, traditional single-point accountability models become inadequate. The framework proposes a distributed accountability architecture that assigns proportional responsibility based on each agent's contribution to an outcome, the foreseeability of the interaction effects, and the adequacy of safety measures implemented by each agent's operator. This approach draws on established principles from tort law and organisational liability theory, adapting them for the novel characteristics of autonomous AI interaction.
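As a rough sketch of how proportional attribution could be operationalised, the example below weighs each operator's causal contribution, the foreseeability of the interaction effect, and the adequacy of their safety measures, and defaults to equal joint shares when causation is too ambiguous to apportion. The specific weighting formula and field names are assumptions for illustration; the framework describes the factors, not a formula.

```python
from dataclasses import dataclass

@dataclass
class OperatorRecord:
    name: str
    causal_contribution: float   # 0..1, share of the outcome traceable to this operator's agent
    foreseeability: float        # 0..1, how predictable the interaction effect was
    safety_adequacy: float       # 0..1, compliance with prescribed safety standards

def attribute_responsibility(records: list[OperatorRecord],
                             ambiguity_threshold: float = 0.1) -> dict[str, float]:
    """Assign responsibility shares that sum to 1.0 across operators.

    If no operator's causal contribution clears the ambiguity threshold,
    fall back to joint (equal) liability, mirroring the framework's default.
    """
    if all(r.causal_contribution < ambiguity_threshold for r in records):
        return {r.name: 1.0 / len(records) for r in records}

    # Responsibility rises with causation and foreseeability, and with *inadequate* safety measures.
    raw = {
        r.name: r.causal_contribution * (0.5 + 0.5 * r.foreseeability) * (1.5 - r.safety_adequacy)
        for r in records
    }
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}

shares = attribute_responsibility([
    OperatorRecord("pricing_agent_operator", causal_contribution=0.7, foreseeability=0.8, safety_adequacy=0.4),
    OperatorRecord("inventory_agent_operator", causal_contribution=0.3, foreseeability=0.5, safety_adequacy=0.9),
])
print(shares)  # the pricing operator carries the larger share
```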

Intervention Protocols and Human Override

Meaningful human oversight of agentic systems requires more than a theoretical kill switch. The framework mandates intervention protocols that ensure human operators can understand an agent's current state, predict its planned actions, and execute override commands within timeframes sufficient to prevent unacceptable outcomes. These protocols include mandatory state transparency interfaces, action preview mechanisms, and graduated intervention levels ranging from pace reduction to complete suspension of autonomous operation.
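The sketch below shows one hypothetical way the three capabilities could surface in an agent runtime: a state-transparency snapshot, an action preview exposed ahead of execution, and graduated override levels running from pace reduction through redirection to full suspension. Class and method names are assumed for illustration; the framework mandates the capabilities, not a particular API.

```python
import time
from dataclasses import dataclass
from enum import Enum

class OverrideLevel(Enum):
    NONE = 0        # full autonomous operation
    SLOW = 1        # pace reduction: extra delay before each action
    REDIRECT = 2    # objectives replaced by the human operator
    SUSPEND = 3     # complete suspension of autonomous operation

@dataclass
class AgentState:
    objective: str
    planned_actions: list[str]
    uncertainty: float          # 0..1, self-reported confidence gap

@dataclass
class SupervisedAgent:
    state: AgentState
    override: OverrideLevel = OverrideLevel.NONE
    preview_lead_time_s: float = 2.0    # lead time before a previewed action executes

    def state_snapshot(self) -> AgentState:
        """State transparency interface: human-readable view of objective, plan, and uncertainty."""
        return self.state

    def preview_next_action(self) -> str | None:
        """Action preview: expose the next step before it runs."""
        return self.state.planned_actions[0] if self.state.planned_actions else None

    def redirect(self, new_objective: str, new_plan: list[str]) -> None:
        """Graduated intervention: operator replaces the agent's objectives mid-task."""
        self.state = AgentState(new_objective, new_plan, uncertainty=1.0)
        self.override = OverrideLevel.REDIRECT

    def step(self) -> str | None:
        """Execute the next planned action, honouring the current override level."""
        if self.override is OverrideLevel.SUSPEND or not self.state.planned_actions:
            return None
        if self.override is OverrideLevel.SLOW:
            time.sleep(self.preview_lead_time_s)    # pace reduction
        return self.state.planned_actions.pop(0)

agent = SupervisedAgent(AgentState("reconcile invoices", ["fetch_invoices", "match_payments"], uncertainty=0.2))
print(agent.preview_next_action())   # operator sees "fetch_invoices" before it executes
agent.override = OverrideLevel.SUSPEND
print(agent.step())                  # None: autonomous operation fully suspended
```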

Key Statistics

86% of current AI frameworks lack agentic system provisions
12ms median agent decision latency precludes synchronous approval
74% of tasks succeeded within bounded autonomous action spaces
5.8x more audit log entries per task for agentic systems

Source: Proposed Framework for Governing Agentic AI Systems

Common Questions

How is accountability assigned when multiple independently operated agents contribute to a single outcome?

The framework establishes a distributed accountability model that attributes responsibility proportionally based on each agent's causal contribution to the outcome, the operator's compliance with prescribed safety standards, and the foreseeability of the inter-agent interaction effects. When causal attribution proves ambiguous, the framework defaults to joint liability among operators whose agents participated in the outcome chain, creating incentives for robust pre-deployment interaction testing and real-time coordination protocols between independently operated agentic systems.

What intervention capabilities must human operators retain over a deployed agentic system?

The framework mandates three complementary intervention capabilities: state transparency interfaces that present human-readable summaries of the agent's current objectives, planned actions, and uncertainty estimates; action preview mechanisms that display proposed next steps before execution with sufficient lead time for human evaluation; and graduated override controls that enable operators to slow agent operation, redirect objectives, or fully suspend autonomous activity without triggering unsafe state transitions or cascading failures in dependent systems.