AI Compliance & Regulation · Framework

Singapore Model AI Governance Framework: From Traditional AI to Agentic AI

February 12, 2026 · 15 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Board Member · Legal/Compliance · CISO · Consultant · CTO/CIO · IT Manager · CMO

Singapore's Model AI Governance Framework has evolved through three editions — Traditional AI (2020), Generative AI (2024), and Agentic AI (2026). Together they form the most comprehensive voluntary AI governance framework in Asia.


Key Takeaways

  1. Three generations: Traditional AI (2020), Generative AI (2024), and Agentic AI (January 2026, a world first)
  2. All three are voluntary but widely referenced by regulators across ASEAN
  3. The GenAI framework covers nine areas, including content provenance, cybersecurity, and safety alignment
  4. The Agentic AI framework addresses autonomous agents, cascading actions, and multi-agent coordination
  5. AI Verify provides an open-source testing toolkit; ISAGO 2.0 enables self-assessment
  6. Project Moonshot offers the world's first open-source LLM evaluation toolkit

Overview: Three Generations of AI Governance

Singapore has established itself as Asia's most influential voice in AI governance through its Model AI Governance Framework, developed and maintained by the Infocomm Media Development Authority (IMDA). Since its inception, the framework has evolved through three distinct generations, each responding to a new wave of technological capability and risk.

The first generation, the Model AI Governance Framework (2020), addressed traditional AI systems such as recommendation engines, classification models, and predictive analytics. The second, the Model AI Governance Framework for Generative AI (2024), extended governance principles to large language models, image generators, and other generative systems. The third and most recent iteration, the Model AI Governance Framework for Agentic AI (January 2026), represents the world's first governance framework designed specifically for autonomous AI agents.

All three generations remain voluntary in nature. Nevertheless, they have become the gold standard for AI governance across Southeast Asia and are widely referenced by regulators throughout the region.

The Original Framework (2020)

Published by the Personal Data Protection Commission (PDPC) and IMDA, the original Model AI Governance Framework established the foundational governance principles upon which all subsequent iterations have been built.

Two Guiding Principles

The framework rests on two core tenets. First, organizations deploying AI should ensure that AI decision-making is explainable, transparent, and fair. Second, all AI solutions should be human-centric in their design and implementation.

Four Key Areas

The framework organizes governance responsibilities into four interconnected areas.

The first area, internal governance structures and measures, calls on organizations to establish clear roles and responsibilities for AI oversight, build risk management frameworks appropriate to their AI maturity, and commit to regular review and updating of governance practices.

The second area addresses the level of human involvement in AI-augmented decision-making. The framework advocates a risk-based approach to human oversight, where higher-risk decisions demand greater human involvement and clear escalation procedures are codified.

The third area, operations management, encompasses the full lifecycle of AI systems. This includes data management practices, model development and testing processes, ongoing monitoring and performance tracking, and incident management protocols.

The fourth area, stakeholder interaction and communication, requires transparency about AI use with customers and affected parties, accessible channels for feedback and complaints, and regular public disclosure of AI governance practices.

The GenAI Framework (January 2024)

Developed in collaboration with more than 70 organizations, the GenAI framework extends Singapore's governance architecture to address the distinctive risks introduced by generative AI systems.

Nine Focus Areas

The framework is organized around nine interconnected focus areas that collectively address the full spectrum of generative AI risk.

Accountability establishes clear responsibility for GenAI outputs across the entire development-to-deployment chain. Data governance ensures training data quality while addressing copyright concerns and managing data privacy obligations. Trusted development and deployment mandates safety testing, red-teaming exercises, and staged rollout protocols.

Incident reporting and management provides structured approaches for detecting, reporting, and responding to GenAI incidents. Testing and assurance defines evaluation standards for model performance, safety, and reliability. Content provenance introduces mechanisms to identify and label AI-generated content through watermarking and metadata.

Safety and alignment ensures that generative AI systems behave as intended and remain aligned with human values. Cybersecurity addresses protection against adversarial attacks, prompt injection, and data poisoning. Finally, human oversight of GenAI establishes requirements for maintaining meaningful human control over generative systems.
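Content provenance, one of the nine focus areas, is the most directly implementable of the mechanisms listed above. The sketch below shows one way an application could attach a provenance record to AI-generated text; the field names and the `label_generated_content` helper are illustrative assumptions, not part of any standard (a real deployment would follow an established scheme such as C2PA).

```python
import hashlib
from datetime import datetime, timezone

def label_generated_content(text: str, model_id: str) -> dict:
    """Attach an illustrative provenance record to AI-generated text.

    Field names here are hypothetical; production systems should follow an
    established provenance standard such as C2PA rather than this sketch.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generated_by": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # A content hash lets downstream systems detect tampering.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated_content("Quarterly summary draft.", "example-llm-v1")
```

The record travels with the content, so any downstream consumer can verify both that the text was machine-generated and that it has not been altered since generation.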

Practical Implementation

The framework recognizes that GenAI governance spans multiple stakeholders, each carrying distinct responsibilities. Model developers bear responsibility for safety testing, documentation, and model-level safeguards. Application developers are accountable for appropriate use, additional safeguards, and user-facing governance mechanisms. Deployers carry responsibility for organizational governance, user training, and ongoing monitoring of system behavior.

The Agentic AI Framework (January 2026)

Unveiled at the World Economic Forum in Davos on 22 January 2026, this framework represents the world's first governance standard designed specifically for agentic AI: systems that can autonomously reason, plan, and take actions in pursuit of goals.

Key Concepts

Agentic AI represents a fundamental shift in how artificial intelligence operates within organizations. These systems can set and pursue goals autonomously, make decisions and take actions without requiring human intervention at each step, interact with external tools, systems, and other AI agents, and adapt their behavior based on observed outcomes. This degree of autonomy introduces governance challenges that neither the traditional AI framework nor the GenAI framework was designed to address.

Governance Challenges Unique to Agentic AI

The autonomous nature of agentic AI creates several governance challenges that have no precedent in earlier frameworks. Cascading actions present perhaps the most significant risk: autonomous agents can trigger chains of actions across systems, making it substantially harder to predict and control outcomes. Multi-agent coordination compounds this complexity, as interactions between multiple AI agents introduce emergent behaviors that are difficult to govern at the individual agent level.

Accountability gaps emerge when an agent acts autonomously, raising unresolved questions about who bears responsibility for outcomes that no human directly authorized. Safety boundaries present a further challenge, requiring organizations to define and enforce meaningful limits on autonomous agent behavior without undermining the operational value that autonomy provides.
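One common way to operationalize safety boundaries of this kind is a default-deny action policy with a human-escalation tier. The sketch below is a minimal illustration under assumed action names (`ALLOWED_ACTIONS`, `ESCALATE_ACTIONS`, and the `authorize` helper are all hypothetical), not a prescription from the framework itself.

```python
# Actions the agent may perform without sign-off (illustrative names).
ALLOWED_ACTIONS = {"read_report", "draft_email"}
# Higher-impact actions that require explicit human approval.
ESCALATE_ACTIONS = {"send_payment", "delete_records"}

def authorize(action: str, human_approved: bool = False) -> str:
    """Return 'allow', 'escalate', or 'deny' for a requested agent action."""
    if action in ALLOWED_ACTIONS:
        return "allow"
    if action in ESCALATE_ACTIONS:
        return "allow" if human_approved else "escalate"
    # Default-deny: any action not explicitly listed is blocked outright.
    return "deny"
```

The default-deny branch is the key design choice: it preserves operational value for routine actions while ensuring that novel or unanticipated agent behavior fails closed rather than open.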

Framework Provisions

The Agentic AI Framework addresses these challenges through several provisions. It mandates clear human accountability for autonomous agent actions, regardless of the degree of autonomy involved. Organizations must define explicit boundaries and safety limits for agent behavior. The framework requires robust monitoring and intervention mechanisms that enable human operators to observe and override agent decisions in real time. Comprehensive logging and audit trails for all agent decisions and actions are required to support after-the-fact review. Finally, the framework establishes incident management protocols tailored to the unique failure modes of autonomous agent systems.
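The logging and audit-trail provision can be sketched as a wrapper that records every agent action with its inputs, outputs, and a timestamp. This is a minimal illustration, assuming an in-memory log and hypothetical names (`AUDIT_LOG`, `audited`, `approve_invoice`); a production system would write to append-only, tamper-evident storage.

```python
import functools
import time

# Illustrative in-memory log; production systems need append-only storage.
AUDIT_LOG: list[dict] = []

def audited(agent_id: str):
    """Decorator recording every call an agent makes, to support
    after-the-fact review of autonomous decisions."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "agent": agent_id,
                "action": fn.__name__,
                "args": repr(args),
                "result": repr(result),
                "ts": time.time(),
            })
            return result
        return inner
    return wrap

@audited("invoice-agent-01")
def approve_invoice(invoice_id: str) -> bool:
    return True  # placeholder for real business logic

approve_invoice("INV-1001")
```

Because the wrapper captures the action name, arguments, and result for every call, a reviewer can reconstruct exactly what an agent did and when, which is the prerequisite for the framework's after-the-fact review requirement.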

AI Verify and ISAGO

Beyond the governance frameworks themselves, Singapore has invested in practical tooling that enables organizations to translate governance principles into measurable technical outcomes.

AI Verify

AI Verify is a government-developed toolkit for testing AI systems against governance attributes including fairness, transparency, and robustness. The toolkit is open-source and actively maintained by the AI Verify Foundation, which now comprises more than 90 member organizations. Companies can use AI Verify to demonstrate alignment with the Model Framework through structured, repeatable technical assessments.

Project Moonshot

Project Moonshot is the world's first open-source evaluation toolkit designed specifically for large language models. It provides red-teaming capabilities, benchmark testing, and baseline safety evaluations, giving organizations a standardized way to assess LLM performance against governance expectations.

ISAGO (Implementation and Self-Assessment Guide for Organizations)

ISAGO serves as a companion tool that helps organizations evaluate their AI governance maturity through structured self-assessment questionnaires and gap analysis against the Model Framework. ISAGO 2.0, released in 2025, integrates directly with AI Verify to combine organizational self-assessment with technical testing into a single governance workflow.

How to Implement

Organizations seeking to adopt Singapore's AI governance frameworks should approach implementation as a phased journey, building governance maturity incrementally rather than attempting to address all three framework generations simultaneously.

Phase 1: Foundation

The first phase establishes governance infrastructure. Organizations should assign AI governance ownership, which may reside within an existing risk or compliance function rather than requiring a new team. Completing the ISAGO self-assessment provides an objective baseline of current governance maturity, and defining a risk-based approach to AI governance ensures that subsequent efforts are proportionate to the organization's actual exposure.
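The baselining step above amounts to a gap analysis: score current maturity per governance area and rank the shortfalls. The sketch below illustrates the arithmetic only; the 0-5 scale and the scores are invented for illustration and are not ISAGO's actual questionnaire, though the four area names come from the 2020 framework.

```python
# Hypothetical maturity scores (0-5) per governance area from the 2020
# framework; the scale and numbers are illustrative, not from ISAGO.
current = {
    "internal_governance": 3,
    "human_involvement": 2,
    "operations_management": 4,
    "stakeholder_communication": 1,
}
TARGET = 4  # illustrative target maturity level

# Gap analysis: areas below target, ranked by size of shortfall.
gaps = {area: TARGET - score for area, score in current.items() if score < TARGET}
ranked = sorted(gaps.items(), key=lambda kv: -kv[1])
```

Ranking the gaps gives the risk-based prioritization the framework calls for: in this example, stakeholder communication would be addressed first, while operations management already meets the target.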

Phase 2: Traditional AI Governance

With the foundation in place, organizations should implement the four key areas from the original 2020 framework. Conducting AI Verify testing for the most important AI systems provides technical evidence of governance alignment. Establishing monitoring and incident management processes ensures that governance remains operational rather than purely documentary.

Phase 3: GenAI Governance

The third phase extends governance to generative AI systems. Organizations should map their GenAI usage against the nine focus areas identified in the 2024 framework, implement content provenance mechanisms to track AI-generated outputs, and conduct red-teaming exercises using Project Moonshot. Establishing GenAI-specific acceptable use policies and training programs ensures that governance obligations are understood across the organization.

Phase 4: Agentic AI Governance (If Applicable)

For organizations deploying autonomous AI agents, the fourth phase introduces the additional governance requirements of the 2026 framework. This begins with identifying any autonomous AI agents operating within the organization's systems and defining explicit safety boundaries and human override mechanisms. Implementing comprehensive logging and audit trails for all agent actions is essential, as is establishing monitoring specifically designed to detect cascading action risks before they materialize.
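One simple guardrail against cascading actions is a hard limit on how deep an agent-triggered chain may run before halting for review. The sketch below assumes a hypothetical trigger graph (`TRIGGERS`) and depth limit (`MAX_CHAIN_DEPTH`); it illustrates the containment idea, not any mechanism prescribed by the framework.

```python
# Hypothetical graph of follow-up actions an agent action can trigger.
TRIGGERS = {
    "update_inventory": ["notify_supplier"],
    "notify_supplier": ["create_purchase_order"],
    "create_purchase_order": [],
}

MAX_CHAIN_DEPTH = 3  # illustrative ceiling on cascade depth

def run_action(action: str, depth: int = 0) -> list[str]:
    """Execute an action and its follow-ups, halting the cascade
    (for human review) once the chain exceeds MAX_CHAIN_DEPTH."""
    if depth >= MAX_CHAIN_DEPTH:
        raise RuntimeError(f"cascade halted at depth {depth}: {action}")
    executed = [action]
    for follow_up in TRIGGERS.get(action, []):
        executed += run_action(follow_up, depth + 1)
    return executed

executed = run_action("update_inventory")
```

A three-step chain completes normally here, but if the trigger graph ever loops or fans out beyond the ceiling, the run fails loudly instead of silently propagating actions across systems, which is exactly the failure mode the framework highlights.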

How It Fits with Other Regulations

Singapore's Model AI Governance Framework does not exist in isolation. The Singapore PDPA provides mandatory data protection requirements that underpin all AI governance activity. The Singapore MAS AI Guidelines establish a sector-specific mandatory framework for financial institutions, imposing binding obligations that go beyond the voluntary Model Framework. At the regional level, the ASEAN AI Governance Guide draws heavily on Singapore's approach, extending its influence across Southeast Asia. Globally, the EU AI Act shares a similar risk-based governance philosophy, making Singapore's frameworks broadly compatible with the emerging international regulatory landscape.

Common Questions

Is the Model AI Governance Framework mandatory?

No. All three editions (Traditional AI, GenAI, and Agentic AI) are voluntary. However, the PDPA is mandatory and the MAS AI guidelines are effectively mandatory for financial institutions. The Model Framework represents best practice that regulators use as a reference when evaluating organizations' AI governance.

What is AI Verify, and do we have to use it?

AI Verify is a government-developed, open-source toolkit for testing AI systems against governance attributes like fairness, transparency, and robustness. It is not mandatory to use AI Verify, but it provides a practical way to demonstrate alignment with the Model Framework and can strengthen your position with regulators and customers.

Does our chatbot fall under the Agentic AI framework?

The Agentic AI framework (January 2026) covers AI systems that can autonomously reason, plan, and take actions without human intervention at each step. A standard chatbot that only responds to user queries is NOT an agentic AI system. However, if your AI agent can autonomously browse the web, execute code, make API calls, or take actions on behalf of users, it likely falls under this framework.

How does Singapore's approach compare with the EU AI Act?

Singapore's approach is voluntary and principles-based, while the EU AI Act is mandatory and prescriptive. Singapore focuses on industry collaboration and self-governance, with mandatory requirements only in specific sectors (financial services via MAS). The EU takes a risk-based but legally binding approach across all sectors. Many companies use Singapore's framework as a practical governance tool to complement EU AI Act compliance.

References

  1. Model AI Governance Framework (Second Edition). Singapore PDPC / IMDA (2020).
  2. New Model AI Governance Framework for Agentic AI. Infocomm Media Development Authority (IMDA) (2026).
  3. Singapore Launches AI Verify Foundation. IMDA (2023).
  4. Model AI Governance Framework — PDPC Singapore. Singapore PDPC (2024).
  5. Artificial Intelligence — Emerging Technologies. IMDA (2024).
  6. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
  7. Consultation Paper on AI Risk Management for Financial Institutions. Monetary Authority of Singapore (MAS) (2025).
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs


Talk to Us About AI Compliance & Regulation

We work with organizations across Southeast Asia on AI compliance and regulation programs. Let us know what you are working on.