
Singapore Model AI Governance Framework: From Traditional AI to Agentic AI

February 12, 2026 · 15 min read · Pertama Partners

For: CTO/CIO · Compliance Lead · Risk Officer · CEO/Founder

Singapore's Model AI Governance Framework has evolved through three editions — Traditional AI (2020), Generative AI (2024), and Agentic AI (2026). Together they form the most comprehensive voluntary AI governance framework in Asia.


Key Takeaways

  1. Three generations: Traditional AI (2020), Generative AI (2024), and Agentic AI (January 2026 — world's first)
  2. All are voluntary but widely referenced by regulators across ASEAN
  3. The GenAI framework covers 9 areas including content provenance, cybersecurity, and safety alignment
  4. The Agentic AI framework addresses autonomous agents, cascading actions, and multi-agent coordination
  5. AI Verify provides an open-source testing toolkit; ISAGO 2.0 enables self-assessment
  6. Project Moonshot offers the world's first open-source LLM evaluation toolkit

Overview: Three Generations of AI Governance

Singapore has pioneered AI governance in Asia through its Model AI Governance Framework, developed by the Infocomm Media Development Authority (IMDA). The framework has evolved through three generations:

  1. Model AI Governance Framework (2020): Covers traditional AI systems — recommendation engines, classification models, predictive analytics
  2. Model AI Governance Framework for Generative AI (2024): Addresses risks specific to LLMs, image generators, and other generative models
  3. Model AI Governance Framework for Agentic AI (January 2026): The world's first governance framework for autonomous AI agents

All three are voluntary, but they represent the gold standard for AI governance in Southeast Asia and are referenced by regulators across the region.

The Original Framework (2020)

Two Guiding Principles

1. Organizations using AI should ensure that AI decision-making is explainable, transparent, and fair.

2. AI solutions should be human-centric.

Four Key Areas

Internal governance structures and measures:

  • Clear roles and responsibilities for AI governance
  • Risk management framework appropriate to the organization's AI maturity
  • Regular review and update of AI governance practices

Determining the level of human involvement in AI-augmented decision-making:

  • Risk-based approach to human oversight
  • Higher-risk decisions require more human involvement
  • Clear escalation procedures
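The risk-based approach above can be sketched as a simple decision rule, using the framework's own human-in-the-loop / human-over-the-loop / human-out-of-the-loop terminology. The severity-times-probability scoring and the thresholds below are illustrative assumptions, not values taken from the framework:

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"          # a human approves every decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"      # a human monitors and can intervene
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully automated

def required_oversight(severity: int, probability: int) -> Oversight:
    """Map a decision's harm severity and probability (each rated 1-5)
    to an oversight mode, escalating human involvement as risk grows."""
    risk = severity * probability  # simple risk-matrix score, 1-25 (assumed scale)
    if risk >= 15:
        return Oversight.HUMAN_IN_THE_LOOP
    if risk >= 6:
        return Oversight.HUMAN_OVER_THE_LOOP
    return Oversight.HUMAN_OUT_OF_THE_LOOP
```

An organization would calibrate the scale and thresholds to its own risk appetite; the point is that the mapping is explicit, documented, and reviewable.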

Operations management:

  • Data management practices
  • Model development and testing processes
  • Monitoring and performance tracking
  • Incident management

Stakeholder interaction and communication:

  • Transparency about AI use with customers and affected parties
  • Channels for feedback and complaints
  • Regular disclosure of AI governance practices

The GenAI Framework (January 2024)

Developed in collaboration with 70+ organizations, this framework addresses risks specific to generative AI:

Nine Focus Areas

  1. Accountability: Defining who is responsible for GenAI outputs across the development-deployment chain
  2. Data: Ensuring training data quality, addressing copyright, and managing data privacy
  3. Trusted development and deployment: Safety testing, red-teaming, and staged rollouts
  4. Incident reporting and management: Detecting, reporting, and responding to GenAI incidents
  5. Testing and assurance: Evaluating model performance, safety, and reliability
  6. Content provenance: Mechanisms to identify and label AI-generated content (watermarking, metadata)
  7. Safety and alignment: Ensuring GenAI behaves as intended and aligns with human values
  8. Cybersecurity: Protecting against adversarial attacks, prompt injection, and data poisoning
  9. Human oversight of GenAI: Maintaining meaningful human control
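One way to implement the content provenance area (focus area 6) is to attach signed metadata to generated content so that downstream consumers can detect stripped or altered labels. The sketch below is a minimal illustration, not anything prescribed by the framework: production systems would more likely adopt a standard such as C2PA, and the key handling here is a placeholder assumption.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-org-managed-key"  # assumption: real key management needed

def label_ai_content(text: str, model_id: str) -> dict:
    """Attach provenance metadata to AI-generated content and sign it."""
    record = {
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "generator": model_id,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(text: str, record: dict) -> bool:
    """Check both the signature and that the content matches its recorded hash."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["content_sha256"] == hashlib.sha256(text.encode()).hexdigest())
```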

Practical Implementation

The framework recognizes that GenAI governance involves multiple stakeholders:

  • Model developers: Responsible for safety testing, documentation, and model-level safeguards
  • Application developers: Responsible for appropriate use, additional safeguards, and user-facing governance
  • Deployers: Responsible for organizational governance, user training, and monitoring

The Agentic AI Framework (January 2026)

Unveiled at the World Economic Forum in Davos on 22 January 2026, this is the world's first governance framework specifically for agentic AI — AI systems that can autonomously reason, plan, and take actions.

Key Concepts

What is Agentic AI? AI systems that can:

  • Set and pursue goals autonomously
  • Make decisions and take actions without human intervention at each step
  • Interact with external tools, systems, and other AI agents
  • Adapt their behavior based on outcomes

Governance Challenges Unique to Agentic AI

  • Cascading actions: Autonomous agents can trigger chains of actions, making it harder to predict and control outcomes
  • Multi-agent coordination: When multiple AI agents interact, governance becomes more complex
  • Accountability gaps: When an agent acts autonomously, who is responsible for the outcome?
  • Safety boundaries: How to define and enforce limits on autonomous agent behavior

Framework Provisions

  • Clear human accountability for autonomous agent actions
  • Defined boundaries and safety limits for agent behavior
  • Monitoring and intervention mechanisms
  • Logging and audit trails for agent decisions and actions
  • Incident management procedures for autonomous agent failures
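Several of these provisions — defined boundaries, a cap on cascading actions, and audit trails — can be combined in a thin wrapper around an agent's tool calls. The guard below is a minimal sketch; the tool allow-list, action budget, and log format are illustrative assumptions, not requirements from the framework:

```python
import time

ALLOWED_TOOLS = {"search", "read_file"}  # assumption: org-defined safety boundary
MAX_ACTIONS_PER_TASK = 20                # assumption: cap to limit cascading actions

class AgentGuard:
    """Wraps an agent's tool calls with a boundary check,
    a cascading-action budget, and an audit trail."""

    def __init__(self, audit_log: list):
        self.audit_log = audit_log
        self.action_count = 0

    def execute(self, tool: str, args: dict, run_tool):
        self.action_count += 1
        entry = {"ts": time.time(), "tool": tool, "args": args, "allowed": False}
        if self.action_count > MAX_ACTIONS_PER_TASK:
            self.audit_log.append(entry)
            raise RuntimeError("action budget exhausted; escalate to a human")
        if tool not in ALLOWED_TOOLS:
            self.audit_log.append(entry)
            raise PermissionError(f"tool '{tool}' is outside the agent's boundary")
        entry["allowed"] = True
        self.audit_log.append(entry)  # every call, allowed or not, is logged
        return run_tool(tool, args)
```

Blocked calls still land in the audit log, which is what makes post-incident review and the accountability provisions workable.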

AI Verify and ISAGO

AI Verify

Singapore's government-developed toolkit for testing AI systems:

  • What it does: Provides technical testing for AI fairness, transparency, robustness, and other governance attributes
  • Status: Open-source, actively maintained by the AI Verify Foundation (90+ member organizations)
  • Use case: Companies can use AI Verify to demonstrate alignment with the Model Framework
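As an illustration of the kind of fairness metric such testing computes (this is a generic sketch, not the AI Verify API), a demographic parity difference compares favourable-outcome rates across groups:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest favourable-outcome rates.
    outcomes: list of 0/1 model decisions; groups: parallel group labels."""
    rates = {}
    for g in set(groups):
        picks = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return max(rates.values()) - min(rates.values())
```

A result near 0 means similar approval rates across groups; a result near 1 signals a large disparity worth investigating.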

Project Moonshot

The world's first open-source LLM evaluation toolkit. Moonshot brings together benchmarking, red-teaming, and baseline testing so teams can evaluate LLM applications before and after deployment.

ISAGO (Implementation and Self-Assessment Guide for Organizations)

A companion tool that helps organizations evaluate their AI governance maturity:

  • Self-assessment questionnaire
  • Gap analysis against the Model Framework
  • ISAGO 2.0 (released 2025) integrates with AI Verify for technical testing
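A gap analysis of this kind can be sketched as scoring each of the 2020 framework's four key areas against a target maturity level. The 0-3 scale and target threshold below are assumptions for illustration, not ISAGO's actual scoring scheme:

```python
# The four key areas of the 2020 Model Framework.
FOUR_KEY_AREAS = [
    "internal governance",
    "human involvement",
    "operations management",
    "stakeholder communication",
]

def gap_analysis(scores: dict, target: int = 2) -> list:
    """Return the areas whose self-assessed maturity (0-3, assumed scale)
    falls below the target level; unscored areas count as 0."""
    return [area for area in FOUR_KEY_AREAS if scores.get(area, 0) < target]
```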

How to Implement

Phase 1: Foundation

  1. Assign AI governance ownership (could be existing risk/compliance function)
  2. Complete the ISAGO self-assessment to understand your current maturity
  3. Define your risk-based approach to AI governance

Phase 2: Traditional AI Governance

  1. Implement the four key areas from the original framework
  2. Conduct AI Verify testing for your most important AI systems
  3. Establish monitoring and incident management processes

Phase 3: GenAI Governance

  1. Map your GenAI usage against the nine focus areas
  2. Implement content provenance mechanisms
  3. Conduct red-teaming using Project Moonshot
  4. Establish GenAI-specific use policies and training

Phase 4: Agentic AI Governance (if applicable)

  1. Identify any autonomous AI agents in your systems
  2. Define safety boundaries and human override mechanisms
  3. Implement logging and audit trails for agent actions
  4. Establish monitoring for cascading action risks
Related Frameworks

  • Singapore PDPA: Mandatory data protection requirements that underpin all AI governance
  • Singapore MAS AI Guidelines: Sector-specific mandatory framework for financial institutions
  • ASEAN AI Governance Guide: Regional framework built on Singapore's approach
  • EU AI Act: Global reference with similar risk-based governance approach

Frequently Asked Questions

Is the Model AI Governance Framework mandatory?

No. All three editions (Traditional AI, GenAI, and Agentic AI) are voluntary. However, the PDPA is mandatory, and the MAS AI guidelines are effectively mandatory for financial institutions. The Model Framework represents best practice that regulators use as a reference when evaluating organizations' AI governance.

What is AI Verify, and do I have to use it?

AI Verify is a government-developed, open-source toolkit for testing AI systems against governance attributes like fairness, transparency, and robustness. Using AI Verify is not mandatory, but it provides a practical way to demonstrate alignment with the Model Framework and can strengthen your position with regulators and customers.

Does my AI system fall under the Agentic AI framework?

The Agentic AI framework (January 2026) covers AI systems that can autonomously reason, plan, and take actions without human intervention at each step. A standard chatbot that only responds to user queries is not an agentic AI system. However, if your AI agent can autonomously browse the web, execute code, make API calls, or take actions on behalf of users, it likely falls under this framework.

How does Singapore's approach compare with the EU AI Act?

Singapore's approach is voluntary and principles-based, while the EU AI Act is mandatory and prescriptive. Singapore focuses on industry collaboration and self-governance, with mandatory requirements only in specific sectors (financial services via MAS). The EU takes a risk-based but legally binding approach across all sectors. Many companies use Singapore's framework as a practical governance tool to complement EU AI Act compliance.

References

  1. Model AI Governance Framework (Second Edition). Singapore IMDA (2020).
  2. Model AI Governance Framework for Generative AI. Singapore IMDA and AI Verify Foundation (2024).
  3. Singapore Launches New Model AI Governance Framework for Agentic AI. Singapore IMDA (2026).

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.
