Overview: Three Generations of AI Governance
Singapore has pioneered AI governance in Asia through its Model AI Governance Framework, developed by the Infocomm Media Development Authority (IMDA). The framework has evolved through three generations:
- Model AI Governance Framework (2020): Covers traditional AI systems — recommendation engines, classification models, predictive analytics
- Model AI Governance Framework for Generative AI (2024): Addresses risks specific to LLMs, image generators, and other generative models
- Model AI Governance Framework for Agentic AI (January 2026): The world's first governance framework for autonomous AI agents
All three are voluntary, but they represent the gold standard for AI governance in Southeast Asia and are referenced by regulators across the region.
The Original Framework (2020)
Two Guiding Principles
1. Organizations using AI should ensure that AI decision-making is explainable, transparent, and fair.
2. AI solutions should be human-centric.
Four Key Areas
Internal governance structures and measures:
- Clear roles and responsibilities for AI governance
- Risk management framework appropriate to the organization's AI maturity
- Regular review and update of AI governance practices
Determining the level of human involvement in AI-augmented decision-making:
- Risk-based approach to human oversight
- Higher-risk decisions require more human involvement (one possible risk tiering is sketched after this list)
- Clear escalation procedures
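The framework leaves the exact tiering to each organization, but it frames human involvement around the probability and severity of harm. Below is a minimal sketch of one way to operationalize that mapping; the tier names, scoring scale, and thresholds are illustrative assumptions, not part of the framework.

```python
# Illustrative sketch of risk-based human oversight routing.
# Tiers and thresholds are hypothetical; the Model Framework leaves
# the exact mapping to each organization's risk appetite.
from dataclasses import dataclass
from enum import Enum


class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "human approves every decision"
    HUMAN_OVER_THE_LOOP = "human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "fully automated, audited after the fact"


@dataclass
class Decision:
    description: str
    severity_of_harm: int     # 1 (negligible) to 5 (severe), assessed by the org
    probability_of_harm: int  # 1 (rare) to 5 (frequent)


def oversight_mode(decision: Decision) -> OversightMode:
    """Map a decision's risk score to a human-oversight tier."""
    risk = decision.severity_of_harm * decision.probability_of_harm
    if risk >= 15:   # high risk: a human approves each outcome
        return OversightMode.HUMAN_IN_THE_LOOP
    if risk >= 6:    # medium risk: a human monitors with override rights
        return OversightMode.HUMAN_OVER_THE_LOOP
    return OversightMode.HUMAN_OUT_OF_THE_LOOP


print(oversight_mode(Decision("loan approval", severity_of_harm=4, probability_of_harm=4)))
```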
Operations management:
- Data management practices
- Model development and testing processes
- Monitoring and performance tracking
- Incident management
Stakeholder interaction and communication:
- Transparency about AI use with customers and affected parties
- Channels for feedback and complaints
- Regular disclosure of AI governance practices
The GenAI Framework (2024)
Developed in collaboration with 70+ organizations, this framework addresses risks specific to generative AI:
Nine Focus Areas
- Accountability: Defining who is responsible for GenAI outputs across the development-deployment chain
- Data: Ensuring training data quality, addressing copyright, and managing data privacy
- Trusted development and deployment: Safety testing, red-teaming, and staged rollouts
- Incident reporting and management: Detecting, reporting, and responding to GenAI incidents
- Testing and assurance: Evaluating model performance, safety, and reliability
- Content provenance: Mechanisms to identify and label AI-generated content (watermarking, metadata); a minimal metadata sketch follows this list
- Safety and alignment: Ensuring GenAI behaves as intended and aligns with human values
- Cybersecurity: Protecting against adversarial attacks, prompt injection, and data poisoning
- Human oversight of GenAI: Maintaining meaningful human control
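The framework names watermarking and metadata as provenance mechanisms without prescribing a format. Below is a minimal sketch of the metadata approach: a signed sidecar record for a generated asset. The field names and HMAC scheme are illustrative assumptions; production systems would more likely adopt a standard such as C2PA manifests or model-level watermarking.

```python
# Minimal sketch of metadata-based content provenance: a signed
# sidecar record attached to a generated asset. Field names and the
# HMAC scheme are illustrative, not a standard.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key


def provenance_record(content: bytes, model_id: str) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_record(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and was signed by us."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())


rec = provenance_record(b"generated image bytes", model_id="example-model-v1")
assert verify_record(b"generated image bytes", rec)
```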
Practical Implementation
The framework recognizes that GenAI governance involves multiple stakeholders:
- Model developers: Responsible for safety testing, documentation, and model-level safeguards
- Application developers: Responsible for appropriate use, additional safeguards, and user-facing governance
- Deployers: Responsible for organizational governance, user training, and monitoring
The Agentic AI Framework (January 2026)
Unveiled at the World Economic Forum in Davos on 22 January 2026, this is the world's first governance framework specifically for agentic AI — AI systems that can autonomously reason, plan, and take actions.
Key Concepts
What is Agentic AI? AI systems that can:
- Set and pursue goals autonomously
- Make decisions and take actions without human intervention at each step
- Interact with external tools, systems, and other AI agents
- Adapt their behavior based on outcomes (a minimal loop sketch follows this list)
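To make the definition concrete, here is a minimal sketch of the plan-act-adapt loop that separates agentic systems from single-turn models. The `plan_next_step` and `execute` functions are stand-ins for an LLM planner and real tool integrations, introduced purely for illustration.

```python
# Minimal sketch of an agentic loop: the system repeatedly plans,
# acts via external tools, and adapts to observed outcomes without
# a human approving each step.
from typing import Optional


def plan_next_step(goal: str, history: list[str]) -> Optional[str]:
    """Stand-in for an LLM planner: next action, or None when the goal is met."""
    steps = ["search_flights", "compare_prices", "book_cheapest"]
    done = len(history)
    return steps[done] if done < len(steps) else None


def execute(action: str) -> str:
    """Stand-in for a real tool call (API request, code execution, ...)."""
    return f"result of {action}"


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):      # hard step budget: one basic safety limit
        action = plan_next_step(goal, history)
        if action is None:          # planner decided the goal is met
            break
        history.append(execute(action))
    return history


print(run_agent("book the cheapest flight to Davos"))
```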
Governance Challenges Unique to Agentic AI
- Cascading actions: Autonomous agents can trigger chains of actions, making it harder to predict and control outcomes
- Multi-agent coordination: When multiple AI agents interact, governance becomes more complex
- Accountability gaps: When an agent acts autonomously, who is responsible for the outcome?
- Safety boundaries: How to define and enforce limits on autonomous agent behavior
Framework Provisions
- Clear human accountability for autonomous agent actions
- Defined boundaries and safety limits for agent behavior
- Monitoring and intervention mechanisms (a combined sketch of these provisions follows this list)
- Logging and audit trails for agent decisions and actions
- Incident management for autonomous agent failures
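The framework states these provisions without mandating an implementation. A minimal sketch of how boundaries, audit logging, and human intervention might compose around a single agent action is shown below; the allow-list, escalation set, and log format are all illustrative assumptions.

```python
# Illustrative guard around agent actions: enforce an allow-list
# boundary, write an audit trail, and escalate high-impact actions
# to a human. All names and rules here are hypothetical, not
# prescribed by the framework.
import json
import time

ALLOWED_ACTIONS = {"read_database", "draft_email"}      # assumed safety boundary
HIGH_IMPACT_ACTIONS = {"send_email", "transfer_funds"}  # assumed escalation set


def audit(entry: dict) -> None:
    """Append-only audit trail for agent decisions and actions."""
    entry["ts"] = time.time()
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")


def guarded_execute(agent_id: str, action: str, execute) -> str:
    if action in HIGH_IMPACT_ACTIONS:
        audit({"agent": agent_id, "action": action, "outcome": "escalated"})
        return "held for human approval"  # human intervention point
    if action not in ALLOWED_ACTIONS:
        audit({"agent": agent_id, "action": action, "outcome": "blocked"})
        raise PermissionError(f"{action} is outside this agent's boundary")
    result = execute(action)
    audit({"agent": agent_id, "action": action, "outcome": "executed"})
    return result


print(guarded_execute("agent-1", "draft_email", lambda a: f"{a} done"))
```

A useful property of this shape is that every path through the guard, including blocked and escalated actions, leaves an audit record, which supports the logging and incident-management provisions.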
AI Verify and ISAGO
AI Verify
Singapore's government-developed toolkit for testing AI systems:
- What it does: Provides technical testing for AI fairness, transparency, robustness, and other governance attributes
- Status: Open-source, actively maintained by the AI Verify Foundation (90+ member organizations)
- Use case: Companies can use AI Verify to demonstrate alignment with the Model Framework (an illustration of one such fairness test follows)
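For real assessments, use AI Verify's own test suites. Purely to illustrate the kind of check such toolkits automate, here is a standalone computation of one common fairness metric, the demographic parity difference; this is not the AI Verify API, and the tolerance is an arbitrary assumption.

```python
# Standalone illustration of one fairness metric that AI Verify-style
# testing covers (demographic parity difference). This is NOT the
# AI Verify API; it only shows the kind of check such toolkits run.
def demographic_parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Max difference in positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())


preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
assert gap <= 0.6, "gap exceeds the (illustrative) tolerance"  # hypothetical threshold
```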
Project Moonshot
The world's first open-source LLM evaluation toolkit:
- Red-teaming capabilities
- Benchmark testing
- Baseline safety evaluations for large language models
ISAGO (Implementation and Self-Assessment Guide for Organizations)
A companion tool that helps organizations evaluate their AI governance maturity:
- Self-assessment questionnaire
- Gap analysis against the Model Framework
- ISAGO 2.0 (released 2025) integrates with AI Verify for technical testing
How to Implement
Phase 1: Foundation
- Assign AI governance ownership (could be existing risk/compliance function)
- Complete the ISAGO self-assessment to understand your current maturity
- Define your risk-based approach to AI governance
Phase 2: Traditional AI Governance
- Implement the four key areas from the original framework
- Conduct AI Verify testing for your most important AI systems
- Establish monitoring and incident management processes
Phase 3: GenAI Governance
- Map your GenAI usage against the nine focus areas
- Implement content provenance mechanisms
- Conduct red-teaming using Project Moonshot
- Establish GenAI-specific use policies and training
Phase 4: Agentic AI Governance (if applicable)
- Identify any autonomous AI agents in your systems
- Define safety boundaries and human override mechanisms
- Implement logging and audit trails for agent actions
- Establish monitoring for cascading action risks (a minimal rate/depth monitor is sketched below)
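Cascading-action risk is easiest to bound with simple depth and rate limits before more sophisticated monitoring is in place. The sketch below shows one such monitor; the class name and limit values are illustrative assumptions to be tuned against your own risk assessment.

```python
# Minimal sketch of cascading-action monitoring: cap how deep an
# action chain may grow and how fast actions may fire. Limits and
# naming are illustrative, not prescribed by the framework.
import time
from collections import deque


class CascadeMonitor:
    def __init__(self, max_chain_depth: int = 5, max_actions_per_minute: int = 30):
        self.max_chain_depth = max_chain_depth
        self.max_rate = max_actions_per_minute
        self.recent = deque()  # timestamps of recent actions

    def check(self, chain_depth: int) -> None:
        """Raise before an action that would exceed depth or rate limits."""
        now = time.time()
        while self.recent and now - self.recent[0] > 60:
            self.recent.popleft()
        if chain_depth > self.max_chain_depth:
            raise RuntimeError(f"action chain depth {chain_depth} exceeds limit")
        if len(self.recent) >= self.max_rate:
            raise RuntimeError("agent action rate limit hit; pausing for review")
        self.recent.append(now)


monitor = CascadeMonitor()
monitor.check(chain_depth=2)  # ok; a depth-6 call would raise
```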
Related Regulations
- Singapore PDPA: Mandatory data protection requirements that underpin all AI governance
- Singapore MAS AI Guidelines: Sector-specific framework for financial institutions, effectively mandatory in practice
- ASEAN AI Governance Guide: Regional framework built on Singapore's approach
- EU AI Act: Global reference with similar risk-based governance approach
Frequently Asked Questions
Is the Model AI Governance Framework mandatory?
No. All three editions (Traditional AI, GenAI, and Agentic AI) are voluntary. However, the PDPA is mandatory and the MAS AI guidelines are effectively mandatory for financial institutions. The Model Framework represents best practice that regulators use as a reference when evaluating organizations' AI governance.
What is AI Verify, and do we have to use it?
AI Verify is a government-developed, open-source toolkit for testing AI systems against governance attributes like fairness, transparency, and robustness. It is not mandatory to use AI Verify, but it provides a practical way to demonstrate alignment with the Model Framework and can strengthen your position with regulators and customers.
Does the Agentic AI framework apply to our chatbot?
The Agentic AI framework (January 2026) covers AI systems that can autonomously reason, plan, and take actions without human intervention at each step. A standard chatbot that only responds to user queries is not an agentic AI system. However, if your AI agent can autonomously browse the web, execute code, make API calls, or take actions on behalf of users, it likely falls under this framework.
How does Singapore's approach differ from the EU AI Act?
Singapore's approach is voluntary and principles-based, while the EU AI Act is mandatory and prescriptive. Singapore focuses on industry collaboration and self-governance, with mandatory requirements only in specific sectors (financial services via MAS). The EU takes a risk-based but legally binding approach across all sectors. Many companies use Singapore's framework as a practical governance tool to complement EU AI Act compliance.
References
- Model AI Governance Framework (Second Edition). Singapore IMDA (2020)
- Model AI Governance Framework for Generative AI. Singapore IMDA and AI Verify Foundation (2024)
- Singapore Launches New Model AI Governance Framework for Agentic AI. Singapore IMDA (2026)
