Executive Summary: Singapore has pioneered a principles-based, voluntary approach to AI governance through its Model AI Governance Framework, positioning itself as ASEAN's AI hub while avoiding prescriptive regulation. First released in 2019 and updated in 2020, the framework provides practical guidance on transparency, fairness, accountability, and human oversight without imposing mandatory requirements. Unlike the EU's hard-law approach or China's state-controlled model, Singapore emphasizes industry self-regulation, innovation enablement, and trusted adoption. Organizations deploying AI in Singapore benefit from regulatory clarity, government support programs (AI Verify, regulatory sandboxes), and alignment with international standards. They must still comply with sector-specific regulations, including the Personal Data Protection Act, MAS requirements for financial institutions, and the Healthcare Services Act, and demonstrate responsible AI practices to maintain trust and market access.
Understanding Singapore's AI Governance Model
Philosophy: Light Touch, High Trust
Singapore's approach contrasts sharply with other major jurisdictions:
Not Prescriptive Regulation:
- No mandatory AI registration or pre-approval (unlike China)
- No risk-based classification with specific obligations (unlike EU AI Act)
- No sector-neutral comprehensive AI law (unlike proposed US legislation)
Instead:
- Voluntary framework providing practical implementation guidance
- Sector regulators address AI within existing mandates
- Focus on enabling responsible innovation while building trust
- Government as facilitator, not enforcer
Key Components
1. Model AI Governance Framework (2nd Edition, 2020)
- Core principles and implementation guide
- Internal governance structures and processes
- Operations management practices
- Stakeholder interaction approaches
2. AI Verify (2022) - Testing Framework and Toolkit
- Open-source tool for testing AI systems
- Standardized testing for transparency and fairness
- Objective metrics and benchmarks
- Voluntary assurance pathway
3. Sector-Specific Guidance
- Financial Services: FEAT Principles (MAS, 2018)
- Healthcare: AI in Healthcare Guidelines (MOH, 2021)
- Public Sector: AI Governance Framework for Singapore Public Sector (2020)
4. Innovation Support
- Regulatory sandboxes (financial services, healthcare, transportation)
- Government grants and incentives (AI Singapore, Enterprise Singapore)
- Research collaborations and testbeds
Model AI Governance Framework: Core Principles
Principle 1: Transparency
What It Means:
- Disclose use of AI in decision-making
- Explain AI system's role and limitations
- Provide information about data used and model logic
- Communicate when and how AI influences outcomes
Internal Transparency:
- Document AI system design, data sources, algorithms, and assumptions
- Maintain model cards or system documentation (see the sketch after this list)
- Log decision-making processes and rationale
- Enable internal auditability
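Where a formal model-card tool is not yet in place, even a structured record kept alongside the model improves internal auditability. Below is a minimal sketch; the fields and values are illustrative assumptions, not framework requirements.

```python
# Minimal internal model card as structured data (illustrative fields,
# loosely following common model-card practice; not a framework mandate).
model_card = {
    "name": "customer-churn-predictor",
    "version": "2.3.0",
    "owner": "data-science@example.com",
    "intended_use": "Rank at-risk customers for retention outreach",
    "out_of_scope": ["credit decisions", "pricing"],
    "training_data": {"source": "CRM snapshots 2021-2023", "rows": 1_200_000},
    "algorithm": "gradient-boosted trees",
    "key_assumptions": ["churn label = 90 days of inactivity"],
    "fairness_tests": {"demographic_parity_diff": 0.03, "last_run": "2024-11-02"},
    "limitations": "Not validated for customers with <3 months of history",
    "human_oversight": "Outreach list reviewed weekly by retention team lead",
}
```

Keeping the record versioned with the model (for example, in the same repository) makes it straightforward to retrieve during audits or investigations.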
External Transparency:
- Inform users when interacting with AI systems
- Provide accessible explanations of AI recommendations
- Disclose AI's role in consequential decisions (credit, employment, services)
- Balance transparency with trade secret protection
Risk-Proportionate Approach:
- Higher-risk decisions require greater transparency
- Simple automation may need minimal disclosure
- Consumer-facing AI needs user-understandable explanations
- B2B AI may require technical documentation
Principle 2: Fairness
What It Means:
- AI systems should not discriminate unfairly
- Outcomes should be equitable across different groups
- Address bias in data and algorithms
- Ensure fair treatment aligned with legal and social norms
Pre-Deployment:
- Identify potential fairness concerns for your use case
- Define fairness metrics appropriate to context (e.g., demographic parity, equalized odds, individual fairness; see the sketch after this list)
- Test for disparate impact across protected groups
- Use diverse, representative training data where feasible
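The group fairness metrics named above can be computed directly from model outputs. The sketch below assumes binary predictions and a binary protected attribute; the function names are illustrative, and the "four-fifths" disparate-impact rule of thumb is an industry convention, not a framework mandate.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive rates; values below ~0.8 are a common red flag."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Max gap in true-positive and false-positive rates across groups."""
    gaps = []
    for label in (0, 1):  # FPR when label == 0, TPR when label == 1
        mask = y_true == label
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Example: synthetic predictions for six applicants in two groups
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.333...
print(disparate_impact_ratio(y_pred, group))         # 0.5
print(equalized_odds_gap(y_true, y_pred, group))     # 1.0
```

Which metric is appropriate depends on context: demographic parity compares outcomes regardless of ground truth, while equalized odds conditions on the true label, so the two can disagree for the same model.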
During Deployment:
- Monitor for emerging bias patterns
- Track outcomes across demographic groups
- Establish bias thresholds and alerts
- Implement fairness-aware algorithms where appropriate
Remediation:
- Investigate fairness violations promptly
- Adjust models or decision processes to reduce bias
- Provide recourse mechanisms for affected individuals
- Document fairness interventions and effectiveness
Principle 3: Ethics
What It Means:
- AI development and deployment should respect human values
- Consider societal impacts beyond legal compliance
- Uphold human dignity, autonomy, and rights
- Align with organizational values and public interest
Governance:
- Establish ethics committees or AI governance boards
- Include diverse perspectives (legal, technical, business, external)
- Create ethical review processes for high-risk AI
- Develop organizational AI ethics principles
Risk Assessment:
- Identify ethical risks beyond legal/regulatory requirements
- Consider impacts on vulnerable populations
- Assess potential for misuse or unintended consequences
- Evaluate alignment with social norms and expectations
Decision Framework:
- Decide when to deploy AI vs. human-only decision-making
- Balance competing interests (efficiency vs. fairness)
- Manage trade-offs between accuracy and explainability
- Determine acceptable risk levels and escalation triggers
Principle 4: Human Agency and Oversight
What It Means:
- Humans retain ultimate control and accountability
- AI augments rather than replaces human judgment for consequential decisions
- Meaningful human oversight throughout the AI lifecycle
- Ability to override AI recommendations when necessary
Human-in-the-Loop:
- Identify decisions requiring human involvement
- Design interfaces that support human oversight
- Provide context and explanations to human reviewers
- Enable human override capabilities and document use
Meaningful Oversight:
- Humans must have genuine ability to understand and question AI
- Avoid "automation bias" where humans rubber-stamp AI decisions
- Train humans to critically evaluate AI recommendations
- Monitor human override patterns and feedback
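One way to make "monitor human override patterns" concrete is to log every human review against the AI recommendation and track the override rate over time. A minimal sketch with a hypothetical record schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """One human review of an AI recommendation (illustrative schema)."""
    case_id: str
    ai_recommendation: str
    human_decision: str
    reviewer: str
    rationale: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.ai_recommendation != self.human_decision

def override_rate(records: list[ReviewRecord]) -> float:
    """Share of cases where the reviewer departed from the AI output."""
    if not records:
        return 0.0
    return sum(r.overridden for r in records) / len(records)

records = [
    ReviewRecord("A-001", "approve", "approve", "j.tan"),
    ReviewRecord("A-002", "reject", "approve", "j.tan", "income verified manually"),
]
print(f"Override rate: {override_rate(records):.0%}")  # 50%
```

An override rate near zero can signal rubber-stamping (automation bias), while a very high rate suggests the model or the way its recommendations are presented needs rework.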
Accountability Structure:
- Designate responsible individuals for AI systems
- Define clear escalation paths for issues or concerns
- Grant authority to pause or shut down problematic AI
- Assign responsibility for outcomes and decisions
Principle 5: Accountability
What It Means:
- Organizations are responsible for AI system outcomes
- Clear lines of accountability for AI decisions
- Mechanisms to address harms and provide remedies
- Robust governance structures and documentation
Governance Structure:
- Board-level oversight of AI strategy and risks
- Designated AI governance roles (e.g., Chief AI Officer, AI Ethics Committee)
- Cross-functional AI review processes
- Integration with enterprise risk management and model risk management
Documentation:
- Maintain comprehensive records of AI systems
- Document design choices, testing, and validation
- Log significant decisions and changes
- Retain evidence for audits and investigations
Remediation and Redress:
- Establish processes for addressing AI-caused harms
- Provide accessible complaint mechanisms
- Investigate incidents thoroughly
- Offer appropriate remedies (corrections, compensation, appeals)
AI Verify: Testing and Validation
What Is AI Verify?
AI Verify is Singapore's testing framework and software toolkit for AI governance, launched in 2022 and subsequently open-sourced under the AI Verify Foundation.
Purpose:
- Provide objective, standardized testing for AI governance principles
- Generate verifiable results for transparency and trust
- Support internal governance and external assurance
- Align with international standards (ISO/IEC, OECD)
How It Works
1. Technical Testing:
- Automated tests on AI models and datasets
- Tests for fairness metrics (e.g., disparate impact, demographic parity)
- Explainability assessments (feature importance, SHAP values); see the sketch after this list
- Robustness and safety checks
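As a dependency-light stand-in for the explainability assessments noted above, scikit-learn's permutation importance gives a global view of which features drive a model's accuracy. This is an independent sketch of the technique, not the AI Verify toolkit itself.

```python
# Global feature-importance check, in the spirit of AI Verify's
# explainability tests, using scikit-learn's permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the accuracy drop it causes
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```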
2. Process Assessment:
- Evaluation of governance processes
- Internal documentation review
- Accountability structure assessment
- Human oversight verification
3. Reporting:
- Standardized AI Verify report
- Test results with objective metrics
- Process maturity scores
- Identified areas for improvement
Current Version:
- Open-sourced through the AI Verify Foundation (code on GitHub)
- Supports common ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Initially focused on tabular data use cases
- Expanding coverage to computer vision and NLP
Using AI Verify
Step 1: Preparation
- Identify AI system for testing
- Gather required inputs (model, training/test data, metadata)
- Define fairness and performance metrics
- Select relevant test suites
Step 2: Technical Testing
- Run automated tests on the model
- Evaluate fairness across protected features
- Assess model transparency and explainability
- Test robustness to adversarial or perturbed inputs
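A simple robustness probe for tabular models is to perturb inputs with small amounts of noise and measure how many predictions flip. This sketch illustrates the idea; it is not the AI Verify robustness suite, and the noise scales are arbitrary examples.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

rng = np.random.default_rng(0)
baseline = model.predict(X)
for scale in (0.01, 0.05, 0.10):  # noise as a fraction of each feature's std
    noisy = X + rng.normal(0, scale * X.std(axis=0), size=X.shape)
    flip_rate = (model.predict(noisy) != baseline).mean()
    print(f"noise {scale:.0%} of std -> {flip_rate:.1%} predictions flipped")
```

A model whose predictions flip frequently under tiny perturbations is fragile; a rising flip rate across releases is a useful regression signal.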
Step 3: Process Documentation
- Complete governance questionnaires
- Document oversight structures
- Describe data governance practices
- Explain human review processes
Step 4: Report Generation
- Generate standardized AI Verify report
- Capture quantitative test results
- Summarize process maturity assessment
- Highlight gaps and recommendations
Step 5: Remediation
- Address identified gaps or concerns
- Implement improvements to model or processes
- Retest to validate improvements
- Update documentation and governance artifacts
Benefits:
- Objective evidence of responsible AI practices
- Early identification of issues before deployment or harm
- Support for regulatory engagement (demonstrate due diligence)
- Builds trust with customers and stakeholders
- Alignment with international AI governance standards
Sector-Specific Requirements
Financial Services
The Monetary Authority of Singapore (MAS) applies AI governance principles to regulated financial institutions.
Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of AI and Data Analytics (2018):
Fairness:
- Detect and mitigate discriminatory outcomes
- Focus on credit, insurance, and investment advice
- Regular testing for bias and disparate impact
- Support fair treatment obligations under financial regulations
Ethics:
- Consider societal impacts of AI applications
- Respect customer interests and privacy
- Avoid deceptive, manipulative, or predatory AI use
Accountability:
- Board and senior management oversight
- Clear accountability for AI-related risks
- Integration with risk management frameworks
- Reporting to MAS on material AI incidents where relevant
Transparency:
- Disclosure of AI use in customer interactions
- Explanations of AI-driven decisions (e.g., credit denials, investment recommendations)
- Balance transparency with proprietary information
Practical Requirements:
- Document AI governance framework and policies
- Conduct regular model validation and testing
- Maintain model risk management standards
- Provide board reporting on AI risks and performance
- Maintain customer complaint and escalation processes for AI decisions
Healthcare
The Ministry of Health (MOH) provides guidance for AI in healthcare settings.
AI in Healthcare Guidelines (2021):
Clinical Validation:
- AI medical devices may require approval from the Health Sciences Authority (HSA)
- Clinical evidence of safety and effectiveness
- Validation on local population (Singaporean demographics) where appropriate
- Ongoing performance monitoring and post-market surveillance
Accountability:
- Healthcare provider remains accountable for care decisions
- AI as clinical decision support, not replacement
- Clear delineation of AI's role in workflows
- Human clinician review of AI recommendations
Transparency:
- Inform patients of AI use in diagnosis or treatment
- Explain AI's role in clinical decision-making
- Obtain patient consent where appropriate
- Disclose AI use in medical records when material
Data Governance:
- Compliance with Human Biomedical Research Act (HBRA) where applicable
- Strong patient privacy protections
- Secure handling of health data
- Data minimization and purpose limitation
Practical Requirements:
- Risk assessment for clinical AI applications
- Validation studies with local patient data
- Ongoing monitoring of AI performance and safety
- Incident reporting for AI-related adverse events
- Training for clinicians using AI tools
Personal Data Protection Act (PDPA)
PDPA applies to AI systems that process personal data.
Key PDPA Obligations for AI:
Consent and Purpose Limitation:
- Obtain consent for collection, use, and disclosure of personal data
- Use data only for purposes notified to individuals
- Treat AI training and inference as "use" requiring a valid basis
Accuracy:
- Take reasonable efforts to ensure personal data is accurate and complete
- Critical for AI systems making decisions based on personal data
- Provide correction mechanisms for inaccurate data
Protection:
- Implement security safeguards proportionate to sensitivity and harm potential
- Protect against unauthorized access, disclosure, or modification
- Consider AI model security (e.g., model theft, adversarial attacks)
Retention:
- Retain personal data only as long as necessary
- Apply to training data and operational data
- Secure disposal when no longer needed
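Retention limits become mechanically enforceable once each dataset carries a recorded purpose and retention period. A minimal sketch of such a check, with hypothetical catalog fields:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DatasetRecord:
    """Catalog entry for a dataset containing personal data (illustrative)."""
    name: str
    purpose: str
    collected_on: date
    retention_days: int

    def is_expired(self, today: date) -> bool:
        return today > self.collected_on + timedelta(days=self.retention_days)

catalog = [
    DatasetRecord("loan_training_v3", "credit scoring model training", date(2022, 1, 15), 730),
    DatasetRecord("chat_logs_2024", "customer support quality review", date(2024, 6, 1), 365),
]
today = date(2025, 7, 1)
for ds in catalog:
    if ds.is_expired(today):
        # In practice this would trigger a secure-disposal workflow
        print(f"DISPOSE: {ds.name} (purpose: {ds.purpose})")
```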
Automated Decision-Making:
- PDPA does not prohibit automated decisions
- Transparency, fairness, and accountability principles still apply
- Individuals may challenge inaccurate data affecting AI decisions
- Organizations should provide meaningful recourse and explanations
Practical Implementation Guide
Phase 1: Governance Foundation (Months 1–2)
Establish AI Governance Structure:
- Designate AI governance lead or Chief AI Officer
- Form cross-functional AI governance committee
- Define roles and responsibilities
- Integrate with existing risk, compliance, and IT governance
Develop AI Governance Framework:
- Adopt Model AI Governance Framework principles
- Tailor to organizational context and risk appetite
- Create AI governance policy and standards
- Obtain board/leadership endorsement
AI System Inventory:
- Identify all AI systems in use or development
- Classify by risk level (high/medium/low impact; see the triage sketch after this list)
- Map to business functions and accountable owners
- Prioritize high-risk systems for enhanced governance
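The risk classification can be encoded as a simple triage rule over inventory records, as sketched below. The criteria and categories are illustrative assumptions, not prescribed by the framework.

```python
# Illustrative triage rule for classifying inventoried AI systems by risk.
def classify_risk(system: dict) -> str:
    high_impact_domains = {"credit", "employment", "healthcare", "essential services"}
    if system["domain"] in high_impact_domains or system["uses_sensitive_data"]:
        return "high"
    if system["customer_facing"] or system["fully_automated"]:
        return "medium"
    return "low"

inventory = [
    {"name": "loan-approval-model", "domain": "credit", "owner": "retail-banking",
     "uses_sensitive_data": True, "customer_facing": True, "fully_automated": False},
    {"name": "invoice-ocr", "domain": "back-office", "owner": "finance-ops",
     "uses_sensitive_data": False, "customer_facing": False, "fully_automated": True},
]
for system in inventory:
    print(system["name"], "->", classify_risk(system))
# loan-approval-model -> high
# invoice-ocr -> medium
```

Encoding the rule makes classifications reproducible and reviewable, and the inventory itself becomes an input to the governance committee's prioritization.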
Phase 2: Operationalize Principles (Months 3–6)
Transparency:
- Create standard disclosures for AI use (customer-facing)
- Develop model documentation templates (internal)
- Implement user notification mechanisms
- Train staff on transparency requirements
Fairness:
- Define fairness metrics for key use cases
- Implement bias testing procedures
- Establish fairness thresholds and monitoring cadence
- Create remediation protocols for bias issues
Human Oversight:
- Identify decisions requiring human review
- Design human-in-the-loop workflows and controls
- Develop override processes and documentation
- Train human reviewers on AI capabilities and limitations
Accountability:
- Assign accountable owners for each AI system
- Document decision-making processes and approvals
- Establish incident response procedures for AI issues
- Create complaint and redress mechanisms
Phase 3: Testing and Validation (Months 4–8)
AI Verify Implementation:
- Select high-risk systems for AI Verify testing
- Prepare required inputs (models, data, documentation)
- Run AI Verify test suites
- Generate baseline reports
Gap Remediation:
- Address issues identified in testing
- Improve model fairness, transparency, or robustness
- Enhance governance processes and documentation
- Retest to validate improvements
Sector-Specific Compliance:
- Map AI systems to sector regulations (MAS, MOH, PDPA, others as relevant)
- Conduct sector-specific risk assessments
- Implement additional controls as needed
- Engage with regulators if uncertainty exists
Phase 4: Ongoing Operations (Continuous)
Monitoring:
- Track AI system performance metrics
- Monitor for fairness and bias patterns
- Detect anomalies and model drift (see the PSI sketch after this list)
- Log human overrides and escalations
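A common drift signal for tabular features is the Population Stability Index (PSI), which compares a feature's live distribution against its training-time baseline. A minimal sketch; the 0.2 alert threshold is an industry convention, not a framework requirement.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline feature distribution and live data.
    Rule of thumb: <0.1 stable, 0.1-0.2 moderate shift, >0.2 investigate."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    current = np.clip(current, edges[0], edges[-1])  # keep live values in range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log(0) in sparse bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(600, 50, 10_000)  # e.g., credit scores at training time
live_scores = rng.normal(585, 60, 2_000)       # live traffic has shifted
psi = population_stability_index(training_scores, live_scores)
print(f"PSI = {psi:.3f}")  # compare against an alert threshold, e.g. 0.2
```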
Governance:
- Hold quarterly AI governance committee meetings
- Provide annual board reporting on AI risks and performance
- Review AI inventory and risk classifications regularly
- Update policies based on lessons learned and regulatory changes
Training and Culture:
- Provide AI ethics and governance training for staff
- Run responsible AI awareness programs
- Encourage raising concerns and questions
- Recognize exemplary AI governance practices
External Engagement:
- Participate in industry working groups
- Engage with Singapore regulators proactively
- Share AI Verify results with stakeholders where appropriate
- Contribute to AI governance standards development
Benefits of Singapore's Approach
For Organizations
Regulatory Clarity:
- Clear expectations without prescriptive rules
- Flexibility to implement controls based on context and risk
- Predictable regulatory environment
- Innovation-friendly approach
Competitive Advantage:
- Early adoption signals responsible practices
- AI Verify results build trust with customers and partners
- Differentiation in local and regional markets
- Readiness for future regulations in other jurisdictions
Risk Management:
- Proactive identification and mitigation of AI risks
- Reduced likelihood of harms and reputational damage
- Evidence of due diligence for liability purposes
- Alignment with international standards and best practices
Government Support:
- Access to sandboxes for testing innovative AI
- Grants and incentives for responsible AI development
- Collaboration opportunities with research institutions
- Recognition and promotion by Singapore government
For Singapore
AI Hub Positioning:
- Attracts AI companies and talent to Singapore
- Balances innovation enablement with trust and safety
- Positions Singapore as a leader in AI governance globally
- Provides a model for other ASEAN countries
International Alignment:
- Singapore framework influences ASEAN AI governance
- Bridges Eastern and Western approaches
- Compatible with EU, US, and OECD principles
- Facilitates cross-border AI deployment
Key Takeaways
- Singapore's Model AI Governance Framework is voluntary but influential, with strong adoption incentives from regulators, markets, and stakeholders.
- Five core principles—transparency, fairness, ethics, human agency and oversight, and accountability—provide a comprehensive foundation adaptable to any AI use case.
- AI Verify offers standardized, verifiable assessments of AI systems against governance principles, supporting both internal governance and external trust.
- Sector-specific rules in financial services, healthcare, and data protection impose binding requirements on top of the voluntary framework.
- Implementation is risk-proportionate: higher-risk AI systems require more rigorous governance, testing, and human oversight.
- Singapore positions itself as innovation-friendly through sandboxes, support programs, and a principles-based approach rather than prescriptive regulation.
- The framework aligns with international standards, facilitating global deployment for organizations operating across multiple jurisdictions.
Frequently Asked Questions
Is the Model AI Governance Framework legally binding?
No, the framework itself is voluntary guidance, not law. However, sector regulators may reference the framework in their expectations, and organizations failing to adopt responsible AI practices may face regulatory scrutiny under existing laws (such as PDPA and sector-specific regulations) or reputational consequences.
Do I need to use AI Verify?
AI Verify is voluntary and free to use. It is particularly valuable for high-risk AI systems, organizations seeking to demonstrate responsible practices, or those preparing for future regulations. Many organizations use it internally even without publishing results.
How does Singapore's approach compare to the EU AI Act?
Singapore uses voluntary, principles-based guidance, while the EU imposes mandatory risk-based requirements with significant penalties. Singapore emphasizes flexibility and innovation enablement; the EU prioritizes fundamental rights protection with prescriptive rules. Organizations operating in both jurisdictions typically align with the stricter EU requirements while leveraging Singapore's tools and guidance.
What are the consequences of not following the framework?
There are no direct legal penalties for not following the framework. However, consequences can include regulatory scrutiny under existing laws, reputational damage if AI harms occur, competitive disadvantage as customers prefer responsible AI, and potential civil liability for negligence if reasonable care standards are not met.
Does Singapore's framework apply to AI developed elsewhere?
The framework applies to AI deployed in Singapore regardless of where it was developed. Many organizations choose to apply Singapore's principles globally as a baseline, particularly if they also operate in other jurisdictions with similar principles-based approaches.
How does the PDPA intersect with AI governance?
PDPA is legally binding and applies when AI processes personal data. Key intersections include consent for AI training and inference, accuracy obligations for data used in AI decisions, security requirements for models and data, and providing recourse for individuals affected by AI-driven outcomes. AI governance frameworks should be designed to ensure PDPA compliance.
What support does Singapore offer for AI governance implementation?
Singapore provides AI Verify (a free testing framework), regulatory sandboxes for innovative AI, grants for AI development and governance, guidance documents and playbooks, industry workshops and training, and opportunities for direct engagement with regulators.
Which AI systems should be prioritized for AI Verify testing?
Prioritize high-impact systems: those affecting access to credit, employment, healthcare, or essential services, or those involving sensitive personal data. These systems face the highest regulatory and reputational risk and benefit most from standardized testing.
How do the MAS FEAT principles relate to the Model AI Governance Framework?
The FEAT principles are sector-specific expectations for financial institutions, focusing on Fairness, Ethics, Accountability, and Transparency. They are consistent with and complementary to the broader Model AI Governance Framework, which adds structure around human oversight and operationalization.
Is overseas clinical validation sufficient for healthcare AI deployed in Singapore?
Overseas validation can be a starting point, but MOH and HSA expect evidence that performance is appropriate for Singapore's context, including local demographics and clinical workflows. Local validation or bridging studies are often needed.
Does the PDPA prohibit training AI models on personal data?
No. The PDPA does not ban AI training, but it requires a valid basis (such as consent or an applicable exception), clear purpose limitation, and appropriate safeguards. Training and inference are considered "use" of personal data and must align with notified purposes and retention limits.
Singapore's Unique Approach to AI Governance
Singapore emphasizes voluntary, principles-based AI governance supported by government tools and sandboxes, rather than prescriptive, risk-tiered regulation. Organizations are encouraged—but not compelled—to adopt best practices, with sector regulators stepping in where higher risks justify binding rules.
"Although Singapore’s Model AI Governance Framework is voluntary, it increasingly functions as a de facto standard: boards, regulators, and customers expect material AI systems to be governed in line with its principles."
— Singapore AI Governance Framework: A Practical Guide
References
- Model AI Governance Framework (Second Edition). Personal Data Protection Commission & Infocomm Media Development Authority (2020).
- AI Verify Foundation. Infocomm Media Development Authority (2022).
- Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector. Monetary Authority of Singapore (2018).
- Artificial Intelligence in Healthcare Guidelines. Ministry of Health Singapore (2021).
- Personal Data Protection Act 2012 (2020 Revised Edition). Singapore Statutes Online (2020).
