Executive Summary: Singapore has pioneered a principles-based, voluntary approach to AI governance through its Model AI Governance Framework, positioning itself as ASEAN's AI hub while avoiding prescriptive regulation. First released in 2019 and updated in 2020, the framework provides practical guidance on transparency, fairness, accountability, and human oversight without imposing mandatory requirements. Unlike the EU's hard-law approach or China's state-controlled model, Singapore emphasizes industry self-regulation, innovation enablement, and trusted adoption. Organizations deploying AI in Singapore benefit from regulatory clarity, government support programs (AI Verify, sandbox programs), and alignment with international standards. They must still comply with sector-specific regulations (the Personal Data Protection Act, the Financial Services and Markets Act, and the Healthcare Services Act) and demonstrate responsible AI practices to maintain trust and market access.
Understanding Singapore's AI Governance Model
Philosophy: Light Touch, High Trust
Singapore's approach contrasts sharply with other major jurisdictions. Unlike China, Singapore imposes no mandatory AI registration or pre-approval requirements. Unlike the EU AI Act, it establishes no risk-based classification system carrying specific legal obligations. And unlike proposed US legislation, it avoids creating a sector-neutral comprehensive AI law altogether.
What Singapore offers instead is a voluntary framework providing practical implementation guidance, where sector regulators address AI within their existing mandates. The government positions itself as a facilitator rather than an enforcer, focusing on enabling responsible innovation while building public trust. This philosophy reflects a deliberate bet that collaborative governance, rather than top-down regulation, will produce better outcomes for both industry and society.
Key Components
The governance ecosystem rests on four interlocking pillars. The first and most foundational is the Model AI Governance Framework (2nd Edition, 2020), which sets out core principles alongside an implementation guide covering internal governance structures, operations management practices, and stakeholder interaction approaches.
The second pillar is AI Verify, released in 2022 as an open-source testing framework and toolkit. AI Verify provides standardized testing for transparency and fairness, generates objective metrics and benchmarks, and offers organizations a voluntary assurance pathway to demonstrate responsible AI practices.
Third, Singapore has developed sector-specific guidance tailored to its most consequential industries. The Monetary Authority of Singapore published its Principles to Promote Fairness, Ethics, Accountability and Transparency (the FEAT Principles) as early as 2018. The Ministry of Health followed with AI in Healthcare Guidelines in 2021. The public sector received its own AI Governance Framework in 2020, ensuring that government agencies lead by example.
Fourth, Singapore provides substantial innovation support through regulatory sandboxes in financial services, healthcare, and transportation; government grants and incentives administered through AI Singapore and Enterprise Singapore; and research collaborations and testbeds that allow organizations to experiment with AI in controlled environments.
Model AI Governance Framework: Core Principles
Principle 1: Transparency
Transparency under the framework means that organizations must disclose the use of AI in decision-making, explain the AI system's role and limitations, provide information about the data used and model logic, and communicate when and how AI influences outcomes affecting individuals.
On the internal side, organizations should document AI system design, data sources, algorithms, and assumptions. They should maintain model cards or equivalent system documentation, log decision-making processes and rationale, and enable internal auditability so that governance teams can trace how and why a given outcome was produced.
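The internal documentation practices above can be sketched as a lightweight record structure. This is an illustrative sketch, not a format the framework prescribes; every field name here (`intended_use`, `decision_log`, and so on) is an assumption to adapt to your own context.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelCard:
    """Minimal internal model card; fields are illustrative, not mandated."""
    system_name: str
    version: str
    intended_use: str
    data_sources: list
    known_limitations: list
    owner: str
    decision_log: list = field(default_factory=list)

    def log_decision(self, input_summary: str, output: str, rationale: str):
        """Append an auditable record of a model-driven decision."""
        self.decision_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input": input_summary,
            "output": output,
            "rationale": rationale,
        })

# Hypothetical system used purely for illustration
card = ModelCard(
    system_name="loan-screening",
    version="1.2.0",
    intended_use="Pre-screening of retail loan applications",
    data_sources=["internal credit history", "bureau scores"],
    known_limitations=["not validated for SME applicants"],
    owner="credit-risk-team",
)
card.log_decision("applicant summary (redacted)", "refer to human review",
                  "score near decision threshold")
```

Keeping the decision log on the same object as the model card is one simple way to make internal auditability the default rather than an afterthought.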
External transparency requires informing users when they are interacting with AI systems and providing accessible explanations of AI recommendations. This obligation is particularly acute for consequential decisions involving credit, employment, or access to services, where individuals have a legitimate interest in understanding how outcomes were reached. The framework acknowledges, however, that organizations must balance transparency with the protection of trade secrets and proprietary information.
The framework applies a risk-proportionate approach to transparency. Higher-risk decisions demand greater disclosure, while simple automation may need only minimal notification. Consumer-facing AI systems require user-understandable explanations, whereas B2B applications may satisfy the transparency principle through detailed technical documentation.
Principle 2: Fairness
The fairness principle holds that AI systems should not discriminate unfairly, that outcomes should be equitable across different groups, that bias in data and algorithms must be actively addressed, and that fair treatment should align with both legal requirements and prevailing social norms.
Before deployment, organizations should identify potential fairness concerns specific to their use case and define appropriate fairness metrics, whether demographic parity, equalized odds, individual fairness, or another measure suited to the context. Testing for disparate impact across protected groups is essential, as is the use of diverse, representative training data wherever feasible.
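As a concrete illustration of one such metric, the disparate impact ratio compares selection rates between groups. The 0.8 threshold in the comment is the common "four-fifths" rule of thumb, not a figure taken from the framework, and the data is a toy example.

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group; outcomes are 0/1, groups are labels."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def disparate_impact_ratio(outcomes, groups, reference_group):
    """Lowest non-reference selection rate divided by the reference rate.
    A common (illustrative) screening threshold is 0.8, the 'four-fifths' rule."""
    rates = selection_rates(outcomes, groups)
    worst = min(r for g, r in rates.items() if g != reference_group)
    return worst / rates[reference_group]

# Toy data: 1 = approved, with a group label per applicant
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(outcomes, groups, reference_group="A")
# Group A approves 3/4, group B approves 1/4, so the ratio is well below 0.8
```

A ratio this far below the four-fifths threshold would trigger the investigation and remediation steps described below.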
During deployment, the work of ensuring fairness continues. Organizations should monitor for emerging bias patterns, track outcomes across demographic groups, establish bias thresholds with automated alerts, and implement fairness-aware algorithms where the application warrants them.
When fairness violations are detected, the framework calls for prompt investigation, adjustment of models or decision processes to reduce bias, and the provision of recourse mechanisms for affected individuals. All fairness interventions and their effectiveness should be documented to support continuous improvement and demonstrate due diligence.
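A minimal sketch of the monitoring-with-thresholds idea, assuming a sliding window over recent outcomes and an illustrative 20-percentage-point gap threshold; neither the window size nor the threshold is specified by the framework.

```python
from collections import defaultdict, deque

class FairnessMonitor:
    """Tracks per-group approval rates over a sliding window and raises an
    alert when the largest gap between groups exceeds a configured threshold."""

    def __init__(self, window=100, max_gap=0.20):
        self.max_gap = max_gap
        self.outcomes = defaultdict(lambda: deque(maxlen=window))

    def record(self, group, approved):
        self.outcomes[group].append(1 if approved else 0)

    def check(self):
        """Return an alert dict if the approval-rate gap exceeds max_gap."""
        rates = {g: sum(d) / len(d) for g, d in self.outcomes.items() if d}
        if len(rates) < 2:
            return None
        gap = max(rates.values()) - min(rates.values())
        if gap > self.max_gap:
            return {"alert": "fairness_gap", "gap": round(gap, 3), "rates": rates}
        return None

# Toy usage: one group consistently approved, another consistently denied
monitor = FairnessMonitor(window=50, max_gap=0.20)
for _ in range(30):
    monitor.record("A", approved=True)
for _ in range(30):
    monitor.record("B", approved=False)
alert = monitor.check()
```

In practice an alert like this would feed the investigation and remediation process described above, with each intervention documented for due diligence.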
Principle 3: Ethics
The ethics principle extends governance beyond legal compliance. AI development and deployment should respect human values, consider societal impacts, uphold human dignity, autonomy, and rights, and align with both organizational values and the public interest.
From a governance perspective, organizations should establish ethics committees or AI governance boards that include diverse perspectives spanning legal, technical, business, and external viewpoints. High-risk AI applications warrant dedicated ethical review processes, and every organization deploying AI at scale should develop explicit AI ethics principles that guide decision-making.
Risk assessment under the ethics principle goes further than regulatory requirements. Organizations should consider impacts on vulnerable populations, assess the potential for misuse or unintended consequences, and evaluate whether an AI application aligns with prevailing social norms and expectations.
The framework also provides a decision framework for navigating inherent tensions in AI deployment: when to rely on AI versus human-only decision-making, how to balance efficiency against fairness, how to manage the trade-off between accuracy and explainability, and how to determine acceptable risk levels and escalation triggers.
Principle 4: Human Agency and Oversight
The principle of human agency and oversight establishes that humans must retain ultimate control and accountability over AI systems. AI should augment rather than replace human judgment for consequential decisions, meaningful human oversight must persist throughout the AI lifecycle, and the ability to override AI recommendations must be preserved.
Implementing human-in-the-loop controls requires identifying which decisions demand human involvement, designing interfaces that genuinely support human oversight, providing sufficient context and explanations to human reviewers, and enabling (and documenting) human override capabilities.
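A toy sketch of such human-in-the-loop routing and override logging. The confidence threshold, field names, and example system are all hypothetical; the point is that the routing rule and the override record are explicit and auditable.

```python
from datetime import datetime, timezone

REVIEW_THRESHOLD = 0.75  # illustrative: low-confidence outputs go to a human
override_log = []

def route_decision(confidence, high_impact):
    """Send consequential or low-confidence outputs to a human reviewer."""
    if high_impact or confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "auto"

def record_override(system, ai_decision, human_decision, reviewer, reason):
    """Document every human override so oversight patterns can be audited."""
    override_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "ai_decision": ai_decision,
        "human_decision": human_decision,
        "reviewer": reviewer,
        "reason": reason,
    })

# Hypothetical consequential decision: always routed to a human
route = route_decision(confidence=0.62, high_impact=True)
if route == "human_review":
    record_override("loan-screening", "deny", "approve", "analyst-07",
                    "income documentation not parsed correctly")
```

Logging every override also supplies the raw data needed to monitor oversight quality, as discussed next.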
The framework is careful to distinguish between nominal oversight and meaningful oversight. Humans must have a genuine ability to understand and question AI outputs. Organizations should guard against "automation bias," the well-documented tendency for humans to rubber-stamp AI decisions without critical evaluation. This means training human reviewers to critically assess AI recommendations and monitoring patterns in human overrides and feedback to ensure the oversight function remains robust.
Clear accountability structures must designate responsible individuals for each AI system, define escalation paths for issues or concerns, grant authority to pause or shut down problematic AI, and assign explicit responsibility for outcomes and decisions.
Principle 5: Accountability
Accountability means that organizations bear responsibility for the outcomes of their AI systems. There must be clear lines of accountability for AI-driven decisions, mechanisms to address harms and provide remedies, and robust governance structures supported by thorough documentation.
The governance structure should include board-level oversight of AI strategy and risks, designated AI governance roles such as a Chief AI Officer or AI Ethics Committee, cross-functional AI review processes, and integration with enterprise risk management and model risk management frameworks.
Documentation requirements are comprehensive. Organizations should maintain complete records of their AI systems, document design choices alongside testing and validation results, log significant decisions and changes over time, and retain evidence sufficient to support audits and investigations.
For remediation and redress, the framework calls on organizations to establish clear processes for addressing AI-caused harms, provide accessible complaint mechanisms, investigate incidents thoroughly, and offer appropriate remedies including corrections, compensation, and appeals.
AI Verify: Testing and Validation
What Is AI Verify?
AI Verify is Singapore's open-source testing framework for AI systems, released in 2022. Its purpose is to provide objective, standardized testing aligned with AI governance principles, generate verifiable results that support transparency and trust, enable both internal governance and external assurance, and ensure alignment with international standards, including those from ISO/IEC and the OECD.
How It Works
AI Verify operates across three dimensions. The technical testing component runs automated tests on AI models and datasets, evaluating fairness metrics such as disparate impact and demographic parity, performing explainability assessments using techniques like feature importance analysis and SHAP values, and conducting robustness and safety checks.
The process assessment component evaluates the maturity of an organization's governance processes, reviewing internal documentation, assessing accountability structures, and verifying the adequacy of human oversight mechanisms.
The reporting component brings these two dimensions together in a standardized AI Verify report that presents test results with objective metrics, process maturity scores, and identified areas for improvement.
The current version of AI Verify is maintained as an open-source project through the AI Verify Foundation on GitHub. It supports common machine learning frameworks including TensorFlow, PyTorch, and scikit-learn, with initial focus on tabular data use cases and expanding coverage to computer vision and natural language processing.
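AI Verify's actual interfaces are defined in the AI Verify Foundation repositories; what follows is only a toy illustration of the kind of check such a toolkit automates, namely a permutation-based feature importance probe against a stand-in model. All names and data are hypothetical, and this is not the AI Verify API.

```python
import random

def model(row):
    """Stand-in scoring model: income matters, postcode should not."""
    return 1 if 0.8 * row["income"] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, seed=0):
    """Accuracy drop when one feature is shuffled: a simple explainability
    probe in the same spirit as feature-importance testing."""
    rng = random.Random(seed)
    base = accuracy(data, labels)
    shuffled_vals = [r[feature] for r in data]
    rng.shuffle(shuffled_vals)
    perturbed = [dict(r, **{feature: v}) for r, v in zip(data, shuffled_vals)]
    return base - accuracy(perturbed, labels)

# Toy tabular dataset; labels match the model so baseline accuracy is 1.0
data = [{"income": i / 10, "postcode": i % 3} for i in range(10)]
labels = [model(r) for r in data]

drop_income = permutation_importance(data, labels, "income")
drop_postcode = permutation_importance(data, labels, "postcode")
# Shuffling an ignored feature (postcode) leaves accuracy unchanged
```

A report in this spirit would surface that the model's outputs depend on income and not on postcode, which is the sort of objective evidence the standardized AI Verify report packages alongside process maturity scores.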
Using AI Verify
The AI Verify process unfolds across five stages. Preparation involves identifying the AI system for testing, gathering required inputs (the model, training and test data, and metadata), defining fairness and performance metrics, and selecting the relevant test suites.
Technical testing then runs automated evaluations on the model, assessing fairness across protected features, measuring model transparency and explainability, and testing robustness against adversarial or perturbed inputs.
Process documentation requires completing governance questionnaires, documenting oversight structures, describing data governance practices, and explaining human review processes in detail.
Report generation produces a standardized AI Verify report that captures quantitative test results, summarizes the process maturity assessment, and highlights gaps alongside actionable recommendations.
Finally, remediation addresses any identified gaps or concerns, implements improvements to the model or governance processes, retests to validate those improvements, and updates documentation and governance artefacts accordingly.
The benefits of undertaking this process are substantial. AI Verify provides objective evidence of responsible AI practices and enables early identification of issues before deployment or harm. The resulting reports support regulatory engagement by demonstrating due diligence, build trust with customers and stakeholders, and ensure alignment with international AI governance standards.
Sector-Specific Requirements
Financial Services
The Monetary Authority of Singapore (MAS) applies AI governance principles to regulated financial institutions through its Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of AI and Data Analytics (the FEAT Principles, 2018).
On fairness, MAS expects financial institutions to detect and mitigate discriminatory outcomes, with particular focus on credit, insurance, and investment advice. Regular testing for bias and disparate impact is expected, and AI fairness obligations operate alongside existing fair treatment requirements under financial regulations.
The ethics dimension requires institutions to consider the societal impacts of their AI applications, respect customer interests and privacy, and avoid deploying AI in ways that are deceptive, manipulative, or predatory.
For accountability, MAS expects board and senior management oversight of AI-related risks, clear accountability lines, integration with existing risk management frameworks, and reporting to MAS on material AI incidents where relevant.
Transparency requires disclosure of AI use in customer interactions and explanations of AI-driven decisions, particularly for credit denials and investment recommendations, while balancing disclosure obligations against the protection of proprietary information.
In practical terms, financial institutions should document their AI governance framework and policies, conduct regular model validation and testing, maintain model risk management standards, provide board reporting on AI risks and performance, and maintain customer complaint and escalation processes specifically designed for AI-driven decisions.
Healthcare
The Ministry of Health (MOH) provides guidance for AI in healthcare settings through its AI in Healthcare Guidelines (2021).
Clinical validation is the starting point. AI medical devices may require approval from the Health Sciences Authority (HSA), and clinical evidence of safety and effectiveness is essential. The guidelines emphasize validation on the local Singaporean population where appropriate, along with ongoing performance monitoring and post-market surveillance.
The accountability principle in healthcare is unambiguous: the healthcare provider remains accountable for care decisions regardless of AI involvement. AI functions as clinical decision support, not as a replacement for clinical judgment. Clear delineation of AI's role in clinical workflows is required, and human clinician review of AI recommendations is expected.
Transparency in healthcare means informing patients of AI use in diagnosis or treatment, explaining AI's role in clinical decision-making, obtaining patient consent where appropriate, and disclosing AI use in medical records when it is material to the care provided.
Data governance requirements are particularly stringent in healthcare. Organizations must comply with the Human Biomedical Research Act (HBRA) where applicable, maintain strong patient privacy protections, ensure secure handling of health data, and observe principles of data minimization and purpose limitation.
Practical requirements include conducting risk assessments for clinical AI applications, performing validation studies with local patient data, establishing ongoing monitoring of AI performance and safety, implementing incident reporting for AI-related adverse events, and providing training for clinicians who use AI tools.
Personal Data Protection Act (PDPA)
The PDPA applies to all AI systems that process personal data, imposing several obligations that intersect directly with AI governance.
Consent and purpose limitation require organizations to obtain consent for the collection, use, and disclosure of personal data. Data may be used only for purposes that have been notified to individuals, and the framework treats both AI training and inference as "use" requiring a valid legal basis.
The accuracy obligation is particularly significant for AI. Organizations must take reasonable efforts to ensure personal data is accurate and complete, a requirement that becomes critical when AI systems make decisions based on that data. Correction mechanisms must be available for individuals whose data is inaccurate.
Protection obligations require security safeguards proportionate to the sensitivity of the data and the potential for harm. This extends beyond traditional data security to encompass AI-specific risks such as model theft and adversarial attacks, demanding that organizations protect against unauthorized access, disclosure, or modification of both data and models.
Retention limits require that personal data be kept only as long as necessary. This applies to training data and operational data alike, with secure disposal required when data is no longer needed.
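A sketch of how retention limits might be enforced in practice. The data categories and retention periods here are illustrative assumptions, not figures taken from the PDPA, which requires only that data not be kept longer than necessary for its purpose.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule; actual periods must follow your own
# purpose-based assessment, not these example figures
RETENTION = {
    "training_data": timedelta(days=730),
    "inference_logs": timedelta(days=90),
}

def purge_expired(records, now=None):
    """Split records into those still within their category's retention
    period and those due for secure disposal."""
    now = now or datetime.now(timezone.utc)
    kept, purged = [], []
    for rec in records:
        limit = RETENTION[rec["category"]]
        (kept if now - rec["created"] <= limit else purged).append(rec)
    return kept, purged

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "category": "inference_logs", "created": now - timedelta(days=10)},
    {"id": 2, "category": "inference_logs", "created": now - timedelta(days=200)},
    {"id": 3, "category": "training_data", "created": now - timedelta(days=400)},
]
kept, purged = purge_expired(records, now=now)
```

Running a purge like this on a schedule, and logging what was disposed of, turns the retention obligation into a routine, auditable operation.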
Regarding automated decision-making, the PDPA does not prohibit automated decisions outright. However, the principles of transparency, fairness, and accountability continue to apply in full. Individuals retain the right to challenge inaccurate data that affects AI-driven decisions, and organizations should provide meaningful recourse and explanations for automated outcomes.
Practical Implementation Guide
Phase 1: Governance Foundation (Months 1 to 2)
The first step is to establish an AI governance structure. This means designating an AI governance lead or Chief AI Officer, forming a cross-functional AI governance committee, defining roles and responsibilities with precision, and integrating AI governance with existing risk, compliance, and IT governance frameworks rather than creating a parallel structure.
With the structure in place, organizations should develop an AI governance framework by adopting the Model AI Governance Framework principles, tailoring them to organizational context and risk appetite, creating formal AI governance policies and standards, and securing board or leadership endorsement to ensure the framework carries institutional weight.
Concurrently, a comprehensive AI system inventory should catalog all AI systems in use or development, classify each by risk level (high, medium, or low impact), map systems to business functions and accountable owners, and prioritize high-risk systems for enhanced governance attention.
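The inventory-and-classification step might look like the following sketch. The triage rule and the risk attributes are assumptions to adapt to your own risk appetite, not a taxonomy prescribed by the framework.

```python
def classify_risk(system):
    """Illustrative triage rule: consequential decisions about individuals
    push a system into the 'high' tier; adjust criteria to your context."""
    if system["affects_individuals"] and system["consequential"]:
        return "high"
    if system["affects_individuals"] or system["uses_sensitive_data"]:
        return "medium"
    return "low"

# Hypothetical inventory entries, each mapped to an accountable owner
inventory = [
    {"name": "loan-screening", "owner": "credit-risk",
     "affects_individuals": True, "consequential": True,
     "uses_sensitive_data": True},
    {"name": "churn-forecast", "owner": "marketing",
     "affects_individuals": True, "consequential": False,
     "uses_sensitive_data": False},
    {"name": "log-anomaly-detector", "owner": "platform",
     "affects_individuals": False, "consequential": False,
     "uses_sensitive_data": False},
]
for s in inventory:
    s["risk"] = classify_risk(s)

# High-risk systems are first in line for enhanced governance and testing
high_priority = [s["name"] for s in inventory if s["risk"] == "high"]
```

The `high_priority` list then drives Phase 3: those systems are the first candidates for AI Verify testing.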
Phase 2: Operationalize Principles (Months 3 to 6)
Operationalizing transparency involves creating standard disclosures for customer-facing AI use, developing model documentation templates for internal purposes, implementing user notification mechanisms, and training staff on their transparency obligations.
For fairness, organizations should define fairness metrics for key use cases, implement bias testing procedures, establish fairness thresholds and a regular monitoring cadence, and create remediation protocols that can be activated when bias issues surface.
Human oversight operationalization requires identifying which decisions need human review, designing human-in-the-loop workflows and controls, developing override processes with supporting documentation, and training human reviewers on both the capabilities and the limitations of the AI systems they supervise.
Accountability becomes operational when accountable owners are assigned to each AI system, decision-making processes and approvals are documented systematically, incident response procedures for AI issues are established, and complaint and redress mechanisms are made accessible to affected parties.
Phase 3: Testing and Validation (Months 4 to 8)
AI Verify implementation begins by selecting high-risk systems for testing, preparing required inputs (models, data, and documentation), running the AI Verify test suites, and generating baseline reports that establish a benchmark for future improvement.
Gap remediation then addresses the issues identified during testing, whether through improving model fairness, transparency, or robustness, enhancing governance processes and documentation, or retesting to validate that improvements have taken effect.
Sector-specific compliance requires mapping AI systems to applicable sector regulations (MAS, MOH, PDPA, and others as relevant), conducting sector-specific risk assessments, implementing any additional controls that those assessments reveal as necessary, and engaging proactively with regulators where uncertainty exists.
Phase 4: Ongoing Operations (Continuous)
Sustained monitoring means tracking AI system performance metrics over time, watching for emerging fairness and bias patterns, detecting anomalies and model drift, and logging human overrides and escalations to maintain a complete audit trail.
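Model drift of the kind mentioned above is often measured with the Population Stability Index (PSI). Below is a self-contained sketch; the 0.2 alert threshold in the comment is a conventional rule of thumb, not a figure from the framework.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between two score samples. A common
    (illustrative) rule of thumb flags PSI above 0.2 as significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        # Floor at a tiny value to avoid log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]              # scores at deployment
stable   = [i / 100 for i in range(100)]              # same distribution
shifted  = [0.7 + 0.3 * i / 100 for i in range(100)]  # scores drifted upward

drift_stable = psi(baseline, stable)    # near zero: no drift
drift_shifted = psi(baseline, shifted)  # large: triggers investigation
```

Computing PSI on a schedule against the deployment-time baseline, and alerting above the chosen threshold, gives the drift-detection loop a concrete, auditable trigger.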
Governance in this phase becomes routine but no less important. Organizations should hold quarterly AI governance committee meetings, provide annual board reporting on AI risks and performance, review the AI inventory and risk classifications on a regular basis, and update policies as lessons are learned and the regulatory landscape evolves.
Training and culture initiatives should provide AI ethics and governance training for staff at all levels, run responsible AI awareness programs, encourage the raising of concerns and questions without fear of reprisal, and recognize exemplary AI governance practices to reinforce desired behaviors.
External engagement rounds out the ongoing operations phase. Organizations benefit from participating in industry working groups, engaging with Singapore regulators proactively rather than reactively, sharing AI Verify results with stakeholders where appropriate, and contributing to the development of AI governance standards that will shape the next generation of frameworks.
Benefits of Singapore's Approach
For Organizations
Singapore's framework offers regulatory clarity by setting clear expectations without prescriptive rules, giving organizations the flexibility to implement controls based on their specific context and risk profile. The resulting predictability in the regulatory environment, combined with an innovation-friendly posture, makes Singapore an attractive jurisdiction for AI deployment.
From a competitive advantage perspective, early adoption of the framework signals responsible practices to the market. AI Verify results build trust with customers and partners, creating differentiation in both local and regional markets. Perhaps most strategically, organizations that align with Singapore's framework position themselves well for compliance with future regulations emerging in other jurisdictions.
The risk management benefits are equally compelling. The framework drives proactive identification and mitigation of AI risks, reducing the likelihood of harms and the reputational damage that follows. Documented adherence provides evidence of due diligence for liability purposes and ensures alignment with international standards and best practices.
Organizations also benefit from direct government support, including access to regulatory sandboxes for testing innovative AI applications, grants and incentives for responsible AI development, collaboration opportunities with leading research institutions, and the recognition and promotion that comes from Singapore government endorsement.
For Singapore
Singapore's governance model strengthens its positioning as an AI hub by attracting AI companies and talent, striking a balance between innovation enablement and trust, establishing the city-state as a global leader in AI governance, and providing a model that other ASEAN countries can adapt.
On the international stage, the Singapore framework is already influencing AI governance across ASEAN. It serves as a bridge between Eastern and Western regulatory philosophies, maintains compatibility with EU, US, and OECD principles, and facilitates cross-border AI deployment for organizations operating across multiple jurisdictions.
Key Takeaways
Singapore's Model AI Governance Framework is voluntary but influential, with strong adoption incentives flowing from regulators, markets, and stakeholders alike. Its five core principles (transparency, fairness, ethics, human agency and oversight, and accountability) provide a comprehensive foundation that can be adapted to virtually any AI use case.
AI Verify offers standardized, verifiable assessments of AI systems against governance principles, supporting both internal governance improvement and external trust-building. Meanwhile, sector-specific rules in financial services, healthcare, and data protection impose binding requirements that sit on top of the voluntary framework, ensuring that the highest-risk applications face appropriate scrutiny.
Implementation is deliberately risk-proportionate: higher-risk AI systems require more rigorous governance, testing, and human oversight, while lower-risk applications face lighter requirements. Singapore positions itself as innovation-friendly through sandboxes, support programs, and a principles-based approach that avoids the compliance burden of prescriptive regulation.
The framework's alignment with international standards is a strategic asset for organizations operating across multiple jurisdictions, offering a governance foundation that translates across regulatory boundaries.
Common Questions
Is adoption of the Model AI Governance Framework mandatory?
No. The Model AI Governance Framework is voluntary guidance. However, sector regulators and market expectations treat it as a benchmark for responsible AI, so non-adoption can still create regulatory, reputational, and commercial risks.
Which AI systems should be tested with AI Verify first?
Prioritize high-impact systems: those affecting access to credit, employment, healthcare, essential services, or involving sensitive personal data. These systems face higher regulatory and reputational risk and benefit most from standardized testing.
How do the MAS FEAT principles relate to the Model Framework?
MAS FEAT principles are sector-specific expectations for financial institutions, focusing on Fairness, Ethics, Accountability, and Transparency. They are consistent with and complementary to the broader Model AI Governance Framework, which adds structure around human oversight and operationalization.
Is overseas clinical validation sufficient for healthcare AI in Singapore?
Overseas validation can be a starting point, but MOH and HSA expect evidence that performance is appropriate for Singapore's context, including local demographics and clinical workflows. Local validation or bridging studies are often needed.
Does the PDPA permit training AI models on personal data?
The PDPA does not ban AI training but requires a valid basis (such as consent or an applicable exception), clear purpose limitation, and safeguards. Training and inference are considered "use" of personal data and must align with notified purposes and retention limits.
Singapore's Unique Approach to AI Governance
Singapore emphasizes voluntary, principles-based AI governance supported by government tools and sandboxes, rather than prescriptive, risk-tiered regulation. Organizations are encouraged—but not compelled—to adopt best practices, with sector regulators stepping in where higher risks justify binding rules.
"Although Singapore’s Model AI Governance Framework is voluntary, it increasingly functions as a de facto standard: boards, regulators, and customers expect material AI systems to be governed in line with its principles."
— Singapore AI Governance Framework: A Practical Guide
References
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
- Personal Data Protection Act 2012. Personal Data Protection Commission Singapore, 2012.
- What is AI Verify. AI Verify Foundation, 2023.
- Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore, 2018.
- AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
- ISO/IEC 42001:2023, Artificial Intelligence Management System. International Organization for Standardization, 2023.
- OECD Principles on Artificial Intelligence. OECD, 2019.

