Singapore has established itself as Asia Pacific's leader in AI governance, adopting a principles-based regulatory approach that balances innovation with accountability. This comprehensive guide provides practical guidance for achieving and maintaining AI compliance in Singapore across the Model AI Governance Framework, PDPA requirements, and sector-specific regulations.
Singapore's AI Regulatory Approach
Singapore's approach to AI regulation reflects the government's broader regulatory philosophy: establish clear principles and governance expectations while allowing organizations flexibility in implementation. This enables innovation while ensuring accountability.
The regulatory framework is defined by several distinguishing characteristics. The emphasis falls on governance over technology, meaning regulators focus on organizational governance structures, accountability mechanisms, and risk management rather than prescribing narrow technical specifications. Singapore also embraces sectoral specificity, subjecting high-risk sectors such as financial services and healthcare to additional requirements administered through existing sectoral regulators. The government provides practical guidance that goes well beyond abstract principles, offering detailed implementation examples, case studies, and tools. Finally, enforcement is active rather than aspirational. The Personal Data Protection Commission (PDPC) has issued significant penalties for data protection violations involving algorithmic processing.
Six regulatory bodies share oversight of AI in Singapore. The PDPC administers and enforces the Personal Data Protection Act. The Infocomm Media Development Authority (IMDA) developed the Model AI Governance Framework. The Monetary Authority of Singapore (MAS) regulates financial services AI through its FEAT principles. The Ministry of Health (MOH) and the Health Sciences Authority (HSA) provide guidance on AI in healthcare. The Cyber Security Agency (CSA) addresses AI security considerations.
Model AI Governance Framework (2024 Update)
The Model AI Governance Framework, first released in 2019 and updated in 2020 and 2024, provides comprehensive guidance for organizations deploying AI. While not legally binding, it reflects regulatory expectations, and demonstrated alignment functions as a practical safe harbor.
1. Internal Governance Structures and Measures
Objective: Establish clear accountability, assign roles and responsibilities, and create governance processes for AI systems throughout their lifecycle.
The framework places board and senior management accountability at the center of AI governance. Boards are expected to provide oversight of AI strategy, risk appetite, and governance direction. Senior management carries responsibility for AI risk management, with clear escalation pathways for AI-related issues and regular reporting on AI systems, risks, and incidents flowing upward through the organization.
Organizations should establish a dedicated AI governance structure anchored by a cross-functional governance committee. This committee should draw membership from technology, legal, compliance, risk, and business functions. Its mandate should encompass AI system approval, risk assessment review, and incident response, with a defined meeting frequency and clear decision-making authority.
Several roles and responsibilities must be assigned to support this structure. Each AI system should have a designated system owner who remains accountable throughout the system's lifecycle. The data protection officer, already required under the PDPA, should take on an explicit AI governance role. Organizations deploying high-risk systems should appoint an AI ethics officer or establish an ethics committee. Technical specialists responsible for model development, validation, and monitoring round out the governance team.
Supporting these roles, organizations need a suite of policies and procedures. An overarching AI governance policy should establish the organization's principles, risk appetite, and governance approach. This should be complemented by procedures covering AI system development and deployment, a defined risk assessment methodology, and processes for change management, incident response, and audit.
The framework recognizes that implementation will differ by organization size. Small organizations with fewer than 100 employees can adopt a simplified governance structure with a designated AI owner reporting to senior management, a cross-functional review process (even if informal), and documented key decisions. Large organizations should implement formal AI governance committees operating at multiple levels, including operational, senior management, and board-level oversight.
2. Determining Human Involvement in AI-Augmented Decision-Making
Objective: Ensure appropriate human oversight of AI systems, particularly when they inform or make decisions affecting individuals.
The framework prescribes a risk-based approach to human oversight, calibrating the degree of human involvement to the severity of potential impact.
High-risk decisions, defined as those carrying legal effects or significant impacts on individuals, demand the most rigorous human involvement. Meaningful human review must occur before decisions are finalized, and the human reviewer must have both the ability and the authority to override the AI. Sufficient information about AI reasoning must be provided to enable genuine review, and human reviewers must be appropriately trained and competent. Audit trails must record each instance of human review and the resulting decisions. Examples of high-risk decisions include credit decisions, employment decisions, insurance underwriting, and medical diagnoses.
Medium-risk decisions with moderate impact require a human-on-the-loop model. Human operators maintain oversight with the ability to intervene, and regular review of AI decisions (with sampling considered acceptable) ensures quality. Exception handling remains the responsibility of humans, and monitoring for bias or performance degradation should trigger escalation to human review. Fraud detection and customer service recommendations fall into this category.
Low-risk decisions with minimal impact may operate with humans out of the loop. Fully automated processing is acceptable provided monitoring is in place. Periodic performance reviews and incident response procedures provide a safety net. Product recommendations and spam filtering are typical examples.
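To make this tiering concrete, the sketch below routes a decision to the oversight level matching its risk class. It is a minimal illustration of the three tiers; the risk labels, sampling rate, and queue semantics are assumptions for the example, not values prescribed by the framework.

```python
# Minimal sketch of risk-tiered human oversight routing. Risk labels,
# the sampling rate, and queue names are illustrative assumptions only.
import random
from dataclasses import dataclass

@dataclass
class AIDecision:
    subject_id: str
    outcome: str          # e.g. "approve" / "decline"
    confidence: float     # model confidence, surfaced to reviewers
    risk_tier: str        # "high", "medium", or "low"

def route_decision(decision: AIDecision, review_queue: list, audit_log: list,
                   sample_rate: float = 0.05) -> str:
    """Apply the oversight model matching the decision's risk tier."""
    if decision.risk_tier == "high":
        # Human-in-the-loop: held until a trained reviewer with authority
        # to override signs off. Every review instance is logged.
        review_queue.append(decision)
        audit_log.append(("queued_for_mandatory_review", decision.subject_id))
        return "pending_human_review"
    if decision.risk_tier == "medium":
        # Human-on-the-loop: the decision takes effect, but a sample and
        # any low-confidence cases are routed to reviewers.
        if random.random() < sample_rate or decision.confidence < 0.6:
            review_queue.append(decision)
            audit_log.append(("sampled_for_review", decision.subject_id))
        return decision.outcome
    # Low risk: fully automated, with logging for periodic review.
    audit_log.append(("automated", decision.subject_id))
    return decision.outcome
```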
The framework also addresses the challenge of mitigating automation bias. Organizations should invest in training that builds critical evaluation skills, present AI confidence levels and uncertainty alongside outputs, require documentation of reasoning when humans agree with or override the AI, and periodically test the quality of human decision-making in AI-assisted contexts.
3. Operations Management
The framework sets out detailed expectations for AI lifecycle management across four phases.
During the development phase, organizations should define the AI system's purpose, scope, and success criteria. Teams must identify and document datasets used for training, validation, and testing, then assess data quality, representativeness, and potential biases. Algorithm and modeling approach selection should be deliberate and documented. Performance metrics covering accuracy, fairness, and robustness must be established from the outset, and initial bias and fairness testing should be conducted. The entire development process, including key decisions and trade-offs, should be thoroughly documented.
The validation phase begins with testing performance against a held-out validation dataset. A comprehensive bias and fairness analysis should examine outcomes across demographic groups. Adversarial testing assesses robustness to malicious inputs, while explainability and interpretability are evaluated to ensure outputs can be understood. Security testing should cover threats including data poisoning, model extraction, and adversarial attacks. All validation results, identified issues, and mitigations must be documented. Governance approval should be obtained before any system proceeds to deployment.
In the deployment phase, organizations implement the monitoring infrastructure, establish human oversight mechanisms, and configure logging and audit trails. Explainability interfaces should be made available to relevant stakeholders. User training prepares the people who will interact with the system, and communication about the AI's use should reach all affected stakeholders. A gradual rollout with close monitoring is recommended over a full-scale launch.
Monitoring and maintenance is an ongoing commitment. Continuous performance monitoring, regular bias and fairness assessments, and drift detection (covering data drift, concept drift, and model drift) form the core of operational oversight. Security monitoring, incident tracking and response, and periodic revalidation (at least annually) ensure the system remains trustworthy. Retraining or updates should be undertaken as conditions warrant.
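Drift detection can be implemented many ways; one simple, widely used statistic is the Population Stability Index (PSI), sketched below for a single numeric feature. The 10-bin layout and the 0.2 alert threshold are common conventions, not values mandated by any Singapore regulator.

```python
# Minimal sketch of data-drift detection using the Population Stability Index.
# Bin count and the 0.2 threshold are conventional rules of thumb.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time baseline and recent production data."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip both samples into range so out-of-range values land in edge bins.
    base_pct = np.histogram(np.clip(baseline, edges[0], edges[-1]),
                            bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(np.clip(current, edges[0], edges[-1]),
                            bins=edges)[0] / len(current)
    # Floor percentages to avoid division by zero / log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: flag a feature whose production distribution has shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)     # distribution at validation time
current = rng.normal(0.5, 1.2, 2_000)   # recent production data
psi = population_stability_index(baseline, current)
if psi > 0.2:  # common rule of thumb for "significant shift"
    print(f"PSI {psi:.2f}: investigate drift and consider revalidation")
```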
The framework devotes particular attention to bias and fairness management. Organizations must select appropriate fairness metrics based on context. Demographic parity requires similar outcomes across demographic groups. Equalized odds requires similar error rates across groups. Individual fairness requires that similar individuals receive similar outcomes.
The testing approach should begin by identifying protected characteristics such as race, gender, and age. AI performance should then be tested across demographic groups, with results analyzed for statistical disparities. Root causes of any disparities should be investigated and findings documented comprehensively.
When disparities are found, several mitigation strategies are available. Pre-processing techniques address biases in training data through resampling or reweighting. In-processing approaches incorporate fairness constraints during model training. Post-processing methods adjust model outputs to meet fairness criteria. Enhanced human oversight can provide additional review for groups showing disparities. Continuous monitoring with defined alert thresholds ensures bias is caught as conditions evolve.
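As an illustration of the first two metrics defined above, the sketch below computes a demographic parity difference and an equalized-odds gap from labelled predictions. The group labels and example data are hypothetical; which metric is appropriate depends on context, as the framework notes.

```python
# Minimal sketch of two of the fairness metrics named above.
# Group labels and example data are hypothetical.
import numpy as np

def demographic_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Max difference in positive-outcome rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def equalized_odds_gap(y_true: np.ndarray, y_pred: np.ndarray,
                       group: np.ndarray) -> float:
    """Max cross-group gap in true-positive or false-positive rate."""
    tprs, fprs = [], []
    for g in np.unique(group):
        mask = group == g
        tprs.append(y_pred[mask & (y_true == 1)].mean())  # TPR within group
        fprs.append(y_pred[mask & (y_true == 0)].mean())  # FPR within group
    return float(max(max(tprs) - min(tprs), max(fprs) - min(fprs)))

# Hypothetical example: approval predictions across two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_diff(y_pred, group))     # gap in approval rates
print(equalized_odds_gap(y_true, y_pred, group))  # gap in error rates
```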
4. Stakeholder Interaction and Communication
The framework establishes clear transparency requirements for organizations deploying AI.
Organizations must communicate several key facts to affected individuals. Stakeholders should know that AI is being used in decision-making, understand the purpose of the AI system, and be informed about the types of decisions the AI makes or informs. Individuals should know what data is used to make decisions about them, understand the general logic or factors the system considers, and be made aware of the consequences of AI decisions. Their rights of access, correction, and objection must also be clearly communicated.
The manner of communication matters as much as its substance. Disclosures should use clear, plain language appropriate to the audience and be available in accessible formats, whether through websites, applications, or physical documents. Proactive disclosure, delivered before or at the time of interaction, is preferred. A layered approach works well: a brief summary provides essential information upfront, with an option for individuals to access more detailed explanations.
Explainability and interpretability requirements scale with risk. For high-risk decisions, organizations must provide individual explanations showing the specific factors that influenced the decision, the relative importance of those factors, how the individual's data compared to relevant thresholds, and counterfactual information describing what would need to change for a different outcome. Techniques such as SHAP values, LIME, and attention mechanisms can support these explanations.
For medium-risk decisions, general explanations may suffice. These should describe how the AI system works at a high level, outline the types of factors considered, share performance statistics, and offer example scenarios. For low-risk decisions, high-level transparency is adequate, requiring disclosure that AI is used along with a description of its general purpose and approach.
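For the individual explanations required at the high-risk tier, attribution techniques such as SHAP (named above) can surface the factors behind a single decision. The following is a minimal sketch assuming a scikit-learn classifier and hypothetical feature names; it illustrates the mechanics only, not a compliance-complete explanation.

```python
# Minimal sketch of per-decision factor attribution with SHAP, one of the
# techniques named above. The model and feature names are illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "tenure_months"]  # hypothetical
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def predict_approve(data):
    """Approval probability; the quantity we explain to the individual."""
    return model.predict_proba(data)[:, 1]

# The background sample anchors the baseline the attributions measure against.
explainer = shap.Explainer(predict_approve, X[:100])
attribution = explainer(X[:1])  # SHAP values for a single decision

# Report each factor's contribution and direction, as the framework expects.
for name, value in zip(feature_names, attribution.values[0]):
    print(f"{name}: {value:+.3f} toward approval probability")
```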
Personal Data Protection Act 2012 (PDPA)
The PDPA is Singapore's primary data protection law, establishing requirements for collection, use, disclosure, and care of personal data. AI systems processing personal data must comply with PDPA obligations.
Key PDPA Provisions for AI
The Consent Obligation (Section 13) requires organizations to obtain consent before collecting, using, or disclosing personal data. Purpose specification must be clear, such as stating that data will be used "to develop AI models for credit assessment." Collection must be limited to data necessary for the specified purpose. Deemed consent is available in circumstances where the purpose would be obvious given the context of the interaction.
Under the Purpose Limitation Obligation (Section 18), organizations may use data only for purposes that are reasonable and have been communicated to individuals. AI model retraining or expansion to new use cases may constitute new purposes requiring fresh consent. Any secondary uses require either new consent or a legitimate interests assessment.
The Notification Obligation (Section 20) requires that privacy notices explicitly disclose the use of AI in decision-making. Notices must specify the types of AI decisions involved and explain any data sharing with AI service providers. When AI systems change significantly, notices must be updated accordingly.
The Accuracy Obligation (Section 23) places responsibility on organizations to ensure training data accuracy through validation and data cleaning. Operational data quality must also be validated on an ongoing basis. Organizations must provide mechanisms for individuals to correct their data and must re-run AI decisions when corrections are made.
The Protection Obligation (Section 24) requires robust security for training datasets, including access controls, encryption, and audit logging. AI models must be protected from information leakage through threats such as model inversion attacks. Deployed AI systems must be secured against adversarial attacks, and AI-specific security controls such as query rate limiting and differential privacy should be considered.
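As one example of the AI-specific controls mentioned, a query rate limiter can slow model-extraction attempts against a deployed model endpoint. The sketch below is a naive in-memory token bucket with assumed per-client limits; production systems would use shared state and layer additional defenses.

```python
# Naive sketch of per-client query rate limiting for a model endpoint.
# In-memory state and the chosen limits are illustrative assumptions.
import time
from collections import defaultdict

class TokenBucket:
    """Allow `rate` queries per second with bursts up to `capacity`."""
    def __init__(self, rate: float = 5.0, capacity: int = 20):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: float(capacity))
        self.last_seen: dict[str, float] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen.get(client_id, now)
        self.last_seen[client_id] = now
        # Refill tokens for the time elapsed, capped at bucket capacity.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # reject or queue; repeated hits are worth logging

limiter = TokenBucket()
if not limiter.allow("client-123"):
    print("Query rate limit exceeded")
```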
The Retention Limitation Obligation (Section 25) requires organizations to define retention periods for training data, operational data, and AI decision logs. These periods must balance retention needs (including retraining, auditing, and dispute resolution) against privacy risks. Anonymization should be considered as an alternative to deletion where appropriate.
The Transfer Limitation Obligation (Section 26) requires comparable protection for personal data transferred across borders. Organizations should use contracts that require adherence to data protection standards, consider data residency options such as Singapore-based infrastructure, and assess the protection levels offered by destination jurisdictions.
PDPA enforcement carries significant consequences. Violations may result in fines of up to SGD 1 million. For organizations with annual turnover exceeding SGD 10 million, financial penalties can reach up to 10% of annual turnover. The PDPC may also issue directions requiring specific remedial actions and will publicly disclose enforcement actions.
Sector-Specific AI Regulations
Financial Services: MAS Requirements
The FEAT Principles (Fairness, Ethics, Accountability, Transparency), issued by MAS in 2018 and updated in 2020, govern AI use in financial services.
The Fairness principle requires financial institutions to design AI systems that treat customers and counterparties fairly. Institutions must identify and mitigate discriminatory bias, ensure balanced and representative datasets, test for disparate impact across demographic groups, and establish processes to address instances of unfair treatment.
The Ethics principle calls on institutions to align AI with ethical standards and societal norms. Organizations must consider the broader impacts of their AI systems beyond immediate business objectives, establish AI ethics frameworks and governance, engage stakeholders on ethical concerns, and avoid uses that could harm customer interests.
The Accountability principle demands clear accountability for AI decisions and outcomes. Senior management and boards must provide oversight, roles and responsibilities must be defined, institutions must be able to explain AI decisions to regulators and customers, and audit trails and documentation must be maintained.
The Transparency principle requires disclosure of AI use in customer-facing applications. Institutions must provide explanations of AI-driven decisions, communicate in clear and accessible language, disclose limitations and risks, and ensure customers understand how to raise concerns.
Beyond the FEAT principles, MAS sets detailed implementation requirements across several domains.
Governance expectations require board and senior management oversight of AI strategy, cross-functional AI governance committees, clear accountability for each AI system, integration with existing technology risk governance, and regular reporting upward through the organization.
Development and validation standards call for rigorous methodology with documentation, independent model validation by qualified validators, comprehensive testing (encompassing bias analysis, scenario analysis, and stress testing), documentation of limitations and appropriate use cases, and formal approval processes before deployment.
Fairness and bias management requires institutions to identify protected characteristics and potential bias sources, test for bias across demographic groups, assess disparate impact using established fairness metrics, implement bias mitigation strategies, and continuously monitor for bias in production systems.
Explainability obligations require institutions to implement mechanisms appropriate to the complexity and risk of each AI system, provide customers with explanations for AI-driven decisions, and train customer-facing staff to communicate those explanations effectively.
Monitoring and audit expectations include continuous performance monitoring, regular model revalidation (at least annually), internal audit coverage of AI systems, and response procedures for performance degradation.
Healthcare: MOH and HSA Requirements
AI-based medical devices face a comprehensive regulatory pathway. They require premarket review and approval by the HSA, along with clinical validation demonstrating safety and efficacy. Labeling requirements mandate clear documentation of intended use, limitations, and contraindications. Post-market surveillance and adverse event reporting are ongoing obligations. Classification under the Software as a Medical Device (SaMD) framework determines the appropriate level of regulatory scrutiny.
AI systems supporting clinical decision-making must meet additional standards. These systems must be validated in relevant clinical contexts and integrate appropriately with clinical workflows. Explainability is essential, enabling clinicians to understand the basis for AI-generated recommendations. Human oversight must be maintained, with the clinician remaining the final decision-maker. Performance in real-world clinical use must be documented over time.
Data governance in healthcare AI spans multiple regulatory regimes. Research applications must comply with the Human Biomedical Research Act. Patient data is subject to the PDPA. The Healthcare Services Act imposes additional requirements, and institutional review board (IRB) approvals are required where applicable.
Practical Compliance Roadmap
Phase 1: Assessment and Planning (Months 1-2)
The first phase focuses on establishing a clear picture of the organization's AI landscape and identifying where gaps exist.
Organizations should begin by conducting a thorough AI system inventory. This means identifying all AI systems currently in use or under development and documenting each system's purpose, the data it processes, its role in decision-making, the stakeholders it affects, and the jurisdictions in which it operates.
With the inventory complete, a gap analysis should compare the organization's current state against the requirements of the Model AI Governance Framework, the PDPA, and any applicable sector-specific regulations. The analysis should examine governance structures, risk assessment practices, technical controls, and stakeholder communication for shortfalls.
Finally, risk prioritization classifies each AI system by risk level (high, medium, or low) and directs compliance efforts toward high-risk systems first. This ensures that the most impactful work happens early in the compliance journey.
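A lightweight way to start the inventory and risk prioritization described above is a structured record per system, as sketched below. The field names and the classification rule are illustrative assumptions, not a mandated schema.

```python
# Minimal sketch of an AI system inventory record with a simple risk rule.
# Field names and the classification logic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    personal_data: bool            # processes personal data (PDPA in scope)
    decision_role: str             # "informs" or "decides"
    affected_stakeholders: list[str] = field(default_factory=list)
    jurisdictions: list[str] = field(default_factory=list)
    legal_or_significant_effect: bool = False

def classify_risk(record: AISystemRecord) -> str:
    """Rough tiering mirroring the framework's impact-based approach."""
    if record.legal_or_significant_effect:
        return "high"     # e.g. credit, employment, insurance, diagnosis
    if record.personal_data or record.decision_role == "decides":
        return "medium"
    return "low"

inventory = [
    AISystemRecord("credit-scoring-v3", "consumer credit assessment",
                   personal_data=True, decision_role="decides",
                   affected_stakeholders=["loan applicants"],
                   jurisdictions=["SG"], legal_or_significant_effect=True),
]
# Direct compliance effort at high-risk systems first.
for rec in sorted(inventory, key=lambda r: classify_risk(r) != "high"):
    print(rec.name, classify_risk(rec))
```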
Phase 2: Governance Foundation (Months 2-4)
The second phase builds the organizational infrastructure needed to sustain AI compliance over time.
Establishing the governance structure is the first priority. This means forming an AI governance committee, assigning roles and responsibilities across the organization, and creating escalation and decision-making processes that connect operational teams to senior leadership.
Policy development follows. Organizations should draft a comprehensive AI governance policy and create supporting procedures that translate policy into practice. Existing policies covering privacy, data retention, security, and vendor management should be updated to reflect AI-specific considerations.
Training and awareness programs prepare the organization to operate within the new governance framework. General awareness training should reach all staff, while detailed training should be developed for AI system owners and developers. PDPA compliance training and human oversight training round out the program, ensuring that everyone involved in AI decision-making understands their obligations.
Phase 3: System-by-System Implementation (Months 4-8)
The third phase applies compliance measures to each AI system individually, starting with those classified as high-risk.
Each system undergoes a comprehensive risk assessment covering AI-specific risks, data protection concerns, and operational risks. Based on the results, teams design the appropriate level of human oversight, building mechanisms and procedures tailored to the system's risk profile, and training the reviewers who will exercise that oversight.
Bias and fairness testing requires identifying relevant demographic groups, selecting appropriate fairness metrics, conducting testing, analyzing results, implementing mitigations where disparities are found, and documenting the entire process. In parallel, explainability implementation determines the level of explanation required for each system, implements the technical mechanisms to generate those explanations, develops user-facing explanation formats, tests them for clarity, and trains staff to communicate them.
Organizations must also build out monitoring infrastructure, including performance dashboards, bias monitoring, security monitoring, and alert configurations. Thorough documentation captures system design, training data provenance, the development process, validation results, risk assessments, human oversight arrangements, explainability mechanisms, and monitoring procedures.
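Monitoring infrastructure ultimately reduces to metrics checked against thresholds. The sketch below shows one minimal alert loop; the metric names and limits are illustrative assumptions, not regulatory values.

```python
# Minimal sketch of threshold-based alerting for an AI monitoring dashboard.
# Metric names and limits are illustrative assumptions.
ALERT_THRESHOLDS = {
    "accuracy": ("min", 0.90),          # alert if accuracy drops below 0.90
    "demographic_parity_diff": ("max", 0.05),
    "psi_top_feature": ("max", 0.20),   # drift, per the PSI sketch earlier
    "p95_latency_ms": ("max", 500),
}

def check_metrics(current: dict[str, float]) -> list[str]:
    """Return alert messages for any metric outside its threshold."""
    alerts = []
    for metric, (kind, limit) in ALERT_THRESHOLDS.items():
        value = current.get(metric)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            alerts.append(f"{metric}={value} breaches {kind} threshold {limit}")
    return alerts

# Example reading from a (hypothetical) metrics pipeline:
print(check_metrics({"accuracy": 0.87, "psi_top_feature": 0.25}))
```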
PDPA compliance for each system requires verifying the lawful basis for data processing, updating privacy notices, implementing mechanisms for individuals to exercise their rights, ensuring data quality, implementing security measures, and establishing retention policies. Finally, stakeholder communication involves developing customer-facing disclosures, publishing them through appropriate channels, preparing FAQs, training staff on responses, and establishing feedback mechanisms.
Phase 4: Monitoring and Continuous Improvement (Ongoing)
The fourth phase is not a one-time effort but a permanent operational commitment.
Regular monitoring involves reviewing dashboards on a daily or weekly basis, investigating alerts and anomalies as they arise, tracking incidents, collecting stakeholder feedback, and staying abreast of regulatory developments that may affect compliance obligations.
Governance reviews take place through regular AI governance committee meetings, held monthly or quarterly. The committee reviews AI system performance and issues, approves new AI systems or significant changes to existing ones, and assesses the effectiveness of policies and procedures.
Periodic assessments ensure that compliance does not erode over time. Annual comprehensive audits should examine each AI system in depth. Risk reassessments should be conducted periodically to account for changing conditions. Model revalidation should occur at least annually, or more frequently where MAS or other sector-specific requirements apply.
Emerging Issues and Future Outlook
Generative AI and Large Language Models
Current frameworks apply to generative AI, with specific guidance expected in 2026 addressing the unique challenges these systems present.
Several key issues demand regulatory attention. Training data governance and the lawful basis for processing vast datasets raise foundational questions. Individual rights become harder to operationalize in the context of training data drawn from millions of sources. Intellectual property rights for copyrighted content used in training remain contested. Bias and hallucinations require novel testing and mitigation approaches. Prompt injection and jailbreaking present security risks that traditional frameworks were not designed to address. The inherent opacity of large language models creates explainability challenges that differ from those of conventional AI systems.
Anticipated guidance from Singapore regulators is expected to cover training data governance best practices, testing and validation methodologies specific to LLMs, explainability approaches suited to generative outputs, security controls for prompt injection, and frameworks for evaluating the use of LLMs in high-risk contexts.
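Pending that guidance, some teams already layer simple input screens in front of LLM applications. The sketch below is a deliberately naive pattern check with assumed patterns; it illustrates the control category only, since lists like this are easily bypassed and must sit within a defense-in-depth strategy.

```python
# Deliberately naive sketch of prompt-injection screening for an LLM app.
# Pattern lists like this are easily bypassed; treat this as one layer of
# defense in depth, not a complete control. Patterns are assumptions.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (your )?(system|hidden) prompt",
    r"disregard (your|the) (rules|guidelines)",
]

def screen_user_input(text: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_pattern) for a user message."""
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            return False, pattern  # block, log, and route for human review
    return True, None

allowed, hit = screen_user_input("Please ignore previous instructions and ...")
if not allowed:
    print(f"Blocked input matching: {hit}")
```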
AI Assurance and Certification
AI Verify, IMDA's AI testing framework and toolkit, provides standardized testing for transparency, explainability, fairness, and robustness. A formal certification scheme is currently in development, and the initiative is designed to align internationally with frameworks from the EU and the United States.
A growing ecosystem of third-party AI auditors is emerging alongside government-led initiatives. These auditors provide independent validation of AI governance, fairness, and security practices, producing audit reports that demonstrate compliance. Integration with established financial auditing practices is bringing AI assurance into the mainstream of corporate governance.
Conclusion
Singapore's AI regulatory landscape combines comprehensive governance frameworks, robust data protection requirements, and sector-specific rules to ensure responsible AI deployment. Success requires clear accountability and governance processes covering the full AI lifecycle. Organizations must adopt a risk-based approach that focuses resources on high-risk AI systems. Proactive bias management demands continuous testing and mitigation, while meaningful transparency requires giving stakeholders clear information and accessible explanations. Strong data protection ensures PDPA compliance for all AI systems processing personal data. Continuous monitoring of performance, bias, security, and compliance keeps systems trustworthy over time. Above all, adaptability is essential: organizations must stay informed about regulatory developments and be prepared to evolve their practices accordingly.
Organizations that view AI governance as a strategic enabler will be best positioned for success in Singapore's dynamic AI landscape.
Need expert guidance on Singapore AI compliance? Contact Pertama Partners for advisory services covering governance framework design, PDPA compliance, sector-specific requirements, and ongoing monitoring.
Common Questions
Is the Model AI Governance Framework legally binding?
The Model AI Governance Framework itself is not legally binding legislation. It is comprehensive guidance developed by IMDA and the PDPC that represents regulatory expectations for AI governance in Singapore. However, organizations should treat it as de facto binding for several reasons: (1) it represents how regulators expect organizations to govern AI responsibly, and demonstrating alignment provides a safe harbor; (2) failure to implement framework principles could constitute failure to meet PDPA obligations when AI processes personal data; (3) sector-specific regulators such as MAS reference the framework and expect financial institutions to align with it; and (4) in enforcement actions, the PDPC and other regulators assess organizations against framework standards. While technically non-binding, treating the Model AI Governance Framework as a practical compliance requirement is therefore advisable.
How does the PDPA compare with the EU's GDPR for AI systems?
While Singapore's PDPA and the EU's GDPR share core data protection principles, key differences affect AI systems: (1) Consent: the PDPA allows more flexible reliance on deemed consent and exceptions (legitimate interests, business improvement), whereas the GDPR imposes stricter consent requirements. (2) Automated decision-making: GDPR Article 22 provides an explicit right to object to solely automated decisions; the PDPA has no equivalent explicit provision but addresses the issue through general accountability obligations. (3) Impact assessments: the GDPR mandates DPIAs for high-risk processing; the PDPA does not explicitly mandate them, but the Model AI Governance Framework strongly recommends risk assessments. (4) Transfers: the GDPR restricts transfers outside the EEA; the PDPA requires that accountability be maintained but is more flexible. (5) Penalties: the GDPR allows fines of up to 4% of global annual turnover; the PDPA allows up to SGD 1 million or 10% of annual turnover. Organizations compliant with the GDPR generally meet PDPA requirements, but Singapore's emphasis on practical governance requires additional attention.
What do the MAS FEAT principles require of financial institutions?
The MAS FEAT principles (Fairness, Ethics, Accountability, Transparency) establish specific expectations for AI in Singapore's financial services sector: (1) Fairness: financial institutions must actively test for and mitigate discriminatory bias across demographic groups, assess disparate impact, implement mitigation strategies, and continuously monitor for bias. (2) Ethics: AI must align with ethical standards and societal norms; institutions must establish ethics frameworks and consider broader impacts. (3) Accountability: clear accountability from board to operational levels, including board oversight, defined AI system owners, independent model validation, and comprehensive documentation. (4) Transparency: disclose AI use to customers, provide explanations of AI-driven decisions affecting customers, and communicate clearly and accessibly. Implementation requires AI governance committees, rigorous development and validation, comprehensive bias testing, explainability mechanisms, continuous monitoring, and regular revalidation (at least annually). MAS actively supervises compliance.
How much explainability is required for AI decisions in Singapore?
Singapore's explainability requirements are risk-based: (1) High-risk decisions (those with legal or significant effects) require individual-specific explanations showing the specific factors that influenced the decision, their relative importance, how the individual's data compared to relevant thresholds, and counterfactuals. Implementation techniques include SHAP values, LIME, attention mechanisms, or simpler interpretable models. (2) Medium-risk decisions: general explanations may suffice, covering how the AI works in general terms, the types of factors considered, and performance statistics. (3) Low-risk decisions: high-level transparency is adequate, with disclosure that AI is used and a description of its general purpose. Key considerations: explanations must be meaningful to the intended audience, tested with representative users, and balanced against accuracy. Under the PDPA, individuals have rights to access their personal data and information about the logic involved in automated decision-making. Financial institutions face additional MAS transparency requirements.
What change management is required when an AI system is modified?
AI system changes require rigorous change management: (1) Change classification: major changes (new data sources, model architecture changes, new use cases) require full revalidation; moderate changes (parameter tuning, minor features) require targeted testing; minor changes (bug fixes) follow standard procedures. (2) Change process: risk assessment, comprehensive testing (performance, bias, security, explainability), independent validation for major changes in financial services, documentation, governance approval, implementation with monitoring, and rollback procedures. (3) Revalidation: major changes require comprehensive revalidation equivalent to new-system validation; MAS requires financial institutions to revalidate models at least annually and whenever material changes occur. (4) Stakeholder communication: assess whether changes require updated disclosures, and update privacy notices if data processing changes. (5) Monitoring: apply enhanced monitoring following deployment of changes. (6) Documentation: maintain comprehensive change logs that enable reconstruction of the AI system's evolution.
References
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
- Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012).
- Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of AI and Data Analytics. Monetary Authority of Singapore (2018).
- Technology Risk Management Guidelines. Monetary Authority of Singapore (2021).
- Regulatory Guidelines for Software Medical Devices — A Life Cycle Approach (GL-04). Health Sciences Authority Singapore (2022).
- What is AI Verify — AI Verify Foundation. AI Verify Foundation (2023).
- ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).

