
AI Regulations in Singapore: IMDA Guidelines and Compliance Requirements

October 21, 2025 · 11 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance · CTO/CIO · CISO · Board Member · Consultant · CHRO · Head of Operations · IT Manager

Complete guide to Singapore AI governance. Covers IMDA Model Framework, PDPA requirements for AI, MAS guidelines, and practical implementation.


Key Takeaways

  1. IMDA Model AI Governance Framework provides practical guidance for responsible AI deployment
  2. Singapore takes a principles-based approach rather than prescriptive regulation
  3. AI Verify is Singapore's testing toolkit for demonstrating responsible AI practices
  4. Financial services, healthcare, and government sectors face additional AI requirements
  5. Early adoption of governance frameworks provides competitive advantage in regulated sectors


Singapore has positioned itself as a leader in AI governance through practical, business-friendly frameworks. While currently voluntary, these frameworks set expectations that organizations should meet, and are increasingly referenced in contracts, audits, and regulatory discussions.

Executive Summary

Singapore favors principles over prescriptions, building its AI governance approach on practical flexibility rather than rigid mandates. The foundational document in this system is the Model AI Governance Framework, which sets clear expectations for responsible AI deployment even in the absence of a legal mandate. Where legal force does apply is in data protection: the Personal Data Protection Act (PDPA) governs any AI system that processes personal data, and compliance is not optional.

Layered on top of these horizontal frameworks are sector-specific requirements. Financial institutions, for example, must satisfy the Monetary Authority of Singapore's (MAS) additional guidelines. Organizations that treat these frameworks as purely voluntary take a short-sighted view. What is guidance today is increasingly becoming a contractual and regulatory baseline, and building governance capability now positions an organization well for the mandatory requirements that are widely anticipated. Singapore's regional leadership role amplifies this effect: its frameworks serve as reference points across ASEAN and are gaining global relevance. Throughout, the emphasis is on practical, actionable governance rather than paper exercises, with accountability as the core organizing principle. Organizations must be prepared to answer for how their AI systems operate and the decisions those systems produce.


Why This Matters Now

Singapore's AI governance landscape is maturing rapidly along several dimensions. The Model AI Governance Framework has achieved wide adoption among organizations operating in the city-state, while the PDPC's guidance on AI and personal data now provides specific, actionable expectations that go beyond general principles. MAS has introduced detailed AI requirements for financial institutions that carry meaningful regulatory weight. At the same time, customer and investor due diligence on AI governance has intensified, creating commercial pressure that reinforces regulatory direction. Singapore's frameworks are also serving as the reference architecture for ASEAN-wide coordination, extending their practical significance well beyond the city-state's borders.

Organizations operating in Singapore, or serving Singapore customers, should understand and implement these expectations.


Singapore's AI Governance Framework

Model AI Governance Framework

Published by IMDA and PDPC, now in its second edition, this framework is organized around four key areas:

1. Internal Governance Structures and Measures

Requirement | What It Means | Implementation
Clear roles and responsibilities | Someone is accountable for AI | Designate AI governance owner
Board and management oversight | Leadership understands AI risks | Regular AI reporting to leadership
Risk management integration | AI risks in enterprise risk | Include AI in risk framework
Operations management | AI lifecycle managed | Governance through development and deployment

2. Determining AI Decision-Making Model

The framework distinguishes three levels of human involvement in AI-driven decisions. In a human-in-the-loop model, the human retains full decision-making authority while AI provides supporting analysis. A human-over-the-loop model grants the AI greater autonomy while preserving the human's ability to monitor outcomes and intervene when necessary. At the far end of the spectrum, a human-out-of-the-loop model allows the AI to operate autonomously, which carries the highest governance bar. Organizations should select the appropriate model based on the risk profile and potential impact of each AI application.
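As an illustration only, the selection of an oversight model can be driven by a simple severity-times-probability risk score. The scoring scale and thresholds below are hypothetical, not values taken from the framework:

```python
from enum import Enum

class OversightModel(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"          # human makes the final decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"      # human monitors, can intervene
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # AI operates autonomously

def select_oversight_model(severity: int, probability: int) -> OversightModel:
    """Map a simple risk score (severity x probability, each rated 1-5)
    to an oversight model. Thresholds are illustrative."""
    risk = severity * probability
    if risk >= 15:
        return OversightModel.HUMAN_IN_THE_LOOP
    if risk >= 6:
        return OversightModel.HUMAN_OVER_THE_LOOP
    return OversightModel.HUMAN_OUT_OF_THE_LOOP
```

In practice the scoring rubric would come from the organization's own risk framework; the point is that the choice of model is documented and repeatable rather than ad hoc.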

3. Operations Management

Area | Requirements
Data management | Quality, accuracy, and relevance of training data
Model development | Robust development practices
Model deployment | Testing, validation, and monitoring
Performance monitoring | Ongoing accuracy and effectiveness tracking

4. Stakeholder Interaction and Communication

Stakeholder | Expectation
Users | Informed about AI use; can seek clarification
Affected parties | Recourse available for adverse decisions
Regulators | Transparency about AI governance
Public | Organizational stance on AI ethics clear

PDPA Requirements for AI

Singapore's Personal Data Protection Act applies when AI processes personal data:

Key PDPA Principles Applied to AI

PDPA Requirement | AI Application
Consent | Obtain consent for AI processing of personal data
Purpose limitation | Use data only for consented purposes
Notification | Inform individuals about AI processing
Access and correction | Enable access to and correction of AI-processed data
Accuracy | Ensure AI uses and produces accurate data
Protection | Secure personal data in AI systems
Retention limitation | Don't retain AI data longer than necessary
Transfer limitation | Cross-border AI processing compliance
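The retention limitation principle is straightforward to operationalize. A minimal sketch, assuming the retention period is an organization-set policy value (the PDPA does not prescribe a fixed number of days):

```python
from datetime import date, timedelta

def retention_exceeded(collected_on: date, retention_days: int, today: date) -> bool:
    """Flag a record held past its retention period (PDPA retention limitation).
    `retention_days` is a policy value set by the organization, not a statutory figure."""
    return today > collected_on + timedelta(days=retention_days)
```

A scheduled job applying this check across training datasets and inference logs gives auditable evidence that retention policies are enforced, not merely documented.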

PDPC Advisory Guidelines on AI

The PDPC has issued specific guidance across three pillars that shape how organizations must govern AI systems.

Accountability sits at the center of the PDPC's expectations. Organizations bear direct responsibility for the outcomes their AI systems produce and must be able to demonstrate compliance at any point. Critically, this accountability cannot be outsourced to technology vendors; the deploying organization remains the accountable party regardless of who built or operates the underlying model.

Explainability requires that individuals understand the AI-driven decisions that affect them. The level of explanation should be proportionate to the decision's impact on the individual. The PDPC does not demand technical precision in these explanations; what matters is that the explanation is meaningful and accessible to the person affected.

Fairness addresses the obligation to ensure AI systems do not unfairly discriminate. Organizations are expected to conduct bias testing on their AI applications and to implement remediation measures when bias is discovered.
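One common way to operationalize bias testing is a demographic parity check across groups. The metric choice and the 10% threshold below are illustrative assumptions, not PDPC requirements:

```python
def demographic_parity_gap(outcomes: dict) -> float:
    """outcomes maps group name -> (positive_decisions, total_decisions).
    Returns the largest difference in positive-decision rates across groups."""
    rates = {group: pos / total for group, (pos, total) in outcomes.items()}
    return max(rates.values()) - min(rates.values())

def passes_bias_test(outcomes: dict, threshold: float = 0.1) -> bool:
    """Illustrative pass/fail rule: gap within `threshold` counts as passing."""
    return demographic_parity_gap(outcomes) <= threshold
```

Which fairness metric is appropriate depends on the application; the governance requirement is that some defined test is run regularly and that failures trigger remediation.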


Sector-Specific: Financial Services (MAS)

The Monetary Authority of Singapore has detailed expectations:

FEAT Principles

MAS structures its AI governance expectations around four interlocking principles known as FEAT.

Fairness requires that AI-driven decisions in financial services are equitable and non-discriminatory. Institutions must conduct regular bias testing and maintain established remediation processes for when testing reveals problematic outcomes.

Ethics demands that AI use aligns with the institution's broader ethical principles, that customer interests are protected throughout, and that the institution maintains transparency about where and how AI is being applied.

Accountability mandates clear ownership of every AI system, governance structures that provide meaningful oversight, and the organizational capability to explain how the system reached a given decision when asked by customers or regulators.

Transparency closes the loop by requiring that customers are informed about AI use, that regulators can understand how AI systems operate, and that comprehensive documentation is maintained throughout the AI lifecycle.

MAS Technology Risk Management Guidelines

Financial institutions must also satisfy the MAS Technology Risk Management (TRM) requirements, which apply directly to AI systems. Model risk management practices are mandatory, and institutions are expected to conduct thorough assessments of third-party AI vendors before onboarding and throughout the relationship.


Implementation Roadmap

Phase 1: Foundation (Weeks 1-4)

Governance structure:

  • Designate AI governance owner
  • Establish oversight mechanism
  • Define escalation procedures
  • Create AI governance policy

Inventory and assessment:

  • Catalog all AI systems
  • Map personal data in AI
  • Classify risk levels
  • Document purposes
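The inventory step above amounts to maintaining one structured record per AI system. A minimal sketch, with illustrative field names (the framework does not mandate a schema):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory; field names are illustrative."""
    name: str
    purpose: str
    personal_data: list  # categories of personal data processed (empty if none)
    risk_level: str      # e.g. "low", "medium", "high"
    owner: str           # designated accountable person

def high_risk_systems(inventory: list) -> list:
    """Names of systems requiring the fullest governance treatment."""
    return [s.name for s in inventory if s.risk_level == "high"]
```

Even a spreadsheet with these columns satisfies the intent; the structured form simply makes later reporting and gap analysis mechanical.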

Phase 2: Core Compliance (Weeks 5-8)

PDPA compliance:

  • Review consent mechanisms for AI
  • Update privacy notices
  • Implement access/correction for AI data
  • Establish retention policies

Human oversight:

  • Define oversight model per system
  • Implement intervention capabilities
  • Train oversight staff
  • Document procedures

Phase 3: Enhanced Governance (Weeks 9-12)

Documentation:

  • Complete technical documentation
  • Document governance decisions
  • Prepare audit-ready materials
  • Establish evidence repository

Monitoring:

  • Implement performance monitoring
  • Establish bias testing
  • Create reporting mechanisms
  • Schedule regular reviews
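Performance monitoring can start as simply as comparing rolling accuracy against a validated baseline. A sketch, with an assumed tolerance value:

```python
def accuracy_alert(window_accuracies: list, baseline: float, tolerance: float = 0.05) -> bool:
    """True when the rolling-window mean accuracy drops more than
    `tolerance` below the validated baseline. The tolerance is illustrative;
    set it per system based on risk level."""
    mean = sum(window_accuracies) / len(window_accuracies)
    return mean < baseline - tolerance
```

Alerts like this feed the reporting mechanisms above: each alert should create a documented review, which in turn becomes audit evidence of governance in action.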

Common Failure Modes

1. Treating the framework as optional. While the Model AI Governance Framework is legally voluntary, it has become the de facto industry standard. Customers and partners increasingly reference it in procurement requirements and due diligence processes, making non-adoption a competitive liability rather than merely a governance gap.

2. Documentation without implementation. Policies that exist on paper but lack corresponding operational practices do not satisfy governance requirements. Regulators and auditors look for evidence of governance in action, not binders on a shelf.

3. Ignoring PDPA for AI. The PDPA is enforceable law, not voluntary guidance. Any AI system that processes personal data must comply fully, and the penalties for non-compliance are substantial.

4. Sector-blindness. Organizations in financial services and other regulated sectors face additional obligations beyond the general frameworks. Failing to identify and address these layered requirements creates significant regulatory exposure.

5. One-time compliance. Governance is an ongoing operational discipline, not a point-in-time implementation exercise. AI systems evolve, data inputs change, and regulatory expectations shift. Without continuous maintenance and periodic reassessment, governance posture degrades.


Singapore AI Compliance Checklist

SINGAPORE AI COMPLIANCE CHECKLIST

Governance Structure
[ ] AI governance owner designated
[ ] Board/management oversight established
[ ] AI risk in enterprise risk framework
[ ] AI governance policy documented

Model AI Governance Framework
[ ] AI systems inventoried
[ ] Decision-making model selected per system
[ ] Human oversight appropriate to risk
[ ] Stakeholder communication approach defined

PDPA Compliance
[ ] Personal data in AI systems mapped
[ ] Consent obtained for AI processing
[ ] Privacy notices updated for AI
[ ] Access and correction processes include AI
[ ] Data protection measures implemented
[ ] Retention policies applied
[ ] Cross-border compliance verified

Sector-Specific (if applicable)
[ ] MAS FEAT principles addressed (financial services)
[ ] Industry-specific requirements identified
[ ] Sector regulator guidance reviewed

Documentation
[ ] Technical documentation complete
[ ] Governance decisions documented
[ ] Testing results maintained
[ ] Audit trail established

Monitoring
[ ] Performance monitoring active
[ ] Bias testing conducted
[ ] Regular review scheduled
[ ] Improvement process defined

Metrics to Track

Metric | Target | Frequency
AI systems with governance assessment | 100% | Quarterly
PDPA compliance for AI personal data | 100% | Ongoing
Staff training completion | >95% | Annually
Governance review completion | 100% | Annually
Bias testing for high-risk AI | 100% | Semi-annually
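Computing these metrics from the inventory is mechanical. A minimal sketch for the coverage-style metrics (function names are illustrative):

```python
def governance_coverage(assessed: int, total: int) -> float:
    """Percent of inventoried AI systems with a completed governance assessment.
    An empty inventory is treated as fully covered."""
    if total == 0:
        return 100.0
    return 100.0 * assessed / total

def meets_target(value: float, target: float) -> bool:
    """True when a tracked metric meets or exceeds its target."""
    return value >= target
```

Tracking the numbers quarterly matters less than the follow-up: a miss against target should open a remediation item with an owner and a deadline.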

FAQ

Q: Is the Model AI Governance Framework legally required?
A: Not directly, but it sets industry expectations. PDPA compliance is legally required, and the framework helps demonstrate it.

Q: Does PDPA apply to AI that doesn't use personal data?
A: PDPA applies only when personal data is processed. AI using only non-personal data isn't subject to PDPA but should still follow governance principles.

Q: What are the penalties for non-compliance?
A: Since the 2022 amendments, PDPA violations can draw financial penalties of up to S$1 million, or up to 10% of annual Singapore turnover for organizations whose local turnover exceeds S$10 million. Sector-specific violations may have additional consequences.

Q: How does this compare to the EU AI Act?
A: Singapore's approach is less prescriptive. The EU AI Act imposes binding requirements with specific risk categories; Singapore emphasizes principles with organizational flexibility.

Q: Should we align with the EU AI Act as well?
A: If you serve EU customers or your AI affects EU residents, yes. Many organizations align with both frameworks.


Next Steps

Singapore compliance is part of regional governance:

  • [AI Regulations in 2026: What Businesses Need to Know]
  • [AI Regulations in Malaysia: Current Framework and Future Directions]
  • [AI Regulations in Thailand: DEPA Guidelines and Business Compliance]

Disclaimer

This article provides general guidance on Singapore AI regulations. It does not constitute legal advice. Organizations should consult qualified Singapore legal counsel for specific compliance requirements.


IMDA's AI Governance Framework in Practice

Singapore's approach to AI governance through the Infocomm Media Development Authority emphasizes industry-led voluntary adoption supported by practical implementation guidance. The Model AI Governance Framework provides organizations with structured principles covering internal governance, risk assessment, operations management, and stakeholder interaction and communication. Companies operating in Singapore should implement AI governance practices that align with IMDA's framework even when not legally required, as this demonstrates responsible AI deployment to customers, partners, and regulators. The AI Verify testing framework and toolkit, developed by IMDA, provides organizations with technical tools to test and demonstrate the trustworthiness of their AI systems against recognized governance principles, offering a practical pathway from governance commitment to verifiable compliance.

Practical Steps for IMDA Compliance Readiness

Organizations operating in Singapore should take concrete steps toward AI governance compliance even before mandatory regulations are enacted. Conduct an internal AI system inventory documenting all AI applications in use, their data inputs, decision outputs, and risk classifications. Map each AI system against the Model AI Governance Framework's recommended practices to identify governance gaps requiring attention. Implement AI Verify testing for customer-facing AI systems to establish baseline trustworthiness metrics and demonstrate proactive governance commitment. Designate an internal AI governance coordinator responsible for monitoring IMDA guidance updates, coordinating compliance activities across departments, and maintaining the documentation portfolio that demonstrates organizational commitment to responsible AI deployment.

Industry-Specific AI Governance in Singapore

Beyond IMDA's cross-industry framework, Singapore's sectoral regulators have issued AI-specific guidance for regulated industries. The Monetary Authority of Singapore's FEAT principles provide financial institutions with fairness, ethics, accountability, and transparency guidelines for AI use in banking, insurance, and capital markets. The Ministry of Health has published guidance on AI in healthcare covering clinical decision support validation, patient data protection, and algorithmic transparency for diagnostic applications. Organizations operating in regulated sectors should layer industry-specific requirements on top of IMDA's general framework to develop comprehensive governance programs that satisfy both horizontal and vertical regulatory expectations.

Building Compliance Programs for Singapore Operations

Organizations establishing or expanding AI operations in Singapore should build compliance programs that address both current requirements and anticipated regulatory evolution. Singapore's regulatory approach has consistently favored industry collaboration and practical guidance over punitive enforcement, creating an environment where proactive governance adoption is rewarded through regulatory goodwill and faster approval processes for new AI deployments. Invest in relationships with IMDA and relevant sectoral regulators through participation in public consultations, industry sandboxes, and governance pilot programs. These engagement activities provide early visibility into regulatory direction and demonstrate the organizational commitment to responsible AI deployment that regulators value when evaluating novel AI applications.

Singapore's regulatory approach emphasizes proportionality, applying governance requirements that scale with the risk level of each AI application rather than imposing uniform requirements across all AI deployments regardless of their potential impact. This risk-proportionate model enables organizations to deploy low-risk AI applications with minimal governance overhead while reserving comprehensive assessment and documentation requirements for high-risk systems that affect individual rights or critical infrastructure.

Practical Next Steps

Translating these governance frameworks into operational reality requires deliberate organizational action. The starting point is establishing a cross-functional governance committee with clear decision-making authority and regular review cadences that bring together legal, technology, risk, and business stakeholders. From there, organizations should document their current governance processes and conduct a gap analysis against the regulatory requirements in each market where they operate. Standardized templates for governance reviews, approval workflows, and compliance documentation reduce friction and promote consistency across business units. Quarterly governance assessments ensure the framework evolves alongside both regulatory developments and organizational changes, preventing the governance posture from becoming stale. Finally, building internal governance capabilities through targeted training programs for stakeholders across different business functions ensures that governance is understood and practiced at the operational level, not only at the policy level.

Effective governance structures require deliberate investment in organizational alignment, executive accountability, and transparent reporting mechanisms. Without these foundational elements, governance frameworks remain theoretical documents rather than living operational systems.

Common Questions

Q: What is Singapore's overall approach to AI regulation?
A: Singapore takes a principles-based approach through the IMDA Model AI Governance Framework rather than prescriptive rules. It emphasizes practical guidance, voluntary adoption, and sector-specific requirements for regulated industries.

Q: What is AI Verify?
A: AI Verify is Singapore's testing framework and toolkit that allows organizations to demonstrate responsible AI practices. It provides standardized tests for fairness, explainability, and robustness of AI systems.

Q: Which sectors face additional requirements?
A: Financial services (MAS guidelines), healthcare, and government sectors face additional AI requirements including model risk management, explainability standards, and enhanced documentation.

References

  1. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
  2. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore, 2012.
  3. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore, 2018.
  4. What is AI Verify. AI Verify Foundation, 2023.
  5. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
  6. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, 2023.
  7. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization, 2023.

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

