
Singapore MAS AI Risk Management Guidelines: What Financial Institutions Need to Know

February 12, 2026 · 14 min read · Pertama Partners
For: Compliance Lead · Risk Officer · CTO/CIO · CEO/Founder

The Monetary Authority of Singapore (MAS) released its AI Risk Management Guidelines for consultation in November 2025, covering all financial institutions. Built on the FEAT principles, the guidelines establish comprehensive AI governance requirements for banks, insurers, and fintechs.


Key Takeaways

  1. Applies to all MAS-regulated financial institutions: banks, insurers, fintechs, and payment providers
  2. Built on the FEAT principles: Fairness, Ethics, Accountability, Transparency
  3. Requires board-level AI oversight, three lines of defense, and a comprehensive AI inventory
  4. Covers the full AI lifecycle: development, deployment, monitoring, and retirement
  5. Third-party AI tools are covered; institutions cannot delegate governance to vendors
  6. GenAI is addressed through Project MindForge, with focus on hallucination, prompt injection, and data leakage

What Are the MAS AI Risk Management Guidelines?

On 17 November 2025, the Monetary Authority of Singapore (MAS) released its Guidelines on Artificial Intelligence Risk Management for Financial Institutions for public consultation. The guidelines establish a comprehensive framework for how banks, insurers, fintechs, and other regulated financial institutions should govern and manage AI risks.

The guidelines build on years of MAS initiatives including the FEAT Principles (Fairness, Ethics, Accountability, Transparency) launched in 2018, the Veritas Initiative, and Project MindForge for GenAI risk management.

Who Must Comply

The guidelines apply to all financial institutions regulated by MAS, including:

  • Banks (local and foreign)
  • Insurance companies and takaful operators
  • Capital markets services firms
  • Payment service providers
  • Fintech companies holding MAS licenses

Implementation is proportionate: larger, more complex institutions with extensive AI use are expected to implement more comprehensive governance, while smaller firms can adopt a lighter-touch approach.

The FEAT Principles Foundation

MAS's AI governance is built on four principles:

Fairness

AI systems should not produce unfairly biased outcomes. Financial institutions must:

  • Define fairness metrics relevant to their AI applications
  • Monitor for bias across demographic groups
  • Take corrective action when bias is detected
  • Document fairness assessments and decisions
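MAS does not prescribe a specific fairness metric, so the choice is left to the institution. One widely used option for monitoring bias across demographic groups is the disparate impact ratio, sketched below; the example data and the 0.8 review threshold (the "four-fifths rule") are illustrative assumptions, not MAS requirements.

```python
def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of favourable-outcome rates between two groups.

    A common rule of thumb (the 'four-fifths rule') flags ratios
    below 0.8 as potential disparate impact.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative loan-approval outcomes (1 = approved) for two groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 1, 0, 0, 1, 0, 0]   # 37.5% approved
ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flag for review
```

A ratio this far below 0.8 would trigger the corrective-action and documentation steps listed above.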

Ethics

AI systems should respect ethical standards. This includes:

  • Ensuring AI is used for legitimate business purposes
  • Avoiding applications that could cause disproportionate harm
  • Considering the societal impact of AI-driven decisions

Accountability

Clear accountability structures must be in place:

  • Board and senior management oversight of AI use
  • Designated AI governance functions
  • Clear escalation procedures for AI incidents
  • Regular reporting on AI risk metrics

Transparency

AI decisions should be explainable to relevant stakeholders:

  • Customers should understand when AI influences decisions affecting them
  • Regulators should be able to review AI decision-making processes
  • Internal audit should have access to AI model documentation
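One common way to make model-driven decisions explainable to customers and auditors is to return "reason codes" alongside each decision. The toy linear model below is purely illustrative (the feature names and weights are invented for this sketch); it shows the pattern, not a MAS-mandated technique.

```python
def decision_with_reasons(features, weights, threshold=0.5):
    """Toy scoring model returning both the decision and the factors
    that pushed the score down ('reason codes')."""
    contributions = {name: features[name] * w for name, w in weights.items()}
    score = sum(contributions.values())
    approved = score >= threshold
    # The two most negative contributors, for customer-facing explanations
    reasons = sorted(contributions, key=contributions.get)[:2]
    return approved, reasons

# Hypothetical weights and applicant features
weights = {"income_band": 0.4, "repayment_history": 0.5, "utilisation": -0.3}
features = {"income_band": 0.6, "repayment_history": 0.4, "utilisation": 0.9}
approved, reasons = decision_with_reasons(features, weights)
print(approved, reasons)  # False ['utilisation', 'repayment_history']
```

Surfacing the top adverse factors lets a customer-facing team explain a decline without exposing the full model.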

Key Requirements

1. AI Governance and Oversight

  • Board responsibility: The board must set the organization's AI risk appetite and approve the AI governance framework
  • Senior management: Must ensure adequate resources for AI risk management
  • Three lines of defense: AI risk management must be integrated into the existing risk management framework
  • AI inventory: Maintain a comprehensive inventory of all AI systems in use
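The guidelines require a comprehensive AI inventory but do not prescribe a schema. A minimal sketch of what one inventory record might capture is below; the field names and example systems are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI inventory (illustrative fields only;
    MAS does not prescribe a specific schema)."""
    name: str
    owner: str                      # accountable business unit
    purpose: str
    vendor: Optional[str] = None    # None for in-house models
    materiality: str = "unassessed" # e.g. low / medium / high
    last_validated: Optional[str] = None

inventory = [
    AISystemRecord("credit-scoring-v3", "Retail Credit Risk",
                   "Retail loan approval", materiality="high"),
    AISystemRecord("chat-summariser", "Customer Ops",
                   "Summarise support chats", vendor="ExampleVendor",
                   materiality="low"),
]

# Governance effort is prioritised by materiality
high_risk = [r.name for r in inventory if r.materiality == "high"]
print(high_risk)  # ['credit-scoring-v3']
```

Recording the vendor per system also feeds directly into the third-party risk requirements discussed later.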

2. AI Lifecycle Controls

MAS expects robust controls across the entire AI lifecycle:

Development:

  • Data quality and representativeness assessments
  • Model validation and testing before deployment
  • Documentation of model design, training data, and limitations
  • Bias testing across relevant demographic categories

Deployment:

  • Staged rollout with monitoring
  • Clear criteria for moving from testing to production
  • Integration with existing operational processes
  • User training and change management

Monitoring:

  • Ongoing performance monitoring against defined metrics
  • Drift detection for data and model performance
  • Regular model revalidation
  • Incident detection and response
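For drift detection, one common industry metric (not mandated by MAS) is the Population Stability Index, which compares a model input or score distribution in production against the distribution seen at validation. The bin values and thresholds below are illustrative.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions (proportions summing to 1).

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift warranting revalidation.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)   # guard against log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at validation
current  = [0.10, 0.20, 0.30, 0.40]   # distribution in production
print(round(population_stability_index(baseline, current), 3))  # 0.228
```

A PSI of 0.228 falls in the "moderate shift" band and would typically trigger closer monitoring or early revalidation.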

Retirement:

  • Clear criteria for when to retire or replace AI models
  • Safe decommissioning processes
  • Data retention and disposal

3. Materiality Assessment

Financial institutions must assess the materiality of each AI application based on:

  • Impact on customers: How significantly does the AI affect customer outcomes?
  • Financial impact: What are the financial risks if the AI fails?
  • Reputational impact: Could AI failures damage the institution's reputation?
  • Operational impact: How critical is the AI to business operations?

Higher-materiality applications face more rigorous governance requirements.
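A materiality assessment like this can be operationalised as a simple scoring rubric. The sketch below rates each of the four dimensions from 1 to 5 and takes the worst score as the governing tier; the scale, thresholds, and tier labels are illustrative assumptions, not MAS-prescribed values.

```python
def materiality_tier(customer_impact, financial_impact,
                     reputational_impact, operational_impact):
    """Combine the four materiality dimensions (each rated 1-5) into a
    governance tier. A max() rule means one severe dimension is enough
    to escalate the whole application."""
    score = max(customer_impact, financial_impact,
                reputational_impact, operational_impact)
    if score >= 4:
        return "high"     # full lifecycle controls, board visibility
    if score >= 3:
        return "medium"   # standard controls, periodic review
    return "low"          # light-touch governance

# A credit-decisioning model: severe customer and financial impact
print(materiality_tier(5, 4, 3, 3))  # high
# An internal document-search assistant
print(materiality_tier(1, 1, 2, 2))  # low
```

Using max() rather than an average reflects the guidance that high impact on any single dimension warrants rigorous governance.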

4. Third-Party AI Risk Management

For AI systems procured from third parties (vendors, cloud providers):

  • Due diligence on vendor AI capabilities
  • Contractual protections for data handling and model performance
  • Regular review of third-party AI performance
  • Exit strategies if vendor relationships change

5. GenAI-Specific Considerations (Project MindForge)

MAS has also addressed GenAI through Project MindForge:

  • Additional risk considerations for large language models and generative AI
  • Focus on hallucination risk, prompt injection, and data leakage
  • Guidance on acceptable use cases for GenAI in financial services
  • Requirements for human oversight of GenAI outputs
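The human-oversight expectation for GenAI outputs is often implemented as a review gate: outputs in sensitive use cases, or with low model confidence, are held for human sign-off instead of being released automatically. The routing policy below is a minimal illustrative sketch; the use-case names and the 0.8 threshold are assumptions, not values from the guidelines.

```python
def route_genai_output(output_text, use_case, confidence):
    """Decide whether a GenAI output can be auto-released or must be
    held for human review. Thresholds and use-case list are
    illustrative, not MAS-prescribed."""
    CUSTOMER_FACING = {"customer_reply", "product_summary"}
    if use_case in CUSTOMER_FACING:
        return "human_review"   # customer impact -> always reviewed
    if confidence < 0.8:
        return "human_review"   # low model confidence
    return "auto_release"       # internal, high-confidence output

print(route_genai_output("...", "customer_reply", 0.95))  # human_review
print(route_genai_output("...", "internal_note", 0.9))    # auto_release
```

A real deployment would also log every routing decision, since the audit trail is itself part of the governance evidence.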

Supporting Initiatives

Veritas Initiative

An industry collaborative that provides practical tools for assessing AI fairness:

  • FEAT assessment methodology
  • Fairness metrics libraries
  • Industry-specific assessment templates
  • Case studies and best practices

AI Verify

Singapore's government-developed AI testing toolkit:

  • Technical testing for fairness, transparency, and robustness
  • Can be used to demonstrate compliance with MAS guidelines
  • Open-source, with industry support through the AI Verify Foundation

How to Comply

Step 1: Establish AI Governance Structure

  • Assign board-level oversight of AI
  • Designate an AI governance function (could be within existing risk or compliance teams)
  • Define your AI risk appetite and governance policies

Step 2: Inventory All AI Applications

  • Catalog every AI system in use
  • Classify each by materiality (customer impact, financial risk, operational criticality)
  • Prioritize governance efforts based on materiality

Step 3: Implement Lifecycle Controls

  • For each material AI application, implement controls for development, deployment, monitoring, and retirement
  • Document model design, training data, validation results, and limitations
  • Establish ongoing monitoring and drift detection

Step 4: Address Fairness

  • Define fairness metrics for each customer-facing AI application
  • Conduct initial bias assessments
  • Implement ongoing monitoring for bias
  • Establish corrective action procedures

Step 5: Manage Third-Party AI Risk

  • Review contracts with AI vendors
  • Assess vendor AI governance practices
  • Establish monitoring of vendor-provided AI performance
  • Develop contingency plans

Related Regulations

  • Singapore PDPA: Data protection requirements that apply to all AI data processing
  • Singapore Model AI Governance Framework: Voluntary best practices that complement MAS guidelines
  • Malaysia BNM AI Guidelines: Comparable requirements for Malaysian financial institutions
  • Thailand BOT AI Risk Management: Similar framework for Thai financial services
  • EU AI Act: Classifies financial AI as high-risk with comparable requirements

Frequently Asked Questions

Are the guidelines legally binding?

The guidelines were released for consultation in November 2025, with the consultation period closing 31 January 2026. Once finalized (expected 2026), they will be considered supervisory expectations, meaning MAS will evaluate financial institutions' compliance during inspections and supervisory reviews. Non-compliance could result in supervisory action.

Do the guidelines apply to smaller institutions?

Yes, but proportionately. MAS applies the principle of proportionality: smaller, less complex institutions can implement a lighter governance framework. However, all MAS-regulated entities, regardless of size, are expected to have basic AI governance in place if they use AI for material business functions.

What are the FEAT principles?

FEAT stands for Fairness, Ethics, Accountability, and Transparency. Launched in 2018, these four principles form the foundation of MAS's AI governance approach. Financial institutions must ensure their AI systems are fair (not biased), ethical (used responsibly), accountable (clear ownership and oversight), and transparent (explainable to stakeholders).

Who is responsible when an institution uses third-party AI tools?

Financial institutions remain responsible for AI governance even when using third-party AI tools. This means conducting due diligence on vendors, including contractual protections in vendor agreements, monitoring vendor AI performance, and maintaining exit strategies. The institution cannot delegate its governance responsibilities to the vendor.

Do the guidelines cover generative AI?

Yes. MAS addressed GenAI through Project MindForge, and the guidelines include considerations for generative AI. Key focus areas include hallucination risk, prompt injection, data leakage, and the need for human oversight of GenAI outputs. Financial institutions using GenAI face additional scrutiny on acceptable use cases.

References

  1. Guidelines on Artificial Intelligence Risk Management for Financial Institutions. Monetary Authority of Singapore (MAS), 2025.
  2. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore (MAS), 2018.

Ready to Apply These Insights to Your Organization?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit