What Are the MAS AI Risk Management Guidelines?
On 13 November 2025, the Monetary Authority of Singapore (MAS) published a Consultation Paper proposing Guidelines on Artificial Intelligence Risk Management for Financial Institutions. The guidelines establish a comprehensive framework governing how banks, insurers, fintechs, and other regulated financial institutions should manage the risks that accompany AI adoption, from model development through retirement.
These guidelines did not emerge in a vacuum. They build on years of foundational MAS work, beginning with the FEAT Principles (Fairness, Ethics, Accountability, Transparency) published in November 2018, continuing through the Veritas Initiative's industry collaboration on fairness assessment, and extending into Project MindForge for generative AI risk management. Taken together, they represent one of the most structured regulatory approaches to AI governance in financial services anywhere in the world.
Who Must Comply
The guidelines apply to all financial institutions regulated by MAS, encompassing local and foreign banks, insurance companies and takaful operators, capital markets services firms, payment service providers, and fintech companies holding MAS licenses.
MAS has adopted a proportionality principle for implementation. Larger, more complex institutions with extensive AI deployments are expected to build correspondingly comprehensive governance structures, while smaller firms may take a scaled approach that reflects the narrower scope and lower materiality of their AI use. The critical point for regulated entities is that no institution is exempt; the question is one of degree, not applicability.
The FEAT Principles Foundation
The entire MAS AI governance architecture rests on four interconnected principles that define what responsible AI looks like in financial services.
Fairness
AI systems must not produce unfairly biased outcomes. MAS expects financial institutions to define fairness metrics relevant to each AI application, monitor outputs for bias across demographic groups, take corrective action when bias is detected, and maintain documentation of all fairness assessments and the decisions that follow from them. In practice, this means that a credit-scoring model, an insurance pricing algorithm, and a fraud detection system each require their own tailored fairness framework rather than a single institutional policy applied uniformly.
Ethics
AI systems must respect ethical standards that go beyond narrow regulatory compliance. Institutions are expected to ensure that AI serves legitimate business purposes, that no application creates disproportionate harm to customers or communities, and that the societal impact of AI-driven decisions receives genuine consideration rather than perfunctory acknowledgment. For C-suite leaders, this principle creates an expectation of proactive ethical review, not reactive damage control.
Accountability
Clear accountability structures must run from the boardroom to the model development team. MAS expects board and senior management oversight of AI use, designated AI governance functions with defined mandates, clear escalation procedures for AI incidents, and regular reporting on AI risk metrics to senior leadership. The principle is designed to prevent the diffusion of responsibility that often accompanies complex technology programs, where no single person can explain who approved a model or who is responsible when it fails.
Transparency
AI decisions must be explainable to the stakeholders they affect. Customers should understand when AI influences decisions that affect them, such as credit approvals or insurance pricing. Regulators should be able to review AI decision-making processes during supervisory examinations. Internal audit teams should have full access to AI model documentation. Transparency, in the MAS framework, is not about publishing source code; it is about ensuring that every stakeholder in the chain can obtain an explanation appropriate to their role.
Key Requirements
1. AI Governance and Oversight
MAS places governance responsibility squarely at the top of the organization. The board must set the institution's AI risk appetite and approve the AI governance framework. Senior management must ensure adequate resources for AI risk management, including staffing, technology, and training. AI risk management must be integrated into the institution's existing three-lines-of-defense model rather than treated as a standalone function. Perhaps most operationally significant, institutions must maintain a comprehensive inventory of all AI systems in use, a requirement that many organizations will find surprisingly difficult to fulfill given the pace at which AI tools proliferate across business units.
2. AI Lifecycle Controls
MAS expects robust controls across the entire AI lifecycle, from initial development through eventual retirement.
During development, institutions must conduct data quality and representativeness assessments, validate and test models before deployment, document model design alongside training data and known limitations, and perform bias testing across relevant demographic categories.

At deployment, MAS expects staged rollouts with monitoring, clear criteria for promoting models from testing to production, integration with existing operational processes, and user training paired with change management programs.

Once in production, institutions must maintain ongoing performance monitoring against defined metrics, implement drift detection for both data and model performance, conduct regular model revalidation, and operate incident detection and response capabilities.

Even retirement requires governance: institutions need clear criteria for when to retire or replace AI models, safe decommissioning processes, and appropriate data retention and disposal procedures.
The lifecycle approach reflects MAS's recognition that AI risk is not a point-in-time concern. A model that performs well at deployment can degrade as the data environment shifts, making continuous monitoring as important as initial validation.
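To make the monitoring expectation concrete, data drift is often tracked with a statistic such as the population stability index (PSI), which compares the distribution of a model input at validation time against live production data. The guidelines do not prescribe a particular metric; the PSI, the binning scheme, and the 0.2 alert threshold below are common industry conventions used here purely as a sketch.

```python
# Minimal sketch of data-drift detection using the population stability
# index (PSI). The binning scheme and the 0.2 alert threshold are
# illustrative industry conventions, not values prescribed by MAS.
import math

def psi(baseline, live, bins=10):
    """Population stability index between two numeric samples.

    Bins are derived from the baseline sample's range; a small floor
    on bin shares avoids taking the log of zero for empty bins.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the baseline max

    def share(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # values below the baseline minimum
        total = len(sample)
        return [max(c / total, 1e-6) for c in counts]

    b, l = share(baseline), share(live)
    return sum((lv - bv) * math.log(lv / bv) for bv, lv in zip(b, l))

def drift_alert(baseline, live, threshold=0.2):
    """True when the PSI exceeds the (illustrative) alert threshold."""
    return psi(baseline, live) > threshold
```

In practice a check like this would run on a schedule for each monitored feature and model score, with alerts routed into the institution's existing incident detection and response process.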
3. Materiality Assessment
Not all AI applications carry the same risk, and MAS does not expect identical governance for every algorithm. Financial institutions must assess the materiality of each AI application across four dimensions: the significance of its impact on customer outcomes, the financial risks if the AI fails or produces erroneous results, the potential for reputational damage, and the criticality of the AI to ongoing business operations. Higher-materiality applications face correspondingly more rigorous governance requirements. This tiered approach allows institutions to concentrate resources where they matter most while avoiding the paralysis that can result from applying maximum governance to every automated process.
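A materiality assessment along these lines can be reduced to a simple scoring rubric over the four dimensions MAS names. The 1–3 scale, the tier cut-offs, and the rule that any single severe dimension forces a "high" classification are hypothetical design choices for illustration, not values taken from the guidelines:

```python
# Hypothetical materiality scoring rubric over the four dimensions named
# by MAS. The 1-3 scale and tier cut-offs are illustrative, not prescribed.
DIMENSIONS = (
    "customer_impact",          # significance of impact on customer outcomes
    "financial_risk",           # losses if the AI fails or produces errors
    "reputational_risk",        # potential for reputational damage
    "operational_criticality",  # criticality to ongoing business operations
)

def materiality_tier(scores: dict) -> str:
    """Classify an AI application as high/medium/low materiality.

    `scores` maps each dimension to an integer from 1 (low) to 3 (high).
    Any single dimension at 3 forces a 'high' classification, so that a
    severe exposure on one axis cannot be averaged away by the others.
    """
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    if max(scores[d] for d in DIMENSIONS) == 3:
        return "high"
    total = sum(scores[d] for d in DIMENSIONS)
    return "medium" if total >= 7 else "low"
```

Under this sketch, a credit-scoring model rated 3 on customer impact lands in the high tier regardless of its other scores, which matches the intuition that customer-facing credit decisions warrant the most rigorous governance.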
4. Third-Party AI Risk Management
For AI systems procured from vendors, cloud providers, or other third parties, MAS extends governance expectations beyond the institution's own development teams. Institutions must conduct due diligence on vendor AI capabilities before procurement, secure contractual protections covering data handling and model performance standards, review third-party AI performance on a regular basis, and maintain exit strategies in the event that vendor relationships deteriorate or terminate. As financial institutions increasingly rely on externally developed AI, particularly large language models and cloud-hosted machine learning services, this requirement ensures that outsourcing the technology does not mean outsourcing accountability for its risks.
5. GenAI-Specific Considerations (Project MindForge)
Through Project MindForge, MAS has addressed the distinct risk profile of generative AI. The initiative recognizes that large language models and other generative systems introduce risks that traditional model governance frameworks were not designed to handle, including hallucination, prompt injection, and data leakage. MAS has provided guidance on acceptable use cases for GenAI in financial services and established expectations for human oversight of GenAI outputs. For institutions exploring GenAI for customer-facing applications, internal knowledge management, or code generation, the Project MindForge guidance provides the regulatory guardrails within which experimentation should proceed.
Supporting Initiatives
Veritas Initiative
The Veritas Initiative operates as an industry collaborative that translates the FEAT Principles into practical tools financial institutions can use. It provides a FEAT assessment methodology, fairness metrics libraries, industry-specific assessment templates, and case studies drawn from participating institutions. For organizations that accept the principles but struggle with implementation, Veritas offers a bridge between regulatory expectation and operational reality.
AI Verify
AI Verify is Singapore's government-developed AI testing toolkit, launched by the Infocomm Media Development Authority (IMDA). The open-source platform provides technical testing capabilities for fairness, transparency, and robustness, and it can be used to demonstrate compliance with MAS guidelines. Supported by the AI Verify Foundation, the toolkit gives institutions a standardized way to assess their AI systems against regulatory expectations without building proprietary testing infrastructure from scratch.
How to Comply
Step 1: Establish AI Governance Structure
The foundation of compliance is governance architecture. Institutions should assign board-level oversight of AI, designate an AI governance function (which may sit within existing risk or compliance teams rather than requiring a new organizational unit), and formally define their AI risk appetite alongside governance policies. Without this structural foundation, downstream compliance activities lack the authority and mandate to be effective.
Step 2: Inventory All AI Applications
Before governance can be applied, institutions must know what they are governing. This requires cataloguing every AI system in use across the organization, classifying each application by materiality based on customer impact, financial risk, and operational criticality, and then prioritizing governance efforts according to that materiality assessment. Many institutions discover during this exercise that AI use is far more widespread than leadership realized, with individual business units having adopted tools that never passed through central technology procurement.
Step 3: Implement Lifecycle Controls
For each material AI application, institutions must implement controls spanning development, deployment, monitoring, and retirement. This includes documenting model design, training data, validation results, and known limitations, as well as establishing ongoing monitoring and drift detection capabilities. The depth of controls should be proportionate to materiality, but the expectation of lifecycle coverage applies to all material systems.
Step 4: Address Fairness
Fairness compliance begins with defining appropriate metrics for each customer-facing AI application, conducting initial bias assessments against those metrics, implementing ongoing monitoring to detect emerging bias as models interact with changing data, and establishing corrective action procedures for when bias is identified. Fairness is not a one-time certification; it is a continuous obligation that requires sustained investment in monitoring infrastructure and analytical capability.
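One widely used starting point for such a metric is the disparate-impact ratio: each demographic group's approval rate divided by the rate of the most-approved group. The 0.8 cut-off below is the common "four-fifths" heuristic, offered here as one possible metric an institution might define, not a figure set by MAS:

```python
# Sketch of a disparate-impact check on approval outcomes by group.
# The four-fifths (0.8) threshold is a common heuristic, not a MAS value.
def approval_rates(outcomes):
    """outcomes: iterable of (group, approved: bool) pairs."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        if ok:
            approved[group] = approved.get(group, 0) + 1
    return {g: approved.get(g, 0) / n for g, n in totals.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Return (ratios, flagged_groups).

    Each group's approval rate is compared against the highest-rate
    group; groups below `threshold` of that rate are flagged for review.
    """
    rates = approval_rates(outcomes)
    best = max(rates.values()) or 1.0  # avoid div-by-zero if no approvals
    ratios = {g: r / best for g, r in rates.items()}
    flagged = sorted(g for g, r in ratios.items() if r < threshold)
    return ratios, flagged
```

A flagged group is the trigger for the corrective-action procedures the step describes, not an automatic verdict of unfairness; the institution still owes an analysis of why the disparity arose.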
Step 5: Manage Third-Party AI Risk
Institutions must review contracts with AI vendors to ensure adequate protections, assess vendor AI governance practices through structured due diligence, establish monitoring of vendor-provided AI performance against contractual benchmarks, and develop contingency plans for vendor disruption or exit. As the share of AI capability sourced externally continues to grow, third-party risk management will increasingly define the quality of an institution's overall AI governance posture.
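For the monitoring element specifically, vendor-provided model performance can be tracked against the contractual benchmarks and breaches escalated to the AI governance function. The metric names and threshold values in this sketch are hypothetical:

```python
# Hypothetical check of vendor AI performance against contractual
# benchmarks. Metric names and threshold values are illustrative only.
def check_vendor_slas(measured: dict, benchmarks: dict) -> list[str]:
    """Return the metrics where measured performance breaches the
    contractual minimum, for escalation to the AI governance function."""
    breaches = []
    for metric, minimum in benchmarks.items():
        value = measured.get(metric)
        if value is None:
            breaches.append(f"{metric}: not reported by vendor")
        elif value < minimum:
            breaches.append(f"{metric}: {value} below contractual {minimum}")
    return breaches
```

Treating a missing vendor report as a breach in its own right reflects the point above: outsourcing the technology does not outsource accountability for knowing how it performs.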
Related Regulations
Singapore's MAS guidelines operate within a broader regulatory ecosystem. The Singapore Personal Data Protection Act (PDPA) imposes data protection requirements that apply to all AI data processing. The Singapore Model AI Governance Framework provides voluntary best practices that complement the MAS guidelines with cross-industry relevance. Across the region, Bank Negara Malaysia's AI guidelines and the Bank of Thailand's AI risk management framework create comparable requirements for financial institutions in those jurisdictions. Globally, the EU AI Act classifies financial AI as high-risk, imposing requirements that share significant conceptual overlap with the MAS approach.
How MAS Guidelines Compare to Bank Negara Malaysia and Bank of Thailand Requirements
Southeast Asian financial regulators have taken meaningfully different approaches to AI risk management, and those differences affect multinational institutions operating across the region. MAS provides the most detailed AI-specific guidance, with explicit expectations around model governance, fairness testing, and explainability for AI-driven financial decisions. Bank Negara Malaysia addresses AI risk within its broader Technology Risk Management Framework, as outlined in its Discussion Paper on AI in the Financial Sector (August 2025), without standalone binding AI guidance. This requires institutions to extrapolate AI-specific controls from general technology risk principles, an exercise that introduces interpretive ambiguity. The Bank of Thailand has issued specific AI risk guidelines for supervised institutions, with a level of specificity that falls between Singapore's comprehensive approach and Malaysia's general technology framework.
Practical Implementation for Regional Financial Institutions
Financial institutions operating across ASEAN should implement MAS guidelines as their regional baseline, since they represent the most comprehensive requirements in the region. From that foundation, institutions can then verify compliance with additional jurisdiction-specific provisions in each market where they operate. This approach avoids the cost and complexity of maintaining parallel governance programs for each country while ensuring that the governance floor satisfies the most demanding regional regulator. The alternative, building separate frameworks for each jurisdiction, creates duplication, inconsistency, and the risk that the least governed market becomes the weakest link in the institution's AI risk posture.
What MAS Expects From Financial Institutions Using AI in 2026
MAS has progressively increased its expectations for AI governance through successive guidance publications, and the trajectory points clearly toward continued tightening. Current expectations include documented model risk management frameworks covering the complete AI lifecycle from development through retirement, fairness testing protocols that evaluate AI-driven credit, insurance, and investment decisions across demographic groups, explainability mechanisms that enable compliance officers to understand and audit AI-influenced decisions, and regular model validation reviews conducted by teams independent from the model development function.
Financial institutions should treat these expectations as de facto requirements even when framed as guidance. MAS supervisory examinations increasingly evaluate AI governance practices as part of broader technology risk assessments, and institutions that have not formalized their approach will find themselves at a significant disadvantage during those reviews. The window for treating AI governance as a voluntary best practice, rather than a supervisory expectation with real consequences, has effectively closed.
Common Questions
Are the guidelines legally binding?
The guidelines were released for consultation in November 2025, with the consultation period closing 31 January 2026. Once finalized (expected 2026), they will be considered supervisory expectations — meaning MAS will evaluate financial institutions' compliance during inspections and supervisory reviews. Non-compliance could result in supervisory action.
Do the guidelines apply to smaller financial institutions?
Yes, but proportionately. MAS applies the principle of proportionality — smaller, less complex institutions can implement a lighter governance framework. However, all MAS-regulated entities, regardless of size, are expected to have basic AI governance in place if they use AI for material business functions.
What are the FEAT Principles?
FEAT stands for Fairness, Ethics, Accountability, and Transparency. Launched in 2018, these four principles form the foundation of MAS's AI governance approach. Financial institutions must ensure their AI systems are fair (not biased), ethical (used responsibly), accountable (clear ownership and oversight), and transparent (explainable to stakeholders).
Who is accountable when AI is provided by a third-party vendor?
Financial institutions remain responsible for AI governance even when using third-party AI tools. This means conducting due diligence on vendors, including contractual protections in vendor agreements, monitoring vendor AI performance, and maintaining exit strategies. The institution cannot delegate its governance responsibilities to the vendor.
Do the guidelines cover generative AI?
Yes. MAS addressed GenAI through Project MindForge and the guidelines include considerations for generative AI. Key focus areas include hallucination risk, prompt injection, data leakage, and the need for human oversight of GenAI outputs. Financial institutions using GenAI face additional scrutiny on acceptable use cases.
References
- Consultation Paper on Proposed Guidelines on AI Risk Management for Financial Institutions. Monetary Authority of Singapore (MAS), 2025.
- Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore (MAS), 2018.
- Project MindForge. Monetary Authority of Singapore (MAS), 2024.
- Information Paper on Artificial Intelligence Model Risk Management. Monetary Authority of Singapore (MAS), 2024.
- Singapore Launches AI Verify Foundation. Infocomm Media Development Authority (IMDA), 2023.
- Discussion Paper on Artificial Intelligence in the Malaysian Financial Sector. Bank Negara Malaysia (BNM), 2025.
- Artificial Intelligence — Emerging Technologies and Research. Infocomm Media Development Authority (IMDA), 2024.

