
Executive Summary
- The Monetary Authority of Singapore (MAS) has established comprehensive AI governance expectations through the FEAT (Fairness, Ethics, Accountability, Transparency) principles and related guidelines
- Model risk management requirements that traditionally applied to statistical models now extend to AI and machine learning systems
- Explainability is non-negotiable for AI systems that affect customer outcomes — black-box models face heightened scrutiny
- Fairness assessments must be documented with particular attention to protected characteristics in credit, insurance, and employment decisions
- Board and senior management accountability for AI governance is explicit in MAS expectations
- Third-party AI models and vendors are subject to the same governance standards as internally developed systems
- Continuous monitoring requirements mean compliance is ongoing, not a one-time certification
- Penalties for non-compliance can include licensing actions, making AI governance an existential priority
Why This Matters Now
Financial services sits at the intersection of regulatory scrutiny and AI transformation. MAS-regulated entities face a dual imperative: innovate with AI or fall behind competitors, while maintaining the trust and stability that regulators demand.
MAS has accelerated its focus on AI governance:
- The FEAT principles (2018, updated 2022) established Singapore as a global leader in AI ethics for financial services
- MAS's Technology Risk Management Guidelines (2021) were updated to explicitly address AI and machine learning risks
- Thematic reviews of AI usage at major financial institutions have resulted in supervisory expectations letters
- The Veritas initiative, MAS's industry consortium for developing fairness assessment methodologies, signals supervisory priorities
For compliance leaders, the message is clear: AI governance is no longer a "nice to have" or a checkbox exercise. It's a regulatory expectation with teeth.
Definitions and Scope
What Counts as "AI" Under MAS Guidelines?
MAS takes a broad, risk-based view of AI that includes:
- Machine learning models — Supervised, unsupervised, and reinforcement learning
- Deep learning systems — Neural networks, including LLMs
- Automated decision systems — Rule-based systems with adaptive components
- Predictive analytics — Statistical models used for customer-affecting decisions
The key factor isn't the technology label — it's whether the system:
- Processes customer or market data
- Influences or automates decisions
- Affects customer outcomes or market stability
Regulated Activities Where AI Governance Applies
| Business Function | AI Application Examples | Risk Level |
|---|---|---|
| Credit decisioning | Loan approval, credit limits, pricing | High |
| Insurance underwriting | Risk assessment, premium calculation | High |
| Investment advice | Robo-advisory, portfolio recommendations | High |
| Fraud detection | Transaction monitoring, AML screening | Medium-High |
| Customer service | Chatbots, complaint handling | Medium |
| Marketing | Customer segmentation, next-best-action | Medium |
| Operations | Document processing, reconciliation | Lower |
The FEAT Principles Framework
Fairness: AI decisions should not systematically disadvantage individuals or groups based on protected characteristics, and any differential treatment should be justifiable.
Ethics: AI should be used in alignment with the organization's ethical values and societal expectations.
Accountability: Clear governance structures must exist with defined roles for AI oversight at board and management levels.
Transparency: Affected individuals should be informed when AI is used in decisions affecting them, and explanations should be available.
RACI Matrix: AI Model Governance in Financial Services
| Activity | Board Risk Committee | CRO/CCO | Model Risk Team | Data Science | Business Owner | Internal Audit |
|---|---|---|---|---|---|---|
| AI strategy and risk appetite | A | R | C | C | C | I |
| Model development standards | I | A | R | C | C | C |
| Individual model approval | I | A | R | C | R | I |
| Model validation | I | A | R | C | I | C |
| Model deployment decision | I | A | R | C | R | I |
| Ongoing monitoring | I | A | R | C | R | C |
| Incident escalation | I | A | R | R | R | I |
| Annual model inventory review | C | A | R | C | C | C |
| Regulatory reporting | I | A | R | I | I | C |
| Independent review | C | I | I | I | I | R |
Legend: R = Responsible, A = Accountable, C = Consulted, I = Informed
Step-by-Step Implementation Guide
Step 1: Establish AI Governance Structure
MAS expects board and senior management accountability for AI risks.
Requirements:
- Board Risk Committee or equivalent must have AI risk within its mandate
- Senior management role (typically CRO or CTO) designated as accountable for AI governance
- Clear escalation pathways for AI-related issues
- Regular reporting to board on AI risks and governance effectiveness
Action items:
- Update Board Risk Committee charter to include AI
- Designate senior management accountability
- Establish AI Governance Committee or embed in existing technology risk committee
- Define reporting frequency and content
Timeline: 2-3 months for governance structure updates
Step 2: Develop AI Policy Framework
A documented policy framework aligned with MAS expectations.
Required policies:
- AI/ML Model Risk Management Policy
- Model Validation Standards
- Fairness Assessment Policy
- AI Ethics Policy
- Third-Party AI Vendor Policy
Policy content requirements:
- Risk appetite statements for AI usage
- Materiality thresholds for governance intensity
- Roles and responsibilities
- Standards and procedures references
- Exception handling processes
Timeline: 3-4 months for comprehensive policy suite
Step 3: Implement Model Inventory and Classification
You cannot govern what you don't know exists.
Action items:
- Inventory all AI/ML models across the organization
- Classify by risk tier (typically 3-4 tiers based on customer impact, financial materiality, complexity; see the inventory-record sketch after this step)
- Document model purpose, inputs, outputs, and downstream uses
- Identify model owners and developers
- Flag third-party vs. internally developed models
Classification criteria:
- Tier 1 (Critical): Direct customer-affecting decisions (credit, pricing), high financial impact
- Tier 2 (High): Significant influence on decisions, material operational risk
- Tier 3 (Medium): Support functions, limited direct customer impact
- Tier 4 (Low): Non-material, easily reversible applications
Timeline: 4-6 weeks for initial inventory; ongoing maintenance thereafter
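As a minimal sketch of what a single inventory record might capture, the structure below uses Python 3.10+ dataclasses. The fields, tier labels, and example values are illustrative, not a MAS-prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    CRITICAL = 1  # Direct customer-affecting decisions, high financial impact
    HIGH = 2      # Significant influence on decisions, material operational risk
    MEDIUM = 3    # Support functions, limited direct customer impact
    LOW = 4       # Non-material, easily reversible applications


@dataclass
class ModelInventoryEntry:
    """One record in the enterprise AI/ML model inventory."""
    model_id: str
    name: str
    purpose: str
    owner: str                  # accountable business owner
    developer: str              # team or vendor that built the model
    tier: RiskTier
    third_party: bool           # vendor-supplied vs. internally developed
    inputs: list[str] = field(default_factory=list)
    downstream_uses: list[str] = field(default_factory=list)
    last_validated: date | None = None


# Hypothetical entry for a Tier 1 credit model
entry = ModelInventoryEntry(
    model_id="CRD-0042",
    name="Retail loan approval scorer",
    purpose="Approve/decline unsecured personal loans",
    owner="Head of Retail Credit",
    developer="Credit Analytics",
    tier=RiskTier.CRITICAL,
    third_party=False,
    inputs=["bureau score", "income", "debt-service ratio"],
    downstream_uses=["credit limit assignment", "risk-based pricing"],
)
```

However the inventory is stored, the point is that every model has an owner, a tier, and documented inputs and downstream uses that can be produced on regulatory request.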
Step 4: Establish Model Development Standards
Standards for building AI models that can pass validation and regulatory scrutiny.
Key requirements:
- Data quality and lineage documentation
- Feature selection justification (especially for protected characteristics)
- Training/test data split and representativeness
- Performance metrics and thresholds
- Bias testing and fairness metrics
- Documentation standards (model cards)
Fairness testing must address:
- Disparate impact analysis across protected groups (see the sketch after this step)
- Proxy discrimination (features correlated with protected characteristics)
- Sample bias in training data
- Performance equality across subgroups
Timeline: 2-3 months for standards development
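A minimal disparate impact check might look like the following sketch. The four-fifths (0.8) threshold is a common rule of thumb borrowed from US employment practice, not a MAS-mandated cutoff, and the data and column names are illustrative:

```python
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str,
                           outcome_col: str, reference_group: str) -> pd.Series:
    """Ratio of each group's favourable-outcome rate to the reference group's.

    Values below ~0.8 (the 'four-fifths' rule of thumb) flag potential
    disparate impact that warrants investigation.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]


# Illustrative data: 1 = loan approved, 0 = declined
df = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})
print(disparate_impact_ratio(df, "group", "approved", reference_group="A"))
# Group A -> 1.00, group B -> ~0.71: below 0.8, so document justification or remediate
```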
Step 5: Implement Independent Model Validation
MAS expects independent challenge of AI models before deployment.
Validation requirements:
- Conceptual soundness review (is the approach appropriate?)
- Data quality assessment
- Performance testing on holdout data (see the sketch after this step)
- Fairness and bias testing
- Stress testing and sensitivity analysis
- Documentation review
Independence requirements:
- Validators should not have developed the model
- Validation function should have direct reporting line to risk or audit
- For Tier 1 models, consider external validation
Timeline: 6-12 weeks per Tier 1 model; less for lower tiers
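Assuming a scikit-learn workflow, a validator's holdout performance check might look like this sketch. Synthetic data stands in for a real credit dataset; the metrics and pass/fail thresholds should come from your validation standards:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in; in practice the holdout set is data the
# development team never trained or tuned on.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_dev, X_holdout, y_dev, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Candidate model under validation (assumed fitted by the development team)
model = LogisticRegression(max_iter=1_000).fit(X_dev, y_dev)

# Independent performance check on the holdout set
y_pred = model.predict(X_holdout)
y_score = model.predict_proba(X_holdout)[:, 1]
print(classification_report(y_holdout, y_pred))
print("ROC AUC:", round(roc_auc_score(y_holdout, y_score), 3))
```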
Step 6: Deploy with Controls
Deployment should include controls for ongoing governance.
Pre-deployment requirements:
- Validation sign-off
- Business owner acceptance
- Compliance review (for customer-affecting models)
- Technical deployment review
- Monitoring infrastructure in place
Deployment controls:
- Champion/challenger frameworks where appropriate
- Phased rollouts with monitoring gates
- Kill switches for model deactivation
- Audit logging of model decisions (combined with a kill switch in the sketch after this step)
Timeline: 2-4 weeks for deployment preparation
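A minimal sketch of the last two controls, assuming a Python scoring service. In production the kill-switch flag would live in a configuration service and the audit log in append-only storage; all names here are illustrative:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_decisions")

MODEL_ENABLED = True  # in practice read from a config service, not a constant


def fallback_decision(features: dict) -> str:
    """Conservative path used when the model is switched off."""
    return "refer_to_manual_review"


def score_with_controls(model_version: str, features: dict, predict_fn) -> str:
    """Wrap a model call with a kill switch and an audit log entry."""
    if not MODEL_ENABLED:
        decision = fallback_decision(features)
    else:
        decision = predict_fn(features)
    # Append-only audit record of every decision the model influences
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
        "model_enabled": MODEL_ENABLED,
    }))
    return decision


# Illustrative use with a stub predictor
print(score_with_controls("credit-v1.2", {"score": 710},
                          lambda f: "approve" if f["score"] > 680 else "decline"))
```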
Step 7: Establish Continuous Monitoring
AI models can degrade. MAS expects ongoing monitoring.
Monitoring requirements:
- Performance metrics (accuracy, precision, recall as appropriate)
- Stability metrics (population stability index, feature drift; see the PSI sketch after this step)
- Fairness metrics (ongoing disparate impact monitoring)
- Volume and usage monitoring
- Exception and override tracking
Trigger thresholds:
- Define thresholds that trigger review or remediation
- Establish escalation protocols
- Document remediation actions and outcomes
Timeline: Ongoing; initial infrastructure 4-6 weeks
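For example, the population stability index mentioned above can be computed as in this sketch. The 0.1/0.25 thresholds in the comment are industry rules of thumb, not regulatory requirements, and the score distributions are synthetic:

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (e.g. training-time) and a recent distribution.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate,
    > 0.25 significant shift warranting review.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range recent scores
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in sparse bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


rng = np.random.default_rng(0)
baseline = rng.normal(650, 50, 10_000)  # scores at training time
recent = rng.normal(635, 55, 10_000)    # recent scores (drifted)
print(round(population_stability_index(baseline, recent), 3))
```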
Step 8: Maintain Documentation and Reporting
Documentation is your audit defense and the foundation of regulatory response readiness.
Required documentation:
- Model development documentation (model cards; see the skeleton after this step)
- Validation reports
- Approval records
- Monitoring reports
- Incident and issue logs
- Change documentation
Regulatory reporting:
- Be prepared for MAS requests on AI inventory and governance
- Include AI governance in regular regulatory meetings
- Report material AI incidents promptly
Timeline: Ongoing
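One lightweight way to keep model cards consistent is a structured record versioned alongside the model artifact. Every field and value in this sketch is a placeholder, not a prescribed template:

```python
# Minimal model card skeleton kept under version control with the model
model_card = {
    "model_id": "CRD-0042",
    "version": "1.2.0",
    "purpose": "Approve/decline unsecured personal loans",
    "owner": "Head of Retail Credit",
    "training_data": {"source": "core banking, 2019-2023", "rows": 1_200_000},
    "performance": {"roc_auc": 0.81, "holdout_date": "2024-06-30"},
    "fairness": {"min_disparate_impact_ratio": 0.86,
                 "groups_tested": ["age_band", "gender"]},
    "limitations": ["not validated for SME lending"],
    "approved_by": "Model Risk Committee",
    "approval_date": "2024-07-15",
}
```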
Common Failure Modes
1. Treating AI Governance as a Technology Problem
The problem: Delegating AI governance entirely to data science or IT without risk and compliance involvement.
The fix: AI governance is a risk management discipline. Model risk teams and compliance must be involved from design through monitoring.
2. Incomplete Model Inventories
The problem: Shadow AI — models developed by business units, embedded in vendor products, or inherited through acquisitions — escapes governance.
The fix: Comprehensive discovery processes, including vendor AI assessments and regular attestations from business units.
3. Fairness Theater
The problem: Running bias tests to check a box without meaningful analysis or remediation.
The fix: Fairness assessment must be substantive. If disparate impact exists, document justification or remediate. Regulators can tell the difference.
4. Inadequate Explainability
The problem: Using complex models without ability to explain decisions to customers or regulators.
The fix: Invest in explainability tools (SHAP, LIME) or use inherently interpretable models for high-stakes decisions. "The model decided" is not an acceptable answer.
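As a rough illustration of the SHAP route, assuming the `shap` package is installed and a tree-based model; the dataset and model here are synthetic placeholders:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes fast, exact Shapley values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Per-feature contributions for one decision: the raw material for a
# customer-facing reason ("declined mainly due to features 2 and 5")
print(shap_values[0])
```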
5. Validation Without Independence
The problem: Model developers validating their own models, or validation treated as a formality.
The fix: True independence with authority to reject or require changes. Validation should be resourced appropriately.
6. Monitoring Gaps
The problem: Models deployed without monitoring, or monitoring that doesn't trigger action.
The fix: Monitoring infrastructure built before deployment. Clear thresholds and escalation paths. Regular review of monitoring effectiveness.
AI Compliance Checklist for Financial Services
Governance Structure
- Board Risk Committee mandate includes AI risk
- Senior management accountability designated
- AI Governance Committee or equivalent established
- Reporting cadence to board defined
- Escalation pathways documented
Policy Framework
- AI/ML Model Risk Management Policy approved
- Model Validation Standards documented
- Fairness Assessment Policy in place
- Third-Party AI Vendor Policy established
- Policies reviewed at least annually
Model Inventory
- Complete AI/ML model inventory maintained
- Models classified by risk tier
- Model owners designated for all models
- Third-party models identified and governed
- Inventory reviewed quarterly
Development and Validation
- Model development standards documented
- Fairness testing required for customer-affecting models
- Independent validation for Tier 1 and Tier 2 models
- Validation sign-off before deployment
- Model documentation (model cards) maintained
Deployment and Monitoring
- Pre-deployment checklist completed
- Monitoring infrastructure operational
- Performance thresholds defined
- Fairness metrics tracked
- Escalation protocols documented
Regulatory Readiness
- Documentation audit-ready
- Model inventory available for regulatory request
- FEAT alignment documented
- Incident reporting procedures established
- Staff trained on regulatory expectations
Metrics to Track
| Metric | Target | Regulatory Relevance |
|---|---|---|
| Model inventory completeness | 100% | Demonstrates governance coverage |
| Models with current validation | 100% (Tier 1-2) | MAS expectation |
| Models with fairness testing | 100% (customer-affecting) | FEAT requirement |
| Model issues resolved within SLA | >95% | Governance effectiveness |
| Monitoring coverage | 100% (Tier 1-2) | Ongoing compliance |
| Board reporting frequency | Quarterly minimum | Governance structure |
| Staff training completion | 100% relevant staff | Control environment |
Tooling Suggestions
Model Risk Management Platforms
- ModelOp — Enterprise model governance and monitoring
- Domino Data Lab — MLOps with governance features
- DataRobot — AutoML with built-in governance
- SAS Model Manager — Established in financial services
Fairness and Explainability Tools
- IBM AI Fairness 360 — Open-source fairness toolkit
- Google What-If Tool — Visualization for model behavior
- SHAP — Model-agnostic explainability
- Fiddler AI — Monitoring and explainability platform
Selection Criteria for Financial Services
- Audit trail and documentation capabilities
- Integration with existing model development workflows
- Regulatory reporting features
- Singapore/APAC data residency options
- Proven track record with MAS-regulated entities
Frequently Asked Questions
Does MAS require pre-approval of individual AI models?
MAS does not require pre-approval of individual AI models. However, the institution must have appropriate governance frameworks in place, and MAS may request information about specific models during inspections or thematic reviews.
Next Steps
AI compliance for financial services requires sustained investment in governance infrastructure. Start with your highest-risk AI applications and build systematic governance capability.
For a comprehensive assessment of your AI governance posture against MAS expectations:
Book an AI Readiness Audit — Our financial services assessment covers FEAT alignment, model risk management gaps, and regulatory readiness across your AI portfolio.
Disclaimer
This article provides general guidance on AI compliance for MAS-regulated entities and should not be construed as legal or regulatory advice. MAS requirements evolve, and specific circumstances vary. Organizations should consult with legal counsel and regulatory specialists, and verify current MAS requirements before implementation.
References
- Monetary Authority of Singapore. (2018). *Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector*. MAS.
- Monetary Authority of Singapore. (2021). *Guidelines on Technology Risk Management*. MAS.
- Monetary Authority of Singapore. (2024). *Supervisory Expectations on Model Risk Management*. MAS.
- Veritas Consortium. (2023). *FEAT Fairness Assessment Methodology, Version 2*. MAS/Veritas.
- Association of Banks in Singapore. (2024). *AI Model Risk Management Guidelines for Banking*. ABS.
Related reading:
- AI Regulations in 2026: What Businesses Need to Know
- AI Compliance Checklist: Preparing for Regulatory Requirements
- AI Regulations in Singapore: IMDA Guidelines and Compliance Requirements

