Why Finance Needs Specialised AI Governance
Financial services is among the most heavily regulated industries in both Malaysia and Singapore, and AI governance in finance is not optional: it is a regulatory expectation. The Monetary Authority of Singapore (MAS) and Bank Negara Malaysia (BNM) have both issued guidance that directly or indirectly governs how financial institutions use AI.
Beyond regulation, financial services firms handle some of the most sensitive data in any economy: personal financial information, credit histories, transaction records, and investment details. A data breach or AI error in financial services has far greater consequences than in most other industries.
Regulatory Landscape
Singapore — MAS Requirements
MAS governs AI use in financial services through several frameworks:
MAS Technology Risk Management (TRM) Guidelines
- Requires financial institutions to establish governance frameworks for technology including AI
- Mandates risk assessment, testing, and monitoring for all technology deployments
- Requires board and senior management oversight of technology risks
MAS Fairness, Ethics, Accountability, and Transparency (FEAT) Principles
- Fairness: AI decisions should not systematically disadvantage any group
- Ethics: AI use should be consistent with the institution's ethical standards
- Accountability: Clear ownership and governance for AI decisions
- Transparency: Customers should understand when and how AI affects decisions about them
PDPA (Singapore)
- Personal financial data is subject to all PDPA requirements
- Financial institutions must obtain consent (or rely on a recognised PDPA exception, such as deemed consent) for AI processing of personal data
- Customers have the right to access data held about them, including AI-derived data
Malaysia — BNM Requirements
BNM Risk Management in Technology (RMiT)
- Applies to all financial institutions regulated by BNM
- Requires board-approved technology risk management framework
- Mandates risk assessment for new technology including AI
- Requires ongoing monitoring and incident reporting
BNM Policy on Data Management and MIS
- Governs data quality, integrity, and security in financial institutions
- Applies to data used as input for AI systems
PDPA (Malaysia)
- Financial institutions must comply with all seven data protection principles
- Cross-border data transfers require adequate protection
- Financial data is not "sensitive personal data" as defined in the PDPA, but banking secrecy obligations under the Financial Services Act 2013 demand a higher standard of protection
AI Use Cases in Financial Services
Permitted with Strong Controls
| Use Case | Key Risks | Required Controls |
|---|---|---|
| Credit scoring and underwriting | Bias, fairness, explainability | Bias testing, human review, model validation, customer explanation |
| Fraud detection | False positives/negatives, privacy | Accuracy monitoring, appeals process, data minimisation |
| Customer service chatbots | Misinformation, data leakage | Content guardrails, escalation to humans, data handling rules |
| Document processing | Accuracy, data privacy | Verification workflow, access controls, audit trail |
| Regulatory reporting | Accuracy, completeness | Human review, validation against source data |
| Market analysis and research | Hallucinations, outdated data | Fact-checking, source verification, disclosure |
Restricted or Prohibited
| Use Case | Concern | Typical Restriction |
|---|---|---|
| Automated loan decisions (no human review) | Fairness, accountability | Prohibited without human oversight |
| Customer profiling without consent | Privacy | Prohibited under PDPA |
| Processing personal data via free AI tools | Data security | Prohibited — enterprise tools required |
| AI-generated financial advice without disclosure | Transparency, liability | Must disclose AI involvement and have a licensed advisor review |
AI Governance Framework for Financial Services
Layer 1: Board and Senior Management Oversight
- Board must approve the AI governance framework
- Senior management must designate AI risk ownership
- Regular (at least quarterly) reporting on AI risks to the board
- Board training on AI risks and governance responsibilities
Layer 2: AI Risk Management
- AI risk assessment for every AI deployment (use our AI Risk Assessment Template; a minimal risk-tiering sketch follows this list)
- AI risk integrated into the enterprise risk management framework
- Model validation for AI models used in decision-making
- Ongoing monitoring and periodic reassessment
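To make the assessment step concrete, here is a minimal Python sketch of how a risk-tiering rule might be encoded. The `AIRiskAssessment` fields and the tiering logic are illustrative assumptions, not requirements drawn from MAS TRM or BNM RMiT; a real assessment would follow the institution's approved template.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIRiskAssessment:
    """One AI deployment's assessment record (hypothetical schema)."""
    system_name: str
    processes_personal_data: bool     # PDPA scope (SG and MY)
    affects_customer_decisions: bool  # FEAT fairness/transparency scope
    fully_autonomous: bool            # no human-in-the-loop

    def tier(self) -> RiskTier:
        # Illustrative rule: customer impact or full autonomy is high
        # risk; personal data alone is at least medium.
        if self.affects_customer_decisions or self.fully_autonomous:
            return RiskTier.HIGH
        if self.processes_personal_data:
            return RiskTier.MEDIUM
        return RiskTier.LOW

# Example: a credit-scoring model lands in the high tier.
print(AIRiskAssessment("credit-scoring-v2", True, True, False).tier())
```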
Layer 3: Policies and Standards
- AI acceptable use policy for all employees
- Data classification and handling standards for AI inputs
- Model governance standards (validation, testing, monitoring)
- Vendor management standards for AI providers
Layer 4: Operational Controls
- Access controls and role-based permissions for AI tools
- Audit logging for all AI interactions involving customer data (see the logging sketch after this list)
- Human review requirements for high-impact AI decisions
- Incident response procedures for AI-related incidents
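The sketch below illustrates one way structured audit logging for AI interactions might look. The field names and the `log_ai_interaction` helper are hypothetical; align the schema with your institution's existing logging standard and retention rules.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_ai_interaction(user_id: str, tool: str, purpose: str,
                       involves_customer_data: bool) -> None:
    """Append one structured audit record for an AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,            # who invoked the tool
        "tool": tool,                  # which approved AI tool
        "purpose": purpose,            # documented business purpose
        "customer_data": involves_customer_data,
    }
    # JSON lines are easy to ship to a SIEM for review and retention.
    audit_log.info(json.dumps(record))

log_ai_interaction("analyst-042", "enterprise-llm", "draft KYC summary", True)
```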
Layer 5: Testing and Validation
- Pre-deployment testing for accuracy, bias, and security
- Ongoing accuracy monitoring with automated alerts (a drift-check sketch follows this list)
- Annual independent review of AI models and governance
- Stress testing for AI systems in adverse scenarios
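As an illustration of automated accuracy alerts, the following sketch compares live accuracy against a validated baseline. The 5% tolerance is a placeholder assumption; thresholds should come from model validation and the institution's risk appetite.

```python
def accuracy_alert(current: float, baseline: float,
                   tolerance: float = 0.05) -> bool:
    """True if live accuracy has fallen more than `tolerance`
    below the validated baseline (placeholder threshold)."""
    return (baseline - current) > tolerance

# Validated at 92% accuracy, now measuring 85% -> raise an alert.
if accuracy_alert(current=0.85, baseline=0.92):
    print("ALERT: accuracy below tolerance; trigger model reassessment")
```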
Implementation Checklist for Financial Institutions
Governance Structure
- Board has approved the AI governance framework
- AI risk ownership is designated at senior management level
- AI governance committee is formed with cross-functional representation
- Reporting lines and escalation procedures are defined
Policies
- AI policy and acceptable use policy are published
- Data classification standards cover AI inputs and outputs
- Vendor management policy includes AI-specific requirements
- Incident response plan includes AI-specific scenarios
Risk Assessment
- All existing AI deployments have been risk-assessed
- Risk assessment process for new AI deployments is documented
- AI risks are integrated into the enterprise risk register
- Quarterly AI risk reporting to the board is scheduled
Technical Controls
- Enterprise AI tools with appropriate security controls are deployed
- Unapproved AI tools are blocked or monitored
- Audit logging captures AI interactions involving sensitive data
- Data loss prevention (DLP) rules are configured for AI tools (see the screening sketch after this list)
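A minimal sketch of a pre-submission DLP check is shown below, assuming regex patterns for Singapore NRIC/FIN and Malaysian MyKad numbers. The patterns are simplified for illustration; production controls should use a vendor-maintained DLP engine with validated detectors.

```python
import re

# Simplified illustrative patterns: Singapore NRIC/FIN and Malaysian
# MyKad identifiers. Production DLP should use validated detectors.
PATTERNS = {
    "sg_nric": re.compile(r"\b[STFGMstfgm]\d{7}[A-Za-z]\b"),
    "my_mykad": re.compile(r"\b\d{6}-?\d{2}-?\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of identifier patterns found in an AI prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

prompt = "Summarise the complaint from customer S1234567D."
hits = scan_prompt(prompt)
if hits:
    # Block or redact before the prompt leaves the institution.
    print(f"Prompt blocked: detected {hits}")
```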
Fairness and Transparency
- AI models affecting customer decisions are tested for bias
- Customer notification processes exist for AI-influenced decisions
- Appeals and review processes exist for AI-influenced decisions
- Explainability requirements are defined for customer-facing AI
MAS FEAT Principles Implementation
Fairness
- Conduct demographic bias testing on AI models quarterly (a metric sketch follows this list)
- Maintain documentation of fairness metrics and testing results
- Establish a fairness review board for new AI deployments
- Provide mechanisms for customers to challenge AI decisions
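One common screening metric for demographic bias testing is the disparate impact ratio, sketched below. The four-fifths threshold mentioned in the comments is a widely used heuristic, not a MAS requirement; the fairness review board should set the metrics and thresholds appropriate to each model.

```python
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool],
                           group_b: list[bool]) -> float:
    """Symmetric ratio of approval rates between two groups.

    The 'four-fifths rule' flags ratios below 0.8; treat that
    threshold as an assumption for the fairness board to confirm.
    """
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: 70% vs 50% approval rates -> ratio ~0.71 -> investigate.
ratio = disparate_impact_ratio([True] * 7 + [False] * 3,
                               [True] * 5 + [False] * 5)
print(f"{ratio:.2f}")  # 0.71
```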
Ethics
- Align AI use cases with the institution's code of ethics
- Prohibit AI use cases that conflict with ethical standards
- Train employees on ethical AI use in financial services
Accountability
- Document clear ownership for every AI system
- Maintain a register of all AI models and their owners (see the register sketch after this list)
- Define escalation paths for AI-related concerns
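A model register can be as simple as a structured table with enforced uniqueness. The sketch below is an illustrative schema; the fields shown are assumptions to adapt, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRegisterEntry:
    """One row of an AI model register (illustrative fields)."""
    model_id: str
    description: str
    owner: str               # accountable senior manager
    risk_tier: str           # from the deployment risk assessment
    last_validated: date
    escalation_contact: str  # path for AI-related concerns

register: dict[str, ModelRegisterEntry] = {}

def register_model(entry: ModelRegisterEntry) -> None:
    # Reject duplicate IDs so ownership stays unambiguous.
    if entry.model_id in register:
        raise ValueError(f"{entry.model_id} already registered")
    register[entry.model_id] = entry

register_model(ModelRegisterEntry(
    "fraud-detect-v3", "Card fraud scoring", "Head of Risk Analytics",
    "high", date(2025, 1, 15), "ai-governance@example.com"))
```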
Transparency
- Inform customers when AI is used in decisions affecting them
- Provide explanations of AI decision factors upon request (a reason-code sketch follows this list)
- Publish information about AI use in annual reports or on the website
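Explanations on request often take the form of reason codes derived from the model's explainability output. The sketch below assumes per-factor impact scores are already available (for example from a validated SHAP analysis); the factor names and weights are invented for illustration.

```python
def top_reason_codes(factor_impacts: dict[str, float], n: int = 3) -> list[str]:
    """Return the n factors that most reduced a decision score.

    Assumes per-factor impact scores already exist (e.g. from a
    validated SHAP analysis); names and weights here are invented.
    """
    worst = sorted(factor_impacts.items(), key=lambda kv: kv[1])[:n]
    return [f"{name} (impact {weight:+.2f})" for name, weight in worst]

impacts = {"credit_utilisation": -0.31, "payment_history": +0.12,
           "account_age": -0.08, "recent_enquiries": -0.15}
print(top_reason_codes(impacts))
# ['credit_utilisation (impact -0.31)', 'recent_enquiries (impact -0.15)',
#  'account_age (impact -0.08)']
```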
Related Reading
- ChatGPT for Finance — Practical ChatGPT skills for finance teams
- Prompt Engineering for Finance — Advanced prompting for financial analysis and reporting
- AI Policy Template — Start with a comprehensive AI policy for your financial institution
Frequently Asked Questions
Does MAS have a specific AI regulation for financial institutions?
MAS does not have a single AI-specific regulation, but AI governance is required through multiple frameworks: the Technology Risk Management (TRM) Guidelines mandate governance for all technology including AI, the FEAT Principles set fairness, ethics, accountability, and transparency expectations, and the PDPA governs personal data processing. Together, these create comprehensive AI governance requirements for financial institutions.
Can financial institutions use tools like ChatGPT?
Financial institutions can use enterprise versions of AI tools with appropriate controls. Free or consumer versions are generally not suitable due to data handling risks. Enterprise versions with SSO, audit logging, and data protection agreements can be approved after completing a risk assessment aligned with MAS TRM and BNM RMiT requirements.
What happens if a financial institution gets AI governance wrong?
Consequences include regulatory enforcement action from MAS or BNM, financial penalties, required remediation programmes, reputational damage, loss of customer trust, and potential liability from biased or incorrect AI-driven decisions. MAS has increasingly focused on technology governance in its supervisory assessments.
