
AI Governance for Finance — Compliance, Risk, and Best Practices

Pertama Partners · February 11, 2026 · 11 min read
🇲🇾 Malaysia · 🇸🇬 Singapore

Why Finance Needs Specialised AI Governance

Financial services is the most heavily regulated industry in both Malaysia and Singapore. AI governance in finance is not optional — it is a regulatory expectation. The Monetary Authority of Singapore (MAS) and Bank Negara Malaysia (BNM) have both issued guidance that directly or indirectly governs how financial institutions use AI.

Beyond regulation, financial services firms handle some of the most sensitive data in any economy: personal financial information, credit histories, transaction records, and investment details. A data breach or AI error in financial services has far greater consequences than in most other industries.

Regulatory Landscape

Singapore — MAS Requirements

MAS governs AI use in financial services through several frameworks:

MAS Technology Risk Management (TRM) Guidelines

  • Requires financial institutions to establish governance frameworks for technology including AI
  • Mandates risk assessment, testing, and monitoring for all technology deployments
  • Requires board and senior management oversight of technology risks

MAS Fairness, Ethics, Accountability, and Transparency (FEAT) Principles

  • Fairness: AI decisions should not systematically disadvantage any group
  • Ethics: AI use should be consistent with the institution's ethical standards
  • Accountability: Clear ownership and governance for AI decisions
  • Transparency: Customers should understand when and how AI affects decisions about them

PDPA (Singapore)

  • Personal financial data is subject to all PDPA requirements
  • Financial institutions must obtain consent for AI processing of personal data
  • Customers have the right to access data held about them, including AI-derived data

Malaysia — BNM Requirements

BNM Risk Management in Technology (RMiT)

  • Applies to all financial institutions regulated by BNM
  • Requires board-approved technology risk management framework
  • Mandates risk assessment for new technology including AI
  • Requires ongoing monitoring and incident reporting

BNM Policy on Data Management and MIS

  • Governs data quality, integrity, and security in financial institutions
  • Applies to data used as input for AI systems

PDPA (Malaysia)

  • Financial institutions must comply with all seven data protection principles
  • Cross-border data transfers require adequate protection
  • Financial data is considered sensitive and requires higher protection

AI Use Cases in Financial Services

Permitted with Strong Controls

| Use Case | Key Risks | Required Controls |
| --- | --- | --- |
| Credit scoring and underwriting | Bias, fairness, explainability | Bias testing, human review, model validation, customer explanation |
| Fraud detection | False positives/negatives, privacy | Accuracy monitoring, appeals process, data minimisation |
| Customer service chatbots | Misinformation, data leakage | Content guardrails, escalation to humans, data handling rules |
| Document processing | Accuracy, data privacy | Verification workflow, access controls, audit trail |
| Regulatory reporting | Accuracy, completeness | Human review, validation against source data |
| Market analysis and research | Hallucinations, outdated data | Fact-checking, source verification, disclosure |

Restricted or Prohibited

| Use Case | Concern | Typical Restriction |
| --- | --- | --- |
| Automated loan decisions (no human review) | Fairness, accountability | Prohibited without human oversight |
| Customer profiling without consent | Privacy | Prohibited under PDPA |
| Processing personal data via free AI tools | Data security | Prohibited; enterprise tools required |
| AI-generated financial advice without disclosure | Transparency, liability | Must disclose AI involvement and have licensed advisor review |

AI Governance Framework for Financial Services

Layer 1: Board and Senior Management Oversight

  • Board must approve the AI governance framework
  • Senior management must designate AI risk ownership
  • Regular (at least quarterly) reporting on AI risks to the board
  • Board training on AI risks and governance responsibilities

Layer 2: AI Risk Management

  • AI risk assessment for every AI deployment (use our AI Risk Assessment Template)
  • AI risk integrated into the enterprise risk management framework
  • Model validation for AI models used in decision-making
  • Ongoing monitoring and periodic reassessment

Layer 3: Policies and Standards

  • AI acceptable use policy for all employees
  • Data classification and handling standards for AI inputs
  • Model governance standards (validation, testing, monitoring)
  • Vendor management standards for AI providers

Layer 4: Operational Controls

  • Access controls and role-based permissions for AI tools
  • Audit logging for all AI interactions involving customer data
  • Human review requirements for high-impact AI decisions
  • Incident response procedures for AI-related incidents
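
The audit-logging control above can be sketched in code. This is an illustrative example, not a mandated schema: field names and the hashing approach are assumptions. Hashing the prompt gives an integrity trail without storing raw customer data in the log itself.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_interaction(user_id: str, tool: str, prompt: str,
                       contains_customer_data: bool) -> dict:
    """Record who used which AI tool, when, and a hash of the prompt."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        # Store a digest rather than the prompt text, so customer data
        # never lands in the audit log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "contains_customer_data": contains_customer_data,
    }
    logger.info(json.dumps(record))
    return record

entry = log_ai_interaction("analyst-042", "enterprise-llm",
                           "Summarise account history for review",
                           contains_customer_data=True)
```

In practice the record would be shipped to an immutable log store; the point is that every AI interaction touching customer data leaves a timestamped, attributable trail.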

Layer 5: Testing and Validation

  • Pre-deployment testing for accuracy, bias, and security
  • Ongoing accuracy monitoring with automated alerts
  • Annual independent review of AI models and governance
  • Stress testing for AI systems in adverse scenarios
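
The "ongoing accuracy monitoring with automated alerts" item can be illustrated with a minimal sketch. The window size, alert threshold, and minimum sample count below are assumptions to be tuned per model, not prescribed values.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling accuracy over a fixed window, with a simple alert rule."""

    def __init__(self, window: int = 500, alert_below: float = 0.90):
        self.outcomes = deque(maxlen=window)  # True = prediction was correct
        self.alert_below = alert_below

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_alert(self) -> bool:
        # Only alert once the window holds enough data to be meaningful.
        return len(self.outcomes) >= 100 and self.accuracy < self.alert_below

monitor = AccuracyMonitor()
for correct in [True] * 80 + [False] * 40:  # simulated performance drift
    monitor.record(correct)
print(monitor.accuracy, monitor.needs_alert())
```

A production version would emit the alert into the institution's incident workflow so that model drift is handled like any other technology incident under TRM/RMiT.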

Implementation Checklist for Financial Institutions

Governance Structure

  • Board has approved the AI governance framework
  • AI risk ownership is designated at senior management level
  • AI governance committee is formed with cross-functional representation
  • Reporting lines and escalation procedures are defined

Policies

  • AI policy and acceptable use policy are published
  • Data classification standards cover AI inputs and outputs
  • Vendor management policy includes AI-specific requirements
  • Incident response plan includes AI-specific scenarios

Risk Assessment

  • All existing AI deployments have been risk-assessed
  • Risk assessment process for new AI deployments is documented
  • AI risks are integrated into the enterprise risk register
  • Quarterly AI risk reporting to the board is scheduled

Technical Controls

  • Enterprise AI tools with appropriate security controls are deployed
  • Unapproved AI tools are blocked or monitored
  • Audit logging captures AI interactions involving sensitive data
  • Data loss prevention (DLP) rules are configured for AI tools
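
As one illustration of the DLP item above, a pre-filter can screen prompts for identifier patterns before they reach an external AI tool. The two regexes below (Singapore NRIC-style and Malaysian MyKad-style numbers) are simplified examples, not production rules.

```python
import re

# Simplified, illustrative patterns -- real DLP rules would be broader
# and validated against the institution's data classification standards.
DLP_PATTERNS = {
    "sg_nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),    # e.g. S1234567A
    "my_mykad": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),  # e.g. 900101-14-5678
}

def dlp_check(prompt: str) -> list:
    """Return the names of any DLP rules the prompt triggers."""
    return [name for name, pattern in DLP_PATTERNS.items()
            if pattern.search(prompt)]

hits = dlp_check("Customer S1234567A asked about her loan.")
```

Any non-empty result would block the prompt or route it for redaction, keeping personal identifiers out of third-party AI tools.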

Fairness and Transparency

  • AI models affecting customer decisions are tested for bias
  • Customer notification processes exist for AI-influenced decisions
  • Appeals and review processes exist for AI-influenced decisions
  • Explainability requirements are defined for customer-facing AI

MAS FEAT Principles Implementation

Fairness

  • Conduct demographic bias testing on AI models quarterly
  • Maintain documentation of fairness metrics and testing results
  • Establish a fairness review board for new AI deployments
  • Provide mechanisms for customers to challenge AI decisions
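
One common way to operationalise the bias-testing item above is an approval-rate parity check using a disparate-impact ratio. The four-fifths (0.80) threshold and the simulated decisions below are illustrative assumptions; institutions should set thresholds with their fairness review board.

```python
def approval_rate(decisions: list) -> float:
    """Share of True (approved) outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a: list, group_b: list) -> float:
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Simulated credit decisions (True = approved) for two demographic groups
ratio = disparate_impact([True] * 70 + [False] * 30,
                         [True] * 50 + [False] * 50)
if ratio < 0.80:
    print(f"FAIL: ratio {ratio:.2f} below four-fifths threshold; escalate")
```

Results like this, logged each quarter, become the documented fairness metrics the checklist calls for.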

Ethics

  • Align AI use cases with the institution's code of ethics
  • Prohibit AI use cases that conflict with ethical standards
  • Train employees on ethical AI use in financial services

Accountability

  • Document clear ownership for every AI system
  • Maintain a register of all AI models and their owners
  • Define escalation paths for AI-related concerns

Transparency

  • Inform customers when AI is used in decisions affecting them
  • Provide explanations of AI decision factors upon request
  • Publish information about AI use in annual reports or on the website

What's Changed: Financial AI Governance Requirements in 2025-2026

Financial services AI governance has accelerated from voluntary best practices toward mandatory compliance obligations across major jurisdictions, fundamentally reshaping how institutions structure their oversight architectures.

United States Regulatory Developments. The OCC (Office of the Comptroller of the Currency), Federal Reserve, and FDIC jointly issued updated interagency guidance on model risk management in October 2024, explicitly incorporating AI and machine learning models within the scope of SR 11-7 supervisory expectations. The SEC finalized rules requiring broker-dealers and investment advisers to address conflicts of interest arising from predictive analytics and AI-driven customer interaction tools, citing specific concerns about optimization algorithms that prioritize firm revenue over client suitability.

European Union AI Act Impact. Financial AI applications involving creditworthiness assessment, insurance pricing, and fraud detection are classified as high-risk under the EU AI Act Annex III, triggering mandatory conformity assessments, technical documentation requirements, human oversight provisions, and registration in the EU AI database before market deployment. Compliance deadlines for high-risk system requirements begin August 2026, creating immediate implementation pressure for institutions operating across European markets.

Asia-Pacific Framework Proliferation. Beyond Singapore's MAS FEAT principles and Hong Kong's HKMA guidance, additional frameworks emerged from the Reserve Bank of India (Framework for Responsible AI in Financial Sector, Draft December 2024), Bank Negara Malaysia (updated RMiT provisions addressing AI specifically), and the Australian Prudential Regulation Authority (CPG 235 companion guidance on AI risk management).

Building a Cross-Jurisdictional Governance Architecture

Financial institutions operating across multiple regulatory environments should implement governance structures that satisfy overlapping requirements through unified processes:

  • Model inventory and classification: Centralized registry categorizing each AI model by regulatory jurisdiction, risk tier, and applicable framework requirements — using platforms like ModelOp, Monitaur, or IBM OpenPages configured with financial services taxonomies
  • Three lines of defense integration: First line (business units) owns model usage and monitoring; second line (risk management and compliance) conducts independent validation using techniques from the Prudential Regulation Authority's SS1/23 expectations; third line (internal audit) performs periodic effectiveness assessments
  • Board-level reporting cadence: Quarterly AI risk dashboards presented to board risk committees, incorporating metrics on model performance drift, fairness testing outcomes, incident volumes, and regulatory examination findings — aligning with OCC Heightened Standards for large bank governance
  • Regulatory examination preparedness: Documented evidence packages organized by examination topic, cross-referencing FFIEC IT Examination Handbook modules with institution-specific AI governance artifacts, updated continuously rather than assembled reactively during examination cycles
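
The model inventory and classification step above can be sketched as a registry record that maps each model's jurisdictions and risk tier to applicable frameworks. The mapping rules and field names here are hypothetical simplifications for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    owner: str
    jurisdictions: list          # e.g. ["SG", "MY", "EU"]
    risk_tier: str               # e.g. "high", "medium", "low"
    frameworks: list = field(default_factory=list)

def applicable_frameworks(record: ModelRecord) -> list:
    """Map jurisdictions and risk tier to frameworks (simplified rules)."""
    rules = {
        "SG": ["MAS TRM", "FEAT"],
        "MY": ["BNM RMiT"],
        "EU": ["EU AI Act"],
    }
    frameworks = [f for j in record.jurisdictions for f in rules.get(j, [])]
    if record.risk_tier == "high":
        # High-risk models additionally require independent validation.
        frameworks.append("independent model validation")
    return frameworks

credit_model = ModelRecord("credit-score-v3", "Retail Risk",
                           ["SG", "MY"], "high")
print(applicable_frameworks(credit_model))
```

In a real deployment this logic would live in a governance platform; the sketch shows why a structured inventory makes cross-jurisdictional requirements mechanically traceable.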

When architecting model validation hierarchies, institutions typically reference Basel Pillar 2 supervisory review expectations alongside BCBS 239 risk data aggregation principles. Data catalogue platforms such as Collibra and Alation support the lineage traceability expected under Federal Reserve SR 11-7 model risk guidance, while tools such as SAS Model Manager and Moody's Analytics support champion-challenger backtesting cadences. Institutions with derivatives exposure under Dodd-Frank Title VII additionally quantify counterparty risk through XVA (credit, debit, and funding valuation adjustment) frameworks, implemented via libraries such as QuantLib or treasury platforms such as Murex MX.3.

Common Questions

Does MAS have a single AI-specific regulation?

MAS does not have a single AI-specific regulation, but AI governance is required through multiple frameworks: the Technology Risk Management (TRM) Guidelines mandate governance for all technology including AI, the FEAT Principles set fairness and transparency expectations, and the PDPA governs personal data processing. Together, these create comprehensive AI governance requirements for financial institutions.

Can financial institutions use commercial AI tools such as ChatGPT?

Financial institutions can use enterprise versions of AI tools with appropriate controls. Free or consumer versions are generally not suitable due to data handling risks. Enterprise versions with SSO, audit logging, and data protection agreements can be approved after completing a risk assessment aligned with MAS TRM and BNM RMiT requirements.

What happens if a financial institution's AI governance falls short?

Consequences include regulatory enforcement action from MAS or BNM, financial penalties, required remediation programmes, reputational damage, loss of customer trust, and potential liability from biased or incorrect AI-driven decisions. MAS has increasingly focused on technology governance in its supervisory assessments.
