AI Governance for Finance — Compliance, Risk, and Best Practices

February 11, 2026 · 11 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance · CISO · Board Member · Consultant · CTO/CIO · Head of Operations · IT Manager

AI governance framework for financial services firms in Malaysia and Singapore. Covers MAS guidelines, BNM requirements, PDPA compliance, and practical controls for AI use in banking, insurance, and fintech.

Key Takeaways

  1. AI governance is mandatory for financial institutions under MAS and BNM regulations
  2. Five-layer framework required: board oversight, risk management, policies, controls, validation
  3. Credit scoring and fraud detection need bias testing and human review
  4. Automated loan decisions without human oversight are typically prohibited
  5. MAS FEAT principles demand fairness testing, ethics alignment, accountability, and transparency
  6. Board approval and quarterly AI risk reporting are essential requirements
  7. Enterprise AI tools mandatory; free AI tools prohibited for personal data

Why Finance Needs Specialised AI Governance

Financial services sits at the intersection of two forces that make AI governance not merely advisable but unavoidable: intense regulatory scrutiny and extraordinary data sensitivity. In both Malaysia and Singapore, financial institutions operate under some of the most prescriptive supervisory regimes in the world. The Monetary Authority of Singapore (MAS) and Bank Negara Malaysia (BNM) have each issued guidance that directly governs how institutions deploy artificial intelligence, moving the conversation well beyond voluntary best practices.

The stakes are proportionally high. Financial institutions hold personal financial information, credit histories, transaction records, and investment details that, if mishandled through an AI error or data breach, can inflict damage far exceeding what most other industries face. For boards and senior leaders in the region's financial sector, the question is no longer whether to govern AI but how quickly a defensible framework can be stood up.

Regulatory Landscape

Singapore: MAS Requirements

Singapore's regulatory architecture for AI in financial services rests on three interconnected pillars.

The first is the MAS Technology Risk Management (TRM) Guidelines, which require financial institutions to establish governance frameworks covering all technology deployments, including AI. The TRM Guidelines mandate risk assessment, testing, and monitoring for every deployment and place explicit responsibility on boards and senior management for technology risk oversight.

The second pillar is the MAS Fairness, Ethics, Accountability, and Transparency (FEAT) Principles. These principles establish that AI decisions should not systematically disadvantage any group, that AI use must align with an institution's ethical standards, that clear ownership and governance must exist for every AI-driven decision, and that customers should understand when and how AI affects outcomes that matter to them.

The third pillar is Singapore's Personal Data Protection Act (PDPA), which subjects all personal financial data to its full requirements. Financial institutions must obtain consent before processing personal data through AI systems, and customers retain the right to access data held about them, including data derived by AI models.

Malaysia: BNM Requirements

Malaysia's framework follows a parallel structure. BNM's Risk Management in Technology (RMiT) policy applies to all BNM-regulated financial institutions. It requires a board-approved technology risk management framework, mandates risk assessment for new technology including AI, and imposes ongoing monitoring and incident reporting obligations.

The BNM Policy on Data Management and MIS adds a further layer, governing data quality, integrity, and security within financial institutions. Because AI systems depend on data inputs, this policy extends directly to the information that feeds algorithmic decision-making.

Malaysia's PDPA rounds out the picture. Financial institutions must comply with all seven data protection principles, ensure adequate protection for cross-border data transfers, and treat financial data as sensitive information warranting elevated safeguards.

AI Use Cases in Financial Services

Permitted with Strong Controls

Not all AI applications in financial services carry the same risk profile. Credit scoring and underwriting can proceed under strong controls, but the risks of bias, fairness gaps, and opacity demand bias testing, human review, model validation, and clear customer explanation of how decisions are reached. Fraud detection introduces its own challenges around false positives, false negatives, and privacy, requiring accuracy monitoring, a defined appeals process, and strict data minimisation.
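The bias testing described above can be sketched as a simple disparate-impact screen on approval outcomes. This is a minimal illustration, not a regulator-mandated test: the group labels, sample data, and the 0.8 "four-fifths" threshold are illustrative assumptions, and a production programme would use statistically robust fairness metrics across many attributes.

```python
# Minimal sketch: disparate-impact screen on credit approval outcomes.
# Group labels, data, and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions for two demographic groups
group_a = [True, True, False, True, False]   # reference group: 60% approved
group_b = [True, False, False, False, True]  # protected group: 40% approved

ratio = disparate_impact_ratio(group_b, group_a)
if ratio < 0.8:  # common "four-fifths" screening heuristic
    print(f"Potential bias flagged: ratio {ratio:.2f} below 0.8 - route to human review")
```

A ratio well below 1.0 does not prove unlawful bias on its own, but it is exactly the kind of signal that should trigger the human review and model validation steps discussed above.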

Customer service chatbots present risks of misinformation and data leakage that must be managed through content guardrails, escalation pathways to human agents, and explicit data handling rules. Document processing, while seemingly lower risk, still requires verification workflows, access controls, and audit trails to address accuracy and privacy concerns. Regulatory reporting demands human review and validation against source data to ensure accuracy and completeness. Market analysis and research using AI must contend with hallucinations and outdated information through fact-checking, source verification, and appropriate disclosure.

Restricted or Prohibited

Certain applications face outright restrictions. Automated loan decisions without human review are prohibited under fairness and accountability requirements. Customer profiling without consent violates the PDPA in both jurisdictions. Processing personal data through free, consumer-grade AI tools is prohibited on data security grounds, with enterprise-grade tools required instead. AI-generated financial advice must disclose AI involvement and receive review from a licensed advisor before reaching the customer.

AI Governance Framework for Financial Services

Effective governance in financial services requires a five-layer architecture, each layer reinforcing the others.

Layer 1: Board and Senior Management Oversight

The foundation is board-level accountability. The board must approve the AI governance framework and senior management must designate clear ownership of AI risk. Reporting on AI risks to the board should occur at least quarterly, and board members themselves need training on AI risks and their governance responsibilities. This is not a delegation exercise. Regulators in both Singapore and Malaysia expect boards to demonstrate informed oversight.

Layer 2: AI Risk Management

Every AI deployment requires a dedicated risk assessment. AI risk must be integrated into the enterprise risk management framework rather than treated as a standalone concern. Models used in decision-making need formal validation, and institutions must commit to ongoing monitoring and periodic reassessment as models evolve and market conditions shift.

Layer 3: Policies and Standards

Written policies translate governance intent into operational guidance. Financial institutions need an AI acceptable use policy covering all employees, data classification and handling standards for AI inputs, model governance standards addressing validation, testing, and monitoring, and vendor management standards for external AI providers.

Layer 4: Operational Controls

Day-to-day governance depends on operational controls: access controls and role-based permissions for AI tools, audit logging for all AI interactions involving customer data, human review requirements for high-impact decisions, and incident response procedures specifically designed for AI-related events.
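The audit-logging control above can be illustrated with a minimal log-entry sketch. The field names and the in-memory sink are assumptions for illustration; a real deployment would write to an append-only, tamper-evident store and integrate with the institution's SIEM.

```python
# Minimal sketch of an audit log entry for AI interactions involving
# customer data. Field names and the in-memory sink are illustrative
# assumptions, not any specific platform's schema.
import json
import datetime

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident store

def log_ai_interaction(user_id, tool, purpose, contains_customer_data, decision_impact):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "purpose": purpose,
        "contains_customer_data": contains_customer_data,
        "decision_impact": decision_impact,  # e.g. "high" triggers human review
    }
    AUDIT_LOG.append(json.dumps(entry))  # serialise for the append-only sink
    return entry

entry = log_ai_interaction("analyst-042", "enterprise-llm",
                           "loan memo drafting", True, "high")
```

Capturing who used which tool, for what purpose, and with what decision impact is the minimum needed to reconstruct an AI-related incident after the fact.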

Layer 5: Testing and Validation

The final layer ensures that governance remains effective over time. Pre-deployment testing must cover accuracy, bias, and security. Ongoing accuracy monitoring with automated alerts catches degradation early. Annual independent reviews of AI models and governance provide external assurance. Stress testing AI systems under adverse scenarios rounds out a comprehensive validation programme.
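The ongoing accuracy monitoring with automated alerts described above can be sketched as a rolling-window check. The baseline, tolerance, and window size are illustrative assumptions; real thresholds would come from the model's validation report.

```python
# Minimal sketch: rolling accuracy monitor that raises an alert when
# performance drifts below a validated baseline. Thresholds are
# illustrative assumptions, not regulatory values.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline=0.95, tolerance=0.05, window=100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def check(self):
        """Return an alert string if rolling accuracy breaches the threshold."""
        if not self.outcomes:
            return None
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.baseline - self.tolerance:
            return f"ALERT: rolling accuracy {accuracy:.2%} below threshold"
        return None

monitor = AccuracyMonitor(baseline=0.95, tolerance=0.05, window=100)
for correct in [True] * 80 + [False] * 20:  # simulated degradation to 80%
    monitor.record(correct)
print(monitor.check())
```

In practice the alert would feed the incident response procedures from Layer 4 rather than a print statement.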

Implementation Checklist for Financial Institutions

Governance Structure

Financial institutions should confirm that the board has approved the AI governance framework, that AI risk ownership sits at the senior management level, that an AI governance committee operates with cross-functional representation, and that reporting lines and escalation procedures are clearly defined.

Policies

The policy foundation requires a published AI acceptable use policy, data classification standards that cover both AI inputs and outputs, a vendor management policy with AI-specific requirements, and an incident response plan that addresses AI-specific scenarios.

Risk Assessment

On the risk assessment front, all existing AI deployments should be assessed, the risk assessment process for new deployments should be documented, AI risks should appear in the enterprise risk register, and quarterly AI risk reporting to the board should be scheduled.

Technical Controls

Technical safeguards include deploying enterprise AI tools with appropriate security controls, blocking or monitoring unapproved AI tools, configuring audit logging to capture AI interactions involving sensitive data, and implementing data loss prevention (DLP) rules for AI tools.
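A DLP rule for AI tools can be sketched as a pre-submission check that blocks prompts containing likely personal financial data. The patterns below are illustrative assumptions (a Singapore NRIC-like format, card-number-length digit runs, email addresses); production DLP relies on vendor-maintained detectors with far lower false-negative rates.

```python
# Minimal sketch of a DLP pre-submission check for AI tool prompts.
# Patterns are illustrative assumptions, not a complete detector set.
import re

DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-length digit runs
    "nric_sg": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),        # Singapore NRIC format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def dlp_check(prompt):
    """Return the names of matched patterns; an empty list means the prompt may pass."""
    return [name for name, pat in DLP_PATTERNS.items() if pat.search(prompt)]

violations = dlp_check(
    "Assess credit risk for customer S1234567A, card 4111 1111 1111 1111"
)
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
```

The same check can run in monitor-only mode first to measure how often staff paste sensitive data into AI tools before enforcement is switched on.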

Fairness and Transparency

Finally, institutions must test AI models affecting customer decisions for bias, establish customer notification processes for AI-influenced decisions, create appeals and review mechanisms, and define explainability requirements for all customer-facing AI.

MAS FEAT Principles Implementation

Fairness

Implementing the fairness principle requires quarterly demographic bias testing on AI models, thorough documentation of fairness metrics and testing results, a fairness review board that evaluates new AI deployments before they reach production, and mechanisms that allow customers to challenge AI-driven decisions.

Ethics

The ethics principle demands that institutions align every AI use case with their code of ethics, prohibit use cases that conflict with ethical standards, and train employees specifically on ethical AI use in financial services contexts.

Accountability

Accountability requires documented ownership for every AI system, a centralised register of all AI models and their designated owners, and defined escalation paths for AI-related concerns.

Transparency

Transparency obligations include informing customers whenever AI is used in decisions affecting them, providing explanations of AI decision factors upon request, and publishing information about AI use in annual reports or on the institution's website.

What's Changed: Financial AI Governance Requirements in 2025-2026

Financial services AI governance has crossed a threshold. What began as voluntary best practices has accelerated into mandatory compliance obligations across major jurisdictions, fundamentally reshaping how institutions must structure their oversight architectures.

In the United States, the OCC, Federal Reserve, and FDIC jointly issued updated interagency guidance on model risk management in October 2024, explicitly bringing AI and machine learning models within the scope of SR 11-7 supervisory expectations. The SEC finalised rules requiring broker-dealers and investment advisers to address conflicts of interest arising from predictive analytics and AI-driven customer interaction tools, citing specific concerns about optimisation algorithms that prioritise firm revenue over client suitability.

In the European Union, the AI Act classifies financial AI applications involving creditworthiness assessment, insurance pricing, and fraud detection as high-risk under Annex III. This classification triggers mandatory conformity assessments, technical documentation requirements, human oversight provisions, and registration in the EU AI database before market deployment. Compliance deadlines for high-risk system requirements begin August 2026, creating immediate implementation pressure for institutions operating across European markets.

Across the Asia-Pacific region, the framework landscape has broadened considerably. Beyond Singapore's MAS FEAT principles and Hong Kong's HKMA guidance, the Reserve Bank of India published its Framework for Responsible AI in the Financial Sector in draft form in December 2024. Bank Negara Malaysia updated its RMiT provisions to address AI specifically, and the Australian Prudential Regulation Authority issued CPG 235 companion guidance on AI risk management.

Building a Cross-Jurisdictional Governance Architecture

Financial institutions operating across multiple regulatory environments face a particular challenge: satisfying overlapping and sometimes divergent requirements without duplicating effort or creating governance gaps. The answer lies in unified processes that map to multiple frameworks simultaneously.

A centralised model inventory and classification system forms the starting point. Institutions should maintain a registry that categorises each AI model by regulatory jurisdiction, risk tier, and applicable framework requirements. Platforms such as ModelOp, Monitaur, or IBM OpenPages, configured with financial services taxonomies, can support this at scale.
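The registry described above can be sketched as a small data structure keyed by jurisdiction and risk tier. The field names, tier labels, and framework strings are illustrative assumptions, not the schema of ModelOp, Monitaur, or IBM OpenPages.

```python
# Minimal sketch of a centralised AI model inventory. Field names, tiers,
# and framework labels are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    owner: str
    jurisdictions: list                                  # e.g. ["SG", "MY"]
    risk_tier: str                                       # e.g. "high" / "medium" / "low"
    frameworks: list = field(default_factory=list)       # e.g. ["MAS FEAT", "BNM RMiT"]

class ModelInventory:
    def __init__(self):
        self._records = {}

    def register(self, record):
        self._records[record.model_id] = record

    def by_risk_tier(self, tier):
        """Models in a given tier, e.g. those requiring quarterly bias testing."""
        return [r for r in self._records.values() if r.risk_tier == tier]

inventory = ModelInventory()
inventory.register(ModelRecord("credit-score-v3", "retail-risk-team",
                               ["SG", "MY"], "high", ["MAS FEAT", "BNM RMiT"]))
high_risk = inventory.by_risk_tier("high")
```

Keeping jurisdiction and framework tags on each record is what lets one inventory serve MAS, BNM, and EU AI Act reporting simultaneously instead of three parallel spreadsheets.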

The three lines of defence model provides the organisational backbone. Business units as the first line own model usage and day-to-day monitoring. Risk management and compliance functions as the second line conduct independent validation, drawing on techniques outlined in the Prudential Regulation Authority's SS1/23 expectations. Internal audit as the third line performs periodic effectiveness assessments of the entire governance architecture.

Board-level reporting must keep pace with the complexity of the AI estate. Quarterly AI risk dashboards presented to board risk committees should incorporate metrics on model performance drift, fairness testing outcomes, incident volumes, and regulatory examination findings. This reporting cadence aligns with OCC Heightened Standards for large bank governance.

Regulatory examination preparedness rounds out the architecture. Institutions should maintain documented evidence packages organised by examination topic, cross-referencing FFIEC IT Examination Handbook modules with institution-specific AI governance artefacts. The critical discipline is continuous maintenance of these packages rather than reactive assembly during examination cycles.

Common Questions

Is there a specific MAS regulation for AI?

MAS does not have a single AI-specific regulation, but AI governance is required through multiple frameworks: the Technology Risk Management (TRM) Guidelines mandate governance for all technology including AI, the FEAT Principles set fairness and transparency expectations, and PDPA governs personal data processing. Together, these create comprehensive AI governance requirements for financial institutions.

Can financial institutions use commercial AI tools?

Financial institutions can use enterprise versions of AI tools with appropriate controls. Free or consumer versions are generally not suitable due to data handling risks. Enterprise versions with SSO, audit logging, and data protection agreements can be approved after completing a risk assessment aligned with MAS TRM and BNM RMiT requirements.

What are the consequences of non-compliance?

Consequences include regulatory enforcement action from MAS or BNM, financial penalties, required remediation programmes, reputational damage, loss of customer trust, and potential liability from biased or incorrect AI-driven decisions. MAS has increasingly focused on technology governance in its supervisory assessments.

References

  1. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT). Monetary Authority of Singapore, 2018.
  2. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization, 2023.
  4. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission, 2024.
  5. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
  6. OECD Principles on Artificial Intelligence. OECD, 2019.
  7. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.
