Abstract
Bank Negara Malaysia's guidance on AI and technology risk management for financial institutions. Covers AI governance requirements for banks, insurers, and payment providers. Addresses AI-driven fraud detection, algorithmic bias in credit decisions, model risk management, and the integration of AI within Malaysia's financial regulatory framework.
About This Research
Publisher: Bank Negara Malaysia
Year: 2025
Type: Governance Framework
Source: Bank Negara Malaysia: AI and Technology Risk Management in Financial Services
Relevance
Industries: Financial Services
Pillars: AI Governance & Risk Management
Use Cases: Fraud Detection & AML, Risk Assessment & Management
Regions: Malaysia
Model Risk Management for Machine Learning Systems
Bank Negara's framework extends traditional model risk management principles to address the distinctive characteristics of machine learning systems, including their capacity for autonomous learning, sensitivity to training data quality, and potential for performance degradation through data drift. Financial institutions must establish model validation protocols that evaluate not only predictive accuracy but also stability, fairness, and explainability. The framework also requires ongoing model monitoring against quantified performance thresholds; breaching a threshold triggers mandatory revalidation, so deployed models remain within acceptable operating parameters throughout their lifecycle rather than being validated once and assumed reliable indefinitely.
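The monitoring pattern described above can be sketched in code. The framework itself does not prescribe specific metrics, so this is a minimal illustration under assumed choices: the population stability index (PSI) as a data-drift measure, AUC as the performance metric, and the hypothetical thresholds 0.25 (a common PSI rule of thumb) and 0.70 as revalidation triggers.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare the score distribution at validation time ('expected')
    with the current production distribution ('actual').
    PSI > 0.25 is a widely used rule of thumb for significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    psi = 0.0
    for i in range(bins):
        left = lo + i * width
        right = left + width if i < bins - 1 else hi + 1e-9  # include max
        e = sum(left <= x < right for x in expected) / len(expected)
        a = sum(left <= x < right for x in actual) / len(actual)
        e, a = max(e, 1e-4), max(a, 1e-4)  # avoid log(0) on empty bins
        psi += (a - e) * math.log(a / e)
    return psi

def needs_revalidation(psi, auc, psi_limit=0.25, auc_floor=0.70):
    """Flag the model for mandatory revalidation when any
    quantified threshold is breached, per the monitoring policy."""
    return psi > psi_limit or auc < auc_floor
```

In practice the drift check would run on a schedule (e.g. monthly), and a `True` result would open a revalidation ticket rather than silently retraining, preserving the audit trail the framework expects.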
Third-Party AI Vendor Governance
Recognizing that many Malaysian financial institutions procure AI capabilities from external vendors rather than developing them internally, the framework establishes specific governance requirements for third-party AI relationships. Financial institutions remain fully accountable for AI decisions regardless of whether the underlying models were developed internally or procured externally. Vendor due diligence requirements encompass model documentation review, bias testing verification, intellectual property assessment, and business continuity planning for vendor disruption scenarios. These provisions address a governance gap that has emerged as AI-as-a-service models proliferate across the financial services industry.
Board-Level Accountability and Organizational Culture
The framework explicitly assigns board-level accountability for AI governance, requiring directors to demonstrate understanding of AI deployment strategies, associated risks, and risk mitigation measures within their organizations. This provision challenges a common pattern where AI governance responsibility resides exclusively within technology departments without meaningful board oversight. Board reporting requirements include regular updates on AI deployment inventory, model performance metrics, incident reports, and emerging risk assessments, ensuring that algorithmic decision-making receives governance attention commensurate with its business significance and risk implications.