
Malaysia AI Regulations 2026: Complete Guide

February 9, 2026 · 13 min read · Pertama Partners
For: Compliance Leads, Risk Officers, Legal Counsel, Data Protection Officers, Chief Technology Officers

Comprehensive guide to Malaysia's AI regulatory framework covering the National AI Framework, PDPA requirements, Bank Negara Malaysia guidelines, and practical compliance strategies for organizations.

Part 4 of 6

AI Regulations & Compliance

Country-specific AI regulations, global compliance frameworks, and industry guidance for Asia-Pacific businesses

Key Takeaways

  1. Malaysia's AI regulation operates through the principles-based National AI Framework (ethical, human-centric, transparent, accountable, secure AI) combined with binding PDPA requirements for personal data protection, without dedicated AI legislation like the EU AI Act.
  2. A Data Protection Impact Assessment (DPIA) is not legally mandated but is strongly recommended by the PDPD for high-risk AI, and should be treated as a best practice and regulatory expectation demonstrating responsible AI governance.
  3. Bank Negara Malaysia's Risk Management in Technology framework applies to financial services AI, with requirements for governance, risk management, development/validation, consumer protection, monitoring, and third-party management; more specific AI guidance is expected.
  4. Cross-border data transfer restrictions under PDPA Section 129 require assessing the destination jurisdiction's data protection laws and establishing contractual safeguards (data processing agreements, standard contractual clauses) for cloud AI platforms and international AI services.
  5. Organizations should proactively implement comprehensive AI governance aligned with National AI Framework principles and rigorous PDPA compliance now, to prepare for anticipated AI-specific legislation and sectoral regulations.

Malaysia is building a comprehensive AI regulatory framework as part of its national digital transformation strategy. This guide provides detailed coverage of Malaysia's AI regulations, including the National AI Framework, Personal Data Protection Act (PDPA) requirements, sector-specific regulations, and practical compliance guidance.

Malaysia's AI Regulatory Landscape

Malaysia's approach to AI regulation balances innovation with governance, aligning AI development with national economic and social objectives while protecting individuals and ensuring ethical AI deployment.

Key Characteristics:

  • National Digital Transformation: AI regulation integrated with broader Fourth Industrial Revolution (4IR) strategy
  • Principles-Based Framework: Ethical AI principles guide development and deployment
  • Sectoral Evolution: Financial services, public sector, and critical sectors developing AI-specific requirements
  • ASEAN Alignment: Coordination with ASEAN initiatives on AI governance and cross-border data flows

Key Regulatory Bodies:

  • Personal Data Protection Department (PDPD): Administers and enforces the PDPA, with specific focus on AI and automated decision-making
  • Bank Negara Malaysia (BNM): Regulates financial services AI through Risk Management in Technology framework
  • Malaysian Communications and Multimedia Commission (MCMC): Oversees digital services and emerging technologies
  • Malaysian Administrative Modernisation and Management Planning Unit (MAMPU): Provides AI guidelines for public sector
  • Ministry of Science, Technology and Innovation (MOSTI): Coordinates national AI strategy development

National AI Framework and Roadmap

National Fourth Industrial Revolution (4IR) Policy

Malaysia's 4IR Policy, launched as part of the Malaysia Digital Economy Blueprint, establishes the strategic framework for AI adoption and governance. The policy emphasizes human-centric, ethical, and responsible AI development.

AI Governance Principles:

1. Ethical AI Development

  • AI systems should align with Malaysian values and societal norms
  • Respect for human dignity, rights, and autonomy
  • Consideration of social, cultural, and religious contexts
  • Ethical considerations throughout AI lifecycle

2. Human-Centric Design

  • AI should augment human capabilities, not replace human judgment in critical areas
  • Human oversight for consequential decisions
  • AI designed to serve public good and improve quality of life
  • Inclusivity and accessibility in AI design

3. Transparency and Explainability

  • Organizations should be transparent about AI use
  • Individuals have right to understand how AI affects them
  • Explainability mechanisms proportionate to AI impact and risk
  • Clear communication about AI capabilities and limitations

4. Accountability and Responsibility

  • Clear accountability for AI systems and their outcomes
  • Organizations responsible for AI performance, bias, and failures
  • Governance structures ensuring oversight and control
  • Mechanisms for redress when AI causes harm

5. Safety and Security

  • Robust security measures protecting AI systems and data
  • Safety testing and validation before deployment
  • Ongoing monitoring for failures, attacks, and degradation
  • Incident response and remediation procedures

AI Implementation Roadmap

Malaysia's AI roadmap focuses on:

  • Capacity Building: Developing AI talent and expertise
  • Infrastructure: Building computational and data infrastructure
  • Regulatory Framework: Developing specific AI regulations and standards
  • Public Sector Adoption: Implementing AI in government services
  • Industry Enablement: Supporting private sector AI adoption
  • International Cooperation: Participating in ASEAN and international AI initiatives

Personal Data Protection Act 2010 (PDPA)

PDPA Overview

Malaysia's PDPA, enacted in 2010 and effective from 2013, is the primary data protection law. AI systems processing personal data must comply with PDPA obligations.

Personal Data Definition: Information in respect of commercial transactions that relates directly or indirectly to a data subject who is identified or identifiable from that information, or from that information combined with other information in the possession of the data user. This includes:

  • Name, address, contact information
  • Identification numbers
  • Financial information
  • Health information
  • Any information that can identify an individual

Data Protection Principles

The PDPA establishes seven Data Protection Principles applicable to AI:

General Principle (Section 5)

  • Personal data shall not be processed unless consent obtained or processing otherwise lawful
  • Processing must be fair, lawful, and for specified purposes

Notice and Choice Principle (Section 7)

  • Data users must inform individuals:
    • That personal data is being collected
    • Purposes of collection and processing
    • Sources of personal data
    • Individual's right to access and correct data
    • Whether supply of data is voluntary or mandatory
    • Consequences of failing to supply data
  • For AI: Notice must disclose AI use, automated decision-making, and consequences

Disclosure Principle (Section 8)

  • Personal data shall not be disclosed for purposes other than specified without consent
  • For AI: Data collected for one purpose cannot be used for different AI application without new consent or lawful basis

Security Principle (Section 9)

  • Data users must take practical steps to protect personal data from loss, misuse, modification, unauthorized access, or disclosure
  • For AI: Robust security for training data, models, and AI systems; protection against AI-specific threats (adversarial attacks, data poisoning, model extraction)

Retention Principle (Section 10)

  • Personal data shall not be kept longer than necessary to fulfill purposes for which it was collected
  • For AI: Define retention periods for training data, operational data, AI decision logs balancing accountability needs with privacy

Data Integrity Principle (Section 11)

  • Personal data must be accurate, complete, not misleading, and kept up-to-date
  • For AI: Critical for training data quality and operational data accuracy; inaccurate data produces biased, incorrect AI outputs violating PDPA

Access Principle (Section 12)

  • Individuals have right to:
    • Request information about processing of their personal data
    • Access their personal data
    • Request correction of inaccurate personal data
  • For AI: Individuals can request information about how AI processes their data and challenge AI-driven decisions
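
Several of these principles map directly onto system behaviour. As one illustration, the Retention Principle can be enforced with a simple schedule check. The sketch below is a minimal, hypothetical example: the category names and retention periods are illustrative choices, not figures prescribed by the PDPA, and each period must be justified by the stated purpose.

```python
from datetime import date, timedelta

# Illustrative retention periods per data category; the PDPA does not
# prescribe fixed periods, so each must be justified by purpose.
RETENTION_PERIODS = {
    "training_data": timedelta(days=3 * 365),
    "operational_data": timedelta(days=365),
    "decision_logs": timedelta(days=7 * 365),  # longer, for accountability
}

def is_due_for_deletion(category: str, collected_on: date, today: date) -> bool:
    """Return True once a record has exceeded its retention period."""
    return today - collected_on > RETENTION_PERIODS[category]
```

A scheduled job applying this check across data stores gives auditable evidence that retention limits are actually enforced, not merely documented.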

AI-Specific PDPA Considerations

Consent for AI Processing

PDPA requires consent for processing personal data. For AI:

  • Explicit Consent: Recommended for high-risk AI applications (automated decisions with legal/significant effects)
  • Informed Consent: Individuals must understand that AI will process their data and make or inform decisions affecting them
  • Specific Consent: Consent should be purpose-specific; vague "analytics" or "business operations" insufficient
  • Withdrawal: Individuals can withdraw consent; organizations must accommodate withdrawal in AI systems

Example Consent Language: "We use artificial intelligence to assess your loan application. The AI analyzes your financial information, employment history, and credit data to predict likelihood of repayment and determine loan approval and terms. Decisions may be made automatically with limited human review. By submitting this application, you consent to this AI-powered assessment."
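
In practice, purpose-specific consent and withdrawal can be tracked with a simple record structure. The sketch below is illustrative only (the schema and field names are assumptions, not a PDPA-prescribed format); it shows how withdrawal and purpose limitation can be checked before any AI processing runs.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """Purpose-specific consent for AI processing (illustrative schema)."""
    subject_id: str
    purpose: str                      # e.g. "AI-assisted loan assessment"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self, when: datetime) -> None:
        self.withdrawn_at = when

    def is_active(self) -> bool:
        return self.withdrawn_at is None

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Consent must be purpose-specific: consent for one purpose does not
    # authorise a different AI application without a new lawful basis.
    return record.is_active() and record.purpose == purpose
```

Gating every AI pipeline on a check like `may_process` is one way to make withdrawal operational rather than a policy statement only.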

Data Protection Impact Assessment (DPIA)

While PDPA doesn't explicitly mandate DPIA, the Personal Data Protection Commissioner has issued guidance recommending DPIA for:

  • Processing personal data at scale
  • Processing sensitive personal data
  • Automated decision-making with legal or significant effects on individuals
  • New technologies with privacy implications (including AI)

DPIA for AI Should Cover:

  1. Description: AI system purpose, functionality, data processed, decisions made
  2. Necessity and Proportionality: Why AI is necessary, whether less privacy-invasive alternatives exist
  3. Risks to Individuals: Potential harms from AI errors, bias, security breaches, misuse
  4. Risk Mitigation: Controls implemented (explainability, bias testing, human oversight, security)
  5. Consultation: Stakeholder engagement on AI deployment
  6. Review and Approval: Assessment by privacy officer, legal, compliance; approval by appropriate governance body
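
A lightweight way to operationalise these six elements is a completeness check run before a DPIA goes to governance for sign-off. This is a hypothetical sketch, not an official PDPD template; the section keys simply mirror the list above.

```python
# The six DPIA elements above, expressed as a completeness check.
REQUIRED_SECTIONS = [
    "description",
    "necessity_and_proportionality",
    "risks_to_individuals",
    "risk_mitigation",
    "consultation",
    "review_and_approval",
]

def missing_dpia_sections(dpia: dict) -> list:
    """Return the required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not dpia.get(s)]
```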

Automated Decision-Making Rights

PDPA doesn't explicitly provide a "right to object to automated decision-making" like GDPR Article 22. However, PDPD guidance indicates:

  • Individuals should be informed of automated decision-making
  • Individuals have right to request human review of consequential automated decisions
  • Organizations should implement human oversight for high-impact AI
  • Individuals can challenge AI-driven decisions through access and correction rights

Best Practice: Implement human review mechanisms for AI making decisions with legal or significant effects (credit, employment, insurance, benefits, legal rights).
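
One way to implement this best practice is a routing rule that sends consequential or low-confidence decisions to a human reviewer. The effect categories below come from the list above; the confidence threshold is an illustrative policy choice, not a regulatory figure.

```python
# Decision categories with legal or significant effects (from the list above).
SIGNIFICANT_EFFECTS = {"credit", "employment", "insurance", "benefits", "legal_rights"}

def requires_human_review(effect_category: str, confidence: float,
                          threshold: float = 0.9) -> bool:
    """Route to a human when effects are significant or model confidence is low."""
    return effect_category in SIGNIFICANT_EFFECTS or confidence < threshold
```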

Cross-Border Data Transfers

Section 129 restricts transferring personal data outside Malaysia unless:

  • Recipient country has adequate data protection laws, OR
  • Organization ensures adequate protection through contract or other means

For AI Systems:

  • Using cloud AI platforms processing data outside Malaysia: Establish data processing agreements ensuring PDPA-level protection
  • Offshore AI development: Assess destination jurisdiction's data protection laws; implement contractual safeguards
  • Cross-border AI deployments: Ensure data protection across all jurisdictions
  • Document transfer risk assessments and safeguards
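
The Section 129 logic above, adequacy or contractual safeguards, can be captured as a simple pre-transfer gate. The adequacy list and parameter names below are hypothetical; a real adequacy decision requires a documented legal assessment of the destination jurisdiction.

```python
# Hypothetical internal list of jurisdictions assessed as adequate.
ASSESSED_ADEQUATE = {"SG", "JP", "KR"}

def transfer_permitted(destination: str, has_dpa: bool,
                       has_sccs: bool = False) -> bool:
    """Check the two s.129 routes: adequacy, or contractual safeguards
    (data processing agreement or standard contractual clauses)."""
    return destination in ASSESSED_ADEQUATE or has_dpa or has_sccs
```

Logging every call to a gate like this also produces the documented transfer risk assessments the last bullet calls for.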

PDPA Enforcement and Penalties

Personal Data Protection Commissioner: Investigates complaints, conducts audits, issues enforcement notices.

Penalties:

  • Failure to comply with Commissioner's enforcement notice: Fine up to MYR 500,000 and/or imprisonment up to 3 years
  • Unlawful processing of personal data: Penalties vary by violation severity
  • Serious violations: Commissioner can publicize non-compliance (significant reputational impact)

Recent Enforcement: PDPD has investigated AI-related complaints including:

  • Lack of transparency about automated decision-making
  • Inadequate security for personal data used in AI systems
  • Using personal data for AI purposes beyond original consent scope

Enforcement is increasing as AI adoption grows.

Sector-Specific AI Regulations

Financial Services: Bank Negara Malaysia (BNM)

Risk Management in Technology (RMiT) Framework

BNM's RMiT framework applies to all technology risks, including AI systems deployed by financial institutions.

Key Requirements for AI:

1. Governance and Oversight

  • Board and senior management oversight of AI strategy and deployment
  • Clear accountability for AI systems
  • AI governance integrated with overall technology risk governance
  • Regular reporting to board on AI systems, risks, incidents

2. Risk Management

  • Comprehensive risk assessments for AI systems covering:
    • Model risk (accuracy, bias, robustness)
    • Operational risk (failures, performance degradation)
    • Compliance risk (PDPA, consumer protection, AML/CFT)
    • Reputational risk (customer trust, public perception)
    • Strategic risk (over-reliance on AI, competitive positioning)
  • Risk mitigation controls proportionate to risk level
  • Ongoing risk monitoring and reassessment

3. Development and Validation

  • Rigorous AI development methodology with documentation
  • Testing and validation before deployment (accuracy, fairness, robustness, security)
  • Independent validation for material AI systems
  • Documentation of model assumptions, limitations, appropriate use cases
  • Approval processes and sign-offs before deployment

4. Consumer Protection

  • Fair treatment of customers in AI-driven processes
  • Transparency about AI use in customer-facing applications
  • Explainability of AI-driven decisions affecting customers
  • Complaint handling mechanisms for AI-related issues
  • Human oversight for consequential customer decisions

5. Monitoring and Change Management

  • Continuous monitoring of AI performance against KPIs
  • Drift detection and model revalidation
  • Rigorous change management for AI system updates
  • Incident management and escalation for AI failures

6. Third-Party Risk Management

  • Due diligence on AI service providers and platforms
  • Contractual requirements ensuring compliance and performance
  • Ongoing monitoring of third-party AI services
  • Accountability maintained by financial institution
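
To make the monitoring and drift-detection requirement concrete: drift is commonly screened with the Population Stability Index (PSI) over binned score distributions. The sketch below is illustrative; the 0.25 threshold is a widespread industry convention, not a BNM-mandated figure.

```python
import math

def population_stability_index(expected: list, actual: list) -> float:
    """PSI across matched bins of two score distributions."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

def needs_revalidation(psi: float, threshold: float = 0.25) -> bool:
    """Flag a model for revalidation when drift exceeds the policy threshold."""
    return psi > threshold
```

A PSI breach would then trigger the revalidation, change-management, and escalation steps described above.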

AI-Specific Emerging Requirements

BNM is developing AI-specific guidance for financial institutions, which is expected to address:

  • Bias and fairness testing and mitigation
  • Explainability requirements for customer-facing AI
  • Governance structures for AI ethics and accountability
  • AI security and resilience
  • Use of generative AI and large language models

Public Sector: MAMPU Guidelines

AI in Government Services

MAMPU provides guidelines for AI adoption in Malaysian government agencies:

Principles:

  • Public Interest: AI should serve public good and improve service delivery
  • Transparency: Government AI use should be transparent to citizens
  • Fairness: Government AI must treat all citizens equitably
  • Accountability: Clear accountability for government AI systems
  • Security: Robust protection of citizen data in government AI

Requirements:

  • Government agencies deploying AI must conduct impact assessments
  • High-risk government AI requires additional scrutiny and approval
  • Regular audits of government AI systems
  • Publication of information about government AI use (transparency registers)
  • Citizen feedback and complaint mechanisms

Implementation Roadmap for Malaysia AI Compliance

Phase 1: Assessment (Months 1-2)

AI System Inventory:

  • Identify all AI systems in use or development
  • Document purpose, personal data processed, decision-making role, affected individuals, cross-border aspects
  • Classify by risk level (high, medium, low)
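
The inventory and classification steps can be captured as one record per AI system. In the sketch below, the fields mirror the assessment items above; the tiering heuristic is an illustrative starting point, and real criteria should follow the organisation's DPIA methodology.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One inventory entry per AI system (illustrative fields)."""
    name: str
    purpose: str
    processes_personal_data: bool
    automated_decisions: bool      # makes or materially informs decisions
    significant_effects: bool      # legal / credit / employment-type effects
    cross_border: bool

def classify_risk(s: AISystem) -> str:
    """Simple tiering heuristic for prioritising remediation."""
    if s.automated_decisions and s.significant_effects:
        return "high"
    if s.processes_personal_data or s.cross_border:
        return "medium"
    return "low"
```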

Regulatory Mapping:

  • PDPA compliance assessment
  • Sector-specific requirements (BNM for financial services, MAMPU for public sector)
  • Cross-border data transfer requirements
  • Industry-specific regulations

Gap Analysis:

  • Compare current practices against regulatory requirements
  • Identify gaps in governance, DPIA, consent, explainability, bias management, security, documentation
  • Prioritize remediation efforts

Phase 2: Governance and Policy (Months 2-4)

Governance Structure:

  • Establish AI governance committee
  • Assign roles: AI system owners, data protection officer, AI ethics officer
  • Define escalation and approval processes
  • Integrate with existing governance (risk committee, technology committee)

Policy Development:

  • AI governance policy aligned with National AI Framework principles
  • PDPA compliance procedures for AI
  • DPIA methodology
  • Bias and fairness testing procedures
  • Explainability standards
  • Security standards for AI systems and training data
  • Cross-border data transfer procedures

Training:

  • PDPA and AI compliance training for relevant staff
  • Technical training on bias testing, explainability, security
  • Ethics training for AI ethics committee

Phase 3: Implementation (Months 4-8)

For Each AI System (Prioritize High-Risk):

  1. Data Protection Impact Assessment:

    • Conduct comprehensive DPIA
    • Document AI purpose, data processed, decisions made, risks, mitigations
    • Obtain governance approval
  2. Consent and Notice:

    • Review and update privacy notices disclosing AI use
    • Ensure consent (or other lawful basis) for AI processing
    • Implement consent management for withdrawal
  3. Explainability:

    • Implement explainability mechanisms appropriate to AI risk and complexity
    • Develop customer-facing explanations for AI-driven decisions
    • Train staff to explain AI to customers
  4. Bias and Fairness:

    • Identify relevant demographic groups and protected characteristics
    • Test AI for bias and disparate impact
    • Implement bias mitigation strategies
    • Document bias testing and mitigations
  5. Human Oversight:

    • Determine appropriate level of human involvement
    • Implement human review mechanisms for high-risk decisions
    • Train human reviewers on AI oversight
  6. Security:

    • Conduct AI security risk assessment
    • Implement security controls (access controls, encryption, monitoring)
    • Test AI resilience to adversarial attacks
    • Establish incident response procedures
  7. Cross-Border Transfers:

    • Document cross-border data flows
    • Assess destination jurisdiction data protection laws
    • Establish data processing agreements or other transfer safeguards
    • Obtain PDPD approval if required
  8. Documentation:

    • Document AI system design, development, validation, deployment
    • Maintain DPIA, consent records, bias testing results, security assessments
    • Create audit trails for AI decisions
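
For step 4 (bias and fairness testing), a common first screen is a disparate impact ratio with a four-fifths-rule threshold. Note that the 0.8 cutoff is a convention borrowed from US employment practice, not a Malaysian regulatory requirement; a failing screen should trigger deeper analysis, not an automatic conclusion.

```python
# Disparate impact screen: compare favourable-outcome rates across two groups.
# Outcomes are 1 (favourable, e.g. approved) or 0 (unfavourable).
def selection_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one
    (assumes at least one favourable outcome across the groups)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

def passes_four_fifths(group_a: list, group_b: list,
                       threshold: float = 0.8) -> bool:
    return disparate_impact_ratio(group_a, group_b) >= threshold
```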

Phase 4: Monitoring and Improvement (Ongoing)

Continuous Monitoring:

  • Monitor AI performance, bias, security
  • Track incidents and complaints
  • Collect user feedback
  • Monitor regulatory developments

Regular Reviews:

  • Quarterly AI governance committee meetings
  • Annual comprehensive AI system audits
  • Periodic DPIA reviews and updates
  • Regular bias and fairness testing

Regulatory Engagement:

  • Monitor PDPD guidance and enforcement
  • Monitor BNM developments on AI in financial services
  • Participate in industry consultations
  • Proactive engagement with regulators for novel AI applications

Key Compliance Challenges and Solutions

Challenge 1: Lack of Explicit AI Legislation

  • Issue: Malaysia doesn't have specific AI legislation like EU AI Act; relies on PDPA and sectoral regulations
  • Solution: Align with National AI Framework principles; implement PDPA rigorously; monitor international best practices; engage proactively with regulators

Challenge 2: DPIA Not Legally Mandated

  • Issue: PDPA doesn't explicitly require DPIA, though PDPD recommends it
  • Solution: Treat DPIA as best practice and regulatory expectation for high-risk AI; conduct comprehensive DPIA demonstrating due diligence

Challenge 3: Cross-Border Data Transfers

  • Issue: Section 129 restrictions can complicate use of cloud AI platforms and international AI services
  • Solution: Assess destination jurisdiction data protection; establish robust data processing agreements; document transfer safeguards; consider local infrastructure options

Challenge 4: Emerging Regulations

  • Issue: BNM and other regulators developing AI-specific requirements; landscape evolving
  • Solution: Monitor regulatory developments closely; participate in industry consultations; build flexible AI governance capable of adapting to new requirements

Challenge 5: Resource Constraints

  • Issue: Smaller organizations may lack resources for comprehensive AI compliance
  • Solution: Prioritize high-risk AI systems; leverage industry frameworks and tools; consider third-party compliance services; engage industry associations for shared resources

Future Outlook

Anticipated Regulatory Developments

PDPA Amendments: Potential amendments strengthening AI-related provisions:

  • Explicit automated decision-making rights
  • Mandatory DPIA for high-risk AI
  • Enhanced enforcement powers and penalties
  • Alignment with international standards (GDPR, APEC CBPR)

AI-Specific Legislation: Malaysia may introduce dedicated AI legislation addressing:

  • AI governance requirements
  • High-risk AI regulations
  • AI transparency and explainability standards
  • AI safety and security requirements
  • Prohibited AI applications

Sectoral Regulations: Expect specific AI guidance from:

  • BNM for financial services (bias testing, explainability, governance)
  • Ministry of Health for healthcare AI
  • Malaysian Communications and Multimedia Commission for telecommunications and digital services
  • Other regulators for critical sectors

ASEAN Harmonization: Malaysia is participating in ASEAN AI governance initiatives, working toward regional alignment on AI principles and trusted cross-border data flows.

Preparing for Future Requirements

Build Strong Foundation Now:

  • Implement comprehensive AI governance aligned with National AI Framework
  • Rigorous PDPA compliance for AI systems
  • Proactive bias testing and fairness management
  • Robust explainability and human oversight
  • Strong security and data protection

Stay Informed and Engaged:

  • Monitor PDPD guidance and enforcement
  • Track BNM and other sectoral regulator developments
  • Participate in industry consultations
  • Engage with international AI governance initiatives

Document Everything:

  • Comprehensive documentation of AI governance, risk assessments, compliance measures
  • Audit trails enabling demonstration of responsible AI practices
  • Evidence of continuous improvement and adaptation

Conclusion

Malaysia's AI regulatory landscape is evolving rapidly, with the National AI Framework providing principles-based guidance and PDPA establishing binding data protection requirements. Organizations deploying AI in Malaysia must:

  1. Align with the National AI Framework: Implement ethical, human-centric, transparent, accountable, and secure AI
  2. Comply rigorously with the PDPA: Ensure AI systems processing personal data meet all PDPA obligations
  3. Conduct DPIAs: Treat DPIA as essential for high-risk AI even though it is not legally mandated
  4. Meet sector-specific requirements: Satisfy BNM requirements for financial services and other sectoral regulations
  5. Manage cross-border transfers: Establish appropriate safeguards for data transfers outside Malaysia
  6. Monitor continuously: Actively track regulatory developments and adapt AI governance

Organizations that proactively implement comprehensive AI governance will be well-positioned for current compliance and future regulatory developments.


Need expert guidance on Malaysia AI compliance? Contact Pertama Partners for comprehensive advisory services covering PDPA, BNM requirements, and AI governance.

Frequently Asked Questions

Does Malaysia have dedicated AI legislation comparable to the EU AI Act?

No, Malaysia does not currently have dedicated AI legislation comparable to the EU AI Act. Instead, AI regulation in Malaysia operates through: (1) National AI Framework: Principles-based guidance under the Malaysia Digital Economy Blueprint and Fourth Industrial Revolution Policy, establishing ethical AI principles (ethical development, human-centric design, transparency, accountability, safety/security) but not legally binding. (2) Personal Data Protection Act 2010 (PDPA): Binding data protection law applying to AI systems processing personal data, with enforcement by Personal Data Protection Commissioner. (3) Sectoral Regulations: Sector-specific requirements from Bank Negara Malaysia (financial services), MAMPU (public sector), and other regulators. Malaysia is expected to introduce more specific AI legislation in the future, potentially including explicit automated decision-making rights, mandatory DPIA for high-risk AI, and AI-specific governance requirements. Organizations should align with National AI Framework principles, rigorously comply with PDPA, and monitor regulatory developments for emerging AI-specific requirements.

Is a Data Protection Impact Assessment (DPIA) mandatory under Malaysia's PDPA?

DPIA is not explicitly mandated by Malaysia's PDPA, but the Personal Data Protection Commissioner has issued guidance strongly recommending DPIA for: (1) Processing personal data at large scale; (2) Processing sensitive personal data; (3) Automated decision-making with legal or significant effects on individuals; (4) New technologies with privacy implications, including AI. Most AI systems fall within these categories. While technically not a legal requirement, organizations should treat DPIA as a best practice and regulatory expectation for high-risk AI systems for several reasons: (1) Demonstrates due diligence and responsible AI governance; (2) Helps identify and mitigate privacy risks before deployment; (3) Provides evidence of PDPA compliance if challenged; (4) Aligns with international best practices (GDPR requires DPIA for high-risk processing); (5) Future PDPA amendments may mandate DPIA. A comprehensive AI DPIA should cover: system description, necessity and proportionality assessment, risks to individuals, mitigation controls, stakeholder consultation, and governance approval. Document DPIA thoroughly as evidence of responsible AI practices.

How does Malaysia's PDPA differ from Singapore's PDPA for AI compliance?

While both Malaysia and Singapore have Personal Data Protection Acts (PDPA), there are important differences affecting AI compliance: (1) Scope: Malaysia PDPA applies to commercial transactions; Singapore PDPA broader scope. (2) Consent: Both require consent but Malaysia emphasizes explicit notice and choice; Singapore allows deemed consent more readily. (3) Automated Decision-Making: Neither has explicit right to object like GDPR Article 22, but Singapore's Model AI Governance Framework provides more detailed guidance on human oversight; Malaysia relies on PDPD recommendations. (4) DPIA: Neither explicitly mandates DPIA, but both regulators recommend it for high-risk processing; Singapore's Model AI Governance Framework provides more structured risk assessment guidance. (5) Cross-Border Transfers: Malaysia Section 129 requires adequate protection in destination jurisdiction or contractual safeguards; Singapore Section 26 similar but more flexible interpretation. (6) Penalties: Malaysia up to MYR 500,000 and/or imprisonment; Singapore up to SGD 1 million or 10% of turnover. (7) Regulatory Guidance: Singapore has more comprehensive AI-specific guidance (Model AI Governance Framework, MAS FEAT principles); Malaysia has National AI Framework principles but less detailed implementation guidance. Organizations operating in both jurisdictions can use a harmonized approach: implement Singapore's Model AI Governance Framework (more comprehensive) while ensuring Malaysia-specific requirements (explicit notice, DPIA, cross-border transfer safeguards) are met.

How does Bank Negara Malaysia regulate AI in financial services?

Bank Negara Malaysia (BNM) regulates AI in financial services through its Risk Management in Technology (RMiT) framework, with specific AI considerations: (1) Governance and Oversight: Board and senior management oversight of AI strategy, clear accountability for AI systems, AI governance integrated with technology risk governance, regular reporting to board on AI systems and risks. (2) Risk Management: Comprehensive risk assessments covering model risk (accuracy, bias, robustness), operational risk (failures, degradation), compliance risk (PDPA, consumer protection), reputational risk, and strategic risk; risk mitigation controls proportionate to risk level; ongoing monitoring and reassessment. (3) Development and Validation: Rigorous development methodology with documentation, testing and validation before deployment (accuracy, fairness, robustness, security), independent validation for material AI systems, documentation of model assumptions and limitations, approval processes before deployment. (4) Consumer Protection: Fair treatment of customers in AI-driven processes, transparency about AI use, explainability of AI-driven decisions, complaint handling mechanisms, human oversight for consequential decisions. (5) Monitoring and Change Management: Continuous performance monitoring, drift detection and revalidation, rigorous change management for updates, incident management for AI failures. (6) Third-Party Risk: Due diligence on AI service providers, contractual requirements ensuring compliance, ongoing monitoring, accountability maintained by financial institution. BNM is developing more specific AI guidance expected to address bias and fairness testing requirements, explainability standards, AI governance structures, AI security, and use of generative AI. Financial institutions should proactively implement comprehensive AI governance anticipating these requirements.

What are the requirements for transferring personal data outside Malaysia for AI processing?

Malaysia PDPA Section 129 restricts transferring personal data outside Malaysia unless: (1) The recipient country has adequate data protection laws (deemed adequate), OR (2) Organization ensures adequate protection through contractual or other means. For AI systems involving cross-border transfers (cloud AI platforms, offshore development, international AI services): Assessment: (1) Identify all cross-border data flows related to AI (training data, operational data, model parameters); (2) Assess destination jurisdiction's data protection laws - countries with comprehensive data protection (Singapore, EU, Japan, South Korea, etc.) may be deemed adequate; for others, rely on contractual safeguards; (3) Document transfer necessity and alternatives considered. Safeguards: (1) Data Processing Agreements: Establish contracts requiring recipient to protect personal data per PDPA standards, use data only for specified purposes, implement appropriate security, notify of breaches, return or delete data upon termination, submit to PDPA compliance audits; (2) Standard Contractual Clauses: Use internationally recognized clauses (APEC CBPR, EU SCCs adapted for Malaysia); (3) Binding Corporate Rules: For multinationals, establish BCRs ensuring PDPA-level protection across all entities; (4) Additional Technical Measures: Encryption before transfer, data minimization, anonymization where feasible. Documentation: Maintain comprehensive records of transfer risk assessments, destination jurisdiction evaluations, contractual safeguards, and technical measures. For high-risk or large-scale transfers, consider proactive engagement with PDPD. Major cloud AI platforms (AWS, Google Cloud, Azure) typically offer: data processing agreements meeting regulatory standards, compliance certifications, Malaysia-based infrastructure options (data residency). 
Best practice: Use Malaysia-based infrastructure where feasible; establish robust contractual safeguards; document all cross-border transfer decisions thoroughly.

References

  1. Malaysia National AI Roadmap 2021-2025. Malaysia Digital Economy Corporation (MDEC) (2021). View source
  2. AI Regulation and Governance in Malaysia. EY Malaysia (2025). View source
  3. SAP AI Ethics and Compliance Framework. SAP Southeast Asia (2025). View source
  4. Ethical AI Development in Malaysia. Universiti Malaya Centre for AI Research (2024). View source
