Malaysia is building a comprehensive AI regulatory framework as part of its national digital transformation strategy. This guide provides detailed coverage of Malaysia's AI regulations, including the National AI Framework, Personal Data Protection Act (PDPA) requirements, sector-specific regulations, and practical compliance guidance.
Malaysia's AI Regulatory Landscape
Malaysia's approach to AI regulation reflects a deliberate balancing act between fostering innovation and enforcing governance, aligning AI development with national economic and social objectives while protecting individuals and ensuring ethical deployment.
Several defining characteristics shape this landscape. AI regulation in Malaysia is not treated as a standalone policy domain but is integrated with the country's broader Fourth Industrial Revolution (4IR) strategy, positioning it as a core pillar of national digital transformation. The framework is principles-based, meaning ethical AI principles guide both development and deployment rather than prescriptive technical mandates. Regulation is also evolving along sectoral lines, with financial services, the public sector, and other critical industries each developing AI-specific requirements tailored to their risk profiles. Crucially, Malaysia's regulatory posture is shaped by its commitment to ASEAN alignment, coordinating with regional initiatives on AI governance and cross-border data flows.
Five regulatory bodies share responsibility for overseeing this landscape. The Personal Data Protection Department (PDPD) administers and enforces the PDPA, with increasing focus on AI and automated decision-making. Bank Negara Malaysia (BNM) regulates financial services AI through its Risk Management in Technology framework. The Malaysian Communications and Multimedia Commission (MCMC) oversees digital services and emerging technologies, while the Malaysian Administrative Modernisation and Management Planning Unit (MAMPU) provides AI guidelines specifically for public sector agencies. At the strategic level, the Ministry of Science, Technology and Innovation (MOSTI) coordinates national AI strategy development across government.
National AI Framework and Roadmap
National Fourth Industrial Revolution (4IR) Policy
Malaysia's 4IR Policy, launched as part of the Malaysia Digital Economy Blueprint, establishes the strategic framework for AI adoption and governance. The policy emphasizes human-centric, ethical, and responsible AI development, and is organized around five governance principles.
The first principle, ethical AI development, requires that AI systems align with Malaysian values and societal norms. This encompasses respect for human dignity, rights, and autonomy, with explicit consideration of social, cultural, and religious contexts. Ethical considerations must be embedded throughout the entire AI lifecycle, from design through deployment and retirement.
The second principle, human-centric design, holds that AI should augment human capabilities rather than replace human judgment in critical areas. Organizations must maintain human oversight for consequential decisions, and AI systems should be designed to serve the public good and improve quality of life. Inclusivity and accessibility are core design requirements, not afterthoughts.
Transparency and explainability form the third principle. Organizations are expected to be open about their use of AI, and individuals have the right to understand how AI affects them. Explainability mechanisms should be proportionate to the AI system's impact and risk level, accompanied by clear communication about AI capabilities and limitations.
The fourth principle, accountability and responsibility, demands clear lines of accountability for AI systems and their outcomes. Organizations bear responsibility for AI performance, bias, and failures. Governance structures must ensure meaningful oversight and control, and mechanisms for redress must be available when AI causes harm.
Finally, safety and security requires robust security measures to protect AI systems and the data they process. Safety testing and validation must occur before deployment, with ongoing monitoring for failures, attacks, and degradation. Incident response and remediation procedures must be established and maintained.
AI Implementation Roadmap
Malaysia's AI roadmap advances along six interconnected tracks. Capacity building focuses on developing AI talent and expertise across the workforce. Infrastructure investment targets computational and data capabilities needed to support AI at scale. The regulatory framework track is developing specific AI regulations and standards to complement the principles-based approach. Public sector adoption drives implementation of AI in government services, while industry enablement supports private sector AI uptake. International cooperation ensures Malaysia participates actively in ASEAN and global AI governance initiatives.
Personal Data Protection Act 2010 (PDPA)
PDPA Overview
Malaysia's PDPA, enacted in 2010 and effective from 2013, serves as the primary data protection law. All AI systems that process personal data must comply with its obligations.
The Act defines personal data as information in respect of commercial transactions that relates directly or indirectly to a data subject who is identified or identifiable from that information, or from that information combined with other information in the possession of the data user. This encompasses names, addresses, and contact information; identification numbers; financial information; health information; and any other information that can identify an individual.
Data Protection Principles
The PDPA establishes seven Data Protection Principles, each carrying specific implications for AI systems.
The General Principle (Section 5) stipulates that personal data shall not be processed unless consent has been obtained or the processing is otherwise lawful. All processing must be fair, lawful, and conducted for specified purposes.
Under the Notice and Choice Principle (Section 7), data users must inform individuals that personal data is being collected, explain the purposes of collection and processing, identify the sources of personal data, and communicate the individual's right to access and correct their data. Individuals must also be told whether supplying their data is voluntary or mandatory, along with the consequences of failing to supply it. For AI specifically, notice must disclose AI use, automated decision-making, and its consequences.
The Disclosure Principle (Section 8) provides that personal data shall not be disclosed for purposes other than those specified without consent. In the AI context, this means data collected for one purpose cannot be repurposed for a different AI application without obtaining new consent or establishing a separate lawful basis.
The Security Principle (Section 9) requires data users to take practical steps to protect personal data from loss, misuse, modification, unauthorized access, or disclosure. For AI systems, this translates into robust security measures for training data, models, and AI infrastructure, including protection against AI-specific threats such as adversarial attacks, data poisoning, and model extraction.
Under the Retention Principle (Section 10), personal data shall not be kept longer than necessary to fulfill the purposes for which it was collected. Organizations deploying AI must define retention periods for training data, operational data, and AI decision logs, balancing accountability needs with privacy obligations.
The Data Integrity Principle (Section 11) mandates that personal data be accurate, complete, not misleading, and kept up to date. This principle is critical for AI, as inaccurate training or operational data produces biased and incorrect AI outputs, directly violating the PDPA.
Finally, the Access Principle (Section 12) grants individuals the right to request information about how their personal data is processed, to access their personal data, and to request correction of inaccurate data. For AI, this means individuals can request information about how AI processes their data and challenge AI-driven decisions.
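To make the Retention Principle concrete, here is a minimal sketch of a purpose-based retention check. The data categories and periods below are illustrative assumptions for the sketch, not figures prescribed by the PDPA; organizations must derive their own periods from a documented purpose analysis.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule for the Retention Principle (Section 10).
# These periods are assumptions, not PDPA-prescribed figures.
RETENTION_PERIODS = {
    "training_data": timedelta(days=730),   # example: 2 years
    "decision_logs": timedelta(days=2555),  # example: 7 years, for accountability
}

def is_due_for_deletion(category: str, collected_at: datetime) -> bool:
    """True once data in a category has outlived its defined retention period."""
    return datetime.now(timezone.utc) - collected_at > RETENTION_PERIODS[category]
```

A scheduled job applying this check to training datasets and decision logs gives auditors a concrete mechanism to point to when demonstrating Section 10 compliance.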
AI-Specific PDPA Considerations
Consent for AI Processing
The PDPA requires consent for processing personal data, and AI applications introduce particular complexity in meeting this requirement. Explicit consent is recommended for high-risk AI applications, particularly those involving automated decisions with legal or significant effects. Informed consent requires that individuals genuinely understand that AI will process their data and make or inform decisions affecting them. Specific consent demands purpose-specific authorization; vague references to "analytics" or "business operations" are insufficient. Organizations must also accommodate consent withdrawal within their AI systems when individuals exercise this right.
An example of appropriate consent language reads: "We use artificial intelligence to assess your loan application. The AI analyzes your financial information, employment history, and credit data to predict likelihood of repayment and determine loan approval and terms. Decisions may be made automatically with limited human review. By submitting this application, you consent to this AI-powered assessment."
Data Protection Impact Assessment (DPIA)
While the PDPA does not explicitly mandate DPIAs, the Personal Data Protection Commissioner has issued guidance recommending them for processing personal data at scale, processing sensitive personal data, automated decision-making with legal or significant effects on individuals, and new technologies with privacy implications (including AI).
A comprehensive DPIA for AI should cover six areas. It begins with a description of the AI system's purpose, functionality, data processed, and decisions made. The assessment then evaluates necessity and proportionality, examining why AI is required and whether less privacy-invasive alternatives exist. A thorough analysis of risks to individuals follows, addressing potential harms from AI errors, bias, security breaches, and misuse. The DPIA must document risk mitigation measures, including explainability mechanisms, bias testing, human oversight, and security controls. Consultation with stakeholders on the AI deployment should be documented. Finally, the assessment requires review and approval by the privacy officer, legal and compliance teams, and an appropriate governance body.
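The six DPIA areas lend themselves to a simple completeness check before deployment sign-off. The section identifiers below mirror this guide's structure, not an official PDPD template.

```python
# Hypothetical DPIA checklist mirroring the six areas described above.
DPIA_SECTIONS = [
    "system_description",         # purpose, functionality, data, decisions
    "necessity_proportionality",  # why AI; less privacy-invasive alternatives
    "risk_analysis",              # errors, bias, breaches, misuse
    "mitigation_measures",        # explainability, bias testing, oversight, security
    "stakeholder_consultation",
    "governance_approval",        # privacy officer, legal/compliance, governance body
]

def dpia_gaps(completed: dict[str, bool]) -> list[str]:
    """Return the DPIA sections still outstanding before sign-off."""
    return [s for s in DPIA_SECTIONS if not completed.get(s, False)]
```

Gating deployment approval on `dpia_gaps(...) == []` makes the DPIA a hard checkpoint rather than a document produced after the fact.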
Automated Decision-Making Rights
The PDPA does not explicitly provide a "right to object to automated decision-making" equivalent to GDPR Article 22. However, PDPD guidance indicates that individuals should be informed of automated decision-making and have the right to request human review of consequential automated decisions. Organizations should implement human oversight for high-impact AI, and individuals can challenge AI-driven decisions through their existing access and correction rights.
As a best practice, organizations should implement human review mechanisms for any AI making decisions with legal or significant effects, including credit, employment, insurance, benefits, and legal rights determinations.
Cross-Border Data Transfers
Section 129 of the PDPA restricts transferring personal data outside Malaysia unless the recipient country has adequate data protection laws, or the organization ensures adequate protection through contractual or other means.
For AI systems, this has several practical implications. Organizations using cloud AI platforms that process data outside Malaysia must establish data processing agreements ensuring PDPA-level protection. Offshore AI development requires assessment of the destination jurisdiction's data protection laws, supported by contractual safeguards. Cross-border AI deployments must ensure data protection across all jurisdictions involved. Transfer risk assessments and safeguards must be thoroughly documented.
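The shape of the Section 129 decision can be sketched as follows. The adequacy set and safeguard labels here are placeholders: adequacy must be assessed case by case and confirmed with the PDPD, and this sketch only encodes the decision structure, not the legal test itself.

```python
# Simplified pre-transfer check under PDPA Section 129.
# The jurisdictions and safeguard names below are illustrative assumptions,
# not an official adequacy list.
ASSUMED_ADEQUATE_JURISDICTIONS = {"SG", "JP"}

REQUIRED_SAFEGUARDS = {"data_processing_agreement", "encryption_in_transit"}

def transfer_permitted(destination: str, safeguards: set[str]) -> bool:
    """Permit a transfer if the destination is assessed as adequate, or
    contractual/technical safeguards ensure PDPA-level protection."""
    return (destination in ASSUMED_ADEQUATE_JURISDICTIONS
            or REQUIRED_SAFEGUARDS <= safeguards)
```

In practice each call to a check like this should also write an audit record, since documented transfer risk assessments are themselves a compliance requirement.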
PDPA Enforcement and Penalties
The Personal Data Protection Commissioner investigates complaints, conducts audits, and issues enforcement notices.
Penalties under the PDPA are significant. Failure to comply with the Commissioner's enforcement notice carries a fine of up to MYR 500,000 and/or imprisonment of up to 3 years. Penalties for unlawful processing of personal data vary by the severity of the violation. For serious violations, the Commissioner can publicize non-compliance, creating significant reputational consequences.
Recent PDPD enforcement activity has addressed AI-related complaints including lack of transparency about automated decision-making, inadequate security for personal data used in AI systems, and use of personal data for AI purposes beyond the original scope of consent. Enforcement intensity is increasing as AI adoption grows across the Malaysian economy.
Sector-Specific AI Regulations
Financial Services: Bank Negara Malaysia (BNM)
Risk Management in Technology (RMiT) Framework
BNM's RMiT framework applies to all technology risks, including AI systems deployed by financial institutions. The framework establishes six categories of requirements for AI.
In the area of governance and oversight, the framework requires board and senior management oversight of AI strategy and deployment, with clear accountability for AI systems. AI governance must be integrated with overall technology risk governance, and the board must receive regular reporting on AI systems, associated risks, and incidents.
For risk management, financial institutions must conduct comprehensive risk assessments covering model risk (accuracy, bias, robustness), operational risk (failures, performance degradation), compliance risk (PDPA, consumer protection, AML/CFT), reputational risk (customer trust, public perception), and strategic risk (over-reliance on AI, competitive positioning). Risk mitigation controls must be proportionate to the identified risk level, with ongoing monitoring and reassessment.
The development and validation requirements mandate rigorous AI development methodology with full documentation. Testing and validation must occur before deployment, covering accuracy, fairness, robustness, and security. Independent validation is required for material AI systems. Organizations must document model assumptions, limitations, and appropriate use cases, and formal approval processes and sign-offs are required before deployment.
Consumer protection provisions require fair treatment of customers in AI-driven processes, transparency about AI use in customer-facing applications, and explainability of AI-driven decisions affecting customers. Complaint handling mechanisms must address AI-related issues, and human oversight is required for consequential customer decisions.
Under monitoring and change management, institutions must implement continuous monitoring of AI performance against key performance indicators, drift detection and model revalidation, rigorous change management for AI system updates, and incident management and escalation procedures for AI failures.
Third-party risk management requirements address due diligence on AI service providers and platforms, contractual requirements ensuring compliance and performance, ongoing monitoring of third-party AI services, and the principle that accountability is maintained by the financial institution regardless of outsourcing.
BNM is currently developing AI-specific guidance for financial institutions that is expected to address bias and fairness testing and mitigation, explainability requirements for customer-facing AI, governance structures for AI ethics and accountability, AI security and resilience, and the use of generative AI and large language models.
Public Sector: MAMPU Guidelines
AI in Government Services
MAMPU provides guidelines for AI adoption in Malaysian government agencies, built on five principles. The public interest principle holds that AI should serve the public good and improve service delivery. Transparency requires that government AI use be open to citizens. Fairness mandates that government AI treat all citizens equitably. Accountability demands clear responsibility for government AI systems. Security requires robust protection of citizen data processed by government AI.
Under these principles, government agencies deploying AI must conduct impact assessments, with high-risk government AI subject to additional scrutiny and approval. Regular audits of government AI systems are required, along with publication of information about government AI use through transparency registers. Citizen feedback and complaint mechanisms must be established and maintained.
Implementation Roadmap for Malaysia AI Compliance
Phase 1: Assessment (Months 1-2)
The compliance journey begins with a thorough AI system inventory. Organizations should identify all AI systems in use or under development, documenting each system's purpose, the personal data it processes, its decision-making role, the individuals it affects, and any cross-border aspects. Each system should be classified by risk level (high, medium, or low).
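The classification step can be sketched as a simple rule, assuming illustrative criteria that a real risk methodology would refine and document:

```python
# Sketch of a risk-classification rule for the AI inventory.
# The criteria and thresholds are illustrative assumptions, not a
# regulator-prescribed taxonomy.
def classify_risk(processes_personal_data: bool,
                  automated_decisions: bool,
                  significant_effects: bool,
                  cross_border: bool) -> str:
    if automated_decisions and significant_effects:
        return "high"    # e.g. credit, employment, insurance decisions
    if processes_personal_data and (automated_decisions or cross_border):
        return "medium"
    return "low"
```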
Next, a regulatory mapping exercise should assess PDPA compliance, identify applicable sector-specific requirements (BNM for financial services, MAMPU for the public sector), evaluate cross-border data transfer requirements, and flag any industry-specific regulations.
The assessment phase concludes with a gap analysis comparing current practices against regulatory requirements. This analysis should identify gaps in governance, DPIA processes, consent mechanisms, explainability, bias management, security, and documentation. Remediation efforts should be prioritized based on risk and regulatory urgency.
Phase 2: Governance and Policy (Months 2-4)
With the assessment complete, organizations should build the governance structure needed to sustain AI compliance. This includes establishing an AI governance committee, assigning key roles (AI system owners, data protection officer, AI ethics officer), defining escalation and approval processes, and integrating AI governance with existing structures such as risk and technology committees.
Policy development should produce an AI governance policy aligned with National AI Framework principles, PDPA compliance procedures for AI, a DPIA methodology, bias and fairness testing procedures, explainability standards, security standards for AI systems and training data, and cross-border data transfer procedures.
A comprehensive training program should cover PDPA and AI compliance for relevant staff, technical training on bias testing, explainability, and security, as well as ethics training for AI ethics committee members.
Phase 3: Implementation (Months 4-8)
Implementation should prioritize high-risk AI systems, addressing each through a structured sequence. Organizations should begin by conducting a comprehensive Data Protection Impact Assessment that documents the AI system's purpose, the data it processes, the decisions it makes, associated risks, and mitigations, then obtaining governance approval.
Consent and notice mechanisms require reviewing and updating privacy notices to disclose AI use, ensuring consent (or another lawful basis) is in place for AI processing, and implementing consent management systems that accommodate withdrawal.
Explainability efforts should implement mechanisms appropriate to each AI system's risk and complexity, develop customer-facing explanations for AI-driven decisions, and train staff to explain AI to customers in accessible terms.
For bias and fairness, organizations should identify relevant demographic groups and protected characteristics, test AI systems for bias and disparate impact, implement bias mitigation strategies, and document all testing and mitigation activities.
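One common bias test, though not one mandated by the PDPD or BNM, is the "four-fifths" disparate-impact heuristic, sketched below under the assumption that decision outcomes can be aggregated by demographic group:

```python
# Minimal disparate-impact check using the "four-fifths" heuristic.
# This is one possible metric among many; regulators do not prescribe it.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favourable_decisions, total_decisions)."""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to highest group selection rate; values below
    0.8 are conventionally flagged for investigation and mitigation."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())
```

Results of such tests, including runs that flag a disparity and the mitigation taken, should be retained as part of the documentation trail.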
Human oversight requires determining the appropriate level of human involvement for each system, implementing human review mechanisms for high-risk decisions, and training human reviewers on effective AI oversight.
Security implementation involves conducting AI security risk assessments, implementing security controls (access controls, encryption, monitoring), testing AI resilience to adversarial attacks, and establishing incident response procedures.
Cross-border transfer compliance requires documenting all cross-border data flows, assessing destination jurisdiction data protection laws, establishing data processing agreements or other transfer safeguards, and obtaining PDPD approval where required.
Throughout implementation, rigorous documentation must be maintained, covering AI system design, development, validation, and deployment. Organizations should preserve DPIA records, consent records, bias testing results, and security assessments, and create audit trails for AI decisions.
Phase 4: Monitoring and Improvement (Ongoing)
Continuous monitoring should track AI performance, bias, and security on an ongoing basis, while also tracking incidents and complaints, collecting user feedback, and monitoring regulatory developments.
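Drift detection, one of the monitoring tasks above, is often implemented with the Population Stability Index (PSI). The alert threshold noted in the comment is an industry convention, not a regulatory figure.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI over matching bins of a feature or model-score distribution.
    Common rule of thumb: PSI > 0.2 suggests significant drift warranting
    a revalidation review -- a convention, not a regulatory threshold."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)
```

Running this against a baseline captured at validation time turns "monitor for drift" into a concrete, alertable metric.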
Regular reviews provide structured checkpoints: quarterly AI governance committee meetings, annual comprehensive AI system audits, periodic DPIA reviews and updates, and regular bias and fairness testing cycles.
Regulatory engagement keeps organizations ahead of change. This means monitoring PDPD guidance and enforcement activity, tracking BNM developments on AI in financial services, participating in industry consultations, and proactively engaging with regulators on novel AI applications.
Key Compliance Challenges and Solutions
The absence of explicit AI legislation presents the first significant challenge. Unlike jurisdictions governed by the EU AI Act, Malaysia relies on the PDPA and sectoral regulations to govern AI. Organizations should address this gap by aligning with National AI Framework principles, implementing the PDPA rigorously for all AI systems, monitoring international best practices, and engaging proactively with regulators to shape emerging policy.
The fact that DPIAs are not legally mandated creates a second challenge. Although the PDPA does not explicitly require them, the PDPD recommends their use. Organizations should treat DPIAs as both a best practice and a regulatory expectation for high-risk AI, conducting comprehensive assessments that demonstrate due diligence and a commitment to responsible deployment.
Cross-border data transfers represent a third area of complexity. Section 129 restrictions can complicate the use of cloud AI platforms and international AI services. Organizations should assess the data protection adequacy of destination jurisdictions, establish robust data processing agreements, document transfer safeguards thoroughly, and consider local infrastructure options where practical.
The evolving regulatory environment poses a fourth challenge, as BNM and other regulators develop AI-specific requirements. Organizations should monitor regulatory developments closely, participate in industry consultations, and build flexible AI governance frameworks capable of adapting to new requirements without wholesale redesign.
Finally, resource constraints affect smaller organizations that may lack the capacity for comprehensive AI compliance programs. These organizations should prioritize compliance efforts on high-risk AI systems, leverage industry frameworks and tools, consider third-party compliance services, and engage industry associations for shared resources and collective guidance.
Future Outlook
Anticipated Regulatory Developments
Several regulatory developments are on the horizon. PDPA amendments may strengthen AI-related provisions, potentially introducing explicit automated decision-making rights, mandatory DPIAs for high-risk AI, enhanced enforcement powers and penalties, and closer alignment with international standards such as the GDPR and APEC Cross-Border Privacy Rules.
Malaysia may also introduce dedicated AI legislation addressing AI governance requirements, high-risk AI regulations, AI transparency and explainability standards, AI safety and security requirements, and prohibited AI applications.
Sectoral regulations are expected to produce specific AI guidance from multiple regulators. BNM is likely to issue requirements for financial services covering bias testing, explainability, and governance. The Ministry of Health may address healthcare AI. The Malaysian Communications and Multimedia Commission is expected to regulate AI in telecommunications and digital services. Other regulators will likely follow for their respective critical sectors.
ASEAN harmonization efforts will also shape Malaysia's regulatory trajectory. Malaysia is actively participating in the ASEAN Guide on AI Governance and Ethics, the ASEAN Framework on Digital Data Governance, cross-border data flow mechanisms, and mutual recognition frameworks for AI certifications.
Preparing for Future Requirements
Organizations should build a strong foundation now by implementing comprehensive AI governance aligned with the National AI Framework, ensuring rigorous PDPA compliance for all AI systems, conducting proactive bias testing and fairness management, establishing robust explainability and human oversight mechanisms, and maintaining strong security and data protection practices.
Staying informed and engaged is equally important. Organizations should monitor PDPD guidance and enforcement, track developments from BNM and other sectoral regulators, participate in industry consultations, and engage with international AI governance initiatives.
Above all, organizations should document everything. Comprehensive documentation of AI governance, risk assessments, and compliance measures creates the evidentiary foundation needed to demonstrate responsible AI practices. Audit trails should enable clear demonstration of compliance, and records should reflect continuous improvement and adaptation over time.
Conclusion
Malaysia's AI regulatory landscape is evolving rapidly. The National AI Framework provides principles-based guidance, while the PDPA establishes binding data protection requirements that apply directly to AI systems. Organizations deploying AI in Malaysia must align with the National AI Framework by implementing ethical, human-centric, transparent, accountable, and secure AI practices. Rigorous PDPA compliance is non-negotiable for any AI system processing personal data. DPIAs should be treated as essential for high-risk AI, even though they are not yet legally mandated. Sector-specific compliance, particularly BNM requirements for financial services, adds further obligations. Cross-border data transfers require appropriate safeguards under Section 129. And continuous monitoring of regulatory developments is necessary to keep AI governance current as the landscape matures.
Organizations that proactively implement comprehensive AI governance will be well-positioned for current compliance and future regulatory developments.
Need expert guidance on Malaysia AI compliance? Contact Pertama Partners for comprehensive advisory services covering PDPA, BNM requirements, and AI governance.
Common Questions
Does Malaysia have dedicated AI legislation like the EU AI Act?
No. AI regulation in Malaysia currently operates through three layers: (1) the National AI Framework, principles-based guidance under the Malaysia Digital Economy Blueprint and Fourth Industrial Revolution Policy, establishing ethical AI principles (ethical development, human-centric design, transparency, accountability, safety and security) but not legally binding; (2) the Personal Data Protection Act 2010 (PDPA), a binding data protection law applying to AI systems that process personal data, enforced by the Personal Data Protection Commissioner; and (3) sectoral regulations, with sector-specific requirements from Bank Negara Malaysia (financial services), MAMPU (public sector), and other regulators. Malaysia is expected to introduce more specific AI legislation in the future, potentially including explicit automated decision-making rights, mandatory DPIAs for high-risk AI, and AI-specific governance requirements. Organizations should align with National AI Framework principles, comply rigorously with the PDPA, and monitor regulatory developments for emerging AI-specific requirements.
Is a DPIA mandatory under Malaysia's PDPA?
A DPIA is not explicitly mandated by the PDPA, but the Personal Data Protection Commissioner has issued guidance strongly recommending one for large-scale processing of personal data, processing of sensitive personal data, automated decision-making with legal or significant effects on individuals, and new technologies with privacy implications, including AI. Most AI systems fall within these categories. While technically not a legal requirement, organizations should treat the DPIA as a best practice and regulatory expectation for high-risk AI systems: it demonstrates due diligence and responsible AI governance, helps identify and mitigate privacy risks before deployment, provides evidence of PDPA compliance if challenged, aligns with international best practices (the GDPR requires a DPIA for high-risk processing), and positions organizations for future PDPA amendments that may make it mandatory. A comprehensive AI DPIA should cover system description, necessity and proportionality, risks to individuals, mitigation controls, stakeholder consultation, and governance approval, and should be documented thoroughly as evidence of responsible AI practices.
How does Malaysia's PDPA compare with Singapore's for AI compliance?
While both countries have Personal Data Protection Acts, important differences affect AI compliance. Scope: Malaysia's PDPA applies to personal data in commercial transactions, while Singapore's has a broader scope. Consent: both require consent, but Malaysia emphasizes explicit notice and choice, whereas Singapore allows deemed consent more readily. Automated decision-making: neither provides an explicit right to object equivalent to GDPR Article 22, but Singapore's Model AI Governance Framework offers more detailed guidance on human oversight, while Malaysia relies on PDPD recommendations. DPIAs: neither explicitly mandates them, though both regulators recommend them for high-risk processing, and Singapore's framework provides more structured risk assessment guidance. Cross-border transfers: Malaysia's Section 129 requires adequate protection in the destination jurisdiction or contractual safeguards; Singapore's Section 26 is similar but interpreted more flexibly. Penalties: Malaysia's run up to MYR 500,000 and/or imprisonment, while Singapore's reach SGD 1 million or 10% of turnover. Regulatory guidance: Singapore has more comprehensive AI-specific guidance (the Model AI Governance Framework and MAS FEAT principles), while Malaysia offers National AI Framework principles with less detailed implementation guidance. Organizations operating in both jurisdictions can harmonize by implementing Singapore's more comprehensive framework while ensuring Malaysia-specific requirements (explicit notice, DPIAs, cross-border transfer safeguards) are met.
How does Bank Negara Malaysia regulate AI in financial services?
BNM regulates AI through its Risk Management in Technology (RMiT) framework, with AI-specific considerations across six areas. Governance and oversight: board and senior management oversight of AI strategy, clear accountability for AI systems, AI governance integrated with technology risk governance, and regular board reporting on AI systems and risks. Risk management: comprehensive assessments covering model risk (accuracy, bias, robustness), operational risk (failures, degradation), compliance risk (PDPA, consumer protection), reputational risk, and strategic risk, with mitigation controls proportionate to risk level and ongoing reassessment. Development and validation: rigorous, documented development methodology; testing before deployment for accuracy, fairness, robustness, and security; independent validation for material AI systems; documentation of model assumptions and limitations; and formal approval before deployment. Consumer protection: fair treatment of customers in AI-driven processes, transparency about AI use, explainability of AI-driven decisions, complaint handling mechanisms, and human oversight for consequential decisions. Monitoring and change management: continuous performance monitoring, drift detection and revalidation, rigorous change management for updates, and incident management for AI failures. Third-party risk: due diligence on AI service providers, contractual compliance requirements, ongoing monitoring, and accountability retained by the financial institution regardless of outsourcing. BNM is also developing more specific AI guidance expected to address bias and fairness testing, explainability standards, governance structures, AI security, and generative AI; financial institutions should implement comprehensive AI governance in anticipation of these requirements.
What do the PDPA's cross-border transfer rules mean for AI systems?
Section 129 restricts transferring personal data outside Malaysia unless the recipient country has adequate data protection laws, or the organization ensures adequate protection through contractual or other means. For AI systems involving cross-border transfers (cloud AI platforms, offshore development, international AI services), start with assessment: identify all AI-related cross-border data flows (training data, operational data, model parameters); evaluate the destination jurisdiction's data protection laws, noting that countries with comprehensive regimes (such as Singapore, the EU, Japan, and South Korea) may be deemed adequate, while transfers elsewhere must rely on contractual safeguards; and document transfer necessity and the alternatives considered. Then establish safeguards: data processing agreements requiring the recipient to protect personal data to PDPA standards, use data only for specified purposes, implement appropriate security, notify of breaches, return or delete data on termination, and submit to compliance audits; standard contractual clauses (such as APEC CBPR mechanisms or EU SCCs adapted for Malaysia); binding corporate rules for multinationals; and technical measures such as encryption before transfer, data minimization, and anonymization where feasible. Maintain comprehensive records of transfer risk assessments, jurisdiction evaluations, contractual safeguards, and technical measures, and consider proactive engagement with the PDPD for high-risk or large-scale transfers. Major cloud AI platforms (AWS, Google Cloud, Azure) typically offer data processing agreements meeting regulatory standards, compliance certifications, and Malaysia-based infrastructure options for data residency. As a best practice: use Malaysia-based infrastructure where feasible, establish robust contractual safeguards, and document all cross-border transfer decisions thoroughly.
References
- Personal Data Protection (Amendment) Act 2024. Government of Malaysia (2024).
- National Guidelines on AI Governance and Ethics (AIGE). Ministry of Science, Technology and Innovation (MOSTI) (2024).
- Discussion Paper on AI in the Malaysian Financial Sector. Bank Negara Malaysia (BNM) (2025).
- National AI Office (NAIO) — Malaysia. Ministry of Digital, Malaysia (2024).
- ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).
- Malaysia — Global AI Ethics and Governance Observatory. UNESCO (2024).
- The National Guidelines on AI Governance & Ethics. MASTIC / MOSTI (2024).

