
As Malaysian companies accelerate their adoption of artificial intelligence, the need for robust AI governance frameworks has become urgent. AI governance is not about restricting AI use; it is about enabling safe, responsible, and effective adoption that protects the company, its employees, its customers, and its reputation.
Without governance, AI adoption tends to follow one of two problematic paths: either employees use AI tools freely without guidelines, creating risks around data privacy, accuracy, and compliance, or management bans AI use entirely out of fear, forfeiting competitive advantage while competitors upskill their teams. The right approach is structured governance: clear policies, defined responsibilities, risk assessment processes, and ongoing education.
For Malaysian companies, AI governance must be tailored to the local regulatory environment, including the Personal Data Protection Act 2010 (PDPA), the National AI Framework (MyDIGITAL), and industry-specific regulations from bodies such as Bank Negara Malaysia, the Securities Commission, and the Malaysian Communications and Multimedia Commission.
The MyDIGITAL blueprint, launched by the Malaysian government, sets the strategic direction for the country's digital economy through 2030. AI is a central pillar of this framework, with specific targets for AI adoption across key economic sectors. The blueprint establishes ethical AI principles, sectoral adoption targets, workforce development goals, and data governance standards.
The National AI Roadmap provides more detailed guidance on how Malaysia intends to develop its AI ecosystem, including the ethical AI principles and data governance standards that corporate governance frameworks should reflect.
The Ministry of Science, Technology and Innovation (MOSTI) oversees national AI policy and coordinates with other ministries and agencies. Companies developing AI governance frameworks should align with MOSTI guidance on responsible AI use and monitor updates to national AI policy.
The Personal Data Protection Act 2010 (PDPA) is the primary legislation governing personal data protection in Malaysia. It applies to any commercial transaction involving personal data and is directly relevant to how companies use AI tools.
The PDPA establishes seven data protection principles that must be considered when using AI:
General Principle: Personal data shall not be processed without the consent of the data subject. When using AI tools with customer or employee data, companies must ensure appropriate consent is in place.
Notice and Choice Principle: Data subjects must be informed about how their data is processed. If AI tools are used to process personal data, this should be disclosed in privacy notices.
Disclosure Principle: Personal data shall not be disclosed for purposes other than those for which it was collected. Uploading personal data to external AI platforms may constitute unauthorised disclosure.
Security Principle: Practical steps must be taken to protect personal data from loss, misuse, or unauthorised access. Companies must assess the security of AI tools before using them with personal data.
Retention Principle: Personal data shall not be kept longer than necessary. AI tools may retain data in ways that conflict with this principle.
Data Integrity Principle: Personal data must be accurate, complete, and kept up to date. AI-generated outputs based on personal data must be verified for accuracy.
Access Principle: Data subjects have the right to access and correct their personal data. Companies must be able to comply with access requests even when AI tools have been used.
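The data-handling side of these principles can be sketched in code. The snippet below is an illustrative Python sketch, not a vetted compliance tool: the regex patterns and the `redact` helper are assumptions introduced here to show how likely personal identifiers might be masked before a prompt leaves the company boundary, in line with the Disclosure and Security principles.

```python
import re

# Hypothetical patterns for common Malaysian personal identifiers.
# A production system would need a reviewed, much broader rule set.
PATTERNS = {
    "nric": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),          # MyKad format YYMMDD-PB-####
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b01\d-?\d{7,8}\b"),             # Malaysian mobile numbers
}

def redact(text: str) -> str:
    """Mask likely personal data before text is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Customer Aina (aina@example.com, NRIC 901231-14-5678) called about her bill."
print(redact(prompt))
# The email address and NRIC are replaced with placeholder tokens,
# so the external platform never receives the raw identifiers.
```

Pattern-based redaction of this kind is a first line of defence only; it supports, rather than replaces, the consent and disclosure checks the PDPA requires.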
AI governance training covers how to operationalise PDPA compliance in the context of AI, from data classification and anonymisation techniques to vendor assessment.
An AI acceptable use policy (AUP) is the foundational document of any AI governance framework. It establishes the rules and guidelines for how employees may use AI tools in the workplace.
A well-structured AI acceptable use policy for Malaysian companies should include:
1. Scope and Definitions
2. Approved Tools
3. Data Handling Rules
4. Output Review Requirements
5. Transparency and Disclosure
6. Prohibited Uses
7. Incident Reporting
8. Training and Awareness
9. Monitoring and Enforcement
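The "Approved Tools" and "Data Handling Rules" sections of such a policy lend themselves to programmatic enforcement. The sketch below is a hypothetical example: the tool names, classification levels, and `check_usage` helper are invented for illustration, and show one way to gate AI use against an approved-tools list with per-tool data rules.

```python
# Minimal sketch of enforcing AUP sections 2 and 3 in code.
# Tool names and classification levels are illustrative assumptions.
APPROVED_TOOLS = {
    "internal-chatbot": {"public", "internal"},  # data levels the tool may receive
    "external-llm": {"public"},                  # external platform: public data only
}

def check_usage(tool: str, data_classification: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI tool use."""
    if tool not in APPROVED_TOOLS:
        return False, f"'{tool}' is not on the approved tools list"
    if data_classification not in APPROVED_TOOLS[tool]:
        return False, f"'{data_classification}' data may not be sent to '{tool}'"
    return True, "permitted under the AUP"

print(check_usage("external-llm", "public"))    # (True, 'permitted under the AUP')
print(check_usage("external-llm", "personal"))  # blocked: personal data, external tool
print(check_usage("shadow-ai-app", "public"))   # blocked: unapproved tool
```

Even a simple gate like this makes the policy's rules testable, and gives the incident-reporting process a concrete trigger when a blocked use is attempted.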
AI governance workshops typically include policy templates that companies can adapt to their specific context. These templates are pre-aligned with PDPA requirements and Malaysian regulatory expectations, reducing the time and cost of developing governance documentation from scratch.
AI risk assessment helps companies identify, evaluate, and mitigate the risks associated with AI adoption. For Malaysian companies, the risk assessment framework should account for local regulatory, cultural, and business factors.
Data Privacy Risks
Accuracy and Reliability Risks
Legal and Regulatory Risks
Reputational Risks
Operational Risks
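A common way to prioritise these categories is a simple likelihood-times-impact score. The sketch below uses placeholder values throughout (every score is an assumption; real values come from a company's own assessment exercises) to rank the categories for mitigation planning.

```python
# Placeholder likelihood/impact scores on a 1-5 scale, for illustration only.
RISKS = {
    "data privacy":     {"likelihood": 4, "impact": 5},
    "accuracy":         {"likelihood": 3, "impact": 4},
    "legal/regulatory": {"likelihood": 2, "impact": 5},
    "reputational":     {"likelihood": 2, "impact": 4},
    "operational":      {"likelihood": 3, "impact": 3},
}

def score(risk: dict) -> int:
    """Risk score = likelihood x impact."""
    return risk["likelihood"] * risk["impact"]

# Rank risks so mitigation effort goes to the highest scores first.
for name, risk in sorted(RISKS.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:18} score={score(risk):2}")
```

A ranked register like this is a starting point for discussion, not a verdict; qualitative factors such as regulatory scrutiny in a specific sector can outweigh the raw score.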
AI governance workshops cover practical mitigation strategies for each of these risk categories.
AI governance workshops are fully HRDF claimable for Malaysian companies when delivered by an HRD Corp-registered training provider. These workshops are designed to help organisations build governance frameworks that enable safe, effective AI adoption.
Module 1: AI Governance Foundations (2 hours). Understanding why governance matters, the Malaysian regulatory landscape, and key governance principles.
Module 2: PDPA and AI Compliance (2 hours). A deep dive into PDPA requirements for AI use, data classification, anonymisation techniques, and vendor assessment.
Module 3: Policy Development Workshop (2 hours). Hands-on development of an AI acceptable use policy using templates tailored to the Malaysian context.
Module 4: Risk Assessment and Mitigation (2 hours). Practical risk assessment exercises using your company's actual AI use cases, with mitigation planning.
AI governance workshops are designed for cross-functional teams, including senior leadership, IT and data teams, legal and compliance, HR and L&D, and department heads.
Companies that invest in AI governance workshops alongside technical AI training programmes create the foundation for sustainable, responsible AI adoption that delivers long-term value while managing risk appropriately.
The Personal Data Protection Act 2010 (PDPA) governs how personal data is processed in Malaysia. It directly affects AI use by requiring companies to obtain consent before processing personal data, limit disclosure to authorised purposes, ensure security of data, and comply with data retention limits. Companies using AI tools must ensure PDPA compliance, particularly when tools process or store data outside Malaysia.
Malaysia has established national AI governance through the MyDIGITAL blueprint and National AI Roadmap. These frameworks set ethical AI principles, sectoral adoption targets, workforce development goals, and data governance standards. Companies should align their internal AI governance policies with these national frameworks to ensure regulatory alignment and demonstrate responsible AI adoption.
An AI acceptable use policy for Malaysian companies should include approved tools, data handling rules (what can and cannot be entered into AI), output review requirements, transparency and disclosure guidelines, prohibited uses, incident reporting procedures, training requirements, and enforcement mechanisms. The policy should be aligned with PDPA requirements and reviewed regularly as AI tools and regulations evolve.
AI governance workshops are fully HRDF claimable when delivered by an HRD Corp-registered training provider. These workshops typically run for 1-2 days and cover regulatory compliance, policy development, risk assessment, and governance implementation. Companies can claim under the SBL-Khas or SBL schemes, covering up to 100% of training fees.
AI governance training should involve a cross-functional group including senior leadership, IT and data teams, legal and compliance, HR and L&D, and department heads. This ensures governance decisions reflect all perspectives (strategic, technical, legal, and operational) and that policies are understood and implemented consistently across the organisation.