
AI Governance for Malaysian Companies – Policy Templates & HRDF Workshops

Pertama Partners · February 12, 2026 · 12 min read
🇲🇾 Malaysia

Why AI Governance Matters for Malaysian Companies

As Malaysian companies accelerate their adoption of artificial intelligence, the need for robust AI governance frameworks has become urgent. AI governance is not about restricting AI use – it is about enabling safe, responsible, and effective adoption that protects the company, its employees, its customers, and its reputation.

Without governance, AI adoption tends to follow one of two problematic paths: either employees use AI tools freely without guidelines, creating risks around data privacy, accuracy, and compliance; or management restricts AI use entirely out of fear, forfeiting competitive advantage while competitors upskill their teams. The right approach is structured governance: clear policies, defined responsibilities, risk assessment processes, and ongoing education.

For Malaysian companies, AI governance must be tailored to the local regulatory environment, including the Personal Data Protection Act 2010 (PDPA), the National AI Framework (MyDIGITAL), and industry-specific regulations from bodies such as Bank Negara Malaysia, the Securities Commission, and the Malaysian Communications and Multimedia Commission.

Malaysia's National AI Framework

MyDIGITAL Blueprint

The MyDIGITAL blueprint, launched by the Malaysian government, sets the strategic direction for the country's digital economy through 2030. AI is a central pillar of this framework, with specific targets for AI adoption across key economic sectors. The blueprint establishes:

  • National AI ethics principles – Guidelines for responsible AI development and deployment in Malaysia
  • Sectoral AI adoption targets – Specific goals for AI implementation in manufacturing, agriculture, healthcare, education, transport, smart cities, and public services
  • Workforce development – Targets for upskilling the Malaysian workforce in AI and digital skills
  • Data infrastructure – Plans for national data sharing frameworks and cloud infrastructure

National AI Roadmap

The National AI Roadmap provides more detailed guidance on how Malaysia intends to develop its AI ecosystem. Key elements relevant to corporate governance include:

  • Ethical AI guidelines – Principles for fairness, transparency, accountability, and safety in AI systems
  • Data governance framework – Standards for data quality, data sharing, and data protection in AI applications
  • Talent development – Programmes and incentives for building AI capabilities in the workforce
  • Industry collaboration – Public-private partnership models for AI adoption

MOSTI and AI Policy

The Ministry of Science, Technology and Innovation (MOSTI) oversees national AI policy and coordinates with other ministries and agencies. Companies developing AI governance frameworks should align with MOSTI guidance on responsible AI use and monitor updates to national AI policy.

PDPA Malaysia and AI

The Personal Data Protection Act 2010 (PDPA) is the primary legislation governing personal data protection in Malaysia. It applies to any commercial transaction involving personal data and is directly relevant to how companies use AI tools.

Key PDPA Principles for AI Use

The PDPA establishes seven data protection principles that must be considered when using AI:

  1. General Principle – Personal data shall not be processed without the consent of the data subject. When using AI tools with customer or employee data, companies must ensure appropriate consent is in place.

  2. Notice and Choice Principle – Data subjects must be informed about how their data is processed. If AI tools are used to process personal data, this should be disclosed in privacy notices.

  3. Disclosure Principle – Personal data shall not be disclosed for purposes other than those for which it was collected. Uploading personal data to external AI platforms may constitute unauthorised disclosure.

  4. Security Principle – Practical steps must be taken to protect personal data from loss, misuse, or unauthorised access. Companies must assess the security of AI tools before using them with personal data.

  5. Retention Principle – Personal data shall not be kept longer than necessary. AI tools may retain data in ways that conflict with this principle.

  6. Data Integrity Principle – Personal data must be accurate, complete, and kept up to date. AI-generated outputs based on personal data must be verified for accuracy.

  7. Access Principle – Data subjects have the right to access and correct their personal data. Companies must be able to comply with access requests even when AI tools have been used.

Practical PDPA Compliance for AI Users

AI governance training covers how to operationalise PDPA compliance in the context of AI:

  • Data classification – Categorising data into types (public, internal, confidential, restricted) and establishing which categories may be used with AI tools
  • Anonymisation techniques – How to remove personal identifiers before using data with AI tools
  • Vendor assessment – Evaluating AI tool providers against PDPA requirements, including data residency, processing terms, and sub-processor arrangements
  • Consent management – Ensuring that privacy notices and consent mechanisms cover AI processing scenarios
  • Breach response – Procedures for handling data breaches involving AI tools, including notification requirements
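
Anonymisation can be sketched as a simple pre-processing step before data reaches an AI tool. The patterns below (Malaysian NRIC numbers, email addresses, and local mobile numbers) are illustrative assumptions only, not a complete identifier set; production use needs a reviewed, tested pattern library.

```python
import re

# Illustrative patterns only -- not an exhaustive set of personal identifiers.
PATTERNS = {
    "[NRIC]":  re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),            # e.g. 901231-14-5678
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?60|0)1\d[- ]?\d{3,4}[- ]?\d{4}\b"),
}

def anonymise(text: str) -> str:
    """Replace known personal identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text
```

For example, `anonymise("Contact Ali at ali@example.com, NRIC 901231-14-5678")` returns text with the email and NRIC replaced by `[EMAIL]` and `[NRIC]` tokens. Note that pattern-based redaction is a first line of defence, not full anonymisation under the PDPA.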

AI Acceptable Use Policies for Malaysian Companies

An AI acceptable use policy (AUP) is the foundational document of any AI governance framework. It establishes the rules and guidelines for how employees may use AI tools in the workplace.

Essential Components of an AI AUP

A well-structured AI acceptable use policy for Malaysian companies should include:

1. Scope and Definitions

  • Which AI tools are covered by the policy
  • Definitions of key terms (AI, generative AI, large language model, etc.)
  • Who the policy applies to (employees, contractors, vendors)

2. Approved Tools

  • A list of AI tools approved for business use
  • The approved versions or plans (e.g., ChatGPT Team rather than personal ChatGPT)
  • Tools that are explicitly prohibited
  • Process for requesting approval of new AI tools

3. Data Handling Rules

  • Data classification framework specific to AI use
  • What types of data may be entered into AI tools (public, anonymised internal data)
  • What types of data must never be entered (personal data, financial records, trade secrets, privileged information)
  • How to anonymise data before using AI tools

4. Output Review Requirements

  • Mandatory human review of all AI-generated content before external use
  • Specific review standards for different output types (customer communications, financial reports, legal documents)
  • How to verify AI-generated facts, figures, and citations

5. Transparency and Disclosure

  • When to disclose AI use to clients, customers, or stakeholders
  • How to attribute AI-assisted work in professional contexts
  • Internal documentation requirements for AI-assisted decisions

6. Prohibited Uses

  • Specific prohibited activities (e.g., using AI for employment decisions without human review, submitting AI-generated content as original work without disclosure)
  • Industry-specific prohibitions based on regulatory requirements

7. Incident Reporting

  • How to report AI-related data breaches, errors, or policy violations
  • Escalation procedures and response protocols

8. Training and Awareness

  • Required AI training for employees before accessing approved tools
  • Ongoing training and awareness requirements
  • Role-specific training obligations

9. Monitoring and Enforcement

  • How the company monitors compliance with the policy
  • Consequences of policy violations
  • Regular review and update schedule
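
The data handling rules in component 3 can be expressed as a simple gate check that runs before any data is sent to an external AI tool. This is a minimal sketch assuming the four-level classification scheme above; the allowed set and the anonymisation condition are placeholders to align with your own AUP.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

def may_use_with_ai(classification: DataClass, anonymised: bool = False) -> bool:
    """Gate check before data leaves for an external AI tool.

    Hypothetical policy: public data is always allowed, internal data
    only after anonymisation, confidential and restricted data never.
    """
    if classification == DataClass.PUBLIC:
        return True
    if classification == DataClass.INTERNAL:
        return anonymised
    return False
```

Encoding the rules this way keeps the policy testable: the same table the AUP describes in prose can be enforced in tooling and audited for drift.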

Policy Templates

AI governance workshops typically include policy templates that companies can adapt to their specific context. These templates are pre-aligned with PDPA requirements and Malaysian regulatory expectations, reducing the time and cost of developing governance documentation from scratch.

Risk Assessment for AI in the Malaysian Context

AI risk assessment helps companies identify, evaluate, and mitigate the risks associated with AI adoption. For Malaysian companies, the risk assessment framework should account for local regulatory, cultural, and business factors.

Risk Categories

Data Privacy Risks

  • Unauthorised disclosure of personal data through AI tools
  • Non-compliance with PDPA requirements
  • Cross-border data transfer issues (many AI tools process data outside Malaysia)
  • Data retention by AI tool providers

Accuracy and Reliability Risks

  • AI hallucinations (generating false information presented as fact)
  • Bias in AI outputs based on training data limitations
  • Over-reliance on AI for critical decisions
  • Outdated information in AI knowledge bases

Legal and Regulatory Risks

  • Non-compliance with industry-specific regulations (BNM, SC, MCMC)
  • Intellectual property issues (ownership of AI-generated content, potential infringement)
  • Contractual obligations (NDAs, client agreements that may restrict AI use)
  • Employment law implications (using AI in hiring, performance management)

Reputational Risks

  • Public disclosure of inappropriate AI use
  • Customer trust erosion if AI is used without transparency
  • Professional liability for AI-assisted errors

Operational Risks

  • Dependency on AI tools that may experience outages
  • Vendor lock-in with specific AI providers
  • Skills atrophy if teams become overly dependent on AI

Risk Mitigation Strategies

AI governance workshops cover practical mitigation strategies for each risk category:

  • Technical controls – Enterprise AI tool configurations, data loss prevention tools, access controls
  • Policy controls – Acceptable use policies, data handling procedures, review requirements
  • Training controls – Mandatory AI literacy training, role-specific governance training, ongoing awareness programmes
  • Monitoring controls – Regular audits of AI use, compliance checks, incident tracking
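
A risk register built from the categories above can be prioritised with a simple likelihood-times-impact score. The 1–5 scales, thresholds, and example entries below are illustrative assumptions, not a prescribed standard; calibrate them to your own risk appetite.

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Score a risk on 1-5 scales and bucket it for mitigation priority."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    score = likelihood * impact
    if score >= 15:
        return "high"    # mitigate before adoption proceeds
    if score >= 8:
        return "medium"  # adopt with controls, monitor closely
    return "low"         # accept and review periodically

# Hypothetical register entries: (description, likelihood, impact)
register = [
    ("Personal data disclosed via external AI tool", 3, 5),
    ("AI hallucination in an internal draft", 4, 2),
]
scored = [(name, risk_level(l, i)) for name, l, i in register]
```

Here the data privacy entry scores 15 and lands in the high bucket, while the internal-draft hallucination scores 8 and is medium: the same exercise the workshop runs on paper, made repeatable.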

HRDF Claimable AI Governance Workshops

AI governance workshops are fully HRDF claimable for Malaysian companies. These workshops are designed to help organisations build governance frameworks that enable safe, effective AI adoption.

Workshop Structure

Module 1: AI Governance Foundations (2 hours) – Understanding why governance matters, the Malaysian regulatory landscape, and key governance principles.

Module 2: PDPA and AI Compliance (2 hours) – Deep dive into PDPA requirements for AI use, data classification, anonymisation techniques, and vendor assessment.

Module 3: Policy Development Workshop (2 hours) – Hands-on development of an AI acceptable use policy using templates tailored to the Malaysian context.

Module 4: Risk Assessment and Mitigation (2 hours) – Practical risk assessment exercises using your company's actual AI use cases, with mitigation planning.

Who Should Attend

AI governance workshops are designed for cross-functional teams including:

  • Senior leadership – Understanding strategic AI governance and board-level responsibilities
  • IT and data teams – Technical governance implementation, tool selection, and security
  • Legal and compliance – Regulatory alignment, policy development, and risk management
  • HR and L&D – Training programme design and workforce readiness
  • Department heads – Understanding governance requirements and communicating expectations to teams

Companies that invest in AI governance workshops alongside technical AI training programmes create the foundation for sustainable, responsible AI adoption that delivers long-term value while managing risk appropriately.

Frequently Asked Questions

How does the PDPA affect AI use in Malaysia?

The Personal Data Protection Act 2010 (PDPA) governs how personal data is processed in Malaysia. It directly affects AI use by requiring companies to obtain consent before processing personal data, limit disclosure to authorised purposes, ensure security of data, and comply with data retention limits. Companies using AI tools must ensure PDPA compliance, particularly when tools process or store data outside Malaysia.

Does Malaysia have a national AI governance framework?

Yes, Malaysia has established national AI governance through the MyDIGITAL blueprint and National AI Roadmap. These frameworks set ethical AI principles, sectoral adoption targets, workforce development goals, and data governance standards. Companies should align their internal AI governance policies with these national frameworks to ensure regulatory alignment and demonstrate responsible AI adoption.

What should an AI acceptable use policy include?

An AI acceptable use policy for Malaysian companies should include approved tools, data handling rules (what can and cannot be entered into AI), output review requirements, transparency and disclosure guidelines, prohibited uses, incident reporting procedures, training requirements, and enforcement mechanisms. The policy should be aligned with PDPA requirements and reviewed regularly as AI tools and regulations evolve.

Are AI governance workshops HRDF claimable?

Yes, AI governance workshops are fully HRDF claimable when delivered by an HRD Corp-registered training provider. These workshops typically run for 1-2 days and cover regulatory compliance, policy development, risk assessment, and governance implementation. Companies can claim under SBL-Khas or SBL schemes, covering up to 100% of training fees.

Who should attend AI governance training?

AI governance training should involve a cross-functional group including senior leadership, IT and data teams, legal and compliance, HR and L&D, and department heads. This ensures governance decisions reflect all perspectives – strategic, technical, legal, and operational – and that policies are understood and implemented consistently across the organisation.
