
AI Policy Template for Companies in Malaysia & Singapore

February 11, 2026 · 12 min read · Pertama Partners
Updated March 15, 2026
For: Legal/Compliance, CISO, Board Member, CTO/CIO, CHRO, Consultant, CEO/Founder, Data Science/ML, IT Manager

A ready-to-use AI policy template for companies in Malaysia and Singapore. Covers data handling, approved tools, employee responsibilities, and compliance with PDPA and Singapore AI governance frameworks.


Key Takeaways

  1. Formal AI policies enable faster, safer adoption than informal guidelines
  2. Never input personal data or confidential information into AI tools
  3. Use only enterprise-approved AI tools; avoid free consumer versions
  4. All AI-generated content requires human review before business use
  5. Disclose AI use in client deliverables and regulatory submissions
  6. Implement quarterly policy reviews, as AI technology evolves rapidly
  7. Mandatory employee training prevents underground AI usage and incidents

Why Every Company Needs a Formal AI Policy

The rapid adoption of generative AI tools across Southeast Asian workplaces has created an urgent governance gap. Employees are already using ChatGPT, Claude, Copilot, and other AI tools — often without their employer's knowledge or approval. Without a formal AI policy, companies face uncontrolled risk exposure in data privacy, intellectual property, regulatory compliance, and output quality.

A well-crafted AI policy does not restrict innovation. It channels it safely. Companies with clear AI policies adopt AI faster, with fewer incidents, than those relying on informal guidelines or no guidance at all.

This template is designed specifically for companies operating in Malaysia and Singapore, incorporating the regulatory requirements of both jurisdictions.

AI Policy Template

Below is a comprehensive template you can adapt for your organisation. Sections marked with [COMPANY] should be replaced with your company name.


1. Purpose and Scope

This policy establishes guidelines for the responsible use of artificial intelligence tools and systems at [COMPANY]. It applies to all employees, contractors, and third-party service providers who use AI tools in the course of their work for [COMPANY].

Objectives:

  • Enable employees to use AI tools productively and safely
  • Protect company data, intellectual property, and client information
  • Ensure compliance with applicable data protection laws (PDPA Malaysia, PDPA Singapore)
  • Align with Singapore's Model AI Governance Framework and Malaysia's AI ethics guidelines
  • Establish clear accountability for AI-related decisions and outputs

2. Definitions

  • AI Tool: Any software that uses artificial intelligence or machine learning to generate text, images, code, analysis, or other outputs
  • Generative AI: AI systems that create new content (text, images, code) based on prompts
  • Prompt: The input or instruction given to an AI tool
  • AI Output: Any content, analysis, or recommendation generated by an AI tool
  • Personal Data: Any data that can identify a living individual, as defined under PDPA
  • Confidential Data: Information classified as confidential under [COMPANY]'s data classification policy

3. Approved AI Tools

[COMPANY] has approved the following AI tools for business use:

  • [e.g. ChatGPT Enterprise]: general writing, research, and analysis. Data classification allowed: Internal, Public. Licence type: Enterprise
  • [e.g. Microsoft Copilot]: Office 365 integration. Data classification allowed: Internal, Public. Licence type: Enterprise
  • [e.g. GitHub Copilot]: code generation and review. Data classification allowed: internal code only. Licence type: Enterprise

Unapproved tools: Employees must not use AI tools that have not been approved by [COMPANY]'s IT department. If you wish to request approval for a new tool, submit a request through the [AI Tool Approval Process].

Free/consumer versions: The free or consumer versions of AI tools (e.g. free ChatGPT, free Claude) are not approved for work use, as they may use your inputs for model training and lack enterprise data protection.
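One lightweight way to back the approved-tools rule is an allowlist keyed on tool domains, checked at the network proxy or in an internal request portal. The sketch below is illustrative only; the domain names and tool mappings are assumptions you would replace with your organisation's actual approved list.

```python
# Hypothetical allowlist -- replace with your organisation's approved AI tools.
APPROVED_AI_DOMAINS = {
    "chatgpt.com": "ChatGPT Enterprise",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def check_ai_domain(domain: str) -> str:
    """Map a requested AI tool domain to an approved tool, or flag it for review."""
    tool = APPROVED_AI_DOMAINS.get(domain.lower())
    if tool is None:
        return f"BLOCKED: {domain} is not an approved AI tool; submit an approval request"
    return f"ALLOWED: {tool}"
```

An allowlist (rather than a blocklist) matches the policy's default-deny stance: a tool is unapproved until IT explicitly adds it.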

4. Data Handling Rules

This is the most critical section of the policy. Data mishandling is the primary risk of AI use in the workplace.

Never input into any AI tool:

  • Personal data of employees, clients, or third parties (names, IC numbers, addresses, phone numbers, email addresses)
  • Financial data (bank account numbers, credit card details, salary information)
  • Health or medical information
  • Client-confidential information or trade secrets
  • Passwords, API keys, or security credentials
  • Proprietary source code (unless using an approved code AI tool)
  • Board papers, M&A documents, or legally privileged communications

Permitted inputs:

  • Publicly available information
  • De-identified or anonymised data
  • General business writing (emails, reports, presentations) that does not contain restricted data
  • Internal process documentation that is not classified as confidential

When in doubt: Do not input the data. Consult your manager or the [COMPANY] Data Protection Officer.
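The "never input" list above can also be backed by a simple pre-submission screen. A minimal sketch, using illustrative regex patterns for Malaysian and Singapore NRIC formats, email addresses, and card-like digit runs; a production deployment would rely on a proper DLP tool rather than hand-rolled patterns like these.

```python
import re

# Illustrative patterns only -- not a substitute for a real DLP solution.
# Malaysian NRIC format: YYMMDD-PB-###G (e.g. 880101-14-5678)
# Singapore NRIC/FIN format: letter + 7 digits + letter (e.g. S1234567D)
RESTRICTED_PATTERNS = {
    "my_nric": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),
    "sg_nric": re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of restricted-data patterns found in a prompt."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Summarise the case for S1234567D at alice@example.com")
print(hits)  # → ['sg_nric', 'email']
```

A non-empty result means the prompt should be blocked or rewritten with the restricted data removed before it reaches any AI tool.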

5. Quality Assurance Requirements

AI outputs must be reviewed before use in any business context:

  • All AI-generated content must be reviewed by a human before being sent to clients, published externally, or used in decision-making
  • Factual claims must be verified against primary sources — AI tools can produce plausible but incorrect information
  • Legal, financial, and medical content requires review by a qualified professional regardless of AI involvement
  • Code generated by AI must go through the standard code review process

6. Disclosure and Transparency

  • Internal use: Employees are not required to disclose AI use for routine internal tasks (drafting emails, summarising notes, etc.)
  • Client deliverables: AI use must be disclosed to clients when AI-generated content forms a substantial part of a client deliverable, unless the client has agreed otherwise
  • Regulatory filings: AI-generated content used in regulatory submissions must be disclosed and reviewed by the relevant compliance team
  • Recruitment: AI use in candidate screening or evaluation must be documented and reviewed for bias

7. Intellectual Property

  • AI-generated content created using [COMPANY] tools and in the course of employment is the intellectual property of [COMPANY]
  • Employees must not input [COMPANY]'s proprietary content into AI tools that may use inputs for model training
  • Employees must be aware that AI-generated content may not be eligible for copyright protection in all jurisdictions

8. Compliance

Singapore:

  • All AI use must comply with the Personal Data Protection Act 2012 (PDPA)
  • AI deployments involving personal data must be assessed against IMDA's Model AI Governance Framework
  • AI use in regulated industries (financial services, healthcare) must comply with sector-specific MAS or MOH guidelines

Malaysia:

  • All AI use must comply with the Personal Data Protection Act 2010 (PDPA)
  • Cross-border data transfers via AI tools must comply with PDPA Section 129
  • Companies should align with Malaysia's National AI Roadmap and ethical AI guidelines

9. Incident Reporting

Employees must report AI-related incidents immediately to [designated contact]:

  • Accidental input of restricted data into an AI tool
  • AI output that contains personal data, bias, or discriminatory content
  • AI output used in a decision that later proves to be incorrect
  • Any data breach or security incident involving AI tools

10. Enforcement

  • Violations of this policy will be addressed through [COMPANY]'s standard disciplinary process
  • Repeated or serious violations may result in suspension of AI tool access
  • This policy is reviewed quarterly and updated as AI technology and regulations evolve

How to Implement This Policy

Step 1: Customise the Template

Replace all [COMPANY] placeholders, fill in the approved tools table, and add any industry-specific requirements.

Step 2: Legal Review

Have your legal team or external counsel review the policy for compliance with your specific regulatory obligations.

Step 3: Leadership Endorsement

The policy should be endorsed by the CEO or Managing Director, not just IT or HR. This signals that AI governance is a company-wide priority.

Step 4: Employee Training

Distribute the policy with a mandatory training session — not just an email. Employees need to understand the "why" behind each rule.

Step 5: Monitor and Iterate

Review the policy quarterly. AI tools and regulations evolve rapidly, and your policy must keep pace.

Regulatory Context

Singapore's AI Governance Framework

Singapore's IMDA Model AI Governance Framework is principles-based, emphasising transparency, fairness, and accountability. While not legally binding, it is considered best practice and is increasingly referenced by regulators including MAS and the PDPC.

Malaysia's PDPA Considerations

Malaysia's PDPA governs the processing of personal data in commercial transactions. Companies using AI tools that process personal data must ensure compliance with the seven data protection principles, particularly the General Principle, Notice and Choice Principle, and Disclosure Principle.

Common Mistakes to Avoid

  1. Writing the policy but not training employees — A policy that nobody reads provides no protection
  2. Being too restrictive — Overly strict policies drive AI use underground, increasing risk
  3. Not updating the policy — AI tools change every quarter; your policy must keep up
  4. Ignoring the approved tools list — Without clear approved tools, employees will choose their own
  5. Skipping the data classification — Data handling rules are the most important part of any AI policy

Adapting Policy Templates to Your Organisation's Context

Generic AI policy templates provide a starting framework but require significant customisation to address organisation-specific risk profiles, regulatory environments, and operational realities. Before adopting a template, organisations should conduct a gap analysis comparing the template's provisions against their existing policies, industry regulations, and identified AI risk areas. Sections addressing data handling should reference the organisation's existing data governance framework and privacy policies. Sections covering acceptable use should reflect the specific AI tools deployed within the organisation and the business contexts in which they are used. Legal counsel should review all customised policies before adoption to ensure alignment with applicable employment law, intellectual property protections, and industry-specific regulatory requirements.

Rolling Out AI Policies Across the Organisation

Policy adoption requires more than distributing a document through email. Effective rollout strategies include mandatory training sessions that walk employees through the policy using scenario-based examples relevant to their specific roles and departments. Department managers should lead follow-up discussions applying the policy to their team's most common AI use cases, identifying any ambiguities or gaps that require policy clarification. Organisations should establish a dedicated communication channel where employees can ask questions about policy interpretation and report potential policy violations without fear of punitive consequences during the initial adoption period.

Keeping AI Policies Current as Technology Evolves

AI capabilities evolve rapidly, making policy currency an ongoing challenge. Organisations should establish a formal policy review cycle, quarterly as this template recommends, triggered additionally by significant events such as the release of major new AI models, changes in regulatory requirements, or internal AI incidents that expose policy gaps. The review process should include feedback collection from employees about policy provisions that are unclear, impractical, or overly restrictive. Version control and change tracking ensure that all employees are working from the current policy version, and policy update communications should highlight specific changes and their rationale rather than simply redistributing the entire document.

Measuring Policy Effectiveness

AI policies should include measurable success criteria that enable organisations to evaluate whether the policy achieves its intended objectives. Track metrics including employee awareness rates measured through periodic assessments, reported policy violations and their resolution outcomes, shadow AI usage rates detected through IT monitoring, and employee satisfaction with AI tool access and policy clarity. Organisations should set annual improvement targets for each metric and use the results to guide policy refinements during scheduled review cycles. Declining awareness scores or rising violation rates signal that the policy requires updated training materials, clearer language, or revised provisions that better match actual workplace AI usage patterns.
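The metrics above can be computed from a simple quarterly roll-up. A minimal sketch, with assumed field names and illustrative numbers rather than any prescribed reporting format:

```python
# Illustrative quarterly tracking data -- field names and figures are
# assumptions, not values prescribed by the policy template.
quarterly = {
    "employees": 400,
    "passed_awareness_assessment": 352,
    "reported_violations": 6,
    "shadow_ai_detections": 9,
}

def policy_metrics(q: dict) -> dict:
    """Derive per-quarter effectiveness metrics from raw tracking counts."""
    n = q["employees"]
    return {
        "awareness_rate": q["passed_awareness_assessment"] / n,
        "violations_per_100_staff": 100 * q["reported_violations"] / n,
        "shadow_ai_per_100_staff": 100 * q["shadow_ai_detections"] / n,
    }

m = policy_metrics(quarterly)
print(f"Awareness: {m['awareness_rate']:.0%}")  # → Awareness: 88%
```

Comparing these numbers quarter over quarter, rather than in isolation, is what reveals whether training and policy revisions are actually working.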

Common Questions

Does my company legally need an AI policy?

Yes, in practice. While there is no specific law requiring an AI policy, both Malaysia's PDPA and Singapore's PDPA impose obligations on how personal data is handled, including when it is processed through AI tools. An AI policy is the practical mechanism for ensuring compliance and managing risk.

What should an AI policy cover?

At minimum, an AI policy should cover: approved AI tools, data handling rules (what can and cannot be input), quality assurance requirements for AI outputs, disclosure guidelines, compliance with local data protection laws, and an incident reporting process.

How often should an AI policy be updated?

AI policies should be reviewed quarterly and updated whenever there are significant changes to AI tools, regulations, or company operations. Major updates such as new tool approvals or regulatory changes should trigger immediate policy revisions.

References

  1. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
  2. ISO/IEC 42001:2023, Artificial Intelligence Management System. International Organization for Standardization, 2023.
  3. EU AI Act, Regulatory Framework for Artificial Intelligence. European Commission, 2024.
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
  5. OECD Principles on Artificial Intelligence. OECD, 2019.
  6. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore, 2012.
  7. ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
