AI Governance & Adoption Framework

AI Policy Template for Companies in Malaysia & Singapore

February 11, 2026 · 12 min read · Pertama Partners

A ready-to-use AI policy template for companies in Malaysia and Singapore. Covers data handling, approved tools, employee responsibilities, and compliance with PDPA and Singapore AI governance frameworks.


Why Every Company Needs a Formal AI Policy

The rapid adoption of generative AI tools across Southeast Asian workplaces has created an urgent governance gap. Employees are already using ChatGPT, Claude, Copilot, and other AI tools — often without their employer's knowledge or approval. Without a formal AI policy, companies face uncontrolled risk exposure in data privacy, intellectual property, regulatory compliance, and output quality.

A well-crafted AI policy does not restrict innovation; it channels it safely. In our experience, companies with clear AI policies adopt AI faster, and with fewer incidents, than those relying on informal guidelines or no guidance at all.

This template is designed specifically for companies operating in Malaysia and Singapore, incorporating the regulatory requirements of both jurisdictions.

AI Policy Template

Below is a comprehensive template you can adapt for your organisation. Sections marked with [COMPANY] should be replaced with your company name.


1. Purpose and Scope

This policy establishes guidelines for the responsible use of artificial intelligence tools and systems at [COMPANY]. It applies to all employees, contractors, and third-party service providers who use AI tools in the course of their work for [COMPANY].

Objectives:

  • Enable employees to use AI tools productively and safely
  • Protect company data, intellectual property, and client information
  • Ensure compliance with applicable data protection laws (PDPA Malaysia, PDPA Singapore)
  • Align with Singapore's Model AI Governance Framework and Malaysia's AI ethics guidelines
  • Establish clear accountability for AI-related decisions and outputs

2. Definitions

Term | Definition
AI Tool | Any software that uses artificial intelligence or machine learning to generate text, images, code, analysis, or other outputs
Generative AI | AI systems that create new content (text, images, code) based on prompts
Prompt | The input or instruction given to an AI tool
AI Output | Any content, analysis, or recommendation generated by an AI tool
Personal Data | Any data that can identify a living individual, as defined under the PDPA
Confidential Data | Information classified as confidential under [COMPANY]'s data classification policy

3. Approved AI Tools

[COMPANY] has approved the following AI tools for business use:

Tool | Approved Use | Data Classification Allowed | Licence Type
[e.g. ChatGPT Enterprise] | General writing, research, analysis | Internal, Public | Enterprise
[e.g. Microsoft Copilot] | Office 365 integration | Internal, Public | Enterprise
[e.g. GitHub Copilot] | Code generation and review | Internal code only | Enterprise

Unapproved tools: Employees must not use AI tools that have not been approved by [COMPANY]'s IT department. If you wish to request approval for a new tool, submit a request through the [AI Tool Approval Process].

Free/consumer versions: The free or consumer versions of AI tools (e.g. free ChatGPT, free Claude) are not approved for work use, as they may use your inputs for model training and lack enterprise data protection.
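An approved-tools register like the table above can also be enforced programmatically, for example in an internal gateway or browser check. The sketch below is a minimal illustration only — the tool names and classification labels are placeholders, to be replaced with [COMPANY]'s actual approvals:

```python
# Hypothetical approved-tools register mirroring the policy table.
# Tool names and data classifications are placeholders, not real approvals.
APPROVED_TOOLS = {
    "chatgpt-enterprise": {"internal", "public"},
    "microsoft-copilot": {"internal", "public"},
    "github-copilot": {"internal"},  # internal code only
}

def is_use_approved(tool: str, data_classification: str) -> bool:
    """True only if the tool is approved AND may handle this data class."""
    return data_classification in APPROVED_TOOLS.get(tool, set())

# Unapproved tools (e.g. free consumer versions) fail regardless of data class.
assert is_use_approved("github-copilot", "internal")
assert not is_use_approved("chatgpt-free", "public")
assert not is_use_approved("github-copilot", "confidential")
```

A register in this form also gives the IT department a single place to record new approvals coming out of the [AI Tool Approval Process].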

4. Data Handling Rules

This is the most critical section of the policy. Data mishandling is the primary risk of AI use in the workplace.

Never input into any AI tool:

  • Personal data of employees, clients, or third parties (names, IC numbers, addresses, phone numbers, email addresses)
  • Financial data (bank account numbers, credit card details, salary information)
  • Health or medical information
  • Client-confidential information or trade secrets
  • Passwords, API keys, or security credentials
  • Proprietary source code (unless using an approved code AI tool)
  • Board papers, M&A documents, or legally privileged communications

Permitted inputs:

  • Publicly available information
  • De-identified or anonymised data
  • General business writing (emails, reports, presentations) that does not contain restricted data
  • Internal process documentation that is not classified as confidential

When in doubt: Do not input the data. Consult your manager or the [COMPANY] Data Protection Officer.
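A lightweight pre-submission check can catch the most common restricted-data patterns before a prompt leaves the company. The sketch below is illustrative only — the patterns (Malaysian IC numbers, Singapore NRIC/FIN, email addresses, card numbers) are assumptions covering a fraction of the "never input" list, and they do not replace human judgement or Data Protection Officer review:

```python
import re

# Illustrative restricted-data patterns; not exhaustive, and no substitute
# for the policy's "when in doubt, do not input" rule.
PATTERNS = {
    "Malaysian NRIC": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),
    "Singapore NRIC/FIN": re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card number (16 digit)": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def flag_restricted_data(prompt: str) -> list[str]:
    """Return the names of restricted-data patterns found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

findings = flag_restricted_data("Please summarise feedback from ali@example.com")
assert findings == ["email address"]  # prompt should be blocked or redacted
```

In practice a check like this would sit in front of the approved tool (e.g. in a proxy or a shared helper), flagging prompts for redaction rather than silently rewriting them.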

5. Quality Assurance Requirements

AI outputs must be reviewed before use in any business context:

  • All AI-generated content must be reviewed by a human before being sent to clients, published externally, or used in decision-making
  • Factual claims must be verified against primary sources — AI tools can produce plausible but incorrect information
  • Legal, financial, and medical content requires review by a qualified professional regardless of AI involvement
  • Code generated by AI must go through the standard code review process

6. Disclosure and Transparency

  • Internal use: Employees are not required to disclose AI use for routine internal tasks (drafting emails, summarising notes, etc.)
  • Client deliverables: AI use must be disclosed to clients when AI-generated content forms a substantial part of a client deliverable, unless the client has agreed otherwise
  • Regulatory filings: AI-generated content used in regulatory submissions must be disclosed and reviewed by the relevant compliance team
  • Recruitment: AI use in candidate screening or evaluation must be documented and reviewed for bias

7. Intellectual Property

  • AI-generated content created using [COMPANY] tools and in the course of employment is the intellectual property of [COMPANY]
  • Employees must not input [COMPANY]'s proprietary content into AI tools that may use inputs for model training
  • Employees must be aware that AI-generated content may not be eligible for copyright protection in all jurisdictions

8. Compliance

Singapore:

  • All AI use must comply with the Personal Data Protection Act 2012 (PDPA)
  • AI deployments involving personal data must be assessed against IMDA's Model AI Governance Framework
  • AI use in regulated industries (financial services, healthcare) must comply with sector-specific MAS or MOH guidelines

Malaysia:

  • All AI use must comply with the Personal Data Protection Act 2010 (PDPA)
  • Cross-border data transfers via AI tools must comply with PDPA Section 129
  • Companies should align with Malaysia's National AI Roadmap and ethical AI guidelines

9. Incident Reporting

Employees must report AI-related incidents immediately to [designated contact]:

  • Accidental input of restricted data into an AI tool
  • AI output that contains personal data, bias, or discriminatory content
  • AI output used in a decision that later proves to be incorrect
  • Any data breach or security incident involving AI tools

10. Enforcement

  • Violations of this policy will be addressed through [COMPANY]'s standard disciplinary process
  • Repeated or serious violations may result in suspension of AI tool access
  • This policy is reviewed quarterly and updated as AI technology and regulations evolve

How to Implement This Policy

Step 1: Customise the Template

Replace all [COMPANY] placeholders, fill in the approved tools table, and add any industry-specific requirements.

Step 2: Legal Review

Have your legal team or external counsel review the policy for compliance with your specific regulatory obligations.

Step 3: Leadership Endorsement

The policy should be endorsed by the CEO or Managing Director, not just IT or HR. This signals that AI governance is a company-wide priority.

Step 4: Employee Training

Distribute the policy with a mandatory training session — not just an email. Employees need to understand the "why" behind each rule.

Step 5: Monitor and Iterate

Review the policy quarterly. AI tools and regulations evolve rapidly, and your policy must keep pace.

Regulatory Context

Singapore's AI Governance Framework

Singapore's IMDA Model AI Governance Framework is principles-based, emphasising transparency, fairness, and accountability. While not legally binding, it is considered best practice and is increasingly referenced by regulators including MAS and the PDPC.

Malaysia's PDPA Considerations

Malaysia's PDPA governs the processing of personal data in commercial transactions. Companies using AI tools that process personal data must ensure compliance with the seven data protection principles, particularly the General Principle, Notice and Choice Principle, and Disclosure Principle.

Common Mistakes to Avoid

  1. Writing the policy but not training employees — A policy that nobody reads provides no protection
  2. Being too restrictive — Overly strict policies drive AI use underground, increasing risk
  3. Not updating the policy — AI tools change every quarter; your policy must keep up
  4. Ignoring the approved tools list — Without clear approved tools, employees will choose their own
  5. Skipping the data classification — Data handling rules are the most important part of any AI policy

Frequently Asked Questions

Does my company legally need an AI policy?

Yes, in practice. While there is no specific law requiring an AI policy, both Malaysia's PDPA and Singapore's PDPA impose obligations on how personal data is handled — including when processed through AI tools. An AI policy is the practical mechanism for ensuring compliance and managing risk.

What should an AI policy include?

At minimum, an AI policy should cover: approved AI tools, data handling rules (what can and cannot be input), quality assurance requirements for AI outputs, disclosure guidelines, compliance with local data protection laws, and an incident reporting process.

How often should an AI policy be updated?

AI policies should be reviewed quarterly and updated whenever there are significant changes to AI tools, regulations, or company operations. Major events such as new tool approvals or regulatory changes should trigger immediate policy revisions.

Ready to Apply These Insights to Your Organisation?

Book a complimentary AI Readiness Audit to identify opportunities specific to your context.

Book an AI Readiness Audit