
The rapid adoption of generative AI tools across Southeast Asian workplaces has created an urgent governance gap. Employees are already using ChatGPT, Claude, Copilot, and other AI tools, often without their employer's knowledge or approval. Without a formal AI policy, companies face uncontrolled risk exposure in data privacy, intellectual property, regulatory compliance, and output quality.
A well-crafted AI policy does not restrict innovation. It channels it safely. Companies with clear AI policies adopt AI faster, with fewer incidents, than those relying on informal guidelines or no guidance at all.
This template is designed specifically for companies operating in Malaysia and Singapore, incorporating the regulatory requirements of both jurisdictions.
Below is a comprehensive template you can adapt for your organisation. Bracketed placeholders such as [COMPANY] should be replaced with your own details.
This policy establishes guidelines for the responsible use of artificial intelligence tools and systems at [COMPANY]. It applies to all employees, contractors, and third-party service providers who use AI tools in the course of their work for [COMPANY].
Objectives:

- Enable employees to use approved AI tools productively and safely
- Protect personal data, confidential information, and intellectual property
- Ensure compliance with the data protection laws of Malaysia and Singapore
- Maintain quality and accountability for all AI-assisted work
| Term | Definition |
|---|---|
| AI Tool | Any software that uses artificial intelligence or machine learning to generate text, images, code, analysis, or other outputs |
| Generative AI | AI systems that create new content (text, images, code) based on prompts |
| Prompt | The input or instruction given to an AI tool |
| AI Output | Any content, analysis, or recommendation generated by an AI tool |
| Personal Data | Any data that can identify a living individual, as defined under the applicable PDPA (Malaysia 2010 or Singapore 2012) |
| Confidential Data | Information classified as confidential under [COMPANY]'s data classification policy |
[COMPANY] has approved the following AI tools for business use:
| Tool | Approved Use | Data Classification Allowed | Licence Type |
|---|---|---|---|
| [e.g. ChatGPT Enterprise] | General writing, research, analysis | Internal, Public | Enterprise |
| [e.g. Microsoft Copilot] | Office 365 integration | Internal, Public | Enterprise |
| [e.g. GitHub Copilot] | Code generation and review | Internal code only | Enterprise |
Unapproved tools: Employees must not use AI tools that have not been approved by [COMPANY]'s IT department. If you wish to request approval for a new tool, submit a request through the [AI Tool Approval Process].
Free/consumer versions: The free or consumer versions of AI tools (e.g. free ChatGPT, free Claude) are not approved for work use, as they may use your inputs for model training and lack enterprise data protection.
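
As an illustration of how the approved tools table might be operationalised, here is a minimal Python sketch of an allowlist check an IT team could adapt, for example in onboarding tooling or a proxy rule. The tool names, data classifications, and function are hypothetical, not part of the policy itself.

```python
# Hypothetical sketch: the approved-tools table encoded as a machine-readable
# allowlist. Names and classifications below are illustrative placeholders.

APPROVED_TOOLS = {
    "chatgpt-enterprise": {"allowed_data": {"internal", "public"}},
    "microsoft-copilot":  {"allowed_data": {"internal", "public"}},
    "github-copilot":     {"allowed_data": {"internal-code"}},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Permit use only if the tool is approved AND the data class is allowed for it."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and data_classification in entry["allowed_data"]

assert is_use_permitted("chatgpt-enterprise", "internal")
assert not is_use_permitted("chatgpt-enterprise", "confidential")  # wrong data class
assert not is_use_permitted("free-chatgpt", "public")              # unapproved tool
```

Keeping such an allowlist in version control also gives the IT department an auditable record of when each tool was approved.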
This is the most critical section of the policy. Data mishandling is the primary risk of AI use in the workplace.
Never input into any AI tool:

- Personal data of customers, employees, or any other identifiable individual (including NRIC and MyKad numbers)
- Information classified as Confidential under [COMPANY]'s data classification policy
- Trade secrets and other proprietary intellectual property
Permitted inputs:

- Information classified as Public
- Information classified as Internal, where the tool's "Data Classification Allowed" entry in the approved tools table permits it
- Generic questions and fully anonymised examples containing no personal or confidential data
When in doubt: Do not input the data. Consult your manager or the [COMPANY] Data Protection Officer.
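
For teams that want a technical backstop to these rules, the sketch below shows one way to flag obvious personal data before a prompt leaves the company, using simplified patterns for Singapore NRIC/FIN numbers, Malaysian MyKad numbers, and email addresses. The patterns are illustrative only and are no substitute for a proper data loss prevention solution or the judgement described above.

```python
import re

# Hypothetical sketch: a pre-prompt screen for obvious personal data.
# The patterns are deliberately simplified and will not catch everything.

PII_PATTERNS = {
    "sg_nric":  re.compile(r"\b[STFG]\d{7}[A-Z]\b"),       # Singapore NRIC/FIN format
    "my_mykad": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),      # Malaysian MyKad (IC) format
    "email":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any PII patterns detected; an empty list means none matched."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = screen_prompt("Summarise the complaint filed by S1234567D on 3 May.")
if hits:
    print(f"Blocked: prompt appears to contain {', '.join(hits)}. Consult the DPO.")
```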
AI outputs must be reviewed before use in any business context:

- Verify all factual claims, figures, and citations; generative AI tools can produce confident but incorrect output
- Put AI-generated code through the normal code review process before it is merged or deployed
- Check that outputs do not reproduce third-party copyrighted material or disclose confidential information
- The employee who uses an AI output remains fully accountable for it
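
Where teams want an audit trail for this review step, a lightweight record like the sketch below can capture who checked an output and which checks passed. The field names and checks are hypothetical examples, not requirements of the policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a minimal review record for an AI output.
# Check names are illustrative; adapt them to your own QA requirements.

@dataclass
class AIOutputReview:
    tool: str
    reviewer: str
    checks_passed: dict[str, bool]
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def approved(self) -> bool:
        """An output is releasable only when every check has passed."""
        return all(self.checks_passed.values())

review = AIOutputReview(
    tool="chatgpt-enterprise",
    reviewer="a.tan",
    checks_passed={"facts_verified": True, "no_confidential_leak": True, "ip_cleared": False},
)
print(review.approved())  # False: the IP clearance check is still outstanding
```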
Singapore: AI use involving personal data must comply with the Personal Data Protection Act 2012 (PDPA). Employees should also follow the IMDA Model AI Governance Framework, which regulators including the PDPC and MAS increasingly treat as best practice. Consult [COMPANY]'s Data Protection Officer before using AI tools to process personal data.
Malaysia: AI use involving personal data in commercial transactions must comply with the Personal Data Protection Act 2010 (PDPA) and its seven data protection principles, particularly the General, Notice and Choice, and Disclosure Principles. Personal data must not be disclosed to an AI tool without the data subject's consent.
Employees must report AI-related incidents immediately to [designated contact], including:

- Accidental entry of personal or confidential data into any AI tool
- Use of an unapproved AI tool for work purposes
- Delivery or publication of an AI output later found to be inaccurate, infringing, or otherwise harmful
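
To make the reporting path unambiguous, some companies encode the routing alongside the policy. The sketch below is a hypothetical example; the incident types and contact addresses are placeholders for [COMPANY]'s own.

```python
# Hypothetical sketch: routing AI incident reports to the designated contact.
# Incident types and addresses are placeholders, not prescribed by the policy.

INCIDENT_ROUTING = {
    "data_leak":       "dpo@example.com",          # personal/confidential data entered into a tool
    "unapproved_tool": "it-security@example.com",  # unapproved AI tool used for work
    "bad_output":      "it-security@example.com",  # inaccurate or infringing output delivered
}

def route_incident(incident_type: str) -> str:
    """Return the contact for an incident type, defaulting to IT security."""
    return INCIDENT_ROUTING.get(incident_type, "it-security@example.com")

print(route_incident("data_leak"))  # dpo@example.com
```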
1. Replace all [COMPANY] placeholders, fill in the approved tools table, and add any industry-specific requirements.
2. Have your legal team or external counsel review the policy for compliance with your specific regulatory obligations.
3. Have the policy endorsed by the CEO or Managing Director, not just IT or HR. This signals that AI governance is a company-wide priority.
4. Distribute the policy with a mandatory training session, not just an email. Employees need to understand the "why" behind each rule.
5. Review the policy quarterly. AI tools and regulations evolve rapidly, and your policy must keep pace.
Singapore's IMDA Model AI Governance Framework is principles-based, emphasising transparency, fairness, and accountability. While not legally binding, it is considered best practice and is increasingly referenced by regulators including MAS and the PDPC.
Malaysia's PDPA governs the processing of personal data in commercial transactions. Companies using AI tools that process personal data must ensure compliance with the seven data protection principles, particularly the General Principle, Notice and Choice Principle, and Disclosure Principle.
Yes. While there is no specific law requiring an AI policy, both Malaysia's PDPA and Singapore's PDPA impose obligations on how personal data is handled, including when it is processed through AI tools. An AI policy is the practical mechanism for ensuring compliance and managing risk.
At minimum, an AI policy should cover: approved AI tools, data handling rules (what can and cannot be input), quality assurance requirements for AI outputs, disclosure guidelines, compliance with local data protection laws, and an incident reporting process.
AI policies should be reviewed quarterly and updated whenever there are significant changes to AI tools, regulations, or company operations. Major updates such as new tool approvals or regulatory changes should trigger immediate policy revisions.