AI Governance & Risk Management | Guide

AI Policy Template for Southeast Asian Companies

March 16, 2026 · 11 min read · Pertama Partners
For: CTO/CIO, CISO, Legal/Compliance, CEO/Founder

A seven-section AI policy template built for Southeast Asian companies, covering PDPA compliance across jurisdictions, cross-border data transfer, and role-based acceptable use guidelines.

Key Takeaways

  1. Generic AI policy templates built for US or EU companies miss critical Southeast Asian requirements: multi-jurisdiction PDPA compliance, cross-border data transfer restrictions, and Shariah-compliant data handling.
  2. An effective AI policy needs seven sections: scope, data classification, acceptable use, model risk management, vendor governance, incident response, and review cadence.
  3. The most common failure is writing policies at the wrong altitude: too abstract gets ignored, too prescriptive becomes outdated within months.
  4. A four-tier data classification framework (public, internal, confidential, restricted) mapped to AI tool permissions is the foundation.
  5. Role-based acceptable use guidelines outlast tool-based guidelines because tools change faster than roles.

Why Generic AI Policy Templates Fail in Southeast Asia

If you search for "AI policy template" today, you will find dozens of options. Most are structured around US or European regulatory frameworks: GDPR, SOC 2, NIST AI RMF, the EU AI Act. They are competent documents for the jurisdictions they serve.

They are not built for Southeast Asian companies.

The gap is not cosmetic. A mid-market firm operating across Malaysia and Singapore faces compliance requirements, data residency expectations, and cultural considerations that generic templates do not address. Adopting a US-centric AI policy and adding "Malaysia" to the header creates the appearance of governance without the substance of it.

This template is built for the regulatory and business environment Southeast Asian mid-market companies actually operate in.

The Seven Sections Every AI Policy Needs

Section 1: Scope, Definitions, and Ownership

This section answers three questions every employee will ask: What counts as AI? Who does this policy apply to? Who do I ask when I am unsure?

What to include:

  • A working definition of AI that covers current tools (LLMs, copilots, automation platforms) without becoming obsolete when new categories emerge
  • Explicit scope: does this policy cover only company-procured tools, or also personal AI tools used for work tasks?
  • Named policy owner (typically CTO, CISO, or a dedicated AI governance lead)
  • Review cadence: we recommend quarterly for the first year, then semi-annually

Regional consideration: In Malaysia, the Personal Data Protection Act 2010 (PDPA) defines "processing" broadly enough to include AI-assisted analysis of personal data (PDPC Malaysia, "Personal Data Protection Act 2010," 2010). Your AI policy scope should explicitly acknowledge this. In Singapore, the PDPC's Model AI Governance Framework provides a voluntary but increasingly expected standard for responsible AI use (PDPC Singapore, "Model AI Governance Framework, Second Edition," 2020).

Section 2: Data Classification and Handling

This is where most AI policies either fail or succeed. Without clear data classification, employees cannot make good decisions about what information to share with AI tools.

Four-tier classification for AI use:

| Tier | Description | AI Tool Permissions | Examples |
|------|-------------|---------------------|----------|
| Public | Information already published or intended for publication | Any approved AI tool | Marketing copy, published research, public financial reports |
| Internal | Business information not intended for external audiences | Approved enterprise AI tools with data processing agreements | Internal memos, process documentation, non-sensitive meeting notes |
| Confidential | Sensitive business or personal data | Enterprise AI tools with contractual data protection only | Customer data, employee records, financial projections, strategy documents |
| Restricted | Regulated data subject to specific compliance requirements | No external AI tools; on-premises or approved sovereign cloud only | Data subject to PDPA, banking secrecy laws, Shariah audit records |
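The tier-to-permission mapping above can be expressed as a simple pre-flight check. This is an illustrative sketch, not a prescribed implementation: the tier labels and tool-category names (`approved`, `enterprise_dpa`, `on_prem`) are assumed shorthand for the permissions in the table.

```python
# Illustrative mapping of data tiers to permitted AI tool categories.
# Category names are shorthand: "approved" = any approved AI tool,
# "enterprise_dpa" = enterprise tool with contractual data protection,
# "on_prem" = on-premises or approved sovereign cloud.
TIER_PERMISSIONS = {
    "public":       {"approved", "enterprise_dpa", "on_prem"},
    "internal":     {"enterprise_dpa", "on_prem"},
    "confidential": {"enterprise_dpa", "on_prem"},
    "restricted":   {"on_prem"},  # no external AI tools at all
}

def is_allowed(data_tier: str, tool_category: str) -> bool:
    """Return True if data of this tier may be sent to this tool category."""
    allowed = TIER_PERMISSIONS.get(data_tier.lower())
    if allowed is None:
        raise ValueError(f"Unknown data tier: {data_tier}")
    return tool_category in allowed
```

A check like this can back a browser extension or proxy gate, so the policy's altitude ("Tier X data only via tools with a DPA") becomes enforceable rather than aspirational.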

Regional consideration: Cross-border data transfer is where ASEAN complexity surfaces. Malaysia's PDPA Section 129 restricts transfer of personal data outside Malaysia unless the destination country provides adequate protection. Singapore's PDPA requires organizations to ensure comparable protection for overseas transfers (PDPC Singapore, "Advisory Guidelines on Key Concepts in the PDPA," 2022). Indonesia's Government Regulation No. 71 of 2019 requires certain categories of data to be stored domestically. Your policy needs to specify which AI tools process data domestically versus internationally, and which data tiers are eligible for each.

Section 3: Acceptable Use Guidelines

This section translates data classification into daily decisions. Structure it by role, not by tool, because tools change faster than roles do.

Executive leadership (CEO, CFO, board members):

  • Approved for: strategic analysis, market research, presentation preparation, scenario modeling
  • Prohibited: inputting board materials, M&A discussions, undisclosed financial data, or investor communications into any external AI tool
  • Escalation: any AI-assisted output used in regulatory filings or board presentations must be reviewed by legal

Operations and line managers:

  • Approved for: process documentation, workflow optimization, report generation, meeting summaries
  • Prohibited: inputting employee performance data, disciplinary records, or compensation information
  • Requirement: all AI-assisted outputs that inform business decisions must include a human verification step

Customer-facing roles (sales, support, marketing):

  • Approved for: drafting communications, research, content creation, translation
  • Prohibited: inputting customer personal data, transaction histories, or complaint records
  • Requirement: AI-generated customer communications must be reviewed before sending

Technical roles (IT, data, engineering):

  • Approved for: code generation, debugging, documentation, architecture exploration
  • Prohibited: inputting production credentials, API keys, security configurations, or proprietary algorithms
  • Requirement: AI-generated code must pass the same review process as human-written code
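The role-based guidelines above can likewise be encoded as data rather than prose, which makes them auditable and easy to update when a role's prohibitions change. The role names and data-category labels below are illustrative placeholders, not a defined taxonomy.

```python
# Hypothetical encoding of the role-based prohibitions above.
# Labels are illustrative; calibrate them to your own data catalogue.
PROHIBITED_BY_ROLE = {
    "executive":       {"board_materials", "ma_discussions",
                        "undisclosed_financials", "investor_comms"},
    "operations":      {"performance_data", "disciplinary_records",
                        "compensation"},
    "customer_facing": {"customer_pii", "transaction_history",
                        "complaint_records"},
    "technical":       {"credentials", "api_keys", "security_config",
                        "proprietary_algorithms"},
}

def check_ai_input(role: str, data_categories: set) -> list:
    """Return the categories this role may not send to an external AI tool."""
    prohibited = PROHIBITED_BY_ROLE.get(role, set())
    return sorted(data_categories & prohibited)
```

An empty return value means the input passes; anything else names exactly which guideline is being violated, which is more instructive for employees than a blanket refusal.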

Section 4: Model Risk Management

This section applies to companies using AI beyond general productivity tools: those building or fine-tuning models, deploying AI in customer-facing applications, or automating decisions that affect individuals.

What to include:

  • Model inventory: maintain a register of all AI models in use, their purpose, data inputs, and risk classification
  • Bias and fairness testing requirements before deployment
  • Performance monitoring cadence (monthly for high-risk, quarterly for medium-risk)
  • Rollback procedures if a model produces unacceptable outcomes
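A model inventory need not be elaborate to be useful. The sketch below, with assumed field names, shows one minimal shape for the register and a check that flags models overdue for review under the monthly/quarterly cadence stated above.

```python
from dataclasses import dataclass
from datetime import date

# Minimal model-register sketch; field names are illustrative.
@dataclass
class ModelRecord:
    name: str
    purpose: str
    data_inputs: list
    risk: str            # "high" | "medium" | "low"
    last_reviewed: date

# Monitoring cadence from the policy: monthly for high-risk,
# quarterly for medium-risk. Low-risk follows the general review cycle.
MONITORING_MONTHS = {"high": 1, "medium": 3}

def review_overdue(record: ModelRecord, today: date) -> bool:
    """True if the record's risk tier requires a review that hasn't happened."""
    months = MONITORING_MONTHS.get(record.risk)
    if months is None:
        return False
    elapsed = ((today.year - record.last_reviewed.year) * 12
               + (today.month - record.last_reviewed.month))
    return elapsed >= months
```

Even a spreadsheet with these five columns satisfies the inventory requirement; the point is that every deployed model has a named purpose, known inputs, and a review clock.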

Regional consideration: Bank Negara Malaysia (BNM) expects financial institutions to apply model risk management principles to AI systems, including validation, monitoring, and governance (BNM, "Risk Management in Technology Policy Document," 2023). MAS in Singapore has published detailed guidance on AI model risk management that, while focused on financial services, is increasingly referenced as a baseline across industries (MAS, "Model Risk Management Guidance," 2024).

Section 5: Vendor and Third-Party AI Governance

Most mid-market companies use AI primarily through third-party tools (Microsoft Copilot, ChatGPT Enterprise, Salesforce Einstein). This section governs how you evaluate and monitor those vendors.

Vendor evaluation checklist:

  • Where does the vendor process data? (geography matters for PDPA compliance)
  • Does the vendor use customer data to train models? (most enterprise tiers do not, but verify contractually)
  • What data retention and deletion policies does the vendor offer?
  • Does the vendor have relevant certifications (ISO 27001, SOC 2, CSA STAR)?
  • Can the vendor provide a Data Processing Agreement (DPA) that satisfies your jurisdiction's PDPA requirements?
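The checklist above works best as a hard gate: a vendor either clears every item or the gaps are named explicitly. A sketch, with assumed key names mirroring the bullets:

```python
# Vendor-evaluation gate; keys are illustrative shorthand for the
# checklist items above.
REQUIRED_CHECKS = [
    "processing_geography_documented",
    "no_training_on_customer_data_contractual",
    "retention_and_deletion_policy",
    "security_certification",   # e.g. ISO 27001, SOC 2, CSA STAR
    "pdpa_compliant_dpa",
]

def vendor_gaps(answers: dict) -> list:
    """Return unmet checklist items; an empty list means the vendor passes."""
    return [c for c in REQUIRED_CHECKS if not answers.get(c, False)]
```

Keeping the checklist in one structured list means procurement, legal, and security all evaluate against the same criteria, and the audit trail is simply the dictionary of answers per vendor.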

Section 6: Incident Response

AI incidents differ from traditional cybersecurity incidents. They include: AI-generated content containing inaccurate information that is published externally, confidential data inadvertently shared with an AI tool, AI-assisted decisions that produce discriminatory outcomes, and AI-generated code that introduces security vulnerabilities.

Response framework:

  1. Detection: Establish channels for employees to report AI-related concerns without fear of reprimand
  2. Classification: Severity levels tied to data exposure, business impact, and regulatory implications
  3. Containment: Immediate steps (revoke access, remove published content, pause automated decisions)
  4. Notification: PDPA-mandated breach notification timelines apply when personal data is involved. Malaysia: "as soon as practicable." Singapore: within 3 calendar days of assessing the breach as notifiable (PDPC Singapore, "Guide on Managing and Notifying Data Breaches Under the PDPA," 2021).
  5. Review: Root cause analysis and policy update
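The notification step carries the only hard statutory deadline in the framework, and it differs by jurisdiction. A sketch of step 4, encoding the two timelines cited above (Singapore: 3 calendar days from assessing the breach as notifiable; Malaysia: "as soon as practicable", i.e. no fixed date):

```python
from datetime import date, timedelta
from typing import Optional

def notification_deadline(jurisdiction: str, assessed_on: date) -> Optional[date]:
    """Return the PDPA notification deadline, or None where the standard
    is "as soon as practicable" rather than a fixed number of days."""
    if jurisdiction == "singapore":
        # Within 3 calendar days of assessing the breach as notifiable
        # (PDPC Singapore breach notification guide, 2021).
        return assessed_on + timedelta(days=3)
    if jurisdiction == "malaysia":
        return None  # "as soon as practicable" under Malaysia's PDPA
    raise ValueError(f"No notification rule configured for {jurisdiction}")
```

Note the clock starts at assessment, not detection, which is why the detection and classification steps need their own internal time limits.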

Section 7: Review Cadence and Continuous Improvement

An AI policy written in 2026 will be partially outdated by 2027. Build in structured review cycles.

Recommended cadence:

  • Quarterly (Year 1): Review acceptable use guidelines and tool approvals
  • Semi-annually (Year 2+): Full policy review including data classification, vendor governance, and incident response
  • Triggered reviews: Any significant regulatory change, major AI incident, or new tool category adoption

Ownership: Assign a cross-functional AI governance committee, not a single individual. Effective committees include representation from IT/security, legal/compliance, HR, and at least one business unit leader.

Common Mistakes That Undermine AI Policies

Writing at the wrong altitude. "Use AI responsibly and ethically" communicates nothing actionable. "Never input any data into any AI tool" kills adoption. The right altitude is: "Tier 3 and Tier 4 data may only be processed using enterprise AI tools with executed Data Processing Agreements."

Ignoring shadow AI. A 2024 Microsoft Work Trend Index found that 78% of AI users bring their own AI tools to work, and 52% are reluctant to admit it (Microsoft, "2024 Work Trend Index Annual Report," 2024). Your policy must acknowledge that employees are already using AI.

Treating the policy as a one-time document. AI governance is a practice, not a deliverable. Companies that write a policy and file it away find it irrelevant within 6-12 months.

Copying a template without customization. This template provides structure. The data classification tiers, role-specific guidelines, and vendor evaluation criteria need to be calibrated to your company's specific data environment, risk tolerance, and regulatory obligations. A policy customization workshop can compress what typically takes 4-6 weeks of internal work into 2-3 focused sessions.

Frequently Asked Questions

Do we need a separate AI policy or can we add AI sections to existing IT policies? For companies with fewer than 200 employees, adding AI-specific sections to your existing IT acceptable use policy and data handling policy is sufficient. For larger organizations, or those in regulated industries, a standalone AI policy provides clearer accountability, easier updates, and simpler audit trails.

How do we handle employees using personal AI accounts for work? Acknowledge it, do not prohibit it. Your policy should define which data tiers are acceptable for personal AI tools (typically Tier 1 only) and require employees to use enterprise accounts for anything involving internal or confidential data.

What if we operate across multiple ASEAN countries with different PDPA requirements? Apply the strictest applicable standard as your baseline. If you operate in both Malaysia and Singapore, your policy should meet the more stringent requirement for each area.

How does this template work with ISO 27001 or SOC 2 frameworks? This template is designed to complement, not replace, existing information security frameworks. The data classification tiers map to ISO 27001 Annex A controls. The vendor governance section aligns with SOC 2 Type II vendor management requirements.

Our industry regulator has not published AI-specific guidance yet. Should we wait? No. Regulators in Southeast Asia are moving toward AI governance requirements across all sectors. Building governance now is cheaper and less disruptive than retrofitting under regulatory pressure. The Monetary Authority of Singapore's approach, publishing voluntary guidance that later becomes expected practice, is the pattern most ASEAN regulators are following (MAS, "Principles to Promote Fairness, Ethics, Accountability and Transparency in AI Use," 2024).


References

  1. Personal Data Protection Act 2010. PDPC Malaysia (2010).
  2. Model AI Governance Framework, Second Edition. PDPC Singapore (2020).
  3. Advisory Guidelines on Key Concepts in the PDPA. PDPC Singapore (2022).
  4. Guide on Managing and Notifying Data Breaches Under the PDPA. PDPC Singapore (2021).
  5. Risk Management in Technology Policy Document. Bank Negara Malaysia (2023).
  6. Model Risk Management Guidance. Monetary Authority of Singapore (2024).
  7. 2024 Work Trend Index Annual Report. Microsoft (2024).
  8. Principles to Promote Fairness, Ethics, Accountability and Transparency in AI Use. Monetary Authority of Singapore (2024).
