What is AI Policy?

AI Policy is the formal set of organisational rules, guidelines, and procedures that govern how artificial intelligence is researched, developed, procured, deployed, and monitored within an organisation. It provides clear boundaries and expectations for AI use and serves as the operational backbone of AI governance.

This policy covers the entire AI lifecycle, from evaluating whether to adopt AI for a particular use case, through development and procurement, to deployment, monitoring, and eventual retirement.

For business leaders, an AI policy is the practical document that translates your organisation's AI governance principles into actionable rules. While governance provides the strategic framework, policy provides the specific instructions that your teams follow day to day.

Why Your Organisation Needs an AI Policy

Without a clear AI policy, AI adoption tends to happen in an uncoordinated fashion. Individual teams adopt AI tools based on their own judgement, without consistent standards for data handling, security, fairness, or vendor evaluation. This creates several problems:

  • Inconsistent risk management: Different teams may apply different standards, leaving some AI applications inadequately governed.
  • Regulatory exposure: Without clear policies, it is difficult to demonstrate compliance with data protection laws and AI governance requirements across your ASEAN operating markets.
  • Shadow AI: Employees may use AI tools, particularly generative AI services, without organisational approval or awareness, creating data security and intellectual property risks.
  • Wasted resources: Without policy-level guidance on AI priorities and standards, teams may invest in tools that do not align with organisational strategy or that duplicate existing capabilities.

Core Components of an AI Policy

Acceptable Use Guidelines

Define what AI tools and applications are approved for use within the organisation, what restrictions apply, and what activities are prohibited. This is especially important in the era of generative AI, where employees may use tools like chatbots, image generators, or code assistants without realising the data security implications.

Data Handling Requirements

Specify how data used in AI systems must be collected, stored, processed, and disposed of. This includes consent requirements, data quality standards, retention periods, and compliance with data protection regulations across your operating markets.

Procurement and Vendor Standards

Establish criteria for evaluating and selecting AI vendors, including requirements for model transparency, data handling practices, security standards, and liability provisions. Define the review and approval process for bringing new AI tools into the organisation.

Development Standards

If your organisation builds AI systems internally, specify the standards for model development, including data documentation, bias testing, security review, and approval processes before deployment. Reference established frameworks such as Singapore's AI Verify where applicable.

Risk Classification

Define categories of AI risk and the governance requirements associated with each category. High-risk applications, such as those that make decisions affecting individuals' access to credit, employment, or services, should require more rigorous review and oversight than lower-risk applications like internal analytics.
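
As an illustration only, a risk classification like the one described above can be captured in a simple machine-readable form so that governance tooling can look up the controls a use case requires. The tier names, controls, and example use cases below are hypothetical, not a prescribed standard:

```python
# Hypothetical risk tiers mapped to the governance controls each requires.
# Tier names, controls, and example use cases are illustrative only.
RISK_TIERS = {
    "high": {
        "examples": ["credit scoring", "hiring screening"],
        "required_controls": [
            "bias testing",
            "human review of decisions",
            "senior sign-off before deployment",
        ],
    },
    "medium": {
        "examples": ["customer-facing chatbot"],
        "required_controls": ["security review", "periodic monitoring"],
    },
    "low": {
        "examples": ["internal analytics dashboard"],
        "required_controls": ["standard IT approval"],
    },
}

def controls_for(use_case: str) -> list[str]:
    """Return the controls required for a named use case.

    Unclassified use cases default to the high-risk tier, so that a
    missing classification never results in missing oversight.
    """
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["required_controls"]
    return RISK_TIERS["high"]["required_controls"]

print(controls_for("internal analytics dashboard"))  # ['standard IT approval']
```

Note the deliberate default: an application that has not yet been classified is treated as high-risk until reviewed, which mirrors the principle that oversight should be proportional to risk.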

Roles and Responsibilities

Clearly define who is responsible for AI governance within the organisation. This includes identifying who approves new AI deployments, who monitors ongoing operations, who handles incidents, and who is accountable for policy compliance.

Developing an AI Policy for Southeast Asia

AI policy development must account for the specific regulatory, cultural, and business environments of the markets you operate in.

Regulatory alignment: Your policy should reference and comply with relevant regulations across your ASEAN markets. This includes Singapore's PDPA and Model AI Governance Framework, Indonesia's Personal Data Protection Law (PDP Law), Thailand's PDPA, and any sector-specific regulations that apply to your industry.

Multi-market consistency: If you operate across multiple ASEAN countries, design your policy to meet the strictest applicable requirements while allowing for market-specific adaptations where necessary. This avoids the complexity of maintaining entirely separate policies for each market.

Language accessibility: Ensure your AI policy is available in the languages your employees use. A policy that exists only in English is ineffective for teams in Indonesia, Thailand, or Vietnam.

Cultural sensitivity: AI policy should reflect local norms around data privacy, decision-making authority, and communication. What constitutes appropriate AI use may vary across markets, and your policy should acknowledge these differences.

Implementation and Maintenance

  1. Start with what matters most: You do not need a comprehensive AI policy on day one. Start with acceptable use guidelines for generative AI, data handling requirements, and vendor evaluation criteria. Expand from there.
  2. Involve stakeholders: Develop your policy with input from legal, compliance, IT, data teams, and business units. A policy written in isolation will miss important perspectives.
  3. Communicate broadly: Publish your policy prominently, conduct training sessions, and ensure every employee understands the key requirements. A policy that people do not know about is a policy that does not work.
  4. Enforce consistently: Apply your AI policy consistently across the organisation. Selective enforcement undermines the policy's credibility and creates governance gaps.
  5. Review regularly: Update your policy at least semi-annually to account for regulatory changes, new AI capabilities, and lessons learned from your organisation's AI operations.
  6. Measure compliance: Establish mechanisms to track whether teams across the organisation are adhering to your AI policy. Regular compliance assessments help identify gaps before they become incidents and demonstrate to regulators that your policy is enforced, not just documented.

Why It Matters for Business

AI Policy is the operational foundation of AI governance. Without it, governance principles remain aspirational statements rather than enforceable standards. For business leaders, an AI policy provides the clarity that teams need to adopt AI confidently and responsibly.

The business case for AI policy is strongest in organisations that are scaling their AI adoption. As the number of AI systems, tools, and use cases grows, the risk of uncoordinated adoption increases. An AI policy provides the consistent framework that prevents individual teams from creating governance gaps, security vulnerabilities, or compliance exposures.

For organisations operating in Southeast Asia, a well-crafted AI policy also demonstrates maturity to regulators, customers, and partners. As AI governance expectations tighten across ASEAN, the ability to point to a comprehensive, enforced AI policy is increasingly important for maintaining regulatory goodwill and winning business in regulated industries such as financial services, healthcare, and telecommunications.

Key Considerations

  • Prioritise acceptable use guidelines for generative AI tools as an immediate first step, given the rapid adoption of these tools across organisations.
  • Align your AI policy with data protection regulations across all ASEAN markets you operate in to ensure consistent compliance.
  • Include clear procurement and vendor evaluation criteria to prevent unvetted AI tools from entering your technology stack.
  • Define risk categories for AI applications and apply governance controls proportional to the risk level.
  • Ensure your AI policy is accessible in the languages your employees use across your operating markets.
  • Review and update your AI policy at least semi-annually to keep pace with regulatory changes and evolving AI capabilities.

Common Questions

How is an AI policy different from an AI strategy?

An AI strategy defines your organisation's vision for how AI will create business value, including priorities, investment areas, and competitive positioning. An AI policy defines the rules and guidelines for how AI is used within the organisation. Strategy answers "what do we want to achieve with AI?" while policy answers "what rules must we follow when using AI?" Both are essential: strategy without policy leads to ungoverned AI adoption, while policy without strategy leads to AI use that lacks business direction.

Should our AI policy cover employees using free AI tools like ChatGPT?

Absolutely. Employee use of free generative AI tools is one of the highest-priority areas for AI policy. Without clear guidelines, employees may share confidential business information, customer data, or intellectual property with external AI services. Your policy should specify which AI tools are approved, what types of data can and cannot be shared with them, and what review processes apply. Many organisations have experienced data leaks through casual employee use of AI chatbots, making this a critical policy area.

How often should we review our AI policy?

At minimum, review your AI policy semi-annually. However, trigger-based reviews should also occur when new regulations are introduced in your operating markets, when your organisation adopts new AI technologies, when a significant AI incident occurs within or outside your organisation, or when industry best practices evolve materially. The AI landscape is changing rapidly, and policies that are not regularly updated quickly become outdated and ineffective.


Related Terms

AI Governance

AI Governance is the set of policies, frameworks, and organisational structures that guide how artificial intelligence is developed, deployed, and monitored within an organisation. It ensures AI systems operate responsibly, comply with regulations, and align with business values and societal expectations.

Generative AI

Generative AI is a category of artificial intelligence that creates new content such as text, images, code, and audio by learning patterns from large datasets. It enables businesses to automate creative and analytical tasks that previously required significant human effort and expertise.

AI Governance Framework

An AI Governance Framework is a structured set of policies, processes, roles, and accountability mechanisms that an organisation establishes to ensure its artificial intelligence systems are developed, deployed, and managed responsibly, ethically, and in compliance with applicable regulations.

Data Privacy

Data Privacy is the practice of handling personal data in a way that respects individuals' rights to control how their information is collected, used, stored, shared, and deleted. It encompasses the legal, technical, and organisational measures that organisations implement to protect personal data and comply with data protection regulations.

Artificial Intelligence

Artificial Intelligence is the broad field of computer science focused on building systems capable of performing tasks that typically require human intelligence, such as understanding language, recognising patterns, making decisions, and learning from experience to improve over time.

Need help implementing AI Policy?

Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how AI policy fits into your AI roadmap.