Executive Summary
The gap between AI adoption and AI governance is widening at a pace that should alarm every board. According to McKinsey's 2024 Global Survey on AI, 72% of organizations now use AI in at least one business function, yet the International Association of Privacy Professionals (IAPP) found that only 35% of companies have a formal AI governance policy in place. That disconnect creates a growing zone of unmanaged risk, from data breaches and regulatory penalties to bias-driven lawsuits and reputational damage.
An AI governance policy closes that gap. It establishes the rules, processes, and accountability structures that determine how your organization develops, procures, deploys, and monitors AI systems. This article provides a complete, customizable policy template you can adapt to your industry, size, and AI maturity level. It covers purpose, scope, principles, roles, acceptable use, risk management, vendor oversight, incident response, and compliance. The template aligns with Singapore's Model AI Governance Framework and reflects prevailing international best practices.
A policy that sits in a shared drive unread is no policy at all. Plan for communication, training, and enforcement from the outset, and commit to reviewing the document at least annually or whenever your regulatory landscape, AI strategy, or risk profile changes materially.
Why This Matters Now
The urgency is not theoretical. PwC's 2024 Global Risk Survey found that 60% of business leaders rank AI-related risks among their top concerns, while only a fraction have documented how those risks should be identified, escalated, and resolved. The consequences of that gap play out in predictable ways.
Without a governance policy, employees operate in a gray zone. They do not know what is acceptable when using generative AI tools, what data they can and cannot share with third-party models, or who to contact when something goes wrong. Risks accumulate in silence because no one owns them. When regulators come asking, and they increasingly do, there is no documentation to demonstrate due diligence. When incidents occur, there is no playbook for containment or response. And as departments adopt AI tools independently, the organization ends up with a sprawl of ungoverned systems, each carrying its own unquantified risk.
A well-crafted governance policy is not bureaucratic overhead. It is the operating manual for responsible AI use, protecting the organization while enabling the kind of disciplined adoption that creates sustainable business value.
How to Use This Template
This template is designed to be practical. Begin by reading through every section to understand the full scope of what an enterprise-grade AI governance policy covers. Then customize the bracketed placeholders [like this] with your organization's specific details, names, titles, and regulatory references.
Not every section will apply equally. Smaller organizations may consolidate roles and simplify approval processes. Highly regulated industries, such as financial services or healthcare, will likely need to expand the risk management and compliance sections considerably. Adjust the level of detail to match your context, but resist the temptation to strip out sections entirely. Each one exists because real governance failures have occurred in its absence.
Before publishing, have legal counsel review the document to ensure alignment with your jurisdiction's data protection laws and any sector-specific AI regulations. Secure executive approval. Then communicate the policy broadly and invest in training so that employees understand not just the rules, but the reasoning behind them. Finally, set a calendar reminder for annual review. AI capabilities and regulations are evolving rapidly, and your policy must keep pace.
AI Governance Policy Template
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[ORGANIZATION NAME]
AI GOVERNANCE POLICY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Document Information
--------------------
Version: [1.0]
Effective Date: [Date]
Owner: [Title/Name]
Approved By: [Title/Name]
Review Date: [Annual review date]
Classification: [Internal / Confidential]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. PURPOSE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.1 This policy establishes the governance framework for the
development, procurement, deployment, and use of artificial
intelligence (AI) systems at [Organization Name].
1.2 The objectives of this policy are to:
a) Enable responsible AI adoption that creates business value
b) Manage risks associated with AI systems
c) Ensure compliance with applicable laws and regulations
d) Protect stakeholders from potential AI-related harms
e) Establish clear accountability for AI decisions
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2. SCOPE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.1 This policy applies to:
a) All employees, contractors, and third parties acting on
behalf of [Organization Name]
b) All AI systems developed internally
c) All AI systems procured from third-party vendors
d) All AI features embedded in existing software
e) All use of generative AI tools (including ChatGPT, Claude,
Copilot, and similar services)
2.2 Definitions
"AI System" means any software that uses machine learning,
deep learning, natural language processing, or other
techniques to perform tasks that typically require human
intelligence.
"Generative AI" means AI systems that create new content
including text, images, code, audio, or video.
"AI Risk" means potential negative outcomes resulting from
AI system behavior, including errors, bias, security
vulnerabilities, or unintended consequences.
"High-Risk AI" means AI applications that significantly
affect individual rights or safety, or that have
substantial business impact.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
3. AI GOVERNANCE PRINCIPLES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
3.1 Human-Centered AI
AI systems shall augment human capabilities, not replace
human judgment on consequential decisions. Humans remain
accountable for AI-influenced outcomes.
3.2 Transparency
We shall be clear with employees, customers, and partners
about when and how AI is used. We shall provide explanations
of AI decisions when they significantly affect individuals.
3.3 Fairness
We shall design and monitor AI systems to prevent unfair
bias and discrimination. We shall assess AI outcomes for
disparate impact on different groups.
3.4 Security and Privacy
We shall protect AI systems and the data they use from
unauthorized access. We shall respect individual privacy
and comply with data protection regulations.
3.5 Reliability
We shall ensure AI systems perform as intended and implement
safeguards against harmful outputs. We shall test thoroughly
before deployment and monitor continuously.
3.6 Accountability
We shall maintain clear ownership and accountability for all
AI systems. We shall document AI decisions and maintain audit
trails for compliance and review.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4. ROLES AND RESPONSIBILITIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4.1 Executive Leadership
a) Sets strategic direction for AI adoption
b) Ensures adequate resources for AI governance
c) Reviews AI governance effectiveness annually
d) Approves high-risk AI deployments
4.2 AI Governance Committee
a) Oversees implementation of this policy
b) Reviews and approves AI risk assessments
c) Monitors AI incidents and remediation
d) Recommends policy updates
e) Reports to executive leadership quarterly
4.3 AI Governance Lead / Officer
a) Coordinates day-to-day governance activities
b) Maintains the AI Inventory and risk register
c) Facilitates governance committee meetings
d) Manages AI governance communications and training
e) Serves as primary contact for AI governance questions
4.4 Business Unit Leaders
a) Ensure compliance with this policy in their units
b) Identify and escalate AI risks
c) Designate AI owners for systems in their domain
d) Participate in AI risk assessments
4.5 AI System Owners
a) Accountable for specific AI systems
b) Ensure systems comply with this policy
c) Conduct or coordinate risk assessments
d) Monitor system performance and incidents
e) Maintain documentation
4.6 All Employees
a) Use AI systems in accordance with this policy
b) Report AI-related concerns or incidents
c) Complete required AI training
d) Protect confidential information when using AI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
5. ACCEPTABLE USE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
5.1 Approved AI Systems
Only AI systems that have been approved through the AI
approval process (Section 6) may be used for business
purposes. Approved systems are listed in the AI Inventory.
5.2 Generative AI Guidelines
When using approved generative AI tools:
PERMITTED:
a) Drafting content for human review and editing
b) Research and information gathering (verify accuracy)
c) Code assistance and debugging
d) Brainstorming and ideation
e) Administrative task automation
PROHIBITED:
a) Inputting confidential or sensitive information
b) Inputting personal data without authorization
c) Generating content without human review before use
d) Making consequential decisions based solely on AI output
e) Creating deceptive or misleading content
f) Bypassing security or compliance controls
5.3 Data Protection
a) Do not input confidential information into external AI
b) Do not input personal data without appropriate consent
c) Anonymize data before AI processing when possible
d) Follow data classification guidelines
5.4 Intellectual Property
a) Do not input proprietary code, designs, or trade secrets
b) Review AI-generated content for potential IP issues
c) Retain rights to AI-generated content per vendor terms
d) Attribute AI-generated content as required
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6. AI APPROVAL PROCESS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6.1 All new AI systems require approval before deployment.
6.2 Approval Process
Step 1: Request Submission
Complete the AI System Request Form, including:
a) Business purpose and use case
b) Data requirements
c) Vendor information (if applicable)
d) Preliminary risk assessment
Step 2: Review
The AI Governance Lead reviews the request and conducts:
a) Security assessment
b) Privacy assessment
c) Vendor due diligence (if applicable)
d) Risk classification
Step 3: Approval
a) Low risk: AI Governance Lead approval
b) Medium risk: AI Governance Committee approval
c) High risk: Executive approval required
Step 4: Documentation
a) Add to AI Inventory
b) Document in risk register
c) Assign system owner
6.3 Risk Classification
LOW RISK
a) Internal productivity tools
b) No personal data processing
c) No customer-facing application
d) No significant business impact if the system fails
MEDIUM RISK
a) Customer-facing applications
b) Processing of non-sensitive personal data
c) Business process automation
d) Moderate impact if the system fails
HIGH RISK
a) Decisions affecting individual rights
b) Processing of sensitive personal data
c) Regulated activities
d) Significant financial or reputational impact
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
7. RISK MANAGEMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
7.1 Risk Assessment
All AI systems shall undergo risk assessment:
a) Before initial deployment
b) When significant changes occur
c) Annually for ongoing systems
d) Following incidents
7.2 Risk Register
The AI Governance Lead shall maintain a risk register
documenting:
a) Identified risks for each AI system
b) Risk ratings (likelihood and impact)
c) Mitigation measures
d) Risk owners
e) Status of mitigation actions
7.3 Monitoring
AI systems shall be monitored for:
a) Performance against expected outcomes
b) Bias and fairness metrics
c) Security events
d) User complaints or concerns
e) Regulatory compliance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8. VENDOR MANAGEMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8.1 AI Vendor Due Diligence
Before procuring third-party AI systems:
a) Assess vendor security practices
b) Review data handling and privacy terms
c) Verify compliance certifications
d) Evaluate AI model transparency
e) Confirm support and SLA commitments
8.2 Contractual Requirements
AI vendor contracts shall address:
a) Data protection obligations
b) Security requirements
c) AI model training data restrictions
d) Audit rights
e) Incident notification obligations
f) Liability allocation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
9. INCIDENT MANAGEMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
9.1 AI incidents include:
a) AI system producing harmful or inappropriate outputs
b) Bias or discrimination identified in AI decisions
c) Security breach involving AI systems
d) Data leakage through AI tools
e) Regulatory inquiry related to AI
9.2 Incident Response
a) Report incidents immediately to AI Governance Lead
b) AI Governance Lead assesses severity
c) Escalate to Governance Committee as needed
d) Implement containment measures
e) Conduct root cause analysis
f) Document lessons learned
g) Update controls and processes
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
10. COMPLIANCE AND AUDIT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
10.1 Regulatory Compliance
AI systems shall comply with:
a) Personal Data Protection Act (PDPA) [or applicable law]
b) Sector-specific regulations
c) Applicable AI governance frameworks
d) Contractual obligations
10.2 Internal Audit
AI governance shall be included in internal audit scope
covering:
a) Policy compliance
b) Risk management effectiveness
c) Approval process adherence
d) Documentation completeness
10.3 Documentation Requirements
Maintain documentation for:
a) AI system inventory
b) Risk assessments
c) Approval decisions
d) Monitoring results
e) Incidents and resolutions
f) Training records
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11. TRAINING AND AWARENESS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11.1 Training Requirements
a) All employees: AI Awareness training (annual)
b) AI users: Tool-specific training (as needed)
c) AI system owners: Governance responsibilities
d) Leadership: AI risk briefings (quarterly)
11.2 Communication
a) Policy published on internal portal
b) Updates communicated via [channels]
c) Governance contact available for questions
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
12. POLICY ADMINISTRATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
12.1 Policy Owner
This policy is owned by [Title] who is responsible for
its maintenance and review.
12.2 Review Cycle
This policy shall be reviewed:
a) Annually at minimum
b) Following significant AI incidents
c) When regulations change
d) When AI strategy changes materially
12.3 Exceptions
Exceptions to this policy require:
a) Written request with business justification
b) Risk assessment of exception
c) Approval from [authority level]
d) Time-limited scope with review date
12.4 Non-Compliance
Violations of this policy may result in:
a) Revocation of AI access
b) Disciplinary action per HR policies
c) Contractual consequences for third parties
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
APPENDICES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Appendix A: AI System Request Form
Appendix B: AI Risk Assessment Template
Appendix C: AI Vendor Due Diligence Checklist
Appendix D: AI Incident Report Template
Appendix E: AI System Inventory Template
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DOCUMENT HISTORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | [Date] | [Name] | Initial release |
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
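For teams that build an intake workflow around the template, the risk tiers in Section 6.3 map cleanly onto the approval routing in Section 6.2. The sketch below shows one way to encode that mapping in Python; the field names, tier criteria, and approver labels are illustrative simplifications of the template, not part of it.

```python
# Hypothetical intake triage based on Sections 6.2-6.3 of the template.
# Field names and routing labels are illustrative, not prescribed.
from dataclasses import dataclass

@dataclass
class AISystemRequest:
    customer_facing: bool
    processes_personal_data: bool
    sensitive_personal_data: bool
    affects_individual_rights: bool
    regulated_activity: bool

def classify_risk(req: AISystemRequest) -> str:
    """Return 'high', 'medium', or 'low' per the Section 6.3 criteria."""
    # High-risk triggers take precedence over everything else.
    if (req.affects_individual_rights
            or req.sensitive_personal_data
            or req.regulated_activity):
        return "high"
    # Customer-facing systems or any personal data -> medium.
    if req.customer_facing or req.processes_personal_data:
        return "medium"
    return "low"

# Approval routing from Section 6.2, Step 3.
APPROVER = {
    "low": "AI Governance Lead",
    "medium": "AI Governance Committee",
    "high": "Executive Leadership",
}

request = AISystemRequest(
    customer_facing=True,
    processes_personal_data=True,
    sensitive_personal_data=False,
    affects_individual_rights=False,
    regulated_activity=False,
)
tier = classify_risk(request)
print(tier, APPROVER[tier])  # medium AI Governance Committee
```

Even a rule table this simple pays off: it gives every request a recorded tier and a named approver, which is exactly the documentation trail Sections 6.2 (Step 4) and 10.3 ask for.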
Implementation Tips
Don't Just Publish. Communicate.
The most common failure mode for governance policies is not poor drafting. It is poor rollout. Gartner's 2024 survey of IT governance practices found that fewer than 40% of employees at organizations with formal AI policies could accurately describe those policies when asked. The document existed; the understanding did not.
A meaningful launch requires an all-hands announcement that explains not just what the policy says, but why it matters. Managers should brief their teams directly, translating the policy's requirements into the specific actions relevant to each function. An FAQ document should address the most common edge cases, and a clearly designated point of contact should be available from day one so that employees with questions get answers, not silence.
Start with Training, Not Enforcement
People cannot follow rules they do not understand. Before any enforcement mechanisms take effect, invest in training that covers the policy's core requirements, the reasoning behind each section, and practical scenarios employees are likely to encounter. Samsung's compliance team learned this lesson in 2023 when engineers inadvertently leaked proprietary source code through ChatGPT, not out of malice, but because no training had clarified the boundaries of acceptable use. Samsung subsequently banned generative AI tools entirely before developing internal guidelines. Training first would have been far less disruptive than a blanket ban followed by a policy scramble.
Make Compliance Easy
Behavioral science consistently shows that when the compliant path is also the easiest path, compliance rates rise dramatically. Provide templates for every form referenced in the policy, create checklists for the approval process, and build clear, step-by-step workflows so that employees requesting a new AI tool know exactly what to submit and to whom. If compliance requires heroic effort, people will find workarounds, and those workarounds will be invisible to your governance function until something goes wrong.
Enforce Consistently
Selective enforcement is worse than no enforcement at all. When a policy applies to junior staff but not to senior leaders, or when one business unit faces scrutiny while another operates freely, the policy loses credibility across the entire organization. Deloitte's 2024 report on enterprise risk culture found that organizations with inconsistent policy enforcement were 2.7 times more likely to experience material compliance failures than those that applied rules uniformly. Apply the policy at every level, without exception.
Iterate Based on Feedback
No governance policy survives first contact with reality unchanged. Plan to collect structured feedback during the first 90 days after launch, identify where the policy creates unnecessary friction without improving risk management, and adjust accordingly. The goal is a living document that balances rigor with practicality, not a static artifact that becomes increasingly disconnected from how AI is actually being used in your organization.
Next Steps
Download the template, customize it for your organization, and begin the approval process. The template is the starting point, not the finish line. Implementation, training, and consistent enforcement are what transform a document into a functioning governance capability.
Book an AI Readiness Audit with Pertama Partners for help customizing governance frameworks to your specific needs.
Related Reading
- AI Governance 101: What It Is and Why It Matters
- How to Set Up an AI Governance Committee
- AI Governance for the Mid-Market: A No-Bureaucracy Approach
Common Questions
What should an AI governance policy include?
A comprehensive AI governance policy should include principles, scope definitions, risk classification frameworks, approval workflows, acceptable use guidelines, roles and responsibilities, monitoring mechanisms, and compliance reporting requirements.
How do we create a policy for our organization?
Start with the template framework and adapt it to your organization's size, industry, risk appetite, and existing governance structures. Ensure alignment with your enterprise risk management framework and regulatory requirements.
Who should own the policy?
Typically, a Chief AI Officer, Chief Data Officer, or Chief Risk Officer owns the policy, with input from legal, compliance, IT, and business units. The board should provide oversight and approval.
References
- AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (2023).
- ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023).
- EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024).
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020).
- OECD Principles on Artificial Intelligence. OECD (2019).
- Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012).
- ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat (2024).

