
Generative AI Policy: How to Set Boundaries for ChatGPT and Similar Tools

October 13, 2025 · 12 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance, CTO/CIO, CISO, Head of Operations, Board Member, Consultant, IT Manager, CHRO, Data Science/ML

Learn how to create a practical generative AI policy that sets clear boundaries for ChatGPT, Claude, and similar tools while enabling productive use across your organization.


Key Takeaways

  1. Generative AI policies must balance enabling productivity with managing data and quality risks
  2. Define clear boundaries for what data can and cannot be input into AI tools
  3. Require human review of AI outputs before external use or critical decisions
  4. Address intellectual property concerns for both inputs and AI-generated outputs
  5. Update policies regularly as AI capabilities and organizational needs evolve


Your employees are already using ChatGPT. The question isn't whether to allow generative AI—it's whether you'll govern it proactively or react to incidents after they occur.

Executive Summary

  • Generic AI policies are insufficient. Generative AI tools like ChatGPT, Claude, and Copilot introduce unique risks that traditional AI governance frameworks don't address.
  • Data leakage is the primary risk. Employees routinely paste confidential information into consumer-grade AI tools, often without realizing data may be used for model training.
  • Classification frameworks are essential. Not all data can be used with all tools. Clear tiers prevent both over-restriction and dangerous permissiveness.
  • Output verification matters. AI-generated content can contain hallucinations, outdated information, or subtle errors that erode trust and create liability.
  • Enterprise vs. consumer tiers differ significantly. API access and enterprise agreements offer different privacy protections than free consumer versions.
  • Policies must evolve rapidly. The generative AI landscape changes quarterly. Annual policy reviews are inadequate.
  • Training drives compliance. Policies without education fail. Employees need practical guidance, not just rules.
  • Enforcement requires visibility. You can't enforce what you can't observe. Monitoring capabilities must match policy ambitions.

Why This Matters Now

The generative AI adoption curve has been unprecedented. Within 18 months of ChatGPT's public release, studies indicated that 40-60% of knowledge workers had used generative AI tools for work tasks—often without employer knowledge or approval.

This creates three urgent problems:

1. Shadow AI is already in your organization. Employees don't wait for policy approval. They use tools that make them more productive, regardless of whether those tools are sanctioned.

2. Traditional AI policies miss the point. Policies written for predictive models, recommendation systems, or automation don't address the interactive, conversational nature of generative AI or its unique risk profile.

3. The regulatory window is closing. Jurisdictions across ASEAN are developing AI governance frameworks. Singapore's IMDA, Malaysia's MDEC, and Thailand's DEPA are all moving toward clearer expectations. Organizations without policies will face increasing scrutiny.


Definitions and Scope

Generative AI: Artificial intelligence systems that create new content—text, images, code, audio, or video—based on patterns learned from training data. Examples include large language models (LLMs) like ChatGPT, Claude, and Gemini.

Policy scope should cover:

  • Public consumer tools (ChatGPT free tier, Bing Chat, Bard)
  • Enterprise subscriptions (ChatGPT Enterprise, Microsoft Copilot for 365)
  • API integrations (models embedded in existing software)
  • Locally deployed models (open-source LLMs on company infrastructure)
  • Third-party tools with embedded AI features

Out of scope (for this policy):

  • Non-generative AI systems such as predictive models, recommendation systems, and process automation

These require separate governance frameworks, though coordination is essential.


Step-by-Step Implementation Guide

Step 1: Inventory Current Usage (Week 1-2)

Before writing policy, understand reality. Conduct a rapid survey or use network monitoring to identify:

  • Which generative AI tools are employees accessing
  • What use cases are most common
  • What data categories are being input
  • Which teams are heaviest users

Practical approach: Anonymous surveys often yield more honest responses than direct observation. Ask what tools people find useful, not just what they use.

Step 2: Classify Data for AI Interaction (Week 2-3)

Create a data classification matrix specific to generative AI:

| Data Category | Consumer Tools | Enterprise Tools | API (Approved Vendors) | Local Models |
|---|---|---|---|---|
| Public information | ✅ Allowed | ✅ Allowed | ✅ Allowed | ✅ Allowed |
| Internal (non-sensitive) | ⚠️ Conditional | ✅ Allowed | ✅ Allowed | ✅ Allowed |
| Confidential | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional | ✅ Allowed |
| Highly restricted | ❌ Prohibited | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional |
| Personal data (PDPA) | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional | ⚠️ Conditional |

Conditional means allowed with specific controls, approvals, or use cases. Define these explicitly.
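For teams that automate pre-checks, the matrix above can be expressed as a simple lookup table. A minimal Python sketch; the category and tier keys are illustrative assumptions, and unknown combinations deliberately fail closed:

```python
# Data classification matrix from the table above, as a lookup table.
# Category and tier names are illustrative; adapt to your own taxonomy.
ALLOWED, CONDITIONAL, PROHIBITED = "allowed", "conditional", "prohibited"

MATRIX = {
    "public":            {"consumer": ALLOWED,     "enterprise": ALLOWED,     "api": ALLOWED,     "local": ALLOWED},
    "internal":          {"consumer": CONDITIONAL, "enterprise": ALLOWED,     "api": ALLOWED,     "local": ALLOWED},
    "confidential":      {"consumer": PROHIBITED,  "enterprise": CONDITIONAL, "api": CONDITIONAL, "local": ALLOWED},
    "highly_restricted": {"consumer": PROHIBITED,  "enterprise": PROHIBITED,  "api": CONDITIONAL, "local": CONDITIONAL},
    "personal_data":     {"consumer": PROHIBITED,  "enterprise": CONDITIONAL, "api": CONDITIONAL, "local": CONDITIONAL},
}

def check(data_category: str, tool_tier: str) -> str:
    """Return the policy decision for a data category / tool tier pair.
    Unknown combinations fail closed (prohibited)."""
    return MATRIX.get(data_category, {}).get(tool_tier, PROHIBITED)

print(check("confidential", "consumer"))  # prohibited
print(check("internal", "enterprise"))    # allowed
```

Failing closed means a new tool tier or data category stays blocked until someone explicitly classifies it, which mirrors the "define these explicitly" guidance above.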

Step 3: Define Use Case Tiers (Week 3-4)

Not all uses carry equal risk. Create tiers:

Tier 1 — Freely Permitted:

  • Brainstorming and ideation
  • Drafting public communications
  • Learning and skill development
  • Summarizing public information
  • Code explanation and debugging (non-production)

Tier 2 — Conditional (Requires Judgment):

  • Drafting internal documents
  • Customer communication templates
  • Data analysis (anonymized/aggregated only)
  • Content creation for review before publication

Tier 3 — Prohibited:

  • Processing customer personal data
  • Legal document drafting without counsel review
  • Medical or safety-critical decisions
  • Inputting trade secrets or proprietary algorithms
  • Processing financial transaction data
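The same fail-closed pattern works for the use case tiers. A brief sketch with hypothetical use-case identifiers:

```python
# Use-case tiers from Step 3 encoded as data. Identifiers are hypothetical.
USE_CASE_TIERS = {
    "brainstorming": 1,          # Tier 1: freely permitted
    "draft_public_comms": 1,
    "draft_internal_doc": 2,     # Tier 2: conditional, requires judgment
    "customer_template": 2,
    "process_customer_pii": 3,   # Tier 3: prohibited
    "legal_drafting": 3,
}

def tier_for(use_case: str) -> int:
    """Unknown use cases default to Tier 3 (prohibited) until reviewed."""
    return USE_CASE_TIERS.get(use_case, 3)

print(tier_for("brainstorming"))  # 1
print(tier_for("something_new"))  # 3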

Step 4: Draft the Policy Document (Week 4-5)

Structure your generative AI policy with these sections:


POLICY TEMPLATE SNIPPET: Generative AI Acceptable Use

1. Purpose and Scope This policy establishes boundaries for the use of generative AI tools within [Organization Name], including but not limited to ChatGPT, Claude, Gemini, Microsoft Copilot, and similar technologies.

2. Approved Tools The following generative AI tools are approved for organizational use:

  • [Enterprise Tool 1] — approved for [specific use cases]
  • [Enterprise Tool 2] — approved for [specific use cases]

Unapproved tools may not be used for work purposes without explicit written approval from [Designated Authority].

3. Data Handling Requirements Employees must not input the following into any generative AI tool:

  • Customer personal data
  • Employee personal data beyond their own
  • Confidential business information classified as [Classification Level]
  • [Additional prohibited categories]

4. Output Verification All AI-generated content intended for external communication or formal documentation must be:

  • Reviewed for factual accuracy
  • Verified against authoritative sources where claims are made
  • Approved by [appropriate authority] before distribution

5. Disclosure Requirements AI-generated content must be disclosed when:

  • [Specify disclosure requirements]

6. Incident Reporting Employees must report suspected policy violations or data exposure incidents to [Contact] within [Timeframe].

7. Enforcement Violations of this policy may result in [consequences].


Step 5: Establish Monitoring and Enforcement (Week 5-6)

Policy without visibility is wishful thinking. Consider:

Technical controls:

  • Network-level visibility into AI tool access
  • DLP (Data Loss Prevention) integration where feasible
  • Approved enterprise tools with audit logging
  • Browser extensions for policy reminders

Process controls:

  • Regular sampling and review of AI usage
  • Incident reporting mechanisms
  • Periodic access reviews

Step 6: Train and Communicate (Week 6-8)

See for detailed rollout strategies. Key elements:

  • All-hands awareness sessions
  • Role-specific deep dives for high-risk functions
  • Quick reference cards for daily use
  • Regular reminders as tools evolve

Step 7: Establish Review Cadence (Ongoing)

Generative AI evolves faster than annual policy cycles can accommodate. Establish:

  • Quarterly reviews of tool approvals
  • Semi-annual policy effectiveness assessment
  • Immediate review triggers (new major tool releases, incidents, regulatory changes)

Common Failure Modes

1. Writing policy in isolation. Policies created without input from actual users often miss critical use cases and face resistance. Include representatives from IT, legal, HR, and key business functions.

2. Over-restricting without alternatives. Blanket bans push usage underground. If you prohibit a tool, provide an approved alternative that meets the same need.

3. Ignoring the enterprise vs. consumer distinction. Many organizations ban "ChatGPT" without distinguishing between consumer and enterprise tiers. Enterprise agreements often include [data protection] provisions that address key concerns.

4. Setting and forgetting. A policy written in 2024 may be dangerously outdated by late 2025. Build in regular reviews.

5. Underinvesting in training. Employees can't follow policies they don't understand. Complex rules require education, not just documentation.

6. Treating all use cases equally. Brainstorming is not the same risk as processing customer data. Nuanced policies enable productive use while protecting critical assets.


Generative AI Policy Readiness Checklist

Copy and use this checklist to assess your policy readiness:

GENERATIVE AI POLICY CHECKLIST

Pre-Policy Assessment
[ ] Completed shadow AI usage inventory
[ ] Identified primary use cases by department
[ ] Mapped data classification to AI tool categories
[ ] Engaged stakeholders from legal, IT, HR, and business units

Policy Content
[ ] Defined scope (which tools, which users)
[ ] Specified approved tools with use case boundaries
[ ] Established prohibited data categories
[ ] Created use case tiers (allowed/conditional/prohibited)
[ ] Defined output verification requirements
[ ] Established disclosure requirements
[ ] Specified incident reporting procedures
[ ] Outlined enforcement mechanisms

Implementation
[ ] Created training materials
[ ] Scheduled rollout communications
[ ] Established monitoring capabilities
[ ] Designated policy owner and review schedule
[ ] Set up [incident response] procedures

Ongoing Governance
[ ] Quarterly tool review scheduled
[ ] Semi-annual policy effectiveness review scheduled
[ ] Trigger events for immediate review defined
[ ] Feedback mechanism for employees established

Metrics to Track

Measure policy effectiveness with:

MetricTargetFrequency
Employee awareness (survey)>90% aware of policyQuarterly
Training completion rate>95% within 30 daysMonthly
Policy violation incidentsDecreasing trendMonthly
Shadow AI tool detectionDecreasing trendMonthly
Time to incident resolution<48 hoursPer incident
Policy exception requestsStable or decreasingMonthly

Tooling Suggestions (Vendor-Neutral)

Consider these categories of tools to support policy implementation:

Enterprise AI Platforms:

  • Enterprise versions of major LLMs with data protection agreements
  • Private cloud deployments for sensitive use cases

Monitoring and DLP:

  • Network traffic analysis for AI tool detection
  • Cloud access security brokers (CASB) with AI visibility
  • Endpoint DLP with generative AI awareness

Training and Awareness:

  • Learning management systems for policy training
  • Just-in-time reminder tools (browser extensions)

Governance:

  • Policy management platforms
  • Incident tracking systems
  • Audit and compliance reporting tools

Next Steps

Generative AI policies are foundational but not sufficient. They must integrate with your broader AI governance framework.

Related resources:

  • [What Should an AI Policy Include? Essential Components Explained]
  • [AI Acceptable Use Policy Template: Ready-to-Use for Your Organization]
  • [How to Communicate Your AI Policy: Rollout Strategies That Actually Work]
  • [How to Prevent AI Data Leakage: Technical and Policy Controls]

Disclaimer

This article provides general guidance on generative AI policy development. It does not constitute legal advice. Organizations should consult qualified legal counsel regarding their specific regulatory obligations and risk profile.


Common Questions

At minimum, a generative AI policy should cover five areas: permitted and prohibited use cases (clearly specifying what tasks employees can and cannot use AI tools for), data handling rules (which types of data can be entered into AI tools and which are strictly prohibited), output review requirements (mandatory human review processes before AI-generated content is used externally), intellectual property guidelines (ownership of AI-generated work and copyright considerations), and incident reporting procedures (how to report AI-related errors, biases, or data exposure incidents).

Generative AI policies should be reviewed quarterly and formally updated at least every 6 months given the rapid pace of change in AI capabilities and regulation. Trigger events for immediate policy updates include the adoption of new AI tools, changes in regulatory requirements such as the EU AI Act enforcement milestones, significant AI incidents within or outside the organization, and major capability releases from AI providers that change the risk profile of existing approved tools.

References

  1. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST) (2023). View source
  2. OWASP Top 10 for Large Language Model Applications 2025. OWASP Foundation (2025). View source
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization (2023). View source
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore (2020). View source
  5. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore (2012). View source
  6. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission (2024). View source
  7. OECD Principles on Artificial Intelligence. OECD (2019). View source
Michael Lansdowne Hauge

Managing Director · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Managing Director of Pertama Partners, an AI advisory and training firm helping organizations across Southeast Asia adopt and implement artificial intelligence. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI StrategyAI GovernanceExecutive AI TrainingDigital TransformationASEAN MarketsAI ImplementationAI Readiness AssessmentsResponsible AIPrompt EngineeringAI Literacy Programs

EXPLORE MORE

Other AI Governance & Risk Management Solutions

Related Resources

Key terms:Generative AI

INSIGHTS

Related reading

Talk to Us About AI Governance & Risk Management

We work with organizations across Southeast Asia on ai governance & risk management programs. Let us know what you are working on.