
Generative AI Policy: How to Set Boundaries for ChatGPT and Similar Tools

October 13, 2025 · 12 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: Legal/Compliance · CTO/CIO · CISO · Head of Operations · Board Member · Consultant · IT Manager · CHRO · Data Science/ML

Learn how to create a practical generative AI policy that sets clear boundaries for ChatGPT, Claude, and similar tools while enabling productive use across your organization.


Key Takeaways

  1. Generative AI policies must balance enabling productivity with managing data and quality risks
  2. Define clear boundaries for what data can and cannot be input into AI tools
  3. Require human review of AI outputs before external use or critical decisions
  4. Address intellectual property concerns for both inputs and AI-generated outputs
  5. Update policies regularly as AI capabilities and organizational needs evolve


Your employees are already using ChatGPT. The question isn't whether to allow generative AI; it's whether you'll govern it proactively or react to incidents after they occur.

Executive Summary

Generic AI policies are insufficient. Generative AI tools like ChatGPT, Claude, and Copilot introduce unique risks that traditional [AI governance] frameworks don't address. Data leakage stands as the primary risk, with employees routinely pasting confidential information into consumer-grade AI tools, often without realizing that data may be used for model training.

Effective governance requires classification frameworks that distinguish which data can be used with which tools, preventing both over-restriction and dangerous permissiveness. Output verification is equally critical; AI-generated content can contain hallucinations, outdated information, or subtle errors that erode trust and create liability. Organizations must also recognize that enterprise and consumer tiers differ significantly, as API access and enterprise agreements offer different privacy protections than free consumer versions.

Policies must evolve rapidly because the generative AI landscape changes quarterly, making annual policy reviews inadequate. Training drives compliance because policies without education fail; employees need practical guidance, not just rules. Finally, enforcement requires visibility. You cannot enforce what you cannot observe, so monitoring capabilities must match policy ambitions.


Why This Matters Now

The generative AI adoption curve has been unprecedented. Within 18 months of ChatGPT's public release, studies indicated that 40-60% of knowledge workers had used generative AI tools for work tasks, often without employer knowledge or approval.

This creates three urgent problems.

Shadow AI is already in your organization. Employees don't wait for policy approval. They use tools that make them more productive, regardless of whether those tools are sanctioned. Meanwhile, traditional AI policies miss the point entirely. Policies written for predictive models, recommendation systems, or automation don't address the interactive, conversational nature of generative AI or its unique risk profile. Adding urgency, the regulatory window is closing. Jurisdictions across ASEAN are developing AI governance frameworks. Singapore's IMDA, Malaysia's MDEC, and Thailand's DEPA are all moving toward clearer expectations. Organizations without policies will face increasing scrutiny.


Definitions and Scope

Generative AI refers to artificial intelligence systems that create new content (text, images, code, audio, or video) based on patterns learned from training data. Examples include large language models (LLMs) like ChatGPT, Claude, and Gemini.

Policy scope should cover public consumer tools such as the free ChatGPT tier, Microsoft Copilot (formerly Bing Chat), and Google Gemini (formerly Bard), along with enterprise subscriptions like ChatGPT Enterprise and Microsoft 365 Copilot. API integrations where models are embedded in existing software fall within scope, as do locally deployed open-source LLMs on company infrastructure and third-party tools with embedded AI features.

Traditional machine learning models in production systems, analytics and business intelligence tools, and robotic process automation (RPA) remain out of scope for this policy. These require separate governance frameworks, though coordination is essential.


Step-by-Step Implementation Guide

Step 1: Inventory Current Usage (Week 1-2)

Before writing policy, understand reality. Conduct a rapid survey or use network monitoring to identify which generative AI tools employees are accessing, what use cases are most common, what data categories are being input, and which teams are the heaviest users.

Practical approach: Anonymous surveys often yield more honest responses than direct observation. Ask what tools people find useful, not just what they use.
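Network monitoring for the inventory step can start very simply. The sketch below, under the assumption of a hypothetical `user,domain` proxy-log export and an illustrative domain list (adapt both to your actual proxy schema and the tools you care about), counts hits per generative AI tool:

```python
# Sketch: flag generative AI tool access in web proxy logs.
# The domain list and "user,domain" log format are illustrative
# assumptions -- adapt them to your proxy's real export schema.
from collections import Counter

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def summarize_ai_usage(log_lines):
    """Count hits per AI tool from 'user,domain' proxy log lines."""
    usage = Counter()
    for line in log_lines:
        _user, _, domain = line.partition(",")
        tool = AI_TOOL_DOMAINS.get(domain.strip())
        if tool:
            usage[tool] += 1
    return dict(usage)

logs = [
    "alice,chat.openai.com",
    "bob,claude.ai",
    "alice,chat.openai.com",
    "carol,intranet.example.com",
]
print(summarize_ai_usage(logs))  # {'ChatGPT': 2, 'Claude': 1}
```

A per-user breakdown follows the same pattern; even this coarse signal is enough to identify the heaviest-using teams before writing policy.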

Step 2: Classify Data for AI Interaction (Week 2-3)

Create a data classification matrix specific to generative AI:

| Data Category | Consumer Tools | Enterprise Tools | API (Approved Vendors) | Local Models |
| --- | --- | --- | --- | --- |
| Public information | Allowed | Allowed | Allowed | Allowed |
| Internal (non-sensitive) | Conditional | Allowed | Allowed | Allowed |
| Confidential | Prohibited | Conditional | Conditional | Allowed |
| Highly restricted | Prohibited | Prohibited | Conditional | Conditional |
| Personal data (PDPA) | Prohibited | Conditional | Conditional | Conditional |

Conditional means allowed with specific controls, approvals, or use cases. Define these explicitly.
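A matrix like this is easy to encode as data, which lets approval workflows or internal tooling answer "can I use this data with this tool?" consistently. A minimal sketch, with illustrative category and tier names mirroring the table above and a default-deny fallback:

```python
# Sketch: the Step 2 classification matrix as a lookup table.
# Category/tier names are illustrative; mirror your own matrix.
MATRIX = {
    "public":            {"consumer": "allowed",     "enterprise": "allowed",     "api": "allowed",     "local": "allowed"},
    "internal":          {"consumer": "conditional", "enterprise": "allowed",     "api": "allowed",     "local": "allowed"},
    "confidential":      {"consumer": "prohibited",  "enterprise": "conditional", "api": "conditional", "local": "allowed"},
    "highly_restricted": {"consumer": "prohibited",  "enterprise": "prohibited",  "api": "conditional", "local": "conditional"},
    "personal_data":     {"consumer": "prohibited",  "enterprise": "conditional", "api": "conditional", "local": "conditional"},
}

def check_usage(data_category: str, tool_tier: str) -> str:
    """Return 'allowed', 'conditional', or 'prohibited'.

    Unknown categories or tiers default to 'prohibited' (deny by default).
    """
    return MATRIX.get(data_category, {}).get(tool_tier, "prohibited")

print(check_usage("confidential", "consumer"))  # prohibited
print(check_usage("internal", "enterprise"))    # allowed
```

The default-deny fallback matters: anything not explicitly classified should be treated as prohibited until someone classifies it.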

Step 3: Define Use Case Tiers (Week 3-4)

Not all uses carry equal risk. Create tiers that reflect this reality.

Tier 1 (Freely Permitted) covers low-risk activities such as brainstorming and ideation, drafting public communications, learning and skill development, summarizing public information, and code explanation or debugging for non-production purposes.

Tier 2 (Conditional, Requires Judgment) includes activities that carry moderate risk and demand careful handling. These include drafting internal documents, creating customer communication templates, performing data analysis on anonymized or aggregated data only, and creating content that will be reviewed before publication.

Tier 3 (Prohibited) encompasses high-risk activities that must not involve generative AI tools. Processing customer personal data, drafting legal documents without counsel review, making medical or safety-critical decisions, inputting trade secrets or proprietary algorithms, and processing financial transaction data all fall into this category.

Step 4: Draft the Policy Document (Week 4-5)

Structure your generative AI policy with these sections:


POLICY TEMPLATE SNIPPET: Generative AI Acceptable Use

1. Purpose and Scope
This policy establishes boundaries for the use of generative AI tools within [Organization Name], including but not limited to ChatGPT, Claude, Gemini, Microsoft Copilot, and similar technologies.

2. Approved Tools
The following generative AI tools are approved for organizational use. Each tool should be listed alongside its approved use cases, such as [Enterprise Tool 1] approved for [specific use cases] and [Enterprise Tool 2] approved for [specific use cases]. Unapproved tools may not be used for work purposes without explicit written approval from [Designated Authority].

3. Data Handling Requirements
Employees must not input the following into any generative AI tool: customer personal data, employee personal data beyond their own, confidential business information classified at [Classification Level], and any additional prohibited categories specific to your organization.

4. Output Verification
All AI-generated content intended for external communication or formal documentation must be reviewed for factual accuracy, verified against authoritative sources where claims are made, and approved by [appropriate authority] before distribution.

5. Disclosure Requirements
AI-generated content must be disclosed when [specify disclosure requirements relevant to your organization and jurisdiction].

6. Incident Reporting
Employees must report suspected policy violations or data exposure incidents to [Contact] within [Timeframe].

7. Enforcement
Violations of this policy may result in [consequences].


Step 5: Establish Monitoring and Enforcement (Week 5-6)

Policy without visibility is wishful thinking.

On the technical side, organizations should implement network-level visibility into AI tool access, integrate DLP (Data Loss Prevention) where feasible, deploy approved enterprise tools with audit logging, and consider browser extensions for policy reminders.

Process controls are equally important. These include regular sampling and review of AI usage, incident reporting mechanisms, and periodic access reviews to ensure compliance aligns with policy intent.
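One lightweight technical control worth prototyping is a pre-submission scan that flags obviously sensitive patterns before a prompt leaves the organization. The sketch below is a minimal illustration, not a substitute for real DLP; the patterns are illustrative placeholders and will both miss things and false-positive:

```python
# Sketch: minimal pre-submission check that flags obvious sensitive
# patterns before text reaches an AI tool. Real DLP tooling is far
# more robust; these regexes are illustrative placeholders only.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "nric_sg":       re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC shape
}

def scan_prompt(text: str):
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

findings = scan_prompt("Contact jane.doe@example.com re: invoice")
print(findings)  # ['email_address']
```

A hit can trigger a warning, a block, or a logged exception request, depending on the data category involved; the point is that "prohibited data categories" in the policy map to something observable in practice.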

Step 6: Train and Communicate (Week 6-8)

See the related rollout strategies resource for detailed guidance. Key elements include all-hands awareness sessions to establish baseline understanding, role-specific deep dives for high-risk functions, quick reference cards for daily use, and regular reminders as tools evolve.

Step 7: Establish Review Cadence (Ongoing)

Generative AI evolves faster than annual policy cycles can accommodate. Organizations should establish quarterly reviews of tool approvals, semi-annual policy effectiveness assessments, and immediate review triggers for new major tool releases, incidents, or regulatory changes.
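The cadence above is also easy to express as data, so a scheduler or ticketing job can open review tasks automatically instead of relying on someone remembering. A sketch, with illustrative review names, intervals, and trigger events:

```python
# Sketch: encode the review cadence as data so a scheduled job can
# open review tasks automatically. Names, intervals, and trigger
# events below are illustrative examples.
from datetime import date, timedelta

REVIEW_CADENCE = {
    "tool_approvals": timedelta(days=91),         # roughly quarterly
    "policy_effectiveness": timedelta(days=182),  # roughly semi-annual
}
IMMEDIATE_TRIGGERS = {"major_tool_release", "incident", "regulatory_change"}

def review_due(kind, last_review, today, event=None):
    """A review is due if its interval has elapsed or a trigger event fired."""
    if event in IMMEDIATE_TRIGGERS:
        return True
    return today - last_review >= REVIEW_CADENCE[kind]

print(review_due("tool_approvals", date(2025, 1, 1), date(2025, 4, 15)))                    # True (interval elapsed)
print(review_due("tool_approvals", date(2025, 1, 1), date(2025, 2, 1), event="incident"))   # True (trigger fired)
```

Wiring this into whatever ticketing system you already use is usually more effective than a standalone calendar reminder.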


Common Failure Modes

Writing policy in isolation. Policies created without input from actual users often miss critical use cases and face resistance. Include representatives from IT, legal, HR, and key business functions.

Over-restricting without alternatives. Blanket bans push usage underground. If you prohibit a tool, provide an approved alternative that meets the same need.

Ignoring the enterprise vs. consumer distinction. Many organizations ban "ChatGPT" without distinguishing between consumer and enterprise tiers. Enterprise agreements often include [data protection] provisions that address key concerns.

Setting and forgetting. A policy written in 2024 may be dangerously outdated by late 2025. Build in regular reviews.

Underinvesting in training. Employees can't follow policies they don't understand. Complex rules require education, not just documentation.

Treating all use cases equally. Brainstorming is not the same risk as processing customer data. Nuanced policies enable productive use while protecting critical assets.


Generative AI Policy Readiness Checklist

Copy and use this checklist to assess your policy readiness:

GENERATIVE AI POLICY CHECKLIST

Pre-Policy Assessment
[ ] Completed shadow AI usage inventory
[ ] Identified primary use cases by department
[ ] Mapped data classification to AI tool categories
[ ] Engaged stakeholders from legal, IT, HR, and business units

Policy Content
[ ] Defined scope (which tools, which users)
[ ] Specified approved tools with use case boundaries
[ ] Established prohibited data categories
[ ] Created use case tiers (allowed/conditional/prohibited)
[ ] Defined output verification requirements
[ ] Established disclosure requirements
[ ] Specified incident reporting procedures
[ ] Outlined enforcement mechanisms

Implementation
[ ] Created training materials
[ ] Scheduled rollout communications
[ ] Established monitoring capabilities
[ ] Designated policy owner and review schedule
[ ] Set up [incident response] procedures

Ongoing Governance
[ ] Quarterly tool review scheduled
[ ] Semi-annual policy effectiveness review scheduled
[ ] Trigger events for immediate review defined
[ ] Feedback mechanism for employees established

Metrics to Track

Measure policy effectiveness with:

| Metric | Target | Frequency |
| --- | --- | --- |
| Employee awareness (survey) | >90% aware of policy | Quarterly |
| Training completion rate | >95% within 30 days | Monthly |
| Policy violation incidents | Decreasing trend | Monthly |
| Shadow AI tool detection | Decreasing trend | Monthly |
| Time to incident resolution | <48 hours | Per incident |
| Policy exception requests | Stable or decreasing | Monthly |
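Threshold-style targets like these can be checked mechanically in whatever reporting pipeline you already run. A trivial sketch, with illustrative metric names and targets taken from the table above:

```python
# Sketch: evaluate metrics against thresholds. Metric names and
# target values are illustrative, mirroring the table above.
TARGETS = {
    "employee_awareness": 0.90,    # >90% aware of policy
    "training_completion": 0.95,   # >95% within 30 days
}

def metric_status(name: str, value: float) -> str:
    """Compare a measured rate against its target threshold."""
    return "on_track" if value >= TARGETS[name] else "needs_attention"

print(metric_status("employee_awareness", 0.92))   # on_track
print(metric_status("training_completion", 0.88))  # needs_attention
```

Trend-based metrics (violation incidents, shadow AI detections) need a comparison against prior periods rather than a fixed threshold, but the same pattern applies.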

Tooling Suggestions (Vendor-Neutral)

Consider these categories of tools to support policy implementation.

For enterprise AI platforms, look at enterprise versions of major LLMs with data protection agreements and private cloud deployments for sensitive use cases.

For monitoring and DLP, consider network traffic analysis for AI tool detection, cloud access security brokers (CASB) with AI visibility, and endpoint DLP with generative AI awareness.

Training and awareness efforts benefit from learning management systems for policy training and just-in-time reminder tools such as browser extensions.

On the governance side, organizations should evaluate policy management platforms, incident tracking systems, and audit and compliance reporting tools.


Next Steps

Generative AI policies are foundational but not sufficient. They must integrate with your broader AI governance framework.

Related resources:

  • [What Should an AI Policy Include? Essential Components Explained]
  • [AI Acceptable Use Policy Template: Ready-to-Use for Your Organization]
  • [How to Communicate Your AI Policy: Rollout Strategies That Actually Work]
  • [How to Prevent AI Data Leakage: Technical and Policy Controls]

Disclaimer

This article provides general guidance on generative AI policy development. It does not constitute legal advice. Organizations should consult qualified legal counsel regarding their specific regulatory obligations and risk profile.


Common Questions

What should a generative AI policy include?

At minimum, a generative AI policy should cover five areas: permitted and prohibited use cases (clearly specifying what tasks employees can and cannot use AI tools for), data handling rules (which types of data can be entered into AI tools and which are strictly prohibited), output review requirements (mandatory human review processes before AI-generated content is used externally), intellectual property guidelines (ownership of AI-generated work and copyright considerations), and incident reporting procedures (how to report AI-related errors, biases, or data exposure incidents).

How often should a generative AI policy be reviewed and updated?

Generative AI policies should be reviewed quarterly and formally updated at least every 6 months given the rapid pace of change in AI capabilities and regulation. Trigger events for immediate policy updates include the adoption of new AI tools, changes in regulatory requirements such as the EU AI Act enforcement milestones, significant AI incidents within or outside the organization, and major capability releases from AI providers that change the risk profile of existing approved tools.

References

  1. AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
  2. OWASP Top 10 for Large Language Model Applications 2025. OWASP Foundation, 2025.
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization, 2023.
  4. Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
  5. Personal Data Protection Act 2012. Personal Data Protection Commission Singapore, 2012.
  6. EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission, 2024.
  7. OECD Principles on Artificial Intelligence. OECD, 2019.
Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

