
Generative AI Policy: How to Set Boundaries for ChatGPT and Similar Tools

October 13, 2025 · 12 min read · Michael Lansdowne Hauge
For: IT Directors, HR Leaders, Compliance Officers, Operations Leaders

Learn how to create a practical generative AI policy that sets clear boundaries for ChatGPT, Claude, and similar tools while enabling productive use across your organization.


Key Takeaways

  1. Generative AI policies must balance enabling productivity with managing data and quality risks
  2. Define clear boundaries for what data can and cannot be input into AI tools
  3. Require human review of AI outputs before external use or critical decisions
  4. Address intellectual property concerns for both inputs and AI-generated outputs
  5. Update policies regularly as AI capabilities and organizational needs evolve

Your employees are already using ChatGPT. The question isn't whether to allow generative AI—it's whether you'll govern it proactively or react to incidents after they occur.

Executive Summary

  • Generic AI policies are insufficient. Generative AI tools like ChatGPT, Claude, and Copilot introduce unique risks that traditional AI governance frameworks don't address.
  • Data leakage is the primary risk. Employees routinely paste confidential information into consumer-grade AI tools, often without realizing data may be used for model training.
  • Classification frameworks are essential. Not all data can be used with all tools. Clear tiers prevent both over-restriction and dangerous permissiveness.
  • Output verification matters. AI-generated content can contain hallucinations, outdated information, or subtle errors that erode trust and create liability.
  • Enterprise vs. consumer tiers differ significantly. API access and enterprise agreements offer different privacy protections than free consumer versions.
  • Policies must evolve rapidly. The generative AI landscape changes quarterly. Annual policy reviews are inadequate.
  • Training drives compliance. Policies without education fail. Employees need practical guidance, not just rules.
  • Enforcement requires visibility. You can't enforce what you can't observe. Monitoring capabilities must match policy ambitions.

Why This Matters Now

The generative AI adoption curve has been unprecedented. Within 18 months of ChatGPT's public release, studies indicated that 40-60% of knowledge workers had used generative AI tools for work tasks—often without employer knowledge or approval.

This creates three urgent problems:

1. Shadow AI is already in your organization. Employees don't wait for policy approval. They use tools that make them more productive, regardless of whether those tools are sanctioned.

2. Traditional AI policies miss the point. Policies written for predictive models, recommendation systems, or automation don't address the interactive, conversational nature of generative AI or its unique risk profile.

3. The regulatory window is closing. Jurisdictions across ASEAN are developing AI governance frameworks. Singapore's IMDA, Malaysia's MDEC, and Thailand's DEPA are all moving toward clearer expectations. Organizations without policies will face increasing scrutiny.


Definitions and Scope

Generative AI: Artificial intelligence systems that create new content—text, images, code, audio, or video—based on patterns learned from training data. Examples include large language models (LLMs) like ChatGPT, Claude, and Gemini.

Policy scope should cover:

  • Public consumer tools (free tiers of ChatGPT, Claude, and Gemini)
  • Enterprise subscriptions (ChatGPT Enterprise, Microsoft 365 Copilot)
  • API integrations (models embedded in existing software)
  • Locally deployed models (open-source LLMs on company infrastructure)
  • Third-party tools with embedded AI features

Out of scope (for this policy):

  • Predictive models and classical machine learning systems
  • Recommendation systems
  • Process automation that does not generate content

These require separate governance frameworks, though coordination is essential.


Step-by-Step Implementation Guide

Step 1: Inventory Current Usage (Week 1-2)

Before writing policy, understand reality. Conduct a rapid survey or use network monitoring to identify:

  • Which generative AI tools employees are accessing
  • What use cases are most common
  • What data categories are being input
  • Which teams are heaviest users

Practical approach: Anonymous surveys often yield more honest responses than direct observation. Ask what tools people find useful, not just what they use.
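
Where web proxy or DNS logs are available, a rough inventory can also be automated. The sketch below is a minimal Python illustration under assumed conditions: the log format (one `timestamp,user,domain` CSV row per request) and the domain list are hypothetical and will need adapting to your environment.

```python
# Rough shadow-AI inventory from proxy logs. The CSV layout
# (timestamp,user,domain) and the domain list are assumptions;
# update both to match your environment.
import csv
from collections import Counter

AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def inventory(log_path: str) -> Counter:
    """Count proxy-log requests per known AI tool."""
    counts: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue  # skip malformed rows
            tool = AI_TOOL_DOMAINS.get(row[2].strip().lower())
            if tool:
                counts[tool] += 1
    return counts

if __name__ == "__main__":
    for tool, hits in inventory("proxy.log").most_common():
        print(f"{tool}: {hits} requests")
```

Combining log data with the anonymous survey gives you both what people actually access and what they value, which rarely overlap perfectly.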

Step 2: Classify Data for AI Interaction (Week 2-3)

Create a data classification matrix specific to generative AI:

| Data Category | Consumer Tools | Enterprise Tools | API (Approved Vendors) | Local Models |
|---|---|---|---|---|
| Public information | ✅ Allowed | ✅ Allowed | ✅ Allowed | ✅ Allowed |
| Internal (non-sensitive) | ⚠️ Conditional | ✅ Allowed | ✅ Allowed | ✅ Allowed |
| Confidential | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional | ✅ Allowed |
| Highly restricted | ❌ Prohibited | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional |
| Personal data (PDPA) | ❌ Prohibited | ⚠️ Conditional | ⚠️ Conditional | ⚠️ Conditional |

Conditional means allowed with specific controls, approvals, or use cases. Define these explicitly.
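
To make the matrix operational rather than aspirational, some teams encode it so that approval workflows or internal tooling can query it. A minimal sketch in Python, with category and tier names mirroring the table above (the `ruling` helper is illustrative, not a standard API):

```python
# The classification matrix above, encoded so tooling can query it.
# "Conditional" outcomes should route to a human approval step.
ALLOWED, CONDITIONAL, PROHIBITED = "allowed", "conditional", "prohibited"

CLASSIFICATION_MATRIX = {
    "public": {
        "consumer": ALLOWED, "enterprise": ALLOWED, "api": ALLOWED, "local": ALLOWED,
    },
    "internal": {
        "consumer": CONDITIONAL, "enterprise": ALLOWED, "api": ALLOWED, "local": ALLOWED,
    },
    "confidential": {
        "consumer": PROHIBITED, "enterprise": CONDITIONAL, "api": CONDITIONAL, "local": ALLOWED,
    },
    "highly_restricted": {
        "consumer": PROHIBITED, "enterprise": PROHIBITED, "api": CONDITIONAL, "local": CONDITIONAL,
    },
    "personal_data_pdpa": {
        "consumer": PROHIBITED, "enterprise": CONDITIONAL, "api": CONDITIONAL, "local": CONDITIONAL,
    },
}

def ruling(data_category: str, tool_tier: str) -> str:
    """Return allowed / conditional / prohibited for a category-tier pair."""
    return CLASSIFICATION_MATRIX[data_category][tool_tier]

assert ruling("confidential", "consumer") == PROHIBITED
```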

Step 3: Define Use Case Tiers (Week 3-4)

Not all uses carry equal risk. Create tiers:

Tier 1 — Freely Permitted:

  • Brainstorming and ideation
  • Drafting public communications
  • Learning and skill development
  • Summarizing public information
  • Code explanation and debugging (non-production)

Tier 2 — Conditional (Requires Judgment):

  • Drafting internal documents
  • Customer communication templates
  • Data analysis (anonymized/aggregated only)
  • Content creation for review before publication

Tier 3 — Prohibited:

  • Processing customer personal data
  • Legal document drafting without counsel review
  • Medical or safety-critical decisions
  • Inputting trade secrets or proprietary algorithms
  • Processing financial transaction data
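
Once agreed, the tiers can also be published as a machine-readable reference, for example to power an internal "can I use AI for this?" lookup page. A minimal sketch, with tier contents taken from the lists above and all names hypothetical:

```python
# Use-case tiers as a machine-readable reference. Tier contents
# mirror the lists above; serve this from an internal page in practice.
USE_CASE_TIERS = {
    "permitted": [
        "brainstorming and ideation",
        "drafting public communications",
        "learning and skill development",
        "summarizing public information",
        "code explanation and debugging (non-production)",
    ],
    "conditional": [
        "drafting internal documents",
        "customer communication templates",
        "data analysis (anonymized/aggregated only)",
        "content creation for review before publication",
    ],
    "prohibited": [
        "processing customer personal data",
        "legal document drafting without counsel review",
        "medical or safety-critical decisions",
        "inputting trade secrets or proprietary algorithms",
        "processing financial transaction data",
    ],
}

def tier_of(use_case: str) -> str:
    """Return the tier for a known use case; default unknowns to review."""
    for tier, cases in USE_CASE_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "conditional"  # novel use cases get human review, not silence
```

Defaulting unknown use cases to conditional review routes novel requests to a human rather than silently permitting them.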

Step 4: Draft the Policy Document (Week 4-5)

Structure your generative AI policy with these sections:


POLICY TEMPLATE SNIPPET: Generative AI Acceptable Use

1. Purpose and Scope
This policy establishes boundaries for the use of generative AI tools within [Organization Name], including but not limited to ChatGPT, Claude, Gemini, Microsoft Copilot, and similar technologies.

2. Approved Tools
The following generative AI tools are approved for organizational use:

  • [Enterprise Tool 1] — approved for [specific use cases]
  • [Enterprise Tool 2] — approved for [specific use cases]

Unapproved tools may not be used for work purposes without explicit written approval from [Designated Authority].

3. Data Handling Requirements
Employees must not input the following into any generative AI tool:

  • Customer personal data
  • Employee personal data beyond their own
  • Confidential business information classified as [Classification Level]
  • [Additional prohibited categories]

4. Output Verification
All AI-generated content intended for external communication or formal documentation must be:

  • Reviewed for factual accuracy
  • Verified against authoritative sources where claims are made
  • Approved by [appropriate authority] before distribution

5. Disclosure Requirements
AI-generated content must be disclosed when:

  • [Specify disclosure requirements]

6. Incident Reporting
Employees must report suspected policy violations or data exposure incidents to [Contact] within [Timeframe].

7. Enforcement
Violations of this policy may result in [consequences].


Step 5: Establish Monitoring and Enforcement (Week 5-6)

Policy without visibility is wishful thinking. Consider:

Technical controls:

  • Network-level visibility into AI tool access
  • DLP (Data Loss Prevention) integration where feasible; a minimal pattern-check sketch follows these lists
  • Approved enterprise tools with audit logging
  • Browser extensions for policy reminders

Process controls:

  • Regular sampling and review of AI usage
  • Incident reporting mechanisms
  • Periodic access reviews
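
Where full DLP tooling isn't yet in place, even lightweight pattern checks can catch obvious violations before a prompt leaves the network. The sketch below is a minimal Python illustration, not a substitute for real DLP: the two regexes (Singapore NRIC/FIN-style identifiers and email addresses) are assumptions, and production coverage would need to be far broader.

```python
# Lightweight pre-send screening for prompts, where full DLP
# isn't yet deployed. Patterns are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "nric_fin": re.compile(r"\b[STFGM]\d{7}[A-Z]\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

hits = screen_prompt("Summarize the complaint from S1234567D at jane@example.com")
if hits:
    print(f"Blocked: prompt appears to contain {', '.join(hits)}")
```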

Step 6: Train and Communicate (Week 6-8)

See our rollout guide (/insights/ai-policy-communication-rollout-strategies) for detailed communication strategies. Key elements:

  • All-hands awareness sessions
  • Role-specific deep dives for high-risk functions
  • Quick reference cards for daily use
  • Regular reminders as tools evolve

Step 7: Establish Review Cadence (Ongoing)

Generative AI evolves faster than annual policy cycles can accommodate. Establish:

  • Quarterly reviews of tool approvals
  • Semi-annual policy effectiveness assessment
  • Immediate review triggers (new major tool releases, incidents, regulatory changes)
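
The cadence is easy to automate so that reviews don't depend on someone remembering. A minimal sketch, with dates and intervals assumed purely for illustration:

```python
# Flag overdue policy reviews. Dates and intervals are illustrative only.
from datetime import date, timedelta

LAST_REVIEWED = {
    "tool approvals": (date(2025, 7, 1), timedelta(days=91)),         # quarterly
    "policy effectiveness": (date(2025, 4, 1), timedelta(days=182)),  # semi-annual
}

def overdue(today: date | None = None) -> list[str]:
    """Return reviews whose interval has elapsed since the last review."""
    today = today or date.today()
    return [name for name, (last, every) in LAST_REVIEWED.items()
            if today - last > every]

for name in overdue():
    print(f"Review overdue: {name}")
```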

Common Failure Modes

1. Writing policy in isolation. Policies created without input from actual users often miss critical use cases and face resistance. Include representatives from IT, legal, HR, and key business functions.

2. Over-restricting without alternatives. Blanket bans push usage underground. If you prohibit a tool, provide an approved alternative that meets the same need.

3. Ignoring the enterprise vs. consumer distinction. Many organizations ban "ChatGPT" without distinguishing between consumer and enterprise tiers. Enterprise agreements often include data protection provisions that address key concerns.

4. Setting and forgetting. A policy written in 2024 may be dangerously outdated by late 2025. Build in regular reviews.

5. Underinvesting in training. Employees can't follow policies they don't understand. Complex rules require education, not just documentation.

6. Treating all use cases equally. Brainstorming is not the same risk as processing customer data. Nuanced policies enable productive use while protecting critical assets.


Generative AI Policy Readiness Checklist

Copy and use this checklist to assess your policy readiness:

GENERATIVE AI POLICY CHECKLIST

Pre-Policy Assessment
[ ] Completed shadow AI usage inventory
[ ] Identified primary use cases by department
[ ] Mapped data classification to AI tool categories
[ ] Engaged stakeholders from legal, IT, HR, and business units

Policy Content
[ ] Defined scope (which tools, which users)
[ ] Specified approved tools with use case boundaries
[ ] Established prohibited data categories
[ ] Created use case tiers (allowed/conditional/prohibited)
[ ] Defined output verification requirements
[ ] Established disclosure requirements
[ ] Specified incident reporting procedures
[ ] Outlined enforcement mechanisms

Implementation
[ ] Created training materials
[ ] Scheduled rollout communications
[ ] Established monitoring capabilities
[ ] Designated policy owner and review schedule
[ ] Set up incident response procedures (/insights/ai-incident-response-plan)

Ongoing Governance
[ ] Quarterly tool review scheduled
[ ] Semi-annual policy effectiveness review scheduled
[ ] Trigger events for immediate review defined
[ ] Feedback mechanism for employees established

Metrics to Track

Measure policy effectiveness with:

| Metric | Target | Frequency |
|---|---|---|
| Employee awareness (survey) | >90% aware of policy | Quarterly |
| Training completion rate | >95% within 30 days | Monthly |
| Policy violation incidents | Decreasing trend | Monthly |
| Shadow AI tool detection | Decreasing trend | Monthly |
| Time to incident resolution | <48 hours | Per incident |
| Policy exception requests | Stable or decreasing | Monthly |
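
If training completions and incidents are already logged, several of these metrics are straightforward to compute. A minimal sketch, with record shapes and field names assumed purely for illustration:

```python
# Compute two of the metrics above from simple event records.
# Record shapes and field names are assumptions for illustration.
from datetime import date

training = [  # one record per employee
    {"employee": "a.tan", "completed": date(2025, 10, 20)},
    {"employee": "b.lee", "completed": None},  # not yet complete
]

def training_completion_rate(records: list[dict]) -> float:
    """Share of employees who have completed policy training."""
    done = sum(1 for r in records if r["completed"] is not None)
    return done / len(records)

incidents_by_month = {"2025-08": 7, "2025-09": 5, "2025-10": 3}

def violations_trending_down(counts: dict[str, int]) -> bool:
    """True if monthly violation counts are non-increasing."""
    values = [counts[m] for m in sorted(counts)]
    return all(b <= a for a, b in zip(values, values[1:]))

print(f"Training completion: {training_completion_rate(training):.0%}")
print(f"Violations decreasing: {violations_trending_down(incidents_by_month)}")
```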

Tooling Suggestions (Vendor-Neutral)

Consider these categories of tools to support policy implementation:

Enterprise AI Platforms:

  • Enterprise versions of major LLMs with data protection agreements
  • Private cloud deployments for sensitive use cases

Monitoring and DLP:

  • Network traffic analysis for AI tool detection
  • Cloud access security brokers (CASB) with AI visibility
  • Endpoint DLP with generative AI awareness

Training and Awareness:

  • Learning management systems for policy training
  • Just-in-time reminder tools (browser extensions)

Governance:

  • Policy management platforms
  • Incident tracking systems
  • Audit and compliance reporting tools

Frequently Asked Questions

Should we ban generative AI tools outright until a full policy is in place?

No. Blanket bans typically fail and push usage underground. A "minimum viable policy" that covers critical risks is better than no policy while you perfect the details.


Next Steps

Generative AI policies are foundational but not sufficient. They must integrate with your broader AI governance framework.



Book an AI Readiness Audit

Not sure where to start? Our AI Readiness Audit helps organizations assess their current state, identify gaps, and build practical roadmaps for responsible AI adoption.



Disclaimer

This article provides general guidance on generative AI policy development. It does not constitute legal advice. Organizations should consult qualified legal counsel regarding their specific regulatory obligations and risk profile.


References

  1. Singapore Infocomm Media Development Authority (IMDA). Model AI Governance Framework, Second Edition.
  2. Personal Data Protection Commission Singapore. Advisory Guidelines on the Use of Personal Data in AI Systems.
  3. Malaysia Digital Economy Corporation (MDEC). Malaysia AI Governance Guidelines.
  4. National Institute of Standards and Technology (NIST). AI Risk Management Framework.
  5. OECD. Principles on AI.

Michael Lansdowne Hauge
Founder & Managing Partner at Pertama Partners. Founder of Pertama Group.

