
ChatGPT Company Policy Template — Ready to Customise

February 11, 2026 · 10 min read · Pertama Partners
Updated March 15, 2026
For: CHRO · Legal/Compliance · CISO · Consultant · Head of Operations · IT Manager · CMO

A comprehensive ChatGPT company policy template covering approved use cases, data handling, quality assurance, and employee responsibilities. Ready to customise for your organisation.


Key Takeaways

  1. 60-70% of Southeast Asian knowledge workers already use ChatGPT at work
  2. Companies face data privacy, quality, and legal risks without formal policies
  3. Only approved enterprise AI tools should be used for business purposes
  4. Restricted data must never be entered into any AI tool
  5. All AI-generated content requires human review before external sharing
  6. Employee training and manager oversight are essential for successful implementation
  7. The policy should be reviewed quarterly as AI capabilities rapidly evolve

Why Your Company Needs a ChatGPT Policy

If your employees are using ChatGPT at work — and surveys show that 60-70% of knowledge workers in Southeast Asia already are — you need a formal policy. Without one, you face:

  • Data privacy risks — Employees may input sensitive customer or company data
  • Quality risks — Unreviewed AI outputs may contain errors or hallucinations
  • Reputational risks — AI-generated content may not align with your brand voice
  • Legal risks — AI use in regulated industries may have compliance implications
  • Consistency risks — Different teams using AI differently, with varying quality standards

A clear policy sets expectations, reduces risk, and empowers employees to use AI confidently.

ChatGPT Company Policy Template

1. Purpose and Scope

This policy governs the use of generative AI tools (including ChatGPT, Claude, Gemini, Copilot, and similar platforms) by all employees, contractors, and temporary staff of [Company Name].

The purpose of this policy is to:

  • Enable productive use of AI tools for business purposes
  • Protect company and customer data
  • Ensure the quality and accuracy of AI-assisted work
  • Maintain compliance with applicable laws and regulations

2. Approved AI Tools

The following AI tools are approved for business use:

| Tool                | Approved Plan     | Approved For      |
|---------------------|-------------------|-------------------|
| [ChatGPT]           | [Enterprise/Team] | [All departments] |
| [Microsoft Copilot] | [M365 Copilot]    | [All M365 users]  |
| [Claude]            | [Team/Enterprise] | [Specified teams] |

Important: Only approved tools and subscription plans may be used. Free or personal accounts must not be used for work tasks due to data handling differences.
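For IT teams that want to enforce the approved-tools table programmatically, it can be encoded as a simple allowlist. A minimal sketch in Python; the tool names, plans, and department values below are illustrative placeholders standing in for the bracketed items in the template, not real configuration:

```python
# Illustrative allowlist mirroring the approved-tools table above.
# Keys and values are placeholders; replace with your organisation's actual entries.
APPROVED_TOOLS = {
    "chatgpt": {"plans": {"enterprise", "team"}, "departments": {"all"}},
    "microsoft_copilot": {"plans": {"m365_copilot"}, "departments": {"all"}},
    "claude": {"plans": {"team", "enterprise"}, "departments": {"marketing", "sales"}},
}

def is_approved(tool: str, plan: str, department: str) -> bool:
    """Check whether a tool/plan combination is approved for a department."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None or plan not in entry["plans"]:
        return False  # unknown tool, or a free/personal plan: blocked
    return "all" in entry["departments"] or department in entry["departments"]

is_approved("chatgpt", "enterprise", "finance")  # True: approved plan, open to all
is_approved("chatgpt", "free", "finance")        # False: free/personal plans blocked
```

A lookup like this can back a browser extension or proxy rule so that the policy's "approved plans only" requirement is enforced automatically rather than on trust.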

3. Approved Use Cases

AI tools may be used for:

General (All Employees)

  • Drafting emails, reports, and presentations
  • Summarising documents and meeting notes
  • Research and information gathering
  • Brainstorming and ideation
  • Language translation and proofreading
  • Creating first drafts of internal documents

Department-Specific (With Training)

  • HR: Job descriptions, interview questions, policy drafts (with anonymised data)
  • Sales: Prospect research, proposal drafts, communication templates
  • Marketing: Content drafts, campaign ideas, social media copy
  • Finance: Report narratives, process documentation, analysis frameworks
  • Operations: SOPs, vendor communications, process documentation

4. Prohibited Activities

The following are strictly prohibited:

  • Inputting personal data (NRIC, addresses, phone numbers, salary data)
  • Inputting confidential customer information
  • Inputting trade secrets, proprietary algorithms, or source code
  • Using AI for final decision-making on hiring, firing, or promotions
  • Submitting AI-generated content externally without human review
  • Using AI to generate legal advice, medical advice, or regulatory guidance without professional review
  • Claiming AI-generated work as entirely original without appropriate context
  • Using personal AI accounts for work purposes
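Some organisations back the data-related prohibitions above with an automated pre-submission check that flags restricted data before a prompt leaves the company network. A minimal sketch, assuming illustrative regex patterns for Singapore-style NRICs, local phone numbers, and email addresses; a real deployment would tune these to the identifier formats in its own markets:

```python
import re

# Hypothetical patterns for illustration; tune to your jurisdiction's ID formats.
RESTRICTED_PATTERNS = {
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),                      # Singapore NRIC
    "phone": re.compile(r"\b(?:\+65[ -]?)?[689]\d{3}[ -]?\d{4}\b"),   # SG phone number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_restricted_data(prompt: str) -> list[str]:
    """Return the names of any restricted-data patterns found in a prompt."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(prompt)]

hits = find_restricted_data("Contact S1234567A at alice@example.com")
# hits -> ["NRIC", "email"]
```

Pattern matching of this kind catches obvious slips, not determined circumvention, so it complements rather than replaces the training and review requirements elsewhere in this policy.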

5. Data Handling Requirements

Before using any data with AI tools:

  1. Classify the data:

    • Public — Can be freely used (published information, general industry data)
    • Internal — May be used with approved enterprise AI tools only
    • Confidential — Must be anonymised before use; remove names, account numbers, and identifying details
    • Restricted — Must NEVER be entered into any AI tool (PII, financial records, medical data, legal privileged information)
  2. Anonymise when necessary:

    • Replace real names with [Person A], [Person B]
    • Replace company names with [Company X]
    • Replace specific financial figures with approximate ranges
    • Remove dates and locations that could identify individuals
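The anonymisation steps above can be partially automated. A minimal sketch in Python; the name list is a hypothetical stand-in (in practice it would come from a CRM or HR export), and the dollar-figure rule is one illustrative pattern among the many a real tool would need:

```python
import re

def anonymise(text: str) -> str:
    """Replace identifying details with placeholders, per the steps above."""
    # Hypothetical name list for illustration; source from company systems in practice.
    names = ["Alice Tan", "Bob Lee"]
    for i, name in enumerate(names):
        text = text.replace(name, f"[Person {chr(65 + i)}]")  # [Person A], [Person B], ...
    # Replace exact dollar figures with a placeholder range.
    text = re.sub(r"\$\d[\d,]*", "[approx. amount]", text)
    return text

print(anonymise("Alice Tan approved $12,500 for Bob Lee."))
# -> "[Person A] approved [approx. amount] for [Person B]."
```

Even with tooling, the employee remains responsible for confirming that no identifying detail survives before the text is entered into an AI tool.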

6. Quality Assurance Requirements

All AI-assisted work must undergo human review before use:

| Output Type                | Review Level      | Reviewer                  |
|----------------------------|-------------------|---------------------------|
| Internal emails            | Self-review       | Author                    |
| Internal reports           | Peer review       | Colleague                 |
| External communications    | Manager review    | Direct manager            |
| Customer-facing documents  | Department head   | Head of department        |
| Legal/regulatory documents | Expert review     | Legal counsel             |
| Financial statements       | Double review     | Finance manager + auditor |
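The review matrix above can be encoded as a lookup table so that workflow tooling routes AI-assisted work to the right reviewer automatically. A minimal sketch; the output-type keys are illustrative identifiers, and the fallback to the strictest tier for unknown types is a design assumption, not part of the template:

```python
# Encodes the quality-assurance review matrix above as a lookup table.
REVIEW_MATRIX = {
    "internal_email": ("self-review", "author"),
    "internal_report": ("peer review", "colleague"),
    "external_communication": ("manager review", "direct manager"),
    "customer_facing_document": ("department head", "head of department"),
    "legal_regulatory_document": ("expert review", "legal counsel"),
    "financial_statement": ("double review", "finance manager + auditor"),
}

def required_review(output_type: str) -> tuple[str, str]:
    """Return (review level, reviewer); unknown types default to the strictest tier."""
    return REVIEW_MATRIX.get(output_type, ("expert review", "legal counsel"))

level, reviewer = required_review("external_communication")
# level -> "manager review", reviewer -> "direct manager"
```

Defaulting unknown output types to the strictest review tier is a fail-safe choice: it forces a human decision for anything the policy has not yet classified.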

7. Disclosure Requirements

  • Internal use: Disclosure not required for routine tasks (emails, summaries)
  • External publications: Must note when AI was used in content creation
  • Client deliverables: Disclose AI assistance if contractually required
  • Regulatory submissions: Always disclose AI involvement

8. Employee Responsibilities

All employees using AI tools must:

  1. Complete the company's AI training programme before using AI for work
  2. Follow this policy and the data classification guidelines
  3. Review all AI outputs for accuracy before sharing
  4. Report any incidents (data breaches, significant errors) to IT/compliance
  5. Stay updated on policy changes and new guidelines

9. Manager Responsibilities

Managers must:

  1. Ensure team members complete AI training
  2. Monitor AI use within their teams
  3. Address misuse promptly and constructively
  4. Share best practices and successful use cases
  5. Report concerns to the AI governance committee

10. Incident Reporting

Report the following to [IT Security/Compliance team] immediately:

  • Accidental input of restricted data into AI tools
  • Discovery of AI-generated errors in published or shared content
  • Suspected misuse of AI tools by colleagues
  • Any AI-related complaint from customers or external parties

11. Consequences of Non-Compliance

Violations of this policy may result in:

  • Verbal or written warning
  • Temporary suspension of AI tool access
  • Disciplinary action as per the Employee Handbook
  • Termination in cases of gross negligence or repeated violations

12. Policy Review

This policy will be reviewed and updated quarterly by the [AI Governance Committee/IT Department/HR].

Last updated: [Date]
Next review: [Date + 3 months]

How to Implement This Policy

  1. Customise the template for your company (replace bracketed items)
  2. Review with legal counsel, IT, and HR
  3. Communicate to all employees via email and town hall
  4. Train all employees on the policy (include in AI training programme)
  5. Enforce consistently and update as AI capabilities evolve

How Leading Company Policies Differ Across Industries

ChatGPT company policies adopted between 2024 and 2026 reveal distinct patterns depending on industry vertical, regulatory exposure, and organizational risk tolerance. Understanding these variations helps policy authors benchmark their approach against comparable organizations.

Financial Services. Banks and insurance companies operating under oversight from regulators like the SEC, FINMA, MAS, and FCA typically enforce the strictest policies. JPMorgan Chase prohibited external generative AI tools entirely for client-facing employees through mid-2025 before launching an internal alternative called LLM Suite. Goldman Sachs implemented tiered access where analysts may use approved tools for research summarization but traders face complete restrictions. These policies commonly reference existing model risk management frameworks like SR 11-7 (Federal Reserve) and SS1/23 (Bank of England).

Healthcare and Pharmaceuticals. HIPAA-covered entities in the United States require that any patient-related information processing comply with the Privacy Rule and Security Rule. Company policies from organizations like Mayo Clinic and Kaiser Permanente explicitly prohibit entering protected health information into external AI services, even when providers offer BAA (Business Associate Agreement) coverage. Pharmaceutical companies like Roche and Novartis permit ChatGPT for literature reviews and protocol drafting but mandate human review checkpoints before any AI-generated content enters regulatory submissions to the FDA, EMA, or PMDA.

Technology and Software. Technology firms generally adopt permissive policies paired with technical guardrails. Atlassian, Shopify, and HubSpot encourage AI experimentation while deploying code scanning tools like GitHub Advanced Security and GitGuardian to prevent proprietary source code from appearing in prompts. Samsung notably banned ChatGPT usage after three separate incidents where employees uploaded confidential semiconductor fabrication data.

Essential Policy Sections Often Overlooked

Beyond standard acceptable use clauses, effective ChatGPT company policies should address:

  • Procurement governance: Approval workflows for purchasing AI tool licenses, including security questionnaire requirements modeled on the CAIQ (Consensus Assessments Initiative Questionnaire) from the Cloud Security Alliance
  • Output ownership: Clarification of intellectual property rights for AI-assisted deliverables, referencing jurisdictional guidance from the USPTO (February 2024 inventorship guidance) and the European Patent Office
  • Incident classification: Taxonomy defining severity levels for AI-related data incidents, integrated into existing ITSM workflows through platforms like ServiceNow or Jira Service Management
  • Sunset and review cadence: Mandatory quarterly policy reviews triggered by model version updates, regulatory changes, or internal incident postmortems — preventing policies from becoming outdated as capabilities evolve

Organizations based in Malaysia should reference the PDPA 2010 alongside the Cybersecurity Act 2024 when drafting permissible-use clauses. Multinationals with operations in China, Japan, and India must additionally map their policies against PIPL, APPI, and DPDPA obligations. Privacy professionals, such as those holding the IAPP's CIPP/A certification, typically recommend graduated enforcement that distinguishes negligent misuse from deliberate circumvention, with disciplinary escalation kept proportionate to the violation in line with established employment-law principles.

Common Questions

Does my company need a ChatGPT policy?

Yes. If employees use AI tools at work — and most do — a formal policy is essential. Without one, companies face data privacy risks, quality issues, and potential regulatory violations. A policy empowers employees to use AI confidently while protecting the company.

What should a ChatGPT company policy include?

A comprehensive policy should cover: approved tools and plans, approved use cases by department, prohibited activities, data classification and handling rules, quality assurance requirements, disclosure guidelines, employee and manager responsibilities, incident reporting procedures, and consequences of non-compliance.

How often should the policy be reviewed?

AI company policies should be reviewed quarterly due to the rapid pace of AI development. Major updates are needed when: new AI tools are adopted, regulations change, incidents occur, or the company expands AI use to new departments or use cases.

