
If your employees are using ChatGPT at work, and surveys show that 60-70% of knowledge workers in Southeast Asia already are, you need a formal policy. Without one, you face:

- Data privacy and confidentiality risks when staff paste sensitive information into external tools
- Inconsistent quality, with unreviewed AI-generated errors reaching colleagues or customers
- Potential regulatory violations

A clear policy sets expectations, reduces risk, and empowers employees to use AI confidently.
This policy governs the use of generative AI tools (including ChatGPT, Claude, Gemini, Copilot, and similar platforms) by all employees, contractors, and temporary staff of [Company Name].
The purpose of this policy is to:
The following AI tools are approved for business use:
| Tool | Approved Plan | Approved For |
|---|---|---|
| [ChatGPT] | [Enterprise/Team] | [All departments] |
| [Microsoft Copilot] | [M365 Copilot] | [All M365 users] |
| [Claude] | [Team/Enterprise] | [Specified teams] |
Important: Only approved tools and subscription plans may be used. Free or personal accounts must not be used for work tasks due to data handling differences.
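The approved-tools table can be enforced programmatically, for example in an internal browser extension or proxy. The sketch below is a minimal illustration, assuming a hardcoded allowlist keyed by tool and plan; the tool names and plans mirror the template table and are placeholders, not a vetted list.

```python
# Hypothetical sketch: treat the approved-tools table as an allowlist.
# Tool names and plans are illustrative placeholders from the template above.
APPROVED_TOOLS = {
    "chatgpt": {"enterprise", "team"},
    "microsoft copilot": {"m365 copilot"},
    "claude": {"team", "enterprise"},
}

def is_approved(tool: str, plan: str) -> bool:
    """Return True only if both the tool and its subscription plan are approved."""
    plans = APPROVED_TOOLS.get(tool.strip().lower())
    return plans is not None and plan.strip().lower() in plans

print(is_approved("ChatGPT", "Enterprise"))  # True
print(is_approved("ChatGPT", "Free"))        # False: free/personal plans are not approved
```

Checking the plan as well as the tool matters because, as noted above, free and personal accounts handle data differently from enterprise plans.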
AI tools may be used for:
General (All Employees)
Department-Specific (With Training)
The following are strictly prohibited:
Before using any data with AI tools:
1. Classify the data:
2. Anonymise when necessary:
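The anonymisation step can be partially automated before text reaches an external AI tool. The following is a minimal sketch only: the two regex patterns (email, phone) are illustrative assumptions and nowhere near a complete PII solution; production use would need a vetted PII-detection library and human spot checks.

```python
import re

# Minimal sketch of the "anonymise when necessary" step: redact obvious
# identifiers before a prompt is sent to an external AI tool.
# These patterns are illustrative only, not exhaustive PII detection.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def anonymise(text: str) -> str:
    """Replace matching identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymise("Contact Jane at jane.tan@example.com or +60 12-345 6789."))
# → "Contact Jane at [EMAIL] or [PHONE]."
```

A scrub like this belongs after classification, not instead of it: data classified as restricted should not be sent to external tools at all, anonymised or not.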
All AI-assisted work must undergo human review before use:
| Output Type | Review Level | Reviewer |
|---|---|---|
| Internal emails | Self-review | Author |
| Internal reports | Peer review | Colleague |
| External communications | Manager review | Direct manager |
| Customer-facing documents | Department head | Head of department |
| Legal/regulatory documents | Expert review | Legal counsel |
| Financial statements | Double review | Finance manager + auditor |
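The review matrix above is simple enough to encode directly, for instance in a workflow or ticketing tool. This sketch assumes a plain dictionary lookup with hypothetical keys mirroring the table; defaulting unknown output types to the strictest review level is a design choice of this example, not a requirement stated in the policy.

```python
# Hypothetical encoding of the review matrix above.
# Keys mirror the table rows; values are (review level, reviewer).
REVIEW_MATRIX = {
    "internal email": ("self-review", "author"),
    "internal report": ("peer review", "colleague"),
    "external communication": ("manager review", "direct manager"),
    "customer-facing document": ("department head", "head of department"),
    "legal/regulatory document": ("expert review", "legal counsel"),
    "financial statement": ("double review", "finance manager + auditor"),
}

def required_review(output_type: str) -> tuple[str, str]:
    """Look up the required review level and reviewer for an output type."""
    # Design choice: unlisted output types fall back to the strictest level.
    return REVIEW_MATRIX.get(output_type.lower(), ("expert review", "legal counsel"))

print(required_review("Internal report"))  # ('peer review', 'colleague')
```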
All employees using AI tools must:
Managers must:
Report the following to [IT Security/Compliance team] immediately:
Violations of this policy may result in:
This policy will be reviewed and updated quarterly by the [AI Governance Committee/IT Department/HR].
Last updated: [Date]
Next review: [Date + 3 months]
ChatGPT company policies adopted between 2024 and 2026 reveal distinct patterns depending on industry vertical, regulatory exposure, and organizational risk tolerance. Understanding these variations helps policy authors benchmark their approach against comparable organizations.
Financial Services. Banks and insurance companies operating under oversight from regulators like the SEC, FINMA, MAS, and FCA typically enforce the strictest policies. JPMorgan Chase prohibited external generative AI tools entirely for client-facing employees through mid-2025 before launching an internal alternative called LLM Suite. Goldman Sachs implemented tiered access where analysts may use approved tools for research summarization but traders face complete restrictions. These policies commonly reference existing model risk management frameworks like SR 11-7 (Federal Reserve) and SS1/23 (Bank of England).
Healthcare and Pharmaceuticals. HIPAA-covered entities in the United States require that any patient-related information processing comply with the Privacy Rule and Security Rule. Company policies from organizations like Mayo Clinic and Kaiser Permanente explicitly prohibit entering protected health information into external AI services, even when providers offer BAA (Business Associate Agreement) coverage. Pharmaceutical companies like Roche and Novartis permit ChatGPT for literature reviews and protocol drafting but mandate human review checkpoints before any AI-generated content enters regulatory submissions to the FDA, EMA, or PMDA.
Technology and Software. Technology firms generally adopt permissive policies paired with technical guardrails. Atlassian, Shopify, and HubSpot encourage AI experimentation while deploying code scanning tools such as GitHub Advanced Security and GitGuardian to prevent proprietary source code from appearing in prompts. Samsung notably banned ChatGPT usage in 2023 after three separate incidents in which employees uploaded confidential semiconductor-related data, including source code.
Beyond standard acceptable use clauses, effective ChatGPT company policies should address:
Organizations domiciled in Malaysia typically reference the PDPA 2010 alongside the Cybersecurity Act 2024 when drafting permissible-use clauses. Multinationals with operations spanning Shenzhen, Yokohama, and Bangalore must address PIPL, APPI, and DPDPA obligations simultaneously, often through a single cross-jurisdiction compliance matrix. Policy authors, often holding CIPP/A certification from the IAPP, commonly distinguish negligent misuse from deliberate circumvention and scale disciplinary responses proportionately, consistent with established employment-law principles.
Yes. If employees use AI tools at work — and most do — a formal policy is essential. Without one, companies face data privacy risks, quality issues, and potential regulatory violations. A policy empowers employees to use AI confidently while protecting the company.
A comprehensive policy should cover: approved tools and plans, approved use cases by department, prohibited activities, data classification and handling rules, quality assurance requirements, disclosure guidelines, employee and manager responsibilities, incident reporting procedures, and consequences of non-compliance.
AI company policies should be reviewed quarterly due to the rapid pace of AI development. Major updates are needed when: new AI tools are adopted, regulations change, incidents occur, or the company expands AI use to new departments or use cases.