Setting Clear Boundaries for ChatGPT at Work
One of the biggest challenges companies face with AI adoption is ambiguity. Employees want to use ChatGPT but are unsure what is allowed. Without clear guidelines, the result is either shadow AI use (employees using tools without permission) or AI avoidance (employees not using tools out of fear).
This guide provides a clear framework for categorising ChatGPT use cases.
The Three Categories
Approved (Green Light)
Use freely without additional approval. These tasks involve no sensitive data and produce outputs that are reviewed before sharing.
Conditionally Approved (Yellow Light)
Permitted with specific safeguards. Requires data anonymisation, manager awareness, or additional review steps.
Prohibited (Red Light)
Never permitted, regardless of circumstances. These use cases involve unacceptable data, legal, or ethical risks.
Approved Use Cases (Green Light)
| Use Case | Department | Notes |
|---|---|---|
| Draft emails | All | Review before sending |
| Summarise public documents | All | Verify key facts |
| Brainstorm ideas | All | No restrictions |
| Proofread and edit writing | All | Check suggestions are appropriate |
| Create meeting agendas | All | No sensitive content required |
| Research public information | All | Verify with primary sources |
| Draft social media posts | Marketing | Review for brand consistency |
| Generate blog outlines | Marketing | Edit and add original insights |
| Create training quiz questions | L&D | Review for accuracy |
| Write job descriptions | HR | Use standard role info only |
| Draft SOP templates | Operations | Describe processes; exclude operational data |
| Explain concepts/calculations | Finance | No confidential figures |
| Create presentation outlines | All | Add real data manually |
| Language translation | All | Review for accuracy and tone |
Conditionally Approved Use Cases (Yellow Light)
| Use Case | Safeguard Required | Who Approves |
|---|---|---|
| Analyse survey results | Anonymise all responses first | Manager |
| Draft HR policies | No employee-specific info | HR Head |
| Create proposal templates | Remove pricing/confidential terms | Sales Manager |
| Summarise meeting notes | Remove sensitive discussions | Attendees' approval |
| Generate performance review drafts | Use competency frameworks, not names | HR + Manager |
| Draft customer communications | No account details or PII | Team Lead |
| Create vendor evaluation templates | Remove vendor names if confidential | Procurement |
| Analyse operational data | Aggregate and anonymise | Operations Head |
| Draft board paper structure | Illustrative figures only | CFO/CEO |
| Create training case studies | Based on public examples only | L&D Manager |
Prohibited Use Cases (Red Light)
| Use Case | Why Prohibited |
|---|---|
| Input customer PII (names, IC, addresses) | PDPA violation risk |
| Input employee salary or personal data | Privacy and employment law risk |
| Paste proprietary source code | IP and trade secret risk |
| Input financial statements before public release | Insider information risk |
| Use AI for hiring/firing decisions | Bias and discrimination risk |
| Generate legal or medical advice | Professional liability risk |
| Input audit findings or investigation details | Confidentiality breach |
| Create deepfakes or impersonate others | Ethical and legal violation |
| Bypass company security controls | Security policy violation |
| Use personal AI accounts for work | Data governance violation |
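Beyond written rules, some teams add a lightweight technical pre-screen that flags obviously Red-category content before a prompt is submitted. The sketch below is a minimal illustration, not a substitute for a proper DLP tool: the pattern names and regular expressions (a Singapore NRIC/FIN shape, email addresses, API-key-like strings) are assumptions chosen for the example.

```python
import re

# Illustrative red-flag patterns only; a real deployment needs a proper
# DLP tool with far broader coverage (names, addresses, account numbers).
RED_FLAG_PATTERNS = {
    "nric": re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),          # Singapore NRIC/FIN shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),  # API-key-like strings
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any red-flag patterns found in the prompt."""
    return [name for name, pat in RED_FLAG_PATTERNS.items() if pat.search(text)]
```

A match would block submission or route the prompt for human review before anything leaves the organisation.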
How to Handle Edge Cases
When an employee encounters a use case not clearly covered:
- Apply the data classification test: Is any Restricted (Red) data involved? If yes, it is prohibited.
- Apply the disclosure test: Would you be comfortable if the AI company's employees could see this prompt? If no, do not proceed.
- Apply the review test: Can the output be properly reviewed before use? If no, find an alternative approach.
- Ask your manager: If still unsure, escalate before using AI.
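The four tests above amount to a short decision procedure. As a minimal sketch, assuming hypothetical field and result names (none of this is official policy tooling), it might look like:

```python
from dataclasses import dataclass

# Hypothetical helper mirroring the edge-case tests above; the field
# names and result strings are illustrative only.
@dataclass
class UseCase:
    involves_restricted_data: bool   # data classification test
    comfortable_if_disclosed: bool   # disclosure test
    output_reviewable: bool          # review test

def triage(case: UseCase) -> str:
    if case.involves_restricted_data:
        return "prohibited"                      # Restricted (Red) data is never permitted
    if not case.comfortable_if_disclosed:
        return "do not proceed"
    if not case.output_reviewable:
        return "find an alternative approach"
    return "escalate to your manager if still unsure"
```

The tests run in order of severity, so the strictest rule (Restricted data) always wins.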
Department-Specific Quick Reference
HR Team
- Approved: Job descriptions, interview questions, training content, process documentation
- Conditional: Survey analysis (anonymise), policy drafts (no individual data)
- Prohibited: Salary data, performance reviews with names, disciplinary records
Sales Team
- Approved: Prospect research (public info), email drafts, proposal templates, objection handling
- Conditional: Client communication drafts (remove details), CRM summaries
- Prohibited: Client contracts, pricing agreements, competitive intelligence documents
Finance Team
- Approved: Report structure drafts, SOP creation, concept explanations, formula help
- Conditional: Management report narratives (illustrative figures only)
- Prohibited: Actual financial data, tax calculations, audit findings, investor information
IT Team
- Approved: Technical documentation, error research (public messages), architecture discussions
- Conditional: Code review (non-proprietary code only)
- Prohibited: Source code with trade secrets, API keys, security configurations, access credentials
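Teams that want the quick reference in machine-readable form can encode it as a lookup table. The sketch below covers only two of the departments listed above; the structure and the fallback wording are illustrative assumptions.

```python
# Partial lookup table mirroring the quick reference above; entries are
# taken from this guide, but the schema itself is an illustrative choice.
QUICK_REFERENCE = {
    "HR": {
        "approved": ["job descriptions", "interview questions",
                     "training content", "process documentation"],
        "conditional": ["survey analysis (anonymise)",
                        "policy drafts (no individual data)"],
        "prohibited": ["salary data", "performance reviews with names",
                       "disciplinary records"],
    },
    "Finance": {
        "approved": ["report structure drafts", "SOP creation",
                     "concept explanations", "formula help"],
        "conditional": ["management report narratives (illustrative figures only)"],
        "prohibited": ["actual financial data", "tax calculations",
                       "audit findings", "investor information"],
    },
}

def lookup(department: str, task: str) -> str:
    """Return the traffic-light category for a task, defaulting to escalation."""
    for category, tasks in QUICK_REFERENCE.get(department, {}).items():
        if any(task.lower() in t for t in tasks):
            return category
    return "unknown - ask your manager"
```

Unknown departments or tasks fall through to escalation, matching the edge-case guidance above.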
Communicating These Guidelines
- Create a one-page quick reference card that employees can keep at their desk
- Include in AI training as a mandatory module
- Post in team channels (Slack, Teams) for easy reference
- Update quarterly as new use cases emerge
- Celebrate good examples — share success stories of approved AI use
Related Reading
- ChatGPT Company Policy — Create a formal usage policy around approved use cases
- AI Use-Case Intake Process — A structured process for evaluating and approving AI use cases
- AI Evaluation Framework — Measure quality, risk, and ROI of AI implementations
Approved Use Case Framework: Categorisation by Risk Level
Organisations implementing generative AI tools benefit from structured categorisation rather than binary approved/prohibited lists. The following framework, developed through Pertama Partners engagements with forty-three enterprises across Singapore, Malaysia, Indonesia, and the Philippines between 2024 and March 2026, provides actionable classification guidance:
Green Category — Pre-Approved for All Employees. Internal email drafting and formatting, meeting note summarisation from transcripts already classified as internal, research synthesis from publicly available sources, code comment generation and documentation improvement, presentation outline creation, and grammar correction for non-confidential documents. These applications involve no sensitive data exposure; participants across Pertama Partners training cohorts self-reported productivity gains averaging twelve to eighteen percent.
Yellow Category — Approved with Manager Awareness. Customer communication drafting (requires human review before sending), competitive analysis compilation, marketing copy generation for internal review workflows, technical documentation authoring, and project status report summarisation. These use cases require established review procedures but pose manageable risk when employees follow output verification protocols.
Red Category — Requires Formal Exception Approval. Processing customer personally identifiable information, generating legal contract language, producing financial projections shared externally, creating medical or pharmaceutical content, developing pricing recommendations, and any application involving regulated data categories under PDPA, GDPR, or Indonesia's UU PDP legislation.
How Approved Use Cases Differ Across Industries
Professional Services (Legal, Accounting, Consulting). Firms including Baker McKenzie, KPMG, and Accenture publicly disclosed approved use case registries during 2025. Common approvals include research memorandum drafting, engagement letter template customisation, and timesheet narrative generation. Prohibited applications typically include client advice generation, tribunal submission drafting, and audit opinion formulation.
Healthcare and Pharmaceutical. Approved applications concentrate on administrative workflows — appointment scheduling communication, insurance pre-authorization documentation, and clinical trial recruitment material drafting. Patient-facing content generation remains restricted pending regulatory guidance from national health authorities across ASEAN jurisdictions.
Financial Services. Banks and insurers operating under MAS, Bank Negara Malaysia, or OJK supervision maintain narrower approved registries reflecting prudential requirements. Typical approvals cover internal reporting summarization, regulatory filing draft preparation, and staff training material development. Customer-facing applications require additional approval layers involving compliance, legal, and technology risk management committees.
Maintaining Currency: Quarterly Use Case Reviews
Approved registries become outdated rapidly as model capabilities evolve and regulatory landscapes shift. Establishing quarterly review cycles — scheduled for March, June, September, and December — ensures organisations capture newly viable applications while identifying emerging risk categories. Each review should incorporate employee feedback submissions, incident reports from the preceding quarter, updated vendor capability documentation, and regulatory developments from relevant jurisdictions.
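The March/June/September/December cadence can be computed rather than tracked by hand. A minimal sketch, assuming reviews fall on the first day of the month (an illustrative choice, not stated in this guide):

```python
from datetime import date

# Quarterly review months from the cadence described above.
REVIEW_MONTHS = (3, 6, 9, 12)

def next_review(today: date) -> date:
    """Return the first day of the next scheduled review month strictly after today."""
    for month in REVIEW_MONTHS:
        candidate = date(today.year, month, 1)
        if candidate > today:
            return candidate
    # Past December's review date: wrap to March of the following year.
    return date(today.year + 1, REVIEW_MONTHS[0], 1)
```

This makes the wrap-around at year end explicit instead of relying on someone remembering to reschedule.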
Organisations curating sanctioned usage inventories can draw on NIST's AI 100-1 taxonomy, which distinguishes generative, discriminative, and reinforcement-based application archetypes. Approved registries at multinationals such as Petronas, DBS Group, and Ayala Corporation categorise permissible deployments using Records of Processing Activities (ROPA) templates that satisfy the documentation obligations of GDPR Article 30. Use-case triage protocols incorporate Data Protection Impact Assessment (DPIA) threshold screening calibrated against Singapore's PDPC Advisory Guidelines and enforcement precedents under Malaysia's PDPA 2010. Procurement teams cross-reference vendor security postures through SOC 2 Type II attestation reports, ISO 27001 certification scopes, and Consensus Assessment Initiative Questionnaire (CAIQ) submissions maintained in the CSA STAR registry, which helps prevent shadow-IT proliferation via unauthorised SaaS purchases that bypass centralised governance.
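The vendor cross-referencing step reduces to a simple completeness check over required attestations. In this sketch the attestation identifiers are invented labels, not any formal CAIQ or STAR schema:

```python
# Hypothetical procurement gate: identifiers are illustrative labels only,
# standing in for SOC 2 Type II and ISO 27001 evidence a real team would verify.
REQUIRED_ATTESTATIONS = {"soc2_type2", "iso27001"}

def vendor_cleared(attestations: set[str]) -> bool:
    """A vendor clears screening only if every required attestation is present."""
    return REQUIRED_ATTESTATIONS <= attestations
```

Missing items (`REQUIRED_ATTESTATIONS - attestations`) can then drive the follow-up request to the vendor.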
Common Questions
Which uses of ChatGPT are approved?
Approved uses include: drafting emails, summarising public documents, brainstorming, proofreading, creating presentations, and research. Conditional uses (requiring anonymisation or approval) include: survey analysis, policy drafts, and customer communication templates. Prohibited uses include: inputting PII, financial data, or source code.
Which uses are prohibited?
Prohibited uses include: inputting customer personal data, employee salary information, proprietary source code, pre-release financial data, or audit findings. Using AI for hiring decisions, generating legal/medical advice, or bypassing security controls is also prohibited.
How should these guidelines be communicated?
Best practices: (1) create a one-page quick reference card, (2) include in mandatory AI training, (3) post in team communication channels, (4) update quarterly, and (5) share positive examples of approved AI use. Clear, accessible guidelines reduce shadow AI use and increase responsible adoption.
References
- AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology (NIST), 2023.
- ISO/IEC 42001:2023 — Artificial Intelligence Management System. International Organization for Standardization, 2023.
- Model AI Governance Framework (Second Edition). PDPC and IMDA Singapore, 2020.
- Model AI Governance Framework for Generative AI. Infocomm Media Development Authority (IMDA), 2024.
- ASEAN Guide on AI Governance and Ethics. ASEAN Secretariat, 2024.
- OECD Principles on Artificial Intelligence. OECD, 2019.
- EU AI Act — Regulatory Framework for Artificial Intelligence. European Commission, 2024.
