Setting Clear Boundaries for ChatGPT at Work
One of the biggest challenges companies face with AI adoption is ambiguity. Employees want to use ChatGPT but are unsure what is allowed. Without clear guidelines, the result is either shadow AI use (employees using tools without permission) or AI avoidance (employees not using tools out of fear).
This guide provides a clear framework for categorising ChatGPT use cases.
The Three Categories
Approved (Green Light)
Use freely without additional approval. These tasks involve no sensitive data, and their outputs should still be reviewed before sharing.
Conditionally Approved (Yellow Light)
Permitted with specific safeguards. Requires data anonymisation, manager awareness, or additional review steps.
Prohibited (Red Light)
Never permitted, regardless of circumstances. These use cases involve unacceptable data, legal, or ethical risks.
Approved Use Cases (Green Light)
| Use Case | Department | Notes |
|---|---|---|
| Draft emails | All | Review before sending |
| Summarise public documents | All | Verify key facts |
| Brainstorm ideas | All | No restrictions |
| Proofread and edit writing | All | Check suggestions are appropriate |
| Create meeting agendas | All | No sensitive content required |
| Research public information | All | Verify with primary sources |
| Draft social media posts | Marketing | Review for brand consistency |
| Generate blog outlines | Marketing | Edit and add original insights |
| Create training quiz questions | L&D | Review for accuracy |
| Write job descriptions | HR | Use standard role info only |
| Draft SOP templates | Operations | Use process info, not data |
| Explain concepts/calculations | Finance | No confidential figures |
| Create presentation outlines | All | Add real data manually |
| Language translation | All | Review for accuracy and tone |
Conditionally Approved Use Cases (Yellow Light)
| Use Case | Safeguard Required | Who Approves |
|---|---|---|
| Analyse survey results | Anonymise all responses first | Manager |
| Draft HR policies | No employee-specific info | HR Head |
| Create proposal templates | Remove pricing/confidential terms | Sales Manager |
| Summarise meeting notes | Remove sensitive discussions | Attendees' approval |
| Generate performance review drafts | Use competency frameworks, not names | HR + Manager |
| Draft customer communications | No account details or PII | Team Lead |
| Create vendor evaluation templates | Remove vendor names if confidential | Procurement |
| Analyse operational data | Aggregate and anonymise | Operations Head |
| Draft board paper structure | Illustrative figures only | CFO/CEO |
| Create training case studies | Based on public examples only | L&D Manager |
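Several of the yellow-light safeguards above call for anonymising data before it reaches ChatGPT. As a minimal sketch of what that pre-processing step could look like (the `anonymise` helper, the placeholder tokens, and the regex patterns are illustrative assumptions, not a vetted PII scrubber):

```python
import re

# Illustrative PII patterns only -- not exhaustive; adapt to your own data.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[IC]": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),   # NRIC-style ID format
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
}

def anonymise(text: str) -> str:
    """Replace common PII patterns with placeholder tokens before prompting."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(anonymise("Contact Tan at tan@example.com or +65 9123 4567, IC S1234567A."))
# -> Contact Tan at [EMAIL] or [PHONE], IC [IC].
```

Pattern-based redaction like this catches obvious identifiers but misses names and free-text context, so it supports, rather than replaces, the manager review step in the table above.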
Prohibited Use Cases (Red Light)
| Use Case | Why Prohibited |
|---|---|
| Input customer PII (names, IC, addresses) | PDPA violation risk |
| Input employee salary or personal data | Privacy and employment law risk |
| Paste proprietary source code | IP and trade secret risk |
| Input financial statements before public release | Insider information risk |
| Use AI for hiring/firing decisions | Bias and discrimination risk |
| Generate legal or medical advice | Professional liability risk |
| Input audit findings or investigation details | Confidentiality breach |
| Create deepfakes or impersonate others | Ethical and legal violation |
| Bypass company security controls | Security policy violation |
| Use personal AI accounts for work | Data governance violation |
How to Handle Edge Cases
When an employee encounters a use case not clearly covered:
- Apply the data classification test: Is any Restricted (Red) data involved? If yes, it is prohibited.
- Apply the disclosure test: Would you be comfortable if the AI company's employees could see this prompt? If no, do not proceed.
- Apply the review test: Can the output be properly reviewed before use? If no, find an alternative approach.
- Ask your manager: If still unsure, escalate before using AI.
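The four tests above can be codified as a simple triage helper. This is a hypothetical sketch; the `UseCase` fields and return labels are illustrative, not part of any official policy tooling:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    involves_restricted_data: bool   # data classification test
    comfortable_if_disclosed: bool   # disclosure test
    output_reviewable: bool          # review test
    still_unsure: bool = False       # escalation test

def triage(case: UseCase) -> str:
    """Apply the edge-case tests in order and return a decision."""
    if case.involves_restricted_data:
        return "prohibited"
    if not case.comfortable_if_disclosed:
        return "do not proceed"
    if not case.output_reviewable:
        return "find an alternative approach"
    if case.still_unsure:
        return "escalate to manager"
    return "proceed"

print(triage(UseCase(involves_restricted_data=True,
                     comfortable_if_disclosed=True,
                     output_reviewable=True)))
# -> prohibited
```

Note that the tests are applied in order: restricted data is a hard stop before any of the softer judgment calls are considered.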
Department-Specific Quick Reference
HR Team
- Approved: Job descriptions, interview questions, training content, process documentation
- Conditional: Survey analysis (anonymise), policy drafts (no individual data)
- Prohibited: Salary data, performance reviews with names, disciplinary records
Sales Team
- Approved: Prospect research (public info), email drafts, proposal templates, objection handling
- Conditional: Client communication drafts (remove details), CRM summaries
- Prohibited: Client contracts, pricing agreements, competitive intelligence documents
Finance Team
- Approved: Report structure drafts, SOP creation, concept explanations, formula help
- Conditional: Management report narratives (illustrative figures only)
- Prohibited: Actual financial data, tax calculations, audit findings, investor information
IT Team
- Approved: Technical documentation, error research (public messages), architecture discussions
- Conditional: Code review (non-proprietary code only)
- Prohibited: Source code with trade secrets, API keys, security configurations, access credentials
Communicating These Guidelines
- Create a one-page quick reference card that employees can keep at their desk
- Include in AI training as a mandatory module
- Post in team channels (Slack, Teams) for easy reference
- Update quarterly as new use cases emerge
- Celebrate good examples — share success stories of approved AI use
Related Reading
- ChatGPT Company Policy — Create a formal usage policy around approved use cases
- AI Use-Case Intake Process — A structured process for evaluating and approving AI use cases
- AI Evaluation Framework — Measure quality, risk, and ROI of AI implementations
Frequently Asked Questions
What ChatGPT use cases are approved at work?
Approved uses include drafting emails, summarising public documents, brainstorming, proofreading, creating presentation outlines, and researching public information. Conditional uses (requiring anonymisation or approval) include survey analysis, policy drafts, and customer communication templates. Prohibited uses include inputting PII, financial data, or source code.
What ChatGPT use cases are prohibited at work?
Prohibited uses include inputting customer personal data, employee salary information, proprietary source code, pre-release financial data, or audit findings. Using AI for hiring decisions, generating legal or medical advice, or bypassing security controls is also prohibited.
How should companies communicate these guidelines?
Best practices: (1) create a one-page quick reference card, (2) include the guidelines in mandatory AI training, (3) post them in team communication channels, (4) update them quarterly, and (5) share positive examples of approved AI use. Clear, accessible guidelines reduce shadow AI use and increase responsible adoption.
