
When employees use ChatGPT at work, every prompt they type potentially shares company data with an external service. While enterprise AI plans have stronger data protections, the risk of data leakage is real, and one careless prompt can expose customer information, trade secrets, or confidential business data.
This guide explains the specific risks and practical steps to prevent data leakage.
- **Direct input of personal data.** An employee pastes a customer complaint email (including the customer's name, account number, and order details) into ChatGPT to draft a response. The customer's personal data is now processed by an external service.
- **Accumulation of confidential context.** Over multiple prompts, an employee shares enough context about a confidential project (team names, financial targets, strategic plans) that the accumulated information constitutes a confidential briefing.
- **Exposure of intellectual property.** A developer pastes proprietary source code into ChatGPT for debugging help. The code may contain algorithms, API keys, or business logic that constitutes trade secrets.
- **Use of prompts for model training.** With consumer-tier AI products, user prompts may be used to improve the model, meaning sensitive data could theoretically influence future outputs visible to other users. (Enterprise plans typically exclude data from training.)
The first defence against data leakage is a clear data classification system. Every piece of information in your company falls into one of these categories:
- **Public.** Information that is already publicly available or intended for public distribution. AI Rule: Can be freely used with any AI tool.
- **Internal.** Information that is not confidential but is meant for internal use only. AI Rule: May be used with approved enterprise AI tools only (not free-tier consumer products).
- **Confidential.** Information that could harm the company or individuals if disclosed. AI Rule: Must be anonymised before use; remove all identifying details (names, numbers, dates). Use only with approved enterprise AI tools.
- **Restricted.** Information that must never enter any external AI system. AI Rule: NEVER enter into any AI tool, under any circumstances.
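The classification rules above can be sketched as a simple lookup that a pre-submission check might apply. This is a minimal illustration, not any product's real API: the tier names follow the guidelines above, while the `RULES` table and `check_ai_use` helper are hypothetical.

```python
# Illustrative sketch: map each classification tier to the AI tool
# categories it may be used with, per the guidelines above.
RULES = {
    "public": {"consumer", "enterprise"},   # freely usable with any AI tool
    "internal": {"enterprise"},             # approved enterprise tools only
    "confidential": {"enterprise"},         # enterprise only, after anonymisation
    "restricted": set(),                    # never enters any external AI system
}

def check_ai_use(tier: str, tool: str) -> bool:
    """Return True if data of the given tier may be sent to the given tool category."""
    allowed = RULES.get(tier.lower())
    if allowed is None:
        raise ValueError(f"Unknown classification tier: {tier}")
    return tool in allowed

# Example checks against the rules above:
assert check_ai_use("public", "consumer")
assert not check_ai_use("internal", "consumer")
assert not check_ai_use("restricted", "enterprise")
```

Note that the table alone cannot express the anonymisation precondition on confidential data; a real tool would pair a check like this with redaction before anything is submitted.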
Consumer-tier AI products (free ChatGPT, free Claude) have different data handling practices from enterprise plans. Key differences:
| Feature | Consumer/Free | Enterprise |
|---|---|---|
| Data used for training | Often yes | Typically no |
| Data retention | Extended | Limited/configurable |
| Admin controls | None | Full |
| Usage monitoring | None | Audit logs |
| Data processing agreement | None | Available |
| Compliance certifications | Limited | SOC 2, ISO 27001 |
Every employee who uses AI tools must understand the classification tiers above, which tier the data in front of them falls into, and which AI tools (if any) are approved for that tier.
Before pasting any text into an AI tool, check for and remove:

- Names, account numbers, order details, and other personal identifiers
- Dates and reference numbers that could identify individuals or deals
- Credentials such as API keys, passwords, or tokens
- Proprietary source code, financial targets, and strategic plans
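A check like this can be partially automated. The sketch below is a minimal, assumption-laden example: the regular expressions cover only a few obvious patterns (email addresses, long digit runs such as account numbers, and key-like strings), and automated redaction is a supplement to human review, never a replacement.

```python
import re

# Minimal redaction sketch. Patterns are illustrative and deliberately
# narrow; real deployments need far broader coverage and human review.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{6,}\b"), "[NUMBER]"),                   # account/order numbers
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
     "[CREDENTIAL]"),                                          # API-key-like strings
]

def scrub(text: str) -> str:
    """Replace obviously sensitive substrings before text leaves the company."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.tan@example.com about account 90218833."))
# -> Contact [EMAIL] about account [NUMBER].
```

A scrubber like this catches the mechanical cases (emails, numbers, credentials) but not contextual leaks such as strategic plans described in plain prose, which is why the manual checklist still matters.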
When data leakage occurs (or is suspected):

1. Stop the session immediately.
2. Document exactly what was shared, and with which tool.
3. Report to IT Security within 1 hour.
4. If personal data was involved, assess PDPA notification requirements.
5. Update safeguards to prevent recurrence (additional training, technical controls, or policy updates).
Singapore's Personal Data Protection Act (PDPA) requires organisations to protect personal data and obtain consent for its use. Inputting personal data into AI tools without proper safeguards may constitute a breach. Penalties can reach S$1 million per breach.
Malaysia's Personal Data Protection Act similarly requires organisations to safeguard personal data. Sharing personal data with AI services may violate data processing principles if proper consent and safeguards are not in place.
A company with effective AI data protection classifies its data before anyone touches an AI tool, provides approved enterprise tools with data processing agreements and audit logs in place, trains every employee on what may and may not be entered, and follows a clear incident response process when something goes wrong.
Yes, if employees input sensitive information into AI tools. The risks include: direct input of personal data, accumulation of confidential context across prompts, and exposure of intellectual property. Enterprise AI plans provide stronger protections, but employee training and data classification are essential safeguards.
ChatGPT Enterprise is significantly safer than consumer/free versions. Data is not used for model training, retention is configurable, admin controls are available, and SOC 2 compliance is maintained. However, even with Enterprise, employees must follow data classification guidelines: do not input restricted data (PII, credentials, source code).
Immediately stop the session, document what was shared, and report to IT Security within 1 hour. If personal data was involved, assess PDPA notification requirements. Then update safeguards to prevent recurrence; this may include additional training, technical controls, or policy updates.