
An AI Acceptable Use Policy (AUP) is a practical, employee-facing document that tells your team exactly what they can and cannot do with AI tools at work. Unlike a comprehensive AI governance framework (which is designed for leadership and compliance teams), an AUP is written in plain language for everyday employees.
Think of it as the difference between a full employment law manual and a simple employee handbook. The AUP is the handbook — clear, actionable, and designed to be read in 10 minutes.
Many companies make the mistake of embedding AI rules into their broader IT policy or data privacy policy. The problem is that employees rarely read those documents, and even when they do, the AI-specific guidance gets lost in dozens of pages of general IT rules.
A standalone AI AUP solves this: it is short enough to be read in full, covers only the AI rules employees need day to day, and can be updated quickly as tools and rules change.
Effective date: [DATE]
Applies to: All employees, contractors, and temporary staff
Policy owner: [CTO / CISO / Head of Digital]
This policy applies to your use of any artificial intelligence tool for work purposes, including but not limited to:

- AI chatbots and assistants (e.g. ChatGPT)
- AI writing and content-generation tools
- AI coding assistants
- AI features built into other software you use for work
You may only use AI tools that have been approved by [COMPANY]. Currently approved tools are:
| Tool | What It Can Be Used For |
|---|---|
| [Tool 1] | [Approved use cases] |
| [Tool 2] | [Approved use cases] |
| [Tool 3] | [Approved use cases] |
Important: Free or personal versions of AI tools (e.g. the free version of ChatGPT) are not approved for work use. Always use the enterprise/company account.
To request a new AI tool, contact [designated person/team].
Data rules — never input these into any AI tool:

- Personal data about customers, employees, or candidates
- Client-confidential information or anything covered by an NDA
- Passwords, API keys, or other credentials
- Unreleased financial information, trade secrets, or other company-confidential material
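(An aside for the team implementing this policy, not for the employee-facing text itself: data rules like these can be backed by a technical guardrail. The sketch below scans a prompt for patterns that resemble restricted data before it reaches an AI tool. The category names and regular expressions are assumptions for this example; a real deployment would use a dedicated DLP product rather than hand-rolled patterns.)

```python
import re

# Hypothetical patterns approximating the restricted-data categories above.
# Illustrative only; production systems should use a proper DLP tool.
RESTRICTED_PATTERNS = {
    "credential": re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the restricted-data categories a prompt appears to contain."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    hits = check_prompt("Please debug this: api_key = sk-12345")
    if hits:
        print("Blocked - prompt appears to contain: " + ", ".join(hits))
```

A check like this will miss restricted data that does not match a pattern, so it supplements the written rules rather than replacing employee judgment.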
Usage rules:

- Use only approved tools, and only through your company account
- Verify AI output before relying on it (see below)
- Follow the disclosure requirements in the table below
- Never let an AI tool make a final decision about a person without human review
Before using any AI output in your work:

- Check facts, figures, and references against reliable sources
- Review for errors, bias, and anything that conflicts with [COMPANY] standards
- Confirm the output does not reveal confidential information
- Remember that you remain responsible for the final work product
| Situation | Disclosure Required? |
|---|---|
| Internal emails and notes | No |
| Internal reports and presentations | No, but recommended for transparency |
| Client deliverables | Yes — inform the client or follow client-specific AI policies |
| Regulatory filings or submissions | Yes — must be reviewed by compliance team |
| Published content (blog, social media) | Follow [COMPANY]'s content policy |
| Recruitment and HR decisions | Yes — document AI involvement |
If you accidentally input restricted data, encounter a concerning AI output, or are unsure about an AI use case:

1. Stop using the tool for that task
2. Report the issue to [designated person/team] as soon as possible
3. Do not attempt to delete or conceal what happened
Reporting incidents promptly is essential and will not result in disciplinary action for good-faith mistakes.
By using AI tools at [COMPANY], you agree to:

- Follow the rules set out in this policy
- Complete any required AI training
- Report incidents and suspected violations promptly
Fill in the approved tools table, designate the policy owner and incident contact, and adjust the disclosure requirements to match your business context.
The AUP should be no more than 2-3 pages. If it is longer, employees will not read it. Move detailed technical and legal content to a separate governance document.
Post the AUP on your intranet, include it in onboarding materials, and distribute printed copies during AI training sessions.
An email attachment is not sufficient. Walk employees through the policy in a 30-minute session, with real examples of do's and don'ts.
The policy only works if it is enforced. Address violations promptly and consistently, while encouraging good-faith incident reporting.
| AI Policy (Governance Document) | AI AUP (Employee Document) |
|---|---|
| Comprehensive, 10-20+ pages | Concise, 2-3 pages |
| Covers strategy, risk, compliance | Covers daily do's and don'ts |
| Audience: leadership, legal, compliance | Audience: all employees |
| Updated quarterly | Updated as tools/rules change |
| References regulations in detail | References regulations simply |
Most companies need both. The AI policy is the governance foundation; the AI AUP is the practical employee guide derived from it.
Policy enforcement requires clear consequences for violations combined with accessible support channels that help employees comply. Define a graduated response framework where initial minor violations trigger educational interventions and additional training rather than punitive measures, while repeated or serious violations involving data breaches or regulatory non-compliance trigger formal disciplinary processes. Establish an AI policy helpdesk or designated contact person where employees can ask questions about whether specific AI uses are permitted before taking action. Regular compliance spot checks, where managers randomly review team AI usage patterns against policy requirements, maintain ongoing awareness that the policy is actively monitored and enforced rather than existing only on paper.
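As a minimal sketch, the graduated framework above can be expressed as a small decision rule. The severity labels and thresholds here are assumptions for illustration, not values the policy mandates.

```python
from dataclasses import dataclass

# Illustrative model of the graduated response framework described above.
@dataclass
class Violation:
    employee_id: str
    serious: bool      # e.g. data breach or regulatory non-compliance
    prior_minor: int   # documented minor violations already on record

def respond(v: Violation) -> str:
    """Map a violation to a response tier, escalating with severity and repetition."""
    if v.serious:
        return "formal disciplinary process"
    if v.prior_minor == 0:
        return "educational intervention and additional training"
    return "documented warning and manager follow-up"
```

Writing the tiers down this explicitly, even just in a runbook, helps keep responses consistent across managers and departments.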
AI capabilities expand rapidly, creating new use cases that existing policies may not address. Establish a process for employees to submit new AI use case requests for policy review, enabling the organization to evaluate novel applications against risk criteria before widespread adoption. The review process should assess data sensitivity implications, regulatory compliance requirements, intellectual property considerations, and potential reputational risks for each proposed use case. Approved use cases should be added to the policy with specific guidelines, while rejected use cases should be documented with explanations that help employees understand the reasoning and identify alternative approaches that would be permitted.
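To show how such a review might be standardised, the sketch below captures each request in a simple intake record covering the four risk criteria named above. The field names and risk levels are assumptions for the example, not part of any prescribed process.

```python
from dataclasses import dataclass

# Hypothetical intake record for a new AI use case request.
@dataclass
class UseCaseRequest:
    requester: str
    tool: str
    description: str
    data_sensitivity: str      # e.g. "public", "internal", "confidential"
    regulatory_impact: bool    # does the output feed a regulated process?
    ip_concerns: bool          # does it touch proprietary code or content?
    reputational_risk: str     # e.g. "low", "medium", "high"
    decision: str = "pending"  # "approved", "rejected", or "pending"
    rationale: str = ""        # documented either way, per the policy above

def needs_full_review(req: UseCaseRequest) -> bool:
    """Route higher-risk requests to compliance rather than fast-track approval."""
    return (req.data_sensitivity == "confidential"
            or req.regulatory_impact
            or req.reputational_risk == "high")
```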
Different departments face distinct AI usage risks that a comprehensive acceptable use policy must address. Legal departments using AI for contract review must ensure that AI-generated analyses are verified by qualified attorneys before being relied upon. Marketing teams using AI for content creation need guidelines about disclosure requirements, brand voice consistency, and intellectual property verification for AI-generated materials. Engineering teams using AI coding assistants require guidance about code review requirements, license compliance for AI-suggested code, and documentation standards for AI-assisted development. Addressing these department-specific considerations within the overarching policy ensures that all employees receive relevant guidance while maintaining organizational consistency.
Establish a structured annual review process that evaluates whether the acceptable use policy remains current, practical, and effective. Gather input from employees across departments about policy provisions that are unclear, unnecessarily restrictive, or missing coverage for new AI tools and use cases they have encountered. Review incident reports from the past year to identify policy gaps revealed by actual violations or near-misses. Benchmark policy provisions against industry peers and regulatory developments to ensure your organization maintains competitive and compliant AI usage standards.
An AI policy is a comprehensive governance document covering strategy, risk management, compliance, and organisational AI oversight. An AI acceptable use policy (AUP) is a shorter, employee-facing document that provides clear do's and don'ts for daily AI use. Most companies need both — the policy for governance, the AUP for practical guidance.
Free or personal versions of AI tools like ChatGPT should not be used for work purposes: they often use inputs for model training and lack enterprise data protection. The AUP should require employees to use only company-approved enterprise versions, which offer better security, data handling, and audit controls.
Enforcement starts with training — employees must understand the policy before they can follow it. Beyond that, enforcement includes regular reminders, manager accountability, technical controls (blocking unapproved tools via IT), incident reporting mechanisms, and consistent disciplinary follow-up for violations.
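To make the "technical controls" point concrete, here is a minimal sketch of an allowlist check an IT team might apply at an egress proxy. The domain names are placeholder assumptions, and a real deployment would use the policy engine of your existing proxy or CASB rather than custom code.

```python
# Placeholder domain sets; replace with your approved-tools register.
APPROVED_AI_DOMAINS = {"ai.internal.example.com"}
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {
    "chat.openai.com",     # free/personal ChatGPT: not approved for work
    "gemini.google.com",
}

def allow_request(host: str) -> bool:
    """Allow approved AI tools, block other known AI services, pass the rest."""
    if host in APPROVED_AI_DOMAINS:
        return True
    if host in KNOWN_AI_DOMAINS:
        return False  # unapproved AI tool: block and log for follow-up
    return True       # not a known AI service: outside this control

assert allow_request("ai.internal.example.com")
assert not allow_request("chat.openai.com")
```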