Clear policy is the foundation of academic integrity. But many schools are working with pre-AI policies that don't address the nuances of AI assistance—or they've hastily added AI bans that can't be enforced.
This guide provides a comprehensive policy template and implementation roadmap.
Executive Summary
- Effective AI academic honesty policies are clear, enforceable, and focused on learning
- Policies should define categories of AI use, not just blanket rules
- Assignment-level guidance is essential—general policy alone isn't enough
- Consequences should be graduated based on severity and intent
- Implementation requires teacher training, student communication, and ongoing review
- The policy should evolve as AI capabilities and understanding change
Policy Template: AI Academic Honesty
[School Name] Academic Honesty Policy: Artificial Intelligence
Version: [1.0] Effective Date: [Date] Review Date: [Date + 1 year]
1. Purpose
This policy establishes expectations for academic honesty regarding artificial intelligence (AI) tools. It aims to:
- Ensure students develop genuine understanding and skills
- Provide clear guidance on acceptable and unacceptable AI use
- Prepare students for ethical AI use in their future studies and careers
- Maintain fairness for all students
2. Scope
This policy applies to all students at [School Name] for all academic work including assignments, projects, assessments, and examinations.
3. Definitions
Artificial Intelligence (AI) tools include:
- Large language models (ChatGPT, Claude, Gemini, etc.)
- AI writing assistants with generative features
- AI code generators
- AI image, audio, or video generators
- AI-powered research or summarization tools
- Any tool that generates, writes, or creates content using AI
Academic work means any work submitted for academic credit or evaluation.
Original work means work that represents the student's own thinking, understanding, and effort, even when using permitted tools or sources.
4. Core Principles
4.1 Learning is the goal. Academic work should demonstrate and develop your understanding, not just produce output.
4.2 Transparency matters. When AI use is permitted, be honest about how you used it.
4.3 Follow assignment guidelines. Specific assignments may permit, restrict, or prohibit AI use. Follow these guidelines.
4.4 Demonstrate understanding. Be prepared to explain, discuss, or build upon any work you submit.
5. AI Use Categories
Teachers will specify which category applies to each assignment:
| Category | Symbol | Meaning |
|---|---|---|
| AI Prohibited | 🚫 | No AI tools may be used in any part of the work |
| AI for Research Only | 🔍 | AI may be used to find information (like a search engine) but not to generate content |
| AI as Assistant | ✍️ | AI may be used for grammar, spelling, structure suggestions, and brainstorming |
| AI with Disclosure | 📝 | AI may be used more substantially, but you must disclose how |
| AI Unrestricted | ✅ | Use AI however you wish (learning objectives accommodate AI use) |
Default: Unless otherwise specified, assignments are AI for Research Only (🔍).
6. Disclosure Requirements
When AI use requires disclosure (📝 category), include:
- Which AI tool(s) you used
- How you used them (research, drafting, editing, etc.)
- Which portions of the work were AI-assisted
Example disclosure: "I used ChatGPT to brainstorm initial ideas and create an outline. All writing and analysis is my own."
7. Prohibited Conduct
The following constitute academic honesty violations:
7.1 Using AI tools when prohibited for an assignment (🚫)
7.2 Submitting AI-generated content as your own original work without appropriate disclosure
7.3 Using AI to complete work in ways that misrepresent your understanding
7.4 Having AI complete the work and making only superficial edits before submission
7.5 Failing to disclose AI use when disclosure is required
7.6 Using AI to circumvent learning objectives (e.g., having AI write an essay for a writing skills assessment)
8. Consequences
Consequences are determined based on:
- Severity of the violation
- Whether the student understood the rules
- Whether this is a first or repeat offense
- The student's response when addressed
| Level | Circumstances | Typical Consequences |
|---|---|---|
| Level 1 | Minor violation, first offense, rules unclear | Educational conversation, redo assignment |
| Level 2 | Clear violation, first offense | Grade reduction, parent notification, recorded warning |
| Level 3 | Significant violation or repeat offense | Zero on assignment, formal disciplinary record |
| Level 4 | Severe or repeated violations | Failure in course, extended disciplinary action |
9. Investigation Process
9.1 Initial concern: Teacher identifies potential violation
9.2 Conversation: Teacher discusses with student privately before any accusation
9.3 Evidence gathering: Teacher considers multiple factors (detection tools are one input, never sole evidence)
9.4 Decision: Following school disciplinary procedures
9.5 Appeal: Students may appeal through standard school procedures
10. Teacher Responsibilities
Teachers will:
- Clearly communicate AI expectations for each assignment
- Specify which AI use category applies
- Design assessments that promote genuine learning
- Apply policy consistently
- Report concerns following school procedures
11. Student Rights
Students have the right to:
- Clear communication about AI expectations
- Fair and consistent application of policy
- Opportunity to explain their work before conclusions are drawn
- Appeal decisions through school procedures
- Not be accused based solely on detection tool output
12. Policy Review
This policy will be reviewed annually and updated as AI technology and educational understanding evolve.
Acknowledgment
I have read and understand [School Name]'s AI Academic Honesty Policy.
Student Name: _________________________ Date: _____________
Student Signature: _________________________
Parent/Guardian Name: _________________________ Date: _____________
Parent/Guardian Signature: _________________________
Implementation Guide
Phase 1: Development (4-6 weeks)
Week 1-2: Stakeholder input
- Gather teacher feedback on current challenges
- Review student understanding of existing policy
- Consult with legal/compliance if needed
Week 3-4: Policy drafting
- Adapt template to school context
- Review with department heads
- Legal review if significant changes
Week 5-6: Approval
- Present to leadership
- Board approval if required
- Finalize documentation
Phase 2: Communication (2-4 weeks)
Teachers:
- Professional development session on policy
- Training on AI use categories
- Practice applying categories to assignments
Students:
- Assembly or class presentation
- Discussion in advisory/homeroom
- Signed acknowledgment
Parents:
- Newsletter communication
- Parent information session (optional)
- FAQ document
Phase 3: Implementation (Ongoing)
First month:
- Teachers label all assignments with AI category
- Focus on education rather than enforcement
- Collect questions and confusion points
First semester:
- Address issues as learning opportunities
- Gather feedback from teachers and students
- Note needed policy clarifications
End of year:
- Formal policy review
- Update based on experience
- Communicate any changes
Common Implementation Challenges
Challenge 1: Teachers apply policy inconsistently
Solution: Regular calibration sessions. Share examples of how different teachers are applying policy. Create department-level alignment.
Challenge 2: Students claim they didn't understand
Solution: Require signed acknowledgment. Teachers must specify the AI category on every assignment. Over-communicate expectations at the start of the year.
Challenge 3: Parents disagree with policy
Solution: Explain rationale (learning, not punishment). Offer to discuss concerns. Be willing to listen but maintain core principles.
Challenge 4: Detection tools create conflict
Solution: Clear protocol that detection is never sole evidence. Train teachers on limitations. Focus on investigation, not accusation.
Frequently Asked Questions
Q1: What if teachers forget to specify AI category?
The default applies (AI for Research Only). Remind teachers to specify a category, especially on major assignments.
Q2: How do we handle AI built into tools students already use (Grammarly, Google Docs)?
Generally acceptable as writing assistants unless the assignment specifies AI Prohibited. Teachers should clarify where basic assistive features (grammar and spell checking) end and generative AI use begins.
Q3: What about collaborative work where one student used AI?
All group members are responsible for group submissions. Groups should discuss and follow AI guidelines together.
Q4: Can students appeal based on detection tool inaccuracy?
Detection tool output should never be the sole evidence. If a decision rested on it alone, that is grounds for reconsideration on appeal.
Next Steps
Adapt this template to your school context, gather stakeholder input, and commit to ongoing review as AI and education evolve together.
Need help developing your school's academic integrity approach?
→ Book an AI Readiness Audit with Pertama Partners. We'll help you create policies that work.
References
- International Baccalaureate. (2024). Academic Integrity Policy Guidance.
- International Center for Academic Integrity. (2024). Fundamental Values.
- UNESCO. (2024). AI in Education Policy Framework.