
ChatGPT Policy for Schools: Specific Guidelines for Students and Teachers

December 7, 2025 · 7 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: CTO/CIO · Consultant · CISO · CHRO

Separate policy templates for students and teachers regarding ChatGPT and generative AI tools. Practical, enforceable guidelines for school communities.


Key Takeaways

  1. Develop clear guidelines for student ChatGPT usage
  2. Establish teacher policies for AI tool integration
  3. Create age-appropriate rules across grade levels
  4. Build enforcement and monitoring procedures
  5. Communicate policies effectively to all stakeholders

Generic AI policies aren't enough. Schools need specific guidance for ChatGPT and similar generative AI tools, with different rules for students and staff.

This guide provides separate policy templates for students and teachers.


Executive Summary

  • ChatGPT and similar tools require specific policy, not just general AI guidelines
  • Students and teachers need different guidance (different contexts, responsibilities)
  • Policy should address both educational use and data protection concerns
  • Be specific about permitted/prohibited uses rather than blanket rules
  • Account for different age groups and maturity levels
  • Update policy as tools and understanding evolve

Student ChatGPT Policy


[School Name] Student Guidelines: ChatGPT and AI Writing Tools

For Students in Grades [X-Y]


What is ChatGPT?

ChatGPT is an AI tool that can write text, answer questions, and help with many tasks. It's powerful but has limitations—it can be wrong, it doesn't truly understand things, and it can't replace your own thinking and learning.

Why does this matter?

School is about learning. When you use AI to do your thinking for you, you miss the learning. Our guidelines help you use AI in ways that support your learning rather than replace it.


General Rules:

  1. Don't submit AI writing as your own. Work you turn in should represent your thinking and effort.

  2. Follow assignment rules. Each assignment may have different AI rules. Read them carefully.

  3. Be honest about AI use. When you use AI (when allowed), be open about it.

  4. Think critically. AI can be wrong. Don't assume it's correct.

  5. Protect your privacy. Don't put personal information into ChatGPT.


When You CAN Use ChatGPT:

✅ To learn how something works (like a smart search engine)
✅ To explain concepts you don't understand
✅ To brainstorm ideas (then develop them yourself)
✅ To check grammar and spelling (like a spell-checker)
✅ When your teacher specifically says AI is allowed


When You CANNOT Use ChatGPT:

🚫 To write assignments you submit as your own work
🚫 To complete assessments meant to show YOUR understanding
🚫 When your teacher says "No AI" for an assignment
🚫 To take tests or quizzes
🚫 To write personal statements or applications


How to Use AI Responsibly:

  1. Use it to learn, not to avoid learning. Ask "why" questions, not "do my homework" questions.

  2. Verify information. AI makes things up. Check important facts.

  3. Credit when appropriate. If AI helped, say so when required.

  4. Start with your own thinking. AI works best when you have ideas first.


Consequences for Misuse:

Using AI inappropriately is an academic integrity issue. Consequences follow our academic honesty policy and depend on:

  • Whether you knew the rules
  • How you used AI
  • Whether this is a first or repeat issue

Questions?

If you're unsure whether AI use is okay for an assignment, ASK YOUR TEACHER FIRST.


Teacher ChatGPT Policy


[School Name] Guidelines: ChatGPT and AI Tools for Teachers


Purpose:

These guidelines help teachers use AI tools productively while managing risks to students, data security, and academic integrity.


Permitted Uses:

Lesson Planning:
✅ Generating lesson plan ideas and structures
✅ Creating differentiated content variations
✅ Brainstorming activities and discussion questions
✅ Developing rubrics and assessment criteria

Content Creation:
✅ Drafting instructional materials
✅ Creating practice problems or examples
✅ Generating quiz questions (review before use)
✅ Writing parent communication drafts

Administrative Tasks:
✅ Drafting routine communications
✅ Summarizing meeting notes
✅ Organizing information
✅ Creating templates

Professional Learning:
✅ Exploring AI capabilities firsthand
✅ Understanding what students have access to
✅ Developing AI literacy


Restricted Uses (Proceed with Caution):

⚠️ Student data: Never input identifiable student information (names, grades, behavior details) into ChatGPT or similar public AI tools
⚠️ Assessment content: AI-generated questions may be easier for students to find or anticipate
⚠️ Sensitive communications: Review AI drafts carefully before sending
⚠️ Substitute for your judgment: AI can inform decisions but shouldn't make them


Prohibited Uses:

🚫 Inputting student names, grades, or personal information
🚫 Uploading student work for analysis (data protection concern)
🚫 Using AI to write student evaluations or reports without substantial personal input
🚫 Relying on AI for student recommendations without verification
🚫 Using AI in ways that violate school data protection policies


Data Protection Requirements:

  1. Anonymize. If you need AI help with student-related content, remove identifying information.

  2. Use school-approved tools. If the school has enterprise AI tools with data protection, prefer those over public tools.

  3. Assume public. Treat anything you put into public AI tools as potentially public.

  4. Check vendor terms. Understand whether your inputs are used for training.
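
The anonymization step above can be sketched in code. This is a minimal illustration, assuming a hypothetical `anonymize` helper with illustrative redaction patterns; it is not a complete PII scrubber, and real redaction should follow your school's data protection policy:

```python
import re

# Hypothetical helper: redact identifiable student details before
# pasting text into a public AI tool. The name list and the ID pattern
# below are illustrative assumptions, not a complete PII scrubber.
def anonymize(text: str, student_names: list[str]) -> str:
    # Replace each known student name with a numbered placeholder.
    for i, name in enumerate(student_names, start=1):
        text = re.sub(re.escape(name), f"Student {i}", text, flags=re.IGNORECASE)
    # Strip ID-like patterns such as "ID: 123456" (assumed format).
    text = re.sub(r"\bID:\s*\d+\b", "ID: [redacted]", text)
    return text

note = "Ali Tan (ID: 482913) struggled with fractions this week."
print(anonymize(note, ["Ali Tan"]))
# Student 1 (ID: [redacted]) struggled with fractions this week.
```

Even with a helper like this, teachers should re-read the output before submitting it anywhere, since names in unexpected forms (nicknames, possessives) can slip through simple pattern matching.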


Academic Integrity Responsibilities:

  1. Communicate clearly. Specify AI expectations for each assignment.

  2. Design thoughtfully. Create assessments that promote learning even with AI availability.

  3. Model good use. Show students how to use AI appropriately.

  4. Address violations fairly. Follow school protocols; never rely solely on detection tools.


Staying Current:

AI tools change rapidly. Commit to:

  • Ongoing learning about AI capabilities
  • Reviewing and updating practices
  • Sharing effective approaches with colleagues
  • Updating student guidance as needed

Implementation Guide

For Students

Communication:

  1. Age-appropriate assembly or class presentation
  2. Discussion in advisory/homeroom
  3. Poster/visual reminder of key rules
  4. Q&A opportunity

Reinforcement:

  • Reference policy when giving assignments
  • Discuss AI use openly in class
  • Address questions without judgment

For Teachers

Training:

  1. Overview session on AI capabilities
  2. Hands-on exploration with ChatGPT
  3. Discussion of permitted/restricted uses
  4. Data protection reminder

Support:

  • FAQ document for common questions
  • Point person for AI questions
  • Regular check-ins/updates

Next Steps

Adapt these templates to your school context. Train teachers first, then communicate to students with clarity and consistency.

Need help developing your school's ChatGPT approach?

Book an AI Readiness Audit with Pertama Partners. We'll help you create policies that work for your community.


Teacher Guidelines for Classroom ChatGPT Usage

Teachers need clear guidelines specifying how they may use ChatGPT for instructional preparation, student interaction, and professional activities. Permitted uses should include lesson plan development, differentiated assignment creation, formative assessment question generation, and administrative communication drafting. Prohibited uses should address sharing student data with ChatGPT, using AI-generated content for formal student evaluations without human review, and substituting AI responses for personalized student feedback. Guidelines should also encourage teachers to model responsible AI use by discussing their ChatGPT usage with students, demonstrating critical evaluation of AI outputs, and sharing examples of how AI assists but does not replace professional judgment.

Student Guidelines for Responsible ChatGPT Use

Student ChatGPT guidelines should be age-appropriate, clearly written, and accompanied by specific examples that illustrate acceptable and unacceptable use. For each assignment type, specify the level of AI assistance permitted: no AI use, AI for brainstorming only, AI for research assistance with required attribution, or AI-assisted drafting with mandatory disclosure. Require students to document their AI interactions by saving conversation logs or maintaining AI usage journals that instructors can review.

Schools should establish regular policy review cycles that coincide with the academic calendar, evaluating policy effectiveness at the end of each semester and updating provisions to reflect new AI capabilities, emerging academic integrity challenges, and feedback from teachers, students, and parents. Including student representatives in the policy review process ensures that guidelines remain practical and relevant to actual student experiences rather than reflecting administrative assumptions about how students interact with AI tools.

How School ChatGPT Policies Have Evolved Since 2023

Early school ChatGPT policies in 2023 fell into two extremes: blanket bans that proved unenforceable, or unrestricted access that created academic integrity chaos. By 2025, most institutions converged on nuanced policies that permit AI assistance within defined boundaries specific to assignment type, subject area, and student grade level. The most effective current policies use a tiered permission model: Level 0 (no AI permitted) for assessments testing recall and foundational skills, Level 1 (AI for brainstorming only) for developing original arguments, Level 2 (AI as research assistant with attribution) for complex projects, and Level 3 (full AI collaboration) for designated innovation assignments.
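
The tiered permission model above could be encoded as a simple lookup so that expectations are consistent across a department. This sketch is illustrative: the assignment types, the default level, and the `permitted_use` helper are assumptions for one hypothetical school, not a standard:

```python
# Illustrative encoding of the tiered permission model described above.
AI_LEVELS = {
    0: "No AI permitted",
    1: "AI for brainstorming only",
    2: "AI as research assistant (with attribution)",
    3: "Full AI collaboration (with disclosure)",
}

# Assumed assignment-to-level mapping; each school defines its own.
ASSIGNMENT_POLICY = {
    "vocabulary quiz": 0,      # tests recall and foundational skills
    "persuasive essay": 1,     # developing original arguments
    "research project": 2,     # complex, source-based work
    "innovation capstone": 3,  # designated AI-collaboration task
}

def permitted_use(assignment: str) -> str:
    # Unknown assignment types default to the most restrictive level.
    level = ASSIGNMENT_POLICY.get(assignment, 0)
    return f"Level {level}: {AI_LEVELS[level]}"

print(permitted_use("research project"))
# Level 2: AI as research assistant (with attribution)
```

Defaulting unlisted assignments to Level 0 mirrors the "ask your teacher first" rule in the student policy: when in doubt, the restrictive interpretation applies until a teacher says otherwise.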

Comparing Different Schools' Approaches to ChatGPT Policy

School ChatGPT policies fall into four observable models worldwide. Prohibition models ban AI tool usage entirely, which proves unenforceable and drives usage underground. Permission models allow unrestricted AI use with disclosure, which creates inconsistency across subjects and grade levels. Tiered models define different AI usage levels per assignment type, providing the clearest guidance but requiring significant teacher training investment. Integration models redesign curriculum to incorporate AI as a learning tool, representing the most forward-thinking approach but requiring the largest institutional transformation effort. Most schools transitioning from prohibition toward tiered or integration models report improved academic integrity outcomes.

How AI Policy Differs Between Elementary, Middle, and High School

Age-appropriate AI policy requires different guidelines for different developmental stages. Elementary school policies (ages 5-10) should restrict direct student-AI interaction and limit AI usage to teacher-mediated classroom demonstrations. Middle school policies (ages 11-13) can introduce supervised AI interaction for structured research and brainstorming activities with mandatory teacher review of all AI outputs. High school policies (ages 14-18) should teach critical AI literacy alongside progressive independence in AI tool usage, preparing students for the AI-integrated university and workplace environments they will soon enter.

Schools should publish example scenarios alongside their policies showing exactly where the boundary lies between acceptable and unacceptable AI use. For instance: using ChatGPT to brainstorm essay topics (acceptable), using it to generate an outline you then write from (Level 1), having it draft paragraphs you revise (Level 2), and submitting AI-generated text without disclosure (violation). These concrete examples prevent the ambiguity that undermines policy compliance.

Practical Next Steps

To put this ChatGPT policy guidance into practice, consider the following action items:

  • Establish a cross-functional AI policy committee (leadership, teachers, IT, and student representatives) with clear decision-making authority and a regular review cadence.
  • Document your current academic integrity and data protection processes and identify gaps against the AI-specific risks described above.
  • Create standardized templates for assignment-level AI expectations, disclosure statements, and violation handling.
  • Schedule policy reviews each semester so guidelines keep pace with new AI capabilities and with feedback from teachers, students, and parents.
  • Build AI literacy through targeted training for teachers first, then age-appropriate guidance for students.

Common Questions

Should schools ban ChatGPT outright?
Most schools find blanket bans unenforceable and counterproductive. Better approaches define acceptable use contexts, require disclosure, and teach responsible AI use as a future skill.

Why do teachers and students need separate policies?
Teachers need guidelines for instructional use (creating materials, providing feedback), while students need boundaries for learning (when AI helps vs. replaces learning).

How should policies differ by age group?
Younger students need simpler rules and more supervision. Older students can handle nuanced policies about different contexts. All need guidance on responsible AI citizenship.

Michael Lansdowne Hauge

Managing Director · HRDF-Certified Trainer (Malaysia), Delivered Training for Big Four, MBB, and Fortune 500 Clients, 100+ Angel Investments (Seed–Series C), Dartmouth College, Economics & Asian Studies

Managing Director of Pertama Partners, an AI advisory and training firm helping organizations across Southeast Asia adopt and implement artificial intelligence. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs


Talk to Us About AI in Schools / Education Ops

We work with organizations across Southeast Asia on AI in schools and education ops programs. Let us know what you are working on.