
ChatGPT Policy for Schools: Specific Guidelines for Students and Teachers

December 7, 2025 · 7 min read · Michael Lansdowne Hauge
Updated March 15, 2026
For: CTO/CIO · Consultant · CISO · CHRO

Separate policy templates for students and teachers regarding ChatGPT and generative AI tools. Practical, enforceable guidelines for school communities.


Key Takeaways

  1. Develop clear guidelines for student ChatGPT usage
  2. Establish teacher policies for AI tool integration
  3. Create age-appropriate rules across grade levels
  4. Build enforcement and monitoring procedures
  5. Communicate policies effectively to all stakeholders

Generic AI policies aren't enough. Schools need specific guidance for ChatGPT and similar generative AI tools, with different rules for students and staff.

This guide provides separate policy templates for students and teachers.


Executive Summary

ChatGPT and similar generative AI tools demand their own dedicated policy frameworks rather than vague general AI guidelines. Students and teachers operate in fundamentally different contexts with distinct responsibilities, which means they require separate guidance. Any credible policy must address both educational use and data protection concerns, specifying permitted and prohibited uses with precision rather than resorting to blanket rules. The policy should account for different age groups and maturity levels, and schools should plan to update their frameworks as these tools and institutional understanding of them continue to evolve.


Student ChatGPT Policy


[School Name] Student Guidelines: ChatGPT and AI Writing Tools

For Students in Grades [X-Y]


What is ChatGPT?

ChatGPT is an AI tool that can write text, answer questions, and help with many tasks. It's powerful but has limitations: it can be wrong, it doesn't truly understand things, and it can't replace your own thinking and learning.

Why does this matter?

School is about learning. When you use AI to do your thinking for you, you miss the learning. Our guidelines help you use AI in ways that support your learning rather than replace it.


General Rules:

All work you submit should represent your own thinking and effort, which means you must never submit AI-generated writing as your own. Each assignment may carry different AI rules, so read instructions carefully every time. When AI use is permitted, be transparent about how you used it. Always think critically about AI outputs because they can be factually wrong. Finally, protect your privacy by never entering personal information into ChatGPT.


When You CAN Use ChatGPT:

You may use ChatGPT to learn how something works, much like a smart search engine, or to have concepts explained that you do not yet understand. Brainstorming ideas is also acceptable, provided you develop them yourself afterward. Using it to check grammar and spelling, in the same way you would use a spell-checker, is fine. Above all, you may use it whenever your teacher specifically says AI is allowed for a given task.


When You CANNOT Use ChatGPT:

You may not use ChatGPT to write assignments you plan to submit as your own work, nor to complete assessments designed to measure your personal understanding. If your teacher has marked an assignment "No AI," that instruction is absolute. Tests, quizzes, personal statements, and applications are always off-limits to AI assistance.


How to Use AI Responsibly:

Use AI to learn, not to avoid learning. Frame your prompts around "why" questions rather than "do my homework" requests. Verify any information it provides because AI regularly fabricates facts. Credit AI assistance whenever required. Most importantly, start with your own thinking first, as AI works best when you already have ideas to build on.


Consequences for Misuse:

Using AI inappropriately is an academic integrity issue. Consequences follow our academic honesty policy and depend on whether you knew the rules, how you used AI, and whether this is a first or repeat issue.


Questions?

If you're unsure whether AI use is okay for an assignment, ASK YOUR TEACHER FIRST.


Teacher ChatGPT Policy


[School Name] Guidelines: ChatGPT and AI Tools for Teachers


Purpose:

These guidelines help teachers use AI tools productively while managing risks to students, data security, and academic integrity.


Permitted Uses:

Lesson Planning: Teachers may use AI for generating lesson plan ideas and structures, creating differentiated content variations for diverse learners, brainstorming classroom activities and discussion questions, and developing rubrics along with assessment criteria.

Content Creation: Permitted applications include drafting instructional materials, creating practice problems or worked examples, generating quiz questions (which must be reviewed before use), and writing initial drafts of parent communications.

Administrative Tasks: AI may assist with drafting routine communications, summarizing meeting notes, organizing information, and creating reusable templates for common workflows.

Professional Learning: Teachers are encouraged to explore AI capabilities firsthand to understand what students have access to and to develop their own AI literacy.


Restricted Uses (Proceed with Caution):

Teachers should never input identifiable student information such as names, grades, or behavior details into ChatGPT or similar public AI tools. AI-generated assessment questions carry risk because students may find them easier to anticipate. Any AI-drafted sensitive communications require careful human review before sending. While AI can inform professional decisions, it should never substitute for a teacher's own judgment.


Prohibited Uses:

Under no circumstances should teachers input student names, grades, or personal information into public AI tools. Uploading student work for analysis raises serious data protection concerns and is not permitted. Writing student evaluations or reports without substantial personal input is prohibited, as is relying on AI for student recommendations without independent verification. Any use of AI that violates the school's data protection policies is strictly forbidden.


Data Protection Requirements:

If you need AI help with student-related content, anonymize the data by removing all identifying information before input. Whenever the school provides enterprise AI tools with built-in data protection, prefer those over public alternatives. Treat anything entered into a public AI tool as potentially public. Check vendor terms of service to understand whether your inputs are being used to train future models.
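The anonymization step above can be partially automated. The sketch below is a hypothetical, minimal example of pattern-based redaction before pasting student-related text into a public AI tool; the patterns and placeholder labels are illustrative assumptions, and regex redaction alone cannot guarantee PII removal, so it should complement manual review rather than replace it.

```python
import re

# Illustrative redaction patterns (assumptions, not a complete PII taxonomy):
# long digit runs stand in for student IDs, plus a simple email matcher.
REDACTIONS = [
    (re.compile(r"\b\d{4,}\b"), "[ID]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def redact(text: str, names: list[str]) -> str:
    """Replace known student names and common identifiers with placeholders."""
    # Redact known names first, case-insensitively.
    for name in names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    # Then apply the generic patterns.
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Aisha Tan (ID 204417) emailed aisha.tan@school.edu about her grade."
print(redact(sample, ["Aisha Tan"]))
# → [STUDENT] (ID [ID]) emailed [EMAIL] about her grade.
```

A human should still read the redacted text before submission, since names in unexpected forms (nicknames, initials) will slip past simple patterns.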


Academic Integrity Responsibilities:

Communicate AI expectations clearly for each assignment so students understand exactly what is and is not permitted. Design assessments thoughtfully so they promote genuine learning even when AI tools are available. Model responsible use by showing students how you incorporate AI into your own professional practice. When violations occur, address them fairly by following school protocols and never relying solely on AI detection tools, which remain unreliable.


Staying Current:

AI tools change rapidly. Teachers should commit to ongoing learning about AI capabilities, regularly reviewing and updating their classroom practices, sharing effective approaches with colleagues, and updating student guidance as new tools and challenges emerge.


Implementation Guide

For Students

Communication: Begin with an age-appropriate assembly or class presentation that introduces the policy. Follow this with a discussion in advisory or homeroom where students can process the guidelines in a smaller setting. Display a poster or visual reminder of the key rules in classrooms. Provide a dedicated Q&A opportunity so students can raise questions without pressure.

Reinforcement: Reference the policy explicitly when giving assignments, discuss AI use openly and regularly in class, and address student questions without judgment to maintain an environment where honest disclosure feels safe.

For Teachers

Training: Start with an overview session on current AI capabilities so all staff share a common baseline. Follow with hands-on exploration time where teachers can experiment with ChatGPT directly. Then hold a structured discussion of permitted and restricted uses, and close with a focused data protection reminder.

Support: Provide a FAQ document that addresses common questions, designate a point person for ongoing AI queries, and schedule regular check-ins and updates as the landscape evolves.


Next Steps

Adapt these templates to your school context. Train teachers first, then communicate to students with clarity and consistency.

Need help developing your school's ChatGPT approach?

Book an AI Readiness Audit with Pertama Partners. We'll help you create policies that work for your community.


Teacher Guidelines for Classroom ChatGPT Usage

Teachers need clear guidelines specifying how they may use ChatGPT for instructional preparation, student interaction, and professional activities. Permitted uses should include lesson plan development, differentiated assignment creation, formative assessment question generation, and administrative communication drafting. Prohibited uses should address sharing student data with ChatGPT, using AI-generated content for formal student evaluations without human review, and substituting AI responses for personalized student feedback. Guidelines should also encourage teachers to model responsible AI use by discussing their ChatGPT usage with students, demonstrating critical evaluation of AI outputs, and sharing examples of how AI assists but does not replace professional judgment.

Student Guidelines for Responsible ChatGPT Use

Student ChatGPT guidelines should be age-appropriate, clearly written, and accompanied by specific examples that illustrate acceptable and unacceptable use. For each assignment type, specify the level of AI assistance permitted: no AI use, AI for brainstorming only, AI for research assistance with required attribution, or AI-assisted drafting with mandatory disclosure. Require students to document their AI interactions by saving conversation logs or maintaining AI usage journals that instructors can review.

Schools should establish regular policy review cycles that coincide with the academic calendar, evaluating policy effectiveness at the end of each semester and updating provisions to reflect new AI capabilities, emerging academic integrity challenges, and feedback from teachers, students, and parents. Including student representatives in the policy review process ensures that guidelines remain practical and relevant to actual student experiences rather than reflecting administrative assumptions about how students interact with AI tools.

How School ChatGPT Policies Have Evolved Since 2023

Early school ChatGPT policies in 2023 fell into two extremes: blanket bans that proved unenforceable, or unrestricted access that created academic integrity chaos. By 2025, most institutions converged on nuanced policies that permit AI assistance within defined boundaries specific to assignment type, subject area, and student grade level. The most effective current policies use a tiered permission model: Level 0 (no AI permitted) for assessments testing recall and foundational skills, Level 1 (AI for brainstorming only) for developing original arguments, Level 2 (AI as research assistant with attribution) for complex projects, and Level 3 (full AI collaboration) for designated innovation assignments.
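The tiered permission model described above lends itself to a simple machine-readable representation, for example in an LMS or on an assignment cover sheet. The sketch below is a hypothetical illustration; the assignment-type names and their level assignments are assumptions a school would replace with its own mapping.

```python
# Hypothetical encoding of the four-tier AI-permission model (Levels 0-3).
AI_PERMISSION_LEVELS = {
    0: "No AI permitted",
    1: "AI for brainstorming only",
    2: "AI as research assistant with attribution",
    3: "Full AI collaboration",
}

# Example mapping from assignment type to permitted level (illustrative values).
ASSIGNMENT_POLICY = {
    "recall_quiz": 0,
    "argumentative_essay": 1,
    "capstone_research_project": 2,
    "innovation_lab": 3,
}

def permitted_use(assignment_type: str) -> str:
    """Return the human-readable AI rule for an assignment type."""
    level = ASSIGNMENT_POLICY.get(assignment_type)
    if level is None:
        # Default to the most restrictive tier when a type is unspecified.
        level = 0
    return f"Level {level}: {AI_PERMISSION_LEVELS[level]}"

print(permitted_use("argumentative_essay"))
# → Level 1: AI for brainstorming only
```

Defaulting unknown assignment types to Level 0 mirrors the principle elsewhere in this guide: when students are unsure whether AI use is permitted, the answer is no until a teacher says otherwise.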

Comparing Different Schools' Approaches to ChatGPT Policy

School ChatGPT policies fall into four observable models worldwide. Prohibition models ban AI tool usage entirely, an approach that proves unenforceable in practice and drives usage underground. Permission models allow unrestricted AI use with disclosure, which creates inconsistency across subjects and grade levels. Tiered models define different AI usage levels per assignment type, providing the clearest guidance but requiring significant teacher training investment. Integration models go furthest by redesigning curriculum to incorporate AI as a learning tool, representing the most forward-thinking approach but demanding the largest institutional transformation effort. Most schools transitioning from prohibition toward tiered or integration models report improved academic integrity outcomes.

How AI Policy Differs Between Elementary, Middle, and High School

Age-appropriate AI policy requires fundamentally different guidelines for different developmental stages. Elementary school policies covering ages 5 through 10 should restrict direct student-AI interaction and limit AI usage to teacher-mediated classroom demonstrations. Middle school policies for ages 11 through 13 can introduce supervised AI interaction for structured research and brainstorming activities, with mandatory teacher review of all AI outputs. High school policies for ages 14 through 18 should teach critical AI literacy alongside progressive independence in AI tool usage, preparing students for the AI-integrated university and workplace environments they will soon enter.

Schools should publish example scenarios alongside their policies showing exactly where the boundary lies between acceptable and unacceptable AI use. For instance: using ChatGPT to brainstorm essay topics is acceptable, using it to generate an outline you then write from falls under Level 1, having it draft paragraphs you revise qualifies as Level 2, and submitting AI-generated text without disclosure constitutes a violation. These concrete examples prevent the ambiguity that undermines policy compliance.

Practical Next Steps

Translating these insights into practice requires a focused sequence of actions. Begin by establishing a cross-functional policy committee of administrators, teachers, and IT staff with clear decision-making authority and regular review cadences. Document your current practices and identify gaps against data protection requirements in your jurisdiction. Create standardized templates for policy reviews, assignment-level AI permissions, and incident documentation. Schedule assessments each semester to ensure your framework evolves alongside new AI tools and regulatory changes. Finally, build internal capability through targeted training programs for teachers, staff, and the wider school community.

Common Questions

Should schools ban ChatGPT entirely?
Most schools find blanket bans unenforceable and counterproductive. Better approaches define acceptable use contexts, require disclosure, and teach responsible AI use as a future skill.

Why do students and teachers need separate policies?
Teachers need guidelines for instructional use (creating materials, providing feedback), while students need boundaries for learning (when AI helps vs. replaces learning).

How should policies differ by age group?
Younger students need simpler rules and more supervision. Older students can handle nuanced policies about different contexts. All need guidance on responsible AI citizenship.

Michael Lansdowne Hauge

Managing Partner · HRDF-Certified Trainer (Malaysia) · Delivered Training for Big Four, MBB, and Fortune 500 Clients · 100+ Angel Investments (Seed–Series C) · Dartmouth College, Economics & Asian Studies

Advises leadership teams across Southeast Asia on AI strategy, readiness, and implementation. HRDF-certified trainer with engagements for a Big Four accounting firm, a leading global management consulting firm, and the world's largest ERP software company.

AI Strategy · AI Governance · Executive AI Training · Digital Transformation · ASEAN Markets · AI Implementation · AI Readiness Assessments · Responsible AI · Prompt Engineering · AI Literacy Programs
