Do You Have an AI Policy on Your Syllabus? Here’s Why You Need One (+ How to Write It)
Having an AI policy on your syllabus isn't just about preventing cheating; it's about helping students succeed while maintaining academic integrity.

Many instructors have considered how students may be using AI in their classrooms, and perhaps even weighed some of the pros and cons of that use. What is often overlooked, but arguably even more important, is including an AI policy in the syllabus itself.
This may seem like a small step in your overall classroom AI strategy, and it is! But it's one that can guide and set the tone for every other AI conversation in your course(s). By taking the time to consider why you do (or do not) want students using AI in your classroom; how students may (or may not) benefit from these tools; how AI can be leveraged for deeper learning; and how fluency in AI use, ethics, and best practices can serve students, you will lay a strong foundation for both you and your students to successfully navigate the boundaries you've established.
At a Glance
Clarity prevents academic integrity issues
- Eliminates the "I didn't know" defense when students cross boundaries
- Reduces anxiety for students who genuinely want to follow the rules
- Addresses the ambiguity in traditional academic integrity language like "independent effort"
Protects both students and instructors
- Provides documented expectations and policies to reference in cases of misuse
- Prevents retroactive policy enforcement that puts everyone in difficult positions
- Creates a foundation for fair and consistent application of course standards
Enables authentic assessment design
- Allows you to design assignments with your AI policy deliberately in mind
- Creates opportunities for meaningful learning experiences that align with your goals
- Helps you avoid conflicts between unstated assumptions and student behavior
Why You Need a Clear AI Policy on Your Syllabus
It may seem easiest not to mention AI at all, or to assume that using AI tools for classwork is universally understood to be disallowed, but this silence creates ambiguity for everyone. Across campus, instructors are embracing a variety of stances on AI: some allow it broadly, others allow it within certain parameters, and some have chosen to disallow all AI use in coursework.
Whatever your policy ends up being, the most important part is that you take the time to carefully consider why and when you choose to allow, or not allow, its use.
What to Consider When Creating Your AI Policy
It seems easy to just say yes or no to AI use in student work. I'd invite you, however, to consider it from a pedagogical perspective. Here are some questions to get you started:
- What are my learning outcomes for this course? How might AI support these goals? How might AI hinder them?
- What assignment(s) could students leverage AI for (if any)? What assignments might AI not be appropriate for (if any)?
- How might students use AI in a job related to what I'm teaching in this course?
- Could AI add context, depth, creativity, and/or perspective to what I'm teaching in this course?
- What is my own comfort level with AI? How have I used it? Have I given it a thorough test drive before making a decision about its utility?
Once you've had time to think through the possibilities of integrating AI with human oversight and engagement, you'll be able to craft a syllabus policy that reflects your thoughtful stance on whether AI is appropriate, in what circumstances, and why.
Note: As you are creating your policy, I always recommend taking the time to explain why you've set your AI policy the way you have. Students (and humans in general) often want to understand the why behind boundaries, not just the boundaries themselves. AI use is no different: giving students the bigger picture of how AI may help or hinder their learning is an important piece of the conversation and sets them up for success moving forward.
Examples of AI Policies
Let me quickly outline two assignments as examples of how and why AI use might help or hinder learning: one from a nutrition course and one from a medical terminology course.
Nutrition Course Example: AI as a Starting Point
In a nutrition course, an assignment might be to create a menu for a person with specific circumstances and dietary needs or restrictions. Allowing AI on this assignment could look like this: the student asks ChatGPT for a menu, then reviews it critically and makes any necessary changes. The final submission would include the prompt the student gave the AI, the original output, notes or a narrative explaining any changes made and why (supported by course content), and the final product.
This type of assignment gives students the opportunity to leverage AI for ideas and input while ensuring that they bring the work back to course content, research, and critical thinking. The "why" here is that AI can provide valuable ideas to get the student started, but it isn't a replacement for critical thinking, listening to the client, and ultimately knowing what will work for them. It's an exercise both in learning to work with a tool they will invariably encounter in their careers and in leveraging it, along with their own knowledge and humanity, to create a high-quality final product.
Medical Terminology Example: When AI May Hinder Learning
Another example might be a medical terminology course with an exam on common medical terms. AI might not be permitted on this test because students need to quickly recognize, understand, and use these terms without support or reference. AI here would be a hindrance, since it could let students bypass deep learning of these foundational terms. The "why not" in this case is that students must develop the foundational knowledge they will invariably need in their careers.
While these may seem like obvious reasons to us, clearly communicating them to students helps them connect to the deeper purpose and goals of the assignments/activities and reinforces which skills they are being asked to develop and demonstrate. Helping students understand the "why" behind what they are learning/doing in a course is a best practice in itself, but it becomes even more important when navigating the use of AI in the classroom.
Final Thoughts
We want our students to succeed. We want them to learn, think critically, and articulate their knowledge and expertise in meaningful ways. Giving them clear, well-thought-out ground rules helps them do this in assignments, in course content, and in AI use.
Without specific guidelines, students who want to follow the rules can accidentally cross boundaries they didn't know existed. This creates confusion for everyone and puts both students and instructors in difficult positions when expectations weren't communicated (or weren't understood) upfront.
Clear AI policies also protect everyone involved by establishing expectations (and, potentially, consequences for non-compliance) from day one. When you're explicit about your AI stance, whether you allow it, restrict it, or prohibit it entirely, you can design your assignments and assessments with that policy in mind. This means you can create meaningful learning experiences that align with your goals instead of discovering conflicts between your unstated assumptions and student behavior after the fact. It also gives you something to refer back to if any misuse arises.
If you are ready to learn more about our recommendations for creating an AI policy for your syllabus (and see some examples), read Guidelines for Syllabus Statements About Generative AI.