Generative AI and Teaching
As generative AI (G-AI) tools become a regular part of academic and professional life, educators face many evolving questions about how to design courses, support students, and maintain the integrity of learning. This page brings together the Delphi Center's current thinking, practical resources, and guidance to help you navigate those questions.
We also recommend bookmarking our regularly updated SharePoint site Teaching and Learning with AI, which complements this page with dynamic resources to support your ongoing work.
Generative vs. Agentic AI
Generative AI refers to a category of artificial intelligence tools that produce new content in response to a user's input. When you type a question or a request into a tool like ChatGPT, Claude, Gemini or Microsoft Copilot, the system generates a response by drawing on patterns learned from enormous amounts of data. It doesn’t look things up the way a search engine does, and it doesn’t truly "understand" your question the way a person would. Instead, it predicts what a useful or coherent response would look like based on what it has learned.
These tools can be remarkably fluent and helpful. They can perform a variety of tasks, such as drafting text, summarizing readings, brainstorming ideas, explaining concepts or writing and debugging code. They can also produce confident-sounding responses that are factually wrong, reflect biases present in their training data or miss the nuance that a specific discipline or context requires. Understanding both the capabilities and the limitations of these tools is essential for making informed decisions about how you might incorporate them into your teaching and your students' learning.
You may be starting to hear the term "agentic AI" alongside generative AI. While the two are related, they describe meaningfully different things.
Using a generative AI tool starts when you give it a prompt; in response, the tool gives you an output. Each exchange is relatively contained, and each step requires your input. Agentic AI works differently. An AI "agent" is a system that can pursue a goal across multiple steps, make decisions along the way, use tools (like web browsers, calendars, email or databases) and take actions. An agent might be given a task like "research this topic, summarize the key findings, draft an email to my team and schedule a follow-up meeting". The agent will then work through all those steps, often without any human oversight.
Agentic AI is increasingly available in tools students and instructors may already use. As these capabilities become more common, the line between "getting help with a task" and "having AI complete a task" becomes harder to draw. An agentic system could, in principle, complete a multi-part assignment with steps like conducting research, drafting a response and formatting a submission with minimal student involvement.
This doesn't mean agentic AI is simply a threat to be managed. These tools also have genuine educational potential: they could help students plan complex projects, work through iterative processes or simulate professional workflows. As with generative AI, the emergence of agentic AI raises important questions about authorship, effort, learning and integrity. While there isn’t a single answer to these questions, we do know that assessment design practices will need to change in response to the growth of AI-based tools.
Desirable Friction and Assessment Design
Productive struggle describes the cognitive effort required to work through genuinely challenging and meaningful tasks. This type of struggle isn’t an obstacle to learning; it’s one of learning’s primary drivers, and it is worth preserving in every classroom. We also know that when assessments are disconnected from students' real contexts or feel purely performative, they create conditions in which cognitive offloading to AI tools becomes tempting and easy.
A high-impact assessment approach addresses this situation directly. Rather than asking whether a student used AI, it asks whether the assessment is designed in a way that makes authentic engagement more valuable than AI substitution. This might look like process documentation that captures thinking over time, iterative drafts with instructor or peer feedback, reflective components that require students to connect course material to their own experience, or moments where a student must demonstrate understanding in conversation or context. These design choices have the benefit of reducing AI dependence while also providing richer evidence of learning.
Agentic AI raises these concerns in a more urgent way, particularly for instructors who teach online asynchronous courses. Because agentic systems can pursue multi-step tasks independently, a poorly designed asynchronous course could, in principle, be navigated almost entirely by an AI agent with minimal student involvement. This doesn't mean asynchronous learning is broken or untrustworthy. It means the design principles that have always made asynchronous courses effective, including authentic tasks, meaningful reflection, visible thinking, and genuine human connection, now also serve as the most practical safeguards against AI completing work in a student's place.
Instructors teaching asynchronous courses might consider a few approaches that are difficult for agentic AI to replicate convincingly. Assignments that ask students to respond to something specific and recent (e.g., a development in their field, a conversation from the course discussion board, feedback from a peer) require the kind of situated awareness that AI agents can simulate but not genuinely produce. Reflective prompts that ask students to connect course content to their own professional context, personal history, or evolving thinking over the semester similarly resist easy automation. Low-stakes, frequent check-ins are another place where you can build in activities that require genuine student presence.
Consider assignments like a brief voice memo, a short video reflection, or a one-paragraph response to an instructor prompt. None of these strategies requires synchronous participation, and all of them were good pedagogical practice long before agentic AI existed.
The goal should not be to design assessments that police student behavior. Instead, design courses so that showing up is more interesting and more rewarding for students than sending an agent in their place.
The staff at the Delphi Center for Teaching and Learning would be happy to discuss the design of high-impact assessments with you. Please use our consultation link to let us know your availability and we'll be in touch as quickly as possible. Alternatively, the following prompts can be used with a G-AI tool of your choice to help you begin working through this task. Select the one that best fits your starting point or mix and match elements from several.
I teach [course name/discipline] at a university level. My course enrolls approximately [X] students and is organized around the following learning outcomes: [list outcomes]. I want to design one or more assessments from the ground up that reward authentic thinking, require genuine cognitive effort, and make AI substitution less appealing or useful. Please suggest two or three assessment approaches suited to my course context, and for each one explain how it supports real learning while reducing overreliance on AI tools.
I currently use the following assessment in my [course name/discipline] course: [describe assessment]. I am concerned that it may be easy for students to complete using generative AI with little authentic engagement. Please help me revise or redesign this assessment so that it emphasizes process, reflection, or human connection in a way that makes meaningful participation more valuable than AI substitution. My class has approximately [X] students, so please keep scalability in mind.
I teach an asynchronous online course in [discipline] with approximately [X] students. I am looking for practical ways to introduce intentional human touchpoints where students must demonstrate understanding in a more personal or interactive way. I do not want to dramatically increase my workload or require synchronous participation. Please suggest two or three specific strategies or assessment modifications that would work within an asynchronous format. I'll review and then ask some follow-up questions to learn more about the option that is best aligned with my preferred approach to assessment.
I am redesigning assessments for my [course name/discipline] course with the goal of building in productive struggle. I usually have [X] students enrolled in my course. I want to encourage my students to use the assessments in this course as an opportunity for the type of cognitive effort that supports real learning. First, I want you to ask me to identify a specific outcome for my course. When I give you an outcome, ask me to describe an assessment in my course that is connected to that outcome. Once I do this, I want you to suggest some ways that students might complete the assignment without deep engagement and then suggest alternatives that restore it in ways that are appropriate for my discipline and my course size. Ask me to provide feedback on the strategies. Respond to my feedback. Continue this until I tell you I want to move on to the next outcome. When I tell you that we should move on to the next outcome, restart this process.
On the Use of AI Detection Tools
Current AI detection tools are unreliable and frequently produce false positives that disproportionately affect multilingual writers and students who use writing tools such as grammar checkers or the word prediction feature in Microsoft Word. Beyond accuracy concerns, most third-party tools require student work to be uploaded to unvetted external platforms, creating real risks related to data privacy, FERPA compliance and student consent. The University of Louisville does not hold an institutional contract with any AI detection provider, and instructors who use these tools do so as individuals, assuming personal responsibility for any resulting liability. Outputs from these tools should never serve as the sole basis for an academic integrity decision.
More fundamentally, heavy reliance on AI detection can erode the trust that effective teaching depends on. When students feel surveilled rather than supported, it can discourage transparency, reduce help-seeking behaviors and damage the instructor-student relationship in ways that are difficult to repair.
Additional Resources
Hearing directly from students about how they are experiencing and using generative AI tools offers valuable insight for course design and policy decisions. The following resources center student voices:
- UofL Student Testimonials 2026 - University of Louisville
- 5 College Students. 5 Views on Generative AI - The Chronicle of Higher Education
- How AI Is Changing - Not 'Killing' - College - Inside Higher Ed
- Is AI the New Homework Machine? Understanding AI and Its Impact on Higher Education - WCET Frontiers
- AI's Future for Students Is in Our Hands - Brookings Institution, Center for Universal Education. Drawing on interviews with hundreds of educators, parents, and students, as well as a review of more than 400 research articles, this piece examines both the promise and the risks of generative AI for student learning and development - and argues that how this technology shapes education depends on choices we are still making.
- Teaching and Learning with AI SharePoint Site - A regularly updated hub of resources, tools, and guidance curated for UofL instructors.
- The AI Ate My Homework Podcast - Produced through UofL's Digital Media Suite, this podcast explores how students can use G-AI tools to cheat on assignments and what that means for teaching and learning.
- Gen AI Learning Circle - Join a supportive community of UofL educators exploring the use of generative AI in teaching and learning.
- Generative AI for College Students - A Blackboard-hosted microcourse created by Ekstrom Library in partnership with the Writing Center and Digital Media Suite. Six mini-lessons, each approximately 15–20 minutes, to help students engage thoughtfully with G-AI in the college classroom.
- Workshops - The Delphi Center regularly offers workshops on generative AI topics, including using AI to review your syllabus, develop assignments and more.