As generative AI (G-AI) tools become a regular part of academic and professional life, educators face many evolving questions about how to design courses, support students, and maintain the integrity of learning. This page brings together the Delphi Center's current thinking, practical resources, and guidance to help you navigate those questions.

We also recommend bookmarking our regularly updated SharePoint site, Teaching and Learning with AI, which complements this page with dynamic resources to support your ongoing work.

Generative vs. Agentic AI

Generative AI tools produce text, images, or other content in response to individual prompts. Agentic AI systems go a step further: they can plan and carry out multi-step tasks on their own, with minimal ongoing human direction. That distinction matters for course design, because it changes how much of a course a tool can complete on a student's behalf, as the sections below discuss.

Desirable Friction and Assessment Design 

Productive struggle describes the cognitive effort required to work through genuinely challenging, meaningful tasks. This struggle isn't an obstacle to learning; it's one of its primary drivers, and it's worth preserving in any classroom. We also know that when assessments are disconnected from students' real contexts or feel purely performative, they create the conditions in which offloading cognitive work to AI tools becomes tempting and easy.

A high-impact assessment approach addresses this situation directly. Rather than asking whether a student used AI, it asks whether the assessment is designed so that authentic engagement is more valuable than AI substitution. This might look like process documentation that captures thinking over time, iterative drafts with instructor or peer feedback, reflective components that ask students to connect course material to their own experience, or moments where a student must demonstrate understanding in conversation or in context. These design choices reduce AI dependence while also providing richer evidence of learning.

Agentic AI makes these concerns more urgent, particularly for instructors who teach online asynchronous courses. Because agentic systems can pursue multi-step tasks independently, a poorly designed asynchronous course could, in principle, be navigated almost entirely by an AI agent with minimal student involvement. This doesn't mean asynchronous learning is broken or untrustworthy. It means the design principles that have always made asynchronous courses effective (authentic tasks, meaningful reflection, visible thinking, and genuine human connection) now also serve as the most practical safeguards against AI completing work in a student's place.

Instructors teaching asynchronous courses might consider a few approaches that are difficult for agentic AI to replicate convincingly. Assignments that ask students to respond to something specific and recent (e.g., a development in their field, a conversation from the course discussion board, feedback from a peer) require the kind of situated awareness that AI agents can simulate but not genuinely produce. Reflective prompts that ask students to connect course content to their own professional context, personal history, or evolving thinking over the semester similarly resist easy automation. Low-stakes, frequent check-ins are another place to build in activities that require genuine student presence.

Consider assignments like a brief voice memo, a short video reflection, or a one-paragraph response to an instructor prompt. None of these strategies requires synchronous participation, and all of them were good pedagogical practice long before agentic AI existed.

The goal should not be to design assessments that police student behavior. Instead, design courses so that showing up is more interesting and more rewarding than sending an agent in your place.

The staff at the Delphi Center for Teaching and Learning would be happy to discuss the design of high-impact assessments with you. Please use our consultation link to let us know your availability, and we'll be in touch as quickly as possible. Alternatively, the following prompts can be used with a G-AI tool of your choice to help you begin working through this task. Select the one that best fits your starting point, or mix and match elements from several.

On the Use of AI Detection Tools

Current AI detection tools are unreliable and frequently produce false positives that disproportionately affect multilingual writers and students who use writing tools such as grammar checkers or the word prediction feature in Microsoft Word. Beyond accuracy concerns, most third-party tools require student work to be uploaded to unvetted external platforms, creating real risks related to data privacy, FERPA compliance, and student consent. The University of Louisville does not hold an institutional contract with any AI detection provider, and instructors who use these tools do so as individuals, assuming personal responsibility for any resulting liability. Outputs from these tools should never serve as the sole basis for an academic integrity decision.

More fundamentally, heavy reliance on AI detection can erode the trust that effective teaching depends on. When students feel surveilled rather than supported, it can discourage transparency, reduce help-seeking behaviors, and damage the instructor-student relationship in ways that are difficult to repair.

Additional Resources