Generative AI and Teaching

Good teaching has always been, at its core, a human endeavor. Regardless of course modality, the most effective learning experiences begin with genuine connection. It's about the connections between instructors and students, between students and ideas, and between course content and the real lives students bring with them into the classroom. No technology changes that foundation.

As artificial intelligence becomes a regular part of academic and professional life, educators face many evolving questions about course design, student support, and academic integrity. This page brings together the Delphi Center's current thinking, practical resources, and guidance to help you navigate those questions. Our goal is always to protect and strengthen the human connections that make learning meaningful in the first place.

We also recommend bookmarking our regularly updated SharePoint site Teaching and Learning with AI, which complements this page with dynamic resources to support your ongoing work.

Generative vs. Agentic AI

Desirable Difficulties and Assessment Design

The term 'desirable difficulties' describes the cognitive effort required to work through genuinely challenging and meaningful tasks (Bjork & Bjork, 2020). Conditions that slow down or complicate learning in the moment tend to produce stronger retention over time. We also know that when assessments are disconnected from students' real contexts or feel purely performative, they can create the conditions that make cheating feel like a viable option (Miles et al., 2022). In the age of artificial intelligence, cheating may include cognitive offloading to AI tools (Flaherty, 2025).

A high-impact assessment approach addresses this situation directly. Rather than asking whether a student used AI, it asks whether the assessment is designed in a way that makes authentic engagement more valuable than AI substitution (Su et al., 2024). This might look like process documentation that captures thinking over time, iterative drafts with instructor or peer feedback, reflective components that require students to connect course material to their own experience, or moments where a student must demonstrate understanding in conversation or context. These design choices have the benefit of reducing AI dependence while also providing richer evidence of learning.

Agentic AI and Asynchronous Online Courses

A Delphi Center team member would be happy to work through these questions with you. Please let us know that you would like this assistance using our consultation link.

On the Use of AI Detection Tools

Current AI detection tools are unreliable and frequently produce false positives that disproportionately affect multilingual writers and students who use writing tools such as grammar checkers or the word prediction feature in Microsoft Word. Beyond accuracy concerns, most third-party detection tools require student work to be uploaded to unvetted external platforms, creating real risks related to data privacy, FERPA compliance and student consent. The University of Louisville does not hold an institutional contract with any AI detection provider, and instructors who use these tools do so as individuals, assuming personal responsibility for any resulting liability. Outputs from these tools should never serve as the sole basis for an academic integrity decision.

More fundamentally, heavy reliance on AI detection can erode the trust that effective teaching depends on. When students feel surveilled rather than supported, it can discourage transparency, reduce help-seeking behaviors and damage the instructor-student relationship in ways that are difficult to repair.

Additional Resources