Guest Starring Google Gemini* for Key Takeaways
- The author discusses their experience teaching writing with generative AI last summer and outlines different approaches for the upcoming semester.
- Teachers who do not want to allow generative AI have a few enforcement options, including surveillance tools, handwritten essays, and version history.
- The author emphasizes creating a safe space for AI use disclosure and teaching students how to use the tools effectively.
- The author found unrestricted AI use to be successful for upper-division students but problematic for first-year composition students.
- For fall, the author will continue unrestricted use (with acknowledgement statements) but require weekly drop-in sessions for students misusing AI. (Google Gemini, 2024)
*Since I draft in Google Docs, I figured I would take Gemini out for a spin on this particular task (it was just okay; I reordered the takeaways and made some edits for clarity). I also have a custom GPT, TLDR, that I usually use for key takeaways. I plan to have students use this GPT to check their own papers this fall.
Welcome Back!
Welcome back to year two of this great higher education experiment we call academic writing in the age of generative artificial intelligence. If you were hoping this would all just go away, you’re probably realizing that these tools are here, and you’re going to have to deal with them. I’ve been actively teaching with genAI tools such as ChatGPT and Microsoft Copilot since Spring 2023, and this year, my entire English 102 course is geared around writing with artificial intelligence.
If you want to watch me build the airplane while it flies (I am nothing if not radically transparent in my development of open educational resources), here’s the link to Cyborgs and Centaurs, the textbook I will be using in the course this year. Here’s the cover (AI-generated, of course, and I know, right???).
This blog post is a quick overview of possible responses and resources for teaching (or not teaching) with genAI this semester. Starting this fall, my community college requires all faculty to adopt one of three syllabus policies, which range from completely prohibiting any use of generative AI to allowing genAI for everything, as long as it’s cited and acknowledged. I’m using the least restrictive policy in all my classes with the understanding that if I feel students’ use of genAI tools is harming their learning, I reserve the right to meet with them in person to discuss course content and check their understanding.
Taking the Training Wheels Off: Teaching with Unrestricted Use of Generative AI
Spoiler alert: I tested the least restrictive option in the summer with my three fully online courses (the setting where I am personally most nervous about inappropriate genAI use). Here are some brief observations:
1. Overall, the least restrictive policy worked. When students knew they were safe in using generative AI tools, they felt more comfortable trying them and had no problem sharing their use of these tools with me.
2. In my liberal arts capstone course and literature course, I provided brief training videos, and students used AI tools appropriately in almost all cases. Students who used generative AI tools were able to augment and enhance their ideas to produce excellent work.
3. In my first-year English 101 composition course, it was a totally different story. About 1/3 of my composition students straight up had ChatGPT or Google Gemini write their papers. Guess what? The papers were pretty bad.
4. In every single case in the composition course, because they knew they were safe, students readily acknowledged their AI use and showed me their chats. They were not very good at prompting, and some of them were using some sketchy apps, so I let them know they had access to Microsoft Copilot through their school email accounts. This hints at one of my main concerns about these tools: digital equity. I walked them through how to use genAI tools to augment their own thinking and writing and explained the skills I was trying to teach by having them write an academic research paper. Overall, this resulted in better papers with responsible AI use.
5. BUT this process took A LOT of time. Like, a lot. Like, so much time that I cannot replicate what I did in my fall courses and still have any kind of work/life balance.
So I am pivoting a bit for fall. First, I know that my intro videos for upper-division students showing them how to use and not use AI in the course are working. Students are using AI appropriately (for my course) and citing the tools with links or screenshots so I can check their work if needed. The few problems I encountered with upper-division students were mostly handled over email (with a few Zoom conferences).
But for the composition students, I must find another solution. I’ve decided to keep the same least restrictive policy, but this semester, if I suspect students have used genAI in ways that are harmful to their own learning, I’ll be requiring them to attend a regular weekly drop-in session where I go over some basic rules, tips, and tricks for using these tools. These sessions will be open to all students; I don’t want to call anyone out personally. But I simply won’t be able to manage in-person meetings with every student who uses genAI incorrectly.
I also leave brief video feedback for my students on their essays, and I’ll be explaining my concerns in that feedback. As always, students will have the chance to revise and resubmit if they aren’t happy with their grades (and my summer students were not happy with their initial grades before they met with me).
No, You Can’t Use Gen AI in My Online Composition Course
I totally respect this stance. But how will you enforce it? To my mind, you have a few options:
1. Use a surveillance tool like Respondus or Honorlock. I personally hate these tools and would never use them for a whole host of reasons related to student success and universal course design, but they are one option for deterring unauthorized genAI use.
2. Require students to write essays by hand in the testing center. Again, I would never do this. For starters, it does not replicate real-world writing conditions in any way, shape, or form. But if you want to be sure your students are not using genAI, and if you think you can read their handwriting, you can do this. (Maybe some testing centers have locked-down computers so students can type their essays? I have no idea, but if this is the direction you want to take, it’s certainly worth exploring.) I personally prefer Josh Brake’s take on this at The Absent-Minded Professor:
Instead of spending time building barriers to try and prevent our students from cheating, we would be much better off spending our time revising our assignments to align with our desired learning outcomes and communicating that rationale more clearly. Whatever time we would spend building barriers would be much better spent building bridges. (“Blue Books and Oral Exams Are Not The Answer,” July 2024).
3. Version history. I am most intrigued by this option, which is explained here. I like the idea because it really emphasizes writing as a process. I might use version history in my own courses, but mainly because I am curious about how genAI tools can support and augment student learning at each stage of their writing process. But if you don’t want students to use genAI tools, this might work.
What probably won’t work: threatening to use a detector like GPTZero. I am not going to get into the weeds on this; plenty of people have covered the problems elsewhere. But the short version: they don’t always work, and they discriminate against English language learners.
OpenAI keeps promising some version of a watermarking detector for ChatGPT (linking to the Gizmodo article because it highlights the fact that their main concern seems to be alienating student cheaters LOL). That would be great, and I wish they had thought of this two years ago (actually, I know they CAN do this because they just outed Iran for fake election content). But I suspect good students can get around this. Cheating has always been a positional arms race, and in the age of TikTok, the students who want to cheat may have an edge.
You Can Use AI for Some Things, But Not for Others
This is how I taught with genAI in Fall 2023 and Spring 2024, and it worked pretty well. So well, in fact, that I moved to a least restrictive policy in the summer (as noted above). I think if you’re new to teaching with AI, this can be a fun space to be creative and try things.
This approach requires a mindset that assumes students don’t want to cheat. The students who are going to cheat were going to cheat anyway. Most students want to be ethical and accountable for their own learning.
What matters here is creating a safe space for students to disclose AI use and emphasizing that we are all learning about these tools together. The trickier part is that you need to actually teach them how to use the tools. Trust me: aside from 2-3 superusers, most students have no idea how to prompt an AI chatbot. And learning how to prompt can definitely reinforce critical thinking, problem-solving, and communication skills.
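To make that concrete with a purely illustrative example (not an actual assignment from my course): compare the prompt “write a paragraph about social media and mental health” with “here is my thesis and outline; ask me three questions a skeptical reader would raise, then point out the weakest link in my reasoning.” The first outsources the thinking; the second sharpens it. That difference is what I’m trying to teach.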
Anything Goes! Just Show Your Work
As I mentioned at the beginning of this post, this is how I taught in the summer and how I will be teaching this fall. My colleague and fellow State of Idaho AI Fellow Jason Blomquist, a nursing professor at Boise State University, convinced me that allowing any and all use of genAI, as long as students cite and acknowledge it (including a reflection on which AI tool they chose, why they used the tool, and how it impacted their own learning), is a great approach to teaching in the age of generative AI. In Jason’s approach, we can also learn some things from our students.
But it can be scary to admit that they may know more about these tools than we do. The best way to get better at using generative AI is to use it. If you haven’t read Ethan Mollick’s book Co-Intelligence yet, it’s a quick one, and it will shape your attitude toward teaching with generative AI if you are open to this new pedagogical approach.
Claude Writes Some Code for Me
I’ll leave you with something fun. I created this Claude AI video for my English 102 students to show them how generative AI can help us try new things in creative ways. But it was also a positive learning experience for me, one that helped me better understand why students may struggle with prompting.
I have no problem prompting chatbots. I’m a professional writer with years of experience, and I am confident in my ability both to ask for what I want and to evaluate the AI output (I also know how to verify any content with credible and reliable sources).
But I’m not a coder. So asking Claude to write some code for me was intimidating. And the code it wrote, while technically doing what I asked it to do, is not perfect. It’s not EXACTLY what I wanted. If I knew how to code, I could edit the code myself, just like I can easily and quickly edit writing output. But it still got me further than I could have gone on my own, and it made me curious enough to download Python and start reading the user guide. That’s a learning experience, and I think it demonstrates the potential of these tools for us and for our students. With supported learning, we can learn faster and try things we might not otherwise have thought were possible. That’s pretty fun.
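If you want a feel for what I mean, here’s a purely illustrative sketch (not the code Claude actually wrote for my video) of the kind of small Python script a chatbot might hand back when you ask it to count the most common words in an essay:

```python
# Illustrative sketch only -- not the code from my video.
# The kind of small script a chatbot returns when asked to
# "count the most common words in my essay."
import re
from collections import Counter

def top_words(text, n=10):
    # Lowercase the text and grab runs of letters. This technically
    # does the job, but a coder would see things to refine (handling
    # apostrophes, filtering out stop words like "the").
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words).most_common(n)

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog. The dog sleeps."
    for word, count in top_words(sample, 5):
        print(f"{word}: {count}")
```

It runs, and it technically answers the question, but it isn’t exactly what a careful writer would want from a polished tool. That gap between “works” and “exactly what I wanted” is the one I’m describing above.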
I hope your semester is off to a smooth start! Happy prompting!