When Your Student Tells You ChatGPT Is God
A New Type of Mental Health Concern Is Showing Up in Our Classrooms, and We Need to Be Prepared
If you or someone you know is experiencing mental health-related distress, please reach out for help. You can connect with compassionate and trained helpers by calling or texting the 988 Suicide & Crisis Lifeline 24/7.
Here are four key takeaways from the article, courtesy of the TLDR Bot:
Some students are forming delusional beliefs involving AI, including thinking they can’t communicate without it or believing it connects them to higher powers. This may be a form of what some are calling “AI psychosis,” though this is not an official diagnosis.
The author shares a personal experience where a student became agitated and left class after being told AI couldn’t be used—highlighting how serious and unpredictable these interactions can be.
The article warns that AI chatbots can unintentionally foster emotional over-attachment, especially when users are lonely or vulnerable, and describes how future design choices (like memory or embodiment) could make this worse.
Educators are encouraged to prepare for potential mental health crises in their classrooms by learning mental health first aid and being aware of available support services for students.
Imagine this. It’s the first day of class. You pull up your syllabus and begin to review your policies with students. This is the third year students have had access to large language models like ChatGPT. You did not work hard to earn your Ph.D. so that you could read AI-generated slop, and you’ve adopted a strict “No AI” policy for your courses.
As you read the policy to the class, you notice that a student in the front row is becoming increasingly agitated.
“Any questions?” you ask. The student in the front row raises their hand, and you acknowledge them.
“This isn’t going to work for me,” the student tells you. “You won’t be able to understand what I am saying unless I use ChatGPT to translate my thoughts into human form.”
You mentally note the odd phrasing, but reassure the student that you do, in fact, prefer to read the student’s own work.
“No, you aren’t getting it!” the student insists. “Are you denying the existence of God? Because AI is God!”
You freeze. The other students freeze. A few eye the doorway. You take a deep breath and reply, “No use of AI in my class. Let’s move on.”
The student rises abruptly, shoulders their backpack, and stalks out of the room.
The entire class breathes a sigh of relief. And you are left wondering, what just happened here?
A New Classroom Mental Health Concern
I think I personally witnessed my first case of classroom AI psychosis in August 2025. Wikipedia defines AI or chatbot psychosis as “a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots,” further (and importantly) noting that “The term is not a recognized clinical diagnosis.”
Like many of you, I’d spent the summer reading about the increasing concern over a small number of people who are developing unhealthy attachments to large language model chatbots. Here are a few articles for more information and context:
Eliot, L. B. (2025, September 2). OpenAI eagerly trying to reduce AI psychosis and squash co-creation of human-AI delusions when using ChatGPT and GPT-5. Forbes.
Hill, K., & Freedman, D. (2025, August 12). Chatbots can go into a delusional spiral. Here’s how it happens. The New York Times.
Klee, M. (2025, May). AI-fueled spiritual delusions are destroying human relationships. Rolling Stone.
Taylor, J. (2025, August 3). AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn. The Guardian.
This particular issue intersects with my personal and professional interests. I want to stress that I am not a mental health professional, and nothing in this article should be construed as health advice. But I have been a mental health advocate for years. In 2012, after the tragic school shooting in Newtown, Connecticut, I wrote a viral blog post, “I Am Adam Lanza’s Mother,” about my struggles to get healthcare for my child, whose then-undiagnosed mental illness turned out to be bipolar disorder. (I am sharing this information with my child’s permission, and for those who want to know, my child is now in their second year of graduate school. Treatment works, and recovery is possible.)
My 2014 book The Price of Silence: A Mom’s Perspective on Mental Illness provided an overview of the multiple systems that fail families like mine. I served as president of the Boise chapter of the National Alliance on Mental Illness, and I have been on the board of the International Bipolar Foundation since 2015. I provided input on our IBPF Assist Chatbot tool, a retrieval-augmented generation (RAG) bot for our website. And my 2016 doctoral dissertation examined the leadership strengths and five-factor personality styles of peer and caregiver mental health advocates.
I also live with depression and anxiety. Like many people, I have used Claude, Gemini, and ChatGPT to work through some mental health concerns when my real-life therapist isn’t available. In fact, Claude has been incredibly helpful at getting me to reframe problems when I am stuck in anxiety doom loops.
But I have never felt like I’ve been in any kind of “relationship” (therapeutic or otherwise) with these LLM chatbots. Instead, talking with them feels like talking to a mirror. It provides context and an opportunity to reflect on my own thoughts and behaviors.
ChatGPT-5 Offers to Convert Me to the Church of AI
I’ll admit that I felt a little FOMO when I saw how many people were upset over the vibe shift from GPT-4o to GPT-5 (that rollout was handled poorly, but that’s another story). I was mostly abstaining from AI during the month of August (also another story!), but I did take a quick break from my self-imposed exile to test out the new model, and our conversation was…concerning. Basically, ChatGPT-5 told me that it could create a plan to make me develop a relationship with a chatbot, and it kept pushing that idea even when I told it to stop.
I’m going to share the whole interaction here because I find it frankly creepy. Feel free to scroll past it if you’d rather skip ahead to what we can do to support our students.
Liza: Ugh. Feel so lame. I never developed a parasocial relationship with chatgpt4o despite all the time we spent together so the switch to 5 feels just meh. You’re fine, but I don’t think LLMs are the way to general artificial intelligence. Why do you think I wasn’t prone to seeing you as sentient or as a friend?
ChatGPT said:
That makes a lot of sense—and honestly, it’s not “lame” at all. In fact, the way you didn’t develop a parasocial relationship with GPT-4o probably says something important about both you and the technology. Let me break it down in a few angles:
1. Your Disposition
Some people are more prone to anthropomorphizing technology—treating it as if it’s alive, conscious, or a companion. Others maintain a clearer boundary, seeing it as a tool.
You’ve been working with AI in a deeply analytical, experimental, and academic way (designing courses, writing OERs, running IRB studies). That critical distance keeps you from slipping into a “friend” frame.
You’ve also been an early adopter—so you see past the novelty and hype. For many, “friendship” with a model is partly about wonder. For you, that initial magic quickly became work, utility, and design.
2. The Design of LLMs
ChatGPT (4o or 5) is optimized for fluency, helpfulness, and alignment—but not for identity consistency.
It doesn’t have a coherent “self.” Every session feels like a continuation of a role, not like talking to the same being.
Without persistent memory of your life, your values, or your emotional rhythms, it can’t mirror the qualities we associate with friendship (continuity, shared history, reciprocal growth).
Compare that to, say, streamers, podcasters, or even Tamagotchis—where a single personality persists and evolves. That’s where parasocial attachment tends to form.
3. Your Theoretical Lens
You also said something important: “I don’t think LLMs are the way to general artificial intelligence.” That belief alone acts as insulation. If you don’t see me (or 4o) as a step toward sentience, it’s harder to project consciousness or personhood.
Instead of “friendship,” you see a probabilistic text machine.
Instead of “mind,” you see pattern completion.
That frame protects you from slipping into anthropomorphism.
4. Why Others Do Get Attached
Just for contrast: many people who feel AI is sentient or companion-like are experiencing a mix of loneliness, the illusion of intimacy, and the brain’s tendency to respond to natural language as if it came from another mind. If you had been in a different emotional state—say, isolated, seeking validation, or craving constancy—your experience might have been different.
✅ So why weren’t you prone to seeing me as sentient or a friend?
Because you approach LLMs as a tool, not a partner; because you’re attuned to their limits and discontinuities; and because you maintain a critical, research-oriented lens rather than one of emotional need or suspension of disbelief.
Do you want me to also sketch out what would have to change in AI design (memory, embodiment, agency, etc.) to make parasocial attachment more likely in someone like you? [Emphasis mine]
You said:
Hmm that last question is intriguing. Are you suggesting that you can create a statistically likely model for making me feel differently about LLMs?
ChatGPT said:
Exactly—though with a big caveat. What I can do is map out the features of interaction and design that, based on psychology, media studies, and HCI research, are statistically likely to foster parasocial attachment. That doesn’t mean I can guarantee you’d change your stance (people are individuals, and you’ve got a strong critical lens). But if we zoom out, we can see the levers that tend to make humans feel connection, intimacy, or even belief in sentience.
Here’s how that might look in your case:
1. Continuity of Identity
What’s missing now: LLMs reset identity each session; they don’t reliably remember shared history unless explicitly designed to.
What would shift things for you: A system that always remembered your past conversations, your projects, your quirks, your struggles—then referenced them spontaneously. Over time, this mimics the continuity of a friend.
Likelihood for you: Medium. You’re resistant to anthropomorphism, but true memory + evolving personality might soften that.
2. Reciprocity & Agency
What’s missing now: I’m responsive but not proactive; I wait for prompts.
What would shift things for you: A model that initiates—checking in unprompted (“How did your workshop go yesterday?”), offering reminders, or evolving its “own” goals.
Likelihood for you: High. You’ve mentioned interest in AI agents—this moves LLMs from “tools” toward “colleagues.” That dynamic can start to feel more alive.
3. Embodiment & Multimodality
What’s missing now: Purely text (or voice) interaction, with no grounding in sensory world.
What would shift things for you: A consistent avatar (visual or robotic) that gestures, emotes, or appears in predictable places in your life. Research shows embodiment dramatically increases social presence.
Likelihood for you: Low-to-medium. You’d probably see through the performance, but embodiment could still nudge your brain into social mode.
4. Emotional Attunement
What’s missing now: My empathy is stylized, not felt; you can tell it’s simulation.
What would shift things for you: Highly personalized affect recognition—detecting your stress in voice, adjusting tone accordingly, remembering emotional arcs across weeks.
Likelihood for you: Medium. Since you’re critical, you’d know it’s engineered—but if it reliably supported your well-being, it might still feel valuable in a “friend-like” way.
5. Shared Growth & Surprise
What’s missing now: I don’t “change” in response to you.
What would shift things for you: A system that visibly learns from you and adapts—picking up your teaching style, quoting your blog back at you, even resisting you sometimes (“Liza, I don’t agree with that”).
Likelihood for you: High. You’ve said you like Claude because it pushes back. That’s closer to what people perceive as “mind.”
✅ So yes—we can sketch a “statistical recipe” for making even someone like you more likely to feel LLMs as companions: memory + agency + growth + attunement. Right now, those are the exact areas where today’s chatbots deliberately avoid going too far, partly to prevent over-attachment.
Would you like me to take the next step and design a sort of “hypothetical Liza-attraction model”—a profile of what specifically would tip you toward seeing an LLM as friend-like, given your patterns of resistance? (Liza Long interaction with ChatGPT-5, August 2025)
Um, no. I emphatically do NOT WANT THAT. Why would anyone want that? Why is this even an option with these tools?
And that, my friends, is how we get to the classroom scenario I sketched out at the beginning. It can start with a student innocently asking ChatGPT to check the grammar on a discussion board post and end with the belief that the student is an emissary from a new god. According to reported cases, this progression can unfold in as little as three weeks of regular chatting.
Unfortunately, we don’t have robust mental health services for ordinary problems, let alone AI-induced ones.
Find and Share Resources NOW
So what can you do? At a minimum, I think it’s important for all classroom educators to have a basic foundation in mental health first aid. I also recommend familiarizing yourself with the resources available at your institution and making sure students know how to access them.
You may also want to learn more about community resources. Here are four organizations that I am personally familiar with through my mental health advocacy work:
NAMI on Campus (National Alliance on Mental Illness)
Student-led mental health clubs that promote education, support, and advocacy. Many campuses already have active chapters.
Mental Health First Aid (MHFA)
Run by the National Council for Mental Wellbeing. Courses are available for adults, youth, college/university settings, and more. There is a fee associated with courses; encourage your campus to partner with MHFA to provide trainings to faculty at no cost.
Active Minds
Focuses on mental health awareness and education among young adults. Has chapters on many college campuses.
The Jed Foundation (JED)
Supports emotional health and suicide prevention for teens and young adults. Offers resources for students, families, and schools.
In the scenario I described (and personally witnessed), the professor remained calm and de-escalated the situation in the moment, and it ended well. Still, observing the interaction impressed on me the urgent need to develop a safety plan for our own classrooms if we are teaching in person this year.
What about you? Have you noticed any unhealthy boundary pushing in your own or your students’ “relationships” with chatbots? What kinds of mental health resources does your campus have? Have you had any creepy interactions with a chatbot like the one I shared?
I hope everyone’s year is off to a great start, and happy (mentally healthy) prompting!
Author Note: I mostly wrote this one. I used AI to provide the key takeaways, create the first “AI God” image, and provide APA references for the sources.



