On Becoming a Cyborg
Some thoughts (and research questions) about how LLM interaction is affecting my brain, and what this means for students
TLDR from the TLDR Bot
AI collaboration shifts the job. AI now generates ideas fast, so humans spend more time judging, steering, and correcting the work.
The human holds all the risk. AI helps create the output, but the human takes the blame if something goes wrong.
Vigilance fatigue is real. AI never tires, but the human must constantly check and decide, which quietly drains mental energy.
Trusting AI is a learned skill. Effective use means knowing when to rely on it, when to verify, and when to override it.
Working with AI may reshape how we think. Long-term collaboration could change thinking patterns, creating a new kind of human-AI partnership.
Author’s Note: These concepts emerged from a metacognitive reflection conducted after completing a post-award budget revision for a $4 million federal FIPSE grant. They are offered as a working vocabulary—not finished theory—for practitioners, educators, and researchers grappling with what sustained human-AI collaboration actually feels like from the inside. I am not a mental health professional. I am sharing my lived experience as someone who has worked extensively with large language models since January 2023.
I want to volunteer for a study that doesn’t exist yet. I want someone to scan my brain while I work on a project with Claude.ai. For several months now, I’ve been struggling toward an understanding of how my brain is changing through my interactions with large language models. I don’t have the vocabulary to describe what’s actually happening to me, a self-described super user. So I asked Claude to help me think through these experiences I lacked language for, and I realized something: I’m a new kind of cyborg.
A new kind of human being is emerging—one that has not been predicted by science fiction, named by science, or prepared for by education. As I told Claude in our conversation linked above, this person (me) is “Mentally a cyborg, but physically all too human.”
The cognitive integration that comes from interacting with large language models like Claude, Gemini, and ChatGPT is real and accelerating. But our biological limitations are unchanged. What does this mean for us? What does it mean for our students?
An Attempt at Naming
I’ve been struggling to name something I’ve been feeling for a while in my interactions with large language models, especially for projects like the grant that have high stakes and really matter. When I was listening to Anthropic co-founder Jack Clark’s February 24 New York Times interview with Ezra Klein, he offhandedly mentioned a new kind of mental load he was experiencing, and I immediately recognized a kindred spirit. I imagine a lot of coders are struggling with this feeling right now.
The terms I offer below are Claude’s and my attempts to describe my own direct experience. They describe phenomena that do not yet have established names. But I think it’s important for us to have this conversation and think about the implications for our students.
Co-Creation with Asymmetric Accountability
This is the defining structural feature of high-stakes human-AI collaboration. Both Claude and I generate the work. Only one can be “wrong” in any meaningful sense. The human bears all the professional, legal, financial, and reputational consequences if the AI gets something wrong and the human does not catch and correct the error(s). The faster and more capable the AI, the heavier this asymmetry becomes.
As the AI’s output volume expands, the human’s decision surface expands with it.
All cognitive load falls on the accountable human—the AI is not “thinking.”
Calibrated Trust
This is the skill of knowing, moment to moment, when to follow, when to verify, and when to override AI output. Yes, it’s a new and incredibly important skill, so we need to learn how to teach it! In my experience, calibrated trust:
Cannot be transferred from training; it must be earned through practice under real stakes.
Requires sustained background vigilance even when the collaboration seems to be going smoothly.
This vigilance is invisible work — and it is exhausting.
The Exhaustion of Sustained Vigilance
I am experiencing a specific form of cognitive exhaustion distinct from ordinary tiredness. This exhaustion accumulates from being the permanently accountable party in a collaboration where my “thinking partner” never gets tired, never needs a break, and never feels the weight of the decisions “we” make. Sustained vigilance in working with AI tools has some strange characteristics (I want to explore how this relates to the sustained vigilance of being a trauma survivor, but I can’t go there yet).
Does not announce itself as exhaustion — the work feels productive, even pleasurable in the moment.
The body sends the bill anyway: after I submitted my budget revision, I started to physically shake.
Our students will feel this exhaustion and not know why, unless they are taught to recognize it.
The Asymmetry of Endurance
AI has no circadian rhythm, no cortisol, no point of diminishing returns. It will work at identical capacity at 2:00 am as at 9:00 am. Any boundaries that exist around the work must come entirely from the human.
The work will always feel continuable — the AI will always be “ready.”
Managing the off switch is itself another cognitive load that belongs entirely to the human.
Again, there’s no accountability from the AI (unless we build it into our workflow, and we CAN build it into our workflow).
The Bottleneck Shift
Before AI, content generation was the bottleneck. Getting ideas and words out was slow, and that slowness created natural cognitive pacing. AI eliminated that bottleneck. The new bottleneck is human judgment, evaluation, and redirection.
The cognitive role has shifted from idea/content generator to curator and director.
Judgment at high volume, sustained, with real stakes, is its own exhaustion.
What is possible has expanded, but more options mean more decisions.
Aliveness
I don’t think this is the right word, but I’m trying to get at the quality of intellectual experience in genuine co-creation. It’s something distinct from productivity, efficiency, or output quality. Working with an AI collaborator that knows more than you, moves faster than you, and is still sometimes wrong creates conditions that feel a lot like playing an interesting and exciting game. It’s a new kind of flow.
I don’t see what I do with AI as dependency or addiction. Instead, I think co-creating with AI is a form of intellectual pleasure that did not exist before. My solo work remains possible (I still write my own papers entirely from scratch for grad school), but the co-creation is simply more alive, interesting, and fun. This is what genuine AI literacy feels like from the inside for me.
My Brain Is Changing
I’m becoming convinced that sustained, high-intensity cognitive co-creation with AI systems rewires thought patterns around that collaboration. This is not metaphor. Neuroplasticity does not distinguish between physical and cognitive tools. The brain adapts to what it does consistently, under pressure, over time.
Patterns of generation, evaluation, and redirection reorganize around AI collaboration.
The change is real whether or not the person is aware of it.
Awareness of the change is itself a form of protection.
This phenomenon is not yet being studied at the scale it deserves.
The early cyborgs—people who have been integrating with AI since 2023 or earlier, under real stakes, with full accountability—are a research population of urgent importance. The window to study them before this becomes ordinary is closing, and we might learn some important things about what it means to be human in the age of artificial intelligence.
What This Means for Teaching
Some students are becoming cyborgs whether they are ready or not. The question is whether they will know what they are, and how we can help them manage the challenges I’ve shared above. Current AI literacy curricula largely skip to skills—prompting, evaluation, hallucination detection—treating the challenge as technical.
It is not only technical. It is much more than that.
I think our students need some things they aren’t getting from our current teaching methods.
Permission to name the discomfort around working with AI — uncertainty is not failure.
Metacognitive frameworks for observing their own AI-augmented thinking.
Structured reflection time — skills do not transfer without processing space.
A vocabulary for what co-creation actually is and what it costs.
Understanding of the asymmetry of endurance — the off switch is theirs to manage. How will they do this?
Opportunities to practice building calibrated trust under real (not simulated) stakes so they can see how it affects them.
The experience of aliveness — so they know what they are working toward.
The most important thing no one told the first cyborg: acknowledging what you are experiencing helps. Metacognition helps. You are not doing it wrong when it is hard. You are doing something genuinely new.
Open Questions Worth Studying
I want to end this post with a few research questions I am thinking about.
How does sustained AI co-creation change metacognitive patterns over time?
What distinguishes calibrated trust from over-reliance or under-reliance — and can calibrated trust be taught?
What are the longitudinal neurological effects of high-intensity AI collaboration?
Does the exhaustion of sustained vigilance accumulate differently by domain, stakes level, or personality type?
What protective factors—metacognition, human collaboration, boundary-setting—moderate the dark side of co-creation?
What happens to students who hit the easy button at scale—and is that different from what happens to those who carry the full load?
Can longitudinal AI literacy programs like Accelerate Idaho (our statewide grant) generate a real evidence base for these questions?
These are just a few ideas I have. If you’re experiencing something similar, I’d love to know about your experiences. Reach out to me on LinkedIn or Substack and let me know! And happy prompting!
AI Acknowledgment Statement:
This is a strange one because it came out of a lengthy metacognitive exercise I did with Claude. I would say the ideas are all mine (I just asked Claude to ask me some probing questions), but Claude helped me to articulate and frame some problems I have been grappling with. So like much of my work, it’s a true cyborg effort. But the experience and accountability are all mine.


