The Art of Being Human
Why We Must Advocate for Human-Centered Generative Artificial Intelligence
Adding the link to my OpenCon Ohio 2024 slide deck (with links to the resources I shared). Thanks to everyone who attended!
Author’s Note: I started writing this post last week, before the major infodumps from OpenAI and Google. I have…thoughts. Marc Watkins summed them up neatly, so I’ll lead with that: “[W]e are in a grand public experiment with AI no one asked for. Quite a few of us are going to great lengths to ignore the reality of how quickly our world and interactions are now exposed to automation. With this latest release, I don’t see that position being tenable any longer. Education is being ushered into this new generative era whether we like it or not and we can either take a position demanding ethical and transparent behavior from developers and adopters or risk being ushered aside in favor of sweeping technological change.”—Marc Watkins, “AI Has Changed Learning, Why Aren’t We Regulating It?”
I know this blog is supposed to be about generative artificial intelligence. But bear with me. I am grieving.
On April 17, Representative Sue Chew, the longest-serving Democrat in the Idaho State Legislature and someone I was fortunate to call a friend, passed away from pancreatic cancer. It was one month to the day since the last time we met. That day (Saint Patrick’s Day), she brought me a rose, something she was in the habit of doing because, in her words, “it makes people happy.” She personally preferred calendulas, with their bright orange joy.

That day, some friends and I had to convince her to go back to the hospital, and when we finally got her there, she did not want us to leave. I think she knew then that time was short. I know she did not want to leave. My last words to her were, “I love you,” and her last words to me were, “I love you too.”
I played “Amazing Grace,” “Bridge over Troubled Water,” and “Fire and Rain” for her memorial service. The Unitarian Universalist church was packed with former legislators from both parties, many of whom I knew from my work as a mental health advocate and former board member and president of the NAMI Boise chapter. It feels like another life sometimes, but I once was an accidental advocate for children’s mental health, a mother furious at a system that continued to fail my family and my beloved child. And I never forget that May is Mental Health Awareness Month.
That’s how I first met Sue. She was a passionate advocate for the underdog, fighting for those whose voices are too often silenced in our “reddest of the red” state.
At Sue’s memorial service, friends and family spoke of her kindness and compassion. But what I will remember most is her intelligence. Sue was one of the most strategic and brilliant people I have ever met, in that quiet sort of way that people who are problem solvers so often have. Her special skill, I think, was getting louder people to think that her innovative ideas were their own. She was a pragmatic realist who understood the political calculus of Idaho politics. Sue always wore mismatched socks because she shared her pairs with people she disagreed with, especially moderate Republican colleagues, as a token of their shared humanity.
We need people with Sue’s brilliance, strategic vision, and compassion desperately in this moment. And fortunately, we have many such people in our midst. People like Ethan Mollick, Lance Eaton, Anna Mills, Laura Dumin, Marc Watkins, and Leon Furze, to name a few.
It strikes me, as I write this, that while I initially started this blog to collect and share teaching ideas and resources for generative artificial intelligence, my posts have often focused on the essential nature of the human experience. As my friend Sue said on more than one occasion, “death is a part of life.”
Maybe my real goal all along has been to figure out what makes us human. If it’s not language, then what?
Just a few days before Sue died, I had the opportunity to advocate for creating a college-wide committee to oversee our collective implementation and management of generative artificial intelligence. When I made my case for the committee and for funding to support faculty through training, I focused on our students. They are always at the center of my work, and I am fortunate to work at an institution with like-minded people.
But at that meeting, I learned something that shook me: the “student” representing our school on our website was AI-generated to meet a marketing demographic (they have since taken the image down).
What does it look like to advocate for ethical, responsible, safe artificial intelligence? And how can we move past the conversation about students having robots write their essays and on to the much more important task of teaching students to practice critical thinking any time they engage with AI?
Here’s what worries me about what is clearly a “move fast and break things” approach to this technology: generative AI is designed to be helpful, not truthful.
I learned this early on when it made up citations.
Then I asked it to write my biography, which sounds very convincing but is only about one-third true. I was not, for example, born in Pocatello (though I am starting a Ph.D. program there in the fall). And I have not written a book called The Spider and the Fly: A Mother’s Courageous Journey from Fear to Freedom (though I think I’m going to co-write it with Claude because that is an awesome title).
But if you didn’t know me, you would not have any reason to doubt the biography.
One thing that needs to stop immediately: AI-generated results in Google searches. They are rife with errors large and small, just like my biography. I’ve come to trust Google to give me the information I need, and these AI-generated summaries are neither accurate nor useful. They are rapidly eroding that trust.
In the past few weeks, I’ve told more than one person who made the mistake of stopping by my office that I think writing as we have traditionally understood it may be dead. I thought that before OpenAI’s announcement about GPT-4o, and I think it even more now. The question is, what replaces writing? How will we still know what we think? Will anyone care? (I’m in long, earnest discussions with Microsoft Copilot about this right now, and I’ll update you at some point.)
What’s clear now is that we have to advocate, as Marc Watkins urges, for regulation of these tools. Anna Mills has called for watermarking; that’s a good start, but it could probably be gamed/hacked/whatever. But more importantly, we need to figure out why learning, thinking, and writing matter to us as humans. It’s amazing how cavalierly these tech bros decided that, of all the things to automate, writing should be at the top of the list. Did they really hate their high school English teachers that much? Or (as I suspect) do they really hate being human that much?