What Would Truly Open AI Look Like?
Spoiler alert: OpenAI is not the answer. Some thoughts on the OpenAI Educator Forum
Key Takeaways (from the TLDR Bot)
The author presented at the 2024 Open Education Conference and attended an OpenAI Educator’s Forum, where they found OpenAI's focus on higher education and equity lacking.
OpenAI is transitioning to a more for-profit model, with ambitious goals like creating technology that could make current AI use cases obsolete, including running businesses with minimal human involvement.
There were concerns about the gender gap in AI and the absence of women in prominent roles at the forum, despite women actively contributing to AI advancements.
The forum highlighted a disconnect between OpenAI's leadership and the broader public, raising concerns about AI’s future impact on jobs and education.
The author advocates for integrating humanities with AI and exploring smaller, open models to ensure that the benefits of AI are accessible and equitable.
Last week I was living my best academic life, complete with lobster rolls (my partner teases me that the only thing I eat when I’m in New England is lobster rolls, and he’s not wrong). I attended and presented at the 2024 Open Education Conference in Rhode Island, where my College of Western Idaho colleague Joel Gladd and I gave “Robots Won’t Replace Us (We Hope),” a session on teaching with generative AI. Here’s a link to our slides, which include several resources.
Then, since everything on the East Coast is so close, and there are trains (!!!), I took Amtrak from Providence to New York City on Thursday to attend the OpenAI Educator’s Forum. I first heard about this forum from Stefan Bauschard, and while I’m not a “senior vice president” or “academic provost,” I do have one of two AI fellowships with the Idaho State Board of Education, and I thought this might be an excellent opportunity to learn more about OpenAI’s plans for higher education.
I was disappointed. Marc Watkins has already posted his excellent take on the OpenAI forum on his Substack (which you should subscribe to if you don’t—ditto Stefan’s linked above). Marc and I came away with some similar takes. I especially like the questions he asks about equity and access in his post.
Here are the notes I took during the forum:
Students are the biggest users of ChatGPT (Leah Belsky). Surprise, surprise.
OpenAI wants us to pay them so that we can tell them what their product is good for. (Read that again slowly).
Higher education really isn’t a priority for OpenAI.
Addressing the growing and alarming gender gap in AI adoption and use also isn’t really a priority for OpenAI.
They are becoming a for-profit company so they can create something (AGI) that will make all this work we are doing to develop use cases obsolete. One presenter boasted that in the near future, one person (or maybe two) would be able to run a million-dollar company.
OpenAI seems to see itself as both an enterprise and a direct-to-consumer company, with a stated goal of providing access to everyone (we’ll see). But in reality, it has no plan for how to serve its biggest customer base: students.
Now, I have to admit that for the first 20 minutes or so, I was trying to stave off a panic attack because I was convinced that I had forgotten to unplug my curling iron and that consequently, Manhattan (or at least Chelsea) would be consumed in a conflagration only rivaled by that infamous episode with Mrs. O’Leary’s cow in Chicago. (I had in fact unplugged my curling iron—see picture below). So I might have missed some things (Just keeping it real, folks—the Liza Long persona bot would never write anything like this paragraph. I think).
But overall, it was a thrill to connect with so many brilliant and curious people, and I was so grateful to see Ethan Mollick in action. He’s been an inspiration to me for much of the work I’ve done with generative AI. Mollick is an unapologetically fast talker, and I was hanging on every word.
Unfortunately, the educators and use cases that OpenAI showcased were disturbingly women-free. This is not because there aren’t women doing some amazing things in this space: see Anna Mills, Laura Dumin, Amanda Larson, and yeah, I guess me. I have co-written a literary theory textbook with ChatGPT 3.5 and am teaching an English 102 course that fully integrates generative AI prompting at each step of the writing process. More on that in a future post, but as a teaser: while I was out last week, 100% of my students turned in drafts of their first essay within three days of the due date, which is unprecedented in my teaching career.
Back to the forum: The obvious unspoken question (which no one got to ask) is this: What about jobs? Unfortunately, in higher ed we decided our mission was to train students for the workforce rather than to create engaged and knowledgeable citizens. And both the workplace and our democracy are changing fast.
Maybe we need a different unique value proposition. I have argued and continue to argue that the advent of generative AI is in fact a golden opportunity for the humanities, which is why we need to stop complaining about student plagiarism and start figuring out how to emphasize a humanities-centered approach to education again. At the OpenAI forum, Harvard’s Mitchell Weiss argued that it’s essential to get “fingers on keys,” and this really resonated with me as a necessary first step for those of us who aren’t in STEM. In writing, I can do everything genAI can do; it just does those things faster. In coding, I’m illiterate, so AI has been a game changer for me.
But if everyone is now a coder, what do we need OpenAI for?
What does truly open AI look like?
This brings me back to where we started: the Open Education Conference. In his thought-provoking keynote address, Dr. Richard Baraniuk from Rice University encouraged all of us to think about the many intersections between open education and generative artificial intelligence. After the OpenAI forum, I’m more persuaded than ever that I need to start messing with my own smaller-scale models. If I’m going to do all that work anyway, I’d rather openly license it so that others can benefit.
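If you’re curious what “messing with smaller-scale models” might look like in practice, here’s a minimal sketch, assuming you have Python and the Hugging Face transformers library installed. The model name below is just a placeholder for a small, openly licensed model; swap in whichever open model you actually want to explore. This is my illustration, not anything OpenAI or the forum presenters recommended.

```python
# A minimal sketch of running a small, open-weights model locally.
# Assumes: pip install transformers torch
# "gpt2" is used only as an example of a small, openly licensed model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Open education matters because"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(outputs[0]["generated_text"])
```

The point isn’t the particular model: it’s that the prompts, materials, and course designs you build around something like this can be openly licensed so that others benefit, which is exactly the intersection Baraniuk was pointing to.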
A final note: I don’t know if I want artificial general intelligence (and I’m not alone: read this great essay by Sigal Samuel at Vox). I do know that I don’t want OpenAI to make that decision for me. The disconnect between OpenAI’s leaders and the general public is vast, and that concerns me. Like I said, we need the humanities, y’all.
I’ll leave you with a picture of sunset at the Cape, where I was truly fortunate to spend a weekend hiking and thinking about what I learned and what I want to do next with generative artificial intelligence. What do you think? What does truly open AI look like to you? Find me on Threads (@anarchistmom) or LinkedIn and let me know. And happy prompting!