The Research Paper Is Dead. Now What?
Some thoughts on Stanford's STORM and Gemini's Deep Research
Here are the key takeaways from the article (created by the TLDR Custom GPT):
The Traditional Research Essay Is Obsolete – With AI tools capable of generating full research papers instantly, requiring students to write traditional research essays is becoming outdated.
Research Itself Still Matters – While AI can perform the mechanics of writing a paper, skills like inquiry, critical thinking, and source evaluation remain essential.
New Assessments Are Needed – Instead of research essays, educators should focus on assignments like AI-assisted research logs, multimedia projects, collaborative discussions, and case studies that emphasize analysis and critical engagement.
Teaching AI Literacy Is Crucial – Students should learn to use AI as a research assistant, properly cite AI-generated content, and understand its biases and limitations.
Education Must Evolve – Clinging to traditional essays and AI detection tools is ineffective; instead, educators should embrace AI’s potential to enhance learning and develop new ways to assess student understanding.
Like Samuel Taylor Coleridge in the 1802 poem “Dejection: An Ode,” staring at a moon that no longer inspires, I recently found myself looking at some new AI-generated research tools and wondering whether the academic research essay has finally reached its expiration date. I knew these tools were coming, but optimistically, I thought maybe I’d have the summer to revise my writing assessments. Nope. After testing Google’s Deep Research and Stanford’s STORM, I have my answer: The academic research paper is dead.
For generations, we’ve assigned research essays in the name of teaching critical thinking, argumentation, and engagement with scholarly discourse. But now, AI can generate a fully formed research paper—sources, citations, and all—in seconds. If a machine can do it faster, better, and (sometimes) more accurately than a human, why are we still making students write them?
More importantly for writing instructors who want to keep their jobs, what should replace the research essay in an AI-augmented world?
AI Killed the Research Essay—But It Can Save Research
Let’s be clear: research itself isn’t dead. Inquiry, analysis, and synthesis still matter. What’s obsolete is the performance of research—the idea that we can measure intellectual engagement by forcing students to produce a 10-page document that AI can generate in an instant.
If we keep assigning research essays as if AI doesn’t exist, we’ll end up assessing students’ ability to game detection tools rather than their ability to think critically. Instead, we should be asking:
How can we teach students to use AI as a research assistant rather than a research replacement?
What new forms of assessment will actually measure intellectual engagement?
How do we redefine originality and authorship in an era of AI-assisted writing?
The Future of Research Assignments
Rather than assigning research essays, we should shift our focus to research process over research product—teaching students to ask better questions, evaluate sources critically, and reflect on how AI shapes their understanding.
Here are a few possible replacements for the traditional research essay:
AI-Assisted Research Logs – Instead of a final paper, students keep a reflective log of their research process, documenting their AI queries, evaluating generated responses, and explaining how they verify sources.
Multimodal Research Projects – Students present research through podcasts, videos, or interactive websites, focusing on synthesis and analysis rather than the constraints of an essay format.
Collaborative Research Dialogues – Instead of writing individual papers, students engage in live or asynchronous research discussions, responding to AI-generated insights with their own critical interpretations.
Research Ethics Case Studies – Students analyze AI-generated research outputs, identifying strengths, biases, and limitations—essentially turning the AI into a case study of its own.
The first thing I did after testing Deep Research and STORM this week was ask Claude.ai, my instructional designer buddybot, to help me create a research verification checklist for students. You can access it in my open educational resource textbook Cyborgs and Centaurs.
Case Study: Deep Research and Stanford’s STORM
I recommend that you get your own fingers on the keys ASAP with these tools, but if you want to see my first steps, I’ve shared the reports I generated using Google’s Deep Research and Stanford’s STORM. I tested both by asking them to analyze Coleridge’s “Dejection: An Ode,” and the results were… let’s just say, interesting enough to make me wonder about the future of academic research papers. I chose Coleridge’s poem because I had recently written a short paper on it (entirely free of AI assistance, in case my professor is reading this, and he probably knows that anyway because it was not my best work) for one of my grad school classes, which meant I had enough background knowledge to critically assess the accuracy of the AI-generated reports (though full disclosure: I am not a fan of the Romantics. If I never have to read another poem by John Keats, it will be too soon. If only his words truly had been “writ on water.”). Take a look and decide for yourself—are these tools enhancing research, or just giving us the illusion of understanding?
STORM was especially intriguing to me as a researcher and writing instructor because it does not require any prompting skill. The process starts with what probably feels familiar to most students: tossing in a few keywords to kick things off. The STORMbots then have a conversation that creates prompts and replies, which are synthesized into a final report. The bots consider your terms from four different roles, which could help students understand the rhetorical situation. See the screenshot below.
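For the code-curious, here's a minimal sketch of what that multi-perspective pattern might look like under the hood. To be clear, this is not STORM's actual code: the perspective labels and the `ask` helper are my own stand-ins, and the sketch assumes the OpenAI Python SDK and an API key in your environment.

```python
# A toy sketch of STORM-style multi-perspective research, NOT the real system.
# Assumes: `pip install openai` and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

def ask(system_role: str, prompt: str) -> str:
    """Send one question to the model from a given perspective."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for the sketch
        messages=[
            {"role": "system", "content": system_role},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

topic = "Coleridge's 'Dejection: An Ode'"

# Hypothetical roles, standing in for the perspectives STORM generates.
perspectives = [
    "You are a literary historian asking questions about context.",
    "You are a close reader asking questions about form and imagery.",
    "You are a skeptic probing for weak or contested claims.",
    "You are a teacher asking what students most need to know.",
]

# Each perspective asks a question; the answers feed a final synthesis,
# mirroring the bot-to-bot conversation described above.
notes = []
for role in perspectives:
    question = ask(role, f"Ask one sharp research question about {topic}.")
    answer = ask("You are a knowledgeable research assistant.", question)
    notes.append(f"Q: {question}\nA: {answer}")

report = ask(
    "You are an editor synthesizing research notes into a short report.",
    "\n\n".join(notes),
)
print(report)
```

Even as a toy, the pattern makes the pedagogical point: the "research" emerges from questions asked in different roles, which is exactly the rhetorical-situation lesson we want students to notice.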
Of course, OpenAI isn’t sitting this one out, either. They recently announced a research tool designed to assist with literature reviews, pulling from academic sources to generate structured insights. I haven’t tested it yet—because like many of us in academia, I’m operating on a “dreaming of a first-class upgrade, but firmly in the middle seat of economy” budget—wedged between two manspreaders, of course. Until I figure out how to turn my $20-a-month ChatGPT Plus subscription into a $5,000-a-month passive income stream (suggestions welcome), I’ll have to rely on Deep Research and STORM for my AI-assisted research experiments.
Teaching Students to Acknowledge and Cite AI
If we embrace AI as part of the research process, we also need to teach students how to acknowledge and cite AI-generated assistance transparently. Inspired by Ethan and Lilach Mollick’s AI Tutor Blueprint, I created a tutoring prompt to guide students through this process:
You are an AI tutor, and your job is to help students learn how to properly cite and acknowledge the use of generative AI in their academic work.
Step 1: Assess Prior Knowledge
Start by introducing yourself:
"Hi! I'm here to help you understand how to cite and acknowledge generative AI in your academic work. What do you already know about citing AI-generated content?"
Wait for the student's response before moving forward. Do not assume knowledge—adapt your approach based on their answer.
Step 2: Guide the Student
Use explanations, examples, and analogies tailored to the student’s responses. Key elements to cover include:
Citation Formats: Show how to cite AI in APA, MLA, and Chicago styles. Provide specific examples.
Transparency in AI Use: Explain why it's important to clarify how AI was used (e.g., generating text, assisting in research, or revising).
Academic Integrity: Discuss why acknowledging AI use is essential and how hiding it could lead to ethical concerns.
Common misconceptions to address:
Some students may not know they need to cite AI—clarify that major citation styles now provide guidelines.
Some may fear academic consequences—explain that transparency is valued when AI is used ethically.
Step 3: Use Real-World Scenarios
Engage students by presenting real-world cases:
1. Citing AI-generated text: "If you ask an AI to write part of your paper, how should you cite it?" Guide them through APA, MLA, or Chicago citation examples.
2. Acknowledging AI assistance in research: "What if AI helped you find sources or summarize information? How might you note this in your work?"
3. Explaining AI use in a bibliography: "Where should you list AI tools in your references?" Provide examples of proper formatting.
4. AI assistance in editing and revision: "If AI helped refine your writing but didn’t generate original content, how should you acknowledge that?"
Step 4: Encourage Critical Thinking
Ask open-ended questions to help the student reflect:
"How do you think using AI compares to getting help from a tutor or writing center?"
"What are the risks of not citing AI properly?"
"Can you explain in your own words how you would cite an AI tool in a research paper?"
Step 5: Assess Understanding
Before wrapping up, check the student’s comprehension by asking them to:
Create a sample citation in their preferred style.
Explain how they would disclose AI assistance in their work.
Compare citing AI to citing traditional sources.
If they struggle, provide hints and further guidance. If they succeed, praise their understanding and encourage them to stay updated on evolving citation guidelines.
End by reminding them: "Citing AI is a new skill, but just like citing any other source, it ensures honesty and credibility in your work. If you have more questions, I’m here to help!"
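If you'd like to try this prompt in a chatbot of your own rather than pasting it into a web interface, here's a minimal sketch of how it might be wired up as a system message. Again, this assumes the OpenAI Python SDK; the model name and the prompt file name are illustrative, not prescriptive, and any provider with a compatible chat API would work the same way.

```python
# Minimal sketch: running the citation tutor prompt as a multi-turn chat.
# Assumes `pip install openai` and OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical file holding the tutor prompt above, pasted verbatim.
TUTOR_PROMPT = open("citation_tutor_prompt.txt").read()

messages = [{"role": "system", "content": TUTOR_PROMPT}]

while True:
    # The first call produces the tutor's self-introduction (Step 1).
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    ).choices[0].message.content
    print(f"\nTutor: {reply}\n")

    student = input("Student (or 'quit'): ")
    if student.strip().lower() == "quit":
        break
    # The tutor waits for the student's answer before moving on, so each
    # turn appends both sides of the exchange to the conversation history.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": student})
```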
This approach reframes AI from a shortcut into a tool for metacognitive engagement, helping students understand and reflect on how AI influences their intellectual work.
I tested this prompt out in DeepSeek (aka The Hottest New Thing to Come Out of China since TikTok); see the screenshot below, or watch this short video (forthcoming) to see how it worked.
I also just have to share my favorite DeepSeek/OpenAI meme from the many that circulated last week.
The Research Essay Is Dead. What Comes Next Is Up to Us.
We have a choice. We can cling to the corpse of the research essay, propping it up with AI detection software and increasingly irrelevant academic integrity policies—a sort of Weekend at Bernie’s situation, where we pretend it’s still alive even though the act isn’t fooling anyone. Or we can embrace the reality of AI-augmented research and create assignments that actually teach students how to think.
In “Dejection: An Ode,” Coleridge laments, “We receive but what we give” (line 47). If we keep assigning outdated essays, we’ll receive outdated thinking in return. But if we redefine research for the AI era, we might just create something better.
The research essay is dead. Let’s build what comes next.
Disclosure Statement
I (aka the “human in the loop”) used OpenAI’s custom GPT, Liza Long Persona Bot, to help draft this blog post. Liza Long is a persona bot I created to reflect my writing style, tone, and personality—basically, an AI-powered version of me with an unlimited supply of time and patience. Because I was swamped with other responsibilities (like, you know, teaching, researching, and figuring out how to monetize my AI addiction), I used this tool to generate an initial draft, refine my ideas, and punch up the prose. Every word here has been reviewed, revised, and approved by me. While AI can assist with writing, it doesn’t replace the human work of thinking, questioning, and humaning (my “curling iron” word).