
2025 included MANY grand claims about transformational potential for AI:
- “AI will enhance education (and civilization) in these magnificent ways,” or
- “AI will destroy education (and civilization) in these grisly ways.”
Here’s a question that got less air time:
“Can you name me just one specific — even mundane — way that AI can make teaching and learning simpler and better?”
As of today, my answer to that question is YES.
Here’s the story.
Regular readers have heard me talk about the benefits of “prequestions” before: for example, here, here, and here.
The prequestion process goes like this:
- Students try to answer questions about a topic even though they haven’t learned about it yet.
- Unsurprisingly, they answer most of these questions incorrectly.
- Students then read a passage on the topic.
- Result: these students remember more about the topic than others who didn’t answer prequestions.
By the way: these prequestions can be very straightforward. For instance: “What distinguishes hydraulic brakes from mechanical brakes in automobiles?”
Researchers are still trying to figure out WHY prequestions help learning; they’ve got several theories. But we’ve got enough research on this strategy for me to conclude that it’s a thing — not a well-intentioned research-based fluke.
As we try to apply this finding in actual classrooms, we come across a few stark problems:
- For teachers: writing prequestions takes time. (Boo!)
- For students: how can they write their own prequestions when they haven’t studied the material? (Paradox!)
A research team — led by Dr. Steven Pan — wondered: can we use AI to generate effective prequestions? That is: do AI-created prequestions benefit learning the same way that human-generated prequestions do?
Researching Step by Step
Team Pan’s questions sound simple. But when researchers approach a topic like this, they face several demands.
First: they have lots of technical steps to follow: sample sizes and active control groups and intricate calculations and so forth. (In my estimation, they checked all these boxes.)
Second: when done well, research studies try to disprove their own hypotheses. Researchers don’t so much kick the tires as try to puncture them. (In my view, they explored plausible alternatives admirably.)

To meet these challenges, Pan's crew undertook four related experiments. I won't go through all the nitty-gritty, but the highlights make for encouraging reading.
In Pan's most basic experiment, one group of students read a passage about different kinds of brakes: air brakes, mechanical brakes, hydraulic brakes — you get the idea. A second group tried to answer two AI-generated prequestions about that passage before they read it. Sure enough: the students who answered the AI-created prequestions scored higher on a follow-up quiz than those who didn't.
Unsurprisingly, they scored higher when answering the same questions that they initially saw as prequestions. They ALSO scored higher when answering novel questions. In other words: it seems that the benefits of answering prequestions go beyond the precise questions themselves to the passage overall.
Don’t Stop Now
To make sure they have a persuasive case, Pan’s team didn’t stop there.
They asked: do AI-generated prequestions provide as much benefit as human-generated prequestions?
Short answer: “yes.” Technically speaking, in some cases the human-generated prequestions led to slightly higher quiz scores — but the differences were tiny.
They asked: do AI-generated prequestions help more or less than previewing an AI-generated outline?
Short answer: “trying to answer the prequestions helped considerably more than previewing an outline.”
They asked: do detailed prompts produce better questions, or does a basic prompt work well too?
Short answer: “a basic prompt worked just fine.” In one study, the basic prompt led to more effective prequestions than the detailed one.
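If you want to try this yourself, the recipe is simple enough to script. Here's a minimal sketch: the prompt wording below is my own illustrative guess, not the prompt from the study, and `ask_llm` is a hypothetical stand-in for whatever chat-model API you use.

```python
def build_prequestion_prompt(passage: str, n_questions: int = 2) -> str:
    """Assemble a basic prompt asking a chat model for prequestions.

    The wording is illustrative only; the study's actual prompt may differ.
    """
    return (
        f"Read the following passage and write {n_questions} short factual "
        "questions that a student could try to answer BEFORE reading it. "
        "List one question per line.\n\n"
        f"Passage:\n{passage}"
    )


def ask_llm(prompt: str) -> str:
    """Hypothetical helper: wire this to a chat-model API of your choice
    and return the model's text reply."""
    raise NotImplementedError


passage = "Hydraulic brakes use fluid pressure, while mechanical brakes ..."
prompt = build_prequestion_prompt(passage)
# questions = ask_llm(prompt).splitlines()
```

The point of the sketch is how little scaffolding the "basic prompt" approach needs: a one-sentence instruction plus the passage text was enough, in Pan's studies, to produce prequestions as effective as human-written ones.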
To summarize the good news in this study:
- AI-generated prequestions help students learn from reading new information
- They help as much as human-generated prequestions
- Prequestions improve memory of the entire passage — not just the answers to the questions themselves
- They help more than some other kinds of warm-up activities, like studying an outline
- Even basic prompts work just fine
Good News…Bad News?
With all that good news, is there any bad news?
Honestly, I don’t see much “bad news” here. But — as always — I do see limitations.
- As far as I know, prequestions have been studied for learning from reading passages. I don’t know if we have evidence showing they benefit students who, say, listen to a discussion or a teacher presentation. For that reason, this strategy isn’t obviously road-tested for younger grades.
- This study focused on adult learners; the average participant age for these studies was in the low 30s. We don’t know if AI-generated prequestions will help in 8th grade — although it’s not obvious to me why they wouldn’t.
- This study, like most research into prequestions, tests memory after a few minutes. Will it help over longer periods of time? I don’t think we know.
- Prequestions aren’t a panacea. We don’t need to use them all the time. They should be one strategy that we use judiciously, not a hard-and-fast requirement.
Once we acknowledge those limitations, I think we have a compelling case. We know that — under the right circumstances — prequestions can help students learn. And, thanks to Pan and his colleagues, we know that AI-generated prequestions provide the benefits we want.
At the top of this post, I asked this question: “Can you name me just one specific — even mundane — way that AI can make teaching and learning simpler and better?”
Team Pan answers: “AI can help both students and teachers generate prequestions. And those AI-created questions help learning.”
Perhaps you'll begin the new classroom year with a prequestion or two.
Pan, S. C., Schweppe, J., Teo, A. Z. J., Indrajaya, A., & Wenzel, N. (2025). Using ChatGPT-generated prequestions to improve memory and text comprehension. Journal of Applied Research in Memory and Cognition. Advance online publication. https://dx.doi.org/10.1037/mac0000254
