
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Should We Teach Math and English the Same Way?
Andrew Watson

Because we teachers are a busy lot, we sometimes want simplicity and clarity:

I’m honestly too busy to sort through all the options and variables; just tell me what to do.

In fact, when I went to my first Learning and the Brain conference in 2008, that was exactly my plan.

The researchers would tell me what to do.

I would do it.

VOILA: brain-based teaching.

The more time I spend in this field, the more I doubt that logical chain. In fact, I frequently warn people against that kind of thinking.


My regular mantra:

“Don’t just DO this thing. Instead, THINK this way.”

In other words, we teachers always have to translate research-based advice to our own context.

Today’s News

I recently came across a study looking at twelve different instructional activities in English and math classrooms.

In this study, “instructional activities” include …

… open discussion among teacher and students,

… use of whiteboard by teacher,

… students working in groups,

… one-to-one teaching,

… students copying from the whiteboard,

… and so forth.

The research team asked: do students benefit from the same instructional activities in both disciplines?

To answer this question, researchers had observers keep track of teachers’ instructional activities. These observers — during more than 2500 visits! — recorded whether teachers did these 12 activities “none or very little,” “some of the time,” or “most or all of the time.”

And, they then looked at the students’ scores on national exams in English and math. (This study was done in England, where most students take the GCSE when they’re 16 years old.)
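
To make that design concrete, here’s a minimal sketch in Python. The numbers are invented toy data, and the real paper uses far more careful econometric models; this only shows the shape of the comparison between activity ratings and exam scores.

    import pandas as pd

    # Toy data: one row per observed class. "independent_practice" is the
    # observer's three-point rating; "exam_score" is the class's mean exam
    # result. Every number here is invented for illustration.
    df = pd.DataFrame({
        "subject": ["math"] * 3 + ["english"] * 3,
        "independent_practice": ["none", "some", "most"] * 2,
        "exam_score": [52.0, 55.1, 58.7, 54.3, 54.0, 53.6],
    })

    # Average exam score for each rating level, within each subject.
    print(df.groupby(["subject", "independent_practice"])["exam_score"].mean())

The real analysis also controls for students’ prior attainment and teacher characteristics; without those controls, a comparison like this shows correlation, nothing more.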

So, what did they find when they put all those pieces together?

Fascinating Results

First, this research team found that teachers do different things:

Some teachers spend much of class time using traditional direct instruction, including lecturing and the use of textbooks, while other teachers devote more class time to students working with their classmates or individual practice.

For instance: one third of teachers use “open discussion” most or all of the time, but one quarter don’t do so at all.

Second, those different instructional activities matter.

In math classes, students benefit from a) practicing on their own, and b) teachers’ checking for understanding.

Students who engage in these activities “all or most of the time” score significantly higher than those who do so “some of the time.” (In this case, “significantly higher” is a bit hard to describe. Probably the easiest way to say this is: both statisticians and the students themselves would notice the difference.)

In English classes, however, students benefit from working and talking with each other (and the teacher).

So, to answer the question in this post’s title: at least according to this study, we shouldn’t teach all disciplines in the same way.

What This Finding DOES Mean

If you teach math or English to high-school students in England, I think you should give this study a careful look to guide your classroom practice.

That is: I’ve given an introduction — but the study includes A LOT more information that could be practically helpful to you.

Even more important:

If you don’t fit in that teaching category, this study reminds us that research-based teaching advice always requires translation and adaptation.

Students benefit from different instructional activities in math and English. And, presumably, in other disciplines as well.

That is: you might go to a conference session that highlights the importance of mind-maps. (I’m picking this example at random.) That session shows research about its effectiveness in helping students learn.

However, this study clearly reminds us that we might need to adapt that advice to our own classrooms.

High-school English teachers might have students create mind-maps together; remember, students benefit from “working and talking together.”

High-school math teachers might have them create mind-maps solo; students benefit from “working on their own.”

More generally, this study might prompt you to ask some direct questions during that mind-map session. Was the research done with students in different grades? In different school or community cultures? Studying different topics? With diagnosed learning differences?

In other words: this specific research finding reminds us of a general lesson. We should be curious about and open to research-based suggestions. AND, we should check and be sure the research aligns with our teaching context before we make drastic changes.

What This Finding DOESN’T Mean

The flipside of my last point is: this research should encourage you to adapt your teaching practices only if your classrooms look like these classrooms.

Do you teach history? This research might not (or might) talk directly to you.

Do you teach second grade? Ditto.

Perhaps you teach in a different cultural context — say, Korea, Cairo, or Kansas.

Perhaps your school has a specific teaching philosophy (a Montessori school; a military academy) that rules out these approaches.

In other words: don’t just DO what this research tells you to do. THINK about your teaching practice with these ideas in mind — and see if they fit and make sense in your world.


Burgess, S. M., Rawal, S., & Taylor, E. S. (2022). Teachers’ use of class time and student achievement (No. w30686). National Bureau of Economic Research.

When Prior Knowledge Bites Back: The Dangers of Knowing Too Much
Andrew Watson

In this blog, we typically highlight the benefits of prior knowledge.

For example: if a student knows a lot about baseball, she’ll be much more successful in understanding a reading passage about baseball.


That same student could struggle mightily with a passage about cricket. What’s an “over”? A “wicket”? A “badger”?

In the world of cognitive load theory, prior knowledge helps because it reduces working memory load.

An expert knows relevant definitions, concepts, procedures – and the relationships among them.

And because experts have all that knowledge in long-term memory, they don’t need to noodle it around as much in working memory.

The teaching implications of this insight:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

NB: This second insight has important implications for many project pedagogies.

This conclusion is well settled in cognitive load theory. But: is it always true?

Is it possible that prior knowledge might increase working memory load? Could it make thinking and problem solving more difficult?

Thinking the Unthinkable

Here’s a question:

“To mitigate the effects of climate change, would it be a good idea to plant more Douglas fir, oak, and beech trees in the Black Forest?”

I know a bit about climate change, and a bit about trees, and I’m generally inclined to say “yes.” Because I’m a novice – that is, I don’t have lots of prior knowledge on these topics – the question strikes me as straightforward.

However, if I were an expert, I might draw on my prior knowledge to see additional complexities in the question.

For instance…

…those trees might be vulnerable to particular diseases or pests,

…they might harm the ecosystem in the Black Forest,

…they might – paradoxically – do some tree thing or another that would ultimately exacerbate climate change rather than mitigate it.

In this case, an expert’s prior knowledge could introduce complicating variables – and thereby increase working memory load.

A research team, made up of scholars from Germany and Australia*, tested this hypothesis.

As you would expect, they asked forestry experts and forestry non-experts to consider (roughly) the tree-planting question above.

The experts considered the question more complicated than the novices did. That is: they said that it required more thought, more simultaneous contemplation of variables, and more complex thinking.

And – here’s the kicker – their answers weren’t any better than the novices’ answers.

In Other Words

Putting all these pieces together…

Forestry experts’ higher level of prior knowledge increased their perception of the problem’s complexity;

It did so (probably) because they thought of additional variables not included in the question;

These additional variables increased working memory load;

Because of additional strain on working memory, these experts didn’t benefit from their prior knowledge – and didn’t answer the question more effectively than novices.

Wow.

The research team then went ahead and tested this same idea with 4th graders in Australia.

As often happens in research, the details get complicated. The headline is: when they tested a classroom analog of the same problem, they got somewhat similar results.

Students with higher levels of prior knowledge DID perceive the cognitive load to be higher.

However, when those students solved problems, they scored higher than when they did not have relevant prior knowledge. (Remember: for the forestry example, higher cognitive load eliminated the experts’ advantage in solving the problem.)

In other words: at least one potential danger of prior knowledge — the added sense of complexity — does show up in the classroom, not just in abstract research exercises.

Teaching Implications, Take II

Above I wrote:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

Based on this study, I think we should add another implication:

Third: stop and consider – how might a student’s expert prior knowledge interfere with their learning of this concept? What other concepts or procedures might they draw into a question in ways that unhelpfully complicate their thinking?

At this point, I don’t think we have enough research into the dangers of prior knowledge to have refined or thorough guidance in answer to those new questions.

I do think, however, we should get in the habit of asking them.

TL;DR

Typically, prior knowledge benefits students by reducing working memory load.

Therefore, typically, we should ensure they have relevant prior knowledge before starting a topic.

In some cases – according to this research – prior knowledge can complicate thinking when experts bring in too many ideas from their knowledge base.

In these cases, we should be sure to think through those potential dangers, and head them off as best we can.

And: we should follow this research pool. It’s an intriguing topic!


* One of the researchers here is none other than Ollie Lovell, who wrote an EXCELLENT book on Cognitive Load Theory for teachers. You can read my review here.


Endres, T., Lovell, O., Morkunas, D., Rieß, W., & Renkl, A. (2022). Can prior knowledge increase task complexity? Cases in which higher prior knowledge leads to higher intrinsic cognitive load. British Journal of Educational Psychology.

Practical Advice for Students: How to Make Good Flashcards
Andrew Watson

Flashcards feel to me like a research sweet-spot.

In the first place: for the most part, students believe that they help — and are even willing to make them!

In the second place: flashcards should help. After all, flashcards promote retrieval practice. And as you know, research shows that retrieval practice really helps students learn.

So, if we can find specific research about flashcards, it should be especially useful in our work.


It would be even better if one of the researchers were Mark McDaniel — who co-authored Make It Stick: one of the great books on memory research for teachers.

If you agree with me on these points, I’ve got some good news for you today!

Starting with Questions

Far and away the most common flashcard question I hear is: “does it matter if students make the flashcards themselves?”

The logic behind this question makes sense. When students think about the material in order to make good flashcards, then that thought might promote learning.

In other words: flashcard making isn’t just the bad kind of “active learning” (students are BUSY!) but the good kind of “active learning” (students are THINKING!).

I have two doubts about this thought process.

First: students might not know enough to make good flashcards.

If their cards prompt them to recall relatively unimportant ideas and processes, then the subsequent retrieval practice won’t really help.

Second: making flashcards takes time.

If students have access to good flashcards — ones that highlight the core concepts, procedures, and facts — then studying with those cards will (perhaps) be more efficient than taking time to make their own.

Two other questions also suggest themselves:

What kind of questions should be on the flashcards?

Most students make detail flashcards. That is: flashcards that focus on facts, definitions, dates, and so forth.

They might also — or instead — make “conceptual” flashcards. That is: flashcards that combine details in compare/contrast patterns, or process steps.*

Question #3:

Do flashcards help some students more than others?

The good news: a recent study explores all those questions.

The First Question

Researchers started with a straightforward experiment. They had students read textbook passages – one about biological anthropology, the other about geology – and then study for a quiz.

The students were divided into four groups, based on how they studied:

Group A studied however they chose.

Group B received 12 flashcards prepared by the textbook makers.

Group C made their own 12 flashcards. They didn’t get any instructions about them.

Group D made their own 12 flashcards. They did get special instructions: “include 4 ‘conceptual’ questions” – that is, questions that compare/contrast, or that highlight several steps in a process.

Let’s return to the first question I asked above: did the students who made their own flashcards learn more than those who studied with pre-made flashcards?

Nope.

That is: students in Groups C & D (who made their own cards) did NOT learn more than those in Group B (who received pre-made flashcards).

Even worse: they DID spend more time.

So, at least in this experiment, asking students to make their own flashcards just isn’t very efficient. They DO spend more time, but DON’T learn more. A bad combination.

Okay, but what about the second question I asked above?

Did the students who made “conceptual” flashcards learn more than those who got no special instructions?

Again, Nope.

Students in Group C — who got no special instructions — mostly made “detail” flashcards. Students in Group D — who were instructed to make 4 “conceptual” flashcards – basically followed those instructions; they made 4 “conceptual” and 8 detail flashcards.

But: both groups spent the same amount of time, and got the same score on the quiz.

Digging Deeper

This research team had hypothesized that the “conceptual” flashcards would benefit learning, and were therefore surprised by the results of this first experiment.

However, they quickly saw a plausible explanation.

C-Group students – who got no instructions – made 12 flashcards. On average, 10 of them were detail flashcards, and the other 2 were “conceptual.”

D-Group students – instructed to make 4 conceptual flashcards – did so: 8 detail cards and 4 “concept” cards.

So you can see, not much of a difference there.

In their next experiment, these researchers doubled down on these two different strategies.

Two new groups of students read the same two passages.

Group E received detail-only flashcards.

Group F received “conceptual”-only flashcards.

Now is there a difference?

By George I think they’ve got it.

Sure enough, when “conceptual” flashcards make up a high enough percentage of the deck, they do help students learn more.

Now We’re Getting Somewhere

Based on these first two experiments, I think we have some useful answers to our initial questions:

First: at least so far, we don’t have good evidence that students learn more when they make their own flashcards. Alas, this strategy seems inefficient, based on experiment #1.

Second: conceptual flashcards do help students more than detail flashcards, as long as there are enough of them.

I do think this second conclusion requires further nuance.

In the first place, some disciplines really do require lots of detail knowledge. When I studied Czech, I had (literally) HUNDREDS of vocabulary flashcards. Other fields might require a similarly detail-heavy flashcard deck.

In the second place, I do think K-12 students might need detail flashcards more than college students. After all, college students already know more details than younger students do – especially at the highly selective college where this study was performed.

Finally, the distinction between “detail” and “conceptual” might be overdrawn. Here’s a technique I’ve used in my own work.

Step 1: ask a student to draw two vocabulary flashcards, and to define those words.

In my English class, the student might define the words “protagonist” and “sympathetic.”

Step 2: ask the student to make a connection between the two words.

Now the student might say: “Well, a protagonist is typically a sympathetic character – like Janie Mae Crawford. But not always: Macbeth certainly is the protagonist, and certainly isn’t a sympathetic character.”

With this technique, two “detail” flashcards combine to require “conceptual” thinking – at least as defined by the researchers.

TL;DR

As long as we allow for nuance, and the possibility that future research will invite us to rethink these conclusions, this study suggests:

A: Students don’t benefit from making their own flashcards – as long as we provide them with good ones, and

B: Students DO benefit from flashcards that ask them to combine and organize information, not simply recall free-standing facts.

These suggestions – and others that have good research support – give us useful pointers to pass along to our students.


A Final Note:

Eagle-eyed readers will have noticed that I didn’t answer my third question: “do flashcards benefit some students more than others?”

This study does point to an answer, but … I don’t fully understand it.

In brief, “high structure-building” students don’t benefit as much from conceptual flashcards, because they already do better than “low structure-building” students – who DO benefit from conceptual flashcards.

Sadly, I don’t understand exactly what “high and low structure-building” means here. Those words sound like a proxy for “high scoring” and “low scoring,” but not exactly. Rather than pretend I know, I’m simply fessing up that there’s an extra variable here.

If you figure it out, please let me know!


* The authors of the study I’m describing acknowledge that this definition of “conceptual” is incomplete. They’re using that word as a handy shorthand for “flashcards that go beyond single details.”  In this blog post, I put the word “conceptual” in quotation marks to highlight the narrow specificity of their definition.


Lin, C., McDaniel, M. A., & Miyatsu, T. (2018). Effects of flashcards on learning authentic materials: The role of detailed versus conceptual flashcards and individual differences in structure-building ability. Journal of Applied Research in Memory and Cognition, 7(4), 529-539.

ChatGPT and Beyond: The Best Online Resources for Evaluating Research Claims
Andrew Watson

If we’re going to make “research-based” claims about education — and we are! — then we better have the research goods.

So, how can we at Learning and the Brain — and you as teachers and administrators — find and evaluate research?

For a few years now, I’ve relied on two great sources.

Scite.ai helps me see if a particular study is an outlier, or in alignment with other research findings.

Connectedpapers.com shows a spider web of research studies on any given topic.

They’re great, and you should check them both out. (I’m on those sites several times a week.)

But, we’ve heard A LOT about AI in recent weeks. Can we rely on newer websites to help us out?

I spent an hour or so on Monday playing around on two sites; here’s what I found.

The Inevitable ChatGPT

I won’t rehash all the Twitter agita on this subject, but gosh ChatGPT has gotten lots of attention. (Twitter tells me it’s either the savior of education, the end of education, a deranged stalker of New York Times reporters, or a nothingburger.)

Can it help us explore research into education?

To answer this question, I asked ChatGPT to describe “schema theory.” Here are the first two paragraphs of its answer:

“Schema theory” is a cognitive psychology theory that describes how people understand, organize, and store information about the world. The theory suggests that people use mental frameworks or “schemas” to categorize information and simplify their understanding of the world around them.

A schema can be thought of as a mental structure that represents knowledge about a particular category of information, such as objects, events, or social interactions. For example, a person might have a schema for a “restaurant” that includes information about the typical layout, menu items, and social norms associated with dining out.

So far, so good. (The restaurant example of schema is quite common.)

I asked it to dig a bit deeper, and to identify problems or limitations with schema theory. Again, I got a solid set of answers. For instance, a limitation:

Overgeneralization: Schemas can sometimes lead to overgeneralization, causing people to perceive things in a stereotypical or biased manner. For example, people with a schema of a “criminal” may unconsciously associate criminal behavior with certain groups.

When I asked it for research supporting schema theory, its first three answers were several decades old, so I asked it for research from the last 20 years.

Of the three it listed, this one looked most promising:

Lee, J. J., & Anderson, N. (2015). The role of schema in the interpretation of ambiguous scenes. Memory & Cognition, 43(8), 1186-1197.

But, strangely, when I tried to find that study on Google Scholar, it didn’t seem to exist.

Stranger still, when I went to the website for the journal Memory and Cognition, the article wasn’t there either. (To be clear: the journal exists; the article doesn’t exist in it.)

You’re not going to believe this, but it seems that ChatGPT simply MADE UP a research study supporting schema theory.
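
If you want to run this kind of existence check yourself, here’s a minimal sketch using Crossref’s public REST API (a real, free service; the helper name is my own invention). It prints the closest indexed matches to a citation’s title, so a fabricated study simply won’t show up:

    import requests

    def check_citation(title, rows=3):
        """Ask Crossref for the closest indexed matches to a given title."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": title, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        for item in resp.json()["message"]["items"]:
            # Crossref returns each title as a list; some records lack one.
            print((item.get("title") or ["(no title)"])[0], "|", item.get("DOI"))

    # If nothing close to the suspicious citation comes back, be suspicious.
    check_citation("The role of schema in the interpretation of ambiguous scenes")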

The Chat Gets Awkward

I pointed out to ChatGPT that the source doesn’t exist, and it responded:

I apologize, it seems I made a mistake. The study by Lee and Anderson (2015) that I mentioned in my previous answer does not seem to exist. As a language model, I strive to provide accurate and up-to-date information, but sometimes I may make errors or misunderstand the information given to me. I appreciate your feedback and will take this into account in my future responses.

Okay, I’m not happy about this. But it did fess up to its mistake, and promise to do better.

So, the conversation continued:

Me: Can you identify recent studies — within the last 20 years — that study the cognitive psychology of schema theory specifically for students learning academic topics?

ChatGPT: Yes, there have been recent studies that have investigated the role of schemas in students’ learning of academic topics.* Here are a few examples:

Kail, R. (2017). Schema theory and its implications for the design of learning environments. Review of Educational Research, 87(1), 93-135.

Well, that first one looks promising. But, I don’t want to be fooled twice, so I asked the obvious question:

Me: Does the Kail 2017 study really exist?

ChatGPT: Yes, the Kail (2017) study does exist. It was published in the Review of Educational Research, a highly respected peer-reviewed journal in the field of education.

Reader, the Kail study does NOT exist! ChatGPT just MADE UP A SECOND STUDY. And, having made it up, it then stuck by its (untrue) story.

I won’t go through all the iterations of this journey in detail.

But, yes, I asked about a third study. Yes, ChatGPT assured me it exists. No, it really doesn’t.

So, at this point, I’m kinda done. If an artificial intelligence simply conjures sources out of thin air, I myself won’t be relying on it anytime soon.

If only there were another option…

Another Option!

At about this time, my friend Rob McEntarffer asked me if I’d tried Elicit.org. (Important note: bad things might happen if you type “illicit” instead of “elicit.” I haven’t checked, but: be careful out there.)

Rob is wise in the ways of MBE, and so I tried it.

At this point, I’m as hopeful about Elicit.org as I am discouraged about ChatGPT.

Elicit asks users to frame fairly specific questions. It then looks for study abstracts that seem relevant to those questions, and reports back.

So, I asked:

“Can schema theory improve classroom instruction?”

I got a list of seven studies. All seven sounded exactly on topic. And — here’s some surprisingly good news — at least four of the studies exist!

I know because I downloaded them and printed them out. They are, in fact, in my “read this right now” pile.

Now, Elicit has a somewhat easier job than ChatGPT, because it’s answering narrow questions about research studies, not broad questions about ANYTHING.

But, I found it easy to use and — at least on my first test drive — reliable and helpful.

TL;DR

If you’re looking for online sources to find and evaluate research-based claims:

Add Elicit.org to scite.ai and connectedpapers.com as useful research resources.

Until it stops making stuff up, avoid ChatGPT.


* Notice, by the way, that ChatGPT got the possessive (“students’ learning”) correct in this answer. That’s an unusual rule — “use the possessive case before gerunds” — and a difficult apostrophe: plural possessive. So, it’s not great with sources, but it knows from grammar and punctuation!

“Seductive Details” meet “Retrieval Practice”: A Match Made in Cognitive Heaven
Andrew Watson

Here’s a common problem: your job today is to teach a boring topic. (You don’t think it’s boring, but your students always complain…)

What’s a teacher to do?

One plausible strategy: You might enliven this topic in some entertaining way.

You’ve got a funny video,

or a clever cartoon,

or a GREAT anecdote about a colleague’s misadventure.

Okay, so this video/cartoon/anecdote isn’t one of today’s learning objectives. BUT: it just might capture your students’ interest and help them pay attention.

However tempting, this strategy does create its own problems. We’ve got lots of research showing that these intriguing-but-off-topic details can get in the way of learning.

That is: students remember the seductive details (as they’re known in the research literature), but less of the actual content we want them to know.

Womp womp.

Some time ago, I wrote about a meta-analysis showing that — yup — seductive details ACTUALLY DO interfere with learning: especially for beginners, especially in shorter lessons.

What could we do to fix this problem? If we can’t use our anecdotes and cartoons, do we just have to bore our students?

“Right-Sized” Retrieval Practice

Here’s one approach we might try: right-sized retrieval practice.

What does “right-sized” mean? Here goes:

One retrieval practice strategy is a brain dump. The instructions sound something like this: “write down everything you remember about today’s grammar lesson.”

Another retrieval practice strategy calls for more specific questions: “what’s the difference between a gerund and a participle?” “How might a participle create a dangling modifier?”

A group of scholars in Germany studied this hypothesis:

If teachers use the brain dump approach, students will remember the seductive detail — and it will become a part of their long-term memory.

If, on the other hand, teachers ask specific questions, students will remember the important ideas of the lesson — and not consolidate memory of the seductive detail.

They ran a straightforward study, considering a topic close to every teacher’s heart: coffee.

100+ college students in Germany read a lengthy passage on coffee: information about the coffee plant, its harvesting, its preparation, and its processing.

Half of them read a version including fun-but-extraneous information. For instance: do you know how coffee was discovered?

Turns out: goat herders noticed that their goats ate the coffee beans and then did a kind of happy dance. Those herders wondered: could we get the same happy effects? Thus was born today’s coffee industry…

Remembering the GOAT

After reading these coffee passages — with or without seductive details — students answered retrieval practice questions.

Some got a “brain dump” prompt: “What do you remember about coffee?”

Others got the specific questions: “What harvesting methods do you remember, and how do they differ?”

So, what effect did those specific questions have on memory of seductive details one week later?

Sure enough, as the researchers had hypothesized, students who answered specific retrieval practice questions remembered MORE of the lesson’s meaningful content.

And, they remembered LESS (actually, NONE) of the seductive details. (Of course, the details get complicated, but this summary captures the main idea.)

BOOM.

So, what’s a classroom teacher to do?

As is so often the case, we should remember that researchers ISOLATE variables and teachers COMBINE variables.

We always have to think about many (many!) topics at once, while research typically tries to find out the importance of exactly one thing.

Putting all these ideas together, I’d recommend the following path:

If I have to teach a topic my students find dull, I can indeed include some seductive details (Ha ha! Goats!) to capture their interest — as long as I conclude that lesson with some highly specific retrieval practice questioning.

And, based on this earlier post on seductive details, this extra step will be especially important if the lesson is short, or the students are beginners with this topic.

TL;DR

Seductive details can capture students’ interest, but also distract them from the important topics of the lesson.

To counteract this problem, teachers should plan for retrieval practice including specific questions — not just a brain dump.


By the way: I first heard about this “retrieval practice vs. seductive details” study from Bradley Busch (Twitter: @BradleyKBusch) and Jade Pearce (Twitter: @PearceMrs). If you’re not familiar with their work, be sure to look them up!


Eitel, A., Endres, T., & Renkl, A. (2022). Specific questions during retrieval practice are better for texts containing seductive details. Applied Cognitive Psychology, 36(5), 996-1008.

Sundararajan, N., & Adesope, O. (2020). Keep it coherent: A meta-analysis of the seductive details effect. Educational Psychology Review, 32(3), 707-734.

Starting Class with “Prequestions”: Benefits, Problems, Solutions
Andrew Watson

We’ve known for many years now that retrieval practice works.


That is: after we have introduced students to a topic, we might REVIEW it with them the next day. However, they’ll remember it better if we ask them to try to RETRIEVE ideas and procedures about it.

As Dr. Pooja Agarwal and Patrice Bain write, we want students to “pull information out of their brains” (retrieve) not “put information back into their brains” (review).

Sadly, we know that students’ intuition contradicts this guidance. They really want to reread or review their notes, rather than ask themselves questions.

In this (very sad) study, for instance, Dr. Nate Kornell and Dr. Lisa Son found that students think review works better than retrieval even when they do better on quizzes following retrieval!

Yes, even the experience of learning more doesn’t persuade students that they learned more.

YIKES.

The More Things Change…

Let’s take this retrieval practice idea one step further.

I wrote above that answering questions helps students learn AFTER they have been introduced to a topic.

But: does answering questions help students learn a topic even BEFORE they study it?

On the one hand, this suggestion sounds very strange. Students can’t get these “prequestions” right, because they haven’t yet studied the topic.

On the other hand, we’ve got research showing that this strategy works!

In one of my favorite studies ever, Dr. Lindsay Richland found that “prequestions” help students learn. And, she then worked really hard to disprove her own findings. When she couldn’t explain away her conclusions, she finally accepted them. *

Similarly, a more recent study suggests that learning objectives framed as questions (“Where are mirror neurons located?”) help students learn more than LOs framed as statements (“You will learn where mirror neurons are located.”).

Although this prequestion strategy hasn’t been studied as much as retrieval practice, I do think it has enough research behind it to merit teachers’ respectful attention.

However, I do think this approach has a practical classroom problem…

Sustaining Motivation

For the most part, my high-school students are an amiable lot. If I ask them to do something … say, answer retrieval practice questions … they’ll give it a go.

And, they almost certainly want to get those questions right.

In a class discussion about Their Eyes Were Watching God, for instance, we might compare Janie’s three “husbands.” If I ask a student the following day to list some points of comparison from memory (retrieval practice!), they’ll feel that they ought to remember an answer or two.

Let’s try this logic with prequestioning.

Imagine I ask my students this prequestion: “Why do you think the novel’s protagonist will have the nickname ‘Alphabet’?”

My students will gamely try some answers.

However, I worry that – over time – they’ll start losing interest.

They almost never get these answers right.

And, there’s no “penalty” for getting them wrong, or reward for getting them right. (We don’t want students to focus on rewards and penalties, but schools typically work this way…)

From the student perspective, in other words, the whole prequestion strategy feels like an exercise in futility.

Why should they bother to think seriously about these un-answerable questions? They feel like wasted mental effort…

Two Solutions

First: I’ve tried in the past to solve this problem by using the strategy infrequently.

If my students don’t experience this quirky frustration too often, I hope, they won’t mind participating in this odd ritual.

Recent research, however, offers a second solution – a more honorable solution than mine.

In this study, by Dr. Steven Pan and Dr. Michelle Rivers, prequestions consistently helped students learn.

However, students didn’t really notice the benefit of prequestions – even when they learned more from answering them. (This result sounds a lot like the Kornell and Son study about retrieval practice; students don’t register the benefits they experience.)

So, Pan and Rivers tried several solutions. Specifically, they found benefits to a multi-step approach:

Step 1: have students learn some info with prequestions, and some without.

Step 2: give them a no-stakes quiz on the info.

Step 3: let them see that they remembered information better after prequestions.

Step 4: next time, ask students to recall how well they remembered after answering prequestions.

In other words: students need to experience the benefits and to have them repeatedly pointed out. This combination, probably, helps students believe that prequestions really do help.

This insight (probably?) helps with the motivation problem that has been troubling me in the past.

In other words: students who believe that prequestions will help are much likelier to participate in the curious mental exercise of trying to answer questions whose answer they can’t yet know.

TL;DR

When students answer questions about information they’re about to learn, they remember that information better – even if they get the answers wrong.

This strategy might be effective in the short term, but hamper motivation over time. After all, why should students even try to answer questions if they’re unlikely to know the answer?

To counteract this motivational problem, take students through Pan & Rivers’s procedure for them to experience and remember the benefits that prequestions provide.

We don’t have LOTS of research on this strategy, but we do have enough to make it a plausible approach.


* Sadly, the “prequestion” strategy has frequently been called “pretesting.” Of course, the presence of the stem “test” both confuses the strategy (there’s no testing!) and disinclines people from participating (who wants more testing?).

So, let me emphasize: “prequestions” are simply questions. They’re not a test.

BTW: I’ve recently seen the word “pretrieval” as a way to avoid the “pretest” moniker. You might like it better than “prequestions.”


Agarwal, P. K., & Bain, P. M. (2019). Powerful teaching: Unleash the science of learning. John Wiley & Sons.

Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493-501.

Pan, S. C., & Rivers, M. L. (2023). Metacognitive awareness of the pretesting effect improves with self-regulation support. Memory & Cognition, 1-20.

Richland, L. E., Kornell, N., & Kao, L. S. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243.

Sana, F., Forrin, N. D., Sharma, M., Dubljevic, T., Ho, P., Jalil, E., & Kim, J. A. (2020). Optimizing the efficacy of learning objectives through pretests. CBE—Life Sciences Education, 19(3), ar43.

The Trad/Prog Debate Gets Weird
Andrew Watson

Few debates rage hotter in education circles than that between educational progressives and educational traditionalists. (I’m emphasizing “educational” in these phrases, because they don’t necessarily align with political trad/prog divides. This blog doesn’t do politics.)

One recent summary — relying heavily on Dewey — describes the debate this way:

Educational traditionalists “argue that teachers should carefully select and sequence the best knowledge from their subject areas and then deliver it directly to the whole class, while maintaining order.”


Whereas

Educational progressives “argue that teachers should focus on facilitating individualised learning experiences in which pupils can explore their natural inclinations, thus nurturing their interests and developing general thinking skills.”

Of course, the debate gets MUCH more complicated than these summaries, but it’s handy to have a quick definition. According to this summary article cited above, roughly 2/3 of teachers side with one or the other of these two positions.

But: do we have research favoring one approach or the other?

Problems and Solutions

Because both educational philosophies encompass substantial sets of teaching ideas — everything from pedagogy to curriculum to motivation to metacognition — they resist efforts to evaluate them in their entirety.

We might run a study that shows … say … this “ed prog” unit motivates 6th graders more strongly than the analogous “ed trad” unit. However, such a study doesn’t necessarily mean that the philosophy itself works for all students, all subjects, all cultures, and all definitions of “works.”

So, what to do?

Education scholars Dr. Sam Sims and Dr. John Jerrim have found an ENORMOUS data set from Germany that just might make this possible.

It shows how much academic progress several thousand German students made over several years.

It measures their expressed motivation for studying German and math.

Heck, it even tracks their metacognitive facility.

And,

It asks questions about their teachers’ expressed place on the ed prog/ed trad continuum.

It also asks questions about the teachers’ educational practices (PBL, lecture), to see if they align with those expressed beliefs.

WOW.

If we crunch the numbers just right, we should be able to answer several questions:

Do students learn more in one or the other of these approaches?

Does their motivation vary depending on the approach?

Is one approach or another better for more or less successful students?

So, what do Sims and Jerrim discover when they run their equations?

Not What I Was Expecting

The blog title promises something “weird.” So, here goes…

Because the ed prog/ed trad debate has an ethical valence to it, it often prompts dramatic calls:

“This way is the right way, so our profession really must do it right! After all, anything else is wrong.”

This study — weirdly — comes to an astonishingly bland conclusion.

That is:

Which philosophy helped students learn more?

Honestly, both worked equally well.

But wait: which one helped struggling learners more?

Meh. Both worked equally well.

But surely one fostered student motivation more than the other!

Well, kinda. Educationally traditional teaching practices resulted in slightly higher levels of motivation in German. But, not in math. And, not much even in German.

Um, metacognition?

Again, no big difference — although a slight benefit for team ed trad.

So, this ferocious debate we’ve been having for decades? Maybe we’ve been arguing about the wrong topics…

Trying to Process

Honestly, I’m still trying to wrap my head around this research. (By the way, I heard about it from Peps Mccrea. If you haven’t signed up for his “Research Snacks,” do so NOW. And, you can hear his webinar on motivation March 18th.)

Here’s a random grab-bag of my early thoughts:

First, I don’t doubt that LOTS of people will simply reject these findings. One easy way to do so: they’re published not in a peer reviewed journal, but as a working paper.

Debates about the relative merits of peer review rage in the scholarly community. Readers who typically reject working papers for their lack of formal peer review might simply set this one aside.

Second, if these two approaches basically work equally well, then we shouldn’t focus on choosing one or the other: we should focus on doing both as well as we possibly can.

My own view is that cognitive science will help us do so. For instance: no matter my educational philosophical stance, my students will benefit if I understand how working memory works.

Third, the finding about motivation seems especially surprising — or at least provocative.

Champions of educationally progressive teaching typically trumpet motivation as one of its obvious benefits. (Hence the contemptuous phrase “drill and kill” to mock direct instruction.)

This research suggests that educationally traditional methods produce slightly higher levels of motivation (in one of the two subjects they measured).

But wait: if students in ed trad classes felt more motivation but didn’t learn more, something very strange is going on.

Perhaps (dare I write it?), motivation doesn’t matter for learning? (How can that possibly be?)

Perhaps (dare I write it?), ed trad methods produce slightly more motivation but slightly less learning — and those two effects balance each other out? (All my ed trad friends just howled in rage.)

Or perhaps there’s some other obvious explanation I’m missing?

Currently, I’m stumped.

Next Steps

Me, I’m going to watch the responses to Sims and Jerrim’s working paper, and see what additional wisdom shakes out.

If you’ve got additional or alternative perspectives, I hope you’ll share them in the comments.

 

The Hidden Lives of Learners
Andrew Watson

Many times over the last several years, I’ve heard enthusiastic reviews of a seemingly magical book called The Hidden Lives of Learners, by Graham Nuthall.


Here’s the magic: Nuthall’s frankly astonishing research method.

Working in New Zealand classrooms in the 1980s, he put mics on all students and teachers. And, he had cameras in the classroom.

He and his team also broke down the teachers’ unit plans into granular learning goals. For instance, a unit on Antarctica might have 80 specific facts or concepts that the students should learn.

Finally, Nuthall’s team tested students both before and after these units.

Given this quite extraordinary data set, Team Nuthall could look at remarkably specific questions:

How much information about each topic did students already know before the unit began?

How much did they learn?

What, very specifically, did each student do and say to learn each specific new concept?

You can see why readers have responded so strongly to Nuthall’s method.

So, based on all his data, what did Nuthall conclude?

The Magic Number

Regular blog readers already know about the Spacing Effect.

That is: students learn more when they spread practice out than when they do the same amount of practice all at once.

In my experience, this research finding started getting broader notice in … say … 2015 or so. (I completed my grad program in 2012, and I don’t remember the spacing effect getting much — or any — attention at that time.)

Well, Nuthall’s research led him to a very similar conclusion more than a decade before.

That is: in Hidden Lives, Nuthall writes…

We discovered that a student needed to encounter, on at least three different occasions, the complete set of the information she or he needed to understand a concept.

If the information was incomplete, or not experienced on three different occasions, the student did not learn the concept. (63)

Similar to research into the spacing effect, Nuthall’s research shows that students must devote brain space to an idea several times — spread out over more than one class meeting — to consolidate that idea in long-term memory.

Later in Hidden Lives (p. 126), Nuthall suggests that students should “encounter the complete set of information” on four occasions — not three.

For me, the precise number (is it 4? is it 3?) is less important than the broader concept: teachers should build curricula that ensure students delve into an idea several times. One or two encounters can’t create enough momentum to change memory systems.
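
Auditing a unit plan for this idea can be surprisingly simple. Here’s a minimal sketch in Python; the unit plan and concept names are invented, and Nuthall’s team of course worked from transcripts, not scripts:

    from collections import Counter

    # Hypothetical unit plan: the set of concepts each lesson touches.
    lessons = [
        {"pack ice", "food web", "krill"},
        {"food web", "krill", "penguin species"},
        {"pack ice", "food web"},
        {"penguin species", "krill"},
    ]

    MIN_ENCOUNTERS = 3  # Nuthall's threshold (he later suggests four)

    counts = Counter(concept for lesson in lessons for concept in lesson)
    for concept, n in sorted(counts.items()):
        note = "" if n >= MIN_ENCOUNTERS else "  <- schedule more encounters"
        print(f"{concept}: {n} lesson(s){note}")

Running this flags “pack ice” and “penguin species” at two encounters each: exactly the kind of concept that, on Nuthall’s account, students would fail to learn.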

I think that Nuthall’s method provides substantial support for translating the spacing effect research into classroom practice. Both psychology research AND Nuthall’s deep classroom investigation arrive independently at substantially similar ideas.

Changing the Focus

Most research in this field focuses on what teachers do. Nuthall — wisely — insists that we focus on what students do.

His methodology — all those microphones, all those transcripts — helps him recognize all those “encounters” with ideas. And, crucially, students often “encounter” ideas in their conversations and projects with other students.

This observation leads to several important insights.

First, students often have prior knowledge about a topic.

When that prior knowledge is incorrect, it BOTH hinders their understanding of new ideas AND hampers their classmates’ efforts to learn correct ideas.

For this reason — I’m extrapolating from Nuthall here — teachers really should focus on students’ prior misconceptions.

Unless we know what our students (wrongly) think they know, their misinformation will substantially muddle the learning process.

Second, building classroom culture matters.

This seemingly obvious statement comes from one of Nuthall’s most alarming findings (well: alarming to me).

The students in these classes were AMAZINGLY unkind to one another. Casual insults — even racial epithets — made up a regular part of classroom dialogue.

Nuthall proposes two solutions to this problem.

Option A: “Teachers therefore need to know who is in which friendship groups, who wants to be liked by whom, who has status, who is rejected.

They also need to know the kinds of beliefs and culture — about music, clothes, curriculum, learning, co-operating, and the like — that hold students’ relationships together.” (p. 37)

While I understand the logic behind this statement, it strikes me as frankly impossible. As I think over my various sophomore and senior English classes, it’s simply inconceivable to me that I would know — with any level of consistent detail — what the exact relationships are among all these people.

I might have a dim idea that this student is especially popular, or that those two are dating, or that some song or another has everyone’s attention. But for that knowledge to be broad and current: no way.

In fact, I think it would be inappropriate for me to know such things. Inquiring too closely into students’ personal and romantic lives does not strike me as healthy or appropriate.

A Better Way?

Happily, Nuthall proposes Option B:

“Some teachers have tried to deal with this problem [peer-to-peer unkindness] by creating an alternative culture within their classrooms — a culture of mutual respect and cooperation, a culture in which everyone is expected to succeed in some significant aspect of classroom activities.” (p. 37)

Now, this approach seems healthy, appropriate, and necessary.

Yes, I want my students to learn about Macbeth and topic sentences, but I also insist that they know how to treat one another well.

Nuthall’s findings about casual peer cruelty have reminded me how much happens in my classroom that I can’t see (“hidden lives of learners”), and how important it is that I solve those invisible problems.

The Very Big Picture

One final point stood out for me in Nuthall’s book, although my interpretation of it might not persuade you. Here’s the story…

Because Nuthall measured how much students already knew, and what they did to learn new information, he could track important patterns. One pattern went like this:

Students who didn’t know much about the topic learned most from the teacher.

Students who already knew a lot learned most by working on their own, or with peers. (pp. 86-7)

I think this finding might help us see past a controversial binary in the field of education.

Current schooling debates have encouraged us to pick sides. Either we believe in direct instruction, or we believe in project pedagogies. (This sentence oversimplifies a very complex debate, but is a useful shorthand at this moment.)

Nuthall’s findings (and my own reading of schema theory) suggest an alternative viewpoint. Perhaps

Students who don’t know much about a topic (a.k.a. “novices”) learn most from the teacher (a.k.a. “direct instruction”), whereas

Students who already know a lot (a.k.a. “relative experts”) learn most by working on their own, or with peers (a.k.a. “project pedagogies”).

That is: before we say whether direct instruction or independent investigation is better for a student, we have to know where the student lies on the novice/expert continuum.

Novices need lots of guidance; relative experts benefit from more open-ended, self-driven exploration.

To be clear: I’ve been quietly advocating for this view for a few years now. It seems to me — although I could be wrong — that Nuthall’s data roughly support it.

Read This Book If…

…you’re intrigued by the possibility of extremely granular classroom research, focusing directly on the students’ experience,

…you want to see how the spacing effect plays out in the classroom,

…you want to know more about how students actually treat each other in day-to-day interactions, or

…you want to hear an inventive and thoughtful researcher think aloud about his findings.

I don’t agree with everything that Nuthall has written. For instance, his account of working memory is not at all in line with current models of this cognitive function.

But, gosh: he and his book have given me lots to think about, and new ways to think about old ideas.

To 600, and Beyond…
Andrew Watson

WordPress informs me that this is the 601st article I’ve posted on this blog. That’s a few hundred thousand words since 2015 or so.

I’ve been honored over the years to meet so many of you who read this blog, and who think aloud with me about its topics. (If you see me at a Learning and the Brain conference, I hope you’ll introduce yourself!)

And, I’ve thoroughly enjoyed the opportunities I’ve had to chat with researchers and other scholars as I try to understand their arguments.

As I look back over these years, some emerging themes stand out to me:

A Man, a Plan

When I attended my first Learning and the Brain conference in 2008, I knew what was going to happen:

Step 1: the “brain researchers” would tell me what to do.

Step 2: I would do it.

I would, thus, be practicing “brain-based teaching.” My students would learn SO MUCH MORE than they had in the past.

How hard could it be to follow researchers’ instructions? It turns out: it’s extremely hard simply to “follow researchers’ instructions.”

In the years since that conference, I’ve realized — over and over — how little I knew about what I didn’t know.

Surprise #1: One Size Does Not Fit All

The first problem with my 2-step plan: I almost certainly SHOULDN’T DO what the researchers did.

Why?

Let’s say researchers studying the spacing effect asked college students to study three math topics.

Those students did five practice problems once a week for five weeks.

Voila: those students learned more than students who just did all 25 problems at once.

So, I should have my students do five practice problems once a week for five weeks, right?

Hmmm.

I’m a high school teacher. I teach English. I might not teach only three topics at a time. I might have more than 25 practice problems.

So, I can’t simply use the researchers’ formula for my own teaching plan.

Instead of doing what the researchers did, I should think the way the researchers thought.

The researchers’ successes resulted — in part — from the goodness of fit between their method, their students, and their topic.

To get those same successes in my classroom, I have to adapt their ideas to my particular context.

And: all teachers have to do exactly that kind of adapting.

In Step 2 above, I can’t just do what the researchers did. I always have to tailor their work to my teaching world.
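
To show what that tailoring looks like in practice, here’s a minimal sketch in Python (a toy helper of my own invention, not anything from the research). It turns a massed problem set into a spaced schedule, and every parameter is exactly the kind of thing a teacher would change for her own context:

    from datetime import date, timedelta

    def spaced_schedule(problems, sessions, start, gap_days=7):
        """Split a problem set into evenly spaced practice sessions."""
        per_session = len(problems) // sessions  # assumes an even split
        plan = []
        for i in range(sessions):
            chunk = problems[i * per_session : (i + 1) * per_session]
            plan.append((start + timedelta(days=gap_days * i), chunk))
        return plan

    # The lab version: 25 problems, 5 sessions, one week apart.
    # In my English classroom, I'd change every one of these numbers.
    for day, chunk in spaced_schedule(list(range(1, 26)), 5, date(2025, 9, 1)):
        print(day, chunk)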

Surprise #2: People Are COMPLICATED

Back in 2008, I assumed that “brain-research” would consistently show the same correct answers.

If I knew a correct answer, I could simply do the correct thing.

Alas, it turns out that research studies don’t always arrive at the same answer — because PEOPLE ARE COMPLICATED.

So: is focusing on Growth Mindset a good idea?

Although Mindset Theory has been VERY popular, it also generates lots of controversy.

Should schools require mindful meditation?

Ditto.

How much classroom decoration is too much?

If you look at the comments on this post, you’ll see that many teachers REALLY don’t like research-based answers to that question.

In other words, I can’t just “do what the research tells me to do,” because research itself comes up with contradictory (and unpopular) answers.

Surprise #3: “Brain Research” Isn’t (Exactly) One Thing

Throughout this post, I’ve been putting the words “brain research” in quotation marks.

Why?

Well, I was surprised to discover that researchers study the “brain” in at least two different ways.

If you really like biology, and want to study the “brain” as a physical object, you’ll go into a field called “neuroscience.”

You’ll look at neurons and neurotransmitters and glial cells and fMRI and EEG and myelination and the nucleus accumbens.

You’ll look at cells under microscopes, and prod them with pointy things while wearing gloves.

BUT

If you really like thoughts and emotions, and want to study the “brain” according to its mental processes, you’ll go into a field called “psychology.”

You’ll look at attention and memory and stress and learning and perception.

Notice: psychologists don’t look at attention under a (literal) microscope. They can’t pick up “stress” the way they can pick up a brain or an amygdala. They don’t need to wear gloves. Nothing damply biological is happening.

Yes, these days neuroscience and psychology are blurring together. We have people interested in “neuro-psychology”: the biological underpinnings of all those mental processes — memory, curiosity, generosity.

But that blurring is very recent — a couple of decades at most.

And most people in those fields don’t blur. They stick to one team or the other. (For most of the 20th century, these two fields eyed each other with disapproval and suspicion.)

Surprise #4: Psychology First

I don’t like the sentences I’m about to type, but I do think they’re true.

Back in 2008, when I first got into this field, I was REALLY interested in the neuroscience.

The very first question I asked at a Learning and the Brain conference was “where does attention happen in the brain?”

But, the more time I spend in this field, the more I think that teachers need information from psychology more than from neuroscience.

Yes, the neuro is fascinating. But, it almost never helps me teach better.

For instance:

I don’t need to know where long-term memories are stored in the physical brain. (That’s a question that neuroscientists try to answer.)

I do need to know what teaching strategies help students form new long-term memories. (That’s a question that psychologists try to answer.)

I focus on this topic — the relative importance of psychology for teachers — because so many people use neuroscience to boss teachers around.

Heck, I recently wrote a post about the bizarre claim that “neurotransmitters are happiness chemicals”: a claim that uses neuroscience to tell teachers what to do.

I myself think that anyone who wants to tell teachers “do this new thing!” should have tested that new thing directly with students. We call that research “psychology.”

TL;DR

Here’s what I would tell my 2008 self:

“This field you’re entering will help you and your students SO MUCH!

And, you should know:

You’ll always be translating research findings to your own classroom.

Because researchers disagree with one another (and teachers sometimes disagree with researchers), you’ll always sort through controversy before you know what to do.

Neuroscience research is fascinating (fMRI is SO COOL), but psychology research will provide specific and practical suggestions to improve your teaching and help your students learn.”

I hope this blog has helped make some of those ideas clear and interesting over the years. And: I’m looking forward to exploring them with you even more…

My Detective Adventure: “VR Will Transform Education”
Andrew Watson
Andrew Watson

A friend recently sent me a link to an article with a click-baity headline: something like “Virtual Reality Will Change Education Forever.”

Man wearing Virtual Reality goggles, making gestures in the air

Her pithy comment: “This is obviously nonsense.” (It’s possible she used a spicier word than ‘nonsense.’)

On the one hand, I’m skeptical that ANYTHING will change education forever. Heck, if Covid didn’t transform education, who knows what will.

More specifically, ed-tech claims about “transforming education” have been around for a long time. Their track record doesn’t dazzle. (Smart boards, anyone?)

On the other hand, I always like to find research that challenges my long-held beliefs. After all, if I can’t learn from people who disagree with me, who can I learn from?

So, I followed my usual process.

In essence, I switched into Detective Mode, and started asking lots of questions.

If I ask the right questions, I thought, I’ll get a much clearer picture of potential educational benefits of VR.

Act I: The Investigation Begins

When I reviewed the article my friend sent, I noticed a troubling gap: the article didn’t link to underlying research.

As I’ve written in the past, this absence raises a red flag. If the article champions “research-based innovation,” why not link to the research?

So, I asked my first detective question. I emailed the author of the article and asked to see the research.

How simple is that?

Obviously, any resistance to this request — “sorry, we can’t share that at this moment” — would underline my friend’s skeptical verdict: “nonsense.”

However, the author responded immediately with a link to a research summary.

A promising development…

The Plot Thickens

This research summary showed real promise.

In brief:

Some college students in an introductory Biology course followed the typical path — readings, lectures, labs. (That’s the “control group.”)

Other students in the same course followed an alternative path: readings, lectures, supplementary Virtual Reality experience, alternative labs based on the VR experience.

When researchers looked at all sorts of results, they found that students on the alternative VR path did better.

That is: not only did the students enjoy the VR experiences; not only did they engage more with the material; they (on average) learned more.

However — and this is a BIG however — this research didn’t appear to have been published.

In fact, when I asked that direct question, the article author confirmed that the research hadn’t yet been published in a peer-reviewed journal.

Now, the topic of peer review creates LOTS of controversy. The peer-review system has MANY troubling flaws.

However, that system probably reduces the amount of deceptive nonsense that gets published.

I almost never blog about research that hasn’t been peer reviewed, and so I thought my detecting had reached its logical end. The VR claim might not be ‘nonsense,’ but it didn’t yet have enough published evidence to support it.

And then, an AMAZING thing happened: the lead researcher emailed me to say she would be happy to talk with me about the study.

Over the years, I have occasionally reached out to researchers to be sure I understand their arguments.

But no researcher has EVER straight-up volunteered for such a meeting. And I mean: EVER.

The Payoff

Honestly, I’d love to transcribe my conversation with Dr. Annie Hale and Lisa Fletcher (“Chief of Realm 4”), both at Arizona State University, because it was both fascinating and inspiring.

Because you’re busy, I will instead boil it down to three key points:

First:

Hale and Fletcher have done — and continue to do — incredibly scrupulous research.

For instance, in the description above, I put the words “control group” in quotation marks.

I did so because of Hale and Fletcher’s insistence. The two groups of Biology students had broadly similar academic experiences.

But the research paradigm required enough differences to make the words “control group” technically inappropriate.

Hale and Fletcher insisted on this precision throughout our discussion. For instance, they regularly pointed out that a particular calculation suggested a positive result, but didn’t reach statistical significance.

In other words, they highlighted both the strengths and weaknesses of their own argument.

This habit, in my view, makes them MUCH more reliable guides in this field.
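To see why that caution matters, here’s a toy illustration in Python (the numbers are invented by me, not drawn from their study): two groups can differ by a couple of points on average and still fail a conventional significance test.

```python
# A toy illustration (invented numbers, NOT Hale and Fletcher's data):
# a difference in group means can look "positive" yet fail to reach
# statistical significance once we account for variability.

from scipy import stats

typical_path = [71, 68, 75, 70, 73, 69, 72, 74, 67, 70]  # hypothetical scores
vr_path = [74, 70, 77, 72, 75, 69, 73, 78, 71, 72]       # hypothetical scores

mean_diff = sum(vr_path) / len(vr_path) - sum(typical_path) / len(typical_path)
t_stat, p_value = stats.ttest_ind(vr_path, typical_path)

print(f"Mean difference: {mean_diff:.1f} points in favor of the VR path")
print(f"p-value: {p_value:.3f}")  # here, p is roughly 0.09: promising,
                                  # but above the conventional 0.05 threshold
```

Careful researchers report that second number alongside the first; careless ones report only the first.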

Second:

Here’s a shocker: Hale and Fletcher do not claim that virtual reality will transform education.

No, really, they don’t.

The headline of the article my friend sent me made that claim, but the researchers themselves don’t.

Instead, they make a very different claim. The alternative Biology path included at least three big changes from the typical path:

Change #1: students had the VR experience (and their lab was based on that experience)

Change #2: the key underlying biology concepts had been translated into stories. For instance, this “narratively-driven virtual reality” includes an imaginary species called the Astelar. (Some of the students got QUITE protective of these imaginary creatures.)

Change #3: the TAs in these alternative path classes got special training, inspired by Doug Lemov’s Teach Like a Champion.

We can’t know — and, Hale and Fletcher don’t say they know — which of these three parts made the biggest difference.

We can tentatively suspect that these three elements working together produced all those learning benefits. And, Hale and Fletcher are planning lots of further research to confirm this tentative belief.

But, they’re not trying to get VR goggles on every forehead.

Key Point #3

Here’s one of my mantras:

Researchers isolate variables. Teachers combine variables.

In other words: research — as much as possible — looks at the effect of just one thing.

For instance: “mid-lecture aerobic movement improves learning in college students.”

However, teachers juggle hundreds of variables every second. All those isolated variables studied by researchers might not provide me with useful guidance.

For instance: if I teach in a business school, my formally-dressed students might not appreciate my insistence that they do jumping jacks in the middle of the lecture hall.

My particular combination of variables doesn’t helpfully align with that isolated exercise variable.

Here’s my point: Hale and Fletcher seem to be changing the research half of this paradigm.

In their research, notice that they aren’t isolating variables. They are, instead, looking at combinations of variables.

VR + stories + Lemov training → more learning

In fact, if I understand their argument correctly, they don’t really think that isolating variables can produce the most useful results — at least not in education research.

After all (and here I’m adding my own perspective), if teachers combine variables, shouldn’t research also look at combinations?

An Early Verdict

I set out on this detective adventure feeling quite skeptical. Both the initial claim (“transform education!”) and the absence of links made me all-but-certain that the strong claim would implode. (Example here.)

However, by persistently asking reasonable detective questions, I’ve arrived at a very different place:

VR + [concepts as stories] + [Lemov-inspired TA training] just might produce big learning gains, at least for some students.

And — crucially — a thoughtful, precise, imaginative, and cautious group of scholars is exploring this possibility in detail.

As I said back at the beginning, I’ve always got something to learn.


This post was edited on April 7, 2023 to correct Lisa Fletcher’s title.