Getting the Details Just Right: “Pre-questions”
Andrew Watson

Teachers, of course, ask students questions. ALL THE TIME with the questions.

We ask questions DURING a lesson in order to “check for understanding.”

We encourage students to ask themselves questions AFTER class, because “retrieval practice” promotes learning.

And, we ask questions BEFORE a unit — for at least two very good reasons.

In the first place, we need to know what our students already know. If we don’t evaluate their prior knowledge, we struggle to build on that prior knowledge in a coherent way.


In the second place, we have increasingly strong research about the benefits of “prequestions.”

Unlike “checks for understanding” and “retrieval practice,” “prequestions” come before the unit.

And unlike “measuring prior knowledge,” “prequestions” deliberately focus on facts and procedures that students don’t yet know.

So: if I’m teaching a unit on Their Eyes Were Watching God, I might ask my students:

“What is the definition of a ‘bildungsroman’?”

“Describe the friendship between Langston Hughes and Countee Cullen.”

“What does hair often symbolize in literature?”

Truth to tell, it’s quite unlikely that my 10th grade students know the answers to these questions. So: those are prequestions — not checks for understanding, or retrieval practice, or confirmations of prior knowledge.

Here’s the headline: we have reason to believe that “prequestions” — used correctly — help students learn information.

Here’s the story…

Hot Off the Presses

Long-time readers know that Dr. Elizabeth Ligon Bjork has done LOTS of essential work in the field of long-term memory formation and “desirable difficulties.”

And, you know my admiration of Dr. Nick Soderstrom, whose distinction between “short-term performance” and “long-term learning” should inform all teachers’ discussions.

So: when the two work together, they have my attention!

In this case, they JUST published a study on the topic of “prequestions.”

And, this study took place in actual college classrooms — not simply in a psychology lab. For that reason, its conclusions have a better chance of applying to the real-world work that other teachers do in classrooms.

In this research, students answered prequestions at the beginning of a few lectures. The subsequent lectures then provided answers to those questions. (By the way: students got only about 1/3 of those prequestions right — so for the most part they didn’t know the answers.)

On the final exam, students had to answer questions that …

… DIRECTLY related to those prequestions, or

… INDIRECTLY related to those prequestions, or

… were NOT related to the prequestions.

Sure enough, they did better on both directly and indirectly related questions, compared to the unrelated questions.

In brief: prequestions really did help college students learn in the classroom.

So simple! So effective!

So, Those “Details”?

My title promises that we need to “get the details just right.” In this case, as in so many others, I have thoughts. (Important note: at this point, I’m switching from reporting on research to offering my experience-based opinions.)

First Thought

Soderstrom and Bjork specifically write that prequestions helped because students took them seriously.

Here’s my concern: while college students may have the metacognitive perspective to take prequestions seriously, I do worry that younger students might not.

That is: once younger students realize that their answers to these questions don’t really matter, they might not take them as seriously as their college-age selves would.

The structure of prequestions, in fact, might discourage seriousness. Students rarely know the answers to these questions — that’s the point. Why would students attend seriously to questions they can’t possibly answer?

This potential problem leads to two tentative suggestions:

TELL students how and why prequestions might help, and

Use prequestions only RARELY.

After all, the more often that students must answer un-answerable questions, the less likely they are to give them appropriate mental effort.

My hope is: students who encounter prequestions only rarely won’t get cynical about trying to answer them.

Second Thought

If we use prequestions only rarely, are some times better than others?

My instincts are: yes.

Simply put: use prequestions at the beginning of a unit to highlight the most important concepts.

If we can get the benefit of this technique only rarely, then use it at the most important times.

This advice comes from common sense, not from research — but common sense isn’t entirely forbidden on this blog.

Third Thought

Not all prequestions are created equal.

If a prequestion forces a student to think, that’s a good prequestion — even if the student gets the answer wrong.

However, if a prequestion activates a prior misconception, that question will actively interfere with learning.

For that reason, we should follow this rule:

Ask prequestions when students don’t know the answer — and don’t wrongly believe that they do.

For instance:

If I ask my students “Which falls faster: a 10-pound bowling ball or a 15-pound bowling ball?” they almost certainly …

… don’t know the correct answer (that’s good), but

… wrongly think that they DO know the correct answer (that’s bad).

So: that prequestion would activate a prior misconception — and make learning harder.

On the other hand, those prequestions I asked at the top of this post (definition of “bildungsroman”) almost certainly don’t activate prior misconceptions.

A Secret Unveiled; A Plea for Teamwork

I confess I have one deep frustration with this research pool.

Almost all teachers — and all students — hate tests.

So: if I name something “the testing effect,” teachers and students will HATE it — even if it’s beneficial. (Hint: the “testing effect” is just another way of talking about “retrieval practice.”)

And, if I name something “pretesting,” teachers and students will HATE it — even if it’s beneficial. Pretesting sounds like a test, no?

Sure enough, researchers have named a beneficial teaching technique “pretesting,” thereby ensuring confusion, and discouraging its use.

But — of course — “pretesting” simply means “asking questions on a topic before you’ve taught the material.” It’s NOT A TEST. It’s just a set of QUESTIONS.

So, I’ve been writing about “prequestions,” although everyone else in this field calls them “pretests.”

I hope you’ll join me in this virtuous rebranding.

TL;DR

Prequestions (aka “pretesting”) help students learn new material — and not just the information in the questions themselves.

Because the technique works if students take it seriously, I suggest …

… using it rarely,

… using it for important material, and

… asking prequestions that DON’T activate prior misconceptions.


Soderstrom, N. C., & Bjork, E. L. (2023). Pretesting enhances learning in the classroom. Educational Psychology Review, 35(3), 88.

Getting the Details Just Right: Highlighting
Andrew Watson

Because the school year starts right now, I’m using this month’s blog posts to give direct classroom guidance.


Last week, I wrote about a meta-analysis showing that — yup — retrieval practice is awesome.

Teachers should be aware of a few details (e.g.: “brain dumps” are among the least effective kinds of retrieval practice).

But for the most part, asking students to retrieve stuff (facts, processes, etc.) helps them remember that stuff better — and to transfer their understanding to new situations.

This week, let’s talk about another strategy that teachers and students might use: highlighting.

We know that retrieval practice is awesome. Is highlighting equally awesome? More or less so? When and how should students highlight?

Start Here

For several years, the go-to answer to this question has come from this research summary, by John Dunlosky, Dan Willingham, and others.

Their rather bleak conclusion:

we rate highlighting and underlining as having low utility. In most situations that have been examined and with most participants, highlighting does little to boost performance.

It may help when students have the knowledge needed to highlight more effectively, or when texts are difficult, but it may actually hurt performance on higher level tasks that require inference making. (emphasis added)

They reached this conclusion 10 years ago. Do we know anything more today?

Who Times Two

Last year, Ponce, Mayer & Méndez published a meta-analysis looking at the potential benefits of highlighting.

They found two key variables not included in the earlier research summary.

First: the students’ age/grade.

Second: the person doing the highlighting.

That is: they found that …

If the INSTRUCTOR does the highlighting, doing so benefits college students AND K-12 students, but

If the STUDENT does the highlighting, doing so benefits college students but NOT K-12 students.

These findings make rough-n-ready sense.

We teachers know what the important ideas are. For that reason, our highlighting helps students (on average) focus on those important ideas — so they learn and understand more.

Students — especially younger students — probably don’t know what the important ideas are. For that reason, their own highlighting might not accentuate important ideas (on average), and so they don’t benefit from highlighting.

When I ask a student why he highlighted a passage, I sometimes get a version of this answer: “Honestly, I realized I hadn’t highlighted anything in a few pages, so I thought I really needed to find something that sounded important.”

Little wonder, then, that my 10th graders don’t benefit from highlighting.

Classroom Specifics

Of course, this meta-analysis also arrived at other useful conclusions.

This first one came to me as something of a shock: although highlighting does benefit some students, reviewing the highlights doesn’t.

The researchers write:

“on average, reviewing highlighted text previously highlighted by learners did not improve learning significantly more than students who only read or studied the text.”

I infer from this finding that highlighting helps (if at all) because it prompts students to FOCUS ON and THINK ABOUT information the first time they read it.

It does not, however, help students when they return to the highlighted passage later.

That’s useful to know!

Another conclusion is less of a surprise: training helps.

That is: we can help students (yes, even K-12 students) highlight more effectively.

According to the meta-analysis, we can…

… show students examples of good and bad highlighting,

… help them distinguish between main ideas and secondary ones, and

… emphasize that too much highlighting reduces the benefit.

For example:

I myself don’t ask my English students to highlight much. But, I do ask them to note very specific parts of the text.

When we read Macbeth, I ask them to circle/highlight every time they see the words “do,” “done,” or “deed.” (Believe it or not, those words show an important pattern in the play.)

When we read Their Eyes Were Watching God, they highlight various symbols: hair, gates/fences, mules, trees.

I hope that these very modest highlights help students spot patterns they otherwise would have missed — without distracting them too much from other important parts of the story.

In other words: used judiciously and narrowly, highlighting can provide some benefit.

TL;DR

This recent meta-analysis gives us helpful specifics on how best to use highlighting.

Ideally, we teachers do the highlighting ourselves, especially in K-12 classrooms; we teach students how to highlight (not too much!); and we don’t encourage them to review their highlights.

In fact, as we saw in last week’s post, retrieval practice should replace “review the highlights” as a way to review and study.


Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Ponce, H. R., Mayer, R. E., & Méndez, E. E. (2022). Effects of learner-generated highlighting and instructor-provided highlighting on learning from text: A meta-analysis. Educational Psychology Review, 34(2), 989-1024.

Getting the Details Just Right: Retrieval Practice
Andrew Watson

As we gear up for the start of a new school year, we’re probably hearing two words over and over: retrieval practice.

That is: students have two basic options when they go back over the facts, concepts, and procedures they’ve learned.

Option 1: they could review it; that is, reread a passage, or rewatch a video, or review their notes.

Option 2: they could retrieve it; that is, ask themselves what they remember about a passage, a video, or a page of notes.

Well, the research verdict is clear: lots of research shows that OPTION 2 is the winner. The more that students practice by retrieving, the better they remember and apply their learning in the long term.

This clear verdict, however, raises lots of questions.

How, exactly, should we use retrieval practice in classrooms?

Does it work in all disciplines and all grades?

Is its effectiveness different for boys and girls?

Does retrieval practice help students remember material that they didn’t practice?

Do multiple choice questions count as retrieval practice?

And so forth.

Given that we have, literally, HUNDREDS of studies looking at these questions, we teachers would like someone to sort through all these sub-questions and give us clear answers.


Happily, a research team recently produced just such a meta-analysis. They looked at 222 studies including more than 48,000 students, and asked nineteen specific questions.

These numbers are enormous.

Studies often get published with a few dozen participants – which is to say, a lot less than 48,000.

Researchers often ask 2 or 3 questions – or even 1. I don’t recall ever seeing a study or meta-analysis considering nineteen questions.

As a result, we’ve got a lot to learn from this meta-analysis, and can feel more confidence than usual in its conclusions.

The Big Picture

For obvious reasons, I won’t discuss all nineteen questions in detail. Instead, I’ll touch on the big-picture conclusions, highlight some important questions about practical classroom implementation, and point out a few surprises.

The high-level findings of this meta-analysis couldn’t be more reassuring.

YES: retrieval practice enhances long-term memory.

YES: in fact, it enhances memory of facts and concepts, and improves subsequent problem solving. (WOW.)

YES: it benefits students from kindergarten to college, and helps in all 18 (!!) disciplines that the researchers considered.

NO: the student’s gender doesn’t matter. (I was honestly a little surprised they studied this question, but since they’ve got an answer I’m reporting it here.)

I should note that these statistical results mostly fall in the “medium effect size” range: a Hedges’ g of something like 0.50. Because I’m commenting on so many findings, I won’t comment on statistical values unless they’re especially high or low.
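(For the stats-curious: Hedges’ g is essentially a standardized mean difference — the gap between the two groups’ average scores, measured in units of their pooled standard deviation — with a small correction for small samples. Here’s a minimal sketch of the formula; the symbols are generic labels, not numbers pulled from this meta-analysis.)

```latex
% Standardized mean difference; Hedges' g adds a small-sample correction
d = \frac{\bar{X}_{\text{retrieval}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}},
\qquad
g \approx d\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right)
```

So a g of roughly 0.50 means the quizzed groups scored, on average, about half a standard deviation higher than the comparison groups.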

So the easy headline here is: retrieval practice rocks.

Making Retrieval Practice Work in the Classroom

Once teachers know that we should use retrieval practice, we’ve got some practical questions about putting it into practice.

Here again, this meta-analysis offers lots of helpful guidance.

Does it help for students to answer similar questions over multiple days?

Yes. (Honestly, not really surprising – but good to know.)

More specifically: “There is a positive relationship between the number of [retrieval practice] repetitions and the [ultimate learning outcome], indicating that the more occasions on which class content is quizzed, the larger the learning gains.”

Don’t just use retrieval practice; REPEAT retrieval practice.

Is feedback necessary?

Feedback significantly increases the benefit of retrieval practice – but the technique provides benefits even without feedback.

Does the mode matter?

Pen and paper, clicker quizzes, online platforms: all work equally well.

Me: I write “do now” questions on the board and my students write down their answers. If you want to use Quizlet or mini-whiteboards, those strategies will work just as well.

Does retrieval practice help students learn untested material?

This question takes a bit of explaining.

Imagine I design a retrieval exercise about Their Eyes Were Watching God. If I ask my students to recall the name of Janie’s first husband (Logan Killicks), that question will help them remember his name later on.

But: will it help them remember the name of her second husband? Or, her third (sort-of) husband?

The answer is: direct retrieval practice questions help more, but this sort of indirect prompt has a small effect.

In brief, if I want my students to remember the names Jody Starks and Vergible Woods, I should ask them direct questions about those husbands.

Shiver Me Timbers

So far, these answers reassure me, but they don’t surprise me.

However, the meta-analysis did include a few unexpected findings.

Does the retrieval question format matter? That is: is “matching” better than “short answer” or “free recall” or “multiple choice”?

To my surprise, “matching” and “fill-in-the-blank” produce the greatest benefits, and “free recall” the least.

This finding suggests that the popular “brain dump” approach (“write down everything you remember about our class discussion yesterday!”) produces the fewest benefits.

I suspect that “brain dumps” don’t work as well because, contrary to the advice above, they don’t directly target the information we want students to remember.

Which is more effective: a high-stakes or a low-stakes format?

To my astonishment, both worked (roughly) equally well.

So, according to this meta-analysis, you can grade or not grade retrieval practice exercises. (I will come back to this point below.)

Should students collaborate or work independently on retrieval practice answers?

The studies included in the meta-analysis suggest no significant difference between these approaches. However, the researchers note that they don’t have all that many studies on the topic, so they’re not confident about this answer. (For a number of reasons, I would have predicted that individual work helps more.)

Beyond the Research

I want to conclude by offering an opinion that springs not from research but from experience.

For historical reasons, “retrieval practice” had a different name. Believe it or not, it was initially called “the testing effect.” (In fact, the authors of this meta-analysis use this term.)

While I understand why researchers use it, I think we can agree that “the testing effect” is a TERRIBLE name.

No student anywhere wants to volunteer for more testing. No teacher anywhere either.

And – crucially – the benefits have nothing to do with “testing.” We don’t need to grade them. Students don’t need to study. The retrieving itself IS the studying.

For that reason, I think teachers and schools should focus as much as possible on the “retrieval” part, and as little as possible on the “testing.”

No, HONESTLY, students don’t need to be tested/graded for this effect to work.

TL;DR

Retrieval practice — in almost any form — helps almost everybody learn, remember, and use almost anything.

As long as we don’t call it “testing,” schools should employ retrieval strategically and frequently.


Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399.

Using “Worked Examples” in Mathematics Instruction: a New Meta-Analysis
Andrew Watson

Should teachers let students figure out mathematical ideas and processes on their own?

Or, should we walk students through those ideas/processes step by step?

This debate rages hotly, from eX-Twitter to California teaching standards.

As best I understand them, the arguments go like this:

If students figure out ideas and processes for themselves, they think hard about those mathematical ideas. (“Thinking hard” = more learning.)

And, they feel emotionally invested in their discoveries. (“Emotional investment” = more learning.)

Or,

If students attempt to figure out math ideas for themselves, they first have to contemplate what they already know. Second, they contemplate where they’re going. And third, they have to (basically) guess until they figure out how to get from start to finish.

Holding all those pieces — starting place, finish line, all the potential avenues in between — almost certainly overwhelms working memory. (“Overwhelmed working memory” = less learning.)

Therefore, teachers should walk students directly through the mathematical ideas/process with step-by-step “worked” examples. This process reduces cognitive load and builds schema. (“Reduced cognitive load” + “building schema” = more learning.)

Depending on your philosophical starting place, both arguments might sound plausible. Can we use research to answer the question?

Enter the Meta

One problem with “using research to answer the question”: individual studies have yielded different answers.

While it’s not true that “you can find research that says anything,” it IS true — in this specific case — that some studies point one way and some point another.

When research produces this kind of muddle, we can turn to a mathematical technique called “meta-analysis.” Folks wise in the ways of math take MANY different studies and analyze all their results together.

If scholars do this process well, then we get an idea not what ONE study says, but what LOTS AND LOTS of well-designed studies say (on average).

This process might also help us with some follow-up questions: how much do specific circumstances matter?

For instance: do worked examples help younger students more than older? Do they help with — say — math but not English? And so forth.

Today’s news:

This recent meta-analysis looks at the benefits of “worked examples,” especially in math instruction.

It also asks about specific circumstances:

Do students benefit from generating “self-explanations” in addition to seeing worked examples?

Do they learn more when the worked examples include BOTH correct AND incorrect examples?

So: what did the meta-analysis find?

Yes, No, No

The meta-analysis arrives at conclusions that — I suspect — surprise almost everyone. (If memory serves, I first read about it from a blogger who champions “worked examples,” and was baffled by some of this meta-analysis’s findings.)

In the first place, the meta-analysis found that students benefit from worked examples.

If you do speak stats, you’ll want to know that the g-value was 0.48: basically 1/2 of a standard deviation.

If you don’t speak stats, you’ll want to know that the findings were “moderate”: not a home run, but at least a solid single. (Perhaps another runner advanced to third as well.)

While that statement requires LOTS of caveats (not all studies pointed the same direction), it’s a useful headline.
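(To make “1/2 of a standard deviation” concrete, here’s a back-of-the-envelope illustration. The test, the 70-point control average, and the 10-point standard deviation are all invented for the example — nothing here comes from the meta-analysis itself.)

```latex
% Hypothetical numbers: control class averages 70 on a test with pooled SD of 10
g = \frac{\bar{X}_{\text{worked examples}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}} \approx 0.48
\;\Rightarrow\;
\bar{X}_{\text{worked examples}} \approx 70 + 0.48 \times 10 \approx 74.8
```

In other words, an effect of this size looks like roughly a five-point bump on that hypothetical 100-point test.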

In the dry language of research, the authors write:

“The worked examples effect yields a medium effect on mathematics outcomes whether used for practice or initial skill acquisition. Correct examples are particularly beneficial for learning overall.”

So, what’s the surprise? Where are those “no’s” that I promised?

Well, in the second place, adding self-explanation to worked examples didn’t help (on average). In fact, doing so reduced learning.

For lots of reasons, you might have expected the opposite. (Certainly I did.)

But, once researchers did all their averaging, they found that “pairing examples with self-explanation prompts may not be a fruitful design modification.”

They hypothesize that — more often than not — students’ self explanations just weren’t very good, and might have included prior misconceptions.

The Third Place?

In the third place came — to me, at least — the biggest surprise: contrasting correct worked examples with incorrect worked examples didn’t benefit students.

That is: they learned information better when they saw the right method, but didn’t explore wrong ones.

I would have confidently predicted the opposite. (This finding, in fact, is the one that shocked the blogger who introduced me to the study.)

Given these findings and calculations, I think we can come to three useful conclusions: in most cases, math students will learn new ideas…

… when introduced via worked examples,

… without being asked to generate their own explanations first,

… without being shown incorrect examples alongside correct ones.

Always with the Caveats

So far, this blog post has moved from plausible reasons why worked examples help students learn (theory) to a meta-analysis showing that they mostly do help (research).

That journey always benefits from a recognition of the argument’s limitations.

First, most of the 43 studies included in the meta-analysis focused on middle- and high-school math: algebra and geometry.

For that reason, I don’t know that we can automatically extrapolate its findings to other — especially younger — grades; or to other, less abstract, topics.

Second, the findings about self-explanations include an obvious potential solution.

The researchers speculate that self-explanation doesn’t help because students’ prior knowledge is incorrect and misleading. So: students’ self-explanations activate schema that complicate — rather than simplify — their learning.

For example: they write about one (non-math) study where students were prompted to generate explanations about the causes of earthquakes.

Because the students’ prior knowledge was relatively low, they generated low-quality self-explanations. And, they learned less.

This logic suggests an obvious exception to the rule. If you believe your students have relatively high and accurate prior knowledge, then letting them generate self-explanations might in fact be beneficial.

In my own work as an English teacher, I think of participles and gerunds.

As a grammar teacher, I devote LOTS of time to a discussion of participles; roughly speaking, a participle is “a verb used as an adjective.”

During these weeks, students will occasionally point out a gerund (roughly speaking, a “verb used as a noun”) and ask if it’s a participle. I say: “No, it’s something else, and we’ll get there later.”

When “later” finally comes, I put up sentences that include participles, and others that include similar gerunds.

I ask them to consider the differences on their own and in small groups; that is, I let them do some “self-explanation.”

Then I explain the concept precisely, including an English-class version of “worked examples.”

Because their prior knowledge is quite high — they already know participles well, and have already been wondering about those “something else” words that look like participles — they tend to have high quality explanations.

In my experience, students take gerunds on board relatively easily.

That is: when prior knowledge is high, self-explanation might (!) benefit worked examples.

TL;DR

A recent meta-analysis suggests that worked examples help students learn algebra and geometry (and perhaps other math topics as well).

It also finds that self-explanations probably don’t help, and that incorrect examples don’t help either.

More broadly, it suggests that meta-analysis can offer helpful and nuanced guidance when we face contradictory research about complex teaching questions.


Barbieri, C. A., Miller-Cotto, D., Clerjuste, S. N., & Chawla, K. (2023). A meta-analysis of the worked examples effect on mathematics performance. Educational Psychology Review, 35(1), 11.

“Teaching” Helps Students Learn: New Research
Andrew Watson

Not even two months ago, I admitted my skepticism about a popular teaching technique.

While I accept that “students teaching students” SOUNDS like a great idea, I nonetheless worry about the practical application of this idea:

Understanding a new idea requires lots of mental resources. Explaining a new idea requires even more. All those cognitive demands might overwhelm a student’s working memory.

Even if students have the mental resources to accomplish these tasks, how can we be sure that their peers are — in fact — LEARNING the new ideas they’re being taught? For instance: what if the student-teachers misunderstood the material they’re meant to teach?

Peers can intimidate. If teachers have “first day of school” anxiety dreams, imagine how students feel when they must take on the teacher’s job. (And: they don’t have our training and experience.)

So: while I think it’s possible that students benefit from teaching their peers, making this pedagogy successful will take LOTS of preparation, skill, and humility.

Today’s Update: Does the Audience Matter?

Happily, Prof. Dan Willingham recently highlighted a new study exploring this pedagogical question. Specifically, researchers wanted to know if it matters whom the students are teaching.

College students in China watched a two-minute video on synapses, specifically:

how signals are transmitted across neurons in the human nervous system and the role of action potentials, calcium ions, synaptic vesicles, neurotransmitters, sodium ions, and receptors.

After a few extra minutes of preparation, they then “taught” a lesson on this topic.

One third of the participants explained chemical synapses to 7 peers;

one third explained to 1 peer;

and the final third explained to a video camera.

Students in all three groups were instructed that the peers would have to take a test based on these explanations.

So, what effect did the audience have on the student doing the explaining?

Results and Conclusions

The researchers had hypothesized that the presence of peers would ramp up stress and reduce the benefits of this teaching methodology.

For that reason, they suspected that students would do better if they taught their lesson to the video camera instead of to live human beings.

Sure enough, students who taught to the camera did better on basically every measurement.

They offered more thorough explanations (Cohen’s d values here ranged from 0.95 – 1.23: unusually high numbers).

They remembered the information better an hour later.

They transferred their understanding to new questions more effectively.

They felt less stress, and lower cognitive load.

As the authors write: “minimizing the social presence of the audience [by having students teach to a camera] during teaching resulted in maximizing learning outcomes.”

Classroom Implications

At first look, this study seems to suggest that — sure enough! — students DO learn more when they teach.

Alas, I don’t think we can draw that conclusion.

First: this study didn’t measure that question. That is: it didn’t include a control condition where students used some other method to study information about synapses.

This study DOES suggest that teaching to a camera helps more than teaching to peers. But it DOESN’T suggest that teaching (to a camera, or to peers) helps more than something else.

Second: I’m not sure that the verb “teach” makes sense in this context.

The students explained synapses to a camera, and they believed that another student would watch the video and take a test on it.

I suppose we can call that “teaching.” But that’s a very niche-y version of it.

And, in my experience, it’s not AT ALL what teachers think of when they hear about this methodology. More often, students break up into groups to study small parts of a process, and then circulate and “teach” the other groups what they learned.

Third: how would this “teach the camera” plan work in the classroom?

The “explain to a camera” approach might work better than an “explain to peers” version. But I imagine at least two practical problems.

#1: logistically, how does it work? Do I have 25 students explaining to 25 separate cameras simultaneously? Do I have a separate place with cameras where students go to record?

#2: In this study, researchers told participants that other students would watch their videos and be tested on their understanding.

Presumably this statement made the teacher-students quite conscientious about their explanations. For that reason (probably), they thought harder and therefore remembered more.

That is: the camera method helped students learn largely because participants believed that others relied on their teaching.

If, however, I use this strategy in my class, that causal chain (conscientiousness –> thinking –> remembering) could easily break down.

Either I DO use those videos to help other students learn — in which case I have to review and critique them scrupulously;

Or I DON’T use those videos — in which case my students know they don’t really have to be so conscientious. (Reduced conscientiousness –> reduced thinking –> reduced memory.)

These practical questions might sound mundane, even grouchy. But I’m not trying to be grouchy — I’m trying to help my students learn material!

TL;DR

A recent study suggests that college students benefit more from “teaching” if they teach to a camera than if they teach peers.

Although I’m inclined to believe these results — they certainly make a lot of sense — I still worry that a “students-teaching-students” pedagogy sounds better in theory than it might work in practice.


Wang, F., Cheng, M., & Mayer, R. E. (2023). Improving learning-by-teaching without audience interaction as a generative learning activity by minimizing the social presence of the audience. Journal of Educational Psychology.

 

My Detective Adventure: “VR Will Transform Education” [Reposted]
Andrew Watson

Our blogger is off this week. He asked us to repost this piece, because he’ll be chatting with these researchers again soon!


 

A friend recently sent me a link to an article with a click-baity headline: something like “Virtual Reality Will Change Education Forever.”


Her pithy comment: “This is obviously nonsense.” (It’s possible she used a spicier word than ‘nonsense.’)

On the one hand, I’m skeptical that ANYTHING will change education forever. Heck, if Covid didn’t transform education, who knows what will.

More specifically, ed-tech claims about “transforming education” have been around for a long time. Their track record doesn’t dazzle. (Smart boards, anyone?)

On the other hand, I always like to find research that challenges my long-held beliefs. After all, if I can’t learn from people who disagree with me, who can I learn from?

So, I followed my usual process.

In essence, I switched into Detective Mode, and started asking lots of questions.

If I ask the right questions, I thought, I’ll get a much clearer picture of potential educational benefits of VR.

Act I: The Investigation Begins

When I reviewed the article my friend sent, I noticed a troubling gap: the article didn’t link to underlying research.

As I’ve written in the past, this absence creates a red flag. If the article champions “research-based innovation,” why not link to the research?

So, I asked my first detective question. I emailed the author of the article and asked to see the research.

How simple is that?

Obviously, any resistance to this request — “sorry, we can’t share that at this moment” — would underline my friend’s skeptical verdict: “nonsense.”

However, the author responded immediately with a link to a research summary.

A promising development…

The Plot Thickens

This research summary showed real promise.

In brief:

Some college students in an introductory Biology course followed the typical path — readings, lectures, labs. (That’s the “control group.”)

Other students in the same course followed an alternative path: readings, lectures, supplementary Virtual Reality experience, alternative labs based on the VR experience.

When researchers looked at all sorts of results, they found that students on the alternative VR path did better.

That is: not only did the students enjoy the VR experiences; not only did they engage more with the material; they (on average) learned more.

However — and this is a BIG however — this research didn’t look like it was published.

In fact, when I asked that direct question, the article author confirmed that the research hadn’t yet been published in a peer-reviewed journal.

Now, the topic of peer review creates LOTS of controversy. The peer-review system has MANY troubling flaws.

However, that system probably reduces the amount of deceptive nonsense that gets published.

I almost never blog about research that hasn’t been peer reviewed, and so I thought my detecting was at its logical end. The VR claim might not be ‘nonsense,’ but it didn’t yet have enough published evidence to strengthen it.

And then, an AMAZING thing happened: the lead researcher emailed me to say she would be happy to talk with me about the study.

Over the years, I have occasionally reached out to researchers to be sure I understand their arguments.

But no researcher has EVER straight-up volunteered for such a meeting. And I mean: EVER.

The Payoff

Honestly, I’d love to transcribe my conversation with Dr. Annie Hale and Lisa Fletcher (“Chief of Realm 4”) — both at Arizona State University — because it was both fascinating and inspiring.

Because you’re busy, I will instead boil it down to three key points:

First:

Hale and Fletcher have done — and continue to do — incredibly scrupulous research.

For instance, in the description above, I put the words “control group” in quotation marks.

I did so because of Hale and Fletcher’s insistence. The two groups of Biology students had somewhat similar academic experiences.

But the research paradigm required enough differences to make the words “control group” technically inappropriate.

Hale and Fletcher insisted on this precision throughout our discussion. For instance, they regularly pointed out that a particular calculation suggested a positive result, but didn’t reach statistical significance.

In other words, they highlighted both the strengths and weaknesses of their own argument.

This habit, in my view, makes them MUCH more reliable guides in this field.

Second:

Here’s a shocker: Hale and Fletcher do not claim that virtual reality will transform education.

No, really, they don’t.

The headline of the article my friend sent me made that claim, but the researchers themselves don’t.

Instead, they make a very different claim. The alternative Biology path included at least three big changes from the typical path:

Change #1: students had the VR experience (and their lab was based on that experience)

Change #2: the key underlying biology concepts had been translated into stories. For instance, this “narratively-driven virtual reality” includes an imaginary species called the Astelar. (Some of the students got QUITE protective of these imaginary creatures.)

Change #3: the TAs in these alternative path classes got special training, inspired by Doug Lemov’s Teach Like a Champion.

We can’t know — and, Hale and Fletcher don’t say they know — which of these three parts made the biggest difference.

We can tentatively suspect that these three elements working together produced all those learning benefits. And, Hale and Fletcher are planning lots of further research to confirm this tentative belief.

But, they’re not trying to get VR goggles on every forehead.

Key Point #3

Here’s one of my mantras:

Researchers isolate variables. Teachers combine variables.

In other words: research — as much as possible — looks at the effect of just one thing.

For instance: “mid-lecture aerobic movement improves learning in college students.”

However, teachers juggle hundreds of variables at every second. All those isolated variables studied by researchers might not provide me with useful guidance.

For instance: if I teach in a business school, my formally-dressed students might not appreciate my insistence that they do jumping jacks in the middle of the lecture hall.

My particular combination of variables doesn’t helpfully align with that isolated exercise variable.

Here’s my point: Hale and Fletcher seem to be changing the research half of this paradigm.

In their research, notice that they aren’t isolating variables. They are, instead, looking at combinations of variables.

VR + stories + Lemov training –> more learning

In fact, if I understand their argument right, they don’t really think that isolating variables can produce the most useful results — at least not in education research.

After all (and here I’m adding my own perspective), if teachers combine variables, shouldn’t research also look at combinations?

An Early Verdict

I set out on this detective adventure feeling quite skeptical. Both the initial claim (“transform education!”) and the absence of links made me all-but-certain that the strong claim would implode. (Example here.)

However, by persistently asking reasonable detective questions, I’ve arrived at a very different place:

VR + [concepts as stories] + [Lemov-inspired TA training] just might produce big learning gains, at least for some students.

And — crucially — a thoughtful, precise, imaginative, and cautious group of scholars is exploring this possibility in detail.

As I said back at the beginning, I’ve always got something to learn.


This post was edited on April 7, 2023 to correct Lisa Fletcher’s title.

The Unexpected Problem with Learning Styles Theory [Reposted]
Andrew Watson

Our blogger will be taking the first two weeks of August off.

This post generated plenty of conversation when he published it last October.


 

I recently read a much-liked Twitter post that said (I’m paraphrasing here):

If you try to debunk Learning Styles Theory and you face unexpected resistance, start looking for the profit motive.

Hmmm.

To be clear: learning styles theory just doesn’t have plausible research support.

If and when we can debunk it, we certainly should.

But, in my own experience at least, teachers who believe the theory often do so with the best of motives.

Mocking those motives — or, even worse, implying believers have wicked motives — seems unfair. And, likely to prove counterproductive.

Yes, grifters exist. Yes, we should call them out. But most teachers who offer “unexpected resistance” can explain why — for reasons that have nothing to do with profits. (Honestly, if teachers were driven by profits, would we have joined this profession?)

Surface Plausibility

In the first place, MANY teachers learned about Learning Styles Theory in their education programs.

In fact, Blake Harvard — “The Effortful Educator” — searched the websites of 9 major schools of education, and found that MOST referenced Learning Styles Theory positively.

Can we be surprised that teachers believe what their professors teach them?

Equally important, this theory seems to align with much of our classroom experience.

After all, daily classroom life suggests that students learn differently. Some students respond well to this approach, while others need another approach entirely.

So, it seems that Learning Styles Theory (helpfully?) explains these differences, and (helpfully?) suggests a coherent way to respond to them.

Why wouldn’t teachers believe a theory that a) we learned in graduate school, and b) aligns with our daily experience?

Getting Personal

In fact, “unexpected resistance” to Learning Styles Theory often stems from an even deeper source.

Many dedicated teachers have been relying on it for years. Often, their self-definition as good and caring teachers begins with or includes their fidelity to this theory:

“My students know I care about them because I tailor my instruction to their learning style!”

When we tell teachers that we simply have no evidence to support the theory (and, to be clear, we don’t), we’re not simply asking them to change what they do and believe.

Instead, we are — in effect — asking them to admit that their exemplary teaching practice was (at best) useless, and (possibly) detrimental. FOR YEARS.

That admission, of course, is incredibly painful and troubling.

For us to mock teachers (“look for the profit motive!”) for this painful struggle … well, I simply don’t understand how that approach will help. I can’t remember the last time that mockery helped me change my teaching practice for the better.

Plausible Alternatives

If we shouldn’t accuse people of being charlatans (hint: I think we mostly shouldn’t), how should we contradict these misbeliefs?

As I’ve written before, I do think this is a very difficult problem.

We really should contradict those false beliefs, but I’m not at all sure that doing so encourages people to adopt new ones.

My current approach relies on these steps.

First: rather than asking teachers to stop believing one thing, I encourage them to start thinking about something else.

My hopeful theory: the more time they’re thinking about, say, working memory, the less time they’re thinking about Learning Styles Theory.

Second: I don’t contradict in public. I try to chat with believers one-on-one.

Honestly, this approach includes perils. If I don’t contradict in public, others might believe that theory does have merit.

However, as noted above, I think increasing shame reduces the likelihood that new advice will stick.

Third: I provide research, and ask lots of genuinely curious questions.

I hope that peer-to-peer curiosity will ultimately change more minds than more confrontational strategies.

 

To be clear, I’m not certain that my approach has more merit than others. I certainly have no research suggesting that it will work.

But experience tells me that “supportive listening” beats “questioning motives” as a motivational approach.

If you’ve got suggestions and strategies, please share them!

Open Classroom Plans: The Effects on Reading
Andrew Watson

I’ve written frequently over the years about the effects of classroom decoration on learning.

The headline is: althought many teachers have been trained to DECORATE, those decorations can distract students and thereby reduce learning.

We’ve tested this question for students from kindergarten to college.

We’ve tested them in different disciplines.

Heck, we’ve even tested them over very long periods of time (15 weeks!).

Sure enough: students don’t get used to decorations. Instead, they continue to be distracted and to learn less.


To be clear: I don’t think classrooms should be utterly sterile. But, I do think that research suggests we should take a “less is more” approach to decoration.

This set of findings raises an important corollary: are there other kinds of distraction that should worry us?

How about: distractions from other students…

Experiments Past

Enthusiasm for open classrooms began — I believe — in the 1960s and ’70s.

The basic ideas are:

Philosophically speaking: open classrooms feel less authoritarian — more student-centered than teacher-centered, and

Pedagogically speaking: they allow for a greater variety of combinations and collaborations — across grades, for instance.

The potential hazards, of course, are DISTRACTION. Having all those people and all that noise might make learning much harder.

Of course, this question isn’t easy to research. To do so, we would need…

… large groups of students who

… spend substantial learning time in both environments, and

… measurements that track their relevant academic progress.

Honestly: that’s A LOT to ask of a study.

Crunching the Numbers

I have good news!

A group of scholars in Australia have undertaken just such a study, looking at 7-10 year-old students in several schools.

In this study, researchers tracked classes that switched from open to enclosed to open classrooms (or, the other way around) over three terms.

The measurement of interest: reading words per minute.

Of course, this measurement makes good sense. We teachers REALLY CARE how well our students can read. And this particular measurement correlates with all sorts of academic outcomes.

So, what did the research team find? The envelope, please…

Fully two-thirds of students improved more in enclosed classrooms than in open classrooms.

For some students, the classroom difference didn’t matter.

For a few — those with especially good attention, and/or academic background — the open plan resulted in greater improvement.

Those seem like impressive numbers.

Final Thoughts

I’ve looked around for research that contradicts this finding (a habit of mine), and so far I haven’t located anything persuasive. (If you know of such a study, please send it my way!)

Truthfully, I haven’t found lots of research in this field at all — many studies date from the ’70s and ’80s.

In brief, I think we have one very compelling data point. In this study, open classrooms reduced learning for most students, especially those who most need help in school.

If that result holds up with further research, we should be strongly inclined (in most circumstances) to teach students in the self-contained classrooms that foster learning.

Learning How to Learn: Optimists and Realists
Andrew Watson

In schools, optimism helps teachers a lot.

At the beginning of the year, my students JUST DON’T KNOW all sorts of things: how to write a good essay; how to analyze Macbeth; how to define “gerund.”

In all likelihood, your students don’t know things too.


Because I’m often an optimist, I trust that — by the end of the year — they WILL KNOW all those things, and lots more. So, I suspect, do you.

Our optimism prevents despair. (“They’ll just never learn the …”)

At times, however, that optimism can build unreasonable hopes and expectations.

For instance:

I hear a GREAT DEAL of enthusiasm about the concept of teaching students “to learn how to learn.”

If students know all the cool stuff that we discuss at Learning and the Brain conferences, surely their academic lives will be better.

The optimist in me says: “that sounds GREAT.” The realist in me says: “I want to slow down and ask some pointed questions…”

A Core Principle

I make my living by explaining cognitive science to teachers. I talk about working memory and attention and retrieval practice and prior misconceptions and executive attention…the list goes on.

When I think about explaining cognitive science to students, I return to this vital question:

Realistically speaking, can students DO SOMETHING with this information?

If the realistic answer to that question (have I written “realistic” often enough?) is not an emphatic “YES,” then I’m very hesitant about sharing it.

For example: retrieval practice.

By now, this blog’s readers know that actively calling information to mind (“retrieval practice”) enhances learning — especially when compared with rereading (“simple review”).

I could link to dozens of sources, but — to keep things simple — I’ll simply highlight this book review, and this website.

Obviously, we teachers should use as much retrieval practice as we reasonably can.

But: should we tell our students about retrieval practice, so they can “learn how to learn”?

Let’s go back to my guiding principle:

Realistically speaking, can students DO SOMETHING with information about retrieval practice?

Well: obviously YES.

Students can…

… make flashcards

… quiz one another (rather than review their notes)

… outline a chapter from memory (rather than reread the chapter).

Students have ALL SORTS of ways to make retrieval practice their own, and to benefit from it.

So, it makes sense to tell students about this part of cognitive science.

In truth, I tell my students about retrieval practice at the beginning of every year. (I once had a student THANK ME for doing so much retrieval practice. No, I’m not making that up.)

In this specific case, students can absolutely “learn how to learn.”

Next Question

If students should learn about retrieval practice, what about other “desirable difficulties”? What about — for example — spacing and interleaving?

You know the drill:

Realistically speaking, can students DO SOMETHING with information about spacing and interleaving?

Honestly, I have my doubts.

Students have relatively little control over their practice schedule. They practice when their homework requires them to do so.

In other words: they can’t simply decide to do half their “gerund” exercises tonight and half later in the week. They have to do them when I assign them.

So: it’s on me (and other teachers) to design our syllabi to space and interleave practice.

But, for the most part, students can’t do much with this information, so I don’t make a big deal about it with them. (I’m relieved to know that Bradley K. Busch largely agrees with me on this point.)

Yes: students should probably know about spacing and interleaving so that they can manage their college study habits well. But until they get to college, that information doesn’t offer them much practical guidance (except in unusual circumstances).

And the next…

Sadly, I think most information from cognitive science fits in this latter category.

That is: TEACHERS should know about working memory. (Teachers should obsess about working memory.)

But, I don’t think students can do much with information on this topic.

That is: when they’re experiencing working memory overload, they don’t have enough working memory left to get metacognitive about working memory reduction strategies.

The potential solution, paradoxically, exacerbates the problem.

Heck, it took me YEARS to figure out how to apply my knowledge of working memory limitations to my teaching. It seems somehow unfair to ask students to accomplish a task that challenged me so greatly.

Heretically, I don’t think that students really need to know much about mindsets either. Instead, I think TEACHERS and SCHOOLS should create policies and practices with this theory in mind.

Those mindset posters probably don’t do any harm.

But if they’re not enacted by the teachers’ words and decisions, that contradiction makes the students’ knowledge useless.

Telling students about mindsets (probably) doesn’t help them learn how to learn. Behaving and speaking as if we have a growth mindset helps students learn.

An Alternative Perspective

I’ll say all this in a different way: I believe teachers have a distinct and vital role in students’ learning.

Students have their work to do — obviously. And we have ours.

Those two roles sometimes overlap. But just as often, they remain distinct.

We should know our subject.

And, we should know from cognitive science.

In those few cases (retrieval practice!) where students can use that knowledge immediately, we should eagerly share it with them.

In the more common cases where they can’t (e.g.: Posner’s tripartite theory of attention; dopytocin), we should put it to use, but not burden students with extraneous cognitive load.

This perspective might not be uplifting and optimistic, but I hope its realism will ultimately help my students learn better.

Introducing “Schema Theory”
Andrew Watson

In the last few years, I’ve increasingly wondered if “schema theory” just might work a special kind of magic.

If I understand it right (and if it’s true), then schema theory unites two distinct topics:

the cognitive science behind good teaching, and

the curriculum.

Because that result would be, ahem, SPECTACULAR, the theory merits careful attention.

In this post, I’ll try to explain:

What schema theory is,

Why teachers should care, and

What its limitations seem to be.

I’m thinking of this post as the first of a series: I hope to flesh out this concept more substantially over time.

What Is Schema Theory?

Schema theory models the mental structure of knowledge.

In other words: if I say that I know something, schema theory tells me what that knowledge might look and act like in the mind.

This theory rests on two key points.

First: a schema comprises a vast, interconnected web of declarative and procedural knowledge.

So, if I say “I know what a ‘pet’ is,” I’m claiming to have a “pet” schema. That schema includes declarative/procedural knowledge:

Specific animals: dog, cat, goldfish, hamster.

Concepts, like “tame” or “belongs inside the home.”

Procedures, like “take for a walk” or “clean the litterbox.”

Second: in those schema, the bits of knowledge function together fluently — which is to say, automatically.

If I tell a friend that I’ve gotten a new pet, she would IMMEDIATELY know a) that I’m talking about a particular group of animals, b) that my furniture might be in peril, and c) that our early morning walks might be disrupted if I’m bringing a dog along.

She doesn’t have to stop and think her way through all those pieces. They spring instantly to mind, because she has activated the “pet” schema.

Similarly, if I told her I’d gotten a pet lion, she would IMMEDIATELY think

“Lions aren’t typically pets!”

“I wouldn’t want a lion inside my house!!”

“I wonder who has to clean THAT litterbox!!!”

Those thoughts all happen unprompted because I’ve violated the “pet” schema, and she’s trying to make “lion” fit into it.

To review these two key points:

LOTS of intricately connected declarative and procedural information,

used FLUENTLY/AUTOMATICALLY together.

That’s a (very basic definition of a) schema.
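To make those two key points a bit more concrete, here is a toy sketch in Python. To be clear: this is my own illustration, not anything from the schema-theory literature, and certainly not a claim about how the brain stores knowledge — the class, the “pet” entries, and the activate step are all invented. It simply shows the two features above: many linked declarative and procedural bits, retrieved together in a single step.

```python
# A toy model of a schema -- an illustration, NOT a claim about how brains store knowledge.
# The class name, the "pet" entries, and the activate() method are all invented here.

from dataclasses import dataclass, field


@dataclass
class Schema:
    name: str
    examples: set = field(default_factory=set)    # declarative: dog, cat, goldfish...
    concepts: set = field(default_factory=set)    # declarative: "tame," "lives in the home"
    procedures: set = field(default_factory=set)  # procedural: "take for a walk"...

    def activate(self, candidate: str) -> str:
        """All the linked knowledge surfaces together, in one step."""
        linked = self.concepts | self.procedures
        if candidate in self.examples:
            return f"'{candidate}' fits the '{self.name}' schema; it brings along: {sorted(linked)}"
        return f"'{candidate}' violates the '{self.name}' schema (expected one of {sorted(self.examples)})"


pet = Schema(
    name="pet",
    examples={"dog", "cat", "goldfish", "hamster"},
    concepts={"tame", "belongs inside the home"},
    procedures={"take for a walk", "clean the litterbox"},
)

print(pet.activate("dog"))   # the whole web comes to mind at once ("fluently")
print(pet.activate("lion"))  # a violation still gets interpreted through the schema
```

The point of the sketch: once the “pet” schema exists, a single cue (“dog,” or even “lion”) pulls up the whole web of related knowledge without any deliberate searching.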

Why Teachers Should Care About Schema

Two reasons (at least).

First:

We teachers often struggle to identify our goal. Do we want our students to…

… achieve today’s learning objective?

… demonstrate proficiency in the curriculum?

… meet the state standards?

If yes, which of these goals takes priority?

In my view, the concept of “schema” brings all those goals together.

When students build effective and useful schema, they unite granular bits (say, “learning objectives”) into larger coherent and fluent wholes (say, “the curriculum” as a way of meeting “state standards”).

In other words, no matter which way we think about students’ academic and curricular progress, we can talk about “schema.” Conversations that once seemed fragmented and incoherent can come together into a complex, thoughtful whole.

Second:

Cognitive science helps us understand the strategies that most effectively build schema.

How do we get all those small bits (“cat, dog, clean litterbox, tame, not lion”) to fit together so they operate fluently as a whole (“pet”)?

Well, let’s talk about working memory. And retrieval practice. And generative learning. And desirable difficulties. And…

In other words:

We can use the same conceptual structure (“schema theory”) to unite the content we want to teach with cognitive science.

We’ve got one big framework that captures both curriculum and pedagogy.

That’s (potentially) AMAZING AND HELPFUL.

 Just imagine how clarifying such conversations could be.

What Are the Limitations of Schema Theory?

In a word: research. As far as I can tell, we ain’t got much.

When I ask about the research basis for schema theory — asking for a “research basis” is a hobby of mine — I get incomplete answers.

Some folks refer me to scholars who wrote in the 1950s (or 1930s). That’s an interesting theoretical basis, but it isn’t current psychology research.

Others point to individual studies here and there. (Anderson 1983 gets a lot of attention.) But those individual studies — in my view — don’t (yet) remotely add up to strong support for the theory.

One scholar I spoke with responded with this question: “well, how would you research the theory? What study would you do?”

That’s an important question…but in this field we focus on research-based assertions. We can’t simply wave away the need for research.

I’ve been trying to make sense of this research field in recent months; I’m currently working with a friend to organize it all.

So, here’s the conundrum I face:

Schema theory could be spectacularly useful.

We don’t seem to have lots of research making a strong case for the theory (although LOTS of people act as if we do).

Of course, at Learning and the Brain, we’re ALL ABOUT the research. Until I see more, I’m always hesitant to espouse the theory — no matter how useful — too strongly.

Some Additional (Unrelated) Notes

First:

Oddly, schema theory lives a double life.

In Britain, it’s old news. I believe they went through a “schema theory” phase 20 years ago, and now Brits (well, Brits on eduTwitter, anyway) talk about schemas as if we all know what they are.

In the US, almost no one talks about them at all. (I am, as far as I know, the only person in Learning-and-the-Brain world to do so regularly).

Second:

Technically speaking, the plural of “schema” is “schemata” (think “stigma/stigmata”). Very few people actually use that word. Some say “schemas.” Others use “schema” as both singular and plural.

Third:

If you know from schema theory, you’re quite possibly vexed that this post is so inadequate.

I haven’t linked to Dr. Efrat Furst’s spectacularly useful website. I haven’t linked to Sarah Cottingham’s immensely helpful blog post.

I’ve even left out the famous restaurant example — everyone’s go-to for explaining a schema.

This frustration has merit, because I’ve barely introduced a complex (and potentially vitally important) topic.

If you have studies you want to share, books to recommend, websites to laud, PLEASE let me know.

I’ll keep working out my thinking, and I’m hoping you’ll help me along the way.