About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Navigating Complexity: When 1st Order Solutions Create 2nd Order Problems
Andrew Watson

Here’s a common classroom problem.

As I’m explaining a complex concept, a student raises a hand.

“Just a moment,” I say, and finish my explanation.

Now I turn and smile at the student: “What was your question?” I ask.

All too often, the student answers, “I forgot my question.”

What’s going on here?

As is so often the case, the answer is: working memory overload.

Working memory HOLDS and PROCESSES information. When a student fails to hold and process, that’s working memory overload.

[Image: a primary school student wearing a backpack, sitting at a desk and raising an eager hand.]

In this case, my student was processing my explanation, and so failed to hold the question.

The solution?

It might seem simple. Don’t ask students to hold questions while they process explanations.

Instead, I should answer students’ questions right away. Problem solved….

When Solutions Create Problems

Wait just a moment.

This “solution” I just offered might solve the student’s problem.

At the same time, it might create new problems.

The student’s question — even a well-intentioned one — might throw my explanation off track.

My students might lose their tentative understanding of my complex explanation.

I might lose my own train of thought.

So I fixed one classroom problem but now have yet another one. YIKES.

What’s a teacher to do?

First Things First

This example — but one of many — might throw our entire project into question.

Teachers turn to psychology and neuroscience to solve classroom problems.

However, if these “research-based solutions” simply transform one problem into some other headache, why bother with the research?

We could save time by sticking with the old problem, right?

I think the fair answer to that question is: “actually, no.” Here’s why…

Teachers don’t need research to solve classroom problems. We need research to solve COMPLEX classroom problems.

When our classroom problems are simple, we just solve them on our own. We are — after all — teachers! We specialize in problem solving.

For that reason, we turn to research only when the problem isn’t simple.

And for that reason, we shouldn’t be surprised when the answer isn’t simple either.

OF COURSE we can’t fix the “questions-interrupting-my-explanation” problem with one easy research-based step.

If it were so simple a problem, we would have solved it without the research.

Changing the Lens

As I’ve explored this question with wise teachers in recent weeks, I’ve been struck by a pattern:

PROBLEM ONE requires SOLUTION ONE.

But: SOLUTION ONE creates PROBLEM TWO.

And: it’s often true that PROBLEM TWO comes from a different cognitive function than PROBLEM ONE.

So, in the example above, I started with a working memory problem: my student couldn’t hold and process information.

My solution (“take questions right away”) created another problem — but not a working memory problem.

When I answer questions mid-explanation, my students lose focus. That is, the working memory problem has been transformed into an attention problem.

To solve this second problem, I need to switch from working memory solutions to attention solutions.

In other words, I need to think about a separate cognitive function. I’ll find solutions to this 2nd order problem in a different research field.

Again with the Mantra

If you’ve ever heard me speak at a Learning and the Brain conference, you know my mantra: “don’t just do this thing; instead, think this way.”

In other words: psychology research can’t provide teachers with a list of “best practices.” The strategy that works in my 10th grade English classroom at a boarding school might not help 1st graders add numbers in a Montessori program.

But: the thought process I follow with my 10th graders might lead to beneficial solutions for those 1st graders. The answer (“do this thing”) might be different, but the mental pathway (“think this way”) will be the same.

The point I’m making here is: these thought processes might require us to leap from mental function to mental function in search of a more successful solution.

A solution to a long-term memory problem might uncover a motivational problem.

The solution to an alertness problem might prompt an orienting problem.

When I reduce my students’ stress, I might ramp up their working memory difficulties.

And so forth.

When we understand research into all these topics, we can anticipate that these solutions might unveil an entirely different set of troubles.

And by moving nimbly from research topic to research topic, we can ultimately solve that complex problem that once seemed intractable.

All this nimbling about takes practice. And, ironically, it might threaten our own working memory capacity.

But once we get used to thinking this new way, we will arrive at solutions that fit our classrooms, and that work.

Collaborative Learning and Working Memory Overload: Good News or Bad?
Andrew Watson

Consider the following paradox:

Teachers need to give students instructions — of course we do!

After all, instructions help students do what they need to do, so that they can learn what we want them to learn.

[Image: three middle school students working together on a problem from a textbook.]

At the same time, too many instructions might very well overwhelm working memory.

After all, the student has to HOLD the instructions in memory while PROCESSING each one individually. And: “holding while processing” is one handy definition of working memory function.

In brief: the right number of instructions can help learning, but too many instructions can impede learning.

I recently asked a group of wise and experienced teachers this question:

“Can you think of other teaching practices — like instructions — that are beneficial in small amounts, but might create working memory overload in large amounts?”

After some time to think and discuss, one teacher answered: group work.

After all, he mused, collaboration might simplify some mental processes. But collaboration itself creates additional mental taxes — all that negotiating and delegating and coordinating and explaining.

And disagreeing.

And resolving.

Are there ways that teachers can reduce those “mental taxes” so that students get the benefits without the penalties?

If only we had a research-based answer to those questions…

Inspired by this teacher’s observation, I hunted up this study.

Quadratics in Quito

To explore this question, researchers working in Quito, Ecuador worked with 15-year-olds solving quadratic equations.

Specifically, they wanted to know if practice collaborating helps students collaborate effectively.

As is always true, research design gets tricky. But the overall design makes sense.

Some students did practice solving quadratic equations collaboratively; others didn’t.

For a second round of math learning, all students were then sorted into groups for collaborative learning.

So, did students who practiced collaborating do better on later collaboration?

For complex equations: YES. Both three days later and seven days later, students who practiced collaborating did better solving problems than students who didn’t.

For simple equations: NO. If the mental work wasn’t very hard, students didn’t need to practice to collaborate effectively.

In light of these findings, the researchers’ recommendations make good sense.

If learners are novices, learning tasks are complex, and information distribution demands [extensive cooperation], teachers should prepare group members … using similar problems.

If task information does not demand [extensive cooperation], it is not necessary for the teachers to prepare learners to collaborate.

I want to highlight one part of this summary: “using similar problems.”

This research team emphasizes that “collaboration” is NOT a general skill. Collaboration will look different depending on the precise demands of the discipline and the topic.

So: students who “practiced” were given a VERY specific format for learning how to collaborate on this task.

If we want to get the benefits of practice for our own students, we should be sure to tailor the practice in very specific ways.

The Story Behind the Story: an Analogy and a Principle

A research article like this study always begins with a summary of earlier research findings, conclusions, and questions.

This summary includes a particularly helpful foundational inquiry.

When does collaboration increase WM load so much as to threaten learning?

When does collaboration reduce WM load enough to promote learning?

Is there some sort of taxonomy to consider or principle to explore?

To explain, I’ll start with an analogy.

Imagine I want to illuminate a yard at night.

For all sorts of reasons, it would be simplest to have one lamp to do so. Having multiple lamps just adds to the complexity and expense of the project.

So, if my yard is small enough to be covered by one lamp, then I should use one. Adding more lamps makes the project worse — more complicated, more expensive — not better.

But at some point, a yard gets big enough to need multiple lamps. If I use only one lamp, I just can’t illuminate the full yard.

In this case, the additional expense and complexity of having multiple lamps provides a meaningful benefit.

You can see where this is going.

Here’s a potential principle:

If a cognitive problem is simple enough, then one student can solve it on her own.

Adding other students (“collaborative learning”!) increases the WM complexity of the situation without providing any additional benefit.

In this case, each student’s mental effort has become less effective, not more effective.

If, on the other hand, a cognitive problem gets complex enough, then it goes beyond any one student’s cognitive capacity.

In that case, the problem benefits from additional students’ cognitive efforts — even though all those extra students do increase the complexity of the problem.

At some tipping point, when a problem gets complicated enough, it needs to be divided into sub-tasks — despite the complexity of managing that work.

At that tipping point, well-structured and precisely-practiced collaboration probably is more beneficial than harmful.

TL;DR

Groupwork (probably) increases WM demands on simple cognitive tasks, but reduces WM demands for complex cognitive tasks.

To get the most benefits from collaboration, students should practice that skill — and teachers should tailor the practice to the precise demands of the cognitive work.


Zambrano, J., Kirschner, F., Sweller, J., & Kirschner, P. A. (2019). Effects of group experience and information distribution on collaborative learning. Instructional Science, 47, 531-550.

The Dangers of “The Big Ask”: In Defense of Stubborn (?) Teachers
Andrew Watson

Let’s face it: teaching is hard.

I’ve been a classroom teacher for roughly 20 years — how do I count summer school? — and I still find the work exhilarating, exhausting, baffling, uplifting, frustrating, humbling, and joyous.

[Image: an exasperated teacher standing in the middle of a chaotic classroom, holding her hands on her head and shouting.]

And that was Tuesday.

And: I think I’m not the only one who finds teaching to be an extra-ordinary challenge. I mean, don’t get me wrong, I love it. But GOSH it’s hard.

This hard-won experience leads me — and perhaps you — to two conclusions:

First: people who haven’t taught in the classroom don’t fully understand the challenges of the work.

Until you’ve tried to follow a scrupulously devised lesson plan despite the fact that an unannounced fire drill is in progress, two students have switched sections, three don’t have their notebooks, and four don’t think the cell-phone policy applies at just this moment…you just don’t really know.

How could you? It’s “Mission Impossible: Chalkdust” in here.

Second: I need all the help I can get.

No, really.

You’ve got some research that might …

… help me create a more effective lesson plan?

… explain how attention really works?

… suggest study strategies to help my students learn?

… foster motivation, at the beginning of a lesson on grammar?

I’m all ears. Please. I’m practically begging here…

My Learning and the Brain Journey

I attended my first LatB conference in 2008; the topic was The Science of Attention.

I IMMEDIATELY realized that this conference was just what I needed. So much wisdom and advice. So many compelling suggestions.

And, so many graphs and pictures of brains!

I returned to the classroom and started rethinking everything.

What should my rewrite policy be?

Should my classroom decorations be in primary colors?

What’s the right number of new vocabulary words to teach per class?

I had research-y answers to all those questions.

After several years of attending conferences, I went back to grad school and got a degree combining education, psychology, and neuroscience.

And, in addition to teaching, I started training other teachers.

Now I offered LOTS of advice of my own:

Because working memory does this, teachers should do that.

Because long-term memory benefits from this, teachers should do that.

Because stress affects this…you get the picture.

Many teachers appreciated all this guidance. But some obviously didn’t.

For whatever reason, they just didn’t want to do what I was telling them to do!

I was genuinely surprised; after all, I knew I was right because the research said so!

“The Big Ask”

Over time, I’ve come to realize that those teachers didn’t want to do what I was telling them — and “what the research said” — because I had forgotten the first lesson described above.

I mean: yes (lesson #2), “I need all the help I can get.” So, research-based advice MIGHT help.

But also,

Yes (lesson #1), “people who haven’t taught in the classroom don’t fully understand the challenges of the work.” So, research-based advice might not apply to this classroom, this topic, this teacher, or this student.

In other words,

When I say to teachers, “you should change the way you teach because research says so,” I’m making a REALLY BIG ASK.

After all, those students are ultimately their responsibility, not mine.

Yes, I have taught — but I teach English to 10th graders at a very selective school. I should be very careful when offering guidance to, say …

… 2nd grade teachers who work with struggling readers, or

… those with lots of students on the Autism spectrum, or

… teachers who work in different cultures.

Because I do know research well, and I know classrooms fairly well, I can make those Big Asks. But I should be humble about doing so. And I should be respectful when a teacher says, “your ‘research-based advice’ …

… conflicts with our school’s mission, or

… might work with college students, but probably won’t work with 2nd graders, or

… requires personality traits that I don’t have.”

Yes, I think teachers should listen thoughtfully to the guidance that comes from research. And, I think those of us who cite research should listen thoughtfully to the classroom specifics, and the experience, of teachers.

Teachers who resist ‘research-based advice’ might seem “stubborn,” but they also might be right.

The Story Behind the Story

In last week’s blog post, I summarized research about using gestures to teach specific science concepts. I also sounded a few notes of caution:

I don’t fully understand the concept of “embodied cognition,” and

I worry that a few very specific studies will be used to insist that teachers make broad changes to our classroom work.

Even as I wrote that post, I could hear colleagues’ voices in my head: “why are you always so grouchy? Why don’t you listen when people with PhDs say ‘this is the Next Important Thing’?”

The answer to those questions is this week’s blog post.

I’m ‘grouchy’ because I worry our field is constantly making Big Asks of teachers.

We often make those Asks without acknowledging a) the limits of our research knowledge, and b) the breadth of teachers’ experience.

I am indeed optimistic about combining cognitive psychology research with teacherly experience to improve teaching and foster learning.

To make that combination work, we should respect “stubborn” teachers by making Respectful Asks.

“Embodied Cognition” in Action: Using Gestures to Teach Science
Andrew Watson

Here’s a topic that has gotten lots of enthusiastic attention in recent years: embodied cognition.

As the name suggests, that phrase means — basically — “thinking with your body, not just your mind.”

Because your brain is a part of your body (it is, in fact, physically attached to your body), the concept makes rough-and-ready sense.

In at least two ways, this perspective has well-established research support.

First, physical fitness improves cognition — at least, up to a point.

We don’t need to be Olympic athletes to learn chemistry.

But if I’m conspicuously out of shape, the related health detriments harm my brain just as they harm my lungs; that harm makes learning harder. (If you want to get your neuroscience geek on, look up “brain-derived neurotrophic factor.”)

Second, some degree of physical movement during class moderates students’ alertness levels.

If my students are nodding off or bouncing giddily, I’ll get them up on their feet for a bit to get the blood moving (or to burn off some of that excess energy).

In these ways, the body’s physical state obviously matters for cognition.

And yet, over the years, I’ve had two basic concerns about broader claims within this field. Let me try to explain…

Better Definitions, Please

Scientific conclusions typically require precise measurements; precise measurements require precise definitions.

That is: I can tell you that this rock weighs more than that rock because I can measure it (on my scale) according to well-defined measurements (pounds or kilos).

But: if I want to say that this student is paying more attention than that student, I need a really good definition of attention, and a way to measure it. “This student demonstrates 6 attention units, whereas that one demonstrates only 4.”

[Image: a student doing an acrobatic movement in the classroom while carrying a backpack, with doodles on the blackboard.]

Sadly, the concept of “embodied cognition” invites definitional muddle.

For instance: is mindful meditation “embodied cognition”? (It often includes a focus on the body.)

More broadly, here’s Wikipedia’s entry on embodied cognition. I’m not gonna lie; I get lost really quickly when I read that entry.

So, problem #1: I don’t always understand exactly what the claims about embodied cognition really are.

More Research, Please

I think I do understand one of the claims under the “embodied cognition” umbrella. I think the claim is:

Adding the right gestures to teaching helps students learn.

That is: using gestures (“embodied”) helps students think and learn (“cognition”).

A recent study in Australia pursued just this line of inquiry.

In this study, 33 students (aged 12-14) learned about Brownian motion.

Half of them saw a typical lesson — a PowerPoint presentation, group discussion, worksheets — taught by an experienced teacher.

The other half saw the same lesson (PowerPoint presentation, etc.) with additional, carefully designed hand gestures.

By the way, the teacher used the hand gestures, and encouraged the students to do so as well.

Two days later, the students who saw and used the meaningful gestures (a.k.a., “iconic” gestures) scored a lot higher on a simple quiz. (For stats folks, the Cohen’s d was 0.98, which is really big!)
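For readers who like to see the arithmetic behind that claim: Cohen’s d is simply the difference between two group means, divided by the groups’ pooled standard deviation. The sketch below uses made-up quiz scores (not the study’s actual data) just to show how the number is computed:

```python
# Illustrative only: these scores are invented, not taken from the
# Bentley et al. study (which reports d = 0.98).
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical quiz averages: gesture group (8.0) vs. control group (6.0)
print(round(cohens_d(8.0, 2.0, 17, 6.0, 2.0, 16), 2))  # → 1.0
```

A d near 1.0 means the gesture group’s average sat roughly one full standard deviation above the control group’s average — which is why 0.98 counts as a strikingly large effect in education research.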

Now, I admit to some concerns about this study:

33 is a very modest sample size.

“2 days later” isn’t really learning.

Most important: there is no “active control group.”

That is: the researchers didn’t compare iconic gestures with another new strategy. Instead, they compared gestures to “business as usual.”

“Business as usual” isn’t often a very persuasive control group; after all, the novelty might explain the effect.

These concerns aside, I do think the study — combined with other similar studies — gives us some reason to think that the right gestures just might help students learn better.

I was especially glad to see an emphasis on students’ use of the gestures. This variable hasn’t gotten much attention in other studies I’ve seen, so I’m encouraged to see it getting greater visibility.

Lingering Questions

And yet, I STILL want more research. Here’s why:

Problem #2: I don’t think we have nearly enough research (yet) to establish useful principles for instructive gestures.

In other words: these gestures probably helped 13-year-olds learn about states of matter.

But: what sorts of gestures can help what ages learn about what topics?

Specifically:

If I want my students to know the difference between “comedy” and “tragedy” (and I do!), can gestures help with those concepts? How should I think about designing those gestures?

What sorts of topics in a history class would benefit from gestures?

Should foreign language teachers have students make specific gestures — say — when they learn different declensions? When they learn masculine or feminine nouns?

I’m not trying to be difficult or grouchy when I ask these questions. I’m trying to understand how seeming success in this one case could be translated to other topics, other disciplines, and other age groups.

Growing Concerns

More broadly, I worry that “iconic gestures/embodied cognition” will become the Next Thing We’re All Talking About.

Teachers will get instruction about Iconic Gestures, be required to use them, and be evaluated on their use … even though we don’t have even basic guidelines on how to create or use them. (At least, as far as I know.)

For instance: the topic of Brownian motion was chosen, in part, because it is “susceptible to being taught using specific gesticulation.”

What about topics that aren’t obviously susceptible?

In fact, if you look at the gestures used during the lesson, they don’t seem too far off from the sorts of gestures that teachers might make spontaneously.

Are “iconic gestures” simply “the sorts of gestures we’d use anyway, but formally planned, scripted, practiced, and repeated by students”?

If yes, does the entire topic of iconic gestures change from “revolutionary” to “a modest technical update to something we’re doing anyway”?

I’m entirely open to the possibility that gestures (“embodied”) can help students learn (“cognition”) … but we need more research to know for sure.

TL;DR

Because the brain is in the body, the body’s physical state obviously matters for learning.

This recent study from Australia (and others) suggests that well-crafted hand gestures can help students learn some concepts.

However, the principles that guide us in the creation and use of those hand gestures are not yet well mapped. So: we just don’t know how widely this technique might benefit teachers, schools, and students.

If someone insists you start using gestures because “research in embodied cognition says you must!”, ask to see the specific study.


Bentley, B., Walters, K., & Yates, G. C. (2023). Using iconic hand gestures in teaching a year 8 science lesson. Applied Cognitive Psychology.

Getting the Principles Just Right: Classroom Decoration
Andrew Watson

The benefits of classroom decoration seem intuitive.

After all, we decorate our homes in order to make ourselves — and our guests — comfortable there.

[Image: an artist's table, covered with an organized but overwhelming collection of pencils, pens, markers, and so forth.]

Little wonder that decorating a classroom feels like a natural way to welcome our students, and make them feel right at home.

Also compelling: we can control our classroom decoration.

Whereas so many other parts of teaching life must respond — second by second — to the random chaos of young learners, our classrooms show what we can do when our plans come beautifully to fruition.

And, let’s be honest: we’re often evaluated on classroom decoration. If we can get easy points for decoration on an evaluation form — why not grab them?

To add to all these incentives, let’s add the potential for one more: research. I often see highly specific claims about the benefits of classroom decoration.

For instance, one popular blog post notes that research encourages classroom decorations — although teachers should leave 20% of wall space blank. (I’ll come back to this number, so it might be worth remembering.)

Beyond Intuition

If our intuition and experience tell us that classroom decorations benefit students, can we find research support for that intuition?

For several years now, research has increasingly thrown those intuitions into doubt.

For the most part, research suggests that classroom decorations can overwhelm students’ limited cognitive resources: working memory, and attention.

Ten years ago, a research team found that kindergarten students learn less in “more” decorated classrooms compared to “less” decorated ones.

Over several years, a research team in Portugal has found that K-16 students score lower on attention and working memory tests taken in busy environments.

Most recently, researchers found that students don’t get used to decorations. That is: decorations distract students in the first week of school, and still distract them 15 weeks later.

If we set intuition (and training) aside, the research-based answer to our question seems clear: less decoration probably results in more concentration and learning.

And yet, in my experience, teachers find this research-based answer unsatisfying…even alarming.

We have, after all, been trained to decorate. We’ve been evaluated on our decorations. The colleagues we most esteem, and the grad-school professors who seemed the wisest, all champion the importance of decoration.

What should we do when our beliefs (decorate more!) crash into research findings (decorate less!)?

Guiding Principles

Earlier posts this month have focused on getting details just right. This post, instead, looks at core principles.

First Principle: when research and intuition/training conflict, resist the urge to choose one over the other. Ask if we can improve teaching by drawing on both perspectives.

In this case: can we use research to inform our decorating strategy?

For instance, this well-known review crunches an ENORMOUS amount of data. Only a few of its conclusions focus narrowly on “decoration,” but at least one point strikes me as important.

Specifically, researchers look at the question of “ownership”: the degree to which the students feel like the classroom belongs to them. Their conclusion:

Personal displays by the children create a ‘sense of ownership’ and this was significantly correlated with learning progress.

The word “correlated” is important in that sentence. We can’t say that putting up students’ work causes them to learn more.

But: if both research and our teacherly intuition suggest that personal displays boost learning — that’s a great combination right there.

Second principle: keep the decorations largely academic.

Twenty years ago, I used to have lots of interesting photographs and posters and quotations up in my room. They didn’t relate directly to the material I taught — but they seemed somehow inspiring and energizing.

These days, I keep things much simpler. For instance: I have a set of posters highlighting analytical vocabulary (definitions of “metaphor” and “personification” and “symbolism”).

We have some research suggesting that — in addition to a sense of “ownership” — classroom decorations that highlight academic content can boost learning.

Third principle: investigate research-based claims skeptically.

I noted above that a blog post encourages teachers to leave 20% of the wall space blank. This blog cites the Barrett study to make that claim…but I don’t find evidence to support it anywhere.

Several years ago, I reviewed a book on the subject of classroom design and decoration. It had exactly ZERO footnotes.

When I emailed the author to ask for the research basis of his suggestions, he responded: “It’s ALL based on research.” He did not, however, provide any citations.

So, if someone tells you that “the research shows…,” ask them “what research?” Keep asking until you get an answer.

If you don’t get an answer, you know what to do.

Fourth principle: all in all, less is probably more.

Based on the research cited above, I think our profession has largely gotten in the habit of over-decorating.

It’s painful to admit that old habits might not have been wise; but, now that we know better we can do better.

When we think about each bit of classroom decoration, the question we should ask is not “why should we take it down?” but “am I sure I need to put it up?”

No doubt we can find ways to make our classrooms welcoming, comfortable, and scholarly without overwhelming our students’ cognitive abilities.


Barrett, P., Davies, F., Zhang, Y., & Barrett, L. (2015). The impact of classroom design on pupils’ learning: Final results of a holistic, multi-level analysis. Building and Environment, 89, 118-133.

Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., & Kaur, F. (2021). The decorated learning environment: Simply noise or an opportunity for incidental learning? In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 43, No. 43).

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of repeated exposure to the visual environment on young children’s attention. Cognitive Science, 46(2), e13093.

Rodrigues, P. F., & Pandeirada, J. N. (2018). When visual stimulation of the surrounding environment affects children’s cognitive performance. Journal of Experimental Child Psychology, 176, 140-149.

Getting the Details Just Right: “Pre-questions”
Andrew Watson

Teachers, of course, ask students questions. ALL THE TIME with the questions.

We ask questions DURING a lesson in order to “check for understanding.”

We encourage students to ask themselves questions AFTER class, because “retrieval practice” promotes learning.

And, we ask questions BEFORE a unit — for at least two very good reasons.

In the first place, we need to know what our students already know. If we don’t evaluate their prior knowledge, we struggle to build on that prior knowledge in a coherent way.

[Image: young students reading and concentrating.]

In the second place, we have increasingly strong research about the benefits of “prequestions.”

Unlike “checks for understanding” and “retrieval practice,” “prequestions” come before the unit.

And unlike “measuring prior knowledge,” “prequestions” deliberately focus on facts and procedures that students don’t yet know.

So: if I’m teaching a unit on Their Eyes Were Watching God, I might ask my students:

“What is the definition of a ‘bildungsroman’?”

“Describe the friendship between Langston Hughes and Countee Cullen.”

“What does hair often symbolize in literature?”

Truth to tell, it’s quite unlikely that my 10th grade students know the answers to these questions. So: those are prequestions — not checks for understanding, or retrieval practice, or confirmations of prior knowledge.

Here’s the headline: we have reason to believe that “prequestions” — used correctly — help students learn information.

Here’s the story…

Hot Off the Presses

Long-time readers know that Dr. Elizabeth Ligon Bjork has done LOTS of essential work in the field of long-term memory formation and “desirable difficulties.”

And, you know my admiration of Dr. Nick Soderstrom, whose distinction between “short-term performance” and “long-term learning” should inform all teachers’ discussions.

So: when the two work together, they have my attention!

In this case, they JUST published a study on the topic of “prequestions.”

And, this study took place in actual college classrooms — not simply in a psychology lab. For that reason, its conclusions have a better chance of applying to the real-world work that other teachers do in classrooms.

In this research, students answered prequestions at the beginning of a few lectures. The subsequent lectures then provided answers to those questions. (By the way: students got only about 1/3 of those prequestions right — so for the most part they didn’t know the answers.)

On the final exam, students had to answer questions that …

… DIRECTLY related to those prequestions, or

… INDIRECTLY related to those prequestions, or

… were NOT related to the prequestions.

Sure enough, they did better on both directly and indirectly related questions, compared to the unrelated questions.

In brief: prequestions really did help college students learn in the classroom.

So simple! So effective!

So, Those “Details”?

My title promises that we need to “get the details just right.” In this case, as in so many others, I have thoughts. (Important note: at this point, I’m switching from reporting on research to offering my experience-based opinions.)

First Thought

Soderstrom and Bjork specifically write that prequestions helped because students took them seriously.

Here’s my concern: while college students may have the metacognitive perspective to take prequestions seriously, I do worry that younger students might not.

That is: once younger students realize that their answers to these questions don’t really matter, they might not take them as seriously as their college-age selves would.

The structure of prequestions, in fact, might discourage seriousness. Students rarely know the answers to these questions — that’s the point. Why would students attend seriously to questions they can’t possibly answer?

This potential problem leads to two tentative suggestions:

TELL students how and why prequestions might help, and

Use prequestions only RARELY.

After all, the more often that students must answer un-answerable questions, the less likely they are to give them appropriate mental effort.

My hope is: students who encounter prequestions only rarely won’t get cynical about trying to answer them.

Second Thought

If we use prequestions only rarely, are some times better than others?

My instincts are: yes.

Simply put: use prequestions at the beginning of a unit to highlight the most important concepts.

If we can get the benefit of this technique only rarely, then use it at the most important times.

This advice comes from common sense, not from research — but common sense isn’t entirely forbidden on this blog.

Third Thought

Not all prequestions are created equal.

If a prequestion forces a student to think — that’s a good prequestion: even if they get a wrong answer.

However, if a prequestion activates a prior misconception, that question will actively interfere with learning.

For that reason, we should follow this rule:

Ask prequestions where students don’t know what the answer is, and where they don’t wrongly believe that they do know what the answer is.

For instance:

If I ask my student “which falls faster: a 10-pound bowling ball or a 15-pound bowling ball,” they almost certainly …

… don’t know the correct answer (that’s good), but

… wrongly think that they DO know the correct answer (that’s bad).

So: that prequestion would activate a prior misconception — and make learning harder.

On the other hand, those prequestions I asked at the top of this post (definition of “bildungsroman”) almost certainly don’t activate prior misconceptions.

A Secret Unveiled; A Plea for Teamwork

I confess I have one deep frustration with this research pool.

Almost all teachers — and all students — hate tests.

So: if I name something “the testing effect,” teachers and students will HATE it — even if it’s beneficial. (Hint: the “testing effect” is just another way of talking about “retrieval practice.”)

And, if I name something “pretesting,” teachers and students will HATE it — even if it’s beneficial. Pretesting sounds like a test, no?

Sure enough, researchers have named a beneficial teaching technique “pretesting,” thereby ensuring confusion and discouraging its use.

But — of course — “pretesting” simply means “asking questions on a topic before you’ve taught the material.” It’s NOT A TEST. It’s just a set of QUESTIONS.

So, I’ve been writing about “prequestions,” although everyone else in this field calls them “pretests.”

I hope you’ll join me in this virtuous rebranding.

TL;DR

Prequestions (aka “pretesting”) help students learn new material — and not just the information in the questions themselves.

Because the technique works if students take it seriously, I suggest …

… using it rarely,

… using it for important material, and

… asking prequestions that DON’T activate prior misconceptions.


Soderstrom, N. C., & Bjork, E. L. (2023). Pretesting enhances learning in the classroom. Educational Psychology Review, 35(3), 88.

Getting the Details Just Right: Highlighting
Andrew Watson
Andrew Watson

Because the school year starts right now, I’m using this month’s blog posts to give direct classroom guidance.

Female student using pale blue highlighter in a book

Last week, I wrote about a meta-analysis showing that — yup — retrieval practice is awesome.

Teachers should be aware of a few details (e.g.: “brain dumps” are among the least effective kinds of retrieval practice).

But for the most part, asking students to retrieve stuff (facts, processes, etc.) helps them remember that stuff better — and to transfer their understanding to new situations.

This week, let’s talk about another strategy that teachers and students might use: highlighting.

We know that retrieval practice is awesome. Is highlighting equally awesome? More or less so? When and how should students highlight?

Start Here

For several years, the go-to answer to this question has come from this research summary, by John Dunlosky, Dan Willingham, and others.

Their rather bleak conclusion:

we rate highlighting and underlining as having low utility. In most situations that have been examined and with most participants, highlighting does little to boost performance.

It may help when students have the knowledge needed to highlight more effectively, or when texts are difficult, but it may actually hurt performance on higher level tasks that require inference making. (emphasis added)

They reached this conclusion 10 years ago. Do we know anything more today?

Who Times Two

Last year, Ponce, Mayer & Méndez published a meta-analysis looking at the potential benefits of highlighting.

They found two key variables not included in the earlier research summary.

First: the students’ age/grade.

Second: the person doing the highlighting.

That is: they found that …

If the INSTRUCTOR does the highlighting, doing so benefits college students AND K-12 students, but

If the STUDENT does the highlighting, doing so benefits college students but NOT K-12 students.

These findings make rough-n-ready sense.

We teachers know what the important ideas are. For that reason, our highlighting helps students (on average) focus on those important ideas — so they learn and understand more.

Students — especially younger students — probably don’t know what the important ideas are. For that reason, their own highlighting might not accentuate important ideas (on average), and so they don’t benefit from highlighting.

When I ask a student why he highlighted a passage, I sometimes get a version of this answer: “Honestly, I realized I hadn’t highlighted anything in a few pages, so I thought I really needed to find something that sounded important.”

Little wonder, then, that my 10th graders don’t benefit from highlighting.

Classroom Specifics

Of course, this meta-analysis also arrived at other useful conclusions.

This first one came to me as something of a shock: although highlighting does benefit some students, reviewing the highlights doesn’t.

The researchers write:

“on average, reviewing highlighted text previously highlighted by learners did not improve learning significantly more than students who only read or studied the text.”

I infer from this finding that highlighting helps (if at all) because it prompts students to FOCUS ON and THINK ABOUT information the first time they read it.

It does not, however, help students when they return to the highlighted passage later.

That’s useful to know!

Another conclusion is less of a surprise: training helps.

That is: we can help students (yes, even K-12 students) highlight more effectively.

According to the meta-analysis, we can…

… show students examples of good and bad highlighting,

… help them distinguish between main ideas and secondary ones, and

… emphasize that too much highlighting reduces the benefit.

For example:

I myself don’t ask my English students to highlight much. But, I do ask them to note very specific parts of the text.

When we read Macbeth, I ask them to circle/highlight every time they see the words “do,” “done,” or “deed.” (Believe it or not, those words show an important pattern in the play.)

When we read Their Eyes Were Watching God, they highlight various symbols: hair, gates/fences, mules, trees.

I hope that these very modest highlights help students spot patterns they otherwise would have missed — without distracting them too much from other important parts of the story.

In other words: used judiciously and narrowly, highlighting can provide some benefit.

TL;DR

This recent meta-analysis gives us helpful specifics on how best to use highlighting.

Ideally, we teachers do the highlighting ourselves, especially in K-12 classrooms; we teach students how to highlight (not too much!); we don’t encourage them to review their highlights.

In fact, as we saw in last week’s post, retrieval practice should replace “review the highlights” as a way to review and study.


Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Ponce, H. R., Mayer, R. E., & Méndez, E. E. (2022). Effects of learner-generated highlighting and instructor-provided highlighting on learning from text: A meta-analysis. Educational Psychology Review, 34(2), 989-1024.


Getting the Details Just Right: Retrieval Practice
Andrew Watson
Andrew Watson

As we gear up for the start of a new school year, we’re probably hearing two words over and over: retrieval practice.

That is: students have two basic options when they go back over the facts, concepts, and procedures they’ve learned.

Option 1: they could review it; that is, reread a passage, or rewatch a video, or review their notes.

Option 2: they could retrieve it; that is, ask themselves what they remember about a passage, a video, or a page of notes.

Well, the research verdict is clear: lots of research shows that OPTION 2 is the winner. The more that students practice by retrieving, the better they remember and apply their learning in the long term.

This clear verdict, however, raises lots of questions.

How, exactly, should we use retrieval practice in classrooms?

Does it work in all disciplines and all grades?

Is its effectiveness different for boys and girls?

Does retrieval practice help students remember material that they didn’t practice?

Do multiple choice questions count as retrieval practice?

And so forth.

Given that we have, literally, HUNDREDS of studies looking at these questions, we teachers would like someone to sort through all these sub-questions and give us clear answers.

Student concentrating on taking notes and reading books in the library

Happily, a research team recently produced just such a meta-analysis. They looked at 222 studies including more than 48,000 students, and asked nineteen specific questions.

These numbers are enormous.

Studies often get published with a few dozen participants – which is to say, a lot less than 48,000.

Researchers often ask 2 or 3 questions – or even 1. I don’t recall ever seeing a study or meta-analysis considering nineteen questions.

As a result, we’ve got a lot to learn from this meta-analysis, and can have more confidence than usual in its conclusions.

The Big Picture

For obvious reasons, I won’t discuss all nineteen questions in detail. Instead, I’ll touch on the big-picture conclusions, highlight some important questions about practical classroom implementation, and point out a few surprises.

The high-level findings of this meta-analysis couldn’t be more reassuring.

YES: retrieval practice enhances long-term memory.

YES: in fact, it enhances memory of facts and concepts, and improves subsequent problem solving. (WOW.)

YES: it benefits students from kindergarten to college, and helps in all 18 (!!) disciplines that the researchers considered.

NO: the student’s gender doesn’t matter. (I was honestly a little surprised they studied this question, but since they’ve got an answer I’m reporting it here.)

I should note that these statistical results mostly fall in the “medium effect size” range: a Hedges’ g of roughly 0.50. Because I’m commenting on so many findings, I won’t comment on statistical values unless they’re especially high or low.
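For readers curious about what a Hedges’ g actually measures, here is a minimal Python sketch of the standard textbook formula: the difference between two group means, divided by their pooled standard deviation, with a small-sample correction. The scores below are invented for illustration; real meta-analyses use dedicated statistical software.

```python
import statistics

def hedges_g(group1, group2):
    """Standardized mean difference between two groups (Hedges' g).

    Textbook version for illustration: Cohen's d (mean difference over
    pooled SD) multiplied by a small-sample correction factor.
    """
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    d = (m1 - m2) / pooled_sd  # Cohen's d
    # Hedges' correction shrinks d slightly for small samples
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical quiz scores: a "retrieval practice" class vs. a "reread" class
print(round(hedges_g([80, 85, 90, 95, 100], [75, 80, 85, 90, 95]), 2))  # → 0.57
```

A g near 0.50, like most of the results in this meta-analysis, means the average student in the retrieval-practice condition scored about half a standard deviation above the average student in the comparison condition.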

So the easy headline here is: retrieval practice rocks.

Making Retrieval Practice Work in the Classroom

Once teachers know that we should use retrieval practice, we’ve got some practical questions about putting it into practice.

Here again, this meta-analysis offers lots of helpful guidance.

Does it help for students to answer similar questions over multiple days?

Yes. (Honestly, not really surprising – but good to know.)

More specifically: “There is a positive relationship between the number of [retrieval practice] repetitions and the [ultimate learning outcome], indicating that the more occasions on which class content is quizzed, the larger the learning gains.”

Don’t just use retrieval practice; REPEAT retrieval practice.

Is feedback necessary?

Feedback significantly increases the benefit of retrieval practice – but the technique provides benefits even without feedback.

Does the mode matter?

Pen and paper, clicker quizzes, online platforms: all work equally well.

Me: I write “do now” questions on the board and my students write down their answers. If you want to use Quizlet or mini-whiteboards, those strategies will work just as well.

Does retrieval practice help students learn untested material?

This question takes a bit of explaining.

Imagine I design a retrieval exercise about Their Eyes Were Watching God. If I ask my students to recall the name of Janie’s first husband (Logan Killicks), that question will help them remember his name later on.

But: will it help them remember the name of her second husband? Or, her third (sort-of) husband?

The answer is: direct retrieval practice questions help more, but this sort of indirect prompt has a small effect.

In brief, if I want my students to remember the names Jody Starks and Vergible Woods, I should ask them direct questions about those husbands.

Shiver Me Timbers

So far, these answers reassure me, but they don’t surprise me.

However, the meta-analysis did include a few unexpected findings.

Does the retrieval question format matter? That is: is “matching” better than “short answer” or “free recall” or “multiple choice”?

To my surprise, “matching” and “fill-in-the-blank” produce the greatest benefits, and “free recall” the least.

This finding suggests that the popular “brain dump” approach (“write down everything you remember about our class discussion yesterday!”) produces the fewest benefits.

I suspect that “brain dumps” don’t work as well because, contrary to the advice above, they don’t directly target the information we want students to remember.

Which is more effective: a high-stakes or a low-stakes format?

To my astonishment, both worked (roughly) equally well.

So, according to this meta-analysis, you can grade or not grade retrieval practice exercises. (I will come back to this point below.)

Should students collaborate or work independently on retrieval practice answers?

The studies included in the meta-analysis suggest no significant difference between these approaches. However, the researchers note that they don’t have all that many studies on the topic, so they’re not confident about this answer. (For a number of reasons, I would have predicted that individual work helps more.)

Beyond the Research

I want to conclude by offering an opinion that springs not from research but from experience.

For historical reasons, “retrieval practice” had a different name. Believe it or not, it was initially called “the testing effect.” (In fact, the authors of this meta-analysis use this term.)

While I understand why researchers use it, I think we can agree that “the testing effect” is a TERRIBLE name.

No student anywhere wants to volunteer for more testing. No teacher anywhere either.

And – crucially – the benefits have nothing to do with “testing.” We don’t need to grade these exercises. Students don’t need to study. The retrieving itself IS the studying.

For that reason, I think teachers and schools should focus as much as possible on the “retrieval” part, and as little as possible on the “testing.”

No, HONESTLY, students don’t need to be tested/graded for this effect to work.

TL;DR

Retrieval practice — in almost any form — helps almost everybody learn, remember, and use almost anything.

As long as we don’t call it “testing,” schools should employ retrieval strategically and frequently.


Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399.

Using “Worked Examples” in Mathematics Instruction: a New Meta-Analysis
Andrew Watson
Andrew Watson

Should teachers let students figure out mathematical ideas and processes on their own?

Or, should we walk students through those ideas/processes step by step?

3 students working together on a math problem

This debate rages hotly, from eX-Twitter to California teaching standards.

As best I understand them, the arguments go like this:

If students figure out ideas and processes for themselves, they think hard about those mathematical ideas. (“Thinking hard” = more learning.)

And, they feel emotionally invested in their discoveries. (“Emotional investment” = more learning.)

Or,

If students attempt to figure out math ideas for themselves, they first have to contemplate what they already know. Second, they contemplate where they’re going. And third, they have to (basically) guess until they figure out how to get from start to finish.

Holding all those pieces — starting place, finish line, all the potential avenues in between — almost certainly overwhelms working memory. (“Overwhelmed working memory” = less learning.)

Therefore, teachers should walk students directly through the mathematical ideas/process with step-by-step “worked” examples. This process reduces cognitive load and builds schema. (“Reduced cognitive load” + “building schema” = more learning.)

Depending on your philosophical starting place, both arguments might sound plausible. Can we use research to answer the question?

Enter the Meta

One problem with “using research to answer the question”: individual studies have yielded different answers.

While it’s not true that “you can find research that says anything,” it IS true — in this specific case — that some studies point one way and some point another.

When research produces this kind of muddle, we can turn to a mathematical technique called “meta-analysis.” Folks wise in the ways of math take MANY different studies and analyze all their results together.

If scholars do this process well, then we get an idea not what ONE study says, but what LOTS AND LOTS of well-designed studies say (on average).

This process might also help us with some follow-up questions: how much do specific circumstances matter?

For instance: do worked examples help younger students more than older? Do they help with — say — math but not English? And so forth.

Today’s news:

This recent meta-analysis looks at the benefits of “worked examples,” especially in math instruction.

It also asks about specific circumstances:

Do students benefit from generating “self-explanations” in addition to seeing worked examples?

Do they learn more when the worked examples include BOTH correct AND incorrect examples?

So: what did the meta-analysis find?

Yes, No, No

The meta-analysis arrives at conclusions that — I suspect — surprise almost everyone. (If memory serves, I first read about it from a blogger who champions “worked examples,” and was baffled by some of this meta-analysis’s findings.)

In the first place, the meta-analysis found that students benefit from worked examples.

If you do speak stats, you’ll want to know that the g-value was 0.48: basically 1/2 of a standard deviation.

If you don’t speak stats, you’ll want to know that the findings were “moderate”: not a home run, but at least a solid single. (Perhaps another runner advanced to third as well.)

While that statement requires LOTS of caveats (not all studies pointed the same direction), it’s a useful headline.
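One way to make “half a standard deviation” concrete: assuming roughly normal score distributions, we can convert an effect size into the percentile the average treated student would reach within the comparison group. Here’s a small Python sketch; the 0.48 comes from the meta-analysis, but the function name and framing are mine.

```python
import math

def percentile_of_effect(g):
    """Percentile rank of the average treatment-group student, measured
    against the control group's distribution (assumes normality).

    This is the standard normal CDF evaluated at g, computed via erf.
    """
    return 0.5 * (1 + math.erf(g / math.sqrt(2)))

# With g = 0.48, the average "worked examples" student lands around the
# 68th percentile of the comparison group.
print(round(percentile_of_effect(0.48) * 100))  # → 68
```

In baseball terms: the average student doesn’t just match the comparison group’s median runner — they move from the 50th to roughly the 68th percentile. A solid single indeed.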

In the dry language of research, the authors write:

“The worked examples effect yields a medium effect on mathematics outcomes whether used for practice or initial skill acquisition. Correct examples are particularly beneficial for learning overall.”

So, what’s the surprise? Where are those “no’s” that I promised?

Well, in the second place, adding self-explanation to worked examples didn’t help (on average). In fact, doing so reduced learning.

For lots of reasons, you might have expected the opposite. (Certainly I did.)

But, once researchers did all their averaging, they found that “pairing examples with self-explanation prompts may not be a fruitful design modification.”

They hypothesize that — more often than not — students’ self-explanations just weren’t very good, and might have included prior misconceptions.

The Third Place?

In the third place came — to me, at least — the biggest surprise: contrasting correct worked examples with incorrect worked examples didn’t benefit students.

That is: they learned information better when they saw the right method, but didn’t explore wrong ones.

I would have confidently predicted the opposite. (This finding, in fact, is the one that shocked the blogger who introduced me to the study.)

Given these findings and calculations, I think we can come to three useful conclusions: in most cases, math students will learn new ideas…

… when introduced via worked examples,

… without being asked to generate their own explanations first,

… without being shown incorrect examples alongside correct ones.

Always with the Caveats

So far, this blog post has moved from plausible reasons why worked examples help students learn (theory) to a meta-analysis showing that they mostly do help (research).

That journey always benefits from a recognition of the argument’s limitations.

First, most of the 43 studies included in the meta-analysis focused on middle- and high-school math: algebra and geometry.

For that reason, I don’t know that we can automatically extrapolate its findings to other — especially younger — grades; or to other, less abstract, topics.

Second, the findings about self-explanations include an obvious potential solution.

The researchers speculate that self-explanation doesn’t help because students’ prior knowledge is incorrect and misleading. So: students’ self-explanations activate schema that complicate — rather than simplify — their learning.

For example: they write about one (non-math) study where students were prompted to generate explanations about the causes of earthquakes.

Because the students’ prior knowledge was relatively low, they generated low-quality self-explanations. And, they learned less.

This logic suggests an obvious exception to the rule. If you believe your students have relatively high and accurate prior knowledge, then letting them generate self-explanations might in fact benefit students.

In my own work as an English teacher, I think of participles and gerunds.

As a grammar teacher, I devote LOTS of time to a discussion of participles; roughly speaking, a participle is “a verb used as an adjective.”

During these weeks, students will occasionally point out a gerund (roughly speaking, a “verb used as a noun”) and ask if it’s a participle. I say: “No, it’s something else, and we’ll get there later.”

When “later” finally comes, I put up sentences that include participles, and others that include similar gerunds.

I ask them to consider the differences on their own and in small groups; that is, I let them do some “self-explanation.”

Then I explain the concept precisely, including an English-class version of “worked examples.”

Because their prior knowledge is quite high — they already know participles well, and have already been wondering about those “something else” words that look like participles — they tend to have high quality explanations.

In my experience, students take gerunds on board relatively easily.

That is: when prior knowledge is high, self-explanation might (!) benefit worked examples.

TL;DR

A recent meta-analysis suggests that worked examples help students learn algebra and geometry (and perhaps other math topics as well).

It also finds that self-explanations probably don’t help, and that incorrect examples don’t help either.

More broadly, it suggests that meta-analysis can offer helpful and nuanced guidance when we face contradictory research about complex teaching questions.


Barbieri, C. A., Miller-Cotto, D., Clerjuste, S. N., & Chawla, K. (2023). A meta-analysis of the worked examples effect on mathematics performance. Educational Psychology Review, 35(1), 11.

“Teaching” Helps Students Learn: New Research
Andrew Watson
Andrew Watson

A smiling young man wearing a jeans jacket, wool cap, and headphones sits at a desk and talks to a camera in front of him.

Not even two months ago, I admitted my skepticism about a popular teaching technique.

While I accept that “students teaching students” SOUNDS like a great idea, I nonetheless worry about the practical application of this idea:

Understanding a new idea requires lots of mental resources. Explaining a new idea requires even more. All those cognitive demands might overwhelm a student’s WM.

Even if students have the mental resources to accomplish these tasks, how can we be sure that their peers are — in fact — LEARNING the new ideas they’re being taught? For instance: what if the student-teachers misunderstood the material they’re meant to teach?

Peers can intimidate. If teachers have “first day of school” anxiety dreams, imagine how students feel when they must take on the teacher’s job. (And: they don’t have our training and experience.)

So: while I think it’s possible that students benefit from teaching their peers, making this pedagogy successful will take LOTS of preparation, skill, and humility.

Today’s Update: Does the Audience Matter?

Happily, Prof. Dan Willingham recently highlighted a new study exploring this pedagogical question. Specifically, researchers wanted to know if it matters whom the students are teaching.

College students in China watched a two-minute video on synapses, specifically:

how signals are transmitted across neurons in the human nervous system and the role of action potentials, calcium ions, synaptic vesicles, neurotransmitters, sodium ions, and receptors.

After a few extra minutes of preparation, they then “taught” a lesson on this topic.

One third of the participants explained chemical synapses to 7 peers;

one third explained to 1 peer;

and the final third explained to a video camera.

Students in all three groups were instructed that the peers would have to take a test based on these explanations.

So, what effect did the audience have on the student doing the explaining?

Results and Conclusions

The researchers had hypothesized that the presence of peers would ramp up stress and reduce the benefits of this teaching methodology.

For that reason, they suspected that students would do better if they taught their lesson to the video camera instead of to live human beings.

Sure enough, students who taught to the camera did better on basically every measurement.

They offered more thorough explanations (Cohen’s d values here ranged from 0.95 – 1.23: unusually high numbers).

They remembered the information better an hour later.

They transferred their understanding to new questions more effectively.

They felt less stress, and lower cognitive load.

As the authors write: “minimizing the social presence of the audience [by having students teach to a camera] during teaching resulted in maximizing learning outcomes.”

Classroom Implications

At first look, this study seems to suggest that — sure enough! — students DO learn more when they teach.

Alas, I don’t think we can draw that conclusion.

First: this study didn’t measure that question. That is: it didn’t include a control condition where students used some other method to study information about synapses.

This study DOES suggest that teaching to a camera helps more than teaching to peers. But it DOESN’T suggest that teaching (to a camera, or to peers) helps more than something else.

Second: I’m not sure that the verb “teach” makes sense in this context.

The students explained synapses to a camera, and they believed that another student would watch the video and take a test on it.

I suppose we can call that “teaching.” But that’s a very niche-y version of it.

And, in my experience, it’s not AT ALL what teachers think of when they hear about this methodology. More often, students break up into groups to study small parts of a process, and then circulate and “teach” the other groups what they learned.

Third: how would this “teach the camera” plan work in the classroom?

The “explain to a camera” approach might work better than an “explain to peers” version. But I imagine at least two practical problems.

#1: logistically, how does it work? Do I have 25 students explaining to 25 separate cameras simultaneously? Do I have a separate place with cameras where students go to record?

#2: In this study, researchers told participants that other students would watch their videos and be tested on their understanding.

Presumably this statement made the teacher-students quite conscientious about their explanations. For that reason (probably), they thought harder and therefore remembered more.

That is: the camera method helped students learn largely because participants believed that others relied on their teaching.

If, however, I use this strategy in my class, that causal chain (conscientiousness –> thinking –> remembering) could easily break down.

Either I DO use those videos to help other students learn — in which case I have to review and critique them scrupulously;

Or I DON’T use those videos — in which case my students know they don’t really have to be so conscientious. (Reduced conscientiousness –> reduced thinking –> reduced memory.)

These practical questions might sound mundane, even grouchy. But I’m not trying to be grouchy — I’m trying to help my students learn material!

TL;DR

A recent study suggests that college students benefit more from “teaching” if they teach to a camera than if they teach peers.

Although I’m inclined to believe these results — they certainly make a lot of sense — I still worry that a “students-teaching-students” pedagogy sounds better in theory than it might work in practice.


Wang, F., Cheng, M., & Mayer, R. E. (2023). Improving learning-by-teaching without audience interaction as a generative learning activity by minimizing the social presence of the audience. Journal of Educational Psychology.