
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Do Gestures Improve Learning? Baby Steps Toward a Conclusion
Andrew Watson

The idea that “gestures improve learning” has become increasingly popular in recent years. I first heard about this idea from Dr. Susan Goldin-Meadow at a Learning and the Brain conference back in 2015, and the claim has gotten more emphatic with every passing year.


This emphasis on gesture — within the broader field of “embodied cognition” — is approaching certainty in the field.

As is so often the case, I myself am less confident than others about many of these claims.

In this post, I want to

  • Explain my concerns about the “gestures promote learning” claims
  • Explore a recent study that pushes back against my concerns, and
  • Highlight the features of that study that support my concerns.

In other words: we’re taking baby steps here. The study I’ll explain does offer modest support for this claim — and offers some useful specifics.

But it certainly does not make the “gestures promote learning” claim a slam dunk. And it flatly contradicts some of the most vehement statements you’ll hear when people champion embodied cognition and gestures.

Initial Concerns

I am certainly open to the idea that gestures might promote learning. No, honestly: it could be true!

At the same time, I want to

  • understand the SPECIFICS of that claim, and
  • have confidence in the research that supports them.

These requests feel reasonable to me.

So, specifically:

Who is making the gesture? The teacher? The learner? Both? A video?

Does some kind of learning get more benefit from gestures than others?

For example, I know how I could use a gesture to reinforce the word “hatchet.” (A hatchet gesture is easy to envisage.)

But: how would I make a gesture to help students learn “protagonist” and “antagonist”? Or, “pleistocene”? Or “tradition”?

What if students learn the same gesture in two different classes? Say: the “hatchet” gesture means “hatchet” in my Spanish class but “guillotine” in my history class.

Do gestures help students of all ages? Are there cultural differences?

In other words: what are the rules to govern the creation and use of gestures? And: what research supports each specific component?

Last year, I wrote about a study that starts answering some of these questions. But: we’ve got a long way to go.

Frankly, the questions listed above fit quite nicely under the heading of “boundary conditions”: we need to know the “boundaries” within which teaching suggestions work, and outside of which they don’t. I don’t quite understand why people look vexed when I ask such questions.

A Recent Study Strengthens the Case

Because I have outlier views on this topic, I’m always on the lookout for research that will change my mind.

Recently, I came across a study from 2021 that offers lots of helpful information.

This study does, in fact, give me some increased confidence in the ideas that gestures can help. And, it offers helpful specifics about ways that gestures MIGHT NOT help. That is: it clarifies those boundary conditions.

This research has LOTS of moving parts, so I’ll try to summarize it accurately and briskly.

Twenty-two German adults spent 4 days studying foreign language vocabulary.

Specifically, they learned 90 word pairs: a word in German, and its “translation” in a made-up language called “Vimmi.” (Researchers used made-up words so that the participants’ prior knowledge wouldn’t muddle their results.)

45 of those word pairs were accompanied by a gesture. (The participants saw someone making the gesture; and they also made the gesture themselves.)

The other 45 pairs were accompanied by a drawing of the word; in other words, not a gesture.

This group of non-gesture word pairs gives the researchers a useful comparison point. They can ask not just “did gestures help participants learn?” but “did the gestures help compared to the pictures?”

Now things get REALLY interesting.

Researchers then used a fancy gizmo to DISRUPT NEURAL PROCESSING in a part of the brain that coordinates gestures.

Specifically, several days after participants learned the word-pairs, researchers asked participants to recall them out loud. While participants did so, researchers used the (perfectly safe) neuro-scrambler for half of the words; they then faked the neuro-scrambling for the other half of the words. The participants, in other words, didn’t know when the scrambler was on or off.

Let’s add two more cool variables — variables that help answer my pesky questions:

Researchers tested participants five days and five months later. So: they could see how long these effects lasted.

And, they also tested two kinds of words: concrete nouns (“window”) and abstract nouns (“benefaction”).

Based on all these moving pieces, you can see that this study might finally convince me that gestures help — or give us some ideas about boundary conditions.

Yup: (Some) Gestures Help (Some) Learning (Somewhat)

When researchers tested the participants five days later, they found that seeing and producing gestures helped participants learn concrete nouns.

Specifically, the neuro-scrambler (presumably) reduced the neural encoding of the gestures for half of the words. And participants took longer to remember those words than the un-scrambled ones.

The likeliest interpretation: part of the memory of the word pair had been encoded in the motor cortex. When we interfere with that part of the memory, we interfere with the memory overall.

So, even though I’m naturally cautious about such things, I’m comfortable saying:

“This research supports the idea that seeing and making gestures about concrete objects helps people learn those words in the short term.”

Baby steps.

Yup: Important Limits Exist

At the same time, you’ve probably noticed the boundary conditions in that statement.

  • The gestures did help participants remember concrete nouns (“window,” “hatchet”); at the same time, participants did not remember abstract nouns (“benefaction,” “protagonist/antagonist”) as smoothly over time. (I should be clear: this statement summarizes a very complex set of findings; others might look at the findings and disagree with that statement.)
  • The gestures did help after 5 days, but not after five months. So: short-term memory got a boost. But long-term memory — not so much.

I should also say that this study design (like ALL study designs) includes some features that require our caution.

  • 22 is not a huge number of participants. I wouldn’t often make big decisions based on such a small group. (Of course, as more and more small studies agree, they might add up to compelling recommendations.)

Also, this study pushes back against one of the more common statements within the broader field of “embodied cognition,” and also in the narrower field of “gestures promote learning.”

“Learning doesn’t just happen in the brain; it happens in the body.”

Now, I spend LOTS of time on this blog warning readers against over-reliance on neuroscience. We spend far too much time saying “ooh, look, BRAIN.” (I think we should focus much more on psychology: the study of the mind.)

This study, however, requires that we take the brain itself seriously. Those brain scramblers, after all, scrambled signals in the motor cortex: a part of the brain. They did not do anything to the arm or the rest of the body.

We can, I think, reasonably say that the learning happened WITH the body. But it didn’t happen IN the body; more precisely, it happened IN the part of the body called “the brain.” The arm didn’t change and learn. The brain changed. The mind learned.

TL;DR

This small study adds to a growing body of research that suggests seeing and making the right kind of gesture can help people learn some things, at least for a few days.

It does not reach grander conclusions than those.

And, it strongly contradicts the idea that “learning happens in the body.” Bodily movements can help the brain change; but the brain itself changes, and the mind itself learns.


 

A Final Note

This study includes substantially more complicated procedures and calculations than I typically try to summarize on the blog. As a result, reasonable readers might believe I have over-simplified the study and its conclusions; you might even think I have MIS-summarized both.

I have, quite genuinely, tried to do justice to the study and its conclusions. And, I include this link so you can check for yourself.


Mathias, B., Waibel, A., Hartwigsen, G., Sureth, L., Macedonia, M., Mayer, K. M., & von Kriegstein, K. (2021). Motor cortex causally contributes to vocabulary translation following sensorimotor-enriched training. Journal of Neuroscience, 41(41), 8618–8631.

Questions, Questions (First of a Series)
Andrew Watson

Because teachers spend so much time asking questions, we’d be grateful if we had research to guide us as we do so.

On many pedagogy topics, we face a tricky problem: not enough research to paint a clear picture.

On this topic — questions — we face a completely different problem: SO MUCH RESEARCH.

In fact, we’ve got so many studies on so many different sub-topics here, we can easily get lost in the muddle of terminology, recommendations, and limitations.

My goal in this series of posts is to sort through the several kinds of research focused on questions, trying to nudge them into useful categories. I’ll also try to point out some easy mistakes to make along the way.

To organize this post, I’ll focus on three points:

When to ask this kind of question?

Who benefits most immediately from doing so?

What do we do with the answers?

Before the Thinking, part I

Daniel Willingham tells us that “memory is the residue of thought.” If we prompt our students to do the right kind of thinking, they’ll get the right kind of residue: useful long-term knowledge.

The first group of questions comes well before we want our students to do that “right kind of thinking.”

To teach an effective unit, teachers need to know our students’ prior knowledge on the subject.

To learn what the word “tragedy” means, students need to know who a “protagonist” is. Heck: they need to know what a “play” is. Do they? I should ask.

To learn about covalent bonds, students need to know what elements are, and what electrons are. Do they? I should ask.

And so forth.

Practically ANY unit draws on this kind of prior knowledge. If my students don’t already know these essential facts, ideas, or skills, they’ll experience working memory overload when the unit gets underway. (If you’re interested in the importance of prior knowledge, check out Graham Nuthall’s book The Hidden Lives of Learners.)


So: the very first questions I ought to ask my students explore the depth and breadth of their knowledge on a topic they’ll be learning about next week, next month, next semester.

Notice who benefits first from these questions: I — the teacher — do. Because…

What do I do with the answers? Once I evaluate my students’ prior knowledge, I can design a more effective and targeted lesson — one that neither bores nor overwhelms my students.

Recap of “prior knowledge” questions:

When? Before the unit – sometimes well before the unit.

Who benefits? Initially, the teacher – who now has information that will help plan the full lesson.

What to do with the answers? Design upcoming instruction effectively.

Before the Thinking, part II

Of course, not all questions that precede the lesson are for my benefit. We have a growing body of research into “prequestions.”

Notice this distinction:

I ask “prior knowledge questions” to find out what my students do and don’t know.

I ask “prequestions” about important information I’m already sure my students DON’T KNOW.

In fact, I can’t ask effective “prequestions” until I check their prior knowledge; otherwise, my prequestion might ask them about important information they DO already know.

Now, why would I do such a thing? Why ask students to define the Five Pillars of Islam if I’m sure they just don’t know?

Well, this growing body of research suggests that such questions prime students for upcoming learning.

They are likelier to learn those Five Pillars if I ask prequestions about them – even though they almost certainly can’t and won’t answer those questions correctly.

Let’s be honest: this strategy seems peculiar. Asking students questions when I’m sure they don’t know the answer feels a little mean. But, if this research pool is correct, doing so gets them mentally ready to learn new stuff.

Recap of “prequestions”:

When? Before the unit, probably right at the beginning.

Who benefits? The student – who is now better primed to learn this new information.

What to do with the answers? Don’t sweat the incorrect answers. We assume they’ll get incorrect answers! Instead, carry on with the lesson, knowing that students are mentally prepared for this new learning.

Digging Deep

Even this brief review suggests important distinctions.

I should ask both kinds of questions “before the thinking”; that is, before the students have started learning this topic.

But I will plan these questions quite differently.

If I’m asking “prior knowledge” questions, I need to consider quite a broad range of possibilities. Because I don’t yet know what my students don’t know, I should probably ask …

… both conceptual questions and factual questions,

… questions that seem easy to me, and questions that seem hard to me,

… questions from early in the upcoming unit, and questions later in the unit.

And so forth.

However, if I’m asking “prequestions,” I should focus quite narrowly on…

… information that is most important in the upcoming unit, and

… information that I’m confident most of my students don’t already know.

Also, I will respond to the answers quite differently.

If I’m asking “prior knowledge” questions, my students’ answers are simply data. If I ask the right questions, their answers tell me what they do and don’t know — and in this way guide me as I design the upcoming lesson.

If I’m asking “prequestions,” my response will be rather different. Because I’m almost sure my students don’t know the answers, I expect that most answers will be wrong.

That wrongness is a feature, not a bug. (If most students get the answer right, then I didn’t accomplish the goal of priming future learning.)

I should probably be on the lookout for “prior misconceptions.” That is: if several students answer a “prequestion” with a commonly held false belief, that information will be important and helpful to me.

Devils in Details

As I write about these topics, I think two details merit attention.

First: I’ve written above about “prequestions” — asking students information I’m sure they don’t know.

I should confess that most people have a different name for this technique; they call it “pretesting.”

Now, I think “pretesting” is a TERRIBLE name. No teacher and no student wants anything to do with tests. And, the technique isn’t a test! No grading required!!!

In this post and others, I’m trying to rebrand “pretesting” as “prequestioning.” However, you’ll see the term “pretesting” more often.

Second: you may know Graham Nuthall’s famous finding: students already know about 50% of what we’re going to teach them, but each one knows a different 50%.

If that’s true, this finding makes both “prior knowledge” questions and “pretesting” questions very tricky.

I myself worry less about this finding than others do.

While Nuthall did find this statistic to be true, his finding hasn’t been replicated (as far as I know). We should remember Dan Willingham’s motto: “one study is just one study, folks.”

By the way, Nuthall’s methodology is so extraordinarily complex that I’d be surprised if it could be replicated frequently.

I also suspect that this 50% statistic will vary widely from situation to situation.

If you teach Spanish 4 — well, it’s likely that most of your students have successfully completed Spanish 3. They’ll have LOTS of prior knowledge in common.

But if you teach 9th grade history in a school that draws from many different feeder schools, you might well work with students whose prior knowledge varies quite widely.

TL;DR

BEFORE we start teaching a unit, we should ask students two different kinds of questions.

Prior knowledge questions help us learn what our students already know.

“Prequestions” help prime students to learn new information and concepts.

Although all these questions are, in fact, questions, their purpose, form, and result differ in important ways. We should plan accordingly.

Experts, Expertise, and Teachers (and Students!)
Andrew Watson

Researchers often focus on experts and expertise. And bloggers like me often follow their leads.

You’ll read about the novice-expert continuum, the differences between novices and experts, and the expertise-reversal effect.


But let’s pause for a minute and ask: what is an expert? What is this “expertise” that novices gradually acquire on their way to becoming an expert?

A recent book by Roger Kneebone — Expert: Understanding the Path to Mastery — takes on these fascinating questions.

Biography, and Beyond

Kneebone himself followed an unusual trajectory to this set of questions. He started his professional life training to be a surgeon; his stories of emergency surgery in South Africa will set the squeamish on edge.

By the way, while not slicing his way through gory neck wounds, Kneebone also spent time learning how to fly at a local airport. Here again, his mishaps as a pilot provide important examples for his investigation of expertise.

After some number of years as a surgeon, he decided to retool himself as a general practitioner in rural England — the kind of doctor we would now call a “primary care provider.”

That is: instead of snipping gall bladders out of patients he barely knows, he discusses hang-nails with patients he’s known for years.

Oh, by the way, he also takes up playing the harpsichord — he even builds one of his own. You guessed it: this pursuit also informs his book.

He finally ends up with yet another career: he helped found a program for training surgeons. He is — rather curiously — an expert in expertise.

Sample Size

To explore the nature of expertise, Kneebone reaches outside his own experience to talk to a remarkable variety of experts. As in:

An expert taxidermist

An expert tailor

An expert harpsichord maker

An expert magician

An expert fighter pilot

An expert ceramicist

And so forth.

In these conversations, Kneebone finds remarkably consistent patterns. That is: the path to becoming an expert surgeon is surprisingly like the path to being an expert tailor or an expert magician — even though the actual work of these professions differs substantially.

In his book, he maps out this path, using examples and stories from all those professions.

I won’t trace the entire path from “apprentice” to “journeyman*” to “master*” — you should read Kneebone’s book if you want the details — but I do want to share a few of his insights.

First, Kneebone sees the phase transition from apprentice to journeyman as a change in focus. An apprentice teacher (for example) focuses on what s/he is doing: what does my lesson plan look like? Am I covering learning objectives?

A journeyman teacher focuses on the effect of those actions on students. Are they learning? Did they understand that example? How do their mistakes this week compare to their mistakes last week?

As a developing teacher, I can’t do the second part (focusing on students) until I’ve made the first part (focusing on myself) routine. But that switch really makes all that initial work worthwhile.

Second: the phase change from journeyman to mastery — if I’m understanding Kneebone correctly — involves another such change in focus. Journeyman teachers focus on their students. Master teachers focus on helping other teachers help their students. They switch to a meta-level, and think about the profession itself: how to pass on — and improve! — professional skills, norms, and knowledge.

Once again, this journeyman-to-mastery switch can’t happen until after MANY years of journeyman-level effort. And, in fact, lots of people never make this second leap: they stay focused on the proximate, not the ultimate, effects of their work.

If you’ve been teaching for a while, perhaps you can see these steps in your work, and your colleagues’. Certainly I can see that progression in the schools where I have worked.

Teaching Implications

As teachers, we’re understandably tempted to ask: “How should I think about helping my students along this path? How can I help my students arrive at expertise?”

Kneebone doesn’t address this question directly, but I suspect I know part of the answer.

In Kneebone’s model, the path from apprentice to journeyman to mastery takes…literally…years. Probably decades.

Kneebone doesn’t object to repetitive drudgery; in fact, he considers it an essential step in the process of developing mastery.

For instance: the master tailor he interviews spent literally months sewing a specialized part of a pocket…over and over (and over) again. While he was doing so, he often felt irritated and confused — all too aware of the seeming pointlessness of the exercise. Only once he’d travelled further along the path did he recognize all the subtleties he had absorbed along the way.

So, I suspect Kneebone would tell me: “Andrew, get real. Your high-school sophomores will not become experts at writing — or Shakespeare, or grammar — in a year. Becoming an expert in Shakespeare — in anything — takes DECADES.”

Instead, I found Kneebone’s book to be most helpful as we think about teacher training: how we can reasonably expect apprentices in our profession to explore and sift their experiences on their way to later stages of expertise.

A Final Distinction

While I think Kneebone’s book gives better guidance for training teachers (over several years) than teaching students (over several months), I do think the terms “novice” and “expert” are useful in understanding our day-to-day classroom work.

Specifically, we should be aware that our students (almost always) know much less than we do about the topic we’re teaching; they are, relatively speaking, “novices.” We should not act as if they’re experts; doing so will almost certainly overwhelm their working memory.

And, we should not abandon “expertise” as a goal — as long as we focus on “relative expertise.”

That is: my sophomores won’t be Shakespeare experts at the end of the year. But — if I’m doing my job right — they will have more expertise than they did before.

They’re better at parsing Shakespearean syntax.

They know more about King James I’s obsession with witches, and with deception. (Hello, Gunpowder Plot.)

They’re on the lookout for the words “do,” “done,” and “deed” as they make their way through the poetry.

They’re not experts, but they’re relative experts: that is, experts relative to themselves at the beginning of the year.

As long as we keep the goal of “relative” expertise in mind, then the novice/expert distinction provides lots of useful guidance for our work with students.

As long as we recognize that Kneebone’s insights apply more to teacher training than to student instruction, I think his book provides important and helpful insights into the nuances, trials, and joys of our work.


* These terms, of course, raise questions. Kneebone considers them, and sticks with this terminology.


Kneebone, R. (2020). Expert: Understanding the path to mastery. Penguin UK.

Do Fidget Spinners Help Children with ADHD?
Andrew Watson

Good news: cognitive science research can be SPECTACULARLY useful in guiding and informing teachers.

Less good news: that guidance isn’t always clear or prompt.

After all:

Research findings often contradict one another.

They can also contradict our deeply-held beliefs, and/or our school’s policies.

Not everyone who quotes research actually understands research, or quotes it accurately.

And so forth.

Another problem with research gets less focus: it takes so much time.

In this case: fidget spinners were a thing back in 2017!

Perhaps you — like me — frequently heard the idea that “we can give fidget spinners to students to help them get the wiggles out!”

More specifically: “fidget spinners will help those students diagnosed with ADHD let off jittery steam without wandering around or distracting others.”

At that time, those claims sounded plausible or implausible — depending on your perspective and experience. But we didn’t have any research to support or contradict them.

As one group of pediatricians wrote in 2017:

Fidget spinners and other self-regulatory occupational therapy toys have yet to be subjected to rigorous scientific research. Thus, their alleged benefits remain scientifically unfounded. Paediatricians should […] inform parents that peer-reviewed studies do not support the beneficial claims.

Well: SEVEN years later, now we do have research!

Worth the Wait

Researchers in Florida worked with a group of 60 children enrolled in a summer program for young children diagnosed with ADHD.


This program offered both academic and behavioral training over eight weeks, to better prepare these children for the upcoming school year.

Both in the first two weeks and the final two weeks, the research team gave several children fidget spinners during the 30-minute ELA class. They looked for data on these questions:

Did the fidget spinners change the amount of wandering around?

Did they change the students’ attention to the class work?

And, did they affect the other children who did not get fidget spinners?

Sure enough, the data they gleaned provide helpful classroom guidance.

Good News, Bad News

If you’re in the pro-fidget spinner camp, you’ll be glad to know that the fidget spinners did NOT lead to an increase in problems among the other students who didn’t get one.

They didn’t wander any more than usual; they didn’t pay less attention than usual. (“Area violations” and “attention violations” were the two categories tracked by researchers.)

That’s the good news.

If you’re pro-fidget spinner, the rest of the news won’t encourage you.

First: the spinners reduced wandering a bit at the beginning of the program. But they did NOT reduce wandering at the end.

Second: the spinners raised inattention levels both at the beginning and at the end of the program. And the increases in inattention were greater than the decreases in wandering.

In brief, fidget-spinner champions will not find much support here.

Questions and Caveats

No one study can answer all questions, so we should keep its limitations in mind.

What about older students? This research doesn’t explore that question.

Won’t students get better at using fidget spinners AND paying attention over time? We don’t know. (But: eight weeks is an unusually long research study.)

Don’t they benefit SOME students? Maybe. A 60-person study doesn’t really allow us to look for granular sub-populations.

Might a later study show the opposite results? It certainly could.

In other words, passionate fidget-spinner advocates can ask worthwhile questions. And, we shouldn’t be too emphatic based on one study.

But we can say this:

According to this one research study, fidget spinners did not help young students diagnosed with ADHD pay attention; they did more harm than good.


Graziano, P. A., Garcia, A. M., & Landis, T. D. (2020). To fidget or not to fidget, that is the question: A systematic classroom evaluation of fidget spinners among young children with ADHD. Journal of Attention Disorders, 24(1), 163–171.

Attention Contagion in the “Real World”: Plato was Right!
Andrew Watson

I’m always grateful to have research guidance for my classroom work, but I have to admit: it can take A LONG TIME.

For instance, I’ve got an upcoming blog post about research into fidget spinners — and those little guys were a thing back in 2017. It took (can it be?) SEVEN years to look into their benefits/harms for students with ADHD.


However, I have to say that attention contagion is moving quickly.

I first heard about attention contagion — the idea that students can “catch” attentiveness, and inattentiveness, from each other — this summer. Back in that blog post, I noted that we’ve got only two recent studies on the topic. We need more research — and research in conditions that look like real classrooms — before we make too much of this concept.

Well, this research team (led by Noah Forrin) must have heard my request — they ALREADY have another study out. And, this one looks at students in a classroom-like setting. SO COOL.

The Setup

Team Forrin set up a fairly typical lecture hall scenario: rows of desks facing a large screen, where a videotaped lecture played.

60 students attended this “lecture,” and took notes as they did so. Afterwards they took a quiz on the lecture content and filled out a survey about the experience.

Here’s the key: fifteen of those 60 students were — basically — college-age actors. (The technical word is “confederates.”)

For half of the lectures, these actors were trained to be attentive: they took notes, sat upright, focused on the lecture video, and looked interested.

For the other half, they were trained to be inattentive: they took no notes, slouched, looked around, and looked bored.

Notice — this detail will be important — the inattentive students were not distracting. They didn’t fidget or stretch or yawn or tap their pens or play games on laptops. (In fact, laptops and cell phones were not allowed.)

Importantly, the seating was carefully arranged. The non-actors were seated either…

… between actors, or

…behind or in front of actors, or

… far away from actors.

So, here are the questions:

Did the students catch attentiveness from the actors? Or, did they catch INattentiveness from actors?

And: did the seating location matter? Specifically, did the “in between” students or the “in front/behind” students react differently than the “far away” students?

The Payoff

Forrin and his colleagues had A LOT of data to sort, and I won’t go through it all. The results, in my view, aren’t terribly surprising — but they are very interesting. And, helpful.

First: yes, students could catch inattention from the actors.

Researchers know this because, when seated near inattentive actors:

On their surveys, the students rated themselves as more inattentive.

They took fewer notes.

They scored lower on the post-lecture quiz.

Second: students catch inattention when sitting next to or between inattentive actors.

I am — honestly — not surprised that students seated far away didn’t catch inattentiveness. (If you check out the seating diagram on page 4 of the study, you’ll see why.)

I am — and the researchers were — surprised that students DIDN’T catch inattentiveness when sitting behind or in front of inattentive actors.

By the way, you remember the important detail from above: the actors were trained to be inattentive but not distracting. Sure enough, those end-of-lecture surveys showed that the students were not distracted by classmates.

This point merits focus because we can have some confidence that the problem was actual inattentiveness — not distraction. The researchers, in other words, effectively isolated a variable — even though it’s a difficult one to isolate.

Practical Implications

Teachers since Plato have known to seat distractible students between focused students. Well, this research suggests that we’ve been right all along.

More surprisingly, seating students in front of or behind attentive peers doesn’t (in this study) have the same effect.

And, completely unsurprisingly, students sitting far away from attentive peers do not “catch” their focus.

Forrin’s team concludes by suggesting that further research be done in actual classrooms. Here’s hoping they publish that study soon!


Forrin, N. D., Kudsi, N., Cyr, E. N., Sana, F., Davidesco, I., & Kim, J. A. (2024). Investigating attention contagion between students in a lecture hall. Scholarship of Teaching and Learning in Psychology.


Thanks to professor Mike Hobbiss for drawing my attention to this study.

 

Retrieval Practice “In the Wild”: Lots of Good News
Andrew Watson

Last year at this time, I summarized an ENORMOUS meta-analysis about retrieval practice.

The reassuring headlines:

Retrieval practice helps students of all ages in all disciplines.

Feedback after RP helps, but isn’t necessary to get the benefits.

The mode — online, clickers, pen and paper — doesn’t matter.

The meta also includes some useful limitations:

“Brain Dumps” help less than other kinds of RP.

Sadly, retrieval practice might make it harder for students to recall un-retrieved material.

So, researchers have kicked these tires A LOT. We know retrieval practice works, and we know how to avoid its (relatively infrequent) pitfalls.

What more could research tell us?

From “Lab” to “Classroom”

Psychology researchers typically start studying cognitive functions — like “memory” or “attention” — by doing experiments in their psychology labs, usually on college campuses.

These labs, of course, control circumstances very carefully to “isolate the variable.”


But let’s be honest: classrooms aren’t labs. Teachers don’t isolate variables; teachers combine variables.

So we’d love to know: what happens to retrieval practice when we move it outside of the psych lab into the classroom?

One recent survey study, by Bates and Shea, tries to answer this question.

In their research, Bates and Shea sent out a survey to teachers in English K-12 schools to find out what is happening “in the wild.”

Do teachers use retrieval practice?

If yes, how often?

When?

What kind of retrieval practice exercise do they prefer?

What do they do with the results of RP?

And so forth.

Once again, this study brings us LOTS of good news.

First: teachers — or, at least the teachers who responded to this survey — use retrieval practice a lot.

Second: they use a variety of retrieval practice strategies — short quizzes, do nows, even (less frequently) “brain dumps.”

Third: teachers use retrieval practice at different times during class: some at the beginning, some at the end, others throughout the lesson.

In other words: retrieval practice hasn’t simply turned into a precise set of rigid instructions: “you must do five minutes of retrieval practice by asking multiple choice questions at the beginning of every other class.” Instead, it’s a technique that teachers use as they see fit in their work.

Better and Better

For me, some of the best news from this survey comes from a surprising finding — well, “surprising” to me at least.

Where did teachers learn about retrieval practice?

Fully 84% learned about RP from their colleagues; 63% from internal staff training; 57% from books. Relatively few — just 20% — heard about it from training outside of school.

You might think that — as someone who blogs for a conference organization — I would want teachers to hear about RP from us.

And, of course, I’m delighted when teachers attend our conferences and hear about all the research on retrieval practice.

But the Bates and Shea data suggest that retrieval practice has in fact escaped the bounds of conference breakout rooms and really is living out there “in the wild.” Teachers hear about it not only from scholars and PowerPoint slides, but from one another.

This development strikes me as enormously good news. After all: I didn’t hear much of anything about RP when I got my graduate degree in 2012. A mere 12 years later, it’s now common knowledge even outside academia.

An Intriguing Question

One finding in the Bates and Shea study raised an interesting set of questions for me: what should teachers do after retrieval practice? In particular, what should teachers do when students get RP questions wrong?

We do have research to guide us here.

We know that students benefit when we correct their incorrect RP answers.

We also know that they learn more from RP than from simple review — even if they don’t get corrective feedback.

So, what do teachers “in the wild” actually do?

Some — 46% — reteach the lesson.

Some — 15% — give corrective feedback.

Some — 10% — use this information to shape homework assignments.

Of course, some teachers choose more than one of these strategies — or others as well (e.g.: use RP answers to guide small group formation).

At present, I don’t know that we have good research-based guidance on which strategy to use when. To me, these numbers suggest that teachers are responding flexibly to the specific circumstances that they face: minute by minute, class by class.

If you read this blog regularly, you know my mantra: “Don’t just do this thing; instead, think this way.”

If I’m reading this survey study correctly, teachers have

a) heard about retrieval practice from colleagues and school leaders,

b) adapted it to their classroom circumstances in a variety of ways, and

c) responded to RP struggles in an equally flexible variety of ways.

No doubt we can fine tune some of these responses along the way, but these headlines strike me as immensely encouraging.


Bates, G., & Shea, J. (2024). Retrieval Practice “in the Wild”: Teachers’ Reported Use of Retrieval Practice in the Classroom. Mind, Brain, and Education.

Graphic Disorganizers; or, When Should Teachers Decorate Handouts?
Andrew Watson

Recent research has raised questions about classroom decoration. In this post, our blogger wonders about decorating HANDOUTS:


Teachers regularly face competing goals. For instance:

On the one hand — obviously — we want our students to learn.

And, on the other hand, we equally obviously want them to feel safe, comfortable, at home.

To accomplish that second goal, we might decorate our classrooms. The more adorable cat photos, inspirational posters, and familiar artwork, the homier the classroom will feel.


But here’s the problem: what if all that decoration (in pursuit of goal #2) interferes with goal #1?

What if decorations inhibit learning?

The Story so Far

I’ve written about this topic a fair amount, and the story so far gives us reason to concentrate on that question.

So: do decorations get in the way of learning? According to this study: yes.

Is this a problem for all age groups? Research done by this team suggests: yes.

When I showed teachers all this research, they often raised a perfectly plausible doubt:

Don’t students get used to the decorations? According to this recent study: nope.

Given these studies (and many others), I think we’ve got a compelling narrative encouraging our profession to rethink decoration. While I don’t think that classrooms should be sterile fields … I do worry we’ve gone substantially too far down the “let’s decorate!” road.

“I’ve Still Got Questions”

Even with this research pool, I think teachers can reasonably ask for more information. Specifically: “what counts as a decoration?”

I mean: is an anchor chart decoration?

How about a graphic organizer?

A striking picture added to a handout? (If they’re answering questions about weather, why would it be bad to have a picture of a thunderstorm on the handout?)

An anchor chart might be “decorative.” But, if students use it to get their math work done, doesn’t it count as something other than a “decoration”?

In other words: if I take down an anchor chart, won’t my students learn less?

Because practically everything in the world can be made prettier, we’ve got an almost infinite number of things that might be decorated. (I’ve done some work at a primary school that has arrows embedded in the floor: arrows pointing to, say, Beijing or Cairo or Los Angeles. Does that count as “decoration”?)

For this reason, research to explore this question gets super detailed. But if we find enough detailed examples that more-or-less resemble our own classroom specifics, we can start to credit a “research-informed” answer.

Graphic Disorganizer?

A friend recently pointed me to a study about reading bar graphs.

This research team wanted to know if “decorated” bar graphs make learning harder for students in kindergarten, and in 1st and 2nd grade.

So, if a bar graph shows the number of gloves in the lost and found box each week, should the bar representing that number…

Be decorated with little glove icons?

Or, should it be filled in with stripes?

How about dots?

This study in fact incorporates four separate experiments; the researchers keep repeating their basic paradigm and modifying a variable or two. For this reason, they can measure quite precisely the problems and the factors that cause them.

And — as you remember — they’re working with students in three different grades. So: they’ve got LOTS of data to report…

The Headlines, Please…

Rather than over-decorate this blog post with a granular description, I’ll hit a few telling highlights.

First: iconic decorations inhibit learning.

That is: little gloves on the bar graph made it harder for students to learn to read those graphs correctly.

Honestly, this result doesn’t surprise me. Gloves are concrete and familiar, whereas bar graphs represent more abstract concepts. No wonder the little tykes get confused.

Second: stripes and dots also inhibit learning.

Once again, the students tend to count the objects contained within the bar — even little dots! — instead of observing the height of the bar.

This finding did surprise me a bit more. I wasn’t surprised that young learners focus on concrete objects (gloves, trees), but am intrigued to discover they also want to count abstract objects (lines, dots) within the bar.

Third: age matters.

That is: 1st graders did better than kindergarteners. And 2nd graders did better than 1st graders.

On the one hand, this result makes good sense. As we get older, we get better at understanding more abstract concepts, and at controlling attention.

On the other hand, this finding points to an unfortunate irony. Our profession tends to emphasize decoration in classrooms for younger students.

In other words: we decorate most where decoration might do the most harm! (As a high-school teacher, I never got any instructions about decoration, and was never evaluated on it.)

In Brief

We teachers certainly might be tempted to make our environments as welcoming — even festive! — as possible.

And yet, we’ve got a larger (and larger) pool of research pointing out the distraction in all that decoration.

This concern goes beyond — say — adorable dolphin photos on the wall, or uplifting quotations on waterfall posters.

In this one study, something as seemingly harmless as dots in a bar graph can interfere with our students’ learning.

When it comes to decorating — even worksheets and handouts — we should keep the focus on the learning.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of Repeated Exposure to the Visual Environment on Young Children’s Attention. Cognitive Science, 46(2), e13093.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105(2), 351.

Overwhelmed Teachers: The Working-Memory Story
Andrew Watson

We spend A LOT of time working to prevent student working memory overload. In this post, our blogger asks about the dangers of TEACHER working memory overload.


If I could pick one topic from cognitive science for ALL TEACHERS to study, that topic would be working memory.

This small mental capacity allows us to select, hold, reorganize, and combine bits of information (and other things).

So, if you try to put the five days of the work week in alphabetical order, you’re using your working memory.

Alas, because working memory is so small, it gets easily overwhelmed.

Quick: try to put the twelve months of the year in alphabetical order.

Unless you’re writing words down, you almost certainly can’t do it.

Why does this cognitive insight matter?

Because our students have to select, hold, reorganize, and combine bits of information ALL THE TIME. We call that “learning.”


And the situation gets even scarier: when working memory is all-too-easily overloaded, learning stops.

Do you know a scarier sentence than “learning stops”?

For these reasons, I spend much of my professional life talking with teachers about working memory.

Given that I’ve even written a book on the topic, you’d think I’d run out of things to say. But…

More Things To Say

One working memory topic that gets relatively little attention: the teacher’s working memory.

That is: as teachers, we also must – at every second – select, hold, reorganize, and combine bits of information:

The lesson plan

The correctness of this student’s answer

The brewing argument between those two over there

The possibility of an un-announced fire-drill

The page number of the example I want to include

The insightful point I want to bring up at the department meeting next period

Oh, wait! I need to get orange juice on the way home…

This list could easily go on for pages.

In other words: students face the potential for working memory overload all the time. And: TEACHERS DO TOO.

When students’ working memory is overloaded, “learning stops.” When teachers’ working memory is overloaded, our effective functioning also stops. Cognitively, we bonk.

What to Do?

To manage student working memory, I encourage teachers to try a 3-part approach:

Try to ANTICIPATE working-memory overload. (If a lesson plan has lots of instructions, I can predict students’ working memory will crash.)

Try to IDENTIFY overload. (That face the student is making — that’s a sign!)

Try to SOLVE overload. (Using, say, dual coding, or powerful knowledge, or stress reduction…)

If we can do these three things, we’re likely to help students stay within a working-memory comfort zone.

I think that these same three categories might be useful in managing our own working memory.

So: can I ANTICIPATE when my working memory will be threatened in class?

Honestly, that’s easy!

When I have especially important or stressful obligations outside of school (say, a trip to the hospital after work).

When I’m teaching a new/complex topic.

When I didn’t get much sleep, and/or am sick.

When I’m managing multiple school roles: teacher AND dean AND coach AND adviser AND…

When I’m trying out a new kind of technology. (Remember your first weeks of zoom teaching?)

Of course, your list will differ from mine — because you and I are two different people. But I suspect you can, fairly easily, come up with your own version of this checklist: “if THIS is happening today, my working memory might really struggle.”

Good news: if you can anticipate when your own working memory might buckle, you know when to start shoring it up…

Check Your Mirrors

Once we have anticipated the times when our own working memory might be overloaded, we should then learn to IDENTIFY the experience of overload.

In my own work, I’ve learned to rely on three key indicators.

First: word salad.

Because I talk about complex and technical topics, I often talk in complex sentences with lots of technical vocabulary.

When my working memory gets overloaded, I find that my sentences fall apart. The subordinate clauses fight with the appositives, and I can no longer remember the subject of my verb.

Instead of trying to “identify” working memory overload, I might tell teachers to “redentify” it. (I don’t think “redentify” is a word.)

When I experience this word chaos, I know my working memory is in trouble.

Second: third of three

When I discuss working memory with teachers, they — of course! — ask questions.

I often say: “well, there are three answers to your question.”

But … you know where this is going … by the time I’m done with my second answer, I can’t even remember the question (much less the third part of the answer).

Yup: that’s working memory overload.

Third: emotional barometer

My own cultural background isn’t big on emotions. (Growing up, I was allowed to have mildly positive feelings, but everything else was discouraged. Mildly.)

For that reason, I’m not great at monitoring my own emotional state.

But I have learned: when I start feeling penned in and frustrated — when my chest is a little tight and breathing, a bit of a chore — that feeling almost always results from working memory overload.

My body is saying: “I just can’t handle this mental load right now!”

When that happens, I know: it’s time to break out my working-memory solutions!

Here again, your list might not look like my list: you’ll discover your own ways to identify working memory stress. But, that list might be a useful place to start…

The Last Step

If I can anticipate that my working memory will be overloaded (because, say, I’m explaining the differences between direct objects and subject complements)…

…and I can recognize that my working memory IS overloaded (because, say, I can’t coherently answer my student’s question)…

…then it’s time to SOLVE my working memory problems.

How do I do that?

Well: I don’t want to overwhelm the reader’s working memory — so I’ll write about that in next week’s blog post.

Summer Plans: How Best to Use the Next Few Weeks [Repost]
Andrew Watson

Our blogger is on vacation. While he’s away, this post — which first appeared in June — seemed timely:


 

The summer stretches before you like a beach of relaxing joy. With a guilty-pleasure novel in one hand and an umbrella drink in the other, how should you best plan for the upcoming school year?


Let’s be honest:

You might want to give yourself a break. School is STRESSFUL. Some down time with your best friends — perhaps a refreshing walk in the woods — getting back into a fitness routine … all these sound like excellent ideas to me.

If, however, you’re the sort of person who reads education blogs in the summer, well, you might be looking for some ideas on refreshing your teaching life.

Since you asked…

The Essential Specifics Within the Big Picture

The good news about research-based teaching advice?

We have LOTS and LOTS of helpful suggestions!

The bad news about research-based teaching advice?

Well: we have LOTS and LOTS of helpful suggestions!! Probably too many suggestions to keep track of.

If only someone would organize all those suggestions into a handy checklist, then you might strategically choose just a few of those topics that merit your attention. If this approach sounds appealing to you, I’ve got even more good news:

You can check out Sherrington and Caviglioli’s EXCELLENT book Walkthrus. This book digests substantial research into dozens of specific classroom topics (how to value and maintain silence; how to create a “no opt out” culture). It then offers 5-step strategies to put each one into practice.

In a similar vein, Teaching and Learning Illuminated, by Busch, Watson*, and Bogatchek, captures all sorts of teaching advice in handy visuals. Each one repays close study — in the same way you might closely study a Walkthru.

With these books, you can do a deep dive into as many — or as few — topics as you choose.

School Policy

The hot topics in education policy world are a) cell phones and b) AI.

As everyone knows, Jonathan Haidt’s recent book has made a strong case for heavily restricting cell phone usage for children.

I think it’s equally important to know that LOTS of wise people worry that Haidt is misinterpreting complex data.

Schools and teachers no doubt benefit from reading up on this debate. My own view, however, is that we should focus on the effects that phones (and other kinds of technology) have in our own schools and classrooms. Create policies based on the realities you see in front of you — not abstract data about people who might (but might not) resemble your students.

As for Artificial Intelligence: I think the field is too new — and evolving too rapidly — for anyone to have a broadly useful take on the topic.

In my brief experience, AI-generated results are too often flukily wrong for me to rely on them in my own work. (Every word of this blog is written by me; it’s a 100% AI-free zone.)

Even worse: the mistakes that AI makes are often quite plausible — so you need to be a topic expert to see through them.

My wise friend Maya Bialik — one-time blogger on this site, and founder of QuestionWell AI — knows MUCH more about AI than I do. She recommends this resource list, curated by Eric Curts, for teachers who want to be in the know.

A Pod for You

I’m more a reader than a pod-er, but:

If you’re in the mood for lively podcasts, I have two recommendations:

First, the Learning Scientists routinely do an EXCELLENT job translating cognitive science research for classroom teachers.

Unsurprisingly, their wise podcast is still going strong after many years.

Second, Dr. Zach Groshell’s podcast — Progressively Incorrect — explores instructional coaching, math and reading instruction, current debates in education, and other essential topics.

You might start with his interview with fan favorite Dan Willingham.

(Full disclosure: I have appeared on both podcasts, and am friends with the people who run them.)

The Journey Ends at Its Beginning

But, seriously, give yourself a break. You’ve worked hard. Take the summer off. I bet you’ve got A LOT of shows to binge-watch in your queue…


* A different “Watson”: EDWARD Watson. As far as I know, we’re not related.

 

The Jigsaw Advantage: Should Students Puzzle It Out? [Repost]
Andrew Watson

This post got a LOT of attention when our blogger first wrote it back in February:


The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.


I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
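For readers who want a concrete sense of just how big a Cohen’s d of 1.20 would be, here’s a quick sketch of the standard calculation. (The test scores and group sizes below are invented purely for illustration — they are not from the meta-analysis.)

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Cohen's d: the gap between two group means, expressed in units
    # of the groups' pooled standard deviation.
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# A d of 1.20 would mean the jigsaw group averaged 1.2 standard deviations
# above the control group -- e.g., 82 vs. 70 on a test with an SD of 10
# (hypothetical numbers):
print(round(cohens_d(82, 10, 30, 70, 10, 30), 2))  # 1.2
```

In other words, a d of 1.20 would imply the average jigsaw student outscored roughly 88% of control students — a gap few classroom interventions ever produce.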

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (TL;DR: lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter — start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


By the way, after he wrote this post, our blogger then FOUND the missing online meta-analysis. His discussion of that discovery is here.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.