How to Reduce Mind-Wandering During Class
Andrew Watson

I recently wrote a series of posts about research into asking questions. As noted in the first part of that series, we have lots of research that points to a surprising conclusion.

Let’s say I begin class by asking students questions about the material they’re about to learn. More specifically: because the students haven’t learned this material yet, they almost certainly get the answers wrong.

Even more specifically — and more strangely — I’m actually trying to ask them questions that they won’t answer correctly.

In most circumstances, this way of starting class would sound…well…mean. Why start class by making students feel foolish?

Here’s why: we’ve got a good chunk of research showing that these questions — questions that students will almost certainly get wrong — ultimately help them learn the correct answers during class.

(To distinguish this particular category of introductory-questions-that-students-will-get-wrong, I’m going to call them “prequestions.”)

Now, from one perspective, it doesn’t really matter why prequestions help. If asking prequestions promotes learning, we should probably ask them!

From another perspective, we’d really like to know why these questions benefit students.

Here’s one possibility: maybe they help students focus. That is: if students realize that they don’t know the answer to a question, they’ll be alert to the relevant upcoming information.

Let’s check it out!

Strike That, Reverse That, Thank You

I started by exploring prequestions; but we could think about the research I’m about to describe from the perspective of mind-wandering.

If you’ve ever taught, and ESPECIALLY if you’ve ever taught online, you know that students’ thoughts often drift away from the teacher’s topic to…well…cat memes, or a recent sports upset, or some romantic turmoil.

For obvious reasons, we teachers would LOVE to be able to reduce mind-wandering. (Check out this blog post for one approach.)

Here’s one idea: perhaps prequestions could reduce mind-wandering. That is: students might have their curiosity piqued — or their sense of duty highlighted — if they see how much stuff they don’t know.

Worth investigating, no?

Questions Answered

A research team — including some real heavy hitters! — explored these questions in a recent study.

Across two experiments, they had students watch a 26-minute video on a psychology topic (“signal detection theory”).

  • Some students answered “prequestions” at the beginning of the video.
  • Others answered those questions sprinkled throughout the video.
  • And some (the control group) solved unrelated algebra problems.

Once the researchers crunched all the numbers, they arrived at some helpful findings.

First: yes, prequestions reduced mind-wandering. More precisely, students who answered prequestions reported that they had given more of their attention to the video than those who solved the algebra problems.

Second: yes, prequestions promoted learning. Students who answered prequestions were likelier to answer correctly on a final test after the lecture than those who didn’t.

Important note: this benefit applied ONLY to the questions that students had seen before. The researchers also asked students new questions — ones that hadn’t appeared as prequestions. The prequestion group didn’t score any higher on those new questions than the control group did.

Third: no, the timing of the questions didn’t matter. Students benefitted from prequestions asked at the beginning as much as those sprinkled throughout.

From Lab to Classroom

So, what should teachers DO with this information?

I think the conclusions are mostly straightforward.

A: The evidence pool supporting prequestions is growing. We should use them strategically.

B: This study highlights their benefits in reducing mind-wandering, especially for online classes or videos.

C: We don’t need to worry about the timing. If we want to ask all prequestions up front or jumble them throughout the class, either strategy (according to this study) gets the job done.

D: If you’re interested in specific suggestions on using and understanding prequestions, check out this blog post.

A Final Note

Research is, of course, a highly technical business. For that reason, most psychology studies make for turgid reading.

While this one certainly has its share of jargon-heavy, data-laden sentences, its explanatory sections are unusually easy to read.

If you’d like to get a sense of how researchers think, check it out!


Pan, S. C., Sana, F., Schmitt, A. G., & Bjork, E. L. (2020). Pretesting reduces mind wandering and enhances learning during online lectures. Journal of Applied Research in Memory and Cognition, 9(4), 542-554.

Even More Questions (3rd of a Series)
Andrew Watson

This blog post continues a series about research into questions.

I started with questions that teachers should ask BEFORE students’ learning begins: “pre-questions,” measuring prior knowledge.

I then turned to questions that we ask DURING early learning: retrieval practice, checking for understanding.

Now — can you guess? — I’ll focus on questions that we ask LATER in learning, or “AFTER” learning.

To structure these posts, I’ve been focusing on three organizing questions:

When to ask this kind of question? (Before, during/early, during/later)

Who benefits most immediately from doing so?

What do we do with the answers?

Let’s dive in…

A Controversy Resolved?

At some point, almost all teaching units come to an end. When that happens, teachers want to know: “how much did my students learn?”

To find out, we typically ask students questions. We might call these questions “quizzes” or “tests” or “assessments” or “projects.”

Whatever we call such questions, students answer by writing or saying or doing something.

Who benefits from all these activities? Well, here we arrive at a controversy, because reasonable people disagree on this point.

OVER HERE, some folks argue that assessments basically benefit school systems — and harm everyone else. After assessments, school systems can…

  • sort students into groups by grade, or
  • boast about their rising standardized test scores, or
  • evaluate teachers based on such numbers.

I don’t doubt that, in some cases, assessments serve these purposes and no others.

OVER THERE, more optimistically, others argue that assessments can benefit both teacher and student.

Students benefit because

  • They learn how much they did or didn’t learn: an essential step for metacognition; and
  • The act of answering these questions in fact helps students solidify their learning (that’s “retrieval practice,” or “the testing effect”).

Teachers benefit because

  • We learn how much our teaching strategies have helped students learn, and
  • In cumulative classes, we know what kinds of foundational knowledge our students have for the next unit. (If my students do well on the “comedy/tragedy” project, I can plan a more ambitious “bildungsroman” unit for their upcoming work.)

In other words: final assessments and grades can certainly be critiqued. At the same time, as long as they’re required, we should be aware of and focus on their potential benefits.

Digging Deeper

While I do think we have to understand the role of tests/exams/capstone projects at the “end” of learning, I want to back up and consider an intermediate step.

To do so, I want to focus on generative questions — especially as described by Zoe and Mark Enser’s excellent book on the topic.*

As the Ensers describe, generative questions require students to select, organize, and integrate information — much of which is already stored in long-term memory.

So:

Retrieval practice: define “bildungsroman.”

Generative learning: can a tragedy be a bildungsroman?

The first question asks a student to retrieve info from long-term memory. The second requires students to recall information — and to do mental work with it: they organize and integrate the parts of those definitions.

For that reason, I think of retrieval practice as an early-in-the-learning-process question. Generative learning comes later in the process — that is, after students have relevant ideas in long-term memory to select, organize, and integrate.

The Ensers’ book explores research into, and practical uses of, several generative learning strategies: drawing, mind-mapping, summarizing, teaching, and so forth.

In my thinking, those distinct sub-categories are less important than the overall concept. If students select, organize, and integrate, they are by definition answering generative learning questions.

For instance: the question “can a tragedy be a bildungsroman” doesn’t obviously fit any of the generative learning categories. But because it DOES require students to select, organize, and integrate, I think it fits the definition.

(I should fess up: technically, retrieval practice is considered a generative learning strategy. For the reasons described above, I think it’s helpful to use RP early in learning, and generative learning later in learning. My heresy could be misguided.)

“Generative learning” is a BIG category; teachers can prompt students to think generatively in all sorts of ways. A recent review by Garvin Brod suggests that some strategies work better than others for different age groups: you can check out those guidelines here.

TL;DR

In most school systems, teachers must ask some kind of summary questions (tests, projects) at the end of a unit. Such questions — if well designed — can benefit both teachers and students.

After students have a bedrock of useful knowledge and before we get to those final test/project questions, teachers should invite students to engage in generative learning. By selecting, organizing, and reintegrating their well-established knowledge, students solidify that learning, and make it more flexible and useful.


Brod, G. (2021). Generative learning: Which strategies for what age? Educational Psychology Review, 33(4), 1295-1318.


* Grammar nerds: if you’re wondering why I wrote “Zoe and Mark Enser’s book” instead of “Zoe and Mark Ensers’ book” — well — I found that apostrophe question a stumper. I consulted Twitter and got emphatic and contradictory answers. I decided to go with the apostrophe form that makes each Enser an individual — because each one is. But I could be technically wrong about that form.

Graphic Disorganizers; or, When Should Teachers Decorate Handouts?
Andrew Watson

Recent research has raised questions about classroom decoration. In this post, our blogger wonders about decorating HANDOUTS:


Teachers regularly face competing goals. For instance:

On the one hand — obviously — we want our students to learn.

And, on the other hand, we equally obviously want them to feel safe, comfortable, at home.

To accomplish that second goal, we might decorate our classrooms. The more adorable cat photos, inspirational posters, and familiar art work, the homier the classroom will feel.

But here’s the problem: what if all that decoration (in pursuit of goal #2) interferes with goal #1?

What if decorations inhibit learning?

The Story so Far

I’ve written about this topic a fair amount, and the story so far gives us reason to concentrate on that question.

So: do decorations get in the way of learning? According to this study: yes.

Is this a problem for all age groups? Research done by this team suggests: yes.

When I showed teachers all this research, they often raised a perfectly plausible doubt:

Don’t students get used to the decorations? According to this recent study: nope.

Given these studies (and many others), I think we’ve got a compelling narrative encouraging our profession to rethink decoration. While I don’t think that classrooms should be sterile fields … I do worry we’ve gone substantially too far down the “let’s decorate!” road.

“I’ve Still Got Questions”

Even with this research pool, I think teachers can reasonably ask for more information. Specifically: “what counts as a decoration?”

I mean: is an anchor chart decoration?

How about a graphic organizer?

A striking picture added to a handout? (If they’re answering questions about weather, why would it be bad to have a picture of a thunderstorm on the handout?)

An anchor chart might be “decorative.” But, if students use it to get their math work done, doesn’t it count as something other than a “decoration”?

In other words: if I take down an anchor chart, won’t my students learn less?

Because practically everything in the world can be made prettier, we’ve got an almost infinite number of things that might be decorated. (I’ve done some work at a primary school that has arrows embedded in the floor: arrows pointing to, say, Beijing or Cairo or Los Angeles. Does that count as “decoration”?)

For this reason, research to explore this question gets super detailed. But if we find enough detailed examples that more-or-less resemble our own classroom specifics, we can start to credit a “research-informed” answer.

Graphic Disorganizer?

A friend recently pointed me to a study about reading bar graphs.

This research team wanted to know if “decorated” bar graphs make learning harder for students in kindergarten, and in 1st and 2nd grade.

So, if a bar graph shows the number of gloves in the lost and found box each week, should the bar representing that number…

Be decorated with little glove icons?

Or, should it be filled in with stripes?

How about dots?
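
(To picture those three conditions, here’s a minimal matplotlib sketch. The lost-and-found numbers and labels are invented for illustration; only the plain/striped/dotted fills mirror the study’s manipulation.)

```python
import matplotlib.pyplot as plt

# Invented lost-and-found counts, purely to illustrate the three fill styles.
weeks = ["Week 1", "Week 2", "Week 3"]
gloves = [3, 5, 2]

fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9, 3))
conditions = [
    ("Plain", dict(color="gray")),              # solid fill: nothing to count
    ("Stripes", dict(fill=False, hatch="//")),  # striped fill
    ("Dots", dict(fill=False, hatch="..")),     # dotted fill
]
for ax, (title, style) in zip(axes, conditions):
    ax.bar(weeks, gloves, edgecolor="black", **style)
    ax.set_title(title)
axes[0].set_ylabel("Gloves in the lost-and-found")
plt.show()
```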

This study in fact incorporates four separate experiments; the researchers keep repeating their basic paradigm and modifying a variable or two. For this reason, they can measure quite precisely the problems and the factors that cause them.

And — as you remember — they’re working with students in three different grades. So: they’ve got LOTS of data to report…

The Headlines, Please…

Rather than over-decorate this blog post with a granular description, I’ll hit a few telling highlights.

First: iconic decorations inhibit learning.

That is: little gloves on the bar graph made it harder for students to learn to read those graphs correctly.

Honestly, this result doesn’t surprise me. Gloves are concrete and familiar, whereas bar graphs represent more abstract concepts. No wonder the little tykes get confused.

Second: stripes and dots also inhibit learning.

Once again, the students tend to count the objects contained within the bar — even little dots! — instead of observing the height of the bar.

This finding did surprise me a bit more. I wasn’t surprised that young learners focus on concrete objects (gloves, trees), but am intrigued to discover they also want to count abstract objects (lines, dots) within the bar.

Third: age matters.

That is: 1st graders did better than kindergarteners. And 2nd graders did better than 1st graders.

On the one hand, this result makes good sense. As we get older, we get better at understanding more abstract concepts, and at controlling attention.

On the other hand, this finding points to an unfortunate irony. Our profession tends to emphasize decoration in classrooms for younger students.

In other words: we decorate most where decoration might do the most harm! (As a high-school teacher, I never got any instructions about decoration, and was never evaluated on it.)

In Brief

We teachers certainly might be tempted to make our environments as welcoming — even festive! — as possible.

And yet, we’ve got a larger (and larger) pool of research pointing out the distraction in all that decoration.

This concern goes beyond — say — adorable dolphin photos on the wall, or uplifting quotations on waterfall posters.

In this one study, something as seemingly harmless as dots in a bar graph can interfere with our students’ learning.

When it comes to decorating — even worksheets and handouts — we should keep the focus on the learning.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of repeated exposure to the visual environment on young children’s attention. Cognitive Science, 46(2), e13093.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105(2), 351.

The Jigsaw Advantage: Should Students Puzzle It Out? [Repost]
Andrew Watson

This post got a LOT of attention when our blogger first wrote it back in February:


The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.

I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
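
(For the stats-curious: Cohen’s d is just the difference between two group means, divided by their pooled standard deviation. Here’s a quick sketch with invented scores, purely to show what a d of 1.20 would mean.)

```python
import math

def cohens_d(m1, m2, s1, s2, n1, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented numbers for scale: jigsaw group averages 82, control averages 70,
# both with SD = 10 and 30 students per group.
print(cohens_d(82, 70, 10, 10, 30, 30))  # 1.2 -- an effect this size means the
# average "jigsaw" student would outscore roughly 88% of the control group.
```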

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (Tl;dr : lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter — start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process — but it might not help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


By the way, after he wrote this post, our blogger then FOUND the missing online meta-analysis. His discussion of that discovery is here.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.

When Experience Contradicts Research: The Problem with Certainty
Andrew Watson

A friend recently told me about his classroom experience using mindfulness to promote thoughtful and effective writing.

He started the year by explaining the benefits of mindfulness to his students. After that introduction, he began each class period with five minutes of mindful silence.

Although he wasn’t running a research study, he kept two kinds of “data.”

First: his own impression is that students got appreciably better at adding insight, depth, and detail to their writing.

For instance, instead of saying “the mood was optimistic,” they might write “the sun came out” or “I could hear lively music in the background.”

They also “got stuck” less often during the writing process.

Second, he surveyed his students at the end of the year.

He got LOTS of positive responses. One wrote:

I was surprised by how much I looked forward to meditation as the weeks went on, it helped calm me down before big assignments (like practice exams or actual tests) or just gave me a breather during stressful moments.

Another:

I thought meditation was very helpful in class this year because it helped me focus on my clarity of mind. I especially liked it before writing essays because it relaxed me and helped my thoughts flow clearer I think.

A third:

I would start the year by doing it everyday. I’ve started implementing it in my home life and have felt more present and conscious not only my daily interactions but also my thought process and decision making.

I could go on quoting like this for a few pages.

Based on this experience, my friend asked me what research shows about the effects of mindfulness in the classroom…

What Research Shows…

Alas, research into the classroom benefits of mindfulness doesn’t obviously align with my friend’s experience.

Yes, we do have some encouraging research about the ways that mindfulness can reduce stress.

Yes, we do have some correlational research showing a relationship between mindfulness and academic accomplishment.

But my honest opinion is that — so far — we don’t have a strong enough research pool to make inclusion of mindfulness programs a “research-supported” practice in schools.

In particular, we have an ENORMOUS recent study (over 8000 students!) showing that mindfulness training provided students with NO BENEFITS AT ALL, and perhaps (very small) increases in the likelihood of a few problems.

I’m game for the idea that mindfulness training might help students and teachers. But I don’t see enough consistent, high-quality research findings to champion the practice myself.

A Friend’s Quandary

So, a quandary:

My friend’s EXPERIENCE suggests that “brief mindfulness exercises help.”

But RESEARCH suggests that “mindfulness training doesn’t necessarily do anything.”

What should he do?

Equally important: what should YOU do if research suggests that one of your teaching practices a) doesn’t help, or b) might actually hurt?

Let me suggest a few steps.

STEP ZERO: even before we begin answering this question, I think it’s essential to admit that it’s both a challenging and an essential question. (So challenging and important that I wrote a book about it.)

Someone might say to you, “there’s an obviously easy answer to that question.” I think that person is wrong.

STEP ONE: recognize that both kinds of knowledge have their own benefits and weaknesses.

For instance, research helps us see long-term effects that we teachers most often miss.

We know that “short-term performance is an unreliable indicator of long-term learning.” (Thanks, Dr. Nick Soderstrom (on Twitter @NickSoderstrom).)

So, research can help us see when things that feel really good in the classroom right now don’t actually produce the long-term benefits we’re hoping for.

Likewise, research helps us overcome our biases. My friend works REALLY HARD to make his mindfulness intervention work. He devotes LOTS of time to it. (So do his students!)

NO WONDER he sees all the benefits! Well…research can help us see past that motivated reasoning.

Not So Fast…

At the same time, a teacher’s classroom experience provides insights that research just can’t spot.

Teachers combine variables. Researchers isolate variables. That is: we see combinations that researchers rarely explore — combinations like “daily mindfulness + writing.”

Also: research always exists within “boundary conditions.”

A research study done with 3rd grade Montessori students learning long division might — but might not! — apply to dyslexic 11th graders studying history at a military academy.

Unless we have SO MUCH research on a particular topic, a research-only perspective might miss the places where a technique DOES work, simply because it DOESN’T help in all those other places.

Teachers — however — might discover those places.

Don’t Stop Now

Okay, so we know that this quandary is an important question that requires complex answers (step 0); and we know that research and experience provide separate and useful kinds of knowledge (step 1).

What’s next?

STEP TWO: Get specific.

In that 8000+ person study: what exactly did they do? And: how well does “what they did” align with “what my friend did”?

In the 8000+ person study, the researchers had students practice mindfulness for 10 weeks. They wanted to know a) whether the students would keep doing mindfulness on their own after those 10 weeks, and b) whether the practice would help them — or not — according to 28 different criteria.

The answers were a) “nope, not really” and b) “nope, not at all.”

But: OF COURSE the students didn’t get the benefits of mindfulness (that’s b) because they didn’t continue the mindfulness exercises at home (that’s a).

Notice, however, that this research doesn’t align with my friend’s strategy. His students DID continue the mindfulness because he started every class with time for mindfulness.

True: students who don’t practice mindfulness don’t benefit from it; but my friend’s students might benefit because they had time to practice it.

In other words: that big study shouldn’t necessarily discourage my friend, because his strategy differs from their strategy in meaningful ways.

STEP THREE: Combine humility with determination.

Here’s the trickiest part.

As I’ve just argued, this big study might not apply to my friend’s approach.

AND: my friend’s “biased” perspective (we ALL have “biased” perspectives) might make it difficult for him to recognize the shortcomings in his approach.

For this reason, I think we have to push ourselves relentlessly to balance humility (“I should really focus on and respect research guidance!”) with determination (“My classroom experience is valuable and I should give it weight in my decision making!”).

But, gosh, that’s a difficult balancing act.

It’s tempting to pick one side or the other:

I shall do what research tells me!

or

My training and instincts matter most!

Instead, we should strive to give both sources of knowledge their due…and always doubt our own certainties.

An Excellent Example

Note, by the way, that my friend was doing just that.

After all, his own classroom experience — and his students’ enthusiastic feedback! — gave him all sorts of reasons to be entirely confident.

He might easily have said “research, schmeesearch” and gone ahead with his mindfulness routine.

Instead, he looked for a reason to doubt his own certainty; that is, he asked me what research has to say … knowing that it might not support his experience. (He had, after all, just finished reading my book on evaluating research-based teaching claims.)

He now has to decide the best way to proceed. And: in my view, he will do so all the more effectively because he allowed himself to doubt.

In this field, certainty is the enemy of improvement and excellence.

Updating the Great Cold-Call Debate: Does Gender Matter?
Andrew Watson

Edu-Twitter predictably cycles through a number of debates; in recent weeks, the Great Cold-Call Debate has reheated. (You see what I did there.)

Team A argues that cold calling — that is, calling on students who haven’t raised their hands — is a vital strategy to increase student participation and learning. (Additional benefit: it allows teachers to check for understanding with strategic rapidity and flexibility.)

Team B argues that cold calling raises students’ stress levels, and thereby hampers their learning. (Additional detriment: it especially raises stress for students who face a variety of classroom difficulties, from trauma to special educational needs.)

This “debate” mostly involves making strong claims — “it’s vital!”; “no, it’s dreadful!” — but rarely draws on research to explore its key contentions.

In fact, the debate doesn’t often turn to research because we don’t have much research. But given the energy of recent arguments, I thought I’d check to see if any recent studies can help us out…

Picking Up Where They Left Off

A few years ago, I wrote about a 2013 study done by Dr. Elise Dallimore and Co. This research team — working with college sophomores — found that cold calling increased voluntary class participation and decreased class discomfort.

That is: compared to students in low cold-calling classes, those in high cold-calling classes spoke up more on their own, and expressed greater levels of comfort in class.

That sounds like a win-win.

Of course, all studies include limitations — no one study can explore everything. Team Dallimore spotted an obvious concern with their first study: it didn’t consider the effect of gender on class participation.

We have LOTS of research showing that women feel less comfortable participating in class discussions, and — unsurprisingly — speak up less often than men.

So, picking up where they left off, Dallimore and Co. wanted to see if cold calling reduced or increased this gender split.

In other words: if cold calling benefits students overall (the 2013 study), does it have a different effect on men and women?

Important note:

Dallimore’s first study more-or-less supported Team A as described above: “cold calling encourages class participation.”

Her second study starts to address the concerns of Team B. We might reasonably worry that women — who (on average) go into many classes feeling stressed about participation — will feel EXTRA stress if that participation becomes mandatory.

This second study explores that plausible concern.

Take II

Like her first study, Dallimore’s second study looks at class participation in several college Accounting classes.

The researchers divided those classes into two groups: “low” cold-calling (less than 25% of the questions were framed as cold calls), and “high” cold-calling (more than 33% — and as high as 84%!!).

According to survey data, male and female students went into these classes with roughly the same perceptions of class participation.

So Dallimore’s questions were:

First: Did students’ behavior change based on high- vs. low-cold-calling? And,

Second: Did gender matter for any changes?

In answer to the first question: over time, students volunteered more in the high-cold-calling classes than the low-cold-calling classes.

Whether you’re counting the percentage of students who participated or the number of questions that students asked, those numbers went up.

So, cold calling INCREASED voluntary participation.

Better and Better

Of course, we’re happy to see that cold calling increased participation. However, that finding simply replicates the 2013 study. What about the second question: did gender matter?

Well, both men and women voluntarily participated more in high-cold-calling classes. And, women’s participation increased more than men’s participation.

Specifically: 57% of men voluntarily participated in the low-cold-calling classes, whereas 73% did in the high-cold-calling classes. That’s a difference of 16 percentage points.

For women: 52% voluntarily participated in the low-cold-calling classes, whereas 82% did in the high-cold-calling classes. That’s a difference of 30 percentage points.

We get the same result if we look at the number of questions asked. Men asked more questions in high-cold-calling classes than in low-cold-calling classes; the average went from 1.78 to 2.13.

Women asked LOTS more questions: the average went from 1.33 to 2.6.
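
(For readers who like to see the arithmetic, here’s a quick sketch. The inputs come straight from the numbers above; the relative-increase framing is my own.)

```python
# Dallimore et al. (2019): low- vs. high-cold-calling classes.
participated = {"men": (57, 73), "women": (52, 82)}      # % who volunteered
questions = {"men": (1.78, 2.13), "women": (1.33, 2.6)}  # avg questions asked

for group, (low, high) in participated.items():
    print(f"{group}: +{high - low} percentage points of voluntary participation")

for group, (low, high) in questions.items():
    print(f"{group}: {(high - low) / low:.0%} more questions asked")
# men: +16 points, women: +30 points; men ask ~20% more, women ~95% more
```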

In brief: high-cold-calling classes increased participation for everyone — especially women.

Not So Fast

So far, Dallimore’s 2019 study seems like a slam dunk for Team A. It says, basically: “cold calling does help and doesn’t hurt.”

At the same time, I don’t think we can now rush to conclude “all teachers must cold call all the time.”

I have three reasons to hesitate:

First: both Dallimore’s studies were done with college students. As I’ve written elsewhere, I don’t think that college students make great proxies for K-12 students. On average:

College students know more than K-12 students.

They have higher levels of academic and personal maturity.

They probably have higher levels of academic motivation — they’re in college!

So, these findings might apply to K-12 students…but we don’t have research (that I know of) to demonstrate that conclusion.

Second: as I wrote in a blog post last fall, bad cold calling does exist. As the research study described there explains, we need to refine our question.

Instead of asking: “is cold-calling a good idea?”

We should ask: “how can we hone our cold-calling technique to get its benefits without its potential harms?”

Let’s get some really good answers to that second question before we insist on spreading the practice.

Third: At least so far, research suggests that Team B’s concern — “the stress that results from cold calling hampers learning” — doesn’t hold true for most students.

At the same time, our goal is not that most students learn, but that all of them do.

We should accept the almost certainly true statement that cold calling will stress out a few students to the detriment of their learning. Part of “honing our technique” — described in my second point above — will be identifying and working with those students.

To Sum Up

Despite all the heated debate about cold calling, I think we have the beginnings of a persuasive research pool. So far — at least — it seems to encourage class participation (which should, in turn, increase learning).

Yes: we need to be good at this technique for it to work. Yes: we should respect important boundary conditions.

And, based on the research I’ve seen so far, I plan to keep using cold calling myself.

Coda

After I wrote this blog post, I discovered that LOTS of people have been adding to this debate.

Here’s Bradley Busch.

Here’s Tom Sherrington.

No doubt others have got wise ideas!


Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2013). Impact of cold-calling on student voluntary participation. Journal of Management Education, 37(3), 305-341.

Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2019). Leveling the playing field: How cold-calling affects class discussion gender equity. Journal of Education and Learning, 8(2), 14-24.

Can students “catch” attention? Introducing “Attention Contagion”
Andrew Watson

Every teacher knows: students won’t learn much if they don’t pay attention. How can we help them do so? (In my experience, shouting “pay attention!” over and over doesn’t work very well…)

So, what else can we do?

As is so often the case, I think “what should we do?” isn’t exactly the right question.

Instead, we teachers should ask: “how should we THINK ABOUT what we do?”

When we have good answers to the “how-do-we-think?” question, we can apply those thought processes to our own classrooms and schools.

So, how should we think about attention?

Let me introduce “attention contagion”…

Invisible Peer Pressure

A research team in Canada wanted to know: can students “catch” attention from one another? How about inattention?

That is: if Student A pays attention, will that attentiveness cause Student B to pay more attention as well?

Or, if Student A seems inattentive, what happens with Student B?

To study this question, a research team led by Dr. Noah Forrin had two students — A and B — watch a 50-minute video in the same small classroom.

In this case, “Student A” was a “confederate”: that is, s/he had been trained…

to “pay attention”: that is, focus on the video and take frequent notes, or

NOT to “pay attention”: that is, slouch, take infrequent notes, glance at the clock.

Student A sat diagonally in front of Student B, visible but off to the side.

What effect did A’s behavior have on B?

Well, when A paid attention, B

… reported focusing more,

… focused more, got less drowsy, and fidgeted less,

… took more notes, and

… remembered slightly more on a subsequent multiple-choice quiz.

These results seem all the more striking because the inattentive confederate had been trained NOT to be conspicuously distracting. NO yawning. NO fidgeting. NO pen tapping.

The confederates, in other words, didn’t focus on the video, but didn’t try to draw focus themselves. That simple lack of focus — even without conspicuous, noisy distraction — sapped Student B’s attention.

Things Get Weird

So far, this study (probably) confirms teacherly intuition. I’m not terribly surprised that one student’s lack of focus has an effect on other students. (Forrin’s research team wasn’t surprised either. They had predicted all these results, and have three different theories to explain them.)

But: what happens if Student A sits diagonally BEHIND Student B, instead of diagonally in front?

Sure enough, Forrin’s team found the same results.

Student B caught Student A’s inattention, even if s/he couldn’t see it.

I have to say: that result seems quite arresting.

Forrin and Co. suggest that Student B could hear Student A taking notes — or not taking notes. And this auditory cue served as a proxy for attentiveness more broadly.

But whatever the reason, “attention contagion” happens whether or not students can see each other. (Remember: the confederates had been trained not to be audibly distracting — no sighs, no taps, no restless jostling about.)

Classroom Implications

I wrote at the top that teachers can use research to guide our thinking. So, what should we DO when we THINK about attention contagion?

To me, this idea shifts the focus somewhat from individual students to classroom norms.

That is: in the old days, I wanted that-student-right-there to pay attention. To do so, I talked to that-there-student. (“Eyes on the board, please, Bjorn.”)

If attention contagion is a thing, I can help that-student-right-there pay attention by ensuring ALL students are paying attention.

If almost ALL of my students focus, that-student-right-there might “catch” their attentiveness and focus as well.

Doug Lemov — who initially drew my attention to this study — rightly points to Peps Mccrea’s work.

Mccrea has written substantively about the importance of classroom norms. When teachers establish focus as a classroom norm right from the beginning, this extra effort will pay off down the road.

The best strategy to do so will vary from grade to grade, culture to culture, teacher to teacher. But this way of thinking can guide what we do in our specific classroom context.

Yes, Yes: Caveats

I should point out that the concept of “attention contagion” is quite new — and its newness means we don’t have much research at all on the topic.

Forrin’s team has replicated the study with online classrooms (here) — but these are the only two studies on the topic that I know of.

And: two studies is a VERY SMALL number.

Note, too, that the research was done (for very good reasons) in a highly artificial context.

So, we have good reason to be curious about pursuing this possibility. But we should not take “attention contagion” to be a settled conclusion in educational psychology research.

TL;DR

To help our students pay attention, we can work with individual students on their behavior and focus.

And, we can emphasize classroom norms of focus — norms that might help students “catch” attention from one another.

Especially if more classroom research reinforces this practice, we can rethink attention with “contagion” in mind — and thus help our students learn.


Forrin, N. D., Huynh, A. C., Smith, A. C., Cyr, E. N., McLean, D. B., Siklos-Whillans, J., … & MacLeod, C. M. (2021). Attention spreads between students in a learning environment. Journal of Experimental Psychology: Applied, 27(2), 276.

Graphic Disorganizers; or, When Should Teachers Decorate Handouts?
Andrew Watson
Andrew Watson

Teachers regularly face competing goals. For instance:

On the one hand — obviously — we want our students to learn.

And, on the other hand, we equally obviously want them to feel safe, comfortable, at home.

To accomplish that second goal, we might decorate our classrooms. The more adorable cat photos, inspirational posters, and familiar art work, the homier the classroom will feel.

A colorful bar graph, showing 20%, 40%, 60%,etc.

But here’s the problem: what if all that decoration (in pursuit of goal #2) interferes with goal #1?

What if decorations inhibit learning?

The Story so Far

I’ve written about this topic a fair amount, and the story so far gives us reason to concentrate on that question.

So: do decorations get in the way of learning? According to this study: yes.

Is this a problem for all age groups? Research done by this team suggests: yes.

When I showed teachers all this research, they often raised a perfectly plausible doubt:

Don’t students get used to the decorations? According to this recent study: nope.

Given these studies (and many others), I think we’ve got a compelling narrative encouraging our profession to rethink decoration. While I don’t think that classrooms should be sterile fields … I do worry we’ve gone substantially too far down the “let’s decorate!” road.

“I’ve Still Got Questions”

Even with this research pool, I think teachers can reasonably ask for more information. Specifically: “what counts as a decoration?”

I mean: is an anchor chart decration?

How about a graphic organizer?

A striking picture added to a handout? (If they’re answering questions about weather, why would it be bad to have a picture of a thunderstorm on the handout?)

An anchor chart might be “decorative.” But, if students use it to get their math work done, doesn’t it count as something other than a “decoration”?

In other words: if I take down an anchor chart, won’t my students learn less?

Because practically everything in the world can be made prettier, we’ve got an almost infinite number of things that might be decorated. (I’ve done some work at a primary school that has arrows embedded in the floor: arrows pointing to, say, Beijing or Cairo or Los Angeles. Does that count as “decoration”?)

For this reason, research to explore this question gets super detailed. But if we find enough detailed examples that more-or-less resemble our own classroom specifics, we can start to credit a “research-informed” answer.

Graphic Disorganizer?

A friend recently pointed me to a study about reading bar graphs.

This research team wanted to know if “decorated” bar graphs make learning harder for students in kindergarten, and in 1st and 2nd grade.

So, if a bar graph shows the number of gloves in the lost and found box each week, should the bar representing that number…

Be decorated with little glove icons?

Or, should it be filled in with stripes?

How about dots?

This study in fact incorporates four separate experiments; the researchers keep repeating their basic paradigm and modifying a variable or two. For this reason, they can measure quite precisely the problems and the factors that cause them.

And — as you remember — they’re working with students in three different grades. So: they’ve got LOTS of data to report…

The Headlines, Please…

Rather than over-decorate this blog post with a granular description, I’ll hit a few telling highlights.

First: iconic decorations inhibit learning.

That is: little gloves on the bar graph made it harder for students to learn to read those graphs correctly.

Honestly, this result doesn’t surprise me. Gloves are concrete and familiar, whereas bar graphs represent more abstract concepts. No wonder the little tykes get confused.

Second: stripes and dots also inhibit learning.

Once again, the students tend to count the objects contained within the bar — even little dots! — instead of the observing the height of the bar

This finding did surprise me a bit more. I wasn’t surprised that young learners focus on concrete objects (gloves, trees), but am intrigued to discover they also want to count abstract objects (lines, dots) within the bar.

Third: age matters.

That is: 1st graders did better than kindergarteners. And, 2nd graders better than first graders.

On the one hand, this result makes good sense. As we get older, we get better at understanding more abstract concepts, and at controlling attention.

On the other hand, this finding points to an unfortunate irony. Our profession tends to emphasize decoration in classrooms for younger students.

In other words: we decorate most where decoration might do the most harm! (As a high-school teacher, I never got any instructions about decoration, and was never evaluated on it.)

In Brief

We teachers certainly might be tempted to make our environments as welcoming — even festive! — as possible.

And yet, we’ve got a larger (and larger) pool of research pointing out the distraction in all that decoration.

This concern goes beyond — say — adorable dolphin photos on the wall, or uplifting quotations on waterfall posters.

In this one study, something as seemingly-harmless as dots in a bar graph can interfere with our students learning.

When it comes to decorating — even worksheets and handouts — we should keep the focus on the learning.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological science25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of Repeated Exposure to the Visual Environment on Young Children’s Attention. Cognitive Science46(2), e13093.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology105(2), 351.

The Jigsaw Advantage: Should Students Puzzle It Out?
Andrew Watson
Andrew Watson

The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis puporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.

A closeup of four hands holding out single puzzle pieces, trying to see how to put them together well.

I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still other studies report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that students felt engaged. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (TL;DR: lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.
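
(A quick aside on method: the paper’s “internal meta-analysis” pools the effect sizes from its own five experiments, weighting each by its precision. Here’s a toy sketch of that averaging, with invented effect sizes and sample sizes rather than the study’s actual numbers, and with sample size standing in for precision.)

```python
# Toy sketch of an internal meta-analysis: average per-experiment effect
# sizes, weighted here by sample size as a stand-in for precision.
# All numbers are invented, NOT the figures from Stanczak et al. (2022).

effects = [
    # (cohens_d, sample_size) for five hypothetical experiments
    (+0.08, 120),
    (-0.05,  90),
    (+0.03, 200),
    (-0.10, 150),
    (+0.04, 180),
]

pooled_d = sum(d * n for d, n in effects) / sum(n for _, n in effects)
print(round(pooled_d, 2))  # ~0.0: small pluses and minuses cancel out
```

That’s how five experiments, each nudging slightly up or slightly down, can pool to “no difference whatsoever.”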

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter, start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.

Overwhelmed Teachers: The Working-Memory Story (Part II) [Updated with Link]
Andrew Watson

Last week, I offered an unusual take on working memory in the classroom.

Typically, I (and other writers) focus on the dangers of students’ working memory overload. Of course, we SHOULD focus on that problem — when students’ working memory is overloaded, they stop learning (temporarily).

Young teacher wearing sweater and glasses sitting on desk at kindergarten clueless and confused expression with arms and hands raised.

But last week, I focused on the dangers of a teacher’s working memory overload.

If I’m experiencing cognitive muddle, I won’t be able to explain concepts clearly, or answer questions coherently, or remember important school announcements. (Or, remember to buy the dog food on my drive home.)

So, I suggested teachers start by ANTICIPATING potential causes of working memory overload. (Say: teaching a complicated concept, or unusual stresses at home.)

We should also be able to IDENTIFY working memory overload when it happens. (When my own working memory gets overloaded, I lose track of sentences and start garbling words.)

Next up:

Step #3: SOLVING — or mitigating, or reducing — working memory problems.

As always, the specific strategies that benefit me might not work for you. As my mantra goes: “don’t just do this thing; instead, think this way.”

The Power of Routines

By definition, routines live in long-term memory. Therefore, I don’t need to process them in working memory.

For that reason, classroom routines reduce my working memory load. (Important additional benefit: they also reduce working memory load for my students.)

So: I (almost) always begin class with a “do now” exercise. When students enter the classroom, they see that I’ve written questions on the board. They sit down and start writing their answers in their notebooks.

Once that routine has formed, I can use my working memory to process the answers that they’re writing, not to think about what I should be doing at this moment.

After we discuss their answers to my “do now” questions, I (almost) always review the previous night’s homework. I then remind them of their homework for the upcoming class. (This habit means that I don’t have to scramble and shout the assignment at them as they’re going out the door.)

Turn and talk? We have a routine.

Cold call? We have a routine.

Write your answers on the board? See previous answer.

By the way, Peps Mccrea wisely notes that creating routines takes time. That is: we initially spend class time on routine building, and therefore have less time for — say — learning.

But: once those routines are in place, we GAIN lots more time than we spent. And, because my working memory load has been reduced, I’ve got more working memory headroom to teach effectively.

Offload the Job

Of course, lots of the teaching work we do requires nimble and effective response to moment-by-moment events — responses that can’t be made into a routine.

In these cases, recruiting working memory allies can be an enormous boon.

During the 2021-22 school year, I had the great good fortune of sharing a class with another teacher.

When I found myself getting verbally tangled — a clear sign of working memory overload — I would often hand off:

“Oh, wow, I can feel a mental traffic jam coming on. Mr. Kim, can you take over? What was I saying? Can you clarify the muddle I just made?”

He would then un-knot the explanatory thread I had tangled, and I’d have time to regain my mental bearings.

This strategy also helped out during hybrid teaching.

With most of my students seated in the classroom before me, I could quite easily forget all about the one or two “participating” from the iPad.

A wise colleague suggested creating a “buddy” system. The remote students picked an in-class buddy — and the buddy would check in to be sure they understood the discussion, heard their classmates’ comments, and had a chance to ask questions.

Because the buddy had that responsibility, I didn’t have to worry about it so much. Voila: working memory load reduced.

Offload, Part II

As I noted last week, working memory selects, holds, reorganizes, and combines bits of information.

So, the less information I have to “select and hold,” the lower the working memory load.

One easy way to offload the “select/hold” responsibilities: WRITE STUFF DOWN.

A few examples:

Following Ollie Lovell’s advice, I’ve started crafting “bullet-proof definitions” of important concepts. Because such a definition requires precision and nuance, it’s easy to get the words or the phrasing wrong.

For those reasons, I write down my bullet-proof definitions. I don’t have to use working memory to recall the nuances; I’ve got them on the page right in front of me.

Another strategy:

I write down the start/end times for each of my lesson-plan segments.

That is: my lesson plan might note that we’ll have a discussion about comic and tragic symbols in Act 3 Scene 4 of Macbeth — the famous “banquet scene.”

My notes will include the important line-numbers and passages to highlight.

And, I’ll also write down the times: the discussion begins at 10:12, and goes to 10:32.

This level of detail might sound silly. However, if I DON’T write those times, my working memory will be extra cluttered.

That is: part of my working memory will be processing our discussion (“Notice that Benjamin’s point contradicts Ana’s earlier argument. Can we resolve that disagreement?”).

But at least some of my working memory will be trying to calculate how much more time to spend (“If I let this part of the discussion go on too long, then we won’t have time for Act 4, Scene 1. When should I stop?”).

That extra working-memory drag will slow down my processing ability for the scene discussion.

These simple steps to offload working memory demands help me focus on the teaching part of my job.

Your Turn

The strategies I’ve outlined above have helped me reduce the working-memory demands of my own teaching. In theory, anyway, they should help me teach more effectively. (You’ll have to ask my students how effective they’ve really been…)

Of course, these specific strategies might not help you.

The goal, therefore, is NOT that you do what I do. Instead, I hope you’ll think the way I thought: how to anticipate, identify, and reduce working-memory problems.

The more time you devote to these steps, the lower your working memory demands will be. The result: your students, too, will appreciate the clarity and focus of your classroom.



Update: 2/4/24

It seems I’m not the only one focusing on working memory overload for teachers.

Here’s a recent blog post from Doug Lemov — with videos!