I Am a Doctrinaire Extremist; S/he Is a Thoughtful Moderate
Andrew Watson

I recently had an email exchange with an educational thinker and leader who has spent several decades in the field.

After some back and forth, he dismissed my “tenacious belief in the centrality of memorization and retrieval” as ultimately missing the point of learning.

This summary struck me for a number of reasons:

First: it’s true (as far as it goes). I certainly do think that, under some circumstances, memorization can be helpful. And — supported by piles o’ research — I think that retrieval practice helps students form, consolidate, and transfer long-term memories.

Second: this summary implies that I’m in favor ONLY of memorization and retrieval practice. It suggests that I — like Dickens’s Gradgrind — want my students to know facts, facts, facts. (No doubt, someone is aching to use the verb “regurgitate” to capture my purported obsession with facts.)

Third: it further implies that I genuinely don’t care about the meaning behind the facts, my students’ interest in them, or their future usefulness or flexibility.

I am, simply put, a doctrinaire extremist.


Because I see myself quite differently — heck, I recently wrote a book with the name “Goldilocks” in the title — I was taken aback by this rhetorical move.

I’ve been thinking about my new Gradgrind Status since receiving this email, and have arrived at a few tentative conclusions about the nature of educational debates.

We’re Mostly Moderates (?)

As implied above, I see myself as seeking out a reasonable middle ground in many educational debates.

For instance, as I’ve written repeatedly, I think that working memory limitations suggest that novices will benefit from “high-structure” pedagogies more than from more “low-structure” pedagogies. (See this recent blog post for the difficulties in summarizing this  “high-vs-low” debate simply, fairly, and accurately.)

At the same time, as I’ve also emphasized, I think students’ increasing expertise should promote them from high- to low-structure pedagogies.

That is: the more my students know, the more they should be challenged with open-ended, creative, quest-like assignments that will help them consolidate, connect, and extend their knowledge. (If you know Adam Boxer’s book Teaching Secondary Science, you know he makes the same argument.)

Given these three paragraphs — so earnest in their moderation —  you can see why I’m puzzled (and amused) to see myself reduced to a pitchfork carrier.

At the same time — and here I’m guessing — I suspect almost everyone in an educational debate believes they’ve struck up the most reasonable position: probably one in the middle of some continuum.

For instance: my interlocutor explicitly champions a stem-to-stern overhaul of the US educational system.

From his perspective, the system we currently have is so disastrously out of synch with the needs of human flourishing and the genuine truths behind human cognitive and emotional functioning that its wholesale replacement is the only logical option.

That is: although “stem-to-stern overhaul” might sound radical, it is — in fact — an entirely moderate and sensible position given the extremity of the crisis we face.

Just as I think I’m a sensible moderate, he (I suspect) thinks his position is sensible and moderate-given-the-dreadful-circumstances.

We’re all moderates here.

We Are Moderates, but Extremes Exist

When someone accuses me of being a “high-structure extremist,” I have an easy rejoinder at hand: “oh, come on; NO ONE believes any such thing.”

As in: NO ONE follows the Gradgrind method and stuffs students with (facts)³.

In an early draft of that Goldilocks book I was just talking about, I made that very argument. I found a study that contrasts two teaching methods.

Method A: to understand what functions bones serve, students test chicken-bone strength by using vinegar to remove calcium from them.

Method B: students copy down the names of 206 bones from the chalkboard.

I argued — in this early draft — that “no one in the history of the planet has asked students to copy down the names of 206 bones. That’s an absurd straw man.”

A colleague who read this draft took me aside one day and assured me that — sure enough — some schools do exactly that. She, in fact, had taught at such a school.

Now, I’m probably right that no cognitive science research supports this method. But I do have to admit that some people distort cognitive science research to champion this method.

My approach is moderate, but extreme versions of my moderation do exist. In other words: my interlocutor is wrong about me (I think), but not entirely wrong about the world of education.

The Double Flip

This insight, in turn, invites two more aha! moments.

When I worry about the dangers of “low-structure” pedagogy, I might be tempted to highlight examples where teachers throw students overboard into a stormy ocean of cognitive stuff — and ask them to swim to shore. (“In your groups, figure out how to cure rabies …”)

Folks who champion low-structure pedagogies have a handy rejoinder: “NO ONE could misunderstand us to be in favor of such nonsense. That’s an absurd straw man extreme; I’m a sensible moderate.”

And — here’s the first aha! — I suspect low-structure advocates are entirely sincere in this claim. They see this approach as a moderate one, and I’m yoking them to an extreme version of it.

That rhetorical move is as unfair as is my interlocutor’s attempt to make me into Gradgrind.

And yet — here’s the second aha! — those extreme examples do exist; just as extreme versions of direct instruction do.

This tangle of circumstances leads to (at least) two prohibitions:

Low-structure proponents should not say: “those extreme versions of our pedagogy don’t exist!”

Why not? Because they do.

And I should not say: “because those extremes exist, your pedagogy is obviously unsound!”

Why not? Because those extremes are — almost certainly — misunderstandings of their plausibly moderate position.

Honestly, all this moderation is making me a little dizzy.

The Gradgrind Perch

From my new Gradgrindian vantage point, I see two conclusions:

One: although I see myself as a reasonable moderate, others easily perceive me as an extremist — because extreme versions of my way of thinking do exist,

and

Two: although I occasionally see other approaches as extreme, it’s possible/likely that their most thoughtful advocates champion a moderate version of them.

At this point, I’ve maxed out on the even-handed moderation that I can muster. To recover my equilibrium, I’m going to write the names of 206 bones on a chalkboard…

Does Mind-Wandering Harm Learning?
Andrew Watson

If you teach children for several hours a day, you just know that sometimes they’re with you…and sometimes not.


They might be focused on your description of the “angle-side-angle” theorem; or, they might be thinking about the Oscars. (What a speech!)

So we might reasonably ask: “is their mind-wandering a problem? Do they learn less?”

We might be tempted by an uplifting answer: “mind-wandering allows students to make fresh and helpful connections.” If they link angle-side-angle to the Oscars, after all, they have made connections that will help them consolidate this new geometry information.

Or, we might be worried about a tough-minded answer: “it seems sort of obvious that if students aren’t focusing, they almost certainly aren’t learning.”

Which is it?

We’ve got a fair amount of research with adolescents and adults; for them, mind-wandering hampers learning.

But, what about younger students?

Pharaohs and Dinosaurs

As always, research details matter.

In this recent study, researchers asked 8-9 year olds to listen to two stories: one about pharaohs, the other about dinos.

These stories — about 12 minutes long — were interrupted every 90 seconds or so. The students answered whether they were …

… focusing on the story,

… thinking about something unrelated to the story (“It was fun being at the zoo yesterday”),

… thinking about their interest in — or abilities relative to — the story (“I’m not very good at this,” “I’m really interested in this”), or

… distracted by the environment (a slamming door).

Researchers also asked the students how interested they were in the content of the stories.

And — of especial interest — they measured the students’ understanding of the stories both immediately after the story and also one week later.

I’d Rather Know than Not Know

The results include lots of useful information: some surprising, some not.

First: unsurprisingly (to me), students who mind-wandered remembered less.

And, crucially, they remembered less both right away AND a week later.

This point really matters. We know from Nick Soderstrom’s work that initial performance isn’t a reliable indicator of long-term learning.

If we had only short-term results, we might optimistically think that short-term memory problems would give way to long-term improvements.

But: nope.

Students who reported more mind wandering didn’t learn as much.

Second: surprisingly (to me), the students’ interest level didn’t matter.

That is: even the students who REALLY LIKE DINOS didn’t learn as much if they mind-wandered.

Interest doesn’t protect students from the dangers of mind-wandering.

Third: horrifyingly (to me), students lose focus roughly 25% of the time.

In this study, they spent…

… about 10% of their time thinking about something else (“the zoo”),

… about 10% of their time thinking about their ability/interest (“I bet I won’t remember this part”), and

… about 5% of the time distracted by the environment (the slamming door).

If we want students to learn 100% of the material, and they’re mentally elsewhere for 25% of the time…well, that distraction puts a firm cap on what they can learn.
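
To make that cap concrete, here is a rough back-of-the-envelope sketch in Python. It simply applies the percentages reported above to a 12-minute story; it is my illustration of the arithmetic, not a calculation taken from the study itself.

    # Rough illustration only: apply the reported off-task percentages to a 12-minute story.
    story_minutes = 12
    off_task_fraction = 0.10 + 0.10 + 0.05  # unrelated thoughts + self-focused thoughts + environmental distraction
    attended_minutes = story_minutes * (1 - off_task_fraction)
    print(attended_minutes)  # 9.0 -- at most nine of the twelve minutes get full attention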

To be clear: this study took place during the pandemic, so students were at home and participating on Microsoft Teams. We therefore can’t take this finding as an entirely reliable measurement of their off-task thoughts during class.

However, I honestly worry that they might be mentally off task even more during school hours. The average classroom has LOTS more people in it, and features fewer dinosaur videos…

Teaching Implications

I think this study (especially if others confirm its findings) points us toward several tough-minded conclusions:

Mind-wandering really does interfere with learning.

It happens A LOT.

Students’ intrinsic interest doesn’t mitigate this problem.

Each of us will respond to those conclusions differently, but they do offer several suggestions:

First: reduce classroom distractions with energy and purpose.

Second: check for understanding even more frequently than we might think necessary. Doing so will a) help us know if they are mind-wandering, and b) help keep them focused.

Third: remain vigilant even if the topic seems intrinsically interesting. I might think that dinosaurs will keep students’ focus…but in this study they didn’t.

More broadly: I might spend some time looking in the mirror. How distracted am I? How much mind-wandering is a part of my thought routine?

After all: if mind-wandering hampers my own understanding, that result is as bad for me as it is for my students.


Cherry, J., McCormack, T., & Graham, A. J. (2023). Listen up, kids! How mind wandering affects immediate and delayed memory in children. Memory & Cognition, 1-17.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

“Writing By Hand Fosters Neural Connections…”
Andrew Watson

Imagine this conversation that you and I might have:

ANDREW: The fastest way to drive from here to the school is via South Street.

YOU: It is? That seems like a long detour. Why would I go that way?

ANDREW: I didn’t say it was the fastest; I said it was the best because it’s the prettiest.

YOU: You DID say it was fastest…wait, the prettiest? It’s basically junk yards and construction sites.

ANDREW: Yes, but because of all the bakeries, it smells really nice.

YOU: What does that have to do with fastest/prettiest?

ANDREW: Why are you being so unpleasant and difficult? South Street is the best route…

I suspect you would think: “this conversation is very frustrating and unhelpful because the goal posts keep moving.”

That is: I initially claimed that South Street is the fastest…but keep moving my claims as soon as you object. (And, oddly, I’m mad at you for being unreasonable.)

I routinely notice this pattern when I ask questions about the claim that “handwriting is better than laptops for note taking.”

Watch the goalposts move:

CLAIM: Handwriting is better than laptops for note taking. This study says so.

ANDREW: That study starts with the BIZARRE assumption that students can’t learn how to do new things — like, how to take notes correctly. And, research since then has routinely complicated or contradicted it.

CLAIM: I didn’t say handwriting is better because of this study. It’s because writing by hand changes neural networks. This research says so.

ANDREW: That research says that writing by hand helps students learn to write by hand. Of course it does.

But that doesn’t mean that writing by hand helps students learn other things — like, say, history or chemistry or German. Can you show me research supporting that claim?

CLAIM: I can’t, but when students write on laptops they distract students around them.

ANDREW: Yes, but that’s a completely different claim than the one you started with.

CLAIM: Why are you being so unpleasant and difficult? Writing by hand is better than taking notes on laptops!

Once again, I find this conversation frustrating and unhelpful. SO MANY MOVING GOALPOSTS.

I am entirely open to the idea that handwriting is better. But if someone makes that claim, and says it’s “research-based,” I’d like them to provide research that actually shows this claim to be true.


So far, that turns out to be a big ask.

This idea that “handwriting is better than keyboarding” has popped up again (I suspect because of a recent study), so I want to re-investigate this claim — with a keen eye on those goalposts.

Reasonable Start

If you see a headline that says, “Why Writing by Hand Is Better for Memory and Learning,” you might interpret that claim roughly this way:

Students who take handwritten notes — in their 6th grade history class, say, or their 10th grade science class — remember more of that material after 2 weeks than students who took notes on laptops.

Yes, I conjured up some of those specifics: “6th grade history,” “two weeks later.” But those seem like reasonable extrapolations. What else could the claim substantively mean?

Briefly: plausible goalpost = “students remember more history 2 weeks later.”

So, let’s look at the recent research being used to support this claim.

Here’s a very basic question: “how did the researchers measure how much the students learned and remembered?”

Did the students take a quiz two weeks later? Did they undertake a “brain dump” the following day? How, precisely, do we know what they learned?

The answer is:

The researchers did not measure how much the students learned/remembered.

Honestly. No quiz. No brain dump. Nothing.

And yet, even though the study doesn’t measure memory or learning, it is being used to argue that handwriting enhances memory and learning.

I find this astonishing.

Instead, the study measures activity “in brain regions associated with memory and learning.”

Did you notice something?

Goalpost plausibly was: “students remember more history 2 weeks later.”

Goalpost now is: “more activity in important brain regions.”

Grrr.

Getting Specific

When evaluating “research-based” claims, it’s helpful to know exactly what the participants in the research did.

So, these 36 participants wrote the same fifteen words multiple times. Sometimes they wrote with a stylus on a tablet; sometimes they typed using only their right index finger. (BTW: all the participants were right handed.)

Now, this insistence on “right index finger” makes sense from a neuro-research perspective. If both “handwriters” and “keyboarders” are using one hand, then the researchers reduce lots of confounding variables.

At the same time, this emphasis also leads to highly artificial circumstances.

Presumably some people type with one finger. But, I’m guessing that most people who want to take laptop notes don’t. I suspect they want to take laptop notes because they have some degree of facility on a keyboard.

So:

Goalpost initially was: “students remember more history 2 weeks later.”

Goalpost then was: “more activity in important brain regions.”

Goalpost now is: “more activity in important brain regions when participants write as they usually do than when they type in a really, really unnatural way.”

Double grrr.

It is, of course, helpful to know about these differences in neural responses. But I don’t think they plausibly add up to “students remember more.” Because — remember — no one measured learning.

Lest I Be Misunderstood

In such conversations, I’m often misunderstood to be confident about the right answer. That is: I might seem to be saying “I’m confident that laptops are better than handwriting for learning.”

I am NOT saying that.

Instead, I’m asking for research that directly measures the claim being made.

If I say to you: “research shows that handwriting is better for learning than laptops,” I should be able to show you research that directly measures that claim.

If, instead, I have research showing that handwriting develops neural networks that might be beneficial for learning, I should say that.

My frustration about this point stems from a broader concern.

Over and over, I find that non-teachers cite research — especially neuroscience research — to boss teachers around. While I certainly do believe that teachers should know about pertinent research findings (that’s why I write this blog!), I also believe that we need to acknowledge the limits of our research-based knowledge.

I just don’t think that research (yet) demonstrates that handwritten notes generate more learning than laptop notes.

Overall, I’m inclined to believe:

Practicing fine motor skills (by, say, handwriting) is really important for young learners.

Practicing handwriting makes us better at handwriting — and other word-related skills.

As students get older and more facile with a keyboard, the benefits of handwriting vs. keyboarding will probably depend on the student, the subject, the kind of notes being taken, etc.

And if I see more than one study directly testing the claim that handwriting helps people learn better, I’m entirely open to that possibility.

But at least so far, that claim is not — by any definition that seems reasonable to me — “research-based.”


Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: A high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Weather Forecasting and Cognitive Science
Andrew Watson

I live in Boston, and we just had an ENORMOUS snow storm. TWELVE INCHES of snow fell in just a few hours. It was, as we say, “a monstah.”

Oh, wait a minute, that didn’t happen.


The FORECAST said we’d get a monstah. In reality, by the end of the day, exactly 0.0 inches of snow had accumulated on my sidewalk. It was as close to “nothing” as was the Patriots’ chance of winning the Super Bowl this year.

You can imagine the public response:

Hah! All the “experts” with all their science-y equipment and equations and models and colorful images … they all got it wrong. AGAIN!

That’s it: I’m done with all this weather forecasting nonsense. I’ll rely on my trick knee to tell me when the weather is a-changing.

While that response is predictable, I also think it’s unfair. In fact, believe it or not, it reminded me of the work we do at Learning and the Brain.

In most ways, weather forecasting has almost nothing to do with cognitive science. But the few similarities might help explain what psychology and neuroscience research can (and can’t) do for teachers.

I want to focus on three illustrative similarities.

Spot the Butterfly

First, both meteorologists and cognitive scientists focus on fantastically complex systems.

In the world of weather:

As the butterfly effect reminds us, small changes over here (a butterfly flapping its wings in my backyard) could cause enormous changes over there (a typhoon in Eastern Samar).

In the world of education:

Whether we’re looking at neurons or IEPs or local school budgets or working memory systems or mandated annual testing, we’ve got an almost infinite number of interconnected variables.

Research might tell us to “do this thing!”, but the effect of that recommendation will necessarily depend on all those other variables.

We should not be shocked, therefore, that a one-step intervention (e.g.: growth mindset training) doesn’t have exactly the effect we want it to. That one intervention interacts with all those other complex systems.

The research-based suggestion isn’t necessarily wrong, but it also can’t briskly overcome all the other forces that influence learning.

Possibilities and Probabilities

Second: like weather forecasts, research-based suggestions focus on probabilities.

That is: the weather channel didn’t say “Boston is going to get 12 inches of snow!”

If you looked past the simplified headline, it said:

“We’ve seen conditions more-or-less like this 100 times before.

2 of those times, we got less than 2 inches

8 times, we got 2-6 inches

25 times, 6-10 inches

45 times, 10-14 inches

15 times, 14-18 inches

5 times, more than 18 inches.

Make plans accordingly.”

They don’t know for sure; they’re making predictions based on previous cases — and those previous cases provide a range of possibilities.
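
To see that logic spelled out, here is a minimal Python sketch of the “forecast by historical analogy” idea, using the counts from the hypothetical forecast above. Real forecast models are vastly more sophisticated; this only shows how past cases become probabilities.

    # A minimal sketch: turn counts of similar past situations into outcome probabilities.
    analog_counts = {
        "under 2 inches": 2,
        "2-6 inches": 8,
        "6-10 inches": 25,
        "10-14 inches": 45,
        "14-18 inches": 15,
        "over 18 inches": 5,
    }
    total = sum(analog_counts.values())
    for outcome, count in analog_counts.items():
        print(f"{outcome}: {count / total:.0%} chance")
    # Note that "under 2 inches" still happens about 2% of the time --
    # which is roughly what Boston actually got.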

Notice, by the way, that the forecasters weren’t exactly wrong. New York and Philly got pounded; they got the “monstah” we were expecting.

But — because a butterfly somewhere flapped its wings — the storm went slightly south and left us alone.

So, too, with psychology and neuroscience research aimed at the classroom.

Researchers can say: “this strategy helped students score 5% higher on the end-of-year exam … ON AVERAGE.”

That means the strategy (probably) helped more students than it hurt. But the effects were different student-by-student.

Who knows: the strategy could have made learning harder for some students.

We’re looking at probabilities, not panaceas.

The Bigger the Claim…

Third: expert forecasters get their predictions right more often than they get them wrong. And — this is crucial — the “wrong” results come more often for big, outlier events.

Sunny days in June? Glum rain in November?

Relatively easy to predict.

A once-in-a-generation hurricane? A monstah snow storm?

MUCH harder to predict. We just have less data about unusual events because…they’re unusual!

So too in the world of research-based teaching advice.

I honestly think that researchers get their advice “right” much of the time — at least within the narrow confines of the context they describe.

That is: a large collection of well-designed studies probably merits careful consideration.

At the same time, if researchers loudly announce a big, outlier conclusion, we should be ready for that claim to collapse upon further investigation.

Imagine that researchers claim…

… dancing a hornpipe helps students learn fractions, or

… standing in a “power pose” does something worthwhile/important, or

… teachers don’t need to know anything about a topic to teach it well.

In each of these cases, the extremity of the claim should prepare us for doubts.

Equally true, let’s say “research shows” that a particular teaching strategy has a HUGE effect on learning.

It’s possible, but honestly kinda rare.

For instance, as I wrote recently, I found a meta-analysis claiming that the “jigsaw” method has a Cohen’s d value of 1.20. As stats people know, that’s simply ENORMOUS.

It’s possible…but I wasn’t at all surprised to find very little support for that claim. I honestly can’t think of any teaching intervention that makes that much of a difference on its own.

TL;DR

Like weather forecasting, psychology and neuroscience research…

… looks at enormously complicated systems,

… offers conclusions best understood as probabilities, and

… is likeliest to be right when it makes modest claims.

In brief: this field can be fantastically useful to classroom teachers, as long as we understand its challenges and limitations.

Our teacherly “trick knee” might be right from time to time. But wisely considered research will probably be better.

Constructivism, or Constructivism, Part II
Andrew Watson

Last week, I wrote about the often-testy debates that surround “constructivism.”

One possible cause for the heat behind these debates: we have at least two different meanings for the word “constructivism” in education.

Definition A emphasizes the physical activity and social interactivity of the classroom.

If students are up and moving about, if they’re working and talking and building ideas together, that’s “constructivism.”

Definition B emphasizes the mental activity within the student.

If the student is thinking and learning, then by definition they have “constructed” new meaning.

If students aren’t moving about and aren’t interacting with peers, but they ARE LEARNING — well, that’s constructivism. (Of course, if they learn while moving and interacting, then that’s also constructivism.)

Can we use these distinct definitions to develop research that sheds light on this debate?

Guides and Sages

A study published in 2019 — led by Dr. Elizabeth Fischer and Dr. Martin Hänze — explores this question:

Is the definition A constructivism (students moving and interacting) the best way to accomplish the definition B constructivism (students learning)?

To answer this question, they undertake a remarkably thorough and complex study with German college students.

As I will explore in detail below, I do think the study has important limitations — limitations, by the way, which the authors themselves highlight.

But I do think …

… that they’ve undertaken a good-faith effort to answer this question with a perfectly reasonable approach, and

… that they’ve taken great care with the nitty-gritty research specifics.

In other words: this study might not be perfect (no study is), but it’s not somehow a partisan hack job.

Some highlights: Fischer and Hänze surveyed over 1700 students in 80 different college courses — from physics to social sciences to language to music. So, they have a large sample size from an unusually diverse range of disciplines.


These surveys — both at the beginning and the end of the term — measured everything from the students’ initial interest in the topic, to their sense of how much they learned, to the amount of mental effort that the course required.

And the researchers themselves evaluated these 80 courses for two different approaches to teaching:

They measured “student-activating methods,” where all students were working “without direct involvement of the teacher”: reading, exercises, group work, games.

They also measured “teacher-guided methods,” which focused primarily on the teacher’s instruction: lecture, demonstration, videos.

Here’s the logical chain they explored:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Second: which classroom approach — “student activating methods” or “teacher-guided methods” — resulted in more mental effort?

In this way, Fischer and Hänze could distinguish between the two kinds of constructivism. Did “student-activating methods” (definition A) lead to more learning (definition B)?

Tentative Conclusions

Unsurprisingly, given the amount of data they gathered, these researchers reached a great many detailed conclusions.

For instance: teachers devoted A LOT more time to “teacher-guided methods” (64% of class time) than to “student-activating methods” (13.8%).

Let’s focus on the two questions listed above:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Unsurprisingly: yes. Students who felt that they thought harder also thought that they learned more. (Teachers everywhere are breathing a sigh of relief.)

Second: which classroom approach — “student activating methods” or “teacher-guided methods” resulted in more mental effort?

The results were clear: students reported thinking harder and learning more with more “teacher-guided methods,” and less with “student-activating methods.”

In other words: in these college classes, definition A constructivism (student activity/interaction) resulted in less definition B constructivism (student learning) than more “teacher-guided” methods.

Recommendations and Limitations

Given their findings, you might expect Fischer and Hänze to demand an end to “student-activating methods.”

THEY DON’T.

Do our findings indicate that university teachers should stop being the guide on the side and return to being the sage on the stage? We refrain from deducing this kind of prescription.

Instead, they want to slow the rush in the other direction. They don’t think their evidence strong enough to forbid guide-on-the-side teaching, but it certainly doesn’t recommend it.

Part of their hesitation stems from the limitations to their approach — limitations which they themselves highlight.

First: their data show correlation, not causation. (Always an important distinction.)

Second: they focus on student survey data. Such data are, by definition, subjective. For instance, students report that they learned more from “teacher-guided methods,” but did they? We don’t know.

By the way, researchers had good methodological reasons to rely on surveys. But that reliance does require cautious interpretation.

Third: recall that they found “teacher-guided methods” during 64% of class time, much more than “student-activating methods” (13.8%).

The researchers reasonably wonder if “student-activating methods” are less effective because the professors haven’t practiced them very much, and aren’t very good at them yet.

That’s a reasonable question.

Long-time readers know my own hypothesis. I suspect that “teacher-guided methods” probably help students more early in their learning process, and “student-activating methods” help more later in that process. This study doesn’t measure that variable, and doesn’t even try to answer the question.

So: Constructivism, or Constructivism?

When these German researchers make a good-faith effort to answer that question in college courses, they arrive at two tentative findings:

Unsurprisingly, students learn more when they think more. Thinking causes “construction” of new knowledge.

Perhaps surprisingly, “student activating methods” do not help students think more than “teacher guided methods.” Instead, students think more when they sit and listen to a well-prepared talk than they do moving about and interacting with one another.

Although Fischer and Hänze are appropriately modest in their recommendations, their research does suggest that we should slow down and ask challenging questions before we insist on too much student movement and interaction.


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Constructivism, or Constructivism? Part I
Andrew Watson

If you want to launch a feisty debate in your next faculty meeting, stand up and ask “who here supports ‘constructivism’?”

This question all but guarantees heated discussion, because …

… MANY people identify strongly with one side or another (“I definitely DO!” “Well, I certainly DON’T!!”)

and

… people on one side often think ill of people on the other (“How can you possibly hold that immoral position?”).

We’re talking Hatfields and McCoys here.

Typically, this blog would look to research to resolve such a debate. Alas, the debate resists a research-based answer, because we can easily find research supporting both positions.


Whether you’re championing PBL or direct instruction or inquiry learning or Direct Instruction or guiding from the side or a knowledge-rich curriculum, you’ve got research to bolster your claim.

Is there any way out of this tangle? Will we always disagree?

A Fresh Place to Start?

In many fields, basic definitions can be helpfully straightforward.

If I’m a carpenter, I can tell you what a “nail” is. In fact, I can show you several. They’re right there.

If I’m a surgeon, I can tell you what a “pancreas” is. I can tap it with my scalpel. It’s right there.

When a mechanic repairs my car, she can show me the “rust” on my “carburetor.” See? Right there.

And so forth.

In psychology and education research, alas, basic definitions get really tricky.

How, exactly, do I define “attention”? Or, “engagement”? Or, heck, how do I define “learning”?

If I can’t define these core concepts, then I can’t develop a precise way to measure them. And research relies on precise measurements…

For that reason, we might start reexamining our Hatfield/McCoy feud about “constructivism” by looking at basic definitions.

What, exactly, does the word “constructivism” mean?

Take Two

This potentially pedantic question turns out to have helpful results.

We’ve got (at least) two fundamentally different ways of thinking about constructivism.

First: physical + social activity = mental change.

If students are up and DOING carefully designed activities — if they’re moving about and (preferably) interacting with other students — this combination of physical and social work leads to the construction of new mental models.

This kind of “active” classroom work — unlike the “passive” classrooms where students sit and listen to a teacher explain — results in learning.

In other words: we can tell by looking at the students in the classroom whether or not “constructivist teaching” is happening.

Secondmental change = mental change.

This second definition focuses on learning more than on teaching.

It says: OF COURSE students learn by constructing their own mental models. (Unlike The Matrix, reality does not allow us to download mental models into our students.)

If students have learned, they have constructed. ALL LEARNING is by definition constructivist.

So the core question is: what teaching methods produce that result?

If, say, a lecture results in learning, then a lecture is “constructivist.”

If a jigsaw activity results in learning, then it too is “constructivist.”

In other words: we can’t tell by looking at the students whether or not constructivist teaching is happening.

Instead, we can tell only if we determine how much they learned. If they learned, they constructed. Voila: constructivism.

Rethinking Strong Claims

This definitional muddle might explain some of the “are you a constructivist” debate.

According to the first definition, I’m a constructivist teacher if I teach in particular ways: my students are up and about, working and talking with each other and — as much as possible — finding their own way through a particular concept.

According to the second definition, I’m a constructivist teacher if my students learn.

In other words: EVERYONE wants to be a constructivist according to the second definition. The first definition, however, prompts the feud.

Champions of the first definition believe those methods result in the second result. That is: “physical + social” constructivism (“guide on the side”) should result in more learning.

Critics of the first definition believe that “sage on the stage” teaching results in more learning — more construction of mental models.

Once we separate these two ways of thinking about “constructivism,” we can ask:

“How might we use this distinction in research? Can we determine which approach — saging or guiding — results in more construction of new mental models?”

I’ve found a study that tries to do just that. I’ll summarize it in next week’s blog post…


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Should Students Make Their Own Flashcards, Take II
Andrew Watson

A respected colleague recently posted a study about making flashcards.

The basic question: should students make their own flashcards, or use premade flashcards?

In my experience, people really like the answer “students should make their own.” This approach feels more active/good, compared with the passive/bad strategy of using someone else’s.

I have had two concerns with this popular answer.

First: how will students know what flashcards to make? Because students don’t know what they don’t know, they probably don’t know what they should be studying with flashcards.

Second: making flashcards takes time. If students have only 30 minutes to study, and spend 5-10 of those minutes making cards, then they have much less time to review.

And, as I wrote back in March of 2023, the most on-point study I could find confirmed those suspicions. Students who made their own flashcards did not learn more — although they DID spend more time.

If you can learn the same amount in less time, why not do that?

For all these reasons, I was surprised to read the title of the study my friend recommended:

“User-Generated Digital Flashcards Yield Better Learning Than Premade Flashcards”

What’s going on? Have I been giving you bad advice?

A Feature, Not a Bug

Before I explore this new study — led by Dr. Steven Pan — I want to focus on the seemingly vexing problem of conflicting studies.

That is: Lin & McDaniel’s study from 2018 says “making your own flashcards DOESN’T help you learn.” Pan’s study from 2024 says “making your own flashcards DOES help you learn.”

What’s going on here?

I started coming to Learning and the Brain because I wanted research-based answers to my teaching questions. (Perhaps you started coming for the same reason.)

The process, I thought, would go like this:

I have a question.

Researchers tell me the correct answer.

I do what the researchers tell me.

So simple.

Here’s the problem. Psychology is complicated. People are COMPLICATED. Teaching and learning are COMPLICATED.

When we start using psychology research methods to study people who are teaching and learning, we’re studying fantastically complicated systems.

OF COURSE we get contradictory answers to specific questions.

This result doesn’t mean something is wrong or broken. It means that we’ve done enough research to unearth the genuine complexity of the questions we’re exploring: psychology + people + teaching + learning.

The fact that we’ve found contradictory results means that we’re doing this right. In other words: if all the research I find tells me the same thing, I probably haven’t looked at enough research.

So: if contradictory findings are the norm, what do we actually do in our classrooms?

Well: we start by looking at the specifics of the research…

Specifics of the Research

I’ve got two different studies with contrasting headlines. To resolve the differences, let’s get specific.

First, Pan’s research team executed all the steps we want them to.

They’ve got precise hypotheses and control groups and abstruse calculations and colorful graphs. The conflict between Lin (2018) and Pan (2024), in other words, doesn’t result from unprofessional work.

Over the course of their six studies, Pan’s team found that students remembered more when they came up with their own definitions of words than they did when they used flashcards with pre-written definitions.

Likewise, students remembered more when they wrote their own examples on flashcards than when they used pre-written examples.

And so forth.

So: YES, students spent more time making their own cards. But because they thought harder while they did so, they ultimately remembered and applied the concepts better 2 days later.

How then can we explain the difference between the two studies?

Focus, Focus, Focus

Because these two studies are different, we can — of course! — find lots of differences.

Here’s a key difference in my view:

In the (2024) Pan study, the researchers told the students which words to study.

In the (2018) Lin and McDaniel study, they didn’t.

So, students learn more useful stuff when we tell them what stuff is useful — that’s the recent Pan study.

They don’t learn more useful stuff when they decide on their own what’s useful — that’s the older Lin & McDaniel study.

My own hypothesis, thus, goes like this:

Students will probably learn more from substantive flashcards that they generate (paraphrase, examples, etc.) than from pre-written flashcards … as long as they get good guidance about the terms and concepts that merit flashcards.

In the absence of that guidance, they probably won’t learn more — because the pre-written flashcards will focus on more useful words and concepts.

Now, my hypothesis hasn’t been tested directly, as far as I know. But it both makes sense and explains the differences between these sets of studies.

TL;DR

Should students make their own flashcards?

As long as

First: they generate new information (rather than simply copying it down), and

Second: we tell them which words and concepts merit flashcards,

then, probably yes.

Given the complexity of this research field, however, we should probably keep our eyes out for further research.


Pan, S. C., Zung, I., Imundo, M. N., Zhang, X., & Qiu, Y. (2022). User-generated digital flashcards yield better learning than premade flashcards. Journal of Applied Research in Memory and Cognition.

The Jigsaw Advantage: Should Students Puzzle It Out?
Andrew Watson

The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.


I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
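
For readers who don’t live in stats-land: Cohen’s d expresses the difference between two group averages in standard-deviation units. Here is a small Python sketch of what a d of 1.20 would imply, assuming roughly normal score distributions; this is my illustration, not a calculation from the meta-analysis itself.

    from math import erf, sqrt

    # Cohen's d = (treatment mean - control mean) / pooled standard deviation.
    # Below: what a given d implies, assuming roughly normal score distributions.
    def share_of_controls_below_average_treated_student(d):
        # Standard normal CDF at d: the share of control-group students who would
        # score below the *average* student in the treatment group.
        return 0.5 * (1 + erf(d / sqrt(2)))

    print(share_of_controls_below_average_treated_student(1.20))  # ~0.88

In other words, a d of 1.20 would mean the average jigsaw student outscores roughly 88% of comparable control students. That is exactly the kind of claim that demands unusually strong evidence.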

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (TL;DR: lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter — start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.

Putting It All Together: Connecting “Motivation” with “Teaching Style”
Andrew Watson

Researchers tend to focus on particular topics in education.

Some folks study attention, while others look at motivation.

A research team here might look at working memory, while that team over there considers sleep.

And: let’s not forget about technology!

Of course, it’s useful to have specialists in each of these fields; each one is highly complicated.

At the same time, as teachers, we need to understand how all the pieces fit together.


After all, if I’m successfully managing my students’ working memory load, but they’re not paying attention, then all my working-memory efforts have gone to waste.

For this reason, we’d like greater clarity about putting the pieces together.

For instance, you might wonder: what’s the relationship between cognitive load and motivation?

I’m so glad you asked…

Calling Australia

In a recently-published study, an international group of researchers asked almost 1300 Australian students in grades 7-10 to fill out surveys about their academic experience.

Some questions asked students to rate their teachers’ efforts to reduce the complexity of the material (that is, the “instructional load”):

On a scale of 1-7, they responded to statements like:

“When we learn new things in class, the teacher makes it easy at first.”

“As we work on tasks or activities in this class, the teacher gives good assistance.”

“In this class, the teacher makes sure we get enough practice before moving on to new tasks or activities.”

Other statements focused on the mental work the students were doing (that is, their “cognitive load”):

“The work in this class is very difficult for me.”

“The way information is presented in this class is too complex.”

Still others inquired about the teachers’ motivational strategies, and the students’ experience of motivation:

“The teacher communicates which learning goals he/she expects you to accomplish by the end of the lesson;”

“The teacher doesn’t plan or organize too much. The lesson will just happen;”

“The teacher offers a very interesting, highly engaging lesson;”

“The teacher insists that you have to finish all your required work—no exceptions, no excuses”

As you can see right away, these researchers have an ENORMOUS amount of data to crunch as they calculate the relationships among all these variables.
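
To make the basic survey-to-statistics pipeline concrete, here is a minimal Python sketch with made-up ratings. The numbers, the item groupings, and the simple correlation below are my illustration; the authors’ actual modeling is far more elaborate.

    from statistics import correlation, mean

    # Hypothetical 1-7 ratings from five students; each inner list holds one
    # student's answers to the items on a given scale.
    load_reduction_items = [[6, 5, 6], [3, 4, 3], [7, 6, 6], [2, 3, 2], [5, 5, 4]]
    motivation_items = [[6, 6, 5], [4, 3, 4], [7, 7, 6], [3, 2, 3], [5, 4, 5]]

    # Average each student's item ratings into a single scale score...
    load_scores = [mean(items) for items in load_reduction_items]
    motivation_scores = [mean(items) for items in motivation_items]

    # ...then ask how the two scales move together across students (Pearson r).
    print(correlation(load_scores, motivation_scores))  # requires Python 3.10+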

By the way, we should note the researchers’ method here: they’re considering survey data. Some people — quite reasonably — worry that survey data can’t really capture classroom reality.

For instance, a student might perceive that “the teacher doesn’t plan or organize too much. The lesson will just happen.” But that perception might be entirely inaccurate.

With that caveat in mind, what did the researchers find?

The (Basic) Results: Reversing Assumptions

Given all the variables measured above, we can expect LOTS of findings — reported in graphs and equations and declarations.

What does it all boil down to?

The simple summary reverses a common belief.

Teachers often assume that “when my students feel motivated, they will learn more.”

These data suggest that “when my students learn more, they feel motivated.”

More precisely: according to the survey data, teachers who ensure that cognitive load remains manageable help students learn more. That additional learning correlates with higher subsequent motivation.

This finding makes a rough-n-ready sense. For example: my students rarely clamor to learn grammar; they are, honestly, not super motivated to do so.

However, part of their demotivation results from the fact that grammar learning is notoriously difficult. (“Object complements,” anyone?) My students just feel bad when they keep failing at it.

If I teach well — that is, if I reduce the cognitive load of learning grammar — they are likelier to succeed at doing so. Result: they feel less demotivated. Heck, they might even feel motivated.

The (Advanced) Results: All That Lingo

Research ain’t research if it doesn’t include lots of niche-y terminology.

Unfortunately, as is often the case, the terminology here gets rather confusing.

Because the research team draws on two very different fields (working memory, motivation), and two different theories (cognitive load, self-determination), we can easily get bogged down in the terminological niceties.

For example, the researchers consider a teacher’s motivational style along two axes: do the teachers support or thwart students’ needs; are they highly directive or not. The resulting four quadrants are broken down into further sub-categories, resulting in — I’m not joking here — EIGHT different teaching styles: “attuning,” “guiding,” “clarifying,” “demanding,” “domineering,” and so forth.

The word “autonomy” — quite an important word in self-determination theory — leads to particular puzzles.

We might reasonably think that “autonomy” means “the teacher encourages students by giving them relatively more freedom to explore and solve problems on their own.” However, in this terminological world:

“Autonomy support … directs attention towards activities that are necessary for learning, thus reducing the proportion of cognitive load that is extraneous.”

Which is to say, teachers support autonomy by “directing” students in specific ways. Lots of student freedom might sound “autonomous,” but that’s not what the word “autonomy” means in this context.

To Sum Up

I’ve focused on this terminology because I think the study’s results are easy to misunderstand.

Here is their conclusion, quoted from the abstract:

We conclude that by using load-reducing strategies and a motivating style characterized by structure and autonomy support, teachers can reduce students’ cognitive load and improve their self-regulated motivation, engagement, and achievement.

In that sentence, “autonomy support” does NOT mean “give students lots of freedom” or “be sure to emphasize voice and choice.” Instead, it means “students can think effectively and autonomously when teachers direct them towards specific and manageable cognitive work.”

That effective thinking, in turn, results in more learning and higher levels of motivation.

The big picture: by putting together two distinct research fields — cognitive load theory and self-determination theory — we can start to get a clearer picture of complex classroom realities.

A final note:

The title of this blog post includes the phrase “teaching style” because the researchers use it.

That phrase, however, does NOT mean that “learning styles” are a thing. They’re not.


Evans, P., Vansteenkiste, M., Parker, P., Kingsford-Smith, A., & Zhou, S. (2024). Cognitive Load Theory and Its Relationships with Motivation: A Self-Determination Theory Perspective. Educational Psychology Review, 36(1), 7.

Overwhelmed Teachers: The Working-Memory Story (Part II) [Updated with Link]
Andrew Watson

Last week, I offered an unusual take on working memory in the classroom.

Typically, I (and other writers) focus on the dangers of students’ working memory overload. Of course, we SHOULD focus on that problem — when students’ working memory is overloaded, they stop learning (temporarily).

Young teacher wearing sweater and glasses sitting on desk at kindergarten clueless and confused expression with arms and hands raised.

But last week, I focused on the dangers of a teacher’s working memory overload.

If I’m experiencing cognitive muddle, I won’t be able to explain concepts clearly, or answer questions coherently, or remember important school announcements. (Or, remember to buy the dog food on my drive home.)

So, I suggested teachers start by ANTICIPATING potential causes of working memory overload. (Say: teaching a complicated concept, or unusual stresses at home.)

We should also be able to IDENTIFY working memory overload when it happens. (When my own working memory gets overloaded, I lose track of sentences and start garbling words.)

Next up:

Step #3: SOLVING — or mitigating, or reducing — working memory problems.

As always, the specific strategies that benefit me might not work for you. As my mantra goes: “don’t just do this thing; instead, think this way.”

The Power of Routines

By definition, routines live in long-term memory. Therefore, I don’t need to process them in working memory.

For that reason, classroom routines reduce my working memory load. (Important additional benefit: they also reduce working memory load for my students.)

So: I (almost) always begin class with a “do now” exercise. When students enter the classroom, they see that I’ve written questions on the board. They sit down and start writing their answers in their notebooks.

Once that routine has formed, I can use my working memory to process the answers that they’re writing, not to think about what I should be doing at this moment.

After we discuss their answers to my “do now” questions, I (almost) always review the previous night’s homework. I then remind them of their homework for the upcoming class. (This habit means that I don’t have to scramble and shout the assignment at them as they’re going out the door.)

Turn and talk? We have a routine.

Cold call? We have a routine.

Write your answers on the board? See previous answer.

By the way, Peps Mccrea wisely notes that creating routines takes time. That is: we initially spend class time on routine building, and therefore have less time for — say — learning.

But: once those routines are in place, we GAIN lots more time than we spent. And, because my working memory load has been reduced, I’ve got more working memory headroom to teach effectively.

Offload the Job

Of course, lots of the teaching work we do requires nimble and effective response to moment-by-moment events — responses that can’t be made into a routine.

In these cases, recruiting working memory allies can be an enormous boon.

During the 2021-22 school year, I had the great good fortune of sharing a class with another teacher.

When I found myself getting verbally tangled — a clear sign of working memory overload — I would often hand off:

“Oh, wow, I can feel a mental traffic jam coming on. Mr. Kim, can you take over? What was I saying? Can you clarify the muddle I just made?”

He would then un-knot the explanatory thread I had tangled, and I’d have time to regain my mental bearings.

This strategy also helped out during hybrid teaching.

With most of my students seated in the classroom before me, I could quite easily forget all about the one or two students “participating” from the iPad.

A wise colleague suggested creating a “buddy” system. The remote students picked an in-class buddy — and the buddy would check in to be sure they understood the discussion, heard their classmates’ comments, and had a chance to ask questions.

Because the buddy had that responsibility, I didn’t have to worry about it so much. Voila: working memory load reduced.

Offload, Part II

As I noted last week, working memory selects, holds, reorganizes, and combines bits of information.

So, the less information I have to “select and hold,” the lower the working memory load.

One easy way to offload the “select/hold” responsibilities: WRITE STUFF DOWN.

A few examples:

Following Ollie Lovell’s advice, I’ve started crafting “bullet-proof definitions” of important concepts. Because such a definition requires precision and nuance, it’s easy to get the words or the phrasing wrong.

For those reasons, I write down my bullet-proof definitions. I don’t have to use working memory to recall the nuances; I’ve got them on the page right in front of me.

Another strategy:

I write down the start/end times for each of my lesson-plan segments.

That is: my lesson plan might note that we’ll have a discussion about comic and tragic symbols in Act 3 Scene 4 of Macbeth — the famous “banquet scene.”

My notes will include the important line-numbers and passages to highlight.

And, I’ll also write down the times: the discussion begins at 10:12, and goes to 10:32.

This level of detail might sound silly. However, if I DON’T write those times, my working memory will be extra cluttered.

That is: part of my working memory will be processing our discussion (“Notice that Benjamin’s point contradicts Ana’s earlier argument. Can we resolve that disagreement?”).

But at least some of my working memory will be trying to calculate how much more time to spend (“If I let this part of the discussion go on too long, then we won’t have time for Act 4 Scene 1. When should I stop?”)

That extra working-memory drag will slow down my processing ability for the scene discussion.
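(If you enjoy offloading even the clock math, here’s a tiny, purely illustrative Python sketch; the segment names, lengths, and start time are hypothetical, and my actual lesson plans are just handwritten notes. The point is simply that the schedule gets computed once, before class, rather than in working memory mid-discussion.)

```python
from datetime import datetime, timedelta

# Purely illustrative: turn a class start time and segment lengths (in minutes)
# into a written schedule, so no one does clock math mid-discussion.
segments = [
    ("Do now", 8),
    ("Homework review", 10),
    ("Macbeth 3.4 discussion", 20),
    ("Preview Act 4 Scene 1", 7),
]

clock = datetime.strptime("10:00", "%H:%M")
for name, minutes in segments:
    end = clock + timedelta(minutes=minutes)
    print(f"{clock:%H:%M}-{end:%H:%M}  {name}")
    clock = end
```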

These simple steps to offload working memory demands help me focus on the teaching part of my job.

Your Turn

The strategies I’ve outlined above have helped me reduce the working-memory demands of my own teaching. In theory, anyway, they should help me teach more effectively. (You’ll have to ask my students how effective they’ve really been…)

Of course, these specific strategies might not help you.

The goal, therefore, is NOT that you do what I do. Instead, I hope you’ll think the way I thought: how to anticipate, identify, and reduce working-memory problems.

The more time you devote to these steps, the lower your working memory demands will be. The result: your students too will appreciate the clarity and focus of your classroom.


 

Update: 2/4/24

It seems I’m not the only one focusing on working memory overload for teachers.

Here’s a recent blog post from Doug Lemov — with videos!