Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

How to Change Students’ Minds? Create Surprise…
Andrew Watson

Sometimes teaching is challenging. And sometimes, it’s REALLY challenging.

For instance:

Because I’m an English teacher, I want my students to know the word “bildungsroman.” (It means, “a novel of character formation.” Their Eyes Were Watching God depicts Janie’s formation as a complete person — so, it’s a bildungsroman.)

Alas, students find that word to be disturbingly odd: “bildungswhat???” they cry.

And the definition is at times perplexing. Are the Harry Potter novels examples of a bildungsroman? How about The Book Thief?

So, learning that definition presents a challenge.

But, other literary terms create a bigger learning challenge.

As an English teacher, I also want my students to know the definition of the word “comedy.”

In this case, my students and I face a much different problem. That is: my students think they already know what ‘comedy’ means.

They think it means, basically, “a story that’s funny.”

In the world of literary analysis, however, “comedy” doesn’t mean funny.

Basically, the definition goes like this: “ ‘tragedy’ ends in death or banishment; ‘comedy’ ends in marriage, implying birth.” (Lots more to say, but that’s a good place to start.)

So: according to this definition, sitcoms aren’t comedy.

And all sorts of stories can be comic, even if they’re not even a little bit funny. (I just read a murder mystery which has a comic ending: one of the protagonists goes on a date — implying the potential for marriage.)

In research world, we call this problem a “prior misconception.”

That is: my students think they know the correct answer (“comedy” = funny), but the question really has a different answer (“comedy” = ending implying marriage).

Sadly, prior misconceptions make learning harder: they complicate the process of learning correct answers or concepts.

So: what’s a teacher to do?

A Scientific Method?

Although the examples I’ve offered focus on teaching English literary terminology, this question gets most research attention for teaching scientific concepts.

A brightly colored beach ball floating in a vibrantly blue pool

For instance: imagine pushing a solid ball underwater. How much liquid will it displace?

Younger students have an important misconception about this question. They typically think that the amount of water depends on the WEIGHT of the ball, not the SIZE of the ball.

This misconception about “displacement” will get in the way of later scientific learning, so teachers should correct it as quickly as we can. How best to do so?
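The physics itself is easy to check. For a fully submerged solid ball, the displaced volume equals the ball’s own volume, so only size matters. Here’s a quick sketch of that calculation (my illustration, not part of the study):

```python
import math

def displaced_volume_cm3(radius_cm):
    """Water displaced by a fully submerged solid ball, in cubic centimeters.

    The formula depends only on the ball's SIZE (radius) -- its weight
    never appears, which is exactly the misconception at issue.
    """
    return (4 / 3) * math.pi * radius_cm ** 3

# Two balls with the same 5 cm radius: one light foam, one heavy steel.
foam_displacement = displaced_volume_cm3(5)
steel_displacement = displaced_volume_cm3(5)

# Identical displacement, because weight never enters the formula.
assert foam_displacement == steel_displacement
print(f"Each ball displaces about {foam_displacement:.0f} cm³ of water.")
# → Each ball displaces about 524 cm³ of water.
```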

A research team in Germany approached this question with a specific strategy: using surprise.

These researchers showed a video to 6- to 9-year-olds, whom they met at a natural history museum.

Half of the children were asked to predict how much water would be displaced when balls of various sizes and materials were submerged. Then they saw the actual results.

Sure enough: the children who made predictions — based on their prior misconceptions — were more surprised than those who didn’t. (Believe it or not, surprise in this case is measured by pupil dilation!)

And, those children learned more from the experiment than those who didn’t make predictions.

That is: they scored higher on subsequent tests about displacement. And — even better — they scored higher on transfer tests of this concept.

So, one potential strategy to help students overcome their prior misconceptions about the natural world:

Step one: ask them to make predictions based on those misconceptions

Step two: surprise them with real-world experiences that contradict them.

Boom: minds changed.

Strengths, and Doubts

When I first saw it, this study appealed to me for a number of reasons.

First, one author — Garvin Brod — has worked on several studies and papers that I admire. (I’ve written about another one here.)

So, when I see Dr. Brod’s name on the byline, I sit up and take notice.

Second: for a variety of technical reasons, I admire the study design. The researchers have taken great care to get the tricky details just right. (For instance: their active control condition makes sense to me.)

However, I do have concerns. (To be clear: Brod & Co. acknowledge both these concerns in their “Limitations” section.)

Concern #1: duration.

For understandable reasons, researchers measured the students’ learning right away. (The entire process took about 30 minutes.)

But we don’t want our students to change their prior misconceptions right now. We want them to change misconceptions — as much as possible — FOREVER.

This problem creates concerns because prior misconceptions are stubborn. To ensure that the “surprise” method works, it would be GREAT if we could retest participants weeks or months later.

Concern #2: contradiction.

I have seen other authors and writers raise a plausible concern. If we invoke students’ prior misconceptions before contradicting them, we run the risk of strengthening those misconceptions.

That is: students will naturally filter the new/contradictory experience through the distorting lens of their misconceptions. And that lens is EVEN MORE DISTORTING because we just asked students to activate it.

Now at this point I have a confession: I simply can’t remember where I read that. But I remember thinking: “that sounds really plausible to me.”

So at this point, I’m honestly kind of stumped. A well-conceived study suggests the “surprise” strategy will work (at least in the short term). But other scholars in this field have plausible doubts.

Truthfully, I’m hoping one of you will know the “don’t invoke prior misconceptions!” research pool and point it out to me. If/when that happens, I’ll share it with you.

TL;DR

This study suggests that asking students to make predictions based on their prior misconceptions increases their surprise when those misconceptions are contradicted by experience.

And: that feeling of surprise helps them learn a correct conception — at least in the field of science.

However, I myself am not fully persuaded by this approach. I’ll keep a lookout for other studies in the field, and share them with you.



Theobald, M., & Brod, G. (2021). Tackling scientific misconceptions: The element of surprise. Child Development, 92(5), 2128-2141.

Classroom Cognition Explained, or, Dual Coding Just Right
Andrew Watson

The Good News: research into cognitive science can be SPECTACULARLY USEFUL to teachers. (That’s why we have Learning and the Brain conferences….)

Book Cover for Teaching & Learning Illuminated

The Less Good News: ideas that come from cognitive science can be MISUNDERSTOOD and MISAPPLIED with alarming frequency.

For example: as I’ve written elsewhere, dual coding has lots of potential benefits for reducing working memory load — and thereby helping students learn. That’s the good news.

But — less good news — dual coding has too often been interpreted to mean “put icons on things to make them better.”

Wouldn’t it be great if someone could bring together LOTS of ideas from cognitive science, AND explain them with well-executed dual coding?

Yes; Yes It Would…

Well, someone has done exactly that. Three someones, in fact. Bradley Busch, Edward Watson (no relation), and Ludmila Bogatchek have written Teaching and Learning Illuminated: the Big Ideas, Illustrated.

As that title promises, this book illuminates (that is, dual codes) the greatest hits from cognitive science: retrieval practice, cognitive load theory, Rosenshine’s principles, mindset, and a few dozen more.

Each section combines a pithy description of a particular concept with a visual representation of its core ideas.

So, for instance, page 35 summarizes dozens of studies looking at the benefits of spreading practice out (“spacing”) and practicing related topics together (“interleaving”).

And, the facing page offers a carefully considered graph that depicts learning over time. One path (“cramming”) looks good because it works so well in the short term. But the second path (“spacing and interleaving”) results in more learning over time.

Voila: “desirable difficulties” in one thoughtful graph.

Unlike so many examples of dual coding of the “put-an-icon-somewhere” school, Busch, Watson, and Bogatchek create substantial, meaty visuals that both require and reward careful study.

I never looked at the illustrations and thought: “gosh, that’s pretty.”

Instead, I thought:

Oh, gosh, I need to stop and study this for a bit.

Wait, why is that line there?

Ok, now I get it. Presumably this axis is labeled…oh, right, so cool!

In other words, the visuals both require thought and support thought. The result: readers understand these complex ideas even better.

So Many Shelves

I’ve written in the past that the “best book to read” depends on the reader’s current knowledge.

If you’re somewhat of a beginner in this field, I think you should probably read a book that focuses on just one topic: long-term memory, or attention, or cognitive load theory.

Once you understand lots of the pieces, it’s time to read the books that put them all together.

Teaching and Learning Illuminated looks like an easy read — so many cool pictures! At the same time, it includes an ENORMOUS number of research-based insights and suggestions.

For that reason, I think of it as an “early-advanced” book more than one for those who are new to the field. Those illustrations are welcoming, but they also create cognitive demands of their own.

Full Disclosure

Because this field is relatively small, I know one of the three authors — Bradley Busch — a bit. (I recently recorded some brief video snippets for his website.)

I don’t believe our conversations have influenced this review, but the reader should know of them in making that evaluation.

I’ll also note: yes, I have written a book about Mindset; and yes, this book includes a mindset chapter called “The Watson Matrix.” But: their matrix isn’t about my summation of mindset theory.


An Argument Against “Chunking”
Andrew Watson

Learning and the Brain exists so that we can talk about good teaching together.

Although such conversations can provide great benefits, they also run into problems.

We might disagree with each other’s beliefs.

Or, we might disagree about research methods.

Even when we do agree, we might struggle to communicate effectively about shared beliefs.

For example: jargon.

When specialists talk with each other about “theory of mind” or “P3” or “element interactivity,” the rest of us often think “what the heck does THAT mean?”

Effective communication stops when words don’t have recognizable meanings.

Another, subtler problem also hampers communication:

Effective communication stops when we use the same word to mean different things.

Sometimes this problem happens between disciplines.

The word “transfer,” for instance, has different meanings in neuroscience, education, and psychology.

Other words get us all tangled up, even within the same discipline.

I’m looking at you, “chunking.”

Television for All

I believe I first heard the word “chunking” to describe this mental phenomenon:

Imagine I ask you to memorize this list of letters:

CN NAB CFO XHB OCB S

Or, I might ask you to memorize THIS list of letters:

CNN ABC FOX HBO CBS

From one perspective, those lists are identical. They are the same letters in the same order. I just moved the spacing around a bit.

But, when I moved those spaces, I “chunked” the letters.

Penguins grouped together into the shape of a heart

That is: I organized those letters to align with your prior knowledge.

As teachers, we can reduce working memory load by “chunking”: that is, by aligning new ideas/information with ideas/information our students already have.

“Chunking” means “alignment with prior knowledge.”

Cool.

Or, wait a moment…

Curiouser and Curiouser

I’ve also heard “chunking” used in entirely different ways.

The second meaning: “break larger pieces down into smaller pieces.”

If I’ve got a list of ten instructions I want my students to follow, that list will almost certainly overwhelm their working memory. So, I could break that list down.

Three instructions.

Then three more.

An additional two, followed by the final two.

VOILA, I “chunked” the instructions.
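As a loose programming analogy (mine, not a piece of the research), this second sense of “chunking” is just splitting a long sequence into consecutive smaller groups:

```python
def chunk(items, sizes):
    """Split a list into consecutive groups of the given sizes.

    Mirrors breaking ten instructions into groups of 3, 3, 2, and 2
    so that no single group overwhelms working memory.
    """
    groups, start = [], 0
    for size in sizes:
        groups.append(items[start:start + size])
        start += size
    return groups

instructions = [f"step {n}" for n in range(1, 11)]  # ten instructions
for group in chunk(instructions, [3, 3, 2, 2]):
    print(group)  # deliver each small group on its own
```

Note what this sketch does NOT do: it never consults prior knowledge. That’s precisely the difference between this meaning of “chunking” and the first one.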

Of course, this kind of chunking (breaking down into smaller bits) doesn’t mean the same thing as the first kind of chunking (aligning with prior knowledge).

Nor does it mean the same thing as the THIRD kind of chunking: forming a link with prior knowledge.

That is:

You could learn that “hamster” is another “mammal” that people keep as a “pet.”

You’ve formed a new “chunk”: mammals that are pets.

Or, you could learn that “Saratoga” is another surprising military victory, like “Agincourt” and “Thermopylae.”

You’ve formed a new “chunk”: unlikely military victories.

You see the problem here?

In Sum

So, as far as I can tell, “chunking” means either…

… aligning new information with prior knowledge, or

… breaking large information dumps into smaller pieces, or

… connecting new information with well-known information (which sounds like the first meaning, but isn’t exactly the same thing).

If I tell a colleague, “I think that part of the lesson would have benefitted from more chunking,” s/he doesn’t really know what I mean.

Even worse: s/he might THINK that s/he knows — but might understand chunking one way when I mean it another.

Ugh.

To be clear: I am IN FAVOR of all three strategies.

After all: all three ideas reduce working memory load. And, I’m a BIG FAN of reducing WM load.

However, when we use the word “chunking” to describe three different teaching strategies, we make our advice harder to understand.

That is: we increase the working memory demands of understanding strategies to reduce working memory demands. The paradox is both juicy and depressing.

So, I am enthusiastically in favor of all the strategies implied by the word “chunking,” but I think we should stop calling them “chunking.”

Instead, we should use more precise vocabulary to label our true meaning.

Do Animations Improve Learning? A Definitive Answer, Please…
Andrew Watson

Recently I discussed working memory overload with a group of wise and thoughtful teachers.

I showed them one of my favorite GIFs:

a glass (representing working memory),

slowly filling up with stuff (teaching methods, complex information),

so that there is ultimately no room left in the glass (that is: no room left for understanding).

VOILA: working memory overload in one handy animation.

I love this GIF, and show it often.

Young woman draws an animated storyboard

Yet when I gave these teachers time to discuss this animation, they honestly didn’t like it very much. They had lots of specific (and insightful) suggestions, but the overall message was: thumbs down.

So: should I ditch the GIF?

Where to Start

For a guy who writes a blog about research-informed teaching, the next step seems obvious: find out what the research says!

Surely I can find an answer — maybe even a definitive one.

Alas, I quickly stumbled into a quandary.

On the one hand, we’ve got lots of good research suggesting that — on the whole — students do NOT learn more from animated information.

One of the best known studies — led by the much-esteemed Richard Mayer — supports the static media hypothesis: “static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.”

In this study, researchers used animations about everything from lightning formation to toilet tanks to see if they helped students understand.

These animations never helped, and often hurt, student learning.

On the other hand, a substantial meta-analysis of 40 studies finds a “beneficial effect of the presence of animated display for learning dynamic phenomena.”

So: what to do when we’ve got persuasive — and contradictory — evidence?

A Feature, Not a Bug

For people unfamiliar with research-world, this kind of contradiction might seem like a failure. If the people who do the research can’t agree on an answer, surely we should just ignore them.

I would offer a different interpretation.

Teaching is complicated. Learning is complicated. PEOPLE are complicated.

So, any time we do research about people teaching and learning, we’re looking at enormously complicated questions.

Some disagreement is inevitable.

And — here’s the surprise — the fact that we found contradictions means that we’ve been looking hard enough. (If I didn’t find contradictory research, I probably haven’t looked very hard…)

What, then, should we do to resolve the (inevitable, helpful) contradictions?

One useful step: get granular.

In this case: presumably some kinds of animations are helpful under some kinds of circumstances. But others: not so much.

We need to know more about the specifics.

Okay, Some Specifics

With that in mind, I found a more recent study trying to understand when and why animations might hinder understanding.

The study, in effect, looked at two questions:

Are the animations essential to understanding the topic, or are they basically “decorative”?

and

Is the material being studied cognitively challenging?

Two scholars — Annabel Pink and Philip Newton — had students study slides with information on them. Some slides had animations; others didn’t.

And — useful to know — the slides covered complex material: human physiology and enzyme kinetics.

Sure enough, students remembered LESS information from the slides with animations. And they rated those slides as cognitively MORE challenging.

In other words:

When deciding whether or not to break out the GIFs, we can ask ourselves:

Am I just decorating the slide, or does animation help clarify the meaning of the material?

and

Is this material a cognitive heavy lift?

When I ask these questions about my working memory overload GIF, I arrive at these answers:

The GIF illustrates a complex process: it’s not decorative, but meaningfully connected to an understanding of the ideas.

BUT

The ideas are — in fact — quite complicated.

The animation, in other words, might add cognitive load to an already mentally challenging concept. Hence the teachers’ unhappiness.

Small, Medium, and Big Pictures

What should we teachers do with this information?

Narrowly stated, we can consistently ask the two questions above: a) is the animation “decorative”? and b) is the material cognitively challenging?

If either answer is “yes,” then we should hesitate to add animations.

More broadly, we should continue to look for detailed guidance about when to use, and when to avoid using, animations to help students learn.

As far as I can tell, we just don’t have a clear picture about the boundary conditions within which they help students learn.

The big picture looks like this.

Psychology research rarely gives us an absolute, definitive answer to questions like: “should we add animations or not?”

Teachers always need to look at research specifics, compare them to the classroom conditions where we work, and use our own expert judgment to analyze the goodness of fit.


Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11(4), 256-265. https://doi.org/10.1037/1076-898x.11.4.256

Berney, S., & Bétrancourt, M. (2016). Does animation enhance learning? A meta-analysis. Computers & Education, 101, 150-167.

Pink, A., & Newton, P. M. (2020). Decorative animations impair recall and are a source of extraneous cognitive load. Advances in Physiology Education.

The Whole Toolbox in One (Free) Download
Andrew Watson

If you want to learn more about improving teaching with psychology research, I’ve got good news:

There are SO MANY excellent books to read.

I’ve also got bad news:

There are SO MANY excellent books to read, we can struggle to manage them all.

In fact, as I’ve written elsewhere, I think the “best book to read” depends on the category of book you’re looking for.

At the beginning of your research+education journey, you probably want a book devoted to one topic: say, working memory, or motivation, or attention.

As you get more familiar with different categories of research, you might instead want a book that brings many topics together.

Today I’d like to recommend a book from the second category: the Great Teaching Toolkit: Evidence Review from Evidence Based Education. (You can read about it and download it here.)

Step One: How to Begin?

Anyone striving to write a book that “brings many topics together” starts with an enormous challenge: how to organize such a behemoth?

We have SO MUCH pertinent research on SO MANY topics: how can we possibly tidy this muddle?

The Toolkit’s authors devise a sensible sorting strategy. They believe research gives teachers strong guidance in four areas:

What sorts of knowledge do teachers need?

How can we make classrooms emotionally safe?

How can we structure classroom work and routines efficiently?

What teaching strategies require students to think hard?

Now, other authors organize their thinking in other ways. (For instance: Dan Willingham’s Why Don’t Students Like School focuses on nine key principles from cognitive science that should guide instruction.)

But I think you can see right away why the Toolkit’s organizational structure sounds so helpful and sensible.

Step Two: Break It Down

Within each of these categories, the authors offer between 3 and 6 specific principles: everything from “teachers should know common misconceptions in their discipline” to “strategies for asking questions effectively.”

This structure, in turn, allows for a straightforward teacher-development plan.

If I were using this Toolkit with a faculty, I would have teachers select one of these sixteen topics: preferably one where they feel the least confident and successful.

Each teacher would then dig into the research-based suggestions provided right there in the Toolkit.

Even better: the Toolkit reviews the research it summarizes. Teachers and school leaders who want to know exactly why this strategy or topic has been prioritized get all the info they need to dig deeper and discover more.

Examples, Please

You have, no doubt, heard that feedback is essential for student learning.

Imagine that a teacher reviews the Toolkit’s list and determines that s/he really needs to work on this specific part of her craft.

Turning to section 4.4, this teacher quickly gathers several useful insights about the role of feedback in our work.

In the first place, the Toolkit draws a helpful distinction between feedback that helps the teacher — by giving us information about how much our students know and understand — and feedback that helps the student — by giving them structured ways to improve.

That simple distinction sounds almost too obvious to state out loud…but in my experience isn’t emphasized nearly often enough.

In the second place, the teacher will find several thoughtful prompts for further thought.

As the authors wisely say: “there is no simple recipe for giving powerful feedback.”

Should the teacher remind the student of the success criteria, or point out gaps between the current work and those criteria?

The Toolkit doesn’t offer prescriptive answers because research can’t do that. Research can provide us with options, and let teachers sort out the best ways to put all those options together.

And, if you’re a research nerd (as I am), you’ll be delighted to find almost 20 pages of discussion on their sources for these ideas, and their methods for sorting them all together.

TL;DR

You already know several specific cognitive-science informed teaching strategies? You want a bigger picture?

The Great Teaching Toolkit will be a feast for you. (And yes: you can download it free!)

The Cold-Calling Debate: Potential Perils, Potential Successes
Andrew Watson

Some education debates focus on BIG questions:

high structure vs. low structure pedagogy?

technology: good or bad?

how much should teachers focus on emotions?

Other debates focus on narrower topics. For instance: cold calling. (“Cold calling” means “calling on students who haven’t raised their hands.”)

Proponents generally see several benefits:

Cold calling helps broaden check-for-understanding strategies. That is: it lets teachers know that MANY students understand, not just those who raise their hands.

It increases accountability.

It adds classroom variety.

And so forth.

Opponents likewise raise several concerns. Primarily:

Cold-calling could stress students out — even the ones not being cold called. That is: even the possibility that I might be called on could addle me.

Also, cold calling signals a particular power dynamic — one that runs contrary to many school philosophies.

Because both sides focus on different measures of success or peril, this debate can be difficult to resolve.

The Story So Far

Back in 2020, a friend asked about the cold calling debate. I looked for research, and — honestly — didn’t find much. The result of that search was this blog post.

Kindergarten students sitting on the floor, listening to the teacher at the chalkboard

In brief, the only study I found (focusing on college sophmores) found more benefits and fewer perils.

Students who had been cold-called a) asked more questions later on, and b) felt less stress.

But, one study is just one study. And, if you don’t teach college sophomores, you might not want to rely on research with that age group.

Today’s News

Research might offer teachers useful guidance, but we shouldn’t accept all research without asking a few questions.

One way to ensure we’re getting GOOD research-based advice is to look for wide ranges of evidence: evidence from…

… primary school AND high school

… science class AND history class

… small AND large school

… Stockholm AND Johannesburg

And so forth.

Similarly, teachers should feel especially confident when researchers use different methodologies to explore their questions.

For this reason, I was especially pleased to find a cold-calling study published just last year.

This study doesn’t go in for random distribution or control groups (staples of other research paradigms). Instead, it uses a technique called “multimodal interaction analysis.”

I haven’t run into this technique before, so I’m honestly a newbie here. But the headline is: researchers used videotapes to study 86 cold-calling interactions.

In their analysis, they break the interaction down into a second-by-second record — noting the spoken words, the hand gestures, the length of pauses, the direction of the teacher’s gaze. (In some ways, it reminds me of Nuthall’s The Hidden Lives of Learners.)

Heck, they even keep track of the teacher’s use of modal verbs. (No, I’m not entirely sure what modal verbs are in German.)

By tracking the interactions with such extraordinary precision, they’re able to look for nuances and patterns that go beyond simply: “the teacher did or didn’t cold call.”

Conclusions?

Perhaps unsurprisingly, the study’s broad conclusion sounds like this: details matter.

The researchers offer a detailed analysis of one cold call, showing how the teacher’s build up to the moment created just the right support, and just the right tone, for the student to succeed.

They likewise detail another cold call where the teacher’s body language and borderline insulting framing (“do you dare to answer?”) seem to have alarmed a shy student, who answered in monosyllables.

By implication, this research suggests that both opponents and proponents are missing a key point.

We needn’t ask: “is cold calling good or bad?”

Instead, we should ask: “what precise actions — what words, what gestures, what habits — set the student up for a positive interaction? Which precise actions do the opposite?”

Once we get good answers, we can focus and practice! Let’s do more of the good stuff, and less of the harmful stuff.

TL;DR

“Is cold calling good or bad?” is probably the wrong question.

Recent research focusing on nuances of technique suggests that teachers can reduce the perils of cold calling to foster participation and enhance learning.


Morek, M., Heller, V., & Kinalzik, N. (2022). Engaging ‘silent’ students in classroom discussions: A micro-analytic view on teachers’ embodied enactments of cold-calling practices. Language and Education, 1-19.

Navigating Complexity: When 1st Order Solutions Create 2nd Order Problems
Andrew Watson

Here’s a common classroom problem.

As I’m explaining a complex concept, a student raises a hand.

“Just a moment,” I say, and finish my explanation.

Now I turn and smile at the student: “what was your question?” I ask.

All too often, the student answers, “I forgot my question.”

What’s going on here?

As is so often the case, the answer is: working memory overload.

Working memory HOLDS and PROCESSES information. When a student fails to hold and process, that’s working memory overload.

A primary school student wearing a backpack and sitting at a desk raises an eager hand.

In this case, my student was processing my explanation, and so failed to hold the question.

The solution?

It might seem simple. Don’t ask students to hold questions while they process explanations.

Instead, I should answer students’ questions right away. Problem solved….

When Solutions Create Problems

Wait just a moment.

This “solution” I just offered might solve the student’s problem.

At the same time, it might create new problems.

The student’s question — even a well-intentioned one — might throw my explanation off track.

My students might lose their tentative understanding of my complex explanation.

I might lose my own train of thought.

So I fixed one classroom problem but now have yet another one. YIKES.

What’s a teacher to do?

First Things First

This example — but one of many — might throw our entire project into question.

Teachers turn to psychology and neuroscience to solve classroom problems.

However, if these “research-based solutions” simply transform one problem into some other headache, why bother with the research?

We could save time by sticking with the old problem, right?

I think the fair answer to that question is: “actually, no.” Here’s why…

Teachers don’t need research to solve classroom problems. We need research to solve COMPLEX classroom problems.

When our classroom problems are simple, we just solve them on our own. We are — after all — teachers! We specialize in problem solving.

For that reason, we turn to research only when the problem isn’t simple.

And for that reason, we shouldn’t be surprised when the answer isn’t simple either.

OF COURSE we can’t fix the “questions-interrupting-my-explanation” problem with one easy research-based step.

If it were so simple a problem, we would have solved it without the research.

Changing the Lens

As I’ve explored this question with wise teachers in recent weeks, I’ve been struck by a pattern:

PROBLEM ONE requires SOLUTION ONE.

But: SOLUTION ONE creates PROBLEM TWO.

And: it’s often true that PROBLEM TWO comes from a different cognitive function than PROBLEM ONE.

So, in the example above, I started with a working memory problem: my student couldn’t hold and process information.

My solution (“take questions right away”) created another problem — but not a working memory problem.

When I answer questions mid-explanation, my students lose focus. That is, the working memory problem has been transformed into an attention problem.

To solve this second problem, I need to switch from working memory solutions to attention solutions.

In other words, I need to think about a separate cognitive function. I’ll find solutions to this second-order problem in a different research field.

Again with the Mantra

If you’ve ever heard me speak at a Learning and the Brain conference, you know my mantra: “don’t just do this thing; instead, think this way.”

In other words: psychology research can’t provide teachers with a list of “best practices.” The strategy that works in my 10th grade English classroom at a boarding school might not help 1st graders add numbers in a Montessori program.

But: the thought process I follow with my 10th graders might lead to beneficial solutions for those 1st graders. The answer (“do this thing”) might be different, but the mental pathway (“think this way”) will be the same.

The point I’m making here is: these thought processes might require us to leap from mental function to mental function in search of a more successful solution.

A solution to a long-term memory problem might uncover a motivational problem.

The solution to an alertness problem might prompt an orienting problem.

When I reduce my students’ stress, I might ramp up their working memory difficulties.

And so forth.

When we understand research into all these topics, we can anticipate that these solutions might unveil an entirely different set of troubles.

And by moving nimbly from research topic to research topic, we can ultimately solve that complex problem that once seemed intractable.

All this nimbling about takes practice. And, ironically, it might threaten our own working memory capacity.

But once we get used to thinking this new way, we will arrive at solutions that fit our classrooms, and that work.

Collaborative Learning and Working Memory Overload: Good News or Bad?
Andrew Watson

Consider the following paradox:

Teachers need to give students instructions — of course we do!

After all, instructions help students do what they need to do, so that they can learn what we want them to learn.


At the same time, too many instructions might very well overwhelm working memory.

After all, the student has to HOLD the instructions in memory while PROCESSING each one individually. And: “holding while processing” is one handy definition of working memory function.

In brief: the right number of instructions can help learning, but too many instructions can impede learning.

I recently asked a group of wise and experienced teachers this question:

“Can you think of other teaching practices — like instructions — that are beneficial in small amounts, but might create working memory overload in large amounts?”

After some time to think and discuss, one teacher answered: group work.

After all, he mused, collaboration might simplify some mental processes. But collaboration itself creates additional mental taxes — all that negotiating and delegating and coordinating and explaining.

And disagreeing.

And resolving.

Are there ways that teachers can reduce those “mental taxes” so that students get the benefits without the penalties?

If only we had a research-based answer to those questions…

Inspired by this teacher’s observation, I hunted up this study.

Quadratics in Quito

To explore this question, researchers in Quito, Ecuador worked with 15-year-olds solving quadratic equations.

Specifically, they wanted to know if practice collaborating helps students collaborate effectively.

As is always true, research design gets tricky. But the overall design makes sense.

Some students did practice solving quadratic equations collaboratively; others didn’t.

For a second round of math learning, all students were then sorted into groups for collaborative learning.

So, did students who practiced collaborating do better on later collaboration?

For complex equations: YES. Both three days later and seven days later, students who practiced collaborating did better solving problems than students who didn’t.

For simple equations: NO. If the mental work wasn’t very hard, students didn’t need to practice to collaborate effectively.

In light of these findings, the researchers’ recommendations make good sense.

If learners are novices, learning tasks are complex, and information distribution demands [extensive cooperation], teachers should prepare group members … using similar problems.

If task information does not demand [extensive cooperation], it is not necessary for the teachers to prepare learners to collaborate.

I want to highlight one part of this summary: “using similar problems.”

This research team emphasizes that “collaboration” is NOT a general skill. Collaboration will look different depending on the precise demands of the discipline and the topic.

So: students who “practiced” were given a VERY specific format for learning how to collaborate on this task.

If we want to get the benefits of practice for our own students, we should be sure to tailor the practice in very specific ways.

The Story Behind the Story: an Analogy and a Principle

A research article like this study always begins with a summary of earlier research findings, conclusions, and questions.

This summary includes a particularly helpful foundational inquiry.

When does collaboration increase WM load so much as to threaten learning?

When does collaboration reduce WM load enough to promote learning?

Is there some sort of taxonomy to consider or principle to explore?

To explain, I’ll start with an analogy.

Imagine I want to illuminate a yard at night.

For all sorts of reasons, it would be simplest to have one lamp to do so. Having multiple lamps just adds to the complexity and expense of the project.

So, if my yard is small enough to be covered by one lamp, then I should use one. Adding more lamps makes the project worse — more complicated, more expensive — not better.

But at some point, a yard gets big enough to need multiple lamps. If I use only one lamp, I just can’t illuminate the full yard.

In this case, the additional expense and complexity of having multiple lamps provides a meaningful benefit.

You can see where this is going.

Here’s a potential principle:

If a cognitive problem is simple enough, then one student can solve it on her own.

Adding other students (“collaborative learning”!) increases the WM complexity of the situation without providing any additional benefit.

In this case, each student’s mental effort has become less effective, not more effective.

If, on the other hand, a cognitive problem gets complex enough, then it goes beyond any one student’s cognitive capacity.

In that case, the problem benefits from additional students’ cognitive efforts — even though all those extra students do increase the complexity of the problem.

At some tipping point, when a problem gets complicated enough, it needs to be divided into sub-tasks — despite the complexity of managing that work.

At that tipping point, well-structured and precisely-practiced collaboration probably is more beneficial than harmful.

TL;DR

Groupwork (probably) increases WM demands on simple cognitive tasks, but reduces WM demands for complex cognitive tasks.

To get the most benefits from collaboration, students should practice that skill — and teachers should tailor the practice to the precise demands of the cognitive work.


Zambrano, J., Kirschner, F., Sweller, J., & Kirschner, P. A. (2019). Effects of group experience and information distribution on collaborative learning. Instructional Science, 47, 531-550.

The Dangers of “The Big Ask”: In Defense of Stubborn (?) Teachers
Andrew Watson

Let’s face it: teaching is hard.

I’ve been a classroom teacher for roughly 20 years — how do I count summer school? — and I still find the work exhilarating, exhausting, baffling, uplifting, frustrating, humbling, and joyous.


And that was Tuesday.

And: I think I’m not the only one who finds teaching to be an extraordinary challenge. I mean, don’t get me wrong, I love it. But GOSH it’s hard.

This hard-won experience leads me — and perhaps you — to two conclusions:

First: people who haven’t taught in the classroom don’t fully understand the challenges of the work.

Until you’ve tried to follow a scrupulously devised lesson plan despite the fact that an unannounced fire drill is in progress, two students have switched sections, three don’t have their notebooks, and four don’t think the cell-phone policy applies at just this moment…you just don’t really know.

How could you? It’s “Mission Impossible: Chalkdust” in here.

Second: I need all the help I can get.

No, really.

You’ve got some research that might …

… help me create a more effective lesson plan?

… explain how attention really works?

… suggest study strategies to help my students learn?

… foster motivation, at the beginning of a lesson on grammar?

I’m all ears. Please. I’m practically begging here…

My Learning and the Brain Journey

I attended my first LatB conference in 2008; the topic was The Science of Attention.

I IMMEDIATELY realized that this conference was just what I needed. So much wisdom and advice. So many compelling suggestions.

And, so many graphs and pictures of brains!

I returned to the classroom and started rethinking everything.

What should my rewrite policy be?

Should my classroom decorations be in primary colors?

What’s the right number of new vocabulary words to teach per class?

I had research-y answers to all those questions.

After several years of attending conferences, I went back to grad school and got a degree combining education, psychology, and neuroscience.

And, in addition to teaching, I started training other teachers.

Now I offered LOTS of advice of my own:

Because working memory does this, teachers should do that.

Because long-term memory benefits from this, teachers should do that.

Because stress affects this…you get the picture.

Many teachers appreciated all this guidance. But some obviously didn’t.

For whatever reason, they just didn’t want to do what I was telling them to do!

I was genuinely surprised; after all, I knew I was right because the research said so!

“The Big Ask”

Over time, I’ve come to realize that those teachers didn’t want to do what I was telling them — and “what the research said” — because I had forgotten the first lesson described above.

I mean: yes (lesson #2), “I need all the help I can get.” So, research-based advice MIGHT help.

But also,

Yes (lesson #1), “people who haven’t taught in the classroom don’t fully understand the challenges of the work.” So, research-based advice might not apply to this classroom, this topic, this teacher, or this student.

In other words,

When I say to teachers, “you should change the way you teach because research says so,” I’m making a REALLY BIG ASK.

After all, those students are ultimately their responsibility, not mine.

Yes, I have taught — but I teach English to 10th graders at a very selective school. I should be very careful when offering guidance to, say …

… 2nd grade teachers who work with struggling readers, or

… those with lots of students on the Autism spectrum, or

… teachers who work in different cultures.

Because I do know research well, and I know classrooms fairly well, I can make those Big Asks. But I should be humble about doing so. And I should be respectful when a teacher says, “your ‘research-based advice’ …

… conflicts with our school’s mission, or

… might work with college students, but probably won’t work with 2nd graders, or

… requires personality traits that I don’t have.”

Yes, I think teachers should listen thoughtfully to the guidance that comes from research. And, I think those of us who cite research should listen thoughtfully to the classroom specifics, and the experience, of teachers.

Teachers who resist ‘research-based advice’ might seem “stubborn,” but they also might be right.

The Story Behind the Story

In last week’s blog post, I summarized research about using gestures to teach specific science concepts. I also sounded a few notes of caution:

I don’t fully understand the concept of “embodied cognition,” and

I worry that a few very specific studies will be used to insist that teachers make broad changes to our classroom work.

Even as I wrote that post, I could hear colleagues’ voices in my head: “why are you always so grouchy? Why don’t you listen when people with PhDs say ‘this is the Next Important Thing’?”

The answer to those questions is this week’s blog post.

I’m ‘grouchy’ because I worry our field is constantly making Big Asks of teachers.

We often make those Asks without acknowledging a) the limits of our research knowledge, and b) the breadth of teachers’ experience.

I am indeed optimistic about combining cognitive psychology research with teacherly experience to improve teaching and foster learning.

To make that combination work, we should respect “stubborn” teachers by making Respectful Asks.

“Embodied Cognition” in Action: Using Gestures to Teach Science
Andrew Watson

Here’s a topic that has gotten lots of enthusiastic attention in recent years: embodied cognition.

As the name suggests, that phrase means — basically — “thinking with your body, not just your mind.”

Because your brain is a part of your body (it is, in fact, physically attached to your body), the concept makes rough-and-ready sense.

In at least two ways, this perspective has well-established research support.

First, physical fitness improves cognition — at least, up to a point.

We don’t need to be Olympic athletes to learn chemistry.

But if I’m conspicuously out of shape, the related health detriments harm my brain just as they harm my lungs; that harm makes learning harder. (If you want to get your neuroscience geek on, look up “brain-derived neurotrophic factor.”)

Second, some degree of physical movement during class moderates students’ alertness levels.

If my students are nodding off or bouncing giddily, I’ll get them up on their feet for a bit to get the blood moving (or to burn off some of that excess energy).

In these ways, the body’s physical state obviously matters for cognition.

And yet, over the years, I’ve had two basic concerns about broader claims within this field. Let me try to explain…

Better Definitions, Please

Scientific conclusions typically require precise measurements; precise measurements require precise definitions.

That is: I can tell you that this rock weighs more than that rock because I can measure it (on my scale) according to well-defined measurements (pounds or kilos).

But: if I want to say that this student is paying more attention than that student, I need a really good definition of attention, and a way to measure it. “This student demonstrates 6 attention units, whereas that one demonstrates only 4.”


Sadly, the concept of “embodied cognition” invites definitional muddle.

For instance: is mindful meditation “embodied cognition”? (It often includes a focus on the body.)

More broadly, here’s Wikipedia’s entry on embodied cognition. I’m not gonna lie; I get lost really quickly when I read that entry.

So, problem #1: I don’t always understand exactly what the claims about embodied cognition really are.

More Research, Please

I think I do understand one of the claims under the “embodied cognition” umbrella. I think the claim is:

Adding the right gestures to teaching helps students learn.

That is: using gestures (“embodied”) helps students think and learn (“cognition”).

A recent study in Australia pursued just this line of inquiry.

In this study, 33 students (aged 12-14) learned about Brownian motion.

Half of them saw a typical lesson — a PowerPoint presentation, group discussion, worksheets — taught by an experienced teacher.

The other half saw the same lesson (PowerPoint presentation, etc.) with additional, carefully designed hand gestures.

By the way, the teacher used the hand gestures, and encouraged the students to do so as well.

Two days later, the students who saw and used the meaningful gestures (a.k.a., “iconic” gestures) scored a lot higher on a simple quiz. (For stats folks, the Cohen’s d was 0.98, which is really big!)
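For readers curious what that number means: Cohen’s d divides the difference between two group means by their pooled standard deviation. Here’s a minimal sketch; the means, SDs, and group sizes below are invented for illustration, not taken from the study.

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical quiz scores (NOT the study's data):
# gesture group: mean 8.0, sd 1.5, n 17; control group: mean 6.5, sd 1.6, n 16
print(round(cohens_d(8.0, 1.5, 17, 6.5, 1.6, 16), 2))  # → 0.97
```

By convention, d ≈ 0.2 counts as a small effect, 0.5 as medium, and 0.8 as large — which is why a d of 0.98 raises eyebrows.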

Now, I admit to some concerns about this study:

33 is a very modest sample size.

“2 days later” isn’t really learning.

Most important: there is no “active control group.”

That is: the researchers didn’t compare iconic gestures with another new strategy. Instead, they compared gestures to “business as usual.”

“Business as usual” often isn’t a very persuasive control group; after all, the novelty might explain the effect.

These concerns aside, I do think the study — combined with other similar studies — gives us some reason to think that the right gestures just might help students learn better.

I was especially glad to see an emphasis on students’ use of the gestures. This variable hasn’t gotten much attention in other studies I’ve seen, so I’m encouraged to see it getting greater visibility.

Lingering Questions

And yet, I STILL want more research. Here’s why:

Problem #2: I don’t think we have nearly enough research (yet) to establish useful principles for instructive gestures.

In other words: these gestures probably helped 13-year-olds learn about states of matter.

But: what sorts of gestures can help what ages learn about what topics?

Specifically:

If I want my students to know the difference between “comedy” and “tragedy” (and I do!), can gestures help with those concepts? How should I think about designing those gestures?

What sorts of topics in a history class would benefit from gestures?

Should foreign language teachers have students make specific gestures — say — when they learn different declensions? When they learn masculine or feminine nouns?

I’m not trying to be difficult or grouchy when I ask these questions. I’m trying to understand how seeming success in this one case could be translated to other topics, other disciplines, and other age groups.

Growing Concerns

More broadly, I worry that “iconic gestures/embodied cognition” will become the Next Thing We’re All Talking About.

Teachers will get instruction about Iconic Gestures, be required to use them, and be evaluated on their use … even though we don’t have even basic guidelines on how to create or use them. (At least, as far as I know.)

For instance: the topic of Brownian motion was chosen, in part, because it is “susceptible to being taught using specific gesticulation.”

What about topics that aren’t obviously susceptible?

In fact, if you look at the gestures used during the lesson, they don’t seem too far off from the sorts of gestures that teachers might make spontaneously.

Are “iconic gestures” simply “the sorts of gestures we’d use anyway, but formally planned, scripted, practiced, and repeated by students”?

If yes, does the entire topic of iconic gestures change from “revolutionary” to “a modest technical update to something we’re doing anyway”?

I’m entirely open to the possibility that gestures (“embodied”) can help students learn (“cognition”) … but we need more research to know for sure.

TL;DR

Because the brain is in the body, the body’s physical state obviously matters for learning.

This recent study from Australia (and others) suggests that well-crafted hand gestures can help students learn some concepts.

However, the principles that guide us in the creation and use of those hand gestures are not yet well mapped. So: we just don’t know how widely this technique might benefit teachers, schools, and students.

If someone insists you start using gestures because “research in embodied cognition says you must!”, ask to see the specific study.


Bentley, B., Walters, K., & Yates, G. C. (2023). Using iconic hand gestures in teaching a year 8 science lesson. Applied Cognitive Psychology.