
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Help Me Understand: Narrative Is Better than Exposition
Andrew Watson

I’m straight-up asking for some guidance here.

Here’s the story…

“Psychologically Privileged”

For many years now, I’ve seen the claim that “narrative is psychologically privileged.”

That is: we humans understand and remember stories better than we remember other kinds of informational presentations — say, essays.


Because I’ve read this statement so many times, I didn’t really think about it — other than to accept it’s probably true.

Also — I will admit — I spent some time feeling a bit smug. As an English teacher, I get to hang out with narratives all the time. If this claim is true, my discipline has it made.

I recently came across a meta-analysis examining this claim of psychological privilege. Sure enough, it found that:

“Based on over 75 unique samples and data from more than 33,000 participants, we found that stories were more easily understood and better recalled than essays.” (Emphasis added)

Technically speaking, 33,000 is a very large number.

Given this much data, I thought, SURELY teachers should teach with stories more than with exposition.

And then, I started feeling a bit queasy…

Let’s Get Practical

If you read this blog often, you know that I’m always trying to understand how a specific research finding can actually be used in classrooms.

So: how EXACTLY might teachers use this advice?

The obvious answer: when we have a choice, we should use the narrative version of an explanation instead of an expository one.

But here’s the catch: realistically speaking, how often do we have a choice? How easily can we switch between these two modes?

For example: I know how to present The Grapes of Wrath as a narrative. (It IS a narrative.)

But if I want my students to know …

the definition of a sonnet, or

the difference between personification and hyperbole, or

the qualities of a well-written thesis paragraph,

… how can I present that information as a narrative?

Of course, I could give examples of all those things. (I always do.) But examples aren’t narratives.

Honestly, I was kinda stumped.

Let the Quest Begin

But wait!

An obvious solution tapped me on the shoulder. I have, right here on my computer screen, a meta-analysis about the superiority of narrative! I can simply review the studies it meta-analyzes, and find strategies to accomplish this task.

I reviewed the document, and identified all the studies showing the benefits of narrative published after 2000. (Yes, that’s an arbitrary cutoff, but it seemed handy and plausible.)

Believe it or not, only ONE of the studies even attempts to answer this vital question.

That is: almost all of the studies show that students understand narrative passages better than exposition, and remember them better.

But only this one tries to present the same information both ways.

Specifically, students read short passages about the circulatory system.

Some students read a version as a story: a young man named Alex shrinks himself into a tiny person, gets pulled into a passer-by’s lungs, and travels through the circulatory system to escape.

Other students read more traditional textbook explanations of the circulatory system.

In other words: my quick survey found only one example of expository information (the bad stuff) being translated into narrative (the good stuff).

One example doesn’t give teachers lots to work with.

And, the situation gets worse — in two ways.

Double Trouble

First: students in this study don’t (exactly) understand and remember the narrative better than the traditional exposition.

Instead, the researchers find that the students’ prior knowledge is the key variable. I quote directly from the abstract:

“Learning and recall did not differ as a function of text genre overall [that is: narrative vs. exposition], but did interact with prior knowledge.” (Emphasis added)

In other words: we’ve got only one example to go on — and the example didn’t work as promised!

Second: more subjectively, I found the “tiny-man-pulled-into-the-lungs-story” rather confusing.

You can read the sample passages here: check out Appendix B at the end.

I get that the story is a bit more interesting…but I’m not at all sure that I would have learned more from reading it.

In fact, other research into the “seductive details” effect increases my worry.

What if my cool and memorable stories actually distract my students instead of helping them?

The Problem, and The Ask

So here’s my problem: I’d like to be able to tell teachers how to use this research.

I’d like to say: “because students learn better from stories than from exposition, you can/should convert exposition into stories in this way…”

And yet, realistically speaking, I can’t figure out how to make that advice work. How can teachers do so?

So here’s my ask: can anyone out there help me out?

Do you know of research that answers this question more directly and successfully?

Do you have ideas how exposition can realistically become narrative?

I’d really like to know.

BTW, x2

BTW #1: I am, with this post, inaugurating what might be a series of questions. Rather than trying to provide answers, I increasingly find myself in search of them.

So, depending on the responses I get to this “help me understand” format, I might try it again.

BTW #2: Because we were getting SO MUCH SPAM on this blog, we had to add filters to the comment function. If you DO have an answer to my question, but you’re not able to get past those filters, you can email me directly at [email protected].

 

I really hope to hear from you!


Mar, R. A., Li, J., Nguyen, A. T., & Ta, C. P. (2021). Memory and comprehension of narrative versus expository texts: A meta-analysis. Psychonomic Bulletin & Review, 28, 732-749.

Wolfe, M. B., & Mienko, J. A. (2007). Learning and memory of factual content from narrative and expository text. British Journal of Educational Psychology, 77(3), 541-564.

Feedback Before Grades? Research and Practice…
Andrew Watson

The plan sounds so simple:

Students practice a new skill.

Teachers give them feedback.

Using that feedback, students improve.

What could be more straightforward?

Alas, if you’ve spent more than a minute teaching, you spot a problem with the formula above: students often ignore the feedback.

For example: I write SO MUCH USEFUL GUIDANCE in the margins of my students’ papers. And yet, as far as I can tell, they just don’t spend much time reading all those helpful comments.

They tell me they want to learn. They tell me they want higher grades. They could accomplish both missions if they would just read the feedback. Oy.

It Just Might Work…

A few years ago, I devised a strategy to combat this feedback problem.

First: I wrote comments on papers as I had before.

Second: I summarized the three most important concerns at the end of the paper.

(For example:

“Be sure to focus the topic sentence on abstract nouns.

Give specific examples for all your main arguments.

Look out for dangling modifiers.”)

Third: I returned the paper with the comments BUT WITHOUT A GRADE.

Fourth: Students reviewed the comments, and wrote up their own summary. (This step ensured that students read and understood the comments.)

Fifth: Then — and only then — did the students get their grades.

My thinking went like this:

My students were REALLY motivated to know their grades. If I could harness that motivation correctly, then I could get them to review and learn from the comments I spent so much time writing.

They would get the grades and learn at the same time. Brilliant! (Well, potentially brilliant…)

So: Did It Work?

I did not think to collect data at the time, so I don’t have a scientific answer to the obvious question — “did this strategy work?”

But I have a few strong impressions.

First: the students were REALLY BAD at summarizing my comments, and did not like the process.

On the one hand, this conclusion surprised me. After all: I had summarized the comments for them (“topic sentences, examples, dangling modifiers”).

All they had to do was spot and re-summarize my own summary.

On the other hand, this conclusion made sense. No wonder my students hadn’t responded effectively to my comments — they didn’t even want to read them!

Second: my strategy either really helped, or made no difference.

In some cases, students quickly took advantage of this system. I could tell because my comments were different on each paper.

If the first paper asked them to focus on “abstract nouns in the topic sentence,” the next paper clearly met that goal.


On the second paper, my feedback focused on — say — transitional language between examples.

Because my comment summary changed from paper to paper, I could tell the system was working for these students.

I must admit, however, that not all students responded this way. Some submitted the feedback summaries as I required — and continued to make the same old mistakes.

A partial victory — but not a complete one.

So: SHOULD It Work?

My experience suggests that withholding the grade prompted some (but not all) students to focus more on feedback.

Do we have any research supporting this strategy?

Sure enough, we do.

A study from 2021 shows that students who get feedback before grades improve more than those who get grades before feedback.

The researchers here, in fact, consider some of the underlying mechanisms as well.

They note that “excessive focus on grades can interfere with the students’ ability to self-assess,” and that, “in the case of [grade] disappointment…students may decide not to engage with the written comments at all.”

These truths suggest the obvious solution: postpone grades until students have time to process the feedback.

In this case, college students didn’t need to go through all the extra steps that I created; that is, they didn’t summarize the feedback their teachers wrote.

Simply having extra time to peruse the feedback — before they got the grades — proved a significant benefit.

Closing Thoughts

First: I note that both my own mini-experiment and this published study took place with older, academically successful students. I don’t know of research looking at a broader, more representative sample.

Second: reasonable people might ask, “if grades distract from feedback, can’t you just do away with the grade thing altogether?”

Some schools might make that decision — and plenty of people are advocating for it. But: individual teachers almost certainly can’t stop assigning grades. So, this strategy can help one teacher at a time.

Third: I first read about this study when Jade Pearce (X-Twitter handle: @PearceMrs) wrote about it. If you’re interested in this kind of research, you should ABSOLUTELY follow her there.

TLDR: To help students focus on learning, postpone grades until they have time to review feedback.

This strategy might not help everyone, but it provides clear benefits for many.

 

 


 

Kuepper-Tetzel, C. E., & Gardner, P. L. (2021). Effects of temporary mark withholding on academic performance. Psychology Learning & Teaching, 20(3), 405-419.

Which Is Better: “Desirable Difficulty” or “Productive Struggle”?
Andrew Watson

The obvious answer to my question is: “what a silly question.”

After all, the two phrases sound almost synonymous, don’t they?

Being “productive” is often “desirable.”

We often respond to “difficulty” by “struggling.”

When I face a “desirable difficulty,” I’m likely to engage in “productive struggle” — no?

This obvious answer, however, doesn’t seem to hold true in the world of education.

In fact, as far as I can tell, champions of “desirable difficulty” often decry “productive struggle” — and vice versa.

Let me explain.

Group A; Group 1

Many debates in the field of education boil down to two rough camps or teams.

These teams disagree quite heatedly about MANY questions; as a result, finding neutral terminology to name them gets complicated. You might hear about:

Progressive vs. traditional, or

Teacher-centered vs. student-centered, or

Direct instruction vs. constructivism, or

… well, the list goes on and on. (All of these terms contain inaccuracies, but they offer a place to start.)

Perhaps the least partisan differentiation works like this:

Champions of high-structure pedagogy believe that working memory limitations require careful teacherly guidance through complex paths of learning. This “team” — often misunderstood to be “traditional” — roughly favors a teacher-centered classroom, direct instruction, a knowledge-rich curriculum, and so forth.

Champions of low-structure pedagogy believe that students learn best as they build (that is, “construct”) their own mental models by discovery, trial, and error. This “team” — often mislabeled as “progressive” — typically favors a student-centered classroom, inquiry or discovery pedagogies, a skills-based curriculum, and so forth.

To be clear, every sentence in these two paragraphs invites LOTS of disagreement and debate. But they’ll serve as a useful starting place.

Now, proponents of both high- and low-structure pedagogy want students to THINK HARD. (No surprise there. As Dan Willingham has written, “memory is the residue of thought.”)

BUT — here’s the big reveal — each team has a different name for the right kind of hard thinking.

As best I can tell:

Team high-structure promotes “desirable difficulties,” like spacing, interleaving, retrieval practice, and generative learning strategies.

Team low-structure promotes “productive struggle,” as students wrestle to construct their own understanding through inquiry, discovery, and so forth.

These phrases — which seem like synonyms outside the high vs. low debate — serve as team jerseys for thinkers engaged in the debate.

So, high-structure champions don’t think much of “productive struggle” because that phrase signifies open-ended project pedagogies.

Low-structure champions don’t like “desirable difficulties” because they suggest an excessive level of teacherly control.

In this way, rough synonyms turn into markers of strong disagreement.

Renaming the Rose

This paradox — “synonyms signal strong disagreement” — in turn highlights an important part of this debate.

Champions of both high-structure and low-structure pedagogy want students to THINK HARD.


The key difference between them: what’s the right kind of hard thinking?

In other words: what principles should guide us as we decide when and how students think hard? And — implicitly — how do we measure the success of our pedagogy?

Rather than saying “that team is entirely wrong,” we can say:

“That team has a different set of principles behind achieving a goal that we share.

We certainly disagree about those principles, but because — again — we share the goal, we have a good place to start a conversation about meeting it.”

Long-time readers probably recognize one of my core beliefs in those sentences.

I have long argued that cognitive science research can’t tell teachers what to do. Instead, that research can help teachers think about what to do.

If we shift the high-structure/low-structure debate from a to do list (“follow these pedagogical steps”) to a to-think list (“think about making students THINK HARD in these ways”), we just might get the best ideas from both approaches.

After all: as long as students do the right kind of hard thinking — no matter the label we use to describe it — learning will result. Surely that’s a goal we share.

 

“Comprehensive and Manageable”: Walkthrus Has It All
Andrew Watson

Teachers who want to rely on cognitive science to improve our teaching have SO MANY good options to choose from:

The best ways to help students practice,

The best ways to help explain new material,

Even the best ways to help students feel connected to one another.

This good news, however, can quickly become bad news.

How can we keep track of all this guidance?

How can we balance and combine all these suggestions?

As I’ve written elsewhere, we’re lucky to have an increasing number of books that bring all these pieces together. (I wrote about Teaching and Learning Illuminated just a few weeks ago.)

Another EXCELLENT candidate in this field has been published for a US audience in recent months: Walkthru: 5-Step Guides to Build Great Teaching by Tom Sherrington and Oliver Caviglioli.


Many books in this field summarize and organize research into coherent topics and flow charts.

Sherrington and Caviglioli – long-time educators, both – take a different approach. They start from the assumption that teachers want to do something practical with the research right now.

With that in mind, they sort dozens of ideas into “Walkthrus”: a series of five concrete steps that teachers can take to focus on and improve a particular part of their teaching practice.

You want to be better at cold calling?

You want a new way to think about seating charts?

Maybe you’d like to create routines that foster a sense of classroom belonging?

For each of these goals – and DOZENS of others — you can pick a Walkthru and get down to work.

Here’s the fuller story:

The Background

Sherrington and Caviglioli build their Walkthrus on conceptual work done by many other scholars in this field. And – helpfully – they highlight these conceptual frameworks in the first section of their book, entitled “Why?”

MANY – perhaps most – of these frameworks will be familiar to long-time readers.

You’ve already heard about Caviglioli’s own work on dual coding.

I interviewed Blake Harvard for this blog many years ago.

Peps Mccrea’s book on Motivation gets the focus it deserves.

All the greats appear in this first section: Dan Willingham, and Generative Learning, and Shimamura’s MARGE model, and Cognitive Load Theory, and …

In effect, these 30 pages briskly summarize the essential thinkers, models, and frameworks of recent decades.

You might think of this “Why” section as a Hall of Fame for this field.

Getting Practical

This sort of brisk summary can be inspiring, but it can also be overwhelming. What should we teachers DO with SO MUCH information?

Fear not!

Sherrington and Caviglioli spend the next 200+ pages answering exactly that question.

As a teacher – or instructional leader – you might pick one of the book’s broader sections: say, “Questioning and Feedback,” or “Behavior and Relationships,” or “Mode B Teaching.”

Or, you might pick one of the individual Walkthrus.

To take one example – literally at random – you might decide to work on helping students read. Happily, one Walkthru focuses on “Building a Culture of Reading.” Steps here include:

Read Across the Curriculum, and

Embrace Reading Aloud, and

Embed Reading in Homework Tasks.

You can work through these steps at your own pace in an iterative cycle, which Sherrington and Caviglioli call “ADAPT” (see page 290).

In other words: teachers don’t need to do everything all at once. And we don’t need to figure out how to structure the application process.

Instead, Walkthrus walks us through the translation from theory (the “Why” section) to practice (the “What” section).

This strategy means that an enormous amount of research-based advice is repackaged into brief and manageable steps.

Some Important Notes

First: The USA version of Walkthrus distills the greatest hits from a 3-volume version published in the UK. If you’re REALLY into Walkthrus, you might look for that larger set.

Second: Sherrington and Caviglioli – of course! – make decisions about what to include (and not). Not all teachers or leaders will agree with all these decisions.

However: you can easily find points of agreement and focus there. The book’s structure, in fact, encourages you to do so.

Third: I share a publisher (John Catt) with these authors; in fact, I wrote a “blurb” for the book. I don’t think these factors have influenced my review, but you should have those facts as you weigh my opinions.

TL;DR

You’re looking for a resource that sums up the most important ideas for applying cognitive science to the classroom?

You’d like it to be simultaneously comprehensive and manageable?

Walkthrus just might be the book for you.

How to Change Students’ Minds? Create Surprise…
Andrew Watson

Sometimes teaching is challenging. And sometimes, it’s REALLY challenging.

For instance:

Because I’m an English teacher, I want my students to know the word “bildungsroman.” (It means, “a novel of character formation.” Their Eyes Were Watching God depicts Janie’s formation as a complete person — so, it’s a bildungsroman.)

Alas, students find that word to be disturbingly odd: “bildungswhat???” they cry.

And the definition is at times perplexing. Are the Harry Potter novels examples of a bildungsroman? How about The Book Thief?

So, learning that definition presents a challenge.

But other literary terms create a bigger learning challenge.

As an English teacher, I also want my students to know the definition of the word “comedy.”

In this case, my students and I face a much different problem. That is: my students think they already know what ‘comedy’ means.

They think it means, basically, “a story that’s funny.”

In the world of literary analysis, however, “comedy” doesn’t mean funny.

Basically, the definition goes like this: ” ‘tragedy’ ends in death or banishment; ‘comedy‘ ends in marriage, implying birth.” (Lots more to say, but that’s a good place to start.)

So: according to this definition, sitcoms aren’t comedy.

And all sorts of stories can be comic, even if they’re not even a little bit funny. (I just read a murder mystery which has a comic ending: one of the protagonists goes on a date — implying the potential for marriage.)

In research world, we call this problem a “prior misconception.”

That is: my students think they know the correct answer (“comedy” = funny), but the question really has a different answer (“comedy” = ending implying marriage).

Sadly, prior misconceptions make learning harder. Students’ prior misconceptions complicate the process of learning correct answers or concepts.

So: what’s a teacher to do?

A Scientific Method?

Although the examples I’ve offered focus on teaching English literary terminology, this question gets most research attention for teaching scientific concepts.


For instance: imagine pushing a solid ball underwater. How much liquid will it displace?

Younger students have an important misconception about this question. They typically think that the amount of water depends on the WEIGHT of the ball, not the SIZE of the ball.

This misconception about “displacement” will get in the way of later scientific learning, so teachers should correct it as quickly as we can. How best to do so?

A research team in Germany approached this question with a specific strategy: using surprise.

These researchers showed a video to 6- to 9-year-olds, whom they met at a natural history museum.

Half of the children were asked to predict how much water would be displaced when balls of various sizes and materials were submerged. Then they saw the actual results.

Sure enough: the children who made predictions  — based on their prior misconceptions — were more surprised than those who didn’t. (Believe it or not, surprise in this case is measured by pupil dilation!)

And, those children learned more from the experiment than those who didn’t make predictions.

That is: they scored higher on subsequent tests about displacement. And — even better — they scored higher on transfer tests of this concept.

So, one potential strategy to help students overcome their prior misconceptions about the natural world:

Step one: ask them to make predictions based on those misconceptions

Step two: surprise them with real-world experiences that contradict them.

Boom: minds changed.

Strengths, and Doubts

When I first saw it, this study appealed to me for a number of reasons.

First, one author — Garvin Brod — has worked on several studies and papers that I admire. (I’ve written about another one here.)

So, when I see Dr. Brod’s name on the byline, I sit up and take notice.

Second: for a variety of technical reasons, I admire the study design. The researchers have taken great care to get the tricky details just right. (For instance: their active control condition makes sense to me.)

However, I do have concerns. (To be clear: Brod & Co. acknowledge both these concerns in their “Limitations” section.)

Concern #1: duration.

For understandable reasons, researchers measured the students’ learning right away. (The entire process took about 30 minutes.)

But we don’t want our students to change their prior misconceptions right now. We want them to change misconceptions — as much as possible — FOREVER.

This problem creates concerns because prior misconceptions are stubborn. To ensure that the “surprise” method works, it would be GREAT if we could retest participants weeks or months later.

Concern #2: contradiction.

I have seen other authors and writers raise a plausible concern. If we invoke students’ prior misconceptions before contradicting them, we run the risk of strengthening those misconceptions.

That is: students will naturally filter the new/contradictory experience through the distorting lens of their misconceptions. And that lens is EVEN MORE DISTORTING because we just asked students to activate it.

Now at this point I have a confession: I simply can’t remember where I read that. But I remember thinking: “that sounds really plausible to me.”

So at this point, I’m honestly kind of stumped. A well-conceived study suggests the “surprise” strategy will work (at least in the short term). But other scholars in this field have plausible doubts.

Truthfully, I’m hoping one of you will know the “don’t invoke prior misconceptions!” research pool and point it out to me. If/when that happens, I’ll share it with you.

TL;DR

This study suggests that asking students to make predictions based on their prior misconceptions increases their surprise when those misconceptions are contradicted by experience.

And: that feeling of surprise helps them learn a correct conception — at least in the field of science.

However, I myself am not fully persuaded by this approach. I’ll keep a lookout for other studies in the field, and share them with you.


 

Theobald, M., & Brod, G. (2021). Tackling scientific misconceptions: The element of surprise. Child Development, 92(5), 2128-2141.

Classroom Cognition Explained, or, Dual Coding Just Right
Andrew Watson

The Good News: research into cognitive science can be SPECTACULARLY USEFUL to teachers. (That’s why we have Learning and the Brain conferences….)


The Less Good News: ideas that come from cognitive science can be MISUNDERSTOOD and MISAPPLIED with alarming frequency.

For example: as I’ve written elsewhere, dual coding has lots of potential benefits for reducing working memory load — and thereby helping students learn. That’s the good news.

But — less good news — dual coding has too often been interpreted to mean “put icons on things to make them better.”

Wouldn’t it be great if someone could bring together LOTS of ideas from cognitive science, AND explain them with well-executed dual coding?

Yes; Yes It Would…

Well, someone has done exactly that. Three someones, in fact.  Bradley Busch, Edward Watson (no relation), and Ludmila Bogatchek have written Teaching and Learning Illuminated: the Big Ideas, Illustrated.

As that title promises, this book illuminates (that is, dual codes) the greatest hits from cognitive science: retrieval practice, cognitive load theory, Rosenshine’s principles, mindset, and a few dozen more.

Each section combines a pithy description of a particular concept with a visual representation of its core ideas.

So, for instance, page 35 summarizes dozens of studies looking at the benefits of spreading practice out (“spacing”) and practicing related topics together (“interleaving”).

And, the facing page offers a carefully considered graph that depicts learning over time. One path (“cramming”) looks good because it works so well in the short term. But the second path (“spacing and interleaving”) results in more learning over time.

Voila: “desirable difficulties” in one thoughtful graph.

Unlike so many examples of dual coding of the “put-an-icon-somewhere” school, Busch, Watson, and Bogatchek create substantial, meaty visuals that both require and reward careful study.

I never looked at the illustrations and thought: “gosh, that’s pretty.”

Instead, I thought:

Oh, gosh, I need to stop and study this for a bit.

Wait, why is that line there?

Ok, now I get it. Presumably this axis is labeled…oh, right, so cool!

In other words, the visuals both require thought and support thought. The result: readers understand these complex ideas even better.

So Many Shelves

I’ve written in the past that the “best book to read” depends on the reader’s current knowledge.

If you’re somewhat of a beginner in this field, I think you should probably read a book that focuses on just one topic: long-term memory, or attention, or cognitive load theory.

Once you understand lots of the pieces, it’s time to read the books that put them all together.

Teaching and Learning Illuminated looks like an easy read — so many cool pictures! At the same time, it includes an ENORMOUS number of research-based insights and suggestions.

For that reason, I think of it as an “early-advanced” book more than one for those who are new to the field. Those illustrations are welcoming, but they also create cognitive demands of their own.

Full Disclosure

Because this field is relatively small, I know one of the three authors — Bradley Busch — a bit. (I recently recorded some brief video snippets for his website.)

I don’t believe our conversations have influenced this review, but the reader should know of them in making that evaluation.

I’ll also note: yes, I have written a book about Mindset; and yes, this book includes a mindset chapter called “The Watson Matrix.” But: their matrix isn’t about my summation of mindset theory.

 

An Argument Against “Chunking”
Andrew Watson

Learning and the Brain exists so that we can talk about good teaching together.

Although such conversations can provide great benefits, they also run into problems.

We might disagree with each other’s beliefs.

Or, we might disagree about research methods.

Even when we do agree, we might struggle to communicate effectively about shared beliefs.

For example: jargon.

When specialists talk with each other about “theory of mind” or “p3” or “element interactivity,” the rest of us often think “what the heck does THAT mean?”

Effective communication stops when words don’t have recognizable meanings.

Another, subtler problem also hampers communication:

Effective communication stops when we use the same word to mean different things.

Sometimes this problem happens between disciplines.

The word “transfer,” for instance, has different meanings in neuroscience, education, and psychology.

Other words get us all tangled up, even within the same discipline.

I’m looking at you, “chunking.”

Television for All

I believe I first heard the word “chunking” to describe this mental phenomenon:

Imagine I ask you to memorize this list of letters:

CN NAB CFO XHB OCB S

Or, I might ask you to memorize THIS list of letters:

CNN ABC FOX HBO CBS

From one perspective, those lists are identical. They are the same letters in the same order. I just moved the spacing around a bit.

But, when I moved those spaces, I “chunked” the letters.


That is: I organized those letters to align with your prior knowledge.

As teachers, we can reduce working memory load by “chunking”: that is, by aligning new ideas/information with ideas/information our students already have.

“Chunking” means “alignment with prior knowledge.”

Cool.

Or, wait a moment…

Curiouser and Curiouser

I’ve also heard “chunking” used in entirely different ways.

The second meaning: “break larger pieces down into smaller pieces.”

If I’ve got a list of ten instructions I want my students to follow, that list will almost certainly overwhelm their working memory. So, I could break that list down.

Three instructions.

Then three more.

An additional two, followed by the final two.

VOILA, I “chunked” the instructions.

Of course, this kind of chunking (breaking down into smaller bits) doesn’t mean the same thing as the first kind of chunking (aligning with prior knowledge).

Nor does it mean the same thing as the THIRD kind of chunking: forming a link with prior knowledge.

That is:

You could learn that “hamster” is another “mammal” that people keep as a “pet.”

You’ve formed a new “chunk”: mammals that are pets.

Or, you could learn that “Saratoga” is another surprising military victory, like “Agincourt” and “Thermopylae.”

You’ve formed a new “chunk”: unlikely military victories.

You see the problem here?

In Sum

So, as far as I can tell, “chunking” means either…

… aligning new information with prior knowledge, or

… breaking large information dumps into smaller pieces, or

… connecting new information with well-known information (which sounds like the first meaning, but isn’t exactly the same thing).

If I tell a colleague, “I think that part of the lesson would have benefitted from more chunking,” s/he doesn’t really know what I mean.

Even worse: s/he might THINK that s/he knows — but might understand chunking one way when I mean it another.

Ugh.

To be clear: I am IN FAVOR of all three strategies.

After all: all three ideas reduce working memory load. And, I’m a BIG FAN of reducing WM load.

However, when we use the word “chunking” to describe three different teaching strategies, we make our advice harder to understand.

That is: we increase the working memory demands of understanding strategies to reduce working memory demands. The paradox is both juicy and depressing.

So, I am enthusiastically in favor of all the strategies implied by the word “chunking,” but I think we should stop calling them “chunking.”

Instead, we should use more precise vocabulary to label our true meaning.

Do Animations Improve Learning? A Definitive Answer, Please…
Andrew Watson
Andrew Watson

Recently I discussed working memory overload with a group of wise and thoughtful teachers.

I showed them one of my favorite GIFs:

a glass (representing working memory),

slowly filling up with stuff (teaching methods, complex information),

so that there is ultimately no room left in the glass (that is: no room left for understanding).

VOILA: working memory overload in one handy animation.

I love this GIF, and show it often.


Yet when I gave these teachers time to discuss this animation, they honestly didn’t like it very much. They had lots of specific (and insightful) suggestions, but the overall message was: thumbs down.

So: should I ditch the GIF?

Where to Start

For a guy who writes a blog about research-informed teaching, the next step seems obvious: find out what the research says!

Surely I can find an answer — maybe even a definitive one.

Alas, I quickly stumbled into a quandary.

On the one hand, we’ve got lots of good research suggesting that — on the whole — students do NOT learn more from animated information.

One of the best known studies — led by the much-esteemed Richard Mayer — supports the static media hypothesis: “static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.”

In this study, researchers used animations about everything from lightning formation to toilet tanks to see if they helped students understand.

These animations never helped, and often hurt, student learning.

On the other hand, a substantial meta-analysis of 40 studies finds a “beneficial effect of the presence of animated display for learning dynamic phenomena.”

So: what to do when we’ve got persuasive — and contradictory — evidence?

A Feature, Not a Bug

For people unfamiliar with research-world, this kind of contradiction might seem like a failure. If the people who do the research can’t agree on an answer, surely we should just ignore them.

I would offer a different interpretation.

Teaching is complicated. Learning is complicated. PEOPLE are complicated.

So, any time we do research about people teaching and learning, we’re looking at enormously complicated questions.

Some disagreement is inevitable.

And — here’s the surprise — the fact that we found contradictions means that we’ve been looking hard enough. (If I don’t find contradictory research, I probably haven’t looked very hard…)

What, then, should we do to resolve the (inevitable, helpful) contradictions?

One useful step: get granular.

In this case: presumably some kinds of animations are helpful under some kinds of circumstances. But others: not so much.

We need to know more about the specifics.

Okay, Some Specifics

With that in mind, I found a more recent study trying to understand when and why animations might hinder understanding.

The study, in effect, looked at two questions:

Are the animations essential to understanding the topic, or are they basically “decorative”?

and

Is the material being studied cognitively challenging?

Two scholars — Annabel Pink and Philip Newton — had students study slides with information on them. Some slides had animations; others didn’t.

And — useful to know — the slides covered complex material: human physiology and enzyme kinetics.

Sure enough, students remembered LESS information from the slides with animations. And they rated those slides as cognitively MORE challenging.

In other words:

When deciding whether or not to break out the GIFs, we can ask ourselves:

Am I just decorating the slide, or does animation help clarify the meaning of the material?

and

Is this material a cognitive heavy lift?

When I ask these questions about my working memory overload GIF, I arrive at these answers:

The GIF illustrates a complex process: it’s not decorative, but meaningfully connected to an understanding of the ideas.

BUT

The ideas are — in fact — quite complicated.

The animation, in other words, might add cognitive load to an already mentally challenging concept. Hence the teachers’ unhappiness.

Small, Medium, and Big Pictures

What should we teachers do with this information?

Narrowly stated, we can consistently ask the two questions above: a) is the animation “decorative”? and b) is the material cognitively challenging?

If either answer is “yes,” then we should hesitate to add animations.

More broadly, we should continue to look for detailed guidance about when to use, and when to avoid using, animations to help students learn.

As far as I can tell, we just don’t have a clear picture about the boundary conditions within which they help students learn.

The big picture looks like this.

Psychology research rarely gives us an absolute, definitive answer to questions like: “should we add animations or not?”

Teachers always need to look at research specifics, compare them to the classroom conditions where we work, and use our own expert judgment to analyze the goodness of fit.


Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11(4), 256-265. https://doi.org/10.1037/1076-898x.11.4.256

Berney, S., & Bétrancourt, M. (2016). Does animation enhance learning? A meta-analysis. Computers & Education, 101, 150-167.

Pink, A., & Newton, P. M. (2020). Decorative animations impair recall and are a source of extraneous cognitive load. Advances in Physiology Education.

The Whole Toolbox in One (Free) Download
Andrew Watson
Andrew Watson

If you want to learn more about improving teaching with psychology research, I’ve got good news:

There are SO MANY excellent books to read.

I’ve also got bad news:

There are SO MANY excellent books to read, we can struggle to manage them all.

In fact, as I’ve written elsewhere, I think the “best book to read” depends on the category of book you’re looking for.

At the beginning of your research+education journey, you probably want a book devoted to one topic: say, working memory, or motivation, or attention.

As you get more familiar with different categories of research, you might instead want a book that brings many topics together.

Today I’d like to recommend a book from the second category: the Great Teaching Toolkit: Evidence Review from Evidence Based Education. (You can read about it and download it here.)

Step One: How to Begin?

Anyone striving to write a book that “brings many topics together” starts with an enormous challenge: how to organize such a behemoth?

We have SO MUCH pertinent research on SO MANY topics: how can we possibly tidy this muddle?

The Toolkit’s authors devise a sensible sorting strategy. They believe research gives teachers strong guidance in four areas:

What sorts of knowledge do teachers need?

How can we make classrooms emotionally safe?

How can we structure classroom work and routines efficiently?

What teaching strategies require students to think hard?

Now, other authors organize their thinking in other ways. (For instance: Dan Willingham’s Why Don’t Students Like School? focuses on nine key principles from cognitive science that should guide instruction.)

But I think you can see right away why the Toolkit’s organizational structure sounds so helpful and sensible.

Step Two: Break It Down

Within each of these categories, the authors offer between 3 and 6 specific principles: everything from “teachers should know common misconceptions in their discipline” to “strategies for asking questions effectively.”

This structure, in turn, allows for a straightforward teacher-development plan.

If I were using this Toolkit with a faculty, I would have teachers select one of these sixteen topics: preferably one where they feel the least confident and successful.

Each teacher would then dig into the research-based suggestions provided right there in the Toolkit.

Even better: the Toolkit reviews the research it summarizes. Teachers and school leaders who want to know exactly why this strategy or topic has been prioritized get all the info they need to dig deeper and discover more.

Examples, Please

You have, no doubt, heard that feedback is essential for student learning.

Imagine that a teacher reviews the Toolkit’s list and determines that s/he really needs to work on this specific part of his or her craft.

Turning to section 4.4, this teacher quickly gathers several useful insights about the role of feedback in our work.

In the first place, the Toolkit draws a helpful distinction between feedback that helps the teacher — by giving us information about how much our students know and understand — and feedback that helps the student — by giving them structured ways to improve.

That simple distinction sounds almost too obvious to state out loud…but in my experience isn’t emphasized nearly often enough.

In the second place, the teacher will find several thoughtful prompts for further thought.

As the authors wisely say: “there is no simple recipe for giving powerful feedback.”

Should the teacher remind the student of the success criteria, or point out gaps between the current work and those criteria?

The Toolkit doesn’t offer prescriptive answers because research can’t do that. Research can provide us with options, and let teachers sort out the best ways to put all those options together.

And, if you’re a research nerd (as I am), you’ll be delighted to find almost 20 pages of discussion on their sources for these ideas, and their methods for sorting them all together.

TL;DR

You already know several specific cognitive-science-informed teaching strategies? You want a bigger picture?

The Great Teaching Toolkit will be a feast for you. (And yes: you can download it free!)

The Cold-Calling Debate: Potential Perils, Potential Successes
Andrew Watson
Andrew Watson

Some education debates focus on BIG questions:

high structure vs. low structure pedagogy?

technology: good or bad?

how much should teachers focus on emotions?

Other debates focus on narrower topics. For instance: cold calling. (“Cold calling” means “calling on students who haven’t raised their hands.”)

Proponents generally see several benefits:

Cold calling helps broaden check-for-understanding strategies. That is: it lets teachers know that MANY students understand, not just those who raise their hands.

It increases accountability.

It adds classroom variety.

And so forth.

Opponents likewise raise several concerns. Primarily:

Cold-calling could stress students out — even the ones not being cold called. That is: even the possibility that I might be called on could addle me.

Also, cold calling signals a particular power dynamic — one that runs contrary to many school philosophies.

Because both sides focus on different measures of success or peril, this debate can be difficult to resolve.

The Story So Far

Back in 2020, a friend asked about the cold calling debate. I looked for research, and — honestly — didn’t find much. The result of that search was this blog post.


In brief, the only study I found (focusing on college sophomores) found more benefits and fewer perils.

Students who had been cold-called a) asked more questions later on, and b) felt less stress.

But, one study is just one study. And, if you don’t teach college sophomores, you might not want to rely on research with that age group.

Today’s News

Research might offer teachers useful guidance, but we shouldn’t accept all research without asking a few questions.

One way to ensure we’re getting GOOD research-based advice is to look for wide ranges of evidence: evidence from…

… primary school AND high school

… science class AND history class

… small AND large school

… Stockholm AND Johannesburg

And so forth.

Similarly, teachers should feel especially confident when researchers use different methodologies to explore their questions.

For this reason, I was especially pleased to find a cold-calling study published just last year.

This study doesn’t go in for random assignment or control groups (staples of other research paradigms). Instead, it uses a technique called “multimodal interaction analysis.”

I haven’t run into this technique before, so I’m honestly a newbie here. But the headline is: researchers used videotapes to study 86 cold-calling interactions.

In their analysis, they break the interaction down into a second-by-second record — noting the spoken words, the hand gestures, the length of pauses, the direction of the teacher’s gaze. (In some ways, it reminds me of Nuthall’s The Hidden Lives of Learners.)

Heck, they even keep track of the teacher’s use of modal verbs. (No, I’m not entirely sure what modal verbs are in German.)

By tracking the interactions with such extraordinary precision, they’re able to look for nuances and patterns that go beyond simply: “the teacher did or didn’t cold call.”

Conclusions?

Perhaps unsurprisingly, the study’s broad conclusion sounds like this: details matter.

The researchers offer a detailed analysis of one cold call, showing how the teacher’s build up to the moment created just the right support, and just the right tone, for the student to succeed.

They likewise detail another cold call where the teacher’s body language and borderline insulting framing (“do you dare to answer?”) seem to have alarmed a shy student, who answered in monosyllables.

By implication, this research suggests that both opponents and proponents are missing a key point.

We needn’t ask: “is cold calling good or bad?”

Instead, we should ask: “what precise actions — what words, what gestures, what habits — set the student up for a positive interaction? Which precise actions do the opposite?”

Once we get good answers, we can focus and practice! Let’s do more of the good stuff, and less of the harmful stuff.

TL;DR

“Is cold calling good or bad?” is probably the wrong question.

Recent research focusing on nuances of technique suggests that teachers can reduce the perils of cold calling to foster participation and enhance learning.


Morek, M., Heller, V., & Kinalzik, N. (2022). Engaging ‘silent’ students in classroom discussions: A micro-analytic view on teachers’ embodied enactments of cold-calling practices. Language and Education, 1-19.