
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Let’s Get Practical: What Works Best in the Classroom?
Andrew Watson

At times, this blog explores big-picture hypotheticals — the “what if” questions that can inspire researchers and teachers.

And, at times, we just want practical information. Teachers are busy folks. We simply want to know: what works? What really helps my students learn?

That question, in fact, implies a wise skepticism. If research shows a teaching strategy works well, we shouldn’t just stop with a study or two.

Instead, we should keep researching and asking more questions.

Does this strategy work with …

… older students as well as younger students?

… history classes as well as music classes as well as sports practice?

… Montessori classrooms, military academies, and public school classrooms?

… this cultural context as well as that cultural context?

And so forth.

In other words, we want to know: what have you got for me lately?

Today’s News

Long-time readers know of my admiration for Dr. Pooja Agarwal.

Her research into retrieval practice has helped clarify and deepen our understanding of this teaching strategy.

Her book, written with classroom teacher Patrice Bain, remains one of my favorites in the field.

And she’s deeply invested in understanding the complexity of translating research into the classroom.

That is: she doesn’t just see if a strategy works in the psychology lab (work that’s certainly important). Instead, she takes the next step to see if that strategy works with the messiness of classrooms and students and schedule changes and school muddle.

So: what has she done for us lately? I’m glad you asked.

Working with two other scholars, Agarwal asked all of those questions I listed above about retrieval practice.

That is: we think that retrieval practice works. But: does it work with different ages, in various subjects, and in different countries?

Agarwal and Co. wanted to find out. They went through an exhaustive process to identify retrieval practice research in classrooms, and studied the results. They found:

First: yup, retrieval practice really does help. In 57% of the studies, the Cohen’s d value was 0.50 or greater. That’s an impressively large result for such a simple, low-cost strategy. (If Cohen’s d is new to you, see the quick refresher just after this list.)

Second: yup, it works in different fields. By far the most research is done in science and psychology (19 and 16 studies), but it works in every discipline where we look — including, say, history or spelling or CPR.

Third: yup, it works at all ages. Most research is done with college students (and, strangely, medical students), but it works in K-12 as well.

Fourth: most retrieval practice research is done with multiple choice. (Yes: a well-designed multiple choice test can be retrieval practice. “Well-designed” = “students have to THINK about the distractors.”)

Fifth: we don’t have enough research to know the optimal gap between retrieval practice and the final test.

Sixth: surprisingly, not enough classroom research focused on FEEDBACK. You’d think that would be an essential component…but Team Agarwal didn’t find enough research here to draw strong conclusions.

Seventh: Of the 50 studies, only 3 were from “non-Western” countries. So, this research gap really stands out.
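
That promised refresher for the stats-minded: Cohen’s d expresses the difference between two group averages in standard-deviation units. Here’s a rough sketch of the standard textbook formula — nothing specific to this meta-analysis:

d = (M₁ − M₂) / s_pooled,  where  s_pooled = √[ ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2) ]

So a d of 0.50 means the retrieval practice groups scored, on average, half a standard deviation above the comparison groups — conventionally read as a “medium” effect.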

In brief: if we want to know what really works, we have an increasingly clear answer: retrieval practice works. We had good evidence before; we’ve got better evidence now.

Examples Please

If you’re persuaded that retrieval practice is a good idea, you might want to be sure exactly what it is.

You can always use the “tags” menu on the right; we blog about retrieval practice quite frequently, so you’ve got lots of examples.

But, here’s a handy description (which I first heard in Agarwal and Bain’s book):

When students review, they put information back into their brains. So: “rereading the textbook” = “review,” because students try to redownload the book into their memory systems.

When students use retrieval practice, they take information out of their brains. So, “flashcards” = “retrieval practice,” because students have to remember what that word means.

So:

Reviewing class notes = review.

Outlining the chapter from memory = retrieval practice.

Short answer questions = retrieval practice.

Watching a lecture video = review.

When you strive for retrieval practice, the precise strategy is less important than the cognitive goal. We want students to try to remember before they get the correct answer. That desirable difficulty improves learning.

And, yes, retrieval practice works.

How Can We Help Students Study Better? [Repost]
Andrew Watson

This story might sound familiar:

You attend a Learning and the Brain conference (like, say, our upcoming conference about Teaching During a Pandemic) and come away with FANTASTIC ideas.

You go back to your classrooms — in person, online, asynchronous — and tell your students all about the amazing research you saw. (Perhaps you discuss the importance of retrieval practice, which helps much more than old-fashioned review.)

Your students sound thrilled!

And yet, the very next day they ignore your retrieval practice suggestion, and go right back to rereading their notes. Ugh.

SO FRUSTRATING!

What can we do to help our students study correctly — which is to say: how can we help them learn more, and more effectively?

In a recent article, Mark McDaniel and Gilles Einstein offer a 4-step framework to help change students’ study behavior.

Called KBCP — which stands for “Knowledge, Belief, Commitment, and Planning” — this framework could make a real difference for long-term learning.

The Short Version

In brief:

Knowledge: we should tell students about the study strategy or technique that research has shown to be effective: say, spacing, or generative learning strategies.

Belief: students then undertake an exercise that demonstrates the benefits of this strategy.

Commitment: students get on board with the idea. They don’t just know and believe; they buy in.

Planning: next, they make a specific and measurable plan to enact their commitment.

As McDaniel and Einstein’s article shows, each of these steps has good research behind it. Their contribution to this field: they bring them all together in a coherent system.

McDaniel and Einstein emphasize that teachers shouldn’t rely on just one or two of these steps. They all work together to help students learn more:

Our central premise is that all four components must and can be explicitly targeted in a training program to maximize self-guided transfer of effective learning strategies.

The problem with the story that began this blog post, in other words, is that it targets only the first of these four steps. To help our students learn, we need to do more and better.

One Example

This article makes for such compelling reading because the authors both explain the research behind each step and offer specific classroom examples to show what they mean.

For instance: the “belief” step encourages teachers to design an exercise that helps students really believe that the technique will work. What would such an exercise look like?

If, for instance, we want to encourage students to “generate explanations” as a memory strategy, what exercise would persuade them that it works?

M&E describe a strategy they’ve often used.

First: have students learn several simple sentences. For instance: “The brave man ran into the house.”

Second: for half of those sentences, encourage students to (silently) generate an explanation: perhaps, “to rescue the kitten from the fire.”

Third: when we test students on those sentences later, they will (almost certainly) remember the second group better than the first. That is: they’ll have reason to believe the strategy works because they experienced it themselves.

McDaniel and Einstein include such examples for each of their four steps.

And Beyond

This article gets my attention for another reason as well. The authors write:

There are many potentially effective ways to actualize the key components of the KBCP framework, and we offer the following as one possible example of a training program.

Frequent readers recognize my mantra here: “don’t just do this thing; instead, think this way.”

In other words, McDaniel and Einstein don’t offer readers a to-do list — a set of instructions to follow. Instead, they provide ideas for teachers to consider, and then to adapt to our own specific teaching context.

KBCP will look different in a 2nd-grade classroom than in a high-school classroom; different in a gym class than in a tuba lesson; different in a Brazilian cultural context than in a Finnish one.

Research can offer us broad guidance on the directions to go; it can’t tell us exactly what to do with our own students.

The KBCP framework creates another intriguing possibility.

I recently saw an article saying — basically — that “teaching study skills doesn’t work.”

Its provocative abstract begins:

This paper argues that the widespread approach to enhancing student learning through separate study skills courses is ineffective, and that the term ‘study skills’ itself has misleading implications, which are counterproductive to learning.

The main argument is that learning how to study effectively at university cannot be separated from subject content and the process of learning.

Having seen McDaniel and Einstein’s article, I wonder: perhaps these courses don’t work not because they can’t work, but because they’re currently being taught incorrectly.

Perhaps if study skills classes followed this KBCP framework, they would in fact accomplish their mission.

M&E acknowledge that their framework hasn’t been tested together as a coherent strategy. To me at least, it sounds more promising than other approaches I’ve heard.

“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply? [Reposted]
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed a simple black-and-white diagram; for the other half, the video showed a version with colors and dots.

Did the different diagrams shape the students’ learning? Did they shape the students’ ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students in both groups learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: whether you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

How to Foster New Friendships in School? Seating Plans! (We’ve Got Research…)
Andrew Watson

In schools, we want students to learn many topics: math, and history, and reading, and health, and robotics…

And, especially at the beginning of the year, we’d like them to make friends along the way.

Can we help?

One research team tried a reasonable approach. They wondered if students might form new friendships when they sit next to classmates they don’t yet know well.

Here’s the story:

The Plan

Julia Rohrer and colleagues worked with 182 teachers in 40 schools in Hungary. Their study included 3rd through 8th graders — almost 3000 of them!

In these schools, students sat at “freestanding forward-facing 2-person desks.” (It sounds to me like Little House on the Prairie, but in rural Hungary.) Researchers assigned students to these paired desks randomly.

And, they tracked the friendships that formed.

So: what happened? Did students befriend their deskmates?

The Prediction & the Speculation

Unsurprisingly, we tend — on average — to form friendships with people who are like us. In schools, that means:

boys typically befriend boys, while girls befriend girls;

academic achievers connect with other achievers;

members of racial and ethnic groups often form friendships within those groups. (In this study, researchers kept track of Roma and non-Roma Hungarian identities.)

Researchers predicted that this pattern (called “homophily”) would continue.

And they speculated that the new seating plans might shake things up a bit. That is: perhaps more friendships would form outside of those usual patterns.

The Results

So, what happened with these new seating plans?

First: Randomly seating students next to each other did modestly increase the likelihood of mutual friendships forming: from 15% to 22%.

Second: These new friendships did mostly fit the expected patterns. As homophily suggests, friendships largely formed within gender, achievement, and ethnic groups.

Third: Random seating DID foster new friendships across those divides as well — although to a smaller degree. That is: some girls did form mutual friendships with boys, and so forth.

In brief: researchers wondered if random seating patterns might expand friendship circles — and they do!

The Big Picture

We should, of course, remember that this study is just one study. We’ll need more research to be increasingly certain of these conclusions.

And, honestly, this seating plan didn’t make a huge difference.

At the same time: teachers know that every little bit counts. If we can help students form new friendships — and help them form friendships that might not otherwise have started — that’s a powerful way to start a new school year.

You will, of course, adapt this idea to your own teaching context. As you contemplate your routine at the beginning of a new year, this strategy might be a useful way to open new friendship vistas.
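
If you’d like to try randomized desk pairs in your own room, the shuffle is easy to automate. Here’s a minimal sketch in Python — the roster names are invented, and this is just one of many ways to do it:

import random

# Invented example roster — swap in your own class list.
roster = ["Anna", "Bela", "Csaba", "Dora", "Emese", "Ferenc", "Gabor", "Hanna"]
random.shuffle(roster)

# Pair off neighbors in the shuffled list: each pair shares a 2-person desk.
# With an odd number of students, the last "pair" is a solo desk.
desks = [roster[i:i + 2] for i in range(0, len(roster), 2)]

for number, pair in enumerate(desks, start=1):
    print(f"Desk {number}: {' and '.join(pair)}")

Rerun it at the start of each term, and every student gets a fresh deskmate.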

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored? [Repost]
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

Parachutes Don’t Help (Important Asterisk) [Repost]
Andrew Watson

A surprising research finding to start your week: parachutes don’t reduce injury or death.

How do we know?

Researchers asked participants to jump from planes (or helicopters), and then measured their injuries once they got to the ground. (To be thorough, they checked a week later as well.)

Those who wore parachutes and those who did not suffered — on average — the same level of injury.

Being thorough researchers, Robert Yeh and his team report all sorts of variables: the participants’ average acrophobia, their family history of using parachutes, and so forth.

They also kept track of other variables. The average height from which participants jumped: 0.6 meters. (That’s a smidge under 2 feet.) The average velocity of the plane (or helicopter): 0.0 kilometers/hour.

Yes: participants jumped from stationary planes. On the ground. Parked.

Researchers include a helpful photo to illustrate their study:

Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground

Why Teachers Care

As far as I know, teachers don’t jump out of planes more than other professions. (If you’re jumping from a plane that is more than 0.6 meters off the ground, please do wear a parachute.)

We do, however, rely on research more than many.

Yeh’s study highlights an essential point: before we accept researchers’ advice, we need to know exactly what they did in their research.

Too often, we just look at headlines and apply what we learn. We should — lest we jump without parachutes — keep reading.

Does EXERCISE help students learn?

It probably depends on when they do the exercise. (If the exercise happens during the lesson, it might disrupt learning, not enhance it.)

Does METACOGNITION help students learn?

It probably depends on exactly which metacognitive activity they undertook.

Do PARACHUTES protect us when we jump from planes?

It probably depends on how high the plane is and how fast it’s going when we jump.

In brief: yes, we should listen respectfully to researchers’ classroom guidance. AND, we should ask precise questions about that research before we use it in our classrooms.

Making “Learning Objectives” Explicit: A Skeptic Converted? [Reposted]
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on students’ reports of their own learning — and, in the past, such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five passages.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” — asking students questions about an upcoming reading passage, even though they don’t know the answers yet — yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats-minded, Cohen’s d was 0.84! That’s HUGE!)

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these three studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.

“Once Upon a Time”: Do Stories Help Learning?
Andrew Watson

When Daniel Willingham wrote Why Don’t Students Like School, he accomplished a mini-miracle: he made abstract psychology research…

…easy to understand, and

… obviously helpful to classroom teachers.

Its invaluable pages include emphatically practical teaching advice: “memory is the residue of thought”; novices and experts think differently. (Little wonder its third edition was just published.)

In his third chapter, Willingham included one important strategy for helping students understand and remember: use stories.

We understand and remember stories for many reasons:

They follow a familiar cause/effect structure.

They focus on people and conflicts.

We (most of us) grew up hearing stories.

Stories evoke emotions.

Expository writing — essays, textbooks — has its own advantages, but they probably can’t compete with the advantages of narrative.

Today’s News

Willingham first published Why Don’t Students Like School in 2009. What have we learned about narratives vs. exposition since then?

After all, research conclusions change over time. Does this advice still hold?

Earlier this year, Raymond Mar and colleagues published a meta-analysis of research on this topic. They wanted to know:

Does narrative presentation of information improve memory, compared to expository texts?

Does it improve comprehension?

Are there boundary conditions?

They identified 37 studies (with 78 data sets and more than 33,000 participants!) that matched their criteria.

So, what did they find?

The Envelope Please…

Sure enough, narratives help students understand. And they help students remember. And — this news is surprising to me — those benefits don’t have quirky exceptions. (Most research findings do: e.g., “This technique works in these circumstances but not those.”)

For the stats-minded, they calculated a Hedges’s g of 0.55. In my experience, that’s a surprisingly big effect for measurements across such a big field. (Hedges’s g is a version of Cohen’s d — it’s more appropriate for meta-analyses.)
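
If you’re wondering how the two relate: Hedges’s g is Cohen’s d multiplied by a small-sample correction factor. Here’s a sketch of the standard textbook conversion — not anything specific to Mar’s paper:

g ≈ d × [ 1 − 3 / (4(n₁ + n₂) − 9) ]

With data sets as large as the ones in this meta-analysis, that bracketed factor sits very close to 1 — which is why g and d can be read almost interchangeably here.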

One of my favorite examples of this strategy wasn’t (I believe) included in this study. McNamara and Scott asked students to remember a list of words. They coached one group to turn those words into a story.

Given the words “foot, cow, shirt, hut,” students created sentences like “my foot was stepped on by a cow that wore a shirt and lived in a hut.”

McNamara and Scott coached the other students to remember their words by “thinking out loud.”

Sure enough, the group that composed stories remembered a lot more words.

Getting the Story Just Right

Although Mar’s meta-analysis did not find boundary conditions, I do want to add a note of caution. This advice — like so much of cognitive science — can be easily misconstrued.

The idea makes sense, but its classroom application can be tricky.

Imagine that, as a science teacher, I want my students to understand Boyle’s Law. I’m tempted to tell my students that P₁V₁ = P₂V₂, to define the various terms, and to run some sample problems.
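
For the curious, here’s a sample problem of the sort I mean (the numbers are invented for illustration): a gas starts at P₁ = 2.0 atm and V₁ = 3.0 L, and we compress it to V₂ = 1.5 L at constant temperature. Boyle’s Law gives:

P₂ = P₁V₁ / V₂ = (2.0 atm × 3.0 L) / 1.5 L = 4.0 atm

Halve the volume, and the pressure doubles.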

If I hear the advice that stories improve comprehension, I’m probably tempted to tell some interesting stories about Boyle’s life. For instance, he made a wish list of 24 inventions (a machine that could fly, drugs that could “exalt the imagination”). Way back in the 1600s! So cool!!

That story is interesting and memorable, but it doesn’t have anything to do with the pressure or volume of gas.

I shouldn’t, in other words, tell stories about the general subject; we’ve got lots of research about the dangers of “seductive details.”

I should instead tell stories tailored specifically to the meaning and importance of the topic.

Why does the relationship between volume and pressure matter?

Who else was trying to find out?

What changed as a result of Boyle’s discovery?

If I can fashion those questions into a story, now I’ve got all the characteristics of a memorable narrative: cause and effect, personal conflict, familiarity, and emotion.

And, all those benefits attach to the content I want my students to learn.

In Brief

Do narratives help students understand and remember?

Yes, the right stories do.

Conflicting Advice: What to Do When Cognitive Science Strategies Clash?
Andrew Watson

Teachers like research-informed guidance because it offers a measure of certainty.

“Why do you run your classes that way?”

“Because RESEARCH SAYS SO!”

Alas, we occasionally find that research encourages AND DISCOURAGES the same strategy simultaneously.

What to do when expert advice differs?

In fact, I got this question on Thursday during a Learning and the Brain Summer Institute. Here’s the setup.

“Difficult” Can be Good

Regular readers know that desirable difficulties help students learn. As explained by Bob Bjork and Elizabeth Ligon Bjork — and researched by countless scholars — some degree of cognitive challenge enhances long-term memory formation.

In brief: “easy learning doesn’t stick.”

And so: why do spacing and interleaving help students learn? Because they ramp up desirable difficulty.

Why is retrieval practice better than simple review? Because (among other reasons) review isn’t difficult enough. Retrieval practice, done correctly, adds just the right amount of challenge.

And so, if you attend Learning and the Brain conferences (like this one on “Teaching Thinking Brains”), or if you read any of the great books about long-term memory formation, you’ll hear a lot about desirable difficulty.

Memory at Work

Cognitive scientists who don’t focus on long-term memory might instead focus on a distinct mental capacity: working memory. 

Working memory allows us to gather information — facts, procedures, etc. — into a mental holding space, and then to reorganize and combine them into new patterns and ideas.

In other words: it’s absolutely vital for thinking and learning. If students are learning academic information, they are using their working memory.

Alas, all this good news comes with some bad news: we don’t have much working memory. And, our students probably have less than we do. (For evidence, try this mental exercise: try alphabetizing the workdays of the week. No problem alphabetizing 5 words? Now try alphabetizing the twelve months of the year. OUCH.)

For this reason, effective teachers pay scrupulous attention to working memory load. Every time we go beyond working memory constraints, we make learning MUCH HARDER.

In fact, I think working memory is so important that I wrote a lengthy series of blog posts on the topic. I’m kind of obsessed. (Heck: I even wrote a book on the topic, called Learning Begins.)

Trouble in CogSci Paradise

Because both topics — desirable difficulties and working memory — provide teachers with important and powerful insights, I devoted much of last week’s workshop to them. Almost every day, in fact, we talked about both.

On Thursday, one participant asked this wise and provocative question:

Wait a minute. You’ve told us that desirable difficulties help learning. And you’ve told us that working memory overload hinders learning.

But: isn’t desirable difficulty a potential working memory overload? Don’t those two pieces of advice conflict with each other? Won’t “spacing” and “interleaving” vex working memory?

Yes, reader, they certainly might.

So, what’s a research-focused teacher to do? Team Desirable Difficulty tells us to space and interleave practice. Team Working Memory tells us to beware overload. How can we make sense of this conflicting advice?

This (entirely reasonable) question has two answers: one specific, one general.

A Specific Answer

When we consider the tension between “working memory” and “desirable difficulty,” we can focus for a moment on the adjective “desirable.”

In almost every case, working memory overload is UNdesirable.

So, if our teaching strategy — spacing, interleaving, retrieval practice, metacognition — results in overload, we shouldn’t do it: it’s not desirably difficult. We should, instead, back off on the difficulty until students can manage that cognitive load.

How do we get that balance just right?

We use our teacherly experience and insight. If I create a homework assignment with lots of interleaved practice AND ALL MY STUDENTS DO TERRIBLY, then interleaving wasn’t desirably difficult. (Or, perhaps, I taught the concepts ineffectively.)

In this case, I know the next night’s assignment should be working-memory-friendlier.

No research can tell us exactly what the best balance will be. Our expertise as teachers will guide us.

The General Answer

Researchers and teachers have different goals, and follow different practices. In brief: researchers isolate variables; teachers combine variables.

We think about stress and about working memory and about alertness and about technology and about spacing and …

That list goes on almost infinitely.

For that reason, I chant my mantra: when adopting cognitive science approaches to teaching, “don’t just do this thing; instead, think this way.”

That is: don’t just DO “spacing and interleaving” because research tells us they’re good ideas. Instead, we have to THINK about the ideas that guide spacing and interleaving, and be sure they make sense at this particular moment.

Should we have students meditate at the beginning of each class? It depends on our students, our school, our schedule, our culture, our curriculum, our goals, and … too many other variables to list here.

Should we ban laptops from classrooms? Ditto.

Should high schools start later? Ditto.

Should 3rd graders learn by doing projects? Ditto.

Should students read on exercycles? Ditto.

One isolated piece of research advice can’t effectively guide teaching and school-keeping decisions. We have to combine the variables, and think about them in our specific context.

Simply put: we can’t just “do what the research says.” It’s not possible; different research pools almost certainly conflict.

Instead, we’re doing something more challenging, more interesting, and more fun.

Let the adventure begin!

Does Online Learning Work? Framing the Debate to Come…
Andrew Watson

I first published this blog post back in January. I’ve been seeing more and more discussion of this question on social media, so I thought it might be helpful to offer this perspective once again.


With news that several very effective vaccines will be increasingly available over the upcoming months, we teachers can now start thinking about “a return to normal” — that is, in-person teaching as we (mostly) worked before February of 2020.

One question will inevitably be debated: did online learning work?

I suspect that the “debate” will go something like this. One voice will stake out an emphatic opinion: ONLINE CLASSES WERE AN UNEXPECTED TRIUMPH! Some data will be offered up, perhaps accompanied by a few stories.

An equally emphatic voice will respond: ONLINE CLASSES FAILED STUDENTS, TEACHERS, AND PARENTS! More data. More stories.

This heated exchange will reverberate, perhaps improved by all of Twitter’s nuance and common sense.

A Better Way?

Rather than launch and participate in a BATTLE OF EXTREMES, I hope we can look for a more level-headed approach. As is so often the case when research meets teaching, a key question should be boundary conditions.

Whenever we look for a research finding (e.g.: drawing helps students learn!), we should ask: under what precise conditions is this true?

Does drawing help older students and younger ones? In math and in phonics? Autistic students, dyslexic students, aphantasic students, and neurotypical students?

We’re always looking for boundaries, because every research finding has boundaries. As Dylan Wiliam (who will be speaking at our February Conference) famously says: “When it comes to educational interventions, everything works somewhere. Nothing works everywhere.”

If we ask about boundary conditions for the strengths and weaknesses of online learning, we can have a much more productive discussion.

Places to Start

Age: I suspect we’ll find that — on average — older students did better with online classes than younger ones. My friends who teach college/high school don’t love online teaching, but they don’t seem quite as overwhelmed/defeated as those who teach younger grades.

Additional Technology: Is it better to have a simple Zoom-like platform with occasional breakout sessions? Does it help to use additional, elaborate programs to supplement online learning?

Discipline: Perhaps online teaching worked better with one kind of class (science?) than another (physical education?).

Personality: Although most high school students I know emphatically prefer in-person classes, I do know two who greatly prefer the online version. Both really struggle negotiating adolescent social networks; they’ve been frankly grateful to escape from those pressures and frustrations.

Teachers’ personalities could matter as well. Some of us comfortably roll with the punches. Some of us feel set in our ways.

Administration: Did some school leaders find more effective ways to manage transitions and support teachers and students? The question “does online learning work” might get different answers depending on the managerial skill supervising the whole process. (In my work, I find teachers appreciated decisiveness and clear communication above all else. Even when they didn’t like the decision itself, they liked knowing that a decision had been made.)

SES: No doubt the socio-economic status (SES) of school districts made a big difference. It’s hard to run online classes in schools and communities that don’t have money for technology, or infrastructure to support its use.

Pedagogy: Do some styles of teaching work better online? Or — a slightly different version of this question — do teachers and schools with experience “flipping the classroom” have greater success with an online model?

Teacher Experience: Perhaps well-seasoned teachers had more experience to draw on as they weathered the muddle? Or, perhaps younger teachers — comfortable with tech, not yet set in their ways — could handle all the transitions more freely?

Country/Culture: Do some countries or cultures manage this kind of unexpected social transition more effectively than others?

Two Final Points

First: We should, I think, expect complex and layered answers to our perfectly appropriate question.

In other words: online learning (a la Covid) probably worked well for these students studying this topic in this country using this technology. It was probably so-so for other students in other circumstances. No doubt it was quite terrible for still other students and disciplines and pedagogies.

Second: I myself have long been skeptical of the idea that “online learning is the future of education (and everything else)!”

And yet, I don’t think we can fairly judge the validity of that claim based on this last year’s experience.

After all: most teachers and schools and students didn’t get well-designed and deliberately-chosen online education. They got what-can-we-throw-together-with-grit-and-hope online education.

Of course that didn’t work as well as our old ways (for most students). Nothing worked well: restaurants struggled to adjust. The travel industry struggled. Retail struggled.

Yes: I think that — for almost everybody learning almost everything — in-person learning is likely to be more effective. But I myself won’t judge the whole question based on this year’s schooling.

We all benefit from forgiveness for our lapses and muddles during Covid times.

Let’s learn what we reasonably can about online education, and use that experience to improve in-person and remote learning in the future.