
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

New Research: Unrestricted Movement Promotes (Some Kinds of) Creativity
Andrew Watson

Teachers like creativity.

We want our students to learn what has come before, certainly. And, we want them to do and think and imagine new things with that prior knowledge.

We want them, in ways big and small, to create. How can we best foster such creativity?

Over the years, I’ve often heard that walking outside promotes creativity.

Because I work at a summer camp, I’m in favor of almost anything that promotes being outside. Alas, it turns out, the research pool behind that claim didn’t hold up very well.

Since that time, lots of folks have focused on the walking part of “walking outside.” Several studies do suggest that simply walking around ramps up creative output. (For instance, here.)

Can we be more specific than “walking around”? Do some kinds of walking boost creativity more than others?

Defining Creativity

Ironically, the study of creativity begins with mundane, even tedious, tasks: defining and measuring it.

Researchers often focus on two kinds of creativity.

First, my students might come up with something new and useful.

Researchers measure this flavor of creativity (“divergent”) in a fun way:

Think about, say, a brick. Now, list all the things you might do with a brick.

The answer “build a wall” doesn’t score very high, because almost everyone says “build a wall.”

The answer “raise the water level in my pool by throwing lots of bricks in” does score high, because — well — because nobody ever says that. This answer is new and (assuming you care about the water level in your pool) useful.
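
To make that scoring idea concrete, here is a minimal sketch (in Python) of rarity-based scoring, a common approach to alternative-uses tasks. The function and the sample answers are my own illustration, not the rubric from Murali and Händel’s study or any particular lab.

```python
from collections import Counter

def originality_scores(all_answers):
    """Score each distinct answer by its rarity across all respondents.

    Answers nearly everyone gives score near 0; unique answers score near 1.
    """
    counts = Counter(all_answers)
    total = len(all_answers)
    return {answer: 1 - (count / total) for answer, count in counts.items()}

# Hypothetical pool of responses to "what can you do with a brick?"
responses = (["build a wall"] * 8
             + ["use as a doorstop"] * 3
             + ["raise the pool's water level"])

print(originality_scores(responses))
# "build a wall" scores ~0.33; the pool answer scores ~0.92.
```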

Second, my students might see hidden connections.

Researchers measure this flavor of creativity (“convergent”) in another fun way:

Think about these three words: cottage, swiss, and cake.

Can you think of a word that pairs with each of those to make a meaningful phrase? (Answer: “cheese.” As in, cottage cheese, etc.)

Researchers in Germany wanted to know what kind of walking might increase DIVERGENT creativity.

Here’s what they found…

It’s All About the Freedom

Researchers Supriya Murali and Barbara Händel asked participants to walk or to sit.

And, they asked them to do so in restricted or unrestricted ways.

Unrestricted walkers, for instance, could walk around a large room however they pleased. Restricted walkers had to walk back and forth down the middle of the room. (Notice: all this walking was inside.)

Unrestricted sitters sat in a solid chair (no wheels, no reclining features) with a view of the full room. Restricted sitters sat in the same chair, but with a computer screen in front of them. The “fixation cross” on the screen implied (if I understand this correctly) that the participants should remain focused on the screen.

What happened afterwards, when they took a test on divergent thinking?

Headlines:

Walkers scored higher on tests of divergent creativity than sitters.

Participants without restrictions (both walking and sitting) scored higher than their restricted peers.

For some intriguing reason, unrestricted physical movement loosens restrictions on subsequent mental activity.

Classroom Implications

As I think about this research, it implies some happy, practical suggestions.

If we want our students to launch an explicitly creative assignment — start composing a poem, imagine an approach to studying a historical question, plan an environmentally-friendly city — we can give them an extra boost of physical freedom.

Walking outside might be good.

But if they can’t walk outside (that’s just not possible in many schools), then walking inside could be good.

Heck, if 25 students walking around in the classroom sounds like too much chaos, maybe they can choose a new place to sit for a while.

In other words: this research suggests that the actual movement (walking/sitting) matters, and that the relative degree of restriction also matters.

Even if students sit in solid chairs, their freedom to choose seats or move seats or sit cross-legged (or whatever) might jostle some creative energy in useful ways.

TL;DR

As long as we don’t make our claims too strong or grand, this research allows a sensible claim: “by reducing physical limitations for a while, we might help students expand their mental activity and creativity.” *


* I should note that the sample sizes in these three studies are quite small: 20, 17, and 23. Were these studies repeated with larger sample sizes (and/or in more classroom-like conditions), I’d be more confident and emphatic in drawing these conclusions.


Kuo, C. Y., & Yeh, Y. Y. (2016). Sensorimotor-conceptual integration in free walking enhances divergent thinking for young and older adults. Frontiers in Psychology, 7, 1580.

Murali, S., & Händel, B. (2022). Motor restrictions impair divergent thinking during walking and during sitting. Psychological Research, 1-14.

The First Three Steps
Andrew Watson

Early in January, The Times (of London) quoted author Kate Silverton (on Twitter: @KateSilverton) saying:

It’s the schools that have the strictest discipline that have the highest mental health problems.

Helpfully, they include a video recording of her saying it.

In context, Silverton is saying — in effect — that schools’ strict disciplinary policies damage students’ mental health.

If she’s right, school leaders should know that!

If we run schools with strict disciplinary policies, we should at least consider changing them. Obviously, we don’t want to cause mental health problems.

But … is she right?

This specific question leads to a broader question:

When someone says “research says you should change the way you run your school!,” what should we do next?

Accept the advice? Reject it? Flip a coin?

Let me suggest three simple steps.

Step 1: Ask for Sources

This advice seems too obvious to say out loud.

OF COURSE someone reading this blog would ask for sources.

However, in my experience, we’re very hesitant to do so. It seems — I don’t know — rude, or pushy, or presumptuous.

Especially when the research comes from psychology or neuroscience, we just don’t want to seem stubborn.

But, trust me, it’s always appropriate to ask for the research.

In this case, happily, lots (and lots) of people did ask Silverton for research.

This small niche of edutwitter lit up with people asking — quite simply — “what research suggests that strict school discipline damages mental health?” (To be clear, it also lit up with people praising Silverton for speaking out.)

Even more happily, she responded by citing 11 research studies.

Her transparency allows us to ask a second question:

Step 2: Does the Research, in fact, Support the Claim?

Here again, the question seems too obvious to raise. Who would cite research that doesn’t support the claim they make?

I’m here to tell you: it happens all the time. (I wrote about a recent example here.)

In this case, teacher/researcher/blogger Greg Ashman looked at those sources. (You can read the article he wrote here, although you might have to subscribe to his substack to do so.)

So, does the research support the claim?

Amazingly, most of the cited studies don’t focus on students’ mental health.

That’s right. To support the claim that “strict discipline harms mental health,” Silverton cites very little research about mental health. (Ashman has the details.)

Yes, we might make some guesses based on these studies. But, guesses aren’t research.

As Ashman writes:

it’s easy to accept that suspension and [expulsion] are associated with higher rates of depression without assuming suspension and [expulsion] are the cause.

So, DOES strict school discipline cause mental health problems? I don’t (yet) know of direct research on the subject.

This specific example about school discipline, I hope, emphasizes the broader point:

Simply by a) asking for research and b) giving it a quick skim, we can make better decisions about accepting or rejecting “research-based” teaching advice.

Step 3: Actively Seek Out Contradictory Information

Because humans are so complicated, psychology and neuroscience research ALWAYS produces a range of findings.

Even with something as well-supported as retrieval practice, we can find a few studies suggesting limitations — even (very rare) negative effects.

I thought of this truth when I saw a New York Times headline: Cash Aid to Poor Mothers Increases Brain Activity in Babies, Study Finds.

This blog is about brain research, not politics. At the same time, this brain research might be cited to support a policy proposal.

So: what should we do when we see brain research used this way?

Step 1: “Ask for sources.” Good news! The sources are quoted in the article.

Step 2: “Does the research, in fact, support the claim?”

Sure enough, the researchers conclude

“we provide evidence that giving monthly unconditional cash transfers to mothers experiencing poverty in the first year of their children’s lives may change infant brain activity.”

Step 3: “Actively seek out contradictory information.”

Because this claim made the front page of the Times, I kept an eye out for responses, both pro and con.

Just a few days later, I found this tweet thread. In it, Dr. Stuart Ritchie points out some real concerns with the study.

For instance: the authors “pre-registered” their study. That is, they said “we’re going to measure variables X, Y, and Z to see if we find significant results.”

As it turns out, they found (small-ish) significant results in P, Q, and R, but not X, Y, and Z.

As Ritchie notes, P, Q, and R are certainly interesting. But:

This is a clear example of hype; taking results that were mainly null and making them into a huge, policy-relevant story. [The research] is a lot more uncertain than this [Times article implies].

To be very clear: I’m not arguing for or against a policy proposal.

I am arguing that when someone says “brain science shows!,” we should ask questions before making big changes.

TL;DR

When people cite brain research to encourage you to teach differently…

… ask for sources,

… confirm they support the recommendation,

… seek out contradictory points of view.

Our students benefit when we follow those three simple steps.

A “Noisy” Problem: What If Research Contradicts Students’ Beliefs?
Andrew Watson

The invaluable Peps Mccrea recently wrote about a vexing problem in education: the “noisy relationship between teaching and learning.”

In other words: I can’t really discern EXACTLY what parts of my teaching helped my students learn.

Was it my content knowledge?

The quality of my rapport with them?

The retrieval practice I require?

The fact that they slept and ate well in the days before class?

Some combination of all these variables?

Because I don’t know EXACTLY which teaching variable helped (or hurt) learning, I struggle to focus on the good stuff and eliminate the bad stuff.

I thought about Mccrea’s wisdom when I read a recent study about interleaving.

Here’s the story…

Interleaving 101

Frequent blog readers know all about interleaving, a way of organizing students’ practice.

Let’s say I teach my students about parts of speech.

Once they have a basic understanding of each one, I could have them practice each part of speech on its own.

That is: they identify nouns on Monday, adverbs on Tuesday, prepositions on Wednesday, and so forth.

Researchers call that structure “blocking” — as in “blocks of homework focusing on individual topics.”

Or, I could have my students jumble several topics together every night.

That is: Monday night they practice nouns, adverbs, and prepositions. Tuesday they practice verbs, prepositions, and conjunctions. Wednesday: nouns, verbs, and adjectives.

The total number of practice problems would remain the same, but they’d practice several parts of speech all together.

Researchers call this system “interleaving” — as in “weaving together several different topics.”

Measuring Success

Of course, teachers want to know: does interleaving work? Do students who interleave their practice learn more than students who block?

Let’s imagine two ways of answering that question.

Strategy #1: ask the students.

Obviously.

Who knows more about the students’ learning than the students themselves?

Strategy #2: measure their learning.

Obviously.

If students who block consistently remember more than students who interleave (or vice versa), then we have a winner.

So, what’s the answer?

Answers, and Vexing Questions

According to Samani and Pan’s 2021 study, strategy #1 yields a clear answer: students say that interleaving is harder and results in LESS learning.

Of course, that means they think that blocking is easier and results in MORE learning.

Alas, strategy #2 arrives at a contradictory result.

When we measure students’ actual learning, they remember more after interleaving than blocking.

Samani and Pan’s study gets this result. And, LOTS AND LOTS of research reaches the same result. (See Agarwal and Bain’s book for a great review of the research.)

In other words, this study points to an especially “noisy” part of the relationship between teaching and learning.

Students genuinely think and believe that interleaving interferes with learning.

However, interleaving in fact promotes learning.

How do we handle this quandary?

Tentative Solutions

In my high-school classroom, we do A LOT of retrieval practice.

Almost every day, I fire off questions and ask students to attempt an answer.

Sometimes I call on raised hands; or cold call; or have students write answers in their notebooks (I circle the room to check their responses). They might write on the board; they might consult in pairs.

I’m entirely comfortable using retrieval practice — and so are my students — because on the second day of class I showed them research about retrieval practice.

I told them:

This might feel hard at first.

But, trust me. It feels hard because your brain is working harder. And that means you’re learning more.

It’s like going to the gym. You don’t gain muscle by picking up paper clips. You gain muscle by picking up heavy things. Hard work leads to better fitness.

The same rule applies here. Retrieval practice is harder, so you’ll learn more.

Since that day, I stop every now and then at the end of an RP session and say: “Do you feel how much you’ve learned? Do you see how much retrieval practice is helping?”

In fact (I swear I am not making this up), one of my sophomores once said: “Thank you, Mr. Watson, for making us do retrieval practice every day.”

I tell this story because it applies to interleaving as well.

I’ve been interleaving all year, but I haven’t (yet) explained it to my students. I plan to do so this upcoming week (or next).

My hope is: they’ll see why we’ve been bouncing back and forth from topic to topic in ways that might seem random or disorganized.

We’ve been interleaving all along.

I offer this solution as “tentative” because my context might not match yours.

For instance, if you teach younger or older students, they might not respond as mine do.

If you teach students with diagnosed learning differences, interleaving might not benefit them as much.

And so forth.

As always: consider the research findings, consider my experience, and then use your own best judgment to fit them into your classroom practice.

TL;DR

If students’ beliefs contradict research, I myself tell them about the research — graphs and all. And then I ask them to trust me.

Retrieval practice and interleaving really do work. My students know about this research pool. So far, they’re on board.

If you try this strategy, or another one, I hope you’ll let me know about your own experience.


Samani, J., & Pan, S. C. (2021). Interleaved practice enhances memory and problem-solving ability in undergraduate physics. NPJ Science of Learning, 6(1), 1-11.

Teaching with Images: Worth the Effort?
Andrew Watson

According to Richard Mayer’s “multimedia principle,”

People learn better from words and pictures than from words alone.

If that’s true, then we should — obviously — be sure to include pictures in our teaching.

However…

Whenever we see a broad principle like that, we should always look for specific limitations.

That is…

… does this principle apply to kindergarteners as well as 5th graders and adult learners?

… does it apply for students with an ADHD diagnosis?

… is it true when teaching Civil War history, theorems about similar triangles, and bunting strategies?

And so forth.

Researchers call such limits “boundary conditions,” and we should ALWAYS look for boundary conditions.

So, let’s look at that broad principle ( “pictures + words” > “words”) and ask this boundary question:

Does the content of the picture matter?

Possibilities and Perils

Happily, one of the people asking that question is…Richard Mayer himself.

In his career, he’s come up with a whole suite of useful principles. And, he spends lots of time looking for boundary conditions.

Specifically, in a usefully straightforward study, he and Eunmo Sung examine several different kinds of images:

Instructive images: “directly relevant to the instructional goal.”

I’m teaching Macbeth right now, and focusing on the play’s tension between order and chaos. So, I might show students a picture of Scotland’s craggy wildernesses (chaos) and one of a highly structured royal ceremony (order).

Seductive images: “highly interesting but not directly relevant to the instructional goal.”

A movie version of Macbeth — starring Denzel Washington and Frances McDormand — just came out. I could show my students a picture of these two movie stars on the Red Carpet at an Oscar ceremony.

Decorative images: “neutral but not directly relevant to the instructional goal.”

Macbeth can be a grim play: so much beheading, so much unseaming. So: I could include pictures of waterfalls and sunrises on my handouts to raise my students’ spirits a bit.

Once we start exploring these potential boundary conditions — perhaps not all images benefit learning equally — we might get even more useful guidance about combining words and images.

Predictions and Results

Sung and Mayer measured the effects of such images on students’ learning AND on their enjoyment of a lesson.

Take a moment to make some predictions on your own.

Which, if any, of those graphics will help students learn more?

Which, if any, will help students enjoy the lesson more?

[I’ll pause while you think about those questions.]

Perhaps you, like Sung and Mayer, predicted that ALL the images would increase students’ enjoyment.

And perhaps you predicted that the INSTRUCTIVE images would help students learn, but not the others.

Sure enough, you and they were right. Students LIKE images, but LEARN FROM images that focus their attention on the learning goal. (If you’re interested in the specific numbers, look at the 6th page of the study.)

We should, I think, focus on this key finding: students do not always learn more when they enjoy a lesson more.

We shouldn’t deliberately make our lessons dull.

But: we shouldn’t assume that an enjoyable lesson necessarily results in more learning. In this case, those photos of Macbeth movie stars piqued my students’ curiosity and interest, but didn’t help them learn anything about the play.

Three Final Points

First: the benefits of dual coding have gotten lots of attention in recent years.

To get those benefits, we should remember these boundary conditions. Dual coding helps if — and only if — the images highlight the learning goal.

Second: a recent meta-analysis about “seductive details” nicely complements this study.

Third: Like many teachers, I see the good and the vile in Twitter.

Yes (YES!!), it can be a sink of repulsive yuckiness.

And (surprise!!), it can also be supportive and helpful.

I bring up this point because: a wise soul on Twitter mentioned this Sung & Mayer study recently, and reminded me of its importance.

I can’t remember who brought it up (I would credit that tweep if I did), but I’m grateful for the nudge.

Such useful research! Such helpful guidance!


Sung, E., & Mayer, R. E. (2012). When graphics improve liking but not learning from online lessons. Computers in Human Behavior, 28(5), 1618-1625.

Let’s Get Practical: How Fast Should Videos Be?
Andrew Watson

Research often operates at a highly abstract level.

Psychologists and neuroscientists study cognitive “tasks” that stand in for school work. If we’re being honest, however, we often struggle to see the connection between the research task and actual classroom learning.

HOWEVER…

Every now and then, a study comes along that asks a very practical question, and offers some very practical answers.

Even better: it explores the limits of its own answers.

I’ve recently found a study looking at this (incredibly practical) question:

Because students can easily play videos at different speeds, we need to know: which video speed benefits learning the most?

So: what advice should we give our students about learning from videos?

Exploring The Question

Let’s start with a specific example:

If a student watches a video at double speed, she (obviously) spends only half as much time mentally interacting with its information.

Does that reduction in time lead to an equal reduction in learning? Will she learn half as much as if she had watched it at regular speed?

Dr. Dillon Murphy starts with that question, and then quickly gets interested in crucial related questions:

What about other video speeds? That is: what about watching the video at 1.5x speed? What about 3x speed?

Does the topic of the video matter?

And, here’s a biggie: what should students do with the time they save?

Even before we look at the results of this study, I think we can admire its design.

Murphy’s team ran multiple versions of this study looking at all these different variables (and several others).

They did not, in other words, test one hypothesis and then — based on that one test — tell teachers what to do. (“Best practices require…”)

Instead, they invited us into a complex set of questions and possibilities.

Maybe 1.5x is the most efficient speed for learning.

Maybe 3x is the best speed if students use the time they saved to rewatch the video.

Maybe regular speed is best after all.

Because Murphy’s team explores so many possibilities with such open-minded curiosity, we have a MUCH better chance of figuring out which results apply to us. *

The Envelope Please

Rather than walk you through each of the studies, I’ll start with the study’s overall conclusions.

First: watching videos at higher speeds does reduce learning, but not as much as you might think.

That is: spending half as much time with the video (because a student watched it at double speed) does NOT result in half as much learning.

To be specific: students watched ~ 14 minute videos (about real-estate appraisals, or about Roman history).

A week later, those who watched them at regular speed scored a 59% on a quiz. Those who watched at 2x speed scored a 53%.

59% is higher than 53%, but it’s not twice as high. **

Second: students can use that “saved” time productively.

What should a student do with the 7 minutes she saved? She’s got two helpful choices.

Choice 1: rewatch the video right away.

Students who used their “saved” time to rewatch the video right away recaptured those “lost” points. That is: they had the same score as students who watched the video once at regular speed.

Choice 2: bank the time and rewatch the video later.

In another version of the study, students who watched the 1x video once scored a 55% on a quiz one week later.

Other students watched the 2x video once, and then once again a week later. They scored a 63% on that quiz. (For stats types, the d value is 0.55 — a number that gets my attention.)

In other words: rewatching at double speed a week later leads to MORE LEARNING in THE SAME AMOUNT OF TIME (14 minutes).
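
If you like seeing the arithmetic laid out, here is a quick back-of-the-envelope sketch in Python using the quiz percentages reported above. The “points per viewing minute” metric is my own rough illustration of efficiency, not a measure from Murphy’s study.

```python
VIDEO_MINUTES = 14  # approximate length of the lecture videos

# Quiz scores reported above. Note: the first pair and the second pair
# come from different versions of the study, so compare within pairs.
conditions = {
    "1x speed, watched once":         (59, VIDEO_MINUTES),
    "2x speed, watched once":         (53, VIDEO_MINUTES / 2),
    "1x once (delayed-quiz version)": (55, VIDEO_MINUTES),
    "2x now, 2x again a week later":  (63, VIDEO_MINUTES),
}

for name, (score, minutes) in conditions.items():
    print(f"{name}: {score}% after {minutes:.0f} min of viewing "
          f"({score / minutes:.1f} points per viewing minute)")
```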

Practical + Practical

Murphy takes great care to look at specific combinations.

His example encourages us to take care as well. For instance:

His team worked with college students. Will this result hold for 8th graders, or 2nd graders?

You can look to your teacherly experience and judgment to answer that question.

Will this effect hold for longer videos: 30 minutes, or one hour?

We don’t know yet.

These videos included a talking head and slides with words — but not closed captions. Will some other combination (no talking head? closed captions on?) lead to different results?

We don’t know yet.

In other words: Murphy’s study gives us practical guidance. We should use our judgment and experience to apply it to our specific teaching circumstances.


* I should note: This study is unusually easy to read. If the topic interests you, you might look it over yourself.

** Important note: I’ve seen news reports about this study saying that watching once at double speed results in the same amount of learning as watching once at regular speed. That claim is untrue. And: Murphy’s study does not make that claim.

Murphy, D. H., Hoover, K. M., Agadzhanyan, K., Kuehn, J. C., & Castel, A. D. (2021). Learning in double time: The effect of lecture video speed on immediate and delayed comprehension. Applied Cognitive Psychology.

The Benefits of Direct Instruction: Balancing Theory with Practice
Andrew Watson

When teachers hear that “research shows we should do X,” we have at least two broad questions:

First Question: what’s the research?

Second Question: what EXACTLY does X look like in the classroom?

People who have the expertise to answer the first question (researchers) might not have the K-12 classroom experience to answer the second question.

And, of course, people who can make it work in the classroom (teachers) might not know or understand the research.

Wouldn’t it be great if we could find one book that answers both sets of questions?

In fact, it would be especially great if that book focused on a controversial topic. In that case, we could see a complete argument – both the why and the how – before we make a judgment about the controversy.

Does that sound tempting? I have good news…

Embracing Controversy

A feisty battle has raged in edu-circles for many years now: “direct instruction” vs. “constructivist pedagogy.” *

In one corner, “constructivists” argue that problems or projects or independent inquiries help students discover and build enduring understanding. And, such exploration fosters authentic motivation as well.

In the other corner, “direct instruction” advocates argue that working memory limitations sharply constrain students’ cognitive workspace. For that reason, teachers must explicitly shape learning experiences with small steps and carefully-designed practice.

Both approaches can be – and frequently are – parodied, misunderstood, and badly practiced. So, a book explaining the WHY (research) and the HOW (classroom practice) would be greatly helpful.

Sage on the Page

Adam Boxer teaches chemistry at a school in London, and has been blogging about his work for some time now. (If you follow our twitter account, @LearningandtheB, you’ve seen links to his work before.)

In his book Explicit & Direct Instruction: An Evidence-Informed Guide for Teachers, Boxer gathers eleven essays that explain the research background and then get SUPER specific with classroom suggestions.

In the first chapter, Kris Boulton tells the history of “Project Follow Through,” a multi-decade program to discover the best way of teaching children.

Researchers tracked more than 200,000 children in 13 different programs over several years, and compared their learning across three dimensions: basic skills, cognitive skills, and affective skills.

Which approach proved most effective?

Direct Instruction, created by Siegfried Engelmann.** It was, in fact, the only program of the 13 that benefitted students in all three dimensions.

When advocates of Direct Instruction (and direct instruction) insist that research shows its effectiveness, they reasonably enough point to Project Follow Through. (Can others critique this study? Of course…)

Both Boulton and Greg Ashman (in the second chapter) then emphasize the alignment of direct instruction with psychology models: cognitive load theory, schema theory, and so forth.

In brief: we’ve got LOTS of research explaining why direct instruction should work, and showing that it does work.

Let’s Get Practical

After Boulton and Ashman explain the why, the next several chapters deliver on the classroom how.

For me, the book’s great success lies in the number, variety, and specificity of these chapters.

What does direct instruction look like for teaching math?

How about science?

How about writing?

What’s the best number of examples to use?

And so forth.

I especially enjoyed Sarah Cullen’s chapter on fading. Cullen begins with an important question/critique:

How, then, can a teaching method that so depends on instruction – on teachers leading learning and controlling the content to which pupils are exposed – foster autonomy?

Her answer focuses on having scaffolds and removing scaffolds – aka, “fading.”

In particular, Cullen wisely conceptualizes fading over many different time spans: fading across grades (which requires planning across years), fading within a term’s curriculum (requiring planning across months), and fading within a lesson (requiring skill, insight, and practice).

Like the book’s other chapters, Cullen’s offers many specific examples for each of her categories. In other words, she grounds theoretical understanding in highly specific classroom realities.

In Brief

If you already think direct instruction sounds right, you’ll be glad to have a how-to guide.

If you think it sounds suspect (or even oppressive), you’ll be glad to read a straightforward explanation of the research behind the approach. (You might not be persuaded, but you’ll understand both sides of the argument more clearly.)

And, if you want realistic classroom examples explained with loving detail, this book will launch 2022 just right.


* I’ve put those labels in quotation marks because both are familiar, but neither one really works.

** Direct Instruction (with capital letters) is the name of Engelmann’s specific program. On the other hand, direct instruction (without capital letters) is a broader approach to thinking about teaching and learning.

The Best Kind of Practice for Students Depends on the Learning Goal
Andrew Watson

In some ways, teaching ought to be straightforward. Teachers introduce new material (by some method or another), and we have our students practice (by some method or another).

Result: THEY (should) LEARN.

Alas, both classroom experience and psychology/neuroscience research suggest that the process is MUCH more complicated.

For instance:

When we “introduce new material,” should we use direct instruction or more of an inquiry/problem-based pedagogy? *

When we “have our students practice,” what’s the very BEST kind of practice?

Around here, we typically offer two answers to that 2nd question: retrieval practice and interleaving.

Retrieval practice has gotten lots of love on this blog — for instance, here. I have written less about interleaving, mostly because we have less research on the topic.

But I’ve found some ripping good — and very practical — research to share here at the end of 2021.

“What?,” “Why?,” and Other Important Questions

Let’s start with definitions.

Let’s say I teach a particular topic today: “adjectives.” And tomorrow I teach “adverbs.” Next day, “prepositions.” Next: “coordinating conjunctions.”

How should I structure students’ homework?

They could do 20 adjective practice problems tonight. Then 20 adverb problems the next night. Then 20 prepositions. And so forth.

Let’s call that homework schedule blocking.

Or, they could do 5 adjective problems a night for the next 4 nights. And 5 adverb problems a night starting tomorrow night. And so forth.

If I go with this system, students will practice multiple different topics (adjectives, adverbs, prepositions…) at the same time. So, let’s call that homework schedule interleaving.
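
To make the contrast concrete, here is a minimal Python sketch of the two homework schedules. The topics and problem counts come from the example above; the code itself is just my illustration.

```python
from itertools import cycle, islice

topics = ["adjectives", "adverbs", "prepositions", "conjunctions"]
PER_NIGHT = 20
NIGHTS = len(topics)

# Blocking: each night is devoted entirely to one topic (20 problems each).
blocked = [[topic] * PER_NIGHT for topic in topics]

# Interleaving: every night mixes all four topics (5 problems each),
# so the total practice per topic stays identical to blocking.
interleaved = [
    list(islice(cycle(topics), night * PER_NIGHT, (night + 1) * PER_NIGHT))
    for night in range(NIGHTS)
]

# Same total practice either way; only the sequencing differs.
assert sorted(sum(blocked, [])) == sorted(sum(interleaved, []))
```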

For the most part, when we compare these two approaches, we find that interleaving results in more learning than blocking. (Lots of info here. Also in this book.)

That’s an interesting conclusion, but why is it true?

In the first place, probably, interleaving is a desirable difficulty. Students must THINK HARDER when they interleave practice, so they learn more.

In the second place, well, we don’t exactly know. Our confusion, in fact, stems in part from an arresting truth: interleaving usually helps students learn, but not always.

Of course, NOTHING ALWAYS WORKS, so we’re not fully surprised. But if the exceptions helped explain the rule, that could be mightily helpful…

An Intriguing Possibility…

Two scholars — Paulo F. Carvalho and Robert Goldstone — have been studying a potential explanation.

Perhaps blocking and interleaving enhance different kinds of memories. And so, research produces contradictory results because researchers use different kinds of memory tests.

Specifically, they propose that:

During blocked study, attention and encoding are progressively directed toward the similarities among successive items belonging to the same category,

whereas during interleaved study attention and encoding are progressively directed toward the differences between successive items belonging to different categories.

In other words: blocking focuses students on the properties of a particular category (“adjectives”). Interleaving focuses students on the distinctions among different categories (“adjectives, adverbs, prepositions”).

And so: if I want students to DEFINE ONE topic or idea or category (“adjectives”), blocking will help them do that well.

If I want students to COMPARE/CONTRAST MANY topics or ideas or categories, interleaving will help them do that well.

To repeat the title of this blog post: “the best kind of practice for students depends on the learning goal.”

In their most recent study, Carvalho and Goldstone test this possibility.

Sure enough, they find that students who block practice do better at defining terms, whereas those who interleave practice do better at multiple-choice questions.

The study gets splendidly intricate — they work hard to disprove their own hypothesis. But once they can’t do so, they admit they just might be right.

Caveats and Classroom Implications

Caveat #1: “one study is just one study, folks.” (Dan Willingham.)

Although, to be fair, Carvalho and Goldstone have been building a series of studies looking at this question.

Caveat #2: The researchers worked with adults (average age in the 30s) studying psychology topics.

Does their conclusion hold true for K-12 students learning K-12 topics? Maybe…

Caveat #3: Practically speaking, this research might focus on a distinction that evaporates over time.

In truth, I always want my students to know specific definitions — like “tragedy” — well. And, I want them to compare those well-known definitions flexibly to other definitions — like, say, “comedy.”

As an English teacher, I — of course! — want my students to define adjective. AND I — of course!! — want them to compare that definition/concept to other related ideas (adverbs; participles; prepositional phrases acting as adjectives).

In other words, I suspect the ultimate teaching implication of this research goes like this:

We should have students BLOCK practice until they know definitions to some degree of confidence, and then have them INTERLEAVE practice to bring those definitions flexibly together.

To be clear: I’m extrapolating, based on my classroom experience and on my reading in this field.

Until my interpretation gets more research behind it, Carvalho and Goldstone’s research suggests this general plan:

START BY DECIDING ON THE GOAL.

If you mostly want your students to know individual concepts, have them block their practice.

If you mostly want them to bring several topics together, have them interleave practice.

As your goal changes, their homework changes too.

As is so often the case, this research doesn’t tell teachers what to do. It helps us think more clearly about the work we’re doing.

In my view, that’s the most helpful research of all.


* I think that’s a false choice; both approaches make sense under different circumstances. More on that in another blog post.


Carvalho, P. F., & Goldstone, R. L. (2021). The most efficient sequence of study depends on the type of test. Applied Cognitive Psychology, 35(1), 82-97.

When Does Technology Distract Students? The Benefits of Research that Contradicts My Beliefs
Andrew Watson

I spoke with several hundred students last week about research-based study strategies.

As always, students were fascinated to hear about psychology and neuroscience research: for instance, the benefits of retrieval practice.

And, as always, they did not love my alarming news about multi-tasking. My advice goes like this:

“If you want to study less and learn more, do one thing at a time.”

No insta-snap-gram-tweet-flix-chat-tok while you’re studying. One thing at a time.

Since that talk, I’ve found some interesting research about the LIMITS of that advice, so I thought I’d share it here.

Tech Problems, Tech Benefits

Our reasons to worry about technology use during class seem perfectly obvious. If I am paying attention to my tweets, I am not paying attention to my academic work.

Divided attention = less learning. Obviously.

At the same time, we can easily see ways that technology benefits learning.

If — during a lecture — students text one another to reinforce their understanding of the material (“What did the prof just say?”), they might solidify their learning.

If they look up complementary information on the interwebs, their quest might boost their comprehension. (I’ve been surprised how often my students want to do this in class, and I occasionally allow them to do so.)

So, we need a more precise question than “is technology good or bad?” We need to know — under what precise circumstances does it help? Or hurt?

Technology and Higher Order Thinking

For instance: does off-topic texting during a lecture interfere with both “lower order” and “higher order” thinking, as defined by Bloom?

And, by the way, what role does note-taking play?

A study from 2018 explores this question.

The details, of course, get complicated, but the short version goes like this. Students watched a 24-minute lecture about psychiatric diagnoses: PTSD, ADHD, OCD, and so forth. They also took notes.

Some students received and answered off-topic texts during the lecture — one about every 2 minutes.

After the lecture, students took a test.

Some of those test questions focused simply on recalling details: “How long must an individual display symptoms of PTSD in order to be diagnosed?”

The researchers designed these questions to measure knowledge and comprehension — that is, “Bloom’s level 1 & 2.”

Four questions, however, focused on deeper understanding: “Compare PTSD and ADHD. How do these disorders differ? Are there ways in which they are similar?”

That is: these questions aimed to measure application and analysis: Bloom’s level 3 & 4.

So: what effect did the OFF-TOPIC TEXTS have on the students’ NOTES, and on their UNDERSTANDING?

The Envelope Please

The researchers’ results surprised them — and certainly surprised me.

Unsurprisingly, students distracted by texts took less complete notes.

And, also unsurprisingly, they did substantially less well on the factual questions. Texters averaged a 62 on those questions, while non-texters averaged a 71.  (If you speak stats, the Cohen’s d was 0.64. That’s an impressively large difference, at least to me.)
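
A quick aside for readers who don’t speak stats: Cohen’s d expresses the gap between two group averages in standard-deviation units, so d = 0.64 means the texters and non-texters differed by roughly two-thirds of a standard deviation. (By Cohen’s common benchmarks, 0.5 counts as a “medium” effect and 0.8 as “large.”) For reference, here is the textbook formula, which is a standard definition rather than anything reported in the study itself:

$$d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$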

Here’s the surprise: researchers did NOT find a statistically significant difference between students’ scores on the application and analysis questions.

How do we explain this finding?

First: let’s admit the possibility that texting distractions do interfere with factual recall but not analysis.

Honestly, I would not have anticipated that finding, but it could be true.

Second: perhaps the timing matters. That is: these students took the test immediately after the lecture. Perhaps application and analysis — unlike mere factual recall — require more time for steeping.

That is, if the “higher order thinking skills” had been tested the next day, perhaps we would have seen a difference in those scores.

Third: perhaps the number of questions mattered. Because the researchers asked only 4 application/analysis questions, they might not have had enough data to discern a difference.

Perhaps a greater number of questions would have revealed a difference.

The Big Picture(s)

Based on this research, will I tell students “it’s okay to text during lectures”?

NO I WILL NOT.

Here’s why:

First, facts matter. If off-topic texting interferes with factual learning, that finding itself means that texting during lectures is bad.

Second, taking notes properly (almost certainly) matters. If texting  interferes with good note-taking, that finding itself should dissuade students from doing so.

Third, I’m willing to believe that texting doesn’t interfere with application/analysis, but only if other studies — with more questions and later tests — consistently demonstrate that result.

Another point also jumps out at me from this research. This study contradicts my firmly held belief that multitasking vexes learning.

I genuinely believe that IT’S A GOOD THING when research contradicts my firmly held beliefs.

If research never contradicted my beliefs, then I would never learn anything from it.

In fact, I would never need to look at research because it shows me only what I already know.

Research might prove most useful to us when it contradicts our beliefs.

Who knows, maybe I’ll go back to those students and update my advice…


Waite, B. M., Lindberg, R., Ernst, B., Bowman, L. L., & Levine, L. E. (2018). Off-task multitasking, note-taking and lower- and higher-order classroom learning. Computers & Education, 120, 98-111.

Why I Still Love Learning and the Brain Conferences
Andrew Watson

I attended my first Learning and the Brain in 2008; I believe the topic was “The Science of Attention.”

Since then, I’ve attended at least two dozen: in New York, Chicago, Washington, San Francisco. Discussing Stress, and Memory, and Ethics, and Technology. And, of course, learning.

At some point, you might reasonably think, I’d get tired of the handouts and the slides and the coffee.

But, no: I still can’t get enough.

Reason #1: Old Friends

Being an interdisciplinary endeavor, the field of Mind, Brain, and Education is dramatically large…and comfortably small. When you come back, you start recognizing folks right away.

John Almarode presents regularly (and, with his bow tie, vivaciously) about applying cognitive science to the classroom. A post-presentation chat with John is one of the great learning experiences you’ll ever have.

I met Sarah Flotten — currently the interim Director of the Peter Clark Center for Mind Brain Education — through a friend several years ago. It’s now an annual event to catch up with her insights, her school, and her center.

Joanna Christodoulou (a former professor of mine) combines knowledge of neuroscience, knowledge of reading, and enthusiasm so compellingly that I’m still learning from her. I get to catch up with her every year or so at LatB.

This list could go on at length: Pooja Agarwal and Ellen Anderson, and even David Daniel (who doesn’t like it when I mention him in the blog).

In brief: if you want to find colleagues who think the way you do about teaching and learning, you’ll find them here. Even better: you’ll build relationships and coalitions that grow over the years.

Reason #2: New Friends

Once you enter the world of Mind, Brain, and Education — on this blog, on twitter, at the conferences — you’ll start meeting people from (literally) across the globe.

At this most recent conference in Boston, I FINALLY got to meet people I’ve been online chatting with for years.

Beth Hawks (twitter handle @PhysicsHawk) — a science teacher, who blogs here — offers a rare twitter presence. She is encouraging, wise, well-informed, and unwilling to be bamboozled by uplifting-but-empty slogans. I’ve been liking her posts for years, and got to meet her in Boston.

Kristin Simmers (@KristinASimmers) — currently studying the intersection of neuroscience and education — reached out to me about my first book AGES ago, and we’ve been in e-conversation ever since. Perhaps 2 years after that first e-exchange, we got to have lunch at the conference. Where else would I get to meet her in person?

Your MBE colleagues are out there — sometimes a continent away. You can meet them at the conferences.

Reason #3: SO MUCH TO LEARN

Of course, depending on your interests, this could be reason #1.

Even after 14 years, I still have so much to learn in this field. The speakers explore their research and insights — challenging me (and each other) to rethink settled ideas in light of new information.

For instance: on the very first day of the Boston conference, two speakers (politely, curiously) squared off on this important question: can we use conscious strategies to respond to stressful environments?

If the answer is “yes,” then we can guide our students (and our colleagues, and ourselves) down one path.

If the answer is “no” — because “stress turns off the pre-frontal cortex” — then we need a different path entirely.

What’s the correct answer? Honestly: check out Judson Brewer and Bessel van der Kolk to see whose analysis you find more persuasive.

https://www.youtube.com/watch?v=gv-CmqMecVY

https://www.youtube.com/watch?v=d_YApSkqsxM

The best place I know to hear these debates and have these conversations: Learning and the Brain.

Beyond FOMO

If you’re worried that you’ve missed out, I’ve got good news: the schedule for the February Conference in San Francisco has been posted!

Is “Cell Phone Addiction” Really a Thing?
Andrew Watson

I recently read a tweet asserting “the fact that cell phones are proven to be as addictive as drugs.”

Of course, people casually use the word “addictive” about all sorts of things: chocolate, massages, pumpkin-spice lattes. (No doubt somewhere Twitter is being described as “addictive.” My mother tells me that this blog is addictive.)

But all that casual language doesn’t add up to “proving the fact” that cell phones are “as addictive as drugs.” So I started wondering: has this “fact” been “proven”?

Good News, Bad News (Good News, Meh News)

Over the years I’ve adopted a simple strategy. When someone makes a factual claim about research, I ask for the research.

In this case, I simply asked the tweep for the research behind the claim.

Good news: He answered!

I’ve been amazed over the years how often people make “research-based” claims and then refuse to provide research to support them.

In this case, he did in fact point toward research on the topic. (I found one website claiming that 16% of adolescents, and 6.3% of the population, are addicted; alas, no sources cited. Happily, they do provide treatment…)

The tweep’s willingness to cite evidence enhances his credibility. Let’s check it out…

Bad news: hmm. “Evidence,” in this case, means “links to newspaper articles.” Generally speaking, USA Today and Vice.com aren’t the best places to find research. Research is published in journals. (Heck, I’ve even criticized the New York Times for its research credulity.)

So: the tweep’s credibility clicks down slightly.

Good news: All three of the links do, in fact, point to underlying research! I didn’t get a direct connection to the promised research, but I can keep digging to find it.

Credibility clicks back up.

Meh news: it turns out that all three articles point to the same underlying research. That is: I didn’t find three studies supporting the claim that “cell phones are proven to be as addictive as drugs”; I got one.

Now: one study isn’t nothing. But [checks notes] one isn’t three.

This Just In: Correlation Isn’t…

Given how much is riding on this one study, let’s check it out.

First off, we can see right there in the title that the study focuses on correlation. As you’ve no doubt heard dozens (thousands?) of times, “correlation isn’t causation.”

In this case, the authors asked 48 people questions about their cell-phone usage. Based on their answers, they categorized some of those people as “addicted.” And they then found brain differences between the “addicted” and “not addicted” people.

This quick summary leads to several concerns.

First: one study of 48 people doesn’t “prove a fact.” It might be an interesting data point, but that’s all.

Second: this study doesn’t claim to “prove a fact.” Using a questionnaire, it DEFINES some folks as addicted and others as not addicted.

Third: “brain differences” always seems like a big deal, but trust me — they might not be.

People who throw the javelin probably have a different “average muscular profile” than people who run marathons, because they’re using different muscles.

People who play the piano probably have different neural profiles than people who dance ballet, because they’re spending more time using THIS part of the brain than THAT part.

It seems likely that people who score high on this “cell-phone addiction” questionnaire behave differently than those who don’t; so it’s not dramatically surprising that their brains are different.

Did phone use cause the brain differences, or did brain differences cause the phone use? We don’t know. (Because, “correlation isn’t …”)

Important to Note

One interesting point does jump out. The brain differences found by this research team do — in some ways — align with plausible predictions about addiction.

Now, the researchers don’t make strong claims here: the word “drugs” appears only once in the body of the study.

This finding isn’t a big surprise to me. Very roughly, the  brain differences have to do with “our ability to control what we pay attention to.” It’s not hugely surprising that heavy cell-phone users have brain differences there (and that people addicted to drugs do too).

Don’t Stop Now

If the tweep’s study doesn’t support the claim that “cell phones are proven to be addictive,” does other research?

To answer that question, I did a simple google search (“cell phone addiction real”). The first scholarly article that pops up says…not so much.

Here’s their summary:

Although the majority of research in the field declares that smartphones are addictive or takes the existence of smartphone addiction as granted, we did not find sufficient support from the addiction perspective to confirm the existence of smartphone addiction at this time.

The behaviors observed in the research could be better labeled as problematic or maladaptive smartphone use and their consequences do not meet the severity levels of those caused by addiction.

In brief: “maladaptive,” yes; “addictive,” no.

As I continued clicking, I found other skeptical reviews (for instance, here), and also found some that embrace the category (with some open questions, here).

Oh, and, by the way: “cell phone addiction” isn’t included in the DSM-5.

In other words, I think we can reasonably describe the category of “cell phone addiction” as an active scholarly debate. To be clear, this conclusion means we can’t reasonably describe it as “a proven fact.”

Why I Care

I am, believe it or not, open to the idea that cell phones might be addictive. If they are — if at some point research “proves that fact” — then this label might help us treat a real problem effectively.

My objection springs from another source entirely.

I worry when debate about measurable claims sinks to applying insulting labels.

If I think that asking students to memorize is a bad idea, I could study research on the topic. Or, I could dismiss it as “drill and kill.” The insulting label replaces the argument.

If I think that teacher talk is bad, I could study research on the topic. Or, I could mock it as “sage on the stage.” The dismissive label replaces the argument.

If I think that cell-phone usage is bad for teens, I could study research on the topic. Or, I could call it “an addiction.” The alarming label replaces the argument.

If we’re going to rely on research to make decisions about teaching and education (which is, after all, the GOAL of our organization) we should never replace research with labels.

Instead, let’s try something else. Let’s replace labels with research…


Horvath, J., Mundinger, C., Schmitgen, M. M., Wolf, N. D., Sambataro, F., Hirjak, D., … & Wolf, R. C. (2020). Structural and functional correlates of smartphone addiction. Addictive Behaviors, 105, 106334.

Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252-259.

Billieux, J., Maurage, P., Lopez-Fernandez, O., Kuss, D. J., & Griffiths, M. D. (2015). Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research. Current Addiction Reports, 2(2), 156-162.

Gutiérrez, J., Rodríguez de Fonseca, F., & Rubio, G. (2016). Cell phone addiction: A review. Frontiers in Psychiatry, 7, 175.