About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Walking Promotes Creativity? A Skeptic Weighs In…
Andrew Watson

When teachers try to use psychology research in the classroom, we benefit from a balance of optimism and skepticism.

I confess, I’m often the skeptic.

When I hear that – say – “retrieval practice helps students learn,” I hope that’s true, but I want to see lots of research first.

No matter the suggestion…

… working memory training!

… dual coding!

… mindfulness!

… exercise breaks!!!

… I’m going to check the research before I get too excited. (Heck, I even wrote a book about checking the research, in case you want to do so as well.)

Here’s one surprising example.

Because I really like the outdoors (summer camp, here I come!), I’d LOVE to believe that walking outside has cognitive benefits.

When I get all skeptical and check out the research…it turns out that walking outside DOES have cognitive benefits.

As I wrote back in May, we’ve got enough good research to persuade me, at least for now, that walking outdoors helps with cognition.

Could anything be better?

Yup, Even Better

Yes, reader, I’ve got even better news.

The research mentioned above suggests that walking restores depleted levels of both working memory and attention.

“Yes,” I hear you ask, “but we’ve got other important mental functions. What about creativity? What does the research show?”

I’ve recently found research that looks at that very question.

Alas, studying creativity creates important research difficulties.

How do you define “creativity”?

How do you measure it?

This research, done by Oppezzo and Schwartz, defines it thus: “the production of appropriate novelty…which may be subsequently refined.”

That is: if I can come up with something both new and useful, I’ve been creative – even if my new/useful thing isn’t yet perfect.

Researchers have long used a fun test for this kind of creativity: the “alternative uses” test.

That is: researchers name an everyday object, and ask the participants to come up with alternative uses for it.

For example, one participant in this study was given the prompt “button.” For alternative uses, s/he came up with…

“as a doorknob for a dollhouse, an eye for a doll, a tiny strainer, to drop behind you to keep your path.”

So much creativity!

Once these researchers had a definition and a way to measure, what did they find?

The research; the results

This research team started simple.

Participants – students in local colleges – sat for a while, then took a creativity test. Then they walked for a while, and took a second version of that test.

Sure enough, students scored higher on creativity after they walked than after they sat.

How much higher? I’m glad you asked: almost 60% higher! That’s a really big boost for such a simple change.

However, you might see a problem. Maybe students did better on the 2nd test (after the walking) because they had had a chance to practice (after the sitting)?

Oppezzo and Schwartz spotted this problem, and ran three more studies to confirm their results.

In one study, they had some students sit then walk, while others walked then sat.

Results? Walking still helps.

In another study, they had some students walk or sit indoors, and others walk or sit outdoors.

Results: walking promotes creativity both indoors and out.

Basically, they tried to find evidence against the hypothesis that walking boosts creativity…and they just couldn’t do it. (That’s my favorite kind of study.)

Just One Study?

Long-time readers know what’s coming next.

We teachers should never change our practice based on just one study – even if that study includes 4 different experiments.

So, what happens when we look for more research on the topic?

I’ve checked out my go-to sources: scite.ai and connectedpapers.com. (If you like geeking out about research, give them a try – they’re great!)

Sure enough, scite.ai finds 13 studies that support this conclusion, and 3 that might contradict it. (In my experience, that’s a good ratio.)

Connectedpapers.com produces fewer on-point results. However, the most recent study seems like a very close replication, and it arrived at similar findings.

In brief: although I’m usually a skeptic, I’m largely persuaded.

TL;DR

Walking outdoors helps restore working memory and attention; walking either indoors or outdoors enhances creativity (at least as measured by the “alternative uses” test).

I’d love to see some studies done in schools and classrooms. For the time being, I think we have a persuasive foundation for this possible conclusion.

Our strategies for putting this research to good use will, of course, be different for each of us. But it’s good to know: simply walking about can help students think more creatively.


Oppezzo, M., & Schwartz, D. L. (2014). Give your ideas some legs: The positive effect of walking on creative thinking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(4), 1142.

The Most Important 5 Minutes in Class: The Primacy/Recency Effect
Andrew Watson

As we put our lesson plans together, we teachers want to know: are some minutes more valuable than others?

That is:

Do students remember most at the 10-minute mark of the lesson, because they’re mentally revved up?

Or, perhaps they remember most from the final five minutes, because the whole class has led to this grand conclusion.

Or, perhaps some other time slot generates the most learning, because psychology reasons.

What does the research tell us?

Start Here

I occasionally see teaching advice that seeks to answer this question. That advice typically begins with a fascinating research pool.

Here’s the story.

Researchers present students with — say — a list of 15 words. After distraction, how many words do students remember? And, can we predict which ones?

Several studies suggest a consistent answer.

Students tend to remember words from the beginning of the list. Researchers call that the “primacy” effect.

And, they remember words from the end of the list. That result gets the moniker “recency effect.”

Going all the way back to 1962, this primacy/recency effect has a lot of research behind it. (For a more recent study, click here.)

Lab to Classroom

So, how should teachers plan our lessons based on this particular finding?

Let’s imagine that I tell my students a list of 8 instructions. Because of the primacy/recency effect, I suspect they’ll remember the early and late instructions better than the ones in the middle. (Hint: maybe I should write down a long list of instructions…)

But: what does this effect tell us about the most valuable teaching time during a class period as a whole?

From time to time, scholars who translate psychology research for classroom teachers make this argument:

The primacy/recency effect suggests that the first several minutes of class, and the final several minutes of class, have the greatest effect on learning.

That is: For the same reason that students remember the first and last instruction from my list of 8, they’ll learn the most during the first and last minutes of class.

Voila: a research-based answer to the question.

I confess, however, that I myself have doubts.

The argument says, in effect:

Rules governing mental processes for 60-120 seconds also govern mental processes for 45-80 minutes.

Honestly, I’m just not sure that’s plausible. My doubts spring from two sources.

Doubts, and More Doubts

In the first place, I doubt this advice because it extrapolates so far beyond the initial research conditions.

If research tells me something about — say — college students, that conclusion might also apply to 1st graders. But it might not. 1st graders aren’t college students.

If research tells me something about adolescents in Iceland, that conclusion might apply to teens in Brazil. But it might not. Icelandic culture differs from Brazilian culture.

And, if research tells me about mental functions over one minute, that conclusion might apply to 20 minutes. (Or 45, or 80.) But IT MIGHT NOT. One minute isn’t twenty.

Long-time readers know I always focus on “boundary conditions.” From my perspective, this advice goes WAY beyond the boundaries of the initial research.

By the way: I’ve asked SEVERAL wise people if they know of primacy/recency research that goes beyond a minute or two. So far, the answer is “no.”

In the second place, I doubt this advice because of the specific mental functions involved.

As far as I can tell, researchers explain the primacy/recency effect by talking about short-term memory and working memory.

Both of these mental faculties describe very short-term mental functions. In my grad-school classes, our profs typically said that working memory holds information somewhere between 5 and 30 seconds.

If, in fact, the primacy/recency effect results from short-term and working memory functions, then those findings almost certainly won’t apply to mental processes that take 30+ minutes.

Like, say, our classes.

Just Answer the Question

If this advice doesn’t hold, what can research tell us about the “most important five minutes in class”?

I’ve got two answers.

Answer #1:

I’ve asked lots of people if they have a research-informed answer to this question. So far, no one has a strong “yes.” But, if I hear of one, I’ll pass it along.

And, btw, a friend has answered “we really have to research that question!” So, I’ll let you know if/when his results come through.

Answer #2:

Long-time readers know my mantra: “don’t just do this thing; instead, think this way.”

In this case, I don’t think we can plausibly identify any one time slot that consistently generates the most learning.

Instead, we want to use core ideas from cognitive science to structure lesson plans effectively.

Use retrieval practice.

Beware working-memory overload.

Foster attention.

Activate prior knowledge.

And so forth.

If we follow this approach, every minute will build ultimately — and more-or-less equally — toward students’ learning.


Castel, A. D. (2008). Metacognition and learning about primacy and recency effects in free recall: The utilization of intrinsic and extrinsic cues when making judgments of learning. Memory & Cognition, 36(2), 429-437.

Working Memory in Everyday Life
Andrew Watson

Imagine this scenario: you’re standing in the CVS toothpaste aisle, trying to decide.

You think you should be able to recognize something familiar, but honestly there are so many choices.

Which brand are you loyal to?

Do you want mint?

Fluoride? Foaming? Whitening?

A patented “sensitive teeth” formula? Bacon flavor?

I think I made up the bacon. But, given all those choices and all the combinations, you simply can’t decide.

If you’re like me, you feebly grab at something plausible and make a dash for the register.

If you’ve had a long day of grading, you might just give up entirely.

So: what on earth is going on in your head? Why is picking a box of toothpaste so exhausting?

Cognition Im/possible

When I meet with teachers, I regularly discuss the importance of working memory.

This vital cognitive capacity allows students to hold on to several bits of information, and to reorganize/combine them into new facts, processes, and mental models.

Oversimplifying a bit, you could say it’s where the learning starts happening in the mind.

This essential mental process, however, creates two important problems.

The first problem: our students just don’t have very much working memory.

If you see students forget the question they were about to ask, or give up on a shockingly simple task, or lose focus completely, you might just be looking at working-memory overload.

It happens all the time.

The second problem: most of the ideas that we want our students to learn already exist in our own long-term memory.

We really struggle to see the working-memory load their work demands, because we already know how to do it.

Why can’t they do this simple math problem?

Why do they struggle to use new vocabulary words in a sentence?

And, why isn’t the answer to a history question perfectly obvious?

In every case, the correct answer is in our long-term memory, but students must wrestle with it in their working memory.

In other words, our own expertise obscures our students’ mental struggles from us.

But: when we go to the CVS toothpaste aisle, we know exactly what they’re going through. Too many mental variables – not enough headspace. Ugh.

When In Rome…

I’ve spent the last week in Rome for a conference, and – believe it or not – found myself thinking about all that toothpaste.

Why? Because: museums.

I visited several museums, and was repeatedly struck by my own working-memory overload.

For example, the room with all those coins:

What should I be learning from the hundreds (and hundreds!!) of doubloons and coppers and denarii?

Which are the most important examples?

Should I spot trends or cycles or dramatic shifts?

Of course, the museum folks know that I have those questions, so they provide answers:

Hundreds and hundreds of little cards with LOTS of information about the coins.

All that information includes specialized vocabulary.

And those vocabulary words get helpful definitions in parentheses.

All these answers – the information, the vocabulary, the definitions – benefit other experts in ancient coins.

But they leave me even more confused and overwhelmed.

In other words: like some teachers, museum experts did not recognize the cognitive overload experienced by many students/museum-goers.

I wanted to learn.

I wanted to understand.

Certainly I wanted to appreciate.

But I just didn’t know how to process SO MUCH STUFF. And, don’t get me started on the rooms with helmets or wine-jugs…

Inherent Expertise

At the same time I noted my own experience of working-memory overload, I experienced several museum collections that did NOT overwhelm my brain.

For instance, the first room (more or less) in the Vatican Museum includes several hundred busts: matrons, soldiers, children, priests, emperors, even an enslaved person.

To my surprise, I didn’t feel overwhelmed; instead, I felt curious and enticed. I wanted to look at the faces and speculate about their identities and stories and personalities.

Why the different reaction? Here’s my hypothesis:

I have no expertise in coins (or wine jars), and so all those samples felt overwhelming.

However, I have LOTS of expertise with faces. I spend most of my days interacting with them and the personalities behind them.

My inherent expertise with faces meant that 1000 busts felt fun and interesting, whereas 1000 helmets filled me with boredom and dread.

Classroom Implications

I said above that our teacherly expertise makes it difficult for us to spot our students’ working memory struggles.

For that reason, I think we should always look out for the working-memory overload that we all experience.

Driving to a new location in a rental car? Wondering where the rear defrost button is, and when to turn left? Could be working memory overload…

Navigating a new cafeteria, trying to find the silverware and the beverages and the gluten-free options? The salad dressing is where again? Yup: working memory overload…

Too many options when you’re trying to choose a hotel on that website? Perhaps you’re furious about all those helpful pop-ups? You know the feeling…

In brief: the better we get at recognizing working-memory problems in our own lives, the better we may become at spotting the problems our students are likely to have.

Empathy may be the pathway to understanding.

And, that empathy just might help us teach better.

Earworms and Sleep: What Will They Research Next?
Andrew Watson

Just last week, I spoke with middle- and upper-school students about learning.

We all know — and these students certainly know — that learning is hard. So, does cognitive science have any practical suggestions to help them study and learn?

Yes, reader, it does.

We know that retrieval practice helps learning much more than simple review.

We know that multitasking really isn’t a thing.

And, we know that exercise and sleep really help learning.

This last point — the importance of sleep — can be tricky.

After all, students say that they don’t have time to sleep — they have too much homework.*

Several students asked me: “I’m having trouble falling asleep. What do you suggest?”

In the moment, I suggested having a routine. Go to bed at the same time every night (as much as possible).

But, just a few days ago, a new study came across my desk…

Music and Sleep

I’ve often written about Dr. Michael Scullin’s research (for instance, here and here). He typically researches really practical questions. And, he studies and writes about them in unusually clear ways.

So, I’m a fan.

His most recent study looked at an unexpected topic: earworms.

You know: those infuriating tunes that get stuck in your head.

You just can’t get rid of them. (No, I’m not going to mention a song about very young scary fish that have huge teeth and eat seals and occasionally terrorize people. “Doo doo doo doo doo doo.”)

What effect do earworms have on sleep?

Questions and Answers

Research into sleep can get quite technical. We start talking about “spindle detection” and “polysomnography” and “frontal slow oscillation activity.”

Rather than go into the details, I’ll offer a quick summary of the conclusions:

First: survey results suggest that most people (87%!) think that listening to music will improve sleep (or, at least, not harm it).

However — a big however — people who reported listening to relatively more music also reported relatively lower sleep quality.

Second: the same survey results suggest that “earworms” make up a big part of this problem.

That is: the more music I listen to, the more earworms I experience. And, the more earworms I experience, the worse I sleep.

YIKES.

Third: you might think that music with lyrics results in more earworms than music without lyrics. Scullin’s team, in fact, thinks that’s the “intuitive view.”

Well, as so often happens, our intuitions are wrong.

Believe it or not, people who listen to instrumental versions of popular songs have more earworms — and worse sleep — than those who listen to the songs themselves.

So, What To Do?

What advice should we be giving students about sleep — other than, “get at least 8 hours”?

Scullin’s team sums up their study this way:

There are few behaviors as prevalent in young adults as listening to music, and many regularly listen to music as part of their bedtime routine. Listening to music feels relaxing, but familiar and repetitive music can trigger involuntary musical imagery that worsens sleep quality and daytime functioning.

In other words: to reduce earworms and sleep better, don’t listen to music before going to sleep. And, instrumental versions of popular songs seem to be especially likely to generate earworms.

I can’t believe I’m typing this, but: Listener beware!


* When students say to me, “I can’t sleep, I have too much homework,” I say, “Let’s think about this:

‘Homework’ is anything that helps you learn more.

Sleep helps you learn more.

Therefore, sleep is homework.

Do your sleep homework, and you will learn more.”


Scullin, M. K., Gao, C., & Fillmore, P. (2021). Bedtime music, involuntary musical imagery, and sleep. Psychological Science, 32(7), 985-997.

“No Cameras Allowed:” Does Taking Pictures During Lectures Benefit Learning?
Andrew Watson

Should students use cameras to take pictures of boardwork?

My high school students know my fierce anti-cell-phone policy. Nonetheless, they do occasionally ask if they may take a quick picture. (I typically say yes, and then check to be sure the phone goes back in the bag.)

When I do PD work at schools, or present at conferences, teachers take pictures of my slides ALL THE TIME.

Of course, the fact that students and teachers want to take those pictures doesn’t automatically mean that it’s a good idea to do so.

In fact, we have several reasons to think it’s a bad idea.

First reason: those photos might serve as a subtle hint to our brain’s memory systems: “you don’t need to remember this, because you’ve got a photo.”

Second reason: the act of taking a photo might distract students (and teachers) from the content we’re discussing.

For example: If my students are thinking about framing the photo correctly (and using a cool filter), they’re NOT thinking about the ways that Fences combines both comic and tragic symbols.

Third reason: we’ve got research!

Check this out…

Prior Knowledge

Researchers have looked at this question for several years now.

In several studies, for instance, researchers asked participants to tour a museum and take pictures of various works of art.

Sure enough, later tests revealed that people remember more about the artwork they didn’t photograph than the artwork they did photograph.

As predicted above, something about taking a photograph made it harder – not easier – to remember the content.

For all these reasons, it seems, teachers might reasonably discourage students from taking photos.

At the same time, we should probably keep asking questions.

In particular, we should acknowledge that museum photography probably isn’t a good stand-in for classroom photography.

That is: my students (and teachers during PD) probably take photographs to help themselves remember important ideas, concepts, and examples. In museums, people might take pictures because that statue is so cool and beautiful!

The museum research offers a useful and interesting baseline, but we’d love some research into … say … actual classrooms.

Cheesemaking, and Beyond!

Well, I’ve got good news. A research team — led by Dr. Annie Ditta at the University of California, Riverside — has indeed started exploring exactly these questions.

In their studies, Team Ditta had students watch 3 short online video lectures about obscure topics. (Like, cheesemaking. No, I’m not joking.)

Participants took pictures of half of the slides.

Here’s the primary question: did students remember more information from the photographed slides, or the unphotographed slides?

SURPRISE! Taking pictures helped students remember the information on the slide.

For the reasons listed above, I did not expect that result. In fact, the researchers didn’t either.

But, those photos really helped.

In one study, students got 39% of the questions right for the slides they photographed, and 29% right for the ones they didn’t. (Stats folks: Cohen’s d was 0.41.)

Given how EASY this strategy is, we should really pay attention to this finding.

By the way, Dr. Ditta’s study explored some other questions as well.

First: students remembered info from photographed slides better both when they decided which slides to photograph and when they were told which ones to photograph.

So, if we tell students to “photograph this information,” we (probably) don’t disrupt the benefit.

Second: what about spoken information?

Common sense suggests that taking a picture won’t help students remember spoken ideas (if those ideas aren’t written on the slide). In fact, taking that picture might distract students from the spoken words.

Strangely, in this research, Team Ditta came up with mixed – and surprising – results. In one study, taking a picture made no difference in memory of spoken material. In the other, it benefitted memory of spoken material.

WOW.

So, What Should Teachers Do?

Before we rush to apply research in our classrooms, we always want to ask a few questions.

In this case, I think we should have LOTS of questions.

First: Dr. Ditta’s research looked at brief, online lectures for college students.

Do these conclusions apply to longer classes? To in-person classes? To K-12 students? To students who aren’t neurotypical?

We just don’t (yet) know.

Second: participants in these studies didn’t do anything with the photos. They simply took them.

Would we find the same pattern for students who reviewed their photos, compared to – say – reviewing their notes?

We don’t (yet) know.

Third: participants were tested on their knowledge 5 minutes after the videos were done.

We’ve got LOTS of research showing that short-term gains don’t necessarily result in long-term learning.

So, would these findings hold a week later? A month later?

We don’t (yet) know.

 

Given all the things we don’t know, how can this research benefit us?

For me, these studies open up new possibilities.

In the past, as I described above, I permitted students (and teachers) to take photos. But I tried to discourage them.

I would even – on occasion – explain all the reasons above why I thought taking photos would reduce learning.

Well, I’m no longer going to discourage.

Instead, I’ll explain the complex possibilities.

Perhaps taking photos helps memory because it signals that THIS INFORMATION DESERVES ATTENTION.

Or, perhaps taking photos helps only if students DON’T review before tests. But, taking notes would help more … especially the students who DO review before tests.

And perhaps, just perhaps, this research team got flukey results because even well-done research sometimes produces flukey results. Future classroom research about taking photos of slides might ultimately suggest that (despite this thoughtful work), it really is a bad idea.

I wish the answer were simpler, but it just isn’t.

TL;DR

Surprising new research suggests that taking photos of lecture slides helps college students remember slide contents – even when students don’t review those photos.

Before we teachers rush to make dramatic changes, we should think carefully how this research fits our classrooms and contexts.

And, we should weigh this memory strategy against lots of other strategies – like retrieval practice.

Finally: let’s all watch this space!


Ditta, A. S., Soares, J. S., & Storm, B. C. (2022). What happens to memory for lecture content when students take photos of the lecture slides? Journal of Applied Research in Memory and Cognition.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

 

It’s Funny (but It’s Not): Our Instincts about Learning are Often Badly Wrong
Andrew Watson

Every now and then, research is just plain funny. Here’s the story:

If you’ve spent even a hot minute at a Learning and the Brain conference, you know that multitasking is not a thing.

When we undertake two cognitively demanding tasks “simultaneously,” we actually switch rapidly back and forth between them.

The result: we do worse at both.

That is: if you’re reading this blog post while listening to the news, you won’t understand or remember either very well. (At least, not as well as you would have done with each task separately.)

Where’s the funny?

In 2017, Shalena Srna published research about our perceptions of multitasking.

She found that we do better at activities when we think we’re multitasking than when we think we’re monotasking.

For instance, participants transcribed a video lecture about sharks.

Researchers told half of the participants that listening and transcribing are two different things, so they would be multitasking.

They told the other half that listening and transcribing are one thing, so they’re not multitasking.

Sure enough, the group that perceived transcription as multitasking transcribed more words, and remembered more content, than the group who perceived the same task as monotasking.

Amazing.

Srna’s team suspects that people who think they’re multitasking concentrate harder, and so do better.

Hence this paradox: people don’t multitask well, but we monotask better when we think we’re multitasking.

The Bigger Picture

So, what do we do with this comical finding?

On the one hand, I don’t think it has direct teaching implications. That is, we teachers should NOT pretend to our students that they’re multitasking so that they’ll monotask better. (Why not? Well, misleading students is usually a very bad idea…)

On the other hand, this study provides an important reminder:

Humans don’t intuitively understand how we think and learn.

We teachers (and we students) might just FEEL that a particular learning strategy works well for us. Sadly, those powerful feelings are often just plain wrong.

I can think of several research examples of this not-so-funny problem.

In 2009, Dr. Nate Kornell and Dr. Lisa Son published a study about retrieval practice.

Students learned some word pairs.

They practiced HALF of those words with simple review.

They practiced the OTHER HALF with retrieval practice.

Unsurprisingly (to the researchers — and to us), the students remembered more words after retrieval practice than after review. (About 6% more.)

Surprisingly, they PREDICTED that they would remember more words after the review. (About 7% more.)

That is: even though they actually formed stronger memories after retrieval practice, they thought they formed stronger memories after the other (less effective) strategy.

Why? Because (say it with me):

Humans don’t intuitively understand how we think and learn.

Honestly, this insight is just bad news.

An Even Bigger Picture

Another study — actually a literature review — makes the same point more broadly.

Dr. Nick Soderstrom, working with Dr. Robert Bjork, reviewed research into short-term performance and long-term learning.

To summarize this ENORMOUS review, they found that teaching strategies which benefit short-term performance do not consistently benefit long-term learning.

That is: imagine that I introduce a new topic in class, and give my students a quick low-stakes quiz at the end of that class. The strategies that boost class-end quizzes probably won’t help students learn well enough to demonstrate understanding on a later test.

They understood it today, but not long-term.

The Even Bigger Question: So What?

So far, these research findings have the whiff of humor.

Ain’t it funny that we monotask better when we think we’re multitasking? LOL.

In truth, this consistent finding — humans don’t intuitively understand how we think and learn — has important implications.

Here’s what I mean:

In theory, the field of Mind, Brain, and Education creates conversations among equals: psychology researchers, neuroscience researchers, and teachers/academic leaders.

In practice, this field often results in researchers telling teachers what to do.

I myself, in my own work, have spent LOTS of time championing the voice of teachers.

We teachers can, should, MUST speak up for ourselves. Our experience — both individual and professional — matters in these conversations. We’re not here to obey; we’re here to share ideas for mutual benefit.

However, because “humans don’t intuitively understand how we think and learn,” we must speak up for our experience AND we must do so modestly.

We must do so with an open mind.

Yes, my experience tells me that teaching this way helps students learn.

But, my definition of “learn” is “do well on the class-end quiz.” Soderstrom shows us — very convincingly — that class-end quizzes don’t predict long-term learning and understanding. (Of course: “long-term learning and understanding” is my goal!)

Yes, my experience tells me that I can multitask! Alas, research shows I’m just monotasking efficiently.

My gut tells me that simple rereading results in more learning than retrieval practice. Alas, my gut is just plain old wrong.

In other words: we teachers should have a role in this Mind, Brain, Education conversation. To be most effective in that role — to merit that role — we must acknowledge the limitations of our insight, training and professional experience.

This balance is VERY DIFFICULT to get right. I hope we can talk more about finding a harmonious tension between speaking up and listening with humility and curiosity.


Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493-501.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

Test Anxiety: How and When Does It Harm Students?
Andrew Watson
Andrew Watson

When our students learn, we naturally want them to show us what they’ve learned.

Most schools rely, in varying degrees, on tests. The logic seems simple: if students know something, they can demonstrate their knowledge on this quiz, or test, or exam.

But, what about students who feel test anxiety?

These students might learn the material, but not be able to show what they’ve learned — at least, not as well.

The idea of test anxiety has been around for decades, and a significant pool of research suggests it correlates with measurably lower test grades.

How do we fix the problem?

Step 1: Defining the Problem

As always, we can’t really fix a problem until we understand the problem.

When we consider test anxiety, the explanation seems entirely straightforward.

Most students feel some degree of stress during tests. That’s normal, and can be helpful.

Some students, however, feel unhelpfully high levels of stress during tests. Distracted by sweaty palms and intrusive thoughts, they don’t concentrate on the cognitive task at hand.

In short: test anxiety harms the student during the test. Teachers can help students by reducing their stress in the moment. (Yes, we have lots of strategies to do so — see below.)

But wait!

What if that theory isn’t true? What if test anxiety muddles cognitive performance at some other time? If that’s true, then our “in-the-moment” strategies won’t help — or won’t help enough.

Intriguing Hypothesis

How would we test this unsettling question?

A group of researchers in Germany discovered a thoughtful strategy.

Medical students in Germany spend lots of time (like, say, months) preparing for a high-stakes final exam.

Dr. Maria Theobald worked with over 300 of these students, who used an online learning platform to study. On this platform, these students…

… practiced problems from previous exams, and

… took five practice tests.

She also measured their test anxiety in two ways.

First, she measured their overall test anxiety, with a standard questionnaire.

Second, she measured their day-to-day test anxiety, rating their “tension about the upcoming study day” on a 1-5 scale.

And, of course, she measured lots of other things. (Spoiler alert: Theobald measured students’ working memory — a detail that will be important later.)

What happens when these researchers put all these pieces together?

Surprising Results

Here are the headlines:

Test anxiety does not harm students’ exam performance in the moment.

Instead, it does harm their performance during the preparation for the exam.

Why does Theobald reach this conclusion?

If test anxiety harms students in the moment, then these students should do worse on the FINAL TEST than they did on the PRACTICE PROBLEMS and the PRACTICE TESTS.

Imagine that a student averaged an 85 on practice problems and an 84 on practice tests, but scored a 75 on the final test. We would say:

“Something strange happened.

It looks like anxiety prevented students from demonstrating the knowledge they obviously have. (They obviously have it because they scored so well on the practice problems/tests).”

Theobald’s data, however, did not fit that pattern at all.

Instead, anxious students made less progress during the months of study BEFORE the test. And, their final test score was right in line with that earlier (lower) performance.

That is: anxious students scored 75 on the practice problems and practice tests … and then a 75 on the final exam as well. (These numbers are examples, not real data.)

So, we find ourselves saying:

“Hmm. These anxious students scored consistently lower than their peers — both on the final test and on the months of practice work they did.

Their anxiety didn’t lower their final score in the moment. It interfered with their learning trajectory as they prepared for the final test.”

Reader: I did not expect these results.

What Should Teachers Do?

First, we should — in my view — continue with stress reduction strategies in the moment.

We’ve got evidence that letting students vent their stress improves exam performance. And we’ve got evidence that helping students reframe stress as positive (“I’m excited!”) helps as well.

So, I wouldn’t give up on these pre-test strategies just yet.

Second, this research encourages us to take the long view. “In the moment” strategies might help some, but longer-term strategies now sound more urgent.

Because Theobald’s research is so new, I haven’t seen any responses to it — much less research-based suggestions.

But I think of “values affirmation” as one potential (let me repeat: “potential”) way to reduce this kind of test anxiety.

I’ll be keeping my eye out for others. If you hear of a promising one, I hope you’ll let me know.

Potential Limitations

First: an important limitation.

All research studies include limitations, so it’s no criticism to say this study does too.

Specifically, this research was done with students completing medical school. That is: they (probably) have been highly academically successful for decades. They (probably) bring higher levels of motivation than many students.

And, their test-anxiety profile might not match those of my students, or of yours.

Until these findings are replicated in other student populations (and cultural contexts), I would rely on professional experience to adapt them to our own settings.

Second: an important non-limitation.

I noted above that Theobald measured students’ working memory. (Long-time readers know: I’m obsessed with working memory.)

This research team speculated — plausibly — that working memory capacity might mitigate the effects of test anxiety.

That is: students with more cognitive space to think might feel less distracted by anxious thoughts.

However, their data did not support that hypothesis. Students with high working memory were just as troubled by test anxiety as those with lower working memory.

TL;DR

In this study with German medical students, test anxiety interferes NOT with student performance on the final test, but with their learning before the test.

If further studies support this conclusion, we should refocus our work on helping those students during the weeks and months before the test itself.


Theobald, M., Breitwieser, J., & Brod, G. (2022). Test Anxiety Does Not Predict Exam Performance When Knowledge Is Controlled For: Strong Evidence Against the Interference Hypothesis of Test Anxiety. Psychological Science, 09567976221119391.

Does Mindfulness Help? A Blockbuster New Study
Andrew Watson
Andrew Watson

Few ideas in education sound better than mindfulness.

If mindfulness programs work as intended, teachers and schools can help students center their attention and lower their stress.

We’ve got suggestive research indicating that, when done properly, such programs can improve wellbeing.

Perhaps they can even help students learn more. (We school people really like research that helps students learn more.)

What’s not to love?

Not Feeling the Love; Really Feeling the Love

Although I’ve linked to suggestive research above, this field does have a research problem.

Most mindfulness studies include relatively few people.

And, their study designs aren’t often persuasive. (The topic of “study design” gets technical quickly. The simplest version is: to say that “research shows this” convincingly, a study needs to check A LOT of boxes. Not many mindfulness studies do.)

So, we’d love a study with LOTS of people. And, we’d like a really good study design.

So, how about:

A study with 8,000 students.

In 85 schools.

Lasting over two years.

With a pre-registered study design.

In this study, researchers paired similar schools: for example, two large schools, located in Wales, with similar socio-economic makeup, and so forth.

One school in that pair got a 10-week curriculum in School Based Mindfulness Training. School teachers ran these sessions, which included mindfulness exercises and home practice and so forth.

The other school in the pair continued the SEL work that they were doing. (Researchers evaluated the extant SEL programs to ensure they were good quality.)

So: did the mindfulness training benefit students more than ongoing SEL work?

What Researchers Measured; What They Found

This research team measured three primary outcomes: risk for depression, social-emotional functioning (with a “Strengths and Difficulties Questionnaire”), and well-being.

And, believe it or not, they measured twenty-eight secondary outcomes: executive function, drug and alcohol use, anxiety, and so forth.

Did the students who got the mindfulness training show statistically significant differences compared to those who got the “teaching as usual” SEL training?

The researchers themselves had been optimistic. In the reserved language of research, they write:

Our premise was that skills in attention and social-emotional-behavioral self-regulation underpin mental health and well-being across the full spectrum of well being.

“Skills in attention and social-emotional-behavioral self-regulation” sounds A LOT like mindfulness, doesn’t it?

Their review of earlier research, and their own pilot study, showed a “promise of effectiveness.” But, they designed and ran this 2-year-8000-student study to be sure.

What did they find?

Basically, nothing.

They write that they “found no evidence that [school based mindfulness training] was superior to [teaching as usual]” one year after the training was over.

In the primary outcomes, they found no differences for depression, well-being, and social-emotional function.

In the secondary outcomes, in fact, they found students in the mindfulness group had slightly worse results in five categories:

… higher levels of self-reported hyperactivity and inattention,

… higher panic disorder and obsessive-compulsive scores,

… lower levels of mindfulness skills.

And so forth.

These differences weren’t large, but they certainly don’t suggest that mindfulness training is better than other SEL programs.

Remaining Questions

Any study including 8000+ people, and measuring 30+ variables (!), will result in LOTS of details, and lots of questions about methodology.

These points jump out at me:

First: these researchers have done an impressively thorough job.

Reasonable people will push back on their findings. But this research team has obviously taken extraordinary care, and provided an immense amount of information for others to examine. (Check out their website.)

Second: I’ve traditionally been skeptical of “teaching as usual” control groups. Here’s why:

Some teachers got a shiny new thing: mindfulness training! Other teachers got nothing: the SEL curriculum they’ve been doing all along.

I’m rarely surprised when the new thing produces better results — it’s new!

However, in this case, the new thing DIDN’T produce better results. The results, basically, were identical.

So, my typical objection doesn’t really apply here.

Third: although 43 schools added mindfulness programs, more than half of them continued with the SEL training they were already doing.

That is, we’re not exactly comparing mindfulness to other SEL approaches. Some schools did only mindfulness; others did only SEL; others offered a blend of both.

Would the mindfulness programs produce better results if they replaced the SEL programs rather than combined with them? We don’t know.

Fourth: Why didn’t the mindfulness programs help?

One reason might be: most students just didn’t do the mindfulness exercises consistently.

On a 0-5 scale, students on average rated their mindfulness practice as 0.83. As in, less than 1. As in, they simply didn’t practice much mindfulness.

If I don’t take my migraine medication, it won’t help reduce my migraines. If I don’t do my mindfulness exercises, I’m unlikely to get the benefits of mindfulness.

Would these programs work if they took place in school, so students practiced more mindfulness? We don’t know.

TL;DR

This well-designed study — including more than 8000 participants — strongly suggests that mindfulness training doesn’t produce more (or fewer) benefits than other SEL approaches.

This research doesn’t suggest we must cancel the programs we have. However, it pushes back against the argument that mindfulness provides distinct advantages, and that all responsible schools must adopt such programs immediately.

As long as schools tend responsibly to their students’ social-emotional needs, a variety of approaches can work equally well.


Kuyken, W., Ball, S., Crane, C., Ganguli, P., Jones, B., Montero-Marin, J., … & MYRIAD Team. (2022). Effectiveness and cost-effectiveness of universal school-based mindfulness training compared with normal school provision in reducing risk of mental health problems and promoting well-being in adolescence: the MYRIAD cluster randomised controlled trial. Evidence-Based Mental Health, 25(3), 99-109.

The Unexpected Problem with Learning Styles Theory
Andrew Watson
Andrew Watson

I recently read a much-liked Twitter post that said (I’m paraphrasing here):

If you try to debunk Learning Styles Theory and you face unexpected resistance, start looking for the profit motive.

Hmmm.

To be clear: learning styles theory just doesn’t have plausible research support.

If and when we can debunk it, we certainly should.

But, in my own experience at least, teachers who believe the theory often do so with the best of motives.

Mocking those motives — or, even worse, implying believers have wicked motives — seems unfair. And, likely to prove counterproductive.

Yes, grifters exist. Yes, we should call them out. But most teachers who offer “unexpected resistance” can explain why — for reasons that have nothing to do with profits. (Honestly, if teachers were driven by profits, would we have joined this profession?)

Surface Plausibility

In the first place, MANY teachers learned about Learning Styles Theory in their education programs.

In fact, Blake Harvard — “The Effortful Educator” — searched the websites of 9 major schools of education, and found that MOST referenced Learning Styles Theory positively.

Can we be surprised that teachers believe what their professors teach them?

Equally important, this theory seems to align with much of our classroom experience.

After all, daily classroom life suggests that students learn differently. Some students respond well to this approach, while others need another approach entirely.

So, it seems that Learning Styles Theory (helpfully?) explains these differences, and (helpfully?) suggests a coherent way to respond to them.

Why wouldn’t teachers believe a theory that a) we learned in graduate school, and b) aligns with our daily experience?

Getting Personal

In fact, “unexpected resistance” to Learning Styles Theory often stems from an even deeper source.

Many dedicated teachers have been relying on it for years. Often, their self-definition as good and caring teachers begins with or includes their fidelity to this theory:

“My students know I care about them because I tailor my instruction to their learning style!”

When we tell teachers that we simply have no evidence to support the theory (and, to be clear, we don’t), we’re not simply asking them to change what they do and believe.

Instead, we are — in effect — asking them to admit that their exemplary teaching practice was (at best) useless, and (possibly) detrimental. FOR YEARS.

That admission, of course, is incredibly painful and troubling.

For us to mock teachers (“look for the profit motive!”) for this painful struggle … well, I simply don’t understand how that approach will help. I can’t remember the last time that mockery helped me change my teaching practice for the better.

Plausible Alternatives

If we shouldn’t accuse people of being charlatans (hint: I think we mostly shouldn’t), how should we contradict these misbeliefs?

As I’ve written before, I do think this is a very difficult problem.

We really should contradict those false beliefs, but I’m not at all sure that doing so encourages people to adopt new ones.

My current approach relies on these steps.

First: rather than asking teachers to stop believing one thing, I encourage them to start thinking about something else.

My hopeful theory: the more time they’re thinking about, say, working memory, the less time they’re thinking about Learning Styles Theory.

Second: I don’t contradict in public. I try to chat with believers one-on-one.

Honestly, this approach includes perils. If I don’t contradict in public, others might believe that the theory does have merit.

However, as noted above, I think increasing shame reduces the likelihood that new advice will stick.

Third: I provide research, and ask lots of genuinely curious questions.

I hope that peer-to-peer curiosity will ultimately change more minds than more confrontational strategies.


To be clear, I’m not certain that my approach has more merit than others. I certainly have no research suggesting that it will work.

But experience tells me that “supportive listening” beats “questioning motives” as a motivational approach.

If you’ve got suggestions and strategies, please share them!

Marshmallows and Beyond: Cultural Influences on Self-Regulation
Andrew Watson
Andrew Watson

Few psychology studies have created a bigger stir than Walter Mischel’s research into marshmallows.

Okay, he was really doing research into self-control.

But the marshmallow images were adorable: all those cute children desperately trying not to eat one marshmallow right now, so that they’d get two marshmallows in fifteen minutes.

Mischel’s studies got so much attention because they suggested that self-control correlates with SO MANY good things: high grades, better jobs, better health, etc.

And, they suggested that self-control is relatively stable. Some studies suggested that the marshmallow test, given to a child at age five, could offer insights into their lives decades later.

Now, this research pool includes lots of complexity.

If, for instance, you saw Dr. Mischel at our 2015 conference in Boston, you know that trustworthiness matters.

Children waited for the 2nd marshmallow more often if they had reason to believe that the experimenter would actually follow through on their commitments. (Smart kids!)

So, do other factors matter?

The Power of Culture

A research team in Japan, led by Kaichi Yanaoka, wondered if cultural factors might shape self control.

So, for instance, in Japan waiting for food gets cultural priority — much more so than in the United States (where Mischel did his research).

But, Japanese culture does not emphasize waiting to open gifts as much as families in the US often do.

For instance, as Yanaoka explains in this study, Japanese parents often leave gifts for their children, with no cultural expectation that the children should wait to open them.

So, do these cultural differences shape performance on the marshmallow test?

Hypothesis. Data.

Based on these cultural norms, team Yanaoka hypothesized that children from the US would be better at waiting to open gifts, but worse at waiting to eat marshmallows, than their Japanese counterparts.

Because research requires precision, this study includes LOTS of details. (For instance, the researchers checked to be sure that the Japanese children had eaten marshmallows before, so they knew what temptation they were resisting.)

But the overall design was quite simple. In the US and Japan, children waited either to eat marshmallows, or to open gifts. Researchers followed a simple script:

Now it’s gift time! You have a choice for your gift today. You can either have this one gift to open right now, or if you wait for me to get more gifts from the other room, you can have two gifts to open instead. […]

Stay right there in that chair and I’ll leave this right here, and if you haven’t opened it […] before I get back, you can have two to open instead.

Of course, for the children getting marshmallows, the script said “marshmallow” and “eat” rather than “gift” and “open.”

So, what did the researchers find?

Sure enough, cultural expectations shape self control.

In this case, Japanese children waited for the second marshmallow (median time: 15 minutes) much longer than US children (median time: 3.66 minutes).

But, US children waited to open the gift (median wait time: 14.54 minutes) longer than Japanese children (median time: 4.62 minutes).

When you look at the graphs, you’ll be impressed by the precise degree to which cultural expectations reverse wait times.

The Big Picture

So, what do we do with this information?

I think Yanaoka’s study offers us a specific reminder, and a general reminder.

Specifically, this study lets us know that self-control is NOT one monolithic, unchangeable thing.

Self-control varies across people and cultures. Yes, self-control matters; but, performance on one test — even a test with marshmallows — doesn’t tell us everything we need to know.

Generally, this study reminds us that culture always matters.

So, teachers should indeed welcome advice that experts offer us about — say — adolescence. But, that advice always includes cultural constraints. Adolescence, after all, differs in Denver, Kyoto, São Paulo, Reykjavik, and Gaborone.

So too cultural norms around stress. And feedback. And appropriate relationships between adults and students. Yes, and self-control.

No advice — not even research-based advice — gives us absolute guidance across all cultural norms.


Yanaoka, K., Michaelson, L. E., Guild, R. M., Dostart, G., Yonehiro, J., Saito, S., & Munakata, Y. (2022). Cultures crossing: the power of habit in delaying gratification. Psychological Science, 33(7), 1172-1181.