L&B Blog – Education & Teacher Conferences
“Video Captions Benefit Everyone”: An Investigation
Andrew Watson

If you’ve watched educational videos lately, you’ve probably noticed that captions are EVERYWHERE. In fact, caption technology has evolved in recent months so that captions now highlight individual words as they’re spoken, rather than presenting entire phrases or sentences one after another. But what if this well-intentioned practice is actually interfering with learning?

Given their prevalence, I’ve just assumed that we have good reasons to include captions. At the same time, people do LOTS of things that contradict evidence — so perhaps the time has come to investigate my assumption.

I started by casting a wide net. I went to Google Scholar and put in “captions and subtitles.” The first hit sounded a confident tone: “Video Captions Benefit Everyone.”

To be sure we understand the confidence of this study, let’s read the first two sentences:

Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults).

More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video.

If in fact captions promote understanding, focus, and recall for practically everyone who watches, we’ve got as close to a slam dunk as I can imagine.

Let’s check to be sure…

Focus on Definitions

As is so often the case, we should start by defining our terms, questions, and expectations clearly.

Because this blog focuses on education, I’m interested in captioned videos used to help K-12 students learn stuff. That is: research into captions when people watch movies for fun doesn’t fit my question.

For the time being, I’m also going to focus on same-language captions: a video where the narrator speaks in English and the captions show the narrated words in English. Of course, a student who speaks Spanish at home might benefit from seeing Spanish subtitles for a video in English, but that’s a very different research question.

For similar reasons, I’ll start by focusing on research into neurotypical students. I can imagine that students with particular diagnoses might — as part of their learning profile — have different requirements than their peers. It’s probably helpful to start by understanding how most people learn, and then adapt that practice as needed for individuals.

(To be clear: we will ultimately be interested in different-language captions, and in the potential benefits for different categories of learners. To start with, I simply want the most basic question answered.)

I’d like to find several rigorously designed studies all pointing in the same direction. I want sample sizes that rise above the trivial; I’d like plausible control groups; I’d like objective measures — not mere self-report. And so forth. After all, I shouldn’t tell you that captions are (or aren’t) a research-informed instructional practice if all the research I’m citing doesn’t meet basic standards.

Finally, if I’m really lucky, I’d like to have both research and theoretical frameworks pointing in the same direction.

Now that we’ve got some parameters in place, let’s return to that study and see what we find.

Working the Steps

I spend lots of my time double-checking (or triple-checking) “research-based” claims, so I’ve got a process to follow.

I won’t walk you through each step of the journey — it took a few hours — but the results are impressively clear.

First:

We have essentially no research that fits the criteria above.

No, really: we don’t have a pool of persuasive research giving us an answer one way or another.

That “Captions Benefit Everyone” study focuses on foreign-language captions, or on non-neurotypical learners, or on college students, or on self-report. LOTS of self-report.

I should explain, by the way, why self-report data don’t persuade most researchers: people are REALLY bad at knowing what helps them learn. College students might THINK they pay more attention, or remember better, when they see videos with captions. But unless we actually measure their attention, understanding, or learning, we shouldn’t actually make claims about attention, understanding, or learning.

When I asked Elicit.com to research this question, I found the same problem. The studies it summarized focused almost entirely on Chinese students watching videos with English captions. That research helps answer an important question — but it’s not the question I asked.

Second:

The study that comes the closest to answering my question suggests that captions might interfere with reading for not-at-risk students.

This study suggests that captions DO help at-risk 2nd and 3rd graders recognize words. But the not-at-risk students recognized fewer words with the captions on. Obviously we’re glad to have strategies to help at-risk students. But that’s not the big-picture question we started with.

(I’m honestly puzzled that captions benefit struggling readers…but because I don’t teach reading I’m not going to have a strong opinion here.)

Let’s Talk Theory

I noted above that I’d like to have both well-done empirical research AND a theoretical framework to answer my question.

Richard Mayer’s “redundancy principle” tells us that presenting the same information both verbally and visually at the same time increases cognitive load.

In his excellent book Sweller’s Cognitive Load Theory in Action, Oliver Lovell gives a common example: conference presentations.

It’s a common practice for presenters to provide written information on their slides and then to read out that information during the presentation. To conventional audiences, this represents the presentation of redundant information. Only one presentation format is needed, either the written words, or the spoken words. (62)

That example sounds A LOT like captions, no?

A full explanation of the redundancy principle requires a blog post of its own. The short version goes like this:

  • Because I read faster than others speak, I’m constantly reading ahead of the speaker’s current point in the text. I must therefore stop and go back multiple times. All this back-n-forth adds to my cognitive muddle.
  • With captions, I have to focus either on the WORDS that the captions present or the IMAGES in the video — and that back-n-forth also adds to the cognitive work I have to do.

For the dual-coding folks reading this post, remember: dual coding advocates that words and images complement one another — not that they represent precisely the same information.

Putting It All Together

  1. I found NO research with objective measures of neurotypical K-12 learners reading same-language captions. The one study that comes closest hints — but does not say — that captions might interfere with word recognition for early readers.
  2. Mayer’s redundancy principle gives us a good reason to be VERY skeptical of claims saying that “captions benefit everyone.”
  3. If you find research that matches the criteria above, please send it my way. I always want to keep this blog as up-to-date and accurate as possible.

In the meanwhile, here are my suggestions:

a) Be wary of claims that captions benefit most learners — especially neurotypical K-12 learners reading same-language captions.

b) Be ESPECIALLY cautious if the video includes cognitively complicated material — where cognitive load is already high.

c) Be aware of legal requirements, especially for students with diagnosed learning differences. Also, I myself would be more open to the benefits of captions for students watching videos in languages they don’t speak fluently. I haven’t done a deep dive into that research pool, but common sense suggests such captions could have real benefits.


Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1), 195–202. https://doi.org/10.1177/2372732215602130

Linebarger, D., Piotrowski, J. T., & Greenwood, C. R. (2010). On-screen print: The role of captions as a supplemental literacy tool. Journal of Research in Reading, 33(2), 148–167.

Lovell, O., & Sherrington, T. (2020). Sweller’s cognitive load theory in action. John Catt.

On Task: How Our Brain Gets Things Done by David Badre
landb

I’ve been staring at my to-grade pile—essays, exams, books I’ve been meaning to read, skills I want to develop—and honestly, it’s not that I don’t want to begin. I just… can’t. I open a document, then blink, and suddenly it’s dinner time. I’ve read all the Getting Things Done books, but what is it that gets me On Task when I already know what to do? Sound like a familiar question? That tension between desire and initiation is exactly at the heart of David Badre’s On Task: How Our Brain Gets Things Done.

Badre, a cognitive neuroscientist, invites us into that murky space between knowing and doing. He shows how our brains—particularly the prefrontal cortex—juggle goals nested inside other goals (make coffee, generate a lesson plan, grade essays), and why that juggling sometimes comes crashing down. He doesn’t promise a self‑help checklist; instead, he offers compassionate clarity: our executive function is powerful and fragile, built for hierarchies, stability‑vs‑flexibility trade‑offs, and moment‑to‑moment cost‑benefit calculations. Badre is willing to wade into the philosophical and biological depths of what it means to have a mind at all.

Throughout the book, Badre asks: are we really steering our lives, or are we just riding the rails of our biology and past conditioning? When my students and I discuss biopsychology or epigenetics, we circle the same tension: with so much shaped by brain circuitry, classical conditioning, even the hidden influence of our genes and society—what does it mean to choose? Badre is honest about these boundaries. He uses case studies—like patients who, after prefrontal brain injury, can explain their intentions but can’t act on them—to explore the razor-thin margin where knowledge ends and true agency might begin. He draws on neuropsychological history, from Penfield’s sister to the famous EVR, and roots these questions in the living, vulnerable architecture of the brain.

You will get a strong foundation along with these great stories: Badre digs into computational models and the messy, ongoing debates about how cognitive control is organized. He walks us through the brain’s hierarchies—how the prefrontal cortex can set broad, abstract goals and then decompose them into practical action—and then pulls back to ask what these models do (and don’t) explain about everyday life. You get stability and flexibility, multitasking, inhibition and switching, the information retrieval problem, the limits and benefits of control across the lifespan. Some readers call the book demanding or dense in spots, but that’s part of the payoff: Badre trusts us to join the scientific conversation, not just spectate. And even just getting the gist of tough parts will change your thinking. 

In my daily life, all this plays out like driving home on autopilot—forgetting the road, feeling the inertia of routine, and ending up somewhere unintended. In classrooms, I see students wrestling with the same forces: between conditioned knowledge of what to do, procrastination, and action. In pandemic classrooms and the distractions of current politics, we all feel a deficit in our cognitive systems—our routines unravel, our attention frays, and our brains realize how much effort it takes to get On Task when the scaffolding disappears. If you’ve ever wondered why a simple act like making coffee can feel so complicated—or what happens in a brain when you try to stop one task and start another—this book offers insight, not just explanation.

What stays with me is that Badre refuses false optimism. He doesn’t say, “just build more willpower.” Instead, he hands us a mirror: our executive function is shaped—and we can shape it too, through environment, practice, small routines. That kind of insight feels hopeful because it’s real. It demands curiosity, not quick fixes.

So here’s my punch at the end, inspired by his closing: On Task feels less like a how-to guide and more like an invitation—to observe our own hidden machinery, to notice how easily routine can slide into unawareness, and to ask: Who am I when my executive function is truly steering? What small moments—making coffee, grading papers, reading a chapter—might I reclaim to bring more awareness, more agency, more grounded action?

If you’re familiar with “I know what I should do, but I just can’t!”—this is a book to read. It doesn’t diagnose you. It doesn’t sell you magic. It helps you see the space where your choices live—and that, to me, feels like a foundation worth building on.

Book Review: The End of Trauma, by George Bonanno
Andrew Watson

When I first started teaching — it’s been a few decades now — nobody ever talked about trauma. Quite literally, I do not remember a single faculty meeting or PD day or all-school read or … really … ANYTHING that raised the subject.


If you had asked me back then (the ’80s and ’90s), I would have said: “as far as I can tell, trauma happens to some people somewhere — and that’s terrible. But I don’t know of trauma happening here to our students. It’s just not on the radar.”

In the last twenty years, the world of education has done a 180-degree pivot. We talk about trauma, and trauma-informed education, a lot. We’ve got books and conferences and speakers and data.

Trauma — it seems — is everywhere. Whereas in the ’80s, trauma was a “someone else, somewhere else” problem, today it seems to be a “lots and lots of people, right here” problem.

The transition threatens whiplash. How can we manage it?

A Surprising Perspective

Back in the fall of 2024, Prof. George Bonanno presented his research at our Boston conference. His hour-long keynote included a number of surprising findings, and prompted me to buy his book. At last (!) I’ve had a chance to read it and to understand those surprising findings more deeply.

As I wrote back in December, Bonanno finds that

  • Roughly 10% of people who experience trauma have enduring symptoms;
  • Less than 10% start without symptoms, but symptoms develop over time and persist;
  • Roughly 20% initially experience symptoms, but recover over two years;
  • The rest never respond with serious symptoms.

In other words, most people do NOT respond to threatening events with PTSD. And, many who DO initially experience PTSD recover within months.

For these reasons, Bonanno doesn’t speak of “traumatic events” but of “potentially traumatic events.” After all, such events might lead to a trauma response…but most of the time they don’t.

By the way: Bonanno doesn’t arrive at these conclusions by looking at marginally threatening experiences. Two of his data sets come from people who experienced the 9/11 attacks directly — as in, they fled the buildings after the planes hit — and from members of the military who served in combat.

Even in these populations, he finds that people are mostly resilient — both in the short term and over the longer term.

In brief: we can start to manage our whiplash by realizing that PTSD is obviously very bad, but not remotely inevitable. Our students and colleagues and community members are likelier to respond to potentially traumatic events by being stable and resilient.

The Non-Recipe Recipe

This initial insight leads to an important question: exactly WHY are some people more resilient than others? If you and I go through roughly similar “potentially traumatic experiences,” why do I develop PTSD symptoms while you don’t?

To ask the same question another way: is there a formula to follow? A set of steps that leads away from PTSD? A recipe?

Bonanno answers this set of questions with nuance, sympathy, data, wisdom, and humility.

In the first place, he argues that — no — we don’t have a one-size-fits-all series of steps. In fact, he explains in thoughtful detail why no one pathway will work for all people in all circumstances.

He specifically rejects this approach. Yes: individual research studies show that character trait X or mental habit Y is “associated with a reduction in ultimate PTSD symptoms.” But the list of X, Y, and Z goes on at remarkable length — a few dozen letters at least. (Our poor alphabet taps out at 26.)

Instead, Bonanno’s research says that resilient people have a flexible collection of traits and perspectives that they use in different ways at different times.

The Return of the “Mindset”

Bonanno summarizes this collection of traits with the phrase “flexibility mindset.” He defines the word “flexibility” quite carefully:

I’ve used the word “resilience” throughout this book to describe a pattern of continued good mental health after potential trauma, or, more precisely, a stable trajectory of healthy functioning across time.

Flexibility is not resilience. Flexibility is the process we use to adapt ourselves to traumatic stress so that we can find our way to resilience. (121)

Bonanno’s flexibility mindset rests on three connected beliefs:

  1. optimism about the future,
  2. confidence in our ability to cope, and
  3. a willingness to think about a threat as a challenge.

No one of these beliefs by itself is enough. And, no one of them is a straightforward first-A-then-B-then-C process. But — combined with nuance and maturity — they result, over time, in better mental health outcomes.

Bonanno, in fact, devotes several chapters to specific stories of people who successfully (or not) use a flexibility mindset to manage the potentially traumatic events in their lives.

A Big Caveat

Whenever I write a book review, I always try to include at least one point where I disagree with the author, or think the book could be better. In this case, that’s a surprisingly easy goal to meet.

Here’s why: I think the book’s TITLE is doubly misleading.

In the first place, Bonanno doesn’t for a minute suggest that we can “end” trauma. He in no way claims that you can follow his simple steps to bring trauma to an early end — either for an individual or a society. Quite the contrary, he argues that the process requires endurance, frequently includes grave setbacks, and might not work for everyone.

In other words, The End of Trauma isn’t about the end of trauma. It’s about rethinking the inevitability of trauma, and reframing strategies to cope with trauma.

In the second place, the book’s subtitle includes an equally misleading phrase: “the new science of resilience.”

Bonanno says over and over that he’s NOT proposing anything radically new. His “flexibility mindset,” after all, suggests that we be optimistic, confident, and inclined to think of threats as challenges. None of those insights — or the word “mindset” — is new.

The novelty in Bonanno’s work lies first in his data, which find that PTSD is a relatively unusual response to potentially traumatic events — not, as we’ve heard so often, an inevitable one.

Bonanno also makes a novel argument when he focuses on broad flexible categories (“optimism, confidence”) rather than specific steps (“first do this, then do that, then try t’other”).

I don’t doubt that The End of Trauma is a more saleable title than Rethinking the Inevitability of Trauma and Proposing a Flexible Path to Work Past It within 2 Years or So. But that title would be more accurate.

TL;DR

Bonanno’s book The End of Trauma isn’t about the end of trauma. It does, however, make a compelling — and ultimately optimistic — argument: we’re mostly resilient; we can bounce back from potentially traumatic events; and we’ve got a challenging-but-flexible framework to guide us as we do so.

“Good Ideas” that Might Waste Time
Andrew Watson

I would LOVE to write the following blog posts:

  • Playing with Puppies Improves Learning!
  • A Glass of Expensive Red Wine Each Day Makes You a Better Teacher
  • Research shows: Reading a Good Book on your Couch is the Best Way to Plan Lessons

In other words: if I can say that “research supports a REALLY popular idea,” I make everyone happy.

Alas, my job is often to say the opposite. As in:

Here’s a really popular idea — an idea that sounds like common sense. And: this popular idea probably is not true.

So:

Today I want to explore another “probably not true” idea — one that’s important, and a little bit tricky to understand. Here goes…

Self-Control, and Self-Control

Psychology researchers spend lots of time thinking about “self-control.” Sadly, this compound word has at least two different meanings.

  • Meaning #1: Let’s say you’re the kind of person who — naturally, without really thinking about it, most of the time — resists the tempting thing and does what you ought to do.

We could say that “self-control” is a consistent personality trait you have.

  • Meaning #2: Perhaps you’re standing in front of a plate of donuts. You say to yourself: “I’m SO HUNGRY, and the chocolate-y deliciousness looks amazing. BUT, I’m trying to get in shape, so I’ll take the fruit cup instead…”

We could say that you’re exhibiting “self-control” as a short-term behavior; you’re in a sort of state of self-control.

To distinguish between these two definitions, scholars write about

  • Personality TRAIT self-control, and
  • Short-term STATE self-control.

Although those two descriptors rhyme, they describe different kinds of self-control. One (“trait”) is a stable, ongoing, often unthinking self-control. The other (“state”) is short-term, contingent, self-aware self-control.

We have at least three reasons to care about trait vs. state self-control.

  1. TRAIT self-control correlates with all sorts of good outcomes. People with high trait self-control live longer, get better grades, and earn more money. They floss more regularly and probably are nicer to dogs. (I think I made up that last one, but it could be true.)
  2. It just seems obvious: TRAIT self-control is (probably) just lots-n-lots of STATE self-control moments all strung together. As Michael Inzlicht and Brent W. Roberts write:
    • Conventional wisdom has it that people high in trait self-control reap all these benefits because they engage in more state self-control.
  3. For this reason, we have an obvious, common sense path to follow. If we train students to develop moment-by-moment STATE self-control, the result will be stable TRAIT self-control. “Grit,” anyone? All that gritty training means that students will get the benefits listed above: grades and jobs and cavity-free teeth and the love of dogs.

But WAIT JUST A MINUTE…

What if that popular, commonsense hypothesis just isn’t true?

How Can That Be?

No, really: how could that commonsense hypothesis not be true? It’s just OBVIOUS that trait self-control results from moment-by-moment states of self-control. I mean: isn’t it?

In the brief (highly readable) article linked above, Inzlicht and Roberts give us plenty of reasons to doubt that conclusion.

  1. People who score high on tests of trait self-control don’t spend much of their day exercising self-control. (See, for instance, this study.)
  2. Over longer periods of time (months and years), training in state self-control doesn’t result in all those predicted positive outcomes. People go back to their initial patterns.
    • That is: I might be able to avoid donuts long enough to fit into my wedding suit — but in the months after my wedding, I’m likely to put those pounds back on again.

Inzlicht and Roberts share our puzzlement at this odd set of outcomes. (Their article literally includes the subheading: “What the hell is going on here?”)

They speculate, for instance, that the wording has got us confused. Just because we use the compound word “self-control” for both Trait X and State X doesn’t mean that the same mental process is happening beneath the label.

They propose this thought experiment: imagine that we had decided to use the word “planfulness” instead of “trait self-control.” Because the words are different, we might not end up so surprised that “planfulness” doesn’t result from “state self-control.”

Whatever the reason for our mistake, we should be prepared to acknowledge that our commonsense belief — “training students in state self-control enhances their trait self-control” — probably isn’t true.

Why The Distinction Matters

If we were confident that state-self-control training leads to trait self-control, then we should (almost certainly) take the time to enact that training. So Many Benefits!

But — at least so far — we shouldn’t be confident that frequent bouts of short-term (state) self-control ultimately train up long-term (trait) self-control.

If we mostly want to create trait self-control…well…we just don’t yet know how to do that. And we shouldn’t fool ourselves that state self-control gets the job done.

(Research, in fact, offers this substantial benefit: it often gives us helpful correctives precisely to stop us from fooling ourselves.)

Now, we should also admit that we want students to practice short-term state self-control because that too is useful. State self-control:

  • gets homework done (when students want to be doing something else), and
  • helps students focus in class (ditto), and
  • keeps students off TikTok (when they really want to be on TikTok).

And so forth. So I don’t think we give up on self-control altogether.

But when we hear someone claim that “grit” can be trained…I think we should ask for good evidence that this claim is true.


Inzlicht, M., & Roberts, B. W. (2024). The fable of state self-control. Current Opinion in Psychology, 101848.

Neuromyths Are Myths, but They Aren’t Neuro
Andrew Watson

Few topics lead to greater despair at Learning and the Brain conferences than “neuromyths.” Teachers cling to these “brain-based” beliefs despite the absence of research supporting them — and despite all the zealous debunking from our experts.

No:

  • Learning Styles are not a thing;
  • Left-brain/right-brain distinctions have no teaching implications;
  • Fidget spinners don’t help students attend or learn;
  • Power poses don’t matter;
  • “Enriched environments” don’t benefit students;
  • Brain gym doesn’t help…

This list could go on at wearying length.

Sadly, I’ve got even worse news. Belief in these myths extends across the globe, and persists despite all our best efforts to stamp them out. One recent article reports that 89.1% of teachers believe in the visual/auditory/kinesthetic learning style myth. (Please picture the ‘gnashing teeth’ emoji here.)


For all these reasons, scholars in this field search for the best ways to revise teachers’ misunderstandings. The article I just mentioned — published last year in Mind, Brain, and Education — offers a detailed narrative looking at different strategies and their (depressingly small) effects.

Rather than review those strategies, I want to back up a big step and consider a crucial problem in these efforts. I think that MANY efforts won’t work because we’re thinking about the problem the wrong way. Our misunderstanding begins with the word itself: “neuromyths.”

It’s All Stuff In The Head

The field of “Mind, Brain, and Education” brings together at least three different disciplines:

  1. Mind = the study of mental function. We call that “psychology.”
  2. Brain = the study of the physical object of the brain (and nervous system). We call that “neuroscience.”
  3. Education = you know, teaching and schools and chalk dust.

The difference between mind and brain — between “psychology” and “neuroscience” — might seem too technical to merit our attention. After all, they’re both ways of studying “stuff going on inside the head.”

But we really must focus on that difference. If we don’t, the resulting confusion and misdirection make our work less effective.

Let’s start with neuroscience: a kind of biology. If you’re looking at…

  • brain regions — like the amygdala and the pre-frontal cortex, or
  • neurons, or
  • myelin, or
  • neurotransmitters…

…you’re talking neuroscience.

While neuroscience studies biological objects, psychology studies mental function. If you’re studying…

  • attention, or
  • motivation, or
  • memory, or
  • anxiety…

…you’re talking psychology.

For most of the 20th century, these fields regarded each other with deep suspicion, and would be surprised that anyone would have trouble telling them apart. But in truth, the distinction can feel tricky to non-experts.

One way to distinguish between them is to ask: “what is the researcher doing to study the question?”

  • If you can tap it with a scalpel or look at it with a microscope or an fMRI gizmo, it’s neuroscience.
  • If you give students a test to see how much they remembered, or a questionnaire to see whether or not they paid attention, that’s psychology.

(I should be clear: I’m answering a complicated question with a few hundred words. I’m trying not to oversimplify, but reasonable people would want me to rephrase some of those sentences. If you want a deeper dive, check out this podcast where I discuss the question with Zach Groshell and Kris Simmers.)

Why the Difference Matters

Here’s a plausible logical chain:

  1. Neuromyths result from a misunderstanding of neuroscience. (The clue is right there in the name!)
  2. Therefore, we can reduce belief in neuromyths by teaching people accurate neuroscience.

But this logical chain starts with a flaw. Most neuromyths aren’t primarily neuroscience claims. Most “neuromyths” are primarily psychology claims.

For example: claims about “learning styles” rest fundamentally on claims about mental behavior. “Visual learners” learn one way; “auditory learners” learn another way.

In this claim, “learning” is a mental behavior — it’s psychology — because I should test it using psychological methods. For instance, I would “align instruction with students’ learning style” and see how much more they learn. (To be clear: students don’t learn any more. The claim is false. It’s a myth.)

When challenged, champions of learning styles theory might try to defend their claim with neuroscience terminology. But the theory itself isn’t about the electrical or chemical behavior of neurons; it’s fundamentally about the mental behavior of learners.

In brief, learning styles theory isn’t a neuromyth. If anything, it’s a psycho-myth.

To keep this blog post relatively short, I won’t go through each “neuromyth” claim to demonstrate that it’s a “psychomyth.” But almost all of them are — including the ones that have the word “brain” in them. (I’m looking at you, “left-brain/right-brain” and “brain gym.”)

Why does this difference matter?

Because: we can’t debunk “neuromyths” by teaching neuroscience. The myths don’t rest on neuroscience.

Although well-meaning, these neuroscience education efforts almost certainly won’t succeed.

Opportunity Costs Matter

At this moment, I can imagine a reasonable-sounding objection: “teachers benefit from learning neuroscience even if that knowledge doesn’t change their neuromyths.”

I have three concerns with that response:

First: I’m skeptical that neuroscience knowledge does benefit teachers. I can’t think of any classroom teaching advice that derives from neuroscience rather than from psychology.

Second: if the goal is to reduce belief in neuromyths, let’s choose a strategy that actually reduces teachers’ belief in neuromyths.

Third: we’re talking an ENORMOUS amount of time here.

  • In one PD intervention — described as “very brief” — teachers spent a “short” fifteen hours learning neuroscience.
  • Another study looked at the effect of neuroscience courses in schools of education. A full term course — over forty hours — “showed no effect on beliefs in neuromyths.”

As a PD presenter in schools, I’m lucky to get three hours with teachers. The idea that we’ll devote fifteen hours — or forty hours — to an approach that probably doesn’t work seems…unwise.

Uplifting Conclusion

I’d love to offer one. Here’s the best I can do:

First: Because neuromyths don’t result from a misunderstanding of neuroscience, we shouldn’t teach neuroscience to debunk them.

Second: In my own work, I don’t actually try to DEBUNK neuromyths; I try to REPLACE them.

That is, I don’t say: “give up on all that foolishness. It’s a myth.”

I do say: “Here’s what we know about working memory. If you start anticipating working memory overload in your lessons, your students will learn much more! Their attention and motivation will also improve!”

My theory is: if teachers are thinking about working memory all the time, then they’re spending less time thinking about psycho-myths. I have no research to support this claim, but it’s the best approach I’ve been able to think of.

If you have a more effective strategy, please let me know!


Rousseau, L. (2024). Dispelling educational neuromyths: A review of in-service teacher professional development interventions. Mind, Brain, and Education, 18(3), 270–287.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105–119.

Squishy Toys: Friend or Foe?
Andrew Watson

The US Department of Education recently released a video on eX/Twitter encouraging the use of “squishy toys” to calm students. In the video, an earnest school psychologist says that such toys help his students focus without distracting others.

The uproar that ensued — “they’re vital!” vs. “they’re dreadful!” — followed the usual eX/Twitter pattern.

Of course, here on this blog we’re interested in narrower questions than “are squishy toys good or bad?” We want to know:

  • do we have a consistent body of research giving us useful guidance on using squishy toys in K-12 classrooms?
  • if they have benefits, what are the boundary conditions? That is: perhaps they help some students but not others?
  • what kind of benefits are we talking about here? At a minimum, I think we want students to focus better and learn more. In other words: if the benefit is “students like ’em,” that’s not a good enough reason to consider squishy toys a research-supported classroom tool.

So, what happens when we start investigating those questions?

The Home Team Looks Worried

When people offer teachers “research-based” advice, we’d like them to cite the research. That’s both common courtesy and common sense.


Sadly, the video itself — and the tweet that contains it — doesn’t provide a citation or link to any research. The school psychologist, as far as I can tell, is making an experience-based claim, not a research-based claim.

Of course, teachers and school leaders rely on experience all the time, so this approach isn’t obviously ridiculous. At the same time, given that we can research a question like this, it’s surprising that the initial advice doesn’t take advantage of the research that is (almost certainly) out there.

So, what happens when we go looking for the research supporting the use of squishy toys?

Let’s imagine the following conversation:

ANDREW: You should take this medicine; research says so.

YOU: Tell me about this research.

ANDREW: It was done by a doctor who bleeds his patients to cure their fevers.

YOU: Wait…he bleeds his patients? That’s medical malpractice…it’s been out-of-date for decades, even centuries!

ANDREW: So what’s your point?

At this point, I suspect, you’d stop taking my medical advice.

When I asked Elicit.com to find research about using squishy toys in K-12 classrooms, the only relevant study it located notes that:

“Kinesthetic learners used the stress balls more consistently and their attention spans increased more when compared to other learners.”

This observation would be interesting if “kinesthetic learners” existed, and if we hadn’t known for decades that learning styles theory isn’t supported by evidence. At a minimum, we shouldn’t trust this study’s conclusion that squishy toys benefited students.

To be clear, the study includes other key limitations:

  • It’s a self-described pilot study, including 29 students.
  • It mostly measures self-reported variables — not an especially persuasive foundation for teaching advice.

It’s not looking good for Team Squishy.

Don’t Stop Yet

I’ve tried two steps to verify the squishy-toy claim:

  1. Check with the person who made the recommendation.
  2. Look on Elicit.com.

Although those attempts didn’t pan out, we’ve still got several other avenues to pursue.

I checked with Scite.ai; it couldn’t even find that kinesthetes-love-squishy-toys study. (It’s REALLY rare for Scite.ai to come up short this way.)

I next went to ConnectedPapers.com to look for other studies surrounding that study; here’s the image it generated. (The study I started with is in the purple circle.)

So, I started poking around at the studies surrounding the first one. Of course, I didn’t look at them all — life is short. But, here’s what I found with some random checks:

  • “Fidget spinners negatively influence young children with ADHD’s attentional functioning, even in the context of an evidence-based classroom intervention.” (This from the Graziano study right in the middle.) 1
  • “Due to a recent surge in popularity, fidget spinners and other self-regulatory occupational therapy toys have yet to be subjected to rigorous scientific research. Thus, their alleged benefits remain scientifically unfounded. Paediatricians should be aware of potential choking hazards with this new fad, and inform parents that peer-reviewed studies do not support the beneficial claims.” 2
  • “Student performance was lower when they were allowed to use fidget spinners than when the fidget spinner was removed. The current study suggests that fidget spinners may cause a deficit in student performance. However, the effect of fidget spinners may actually lessen as the students habituate to the objects.” 3
  • “Using a fidget spinner was associated with increased reports of attentional lapses, diminished judgments of learning, and impaired performance on a memory test for the material covered in the video. The adverse effect on learning was observed regardless of whether the use of fidget spinners was manipulated between‐subjects (Experiment 1) or within‐subjects (Experiment 2), and was observed even when the sample and analysis were limited to participants who came into the study with neutral or positive views on the use of fidget spinners.” 4

I could go on. From this quick investigation, I notice three important patterns.

First:

No one seems to be researching stress balls (a.k.a. “squishy toys”) at all. The ONLY study I found on that topic is the pilot study with the kinesthetic learners.

Second:

Scholars ARE studying fidget spinners … and not finding any good news. Whether you’re teaching K-12 students or college students, neurotypical students or students with ADHD, kinesthetic learners or students who really exist: none of them receives classroom benefits from a fidget spinner.

Third:

To be on the safe side, I looked for meta-analyses — both ones that focus on squishy toys, and ones focusing on fidget spinners.

Unsurprisingly, I didn’t find ANY looking at squishy toys. And — equally unsurprisingly — the fidget-toy meta-analyses sounded consistently discouraging notes. (For example, this one.)

My Current Conclusion

When I do this kind of research deep dive, I usually find conflicting evidence. As a result, I typically write a tentative, partly-yes and partly-no summary: “strategy X seems to work well with these students studying these things, but we don’t have good research outside that small group.” Or something like that.

In this unusual case, the research picture seems unambiguous to me:

  1. We have NO reliable research showing that squishy toys benefit (or harm) students.
  2. We have LOTS of research showing that fidget spinners provide few benefits, and can indeed interfere with learning.

If you have research that contradicts these conclusions, please let me know.


  1. Graziano, P. A., Garcia, A. M., & Landis, T. D. (2020). To fidget or not to fidget, that is the question: A systematic classroom evaluation of fidget spinners among young children with ADHD. Journal of Attention Disorders, 24(1), 163–171.
  2. Schecter, R. A., Shah, J., Fruitman, K., & Milanaik, R. L. (2017). Fidget spinners: Purported benefits, adverse effects and accepted alternatives. Current Opinion in Pediatrics, 29(5), 616–618.
  3. Hulac, D. M., Aspiranti, K., Kriescher, S., Briesch, A. M., & Athanasiou, M. (2021). A multisite study of the effect of fidget spinners on academic performance. Contemporary School Psychology, 25, 582–588.
  4. Soares, J. S., & Storm, B. C. (2020). Putting a negative spin on it: Using a fidget spinner can impair memory for a video lecture. Applied Cognitive Psychology, 34(1), 277–284.

Too Good to be True? The Effect of Tetris on PTSD Symptoms
Andrew Watson

Teachers and school leaders turn to research — in part — because it can help us solve problems. Of course, the bigger the problem, the more urgent the need for a solution.

For this reason, we’re easily drawn to the more dramatic “research-based” claims. When a big problem has an easy solution — an easy solution with RESEARCH behind it — we’re tempted to adopt that new strategy right away.

In this blog post, I want to suggest — first — that we should slow down. And — second — that we should ask some important questions before we make big changes. After all: if the problem is big and urgent, we could make it worse by enacting an ill-considered solution based on flimsy research claims.

First: Slow Down

Carl Sagan famously said: “extraordinary claims require extraordinary evidence.”

Let’s imagine that I tell you: “You can add 20 points to your IQ just by eating raisins every day!” That claim is a biggie. You’ll want LOTS of VERY STRONG evidence to support it. (To be clear: I don’t know of any way to increase IQ at all — much less a raisin-centric diet.)

Dan Willingham made a related point in his book Why Don’t Students Like School? Willingham argues that — over a few hundred years — teachers have tried all sorts of strategies to help students learn. For this reason, it’s difficult to imagine that a simple strategy will have a surprisingly large effect. After all: if a one-step solution works so well, why hasn’t someone tried it — and reported on its benefits?

For this reason, it’s likelier that research will find incremental benefits to a new teaching strategy — and that the new strategy will require more than a simple tweak or two. In other words: we’ve (probably) already figured out the big, easy stuff. Research will help with the small, difficult stuff.

So, let’s consider the claim implied in my title: “playing Tetris can reduce PTSD symptoms.”

That claim pushes up against both Sagan’s and Willingham’s guidance. To me, at least, the idea that Tetris (of all things) helps with trauma — that’s extraordinary all right. And the idea that something as simple as Tetris will have an effect strains credulity. If reducing trauma symptoms is so easy, why haven’t we figured that out yet?

For both of these reasons: slow down.

Second: Questions to Ask

Most researchers take care to speak very precisely about the scope and limitations of their claims. Before we get carried away, we want to be sure we understand exactly what they’re claiming.

Here’s a handy approach. All published research begins with a one-paragraph summary of the study. This summary goes by a rather odd name: the “abstract.” So, read the abstract carefully and focus on the researchers’ precise claim.

I do need to warn you: an abstract is often — well — rather abstract. The researchers are condensing hundreds of hours and thousands of words and data-points into a dense paragraph. The abstract will not delight you. But it will tell you what you need to know.

In this study, the researchers claim that a single session of Tetris reduces the frequency of intrusive memories of traumatic events. Notice several limits here:

  • They don’t make broad claims about PTSD. Instead, they make claims about one specific symptom — intrusive memories.
  • They don’t claim to cure PTSD. Instead, they claim to reduce the frequency of intrusive memories.
  • They don’t claim that the strategy works for everyone. Instead, they claim it worked for nurses in Swedish hospitals who had experienced traumatic events while treating Covid-19 patients.

All this specificity will be very helpful for us. A Tetris salesman might show up at our school brandishing this paper and say “we can cure your students’ PTSD — research says so!” Because we read the abstract carefully, we know this claim just ain’t so.

More Questions: Trust the Process

Researchers typically do not attempt Great Leaps Forward. Instead, they take incremental steps.

So, if one research team shows that “retrieval practice helps college students remember Spanish vocabulary words,” other research teams might investigate these questions: “Does retrieval practice help…

  • college students remember physics vocabulary words?”
  • 5th graders learn English spelling rules?”
  • budding baseball players recall obscure-but-important rules?”

And so forth. They’re unlikely to ask “does retrieval practice help marmosets improve their mindful meditation practice during a full moon?” That’s just too big a leap.

If you see a “too good to be true” claim, it’s often helpful to investigate the research history behind it. If this study right here takes only a small step beyond previous research steps, its modesty and carefulness inspire greater confidence.

Because research studies can be REALLY tedious to read, I’ve found AI to be a big help in cutting through the glutinous jargon and finding important facts. For this project, I often use chatpdf.com — although other AIs work too. In this case, I asked chatpdf this question:

According to this study, how much research has been done on the tetris-helps-reduce-PTSD-symptoms theory before this study?

Turns out, this study has LOTS of relevant work behind it:

  • a theory about how trauma and PTSD affect sensory processing, then
  • a small-scale study to see if Tetris has any effect, then
  • a “proof-of-concept” study in a psych lab, then
  • a few more steps.

In other words, this “too good to be true” idea didn’t come out of nowhere — it has a well-established foundation supporting it. My level of confidence is increasing…

Check the Field

As I’ve written before, I rely on three websites to see what other researchers are thinking about particular research topics. The first of the three — Scite.ai — shows that no one has yet responded to this specific study. Because this research is both very new and very niche-y, this silence isn’t totally surprising.

My second go-to website is connectedpapers.com. A quick visit there shows that, in fact, other researchers are exploring similar lines of inquiry. They don’t all get the same results; in fact, they don’t all get positive results. But this Tetris study isn’t a bizarre outlier. It’s part of a small-but-serious cluster of studies trying this approach.

Finally, Elicit.com helpfully answers direct research questions. When I asked it “Does playing tetris reduce PTSD symptoms over non-trivial periods of time,” it scrounged the research web and thought for about 8 minutes. It then offered this helpful one-sentence summary:

Multiple studies demonstrate that Tetris gameplay combined with memory reactivation significantly decreases PTSD-related intrusive memories, with effects lasting up to six months.

It then listed the studies it had analyzed to arrive at that conclusion, and the various parameters that it found helpful and persuasive.

So, at this point I’m much more confident than I was 20 minutes ago. I know that the researchers are taking slow, careful steps; and that other researchers are taking other careful steps along parallel paths. We’re not alone in this forest.

A Recap, and Two Caveats

The steps so far:

  1. When I see a surprising claim, I SLOW DOWN.
  2. Next, I check the abstract to understand EXACTLY what the researchers are claiming. (Newspaper and blog headlines often misrepresent researchers’ claims quite badly.)
  3. Third, I look for the history behind this study to be sure that the idea has a good foundation.
  4. Finally, I use other web resources to see if this idea is a quirky outlier or part of a meaningful body of research.

I should also highlight two caveats.

  • First: AI is evolving very quickly. As of today’s date (May 12, 2025), these tools do what I’ve described, and are the best ones I know. By May 13, 2025, they could behave very differently, or have been superseded by better tools.
  • Second: I have explored this study as an interesting example of a “too good to be true” claim. That is: when I first saw this research, I assumed the claims would prove to be exaggerated or silly. (I am, at least for now, reassured that this is a serious line of inquiry.)

However, I am NOT suggesting that schools use tetris to treat students’ trauma symptoms. I’m not qualified to make any recommendations about treating trauma.

And, to be clear, the researchers aren’t making that suggestion either. They’re still trying to figure out if this simple idea might be helpful (to nurses). We need LOTS more research.


Kanstrup, M., Singh, L., Leehr, E. J., Göransson, K. E., Pihlgren, S. A., Iyadurai, L., … & Holmes, E. A. (2024). A guided single session intervention to reduce intrusive memories of work-related trauma: A randomised controlled trial with healthcare workers in the COVID-19 pandemic. BMC Medicine, 22(1), 403.

Will Chess Make Me Better at Baseball?
Andrew Watson

Imagine for a moment that I’m coaching college baseball.


I’ve noticed that my players have lots of specific skills, but lack the ability to make strategic decisions from an above-the-playing-field perspective. How can I help them improve?

Suddenly I think: aha! I’ll have my boys of summer learn chess. What better way to train them up in an above-the-field view? What better way to improve strategic decision making with that view in mind?

After all, the mental skills that they learn playing chess will — no doubt — transfer to playing ball.


I don’t know if a coach has actually tried this experiment. But I do know that a similar thought process drives A LOT of inquiry in the world of cognitive science.

If I want my students to learn history — or science, or tuba — I could teach them history, or science, or tuba.

Or, perhaps, I could boost their underlying brain power in some general way that will ultimately transfer to their history learning.

That is: rather than train their historical knowledge and thinking, I could enhance the cognitive resources with which they do the historical thinking. VOILA! More learning.

In my analogy, I could enhance my baseball players’ strategic vision and thinking (by teaching them chess); they can use their chess-enhanced vision as they play baseball.

So many possibilities…

Where to Begin; How to Proceed

If I want to pursue this path, I have LOTS of “cognitive resources” to choose from. Should I train my students’ attention? Or, one of their executive functions?

A research team has recently tried this approach with “cognitive control”: “a set of processes critical for guiding thoughts, feelings, and actions in a flexible, goal-directed manner.”

For their research method to be persuasive, it should meet several criteria. It should

1. Include enough people to make its results credible:

If a study includes 20 or 30 people, the results might be interesting, but won’t be compelling.

2. Test its results in both the short term and the long term:

When I train my baseball players with chess, I want them to preserve their chess-enhanced vision for a long time. If they lose that vision as soon as they stop playing chess, then they haven’t really improved their cognitive function in a meaningful way.

3. Test those results in meaningful areas:

When I train my first baseman in chess, I’m happy if he gets better at chess. But I really want him to get better at baseball.

To be a little bit technical, I’m glad if I see “near transfer”: that is, chess training helps my players get better at speed chess. But I care about “far transfer”: that is, chess training helps my players spot the best place to force an out during a tricky fielding play.

Better and Better

This research team — led by Nikolaus Steinbeis — included some extra steps as well. I’m DELIGHTED to see that this study…

4. Includes a plausible comparison:

Researchers often take a worrisome shortcut. They try out a Cool New Thing — say, a curriculum, or a pedagogical strategy. When students learn more, they say: “look, our Cool New Thing enhanced learning.”

But this claim disguises a logical flaw. The benefits could come from doing SOME New Thing, not THIS New Thing.

To overcome this logical flaw, researchers should have an “active control group.” That is: some participants try THIS new thing, while another group tries a PLAUSIBLE NEW ALTERNATIVE thing.

If one group benefits more than the other, we can say that — yup — the change came from the curriculum itself, and not just from the newness.

5. Psych + Neuro

Wouldn’t it be wonderful if researchers checked BOTH psychological measures (“do the students learn more?”) AND neurobiological measures (“do their brains physically change?”)?

Well, I’m happy to report that Team Steinbeis did ALL OF THESE THINGS.

1: The study included 250+ children, aged 6-13. That’s not a HUGE number, but it’s noticeably larger than most studies.

2: They tested participants at the end of the study, and again a YEAR later. In my experience, very few studies have that kind of time horizon.

3: They checked to see if their cognitive control game improved participants’ cognitive control (“near transfer”). AND, they also checked whether it helped their learning, mental health, decision making, creativity, and resilience in the face of COVID stressors (“far transfer”).

4: This study included an active control group. Half the students played a video game with instructions that focused on improving their cognitive control. The other half played the same video game with instructions that focused on improving their response time. That’s a plausible alternative, no?

5: Researchers scanned relevant brain regions — inferior frontal gyrus, cingulo-opercular and fronto-parietal networks — to see if the training changed structure or function. (Don’t worry: I have only a dim understanding of what those words mean, and I’ve been in this field since 2008.)

Results, PLEASE

I’ve gone into more detail than usual because I want you to see why I find this study helpful and persuasive. As far as I can tell, this team has done everything right.

If training cognitive control helps students, we should see meaningful differences in far transfer effects — and in brain structure or function — after a year. This study design will let us see that.

So, their results? Close to nothing.

  • Cognitive control training didn’t help students learn more, or make better decisions. (Far transfer)
  • It didn’t make them more creative. (Also far transfer)
  • It didn’t change relevant brain structures, or the function of those structures.

Now, the training did help students do better at tests of cognitive control — even after a year. But we don’t really care about cognitive control on its own — that’s simply near transfer. We care about cognitive control because it usually helps with learning, and creativity, and so forth.

This research tells us: untrained cognitive control might predict academic success. But increasing cognitive control with computer game training does not result in greater academic success — or greater anything else.

In the language of my starting anecdote: my players got better at chess, but they didn’t get better at baseball. And — as the coach — I care about baseball.

The Big Picture

The idea that we can “train our students’ brains” has a lot of intuitive appeal. Perhaps because the claim includes the word “brain,” it gets lots of hopeful attention. (Because it includes the potential for enormous profits, it has lots of economic appeal as well.)

I wanted to focus on this study because it does such a careful job of rebutting that claim: at least as long as “cognitive control” is the particular element we’re trying to train.

In the future, if someone brings a brain training program to your attention, consider this research example. If that someone’s research method doesn’t include all of the steps above, you might hesitate before you invest scarce time and money in this approach.

Better, instead, to focus on teaching history, science, tuba — and baseball.


Ganesan, K., Thompson, A., Smid, C. R., Cañigueral, R., Li, Y., Revill, G., … & Steinbeis, N. (2024). Cognitive control training with domain-general response inhibition does not change children’s brains or behavior. Nature Neuroscience, 27(7), 1364-1375.

Nutrition and Learning: Do Saturated Fats Harm Cognition?
Andrew Watson

This interdisciplinary field draws on expertise from many different branches of research.

Psychologists warn us about working memory overload, and suggest strategies for enhancing students’ motivation.

Neuroscientists describe the changes in neural networks that — as we understand things so far — convert new learning into long-term memory.

Somnologists (!) can tell us about the benefits of REM sleep, while exercise physiologists help us understand the relationship between health and learning. (I’m looking at you, brain-derived neurotrophic factor.)

And so forth.

But, you might ask: what about nutrition? Can our students eat their way to more learning? Surely this field has wisdom and guidance to offer?

I hear different versions of this question frequently — often focusing on blueberries — so I was happy to see recent research exploring important questions about the relationship between diet and cognition.

Here’s the story.

Mazes, Treasures, and Meals

A research team in Australia invited 120 students to make their way through a maze in a VR world. In this VR simulation, a lake is surrounded by two rings of distinctive landmarks: a lighthouse, a windmill, the Statue of Liberty, and so forth.

Study participants had to find their way across a hexagonal grid of planks to the location of a treasure chest. If they didn’t find the chest after 4 minutes, they were transported to its location — and then given a fresh chance to find the treasure from another starting place.

Here’s the key: participants could use the distinctive landmarks to triangulate the chest’s position. That is, they could say: “if the lighthouse is at my 2 o’clock, and the windmill at my 6 o’clock, and Lady Liberty at my 10 o’clock…I’m in the right place!” Using those bearings, they could navigate back to the treasure chest fairly easily.
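
(For the technically curious, here’s a minimal sketch of the geometry participants were exploiting. It’s my own illustration, not anything from the study; the landmark coordinates and function names are invented. The point is simply that two known landmarks, plus the bearings at which you see them, pin down exactly one location.)

```python
import numpy as np

# Toy illustration (not from the study) of triangulating a position from
# two known landmarks and the bearings at which you see them.

def unit_vector(bearing_deg):
    """Direction of a bearing measured clockwise from north (the +y axis)."""
    rad = np.deg2rad(bearing_deg)
    return np.array([np.sin(rad), np.cos(rad)])

def locate(landmark_a, bearing_a, landmark_b, bearing_b):
    """Find the one point whose sight lines to both landmarks match the bearings."""
    va, vb = unit_vector(bearing_a), unit_vector(bearing_b)
    # observer = landmark_a - ta*va = landmark_b - tb*vb; solve for ta, tb.
    A = np.column_stack([va, -vb])
    ta, _ = np.linalg.solve(A, np.array(landmark_a, float) - np.array(landmark_b, float))
    return np.array(landmark_a, float) - ta * va

# Invented map coordinates, purely for illustration:
lighthouse = (0.0, 100.0)   # due north of the lake's center
windmill   = (100.0, 0.0)   # due east of the lake's center

# Someone standing at the lake's center sees the lighthouse at bearing 0
# ("12 o'clock") and the windmill at bearing 90 ("3 o'clock").
print(locate(lighthouse, 0.0, windmill, 90.0))   # -> roughly [0. 0.]
```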

Now comes the nutrition part.

The researchers also asked students about their intake of saturated fats and refined carbs. For the most part, the study doesn’t list the specific foods it asked about (although it does mention questions about adding sugar to food). But the gist is: how much of their diet comes from the bad stuff — like saturated fats and refined carbs?

So here’s the big question: what’s the relationship between unhealthy foods (fats, carbs) and spatial navigation? Are the relatively healthy eaters better at finding their way around this maze?

Answers, and Some Doubts

In a word: “yes.”

The participants who had low scores on the fat/carb inventory learned the maze somewhat better than those who had higher scores.

Said another way: if you know a participant’s fat/carb intake, you could roughly predict how well they would do on the maze.
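
(For readers who like to see what “roughly predict” means in statistical terms, here’s a tiny sketch. The numbers below are invented purely for illustration, not taken from the study; the point is only that a negative correlation lets you fit a line and use one score to make a rough guess at the other.)

```python
import numpy as np

# Invented numbers, purely for illustration -- NOT the study's data.
# higher diet_score = more saturated fat / refined carbs (self-reported)
# higher maze_score = better navigation in the VR maze
diet_score = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
maze_score = np.array([8.5, 8.0, 7.0, 6.5, 5.0, 4.5])

r = np.corrcoef(diet_score, maze_score)[0, 1]              # correlation coefficient
slope, intercept = np.polyfit(diet_score, maze_score, 1)   # best-fit line

print(f"correlation r = {r:.2f}")                          # strongly negative in this toy case
print(f"rough prediction for diet_score 3.5: {slope * 3.5 + intercept:.1f}")
```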

Here’s the boldest formulation of this finding: “an unhealthy diet interferes with spatial learning.”

Of course, any such blunt sentence requires MANY caveats.

First: for obvious reasons, this study didn’t actually monitor participants’ food intake for a month. The nutrition data are entirely self-reported…and we usually take self-reported data with a dash of skepticism.

Second: this study has an ENORMOUS dropout rate. 120 people signed up for the study, and SIXTY-FIVE dropped out. (Apparently the VR world created a lot of nausea.) A reasonable person might decide not to trust the study for this reason alone.

Of course, the fact that it’s published means that other reasonable people decided not to worry so much about the dropout rate. Apparently nausea is a common problem with VR research.

Third: I’m legally obligated to say: “correlation isn’t causation.” It is possible that people who are bad at spatial learning therefore prefer fats and carbs. Or that a third factor causes both deficits in spatial learning and a preference for unhealthy food. In this case, even a skeptic like me is willing to infer the potential for causality…

As is so often the case, this one study is helpfully interesting. It is not, on its own, entirely persuasive.

Another Perspective

This study suggests a small correlation (and the potential for a causal relationship) between a fatty/carby diet and spatial learning.

In other words: YES, diet and nutrition do matter for learning.

I’d like to propose an alternative framework which — in my experience — simplifies lots of complex questions. Here goes…

We have A LOT of research showing that the brain IS A PART OF the body.

The brain is, in fact, PHYSICALLY ATTACHED to the body.

For that reason, everything that’s good for the body is good for the brain. Because (say it with me), the brain is a part of the body.

Sleep is good for the body; it’s therefore good for the brain.

Exercise is good for the body; it’s therefore good for the brain.

Good nutrition is good for the body…you get the idea.

I don’t honestly think we need a special category of “brain-focused nutrition advice.” All the basic nutrition guidance we already have tells us practically everything we need to know about “brain nutrition.” *

For example: as I wrote above, people get VERY excited about the brain health benefits of blueberries. I myself haven’t been able to find any strong research on this topic. (Here’s one pilot study with 15 participants.)

TL;DR

A recent study from Australia suggests that a high carb, high fat diet might interfere with spatial learning.

While this study has some quirks and flaws, it highlights a simple principle:

“If it’s good for the body, it’s almost certainly good for the brain. If it’s bad for the body, it’s almost certainly bad for the brain. Plan your meals accordingly.”


* If you have gotten medical advice about nutrition — even brain nutrition — you should follow it. I’m a teacher and a blogger; not a doctor, and not a nutritionist.


Tran, D. M., Double, K. S., Johnston, I. N., Westbrook, R. F., & Harris, I. M. (2025). Consumption of a diet high in fat and sugar is associated with worse spatial navigation ability in a virtual environment. International Journal of Obesity, 1-9.

(How) Do Emotions Affect Learning?
Andrew Watson

When a conference speaker announces that “a student’s emotions matter for their learning,” few teachers rock back in surprise. OF COURSE emotions matter for learning. Who would have thought otherwise?

At the same time, we’re probably curious to know how emotions influence learning.

In fact, once we ask that question, some sense of surprise might start to creep in. After all, the word “learning” falls squarely in the realm of cognition. And the word “emotion” sounds much more like … well … emotion.

Aren’t cognition and emotion two different sets of mental processes? If they are, how does one affect the other?

Here’s where research can be really helpful, if we read it carefully.

One of the best known (and most misunderstood) insights in this field comes from LatB regular Mary Helen Immordino-Yang:

“It is literally neurobiologically impossible to think deeply about things that you don’t care about.”

Why? Because — in the words of a recent study led by Benjamin Hawthorne — “the brain mechanisms that give rise to conscious emotions are not fundamentally different from those that give rise to cognition.”

In other words: the parts of your brain that do the emotional work also do the thinking work. Yes, LOTS of the same neural networks operate in both processes. These two seemingly “different sets of mental processes” share very substantial plots of neural real estate. (I will, by the way, come back to the misunderstanding of Dr. Immordino-Yang’s quotation at the end of this post.)

So, STEP ONE in this multi-step argument: “students’ emotions influence their learning because — at the neurobiological level — ‘emotion’ and ‘cognition’ overlap.”

Step Two

With this neuroscience understanding of the cognition/emotion relationship established, let’s turn to psychology. What mental processes might explain this relationship?

One potential answer: WORKING MEMORY (often abbreviated as WM). If emotions — positive or negative — have an effect on WM, then we can easily understand how those emotions affect learning.

This hypothesis is at the heart of that recent study, led by Dr. Hawthorne, that I quoted a few paragraphs ago.

Hawthorne’s team explored this question through the concept of “cognitive load theory.” The full theory is too complicated to review here, but the headlines are straightforward:

  • Students who can manage a WM task are facing an appropriate cognitive load.
  • When that cognitive load becomes excessive, then they experience WM overload.

Team Hawthorne hypothesized that:

  • negative emotions (or what the researchers call ‘painful’ emotions) might increase cognitive load, and thus result in WM overload. Result: less learning.
  • positive emotions might reduce cognitive load, and thus make WM overload less likely. Result: same (or more) learning.

Because they have this cognitive load theory framework (often abbreviated as CLT), they can rely on all the tools and surveys that CLT uses.

What Students Did; What Researchers Learned

To pursue this line of inquiry, Hawthorne and his team followed a straightforward plan.

Roughly 350 students — 11- to 15-year-olds in Australian schools — went through this process during their math class. In brief, they…

… watched videos teaching increasingly complicated algebra processes (that is: their cognitive load increased over time),

… rated their own experience of cognitive load for each problem,

… rated their positive and negative emotions, and

… took a final test, to see how well they learned the algebra processes.

When Team Hawthorne put all these data into the appropriate graphs and charts, they arrived at an interesting pair of results.

First:

Yes, negative emotions add to the students’ perceived cognitive load. Result: less learning.

Second:

But: positive emotions had no effect on their perceived cognitive load — although happier students did learn more.

And so, third:

Hawthorne’s team speculates that positive emotions might help cognition via another mental process … such as motivation.

What’s a Teacher to Do?

Given these results, we might reasonably ask: “so what? What can we do with these findings?”

Good questions. I have tentative answers.

First: we now have good reasons from two distinct scientific disciplines — neuroscience and psychology — to argue that emotion and cognition aren’t different categories: they overlap a lot.

Second: we know that students experiencing more negative emotion ALSO experience more cognitive load. Potential result: less learning.

Third: because of ambiguity in the study’s language, we can’t say if the negative emotions led to the higher cognitive load, or if the higher load led to negative emotions. (Because the study measured students’ emotions only once, we can’t know if the answer is “both.”)

For that reason, I think we need a rough-n-ready, flexible set of classroom responses.

  • If I see my students are upset, I can predict their WM might be reduced; I’ll need to simplify instruction for a while.
  • If I see my students just can’t get their WM in gear right now, I might wonder if there’s some emotional complexity underlying the problem. So: I should check out that hunch.

Neither of these suggestions is revolutionary, but they do let me think through the two-way relationship between negative emotion and WM.

A Two-Way Street

A few hundred words ago, I wrote that Dr. Immordino-Yang’s well-known quotation is widely misunderstood. When she says:

“It is literally neurobiologically impossible to think deeply about things that you don’t care about.”

Many people hear:

“And therefore we have to start by getting students to CARE about things, because otherwise they won’t learn about them.

“In other words: students’ EMOTIONS precede their COGNITION.”

But that conclusion a) violates the research we’ve been looking at, and b) doesn’t follow logically from the original statement. Let’s try another example:

“It is literally biomechanically impossible to walk (normally) without using your ankle joints.”

We should not, I think, extend this statement to say:

“We have to start by getting walkers to focus on their ANKLES, because otherwise they can’t walk.”

The sentence really means: “ankles are an essential sub-component of the walking process. They are one of many body parts that we should be aware of as we’re teaching walkers.”

So too, I think, Dr. Immordino-Yang’s statement means: “emotion and cognition always work together. Rather than prioritize one over the other, we should be aware of their intricate interactions as we make moment-by-moment teaching decisions.”

In other words:

Yes, of course, my emotional state influences my ability to think effectively. If I’m stressed and unhappy, I might well struggle to figure out whatever academic problem faces me.

AND

Yes, of course, my ability to think effectively influences my emotional state. If I accomplish a difficult thinking task — like, say, learning a complex algebra process — I might well feel less bad and more good.

The title of this blog post asks: “(how) do emotions affect learning?”

I hope I’ve persuaded you that the answer is: “don’t rely on people who offer a simple answer to that question. Emotion and cognition overlap substantially, and we must keep that overlap in mind as we think our way through leading schools and classrooms.”

And also: “at a minimum, we have good reason to think that negative/painful emotions complicate working memory. No wonder they’re bad for learning!”


Hawthorne, B. S., Slemp, G. R., Vella-Brodrick, D. A., & Hattie, J. (2025). The relationship between positive and painful emotions and cognitive load during an algebra learning task. Learning and Individual Differences, 117, 102597.