Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

The Neuroscience Trap: When Skeptics Stop Being Skeptical
Andrew Watson

I collaborated on this post with Dr. Cindy Nebel. Her bio appears below.


Everyone in this field agrees: we should begin our work with SKEPTICISM. When someone comes to us with a shiny new “research-informed” teaching suggestion, we should be grateful…and be cautious as well. After all:

The “someone” who gave us the “research-based” suggestion might…

  1. Misunderstand the research; it’s easy to do!
  2. Have found an outlier study; researchers rarely agree on complex subjects — like teaching and learning.
  3. Work in a context that differs from ours, and so offer a suggestion that helps their students but not other students.
  4. Misrepresent the research. Yup, that happens.

And so forth.

For all these reasons, we have to kick the tires when we’re told to change what we do because of research.

Easy Cases, Hard Cases

At times, this advice feels easy to follow. Ask any speaker at a Learning and the Brain conference, and they will assure you that:

  • Learning Styles are not a thing;
  • Left Brain/Right Brain distinctions don’t really matter;
  • The Learning Pyramid (“students remember 5% of what they hear…”) is self-contradicting, and based on no research whatsoever;
  • The list goes on…

Friends describe these ideas as “zombie beliefs”: no matter how many times we “kill them off” with quality research, they rise from the dead.

As we attack these zombie beliefs with our evidence stakes, we always chant “remember: you have to be SKEPTICAL!”

These cases are easy.

Alas, we often face hard cases. In my experience, those hard cases often combine two key elements:

  • First: we already believe — and/or WANT to believe — the “research-based” claim; and
  • Second: the research is neuroscience.

If a new neuro-study confirms a belief I already hold, my ability to be skeptical deserts me completely. I accept the research as obviously true — and obviously beyond criticism. I find myself tweeting: “Only a fool could disagree with this claim, which is now settled…”

Of course, if I fail to be skeptical in these hard cases, I’m abandoning scientific principles just as surely as people who purport to teach left-brained thinkers.

One example: in my experience, people REALLY want to believe that handwritten notes foster learning more surely than notes taken on a laptop. (I have detailed the flaws in this claim many times.)

A study published in 2023 is regularly used to support this “handwriting > laptop” claim. I first read about it in an article entitled “Handwriting promotes memory and learning.”

Notice that this study fits the pattern above:

  • First: people already believe — and really WANT to believe — the claim.
  • Second: it’s neuroscience.

LOTS of thoughtful people held this study up to champion handwritten notes.

Alas, because this study is a hard case, the skepticism practices that we typically advocate fell to the side. As it turns out, the flaws in this study are easy to spot.

  • It’s based on a very small sample;
  • The laptop note-takers had to type in a really, really unnatural way;
  • The study didn’t measure how much the students remembered and learned.

No, I’m not making that last one up. People used a study to make claims about memory and learning even though the researchers DIDN’T MEASURE memory and learning.

In other words: in this hard case, even the most basic kinds of skepticism failed us — and by “us” I mean “people who spend lots of time encouraging folks to be skeptical.”

Today’s Hard Case

The most recent example of this pattern irrupted on eX/Twitter last week. An EEG study showed that students who used ChatGPT

a) remembered less of what they “wrote,” and

b) experienced an enduring reduction of important kinds of brain-wave activity.

Here’s a sentence from the abstract that captures that second point precisely:

“The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.”

Once again, this study fits into the hard-case pattern:

  • Confirms a prior belief (for LOTS of people), and
  • Neuroscience

The unsurprisingly surprising result: this study has been enthusiastically championed as the final word on the harms of using AI in education. And some of that enthusiastic championing comes from my colleagues on Team Skepticism.

I want to propose a few very basic reasons to hesitate before embracing this “AI harms brains” conclusion:

  • First: the PDF of this study clocks in at 206 pages. To evaluate a study of that length with a critical eye would take hours and hours and hours. Unless I have, in fact, spent all those hours critically evaluating a study, I should not rush to embrace its conclusions.
  • Second: I’m going to be honest here. Even if I spent hours and hours, I’m simply not qualified to evaluate this neuroscience study. Not many people are. Neuroscience is such an intricately technical field that very few folks have the savvy to double- and triple-check its claims.

Just as you should not fly in a jet merely because I’ve assured you it’s airworthy, you should not trust a neuro study merely because I’ve vetted it. I can’t give a meaningful seal of approval — relatively few people can.

Knowing my own limitations here, I reached out to an actual neuroscientist: Dr. Cindy Nebel*. Here are her thoughts:


Here are my takeaways from this study:

  1. Doing two different tasks requires different brain areas.

In this study, participants were explicitly told to write either with AI or on their own. Unsurprisingly, you use a different part of your brain when you are generating your own ideas than when you are looking up content and possibly copying and pasting it. Because participants were explicitly encouraged to use AI to write their essay, it’s likely they did — in fact — just copy and paste much of it.

  2. When you think back on what you did using different brain areas, you use those same differentiated brain areas again.

When we remember an event from our lives, we actually reactivate the neural network associated with that event. So, let’s say I’m eating an apple while reading a blog post. My neural areas associated with taste, vision, and language will all be used. When I recall this event later, those same areas will be activated. In this study, the people who didn’t use their brains much when they were copy/pasting still didn’t use their brains much when they recalled their copy/pasting. This finding is entirely unsurprising and says nothing about getting “dumber”.

  3. It’s harder to quote someone else than it is to quote yourself.

The only learning and memory effect in this study showed that individuals who copied and pasted had a harder time quoting their essays immediately after writing them than those who generated the ideas themselves. Shocking, right?

My neuroscience-informed conclusion from this study is that not using your brain results in less neural activation. [*insert sarcastic jazz hands here*.]

To be clear: I did not spend the requisite hours and hours reading the 206-page article. I did scan all 206 pages, read the methods thoroughly, and took a close look at the memory results in particular. I skipped the bulk of the paper, which is actually a linguistic analysis of the type of language used in prompts and essays written with and without the support of AI. I am very much not an expert in that area, and, very importantly, that analysis seemed to make up the most important findings.

Back to you, Andrew.


Customary Caveats

This post might be misunderstood to say: “this study is WRONG; teachers SHOULD use AI with their students.”

I’m not making either of those claims. Instead, I am saying:

  1. Like all studies, this study should be evaluated critically and skeptically before we embrace it. Because it’s so complicated, few people have the chops to confirm its findings. (And not many have time to review 206 pages.)
  2. As for the use of AI in schools, I think the topic resists blanket statements. Probably the best shorthand — as is so often the case — goes back to Dan Willingham’s famous sentence:

“Memory is the residue of thought.”

If we want students to learn something (“memory”), they have to THINK about it. And if they’re using ChatGPT, they’re thinking about — say — high-quality prompts. They’re probably NOT thinking about the content of the essay, or effective essay-writing strategies.

Because we want students to think, we should – in almost all cases – encourage them to write without AI.

(To be clear: I think we could easily create assignments that cause students to think with AI. For instance: they could ask Claude to write a bad essay about The Great Gatsby: one that’s ill-organized, ungrammatical, and interpretively askew. They could then correct that essay. VOILA: an AI assignment that results in thinking.)

Ironic Coda

I wrote this blog post based on my own thinking and understanding. I then shared my thoughts with Dr. Nebel, who offered her substantial commentary.

Next — as is my recent habit — I asked Claude to proofread this post, and to make any suggestions for clarity and logical flow. Based on its suggestions, I made a few changes.

In other words: this post has an inherent bias in it.

If I trust Claude — an AI assistant — I’m probably biased against research showing that AI assistants create enduring mental decrements. Although I doubt that this bias has misled me too far, I do think you should know that it exists.


* Dr. Nebel notes: “To the neuroscientists in the audience, Andrew is using that term generously. My degree in Brain, Behavior, and Cognition yes, involved neuroscience courses including a human brain dissection and yes, involved courses and research using fMRI. But I am not a neuroscientist in the strictest sense. I do, however, understand neuroscience better than the average bear.”

Dr. Cynthia Nebel is the Director of Learning Services and Associate Professor of Psychiatry and Behavioral Neuroscience at St. Louis University School of Medicine. She holds a Ph.D. in Brain, Behavior, and Cognition and has held faculty positions at Lindenwood, Washburn, and Vanderbilt Universities. Dr. Nebel has published two influential books on the science of learning and is a leading collaborator with The Learning Scientists, an organization focused on bridging the gap between learning research and educational practice. She has presented on the science of learning nationally and internationally and is dedicated to bridging research and practice to improve educational and organizational outcomes.


Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: A high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.

I Take It All Back: Research on “Expressive Writing” Has Evolved
Andrew Watson

I spend my days giving research-informed advice to teachers and school leaders. You could summarize my pitch this way:

According to my current best understanding, we’ve got some good research suggesting that X strategy is likely to help most students learn most things.

Sometimes I say:

We’ve got LOTS of research suggesting…

Or occasionally:

We’ve got a few encouraging studies that make me think…

In every case, these research-informed suggestions arise from the CURRENT state of the research.

Of course, researchers haven’t stopped working…they’ve only just begun. Hundreds (thousands?) of graduate students and professors are out there right now. They’re exploring:

  • working-memory limitations, and
  • the benefits of mindfulness, and
  • the relationship between handwriting fluency and reading comprehension

And SO MANY other topics.

For this reason, the advice that I gave yesterday, based on “the current state of the research,” could be contradicted by today’s research. My job isn’t just to find “what the research has said,” but to keep track of the ongoing conversation that current researchers have with prior conclusions.

In other words: “research-informed” teaching advice evolves over time. Occasionally, I have to retract advice I’ve given in the past.

Prior Conclusions: An Example

I’ve spent the last week talking with teachers from across the country about — among other topics — working-memory limitations, and ways to work around them.

[Image: a sign reading “Your GPS is wrong; turn around.”]

One sensible approach: because stress hampers working-memory function, we can support working memory by reducing stress. Happily, we’ve got a few research-supported strategies for doing so.

I first heard about one strategy at a Learning and the Brain conference many years ago. Researcher Sian Beilock — currently the President of Dartmouth College! — teamed up with Gerardo Ramirez to test a surprising hypothesis. The title of their study says it all: “Writing About Testing Worries Boosts Exam Performance in the Classroom.”

Although the idea sounds counter-intuitive — in fact, both researchers were skeptical of the hypothesis — they found that letting students VENT their stress in writing dissipated the stress. More specifically, students given time to write about their anxiety about an upcoming exam did better on that exam than students who wrote about something else.

When I heard Dr. Beilock’s talk, the suggestion struck me as SO PRACTICAL. It doesn’t take much time. It helps anxious students without harming other students. It costs $0. What’s not to love?

Happily, we don’t have to rely on one study. More recently, DiMenichi and colleagues tried a similar strategy — and got similar results.

For that reason, I’ve been giving this advice for several years now: “we’ve got some good studies suggesting that letting students VENT their stress actually helps them dissipate their stress. The result: they do better on various academic measures.”

Today’s News: It’s Time for a Retraction

Recently, as I was preparing for an upcoming presentation, I looked again at my slide about the Beilock and DiMenichi studies. I realized that I hadn’t kicked those tires in a while, and I thought it would be helpful to go back and see what more researchers have found in this field.

Who knows: perhaps someone had recently done a helpful study that offers a creative new approach to this counter-intuitive suggestion.

I have several different strategies to evaluate research-based claims: you can find a quick guide here.

Recently I’ve been using Claude to jumpstart my searches. I asked it to do a research deep dive on this question: here’s the first paragraph of its answer:

The evidence for helping students manage stress through “venting” shows a complex picture with both promising findings and significant contradictory results.

While foundational studies by Ramirez & Beilock (2011) and DiMenichi et al. (2018) demonstrated large effects for brief expressive writing interventions, subsequent research reveals substantial methodological concerns, replication failures, and evidence that such interventions can be harmful for certain students.

Meta-analyses consistently show only small effect sizes, and the majority of studies with healthy college students find no significant benefits.

Now, I didn’t stop here. I looked over this review — and then explored several of those studies on my own. (As always, I relied a lot on ConnectedPapers.com and Elicit.com.) Especially because I’m new to Claude, I didn’t simply accept its guidance without question.

But — sure enough: as I see the CURRENT research picture, we don’t have a consistent and persuasive pool of studies suggesting that students benefit from venting their stress.

Pointing Fingers

If we got a wrong answer, presumably someone is at fault. So: who is it? At whom should I point when I cry j’accuse?

  • Should we blame Ramirez and Beilock, for leading us astray with that initial study?
  • Should we blame DiMenichi for confirming their error?
  • Should we blame ME for spreading erroneous information?

I think the correct answer is: D) none of the above.

This turn of events is an entirely predictable possibility in the world of research-informed teaching advice.

  • Ramirez and Beilock didn’t do anything wrong. They had a hypothesis. They tested it. They reported their results. That’s what they’re supposed to do.
  • DiMenichi didn’t do anything wrong. She followed up that initial study. She reported her results. That’s what she’s supposed to do.
  • I (probably) didn’t do anything wrong. I saw interesting and well-designed studies. The results aligned with each other, and with my teaching experience. I told teachers about these studies. That’s what I’m supposed to do.
  • (I might have made this mistake: I might not have emphasized enough that the advice was based on a small number of studies — and therefore tentative.)

Those of us who base teaching advice on research should always acknowledge that some of our conclusions will be contradicted by future research. Occasionally, that kind of reversal just happens.

In fact: we can have confidence in research-based suggestions BECAUSE the research cycle will probably reveal false leads sooner or later. It’s not a perfect system. But as long as we stay realistic about its limitations, science can be self-correcting.

What This Means for You

First: when you get “research-based” teaching advice, ask how much research supports the claim. You don’t have to reject ideas with only a little research behind them — especially if the conclusions match your experience, or your school’s philosophy. But: be sure to check back in every now and then to see what subsequent research has found.

Second: although we can be sad that “expressive writing” probably isn’t a good strategy for helping students manage stress, we do have other strategies that can help. I’ll write about “cognitive reframing” in a future post…


Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211-213.

Improving Schools with Research: Understanding the Gaps
Andrew Watson

Those of us who study the intersection of research and teaching can get carried away all too easily.

After all: psychology research can provide enormous benefits to teachers and school leaders. More important, it benefits the students and families who depend on us. Because this research can help us understand — say — memory and attention, it can improve the way we teach and think about almost every school-keeping topic.

No wonder we get so excited. “Here’s the research!” we cry. “Now, go use it wisely!”

Of course, the gap between “having the research” and “using it wisely” is…ENORMOUS. If we understand research correctly, it can indeed help us know what to do. But…

… what if we’re relying on flawed research? or

… what if we’re using research that doesn’t apply to the biggest problem in our school? or

… what if we incorporate the research in our thinking, but don’t have a clear system to evaluate its effects, or…

This list goes on.

Those of us who want psychology and neuroscience research to improve education need a better system for “using research wisely” in schools.

Today’s News: Towards a Better System

I’ve been thinking about the broader difficulties of “using research wisely in education” because of a recent book: Evidence-Informed Wisdom: Making Better Decisions in Education. The authors — Bradley Busch, Edward Watson, and Matthew Shaw — see exactly this problem, and have lots of guidance to offer.*

This book includes several quotations and charts and anecdotes worth savoring; I want to focus on a Venn diagram that has really got me thinking. Here’s an incomplete version:

As you can see, Busch, Watson and Shaw argue that we can find the “sweet spot” when we undertake three complex processes simultaneously.

  • In the first place, we need to identify the problem correctly.
  • Second, we should have a plan that relies on evidence and research.
  • And third, we need to monitor that plan.

Of course, each circle in this Venn diagram requires lengthy exploration. I myself have written a book about evaluating the “evidence” which we might use in the green circle. And Evidence-Informed Wisdom takes several pages to explore the “review cycle” in the blue circle: an idea drawn from Bruce Robertson’s work.

But the core insight here is: while each of these processes merits its own book, we get the most powerful effect from doing all three at the same time.

Mind the Gaps

Venn diagrams help us think because they label the places that circles overlap. Those labels typically emphasize the commonalities that the circles share.

For instance, a humorous Venn diagram considers “bank robbers” and “night club DJs.” Their commonality: “people who tell you to put your hands in the air.”

Busch, Watson, and Shaw have a different approach to that overlap. Check out this more-complete version (from page 80 of their book):

Notice that in this Venn diagram, the labels do NOT highlight the commonality between the two circles. Instead, they name the specific problems created by the absence of the third circle.

So: if I successfully identify the problem (the purple circle) and use evidence to plan a solution (the green circle), I’m wisely accomplishing two key processes. For that reason, I might make meaningful progress for a while.

However — and this is a BIG however — I’ve skipped out on Robertson’s review cycle (the blue circle). Without that additional process, my initiative might…

  • …lose focus. For example: my colleagues and I might all think we’re using retrieval practice…but we’re not all using it consistently and well.
  • …lose momentum. We could start out strong, but easily get distracted by the next shiny new thing. (I’m looking at you, AI.)

Or, if our school runs a scrupulous process to identify our most pressing problem and develop a clear “try-review-reflect” cycle, our good work might well help our students.

HOWEVER, because we didn’t use research to inform our decision making, we might be trying, reviewing, and reflecting upon a foolish plan.

In other words: this Venn diagram reminds us of three essential processes to keep in mind. And, it highlights the symptoms we’ll see and feel if we skimp on one of those processes.

As a bonus, it also offers a fun new way to think about creating Venn diagrams.

TL;DR

  1. We absolutely should use research to inform our teaching and our school-keeping.
  2. The process of “using research” isn’t straightforward; it requires at least three complex processes, kept in a judicious and nuanced balance.
  3. Busch, Watson, and Shaw have created a really helpful way to think simultaneously about all three.

When you and your school decide to follow the evidence-informed path, this diagram will guide your exploration.


* Two important notes:

  1. As far as I know, I’m not related to Edward Watson.
  2. I have a policy that I don’t review books written by friends. For that reason, I’m not reviewing this book. I am, instead, writing about a topic that it explores.

Three Words Aren’t Enough: Rethinking Motivation
Andrew Watson

Let’s imagine that I — a 10th grade classroom teacher — want to motivate my students. I discover this list of motivational suggestions:

  1. listen to students
  2. give them time for independent work
  3. provide time for students to speak
  4. acknowledge students’ improvement
  5. encourage their effort
  6. offer them hints when they’re stuck
  7. respond to their comments and questions
  8. acknowledge students’ perspectives

Even better, I’m told that this list has RESEARCH behind it.

Before we dive in and adopt these suggestions, let’s pause for just a moment. Please look over that list and ask yourself: “if I were a student, would I feel more academically motivated if my teacher did any or all of those things?”

[I’m pausing here so you can review the list.]

Welcome back. If you like these suggestions, I’ve got some good news for you:

For a few decades now, Edward Deci and Richard Ryan have developed self-determination theory: a theory of academic motivation that gets LOTS of love. The headlines sound like this:

“If we want to improve our students’ academic motivation, we should be sure that they feel

  • AUTONOMY,
  • RELATEDNESS, and
  • COMPETENCE

in the classroom.”

Those three feelings — again: autonomy, relatedness, and competence — foster all sorts of good human outcomes, including academic motivation.

I have even better news. Unlike almost all psychology theories, self-determination theory uses these words in their everyday meaning. I don’t need to offer several paragraphs of translation to describe what Deci and Ryan mean by “autonomy.” They mean EXACTLY what you think they mean.

(This news might not sound like a big deal. But — trust me — in the world of cognitive psychology, few scholars offer clear terminology.)

I’ve described my three-word summary as “the headlines” of Deci and Ryan’s theory. But my blog title says that three words aren’t enough. What happens when we go beyond those three words?

Not So Fast

We should start by admitting that LOTS of people who talk about self-determination theory don’t get past these headlines. You’ll read SDT summaries that list and define those words, and then conclude with uplifting advice: “teachers — just go do that!”

Alas, I think even the headlines themselves raise pressing questions.

For instance: to my ear, “autonomy” and “relatedness” suggest two contrasting vibes. The words don’t exactly contradict one another — they’re not antonyms. But it’s easy to imagine a teaching strategy that INCREASES one but REDUCES the other.

For instance, let’s look at point #6 on the motivational list above: “offer struggling students a hint when they’re stuck.”

  • On the one hand, that advice could certainly foster a sense of relatedness in the classroom.
    • A student might think: “I was struggling, and the teacher noticed and helped me. This is a great class!” This student didn’t precisely use the word “relatedness,” but that vibe is in the air.
  • At the same time, offering hints might lessen another student’s sense of autonomy.
    • This student might think: “Does this guy think I’m completely helpless? I would have gotten it if he just left me alone. Sheesh.” So much for feelings of autonomy.

Let’s throw “competence” into this mix. If I offer a struggling student a hint, she might think:

  • “Oh, wow — I can solve this problem now! I feel so happy and successful!” She exudes an aura of joyful competence. Or
  • “Oh, wow — my teacher thinks I’m so hopeless that he has to offer me the answer on a platter. I must be the biggest loser in this class.” Her feelings of competence have clearly drained away.

Yes, you will hear “autonomy! relatedness!! competence!!!” offered as a formula to enhance student motivation. But I don’t think this formula — or ANY formula — works simply as a formula.

Reading the Fine Print

SDT’s emphasis on autonomy, relatedness, and competence DOES provide an excellent place to start our pedagogical thinking. And, we need to keep going.

Each of the eight strategies listed above has been researched as a way to enhance student autonomy. Before we use any of them, however, I think we should stop and ask reasonable questions:

  1. What are the potential conflicts here? Will this strategy enhance one of the Big Three, but harm another?
  2. What are the individual differences here? Will this student respond to the strategy by feeling more competent, while that student responds by feeling more foolish?
  3. What are the cultural differences here? Will students in — say — Korea find a particular autonomy strategy confounding while their counterparts in — say — Brazil find that same strategy encouraging? (To be clear: I’m being entirely speculative here. I don’t know enough about either of those cultures to even attempt an example.)

I’ll offer one more example, simply to emphasize the concerns that trouble me.

Strategy #2 says that we should “give students time for independent work.”

  • On the one hand, what could possibly be more foundational to teaching? OF COURSE students need time to work on their own.
    • As a motivational benefit, all this independent work might make them feel autonomous and competent.
  • On the other hand, students who lack appropriate prior knowledge could be overwhelmed and demotivated by all that time to work alone. What should they be doing? How do they do it?
    • Motivationally, they have no partner to rely on (so much for “relatedness”), and they feel their own lack of skill all too forcibly (goodbye, “competence”).

In other words: if I worry that my students lack motivation, I shouldn’t simply look at that list and pick one that sounds uplifting and research-y. “Oh, yes, I’ll give them independent work time — they’ll feel more motivated!”

Instead, I should look at the list and ask myself those follow-up questions. In brief, does this uplifting and research-y teaching strategy fit my students’ current educational and motivational needs? Have I considered both upsides and downsides?

In fact: these concerns about self-determination theory point to a broader challenge that teachers face when trying to implement research-backed strategies.

The Bigger Picture

In writing this post, I am using self-determination theory as an example of a larger problem. Research and researchers certainly can — and should!! — offer classroom teachers practical guidance.

And: we should always filter that guidance with friendly-but-persistent questions:

  1. How good is this research? How many studies arrive at roughly the same conclusion?
  2. Will this guidance benefit MY students (not just someone’s students)?
  3. Does the benefit over here create a problem over there?
  4. Does the cost — in money and in time — outweigh the potential benefit?

And so forth.

In brief: let’s use research to inform our practice. And: let’s also commit to being educators who dig deeper and ask tougher questions. The best teaching happens not when we follow scripts and formulas, but when we think carefully about the unique circumstances and students in our classrooms.


Reeve, J., & Jang, H. (2006). What teachers say and do to support students’ autonomy during a learning activity. Journal of Educational Psychology, 98(1), 209.

Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61, 101860.

One Size Doesn’t Fit All: Teaching Experts and Novices
Andrew Watson

Teachers often hear that we should divide students into different groups and teach them differently.

Most famously, learning styles theory (now thoroughly debunked) says that we can think of students as auditory, visual, or kinesthetic learners. It further suggests we should align our instruction with their natural learning style: visual learners get visual instruction, and so forth.

(To be very clear: visual learners should NOT get visual instruction, because visual learners do not exist. We all learn through our senses, and the best one to prioritize depends on the content being learned.)

Honestly, most advice that sounds like “sort-students-and-teach-them-differently” lacks good research support.

At least one obvious exception to this rule demands attention: prior knowledge.

Imagine this hypothetical:

  • You and I are taking a class on the Spanish Civil War.
  • You know considerably more about Spain, fascism, military history, and Picasso than I do.
  • That is: you are, relatively speaking, an “expert.” I am, relative to you, “a novice.”

In this hypothetical, it just makes sense to suspect that we will benefit from different instructional strategies.

This argument goes by the name the “Expertise Reversal Effect.” That is: instructional supports that benefit novices instead interfere with learning for experts.

  • I will need A LOT of instructional support, and my learning will suffer if I don’t get it.
  • You will probably learn best if you get a fair amount of freedom and intellectual room to knock about on your own.

If you have to sit through all the basics that I find essential (“here is a map of Spain; notice Madrid here, Barcelona here, and Guernica here”), you’ll feel bored and constrained. And you’ll learn less than you would with greater freedom.

So far in this blog post, the expertise reversal effect is a hypothesis. Is it supported by evidence?

Does It Work in Theory?

In a minute, I’ll explore the data. (Preview: yes, this hypothesis has good research support.)

Before we get there, let’s pause to consider WHY the expertise reversal effect might be true.

A recent meta-analysis offers a few explanations; I’ll focus on two of them.

First: cognitive load theory tells us we want to get rid of needless extra mental work (“extraneous cognitive load”) so that students can focus on the essential mental work (“intrinsic cognitive load”).

  • In my Spanish Civil War hypothetical, my working memory will be threatened by all the new information. I will learn more if the teacher organizes it and focuses on the essentials.
  • You, however, have lots of working-memory headroom — because you store more relevant information in long-term memory than I do — so extra information isn’t always extraneous. It might, in fact, help you figure out new and unforeseen ideas.

Second: self-determination theory tells us that student motivation benefits from their feeling autonomy, competence, and relatedness.

  • For me — the Spanish Civil War novice — I will feel more competent if a supportive teacher helps me out, and more related to my teacher as well.
  • All that extra support, however, might reduce feelings of autonomy for an expert like you.

Now that we have some reason to think that the expertise reversal effect has a plausible theoretical background, let’s see if we’ve got FACTS to support it.

Crunching the Numbers

You want data; we’ve got data.

More precisely, Tetzlaff, Simonsmeier, Peters, and Brod have data from that recent meta-analysis mentioned above. This team found 60 relevant studies; those studies yielded 176 relevant effect sizes and included almost 6,000 participants. (“Effect size” means, roughly, what the name suggests: the size of the effect of teaching one way or another.)

In brief:

  • Yes: students with a LOW level of prior knowledge learn more from high-support pedagogy than low-support pedagogy. (Stats folks: the d-value here is 0.505.)
  • Yes: students with a HIGH level of prior knowledge learn more from low-support pedagogy than high-support pedagogy. (D-value: -0.428.)

Those d-values are traditionally in the “medium size” range. I speak d-value, and I pay attention to numbers that size.
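If you are curious where a d-value comes from, here is a minimal sketch of the standard formula (Cohen’s d); the subscripts 1 and 2 simply label the two groups being compared and are not drawn from the meta-analysis itself.

```latex
% Cohen's d: the difference between two group means, expressed in
% units of their pooled standard deviation.
d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{1}-1)\,s_{1}^{2} + (n_{2}-1)\,s_{2}^{2}}{n_{1}+n_{2}-2}}
```

By Cohen’s conventional rule of thumb, d ≈ 0.2 reads as small, 0.5 as medium, and 0.8 as large; that is why a d of 0.505 (roughly half a standard deviation’s worth of difference) counts as a solidly medium effect.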

[Image: a young rock climber navigating a challenging route with numerous colorful holds, illustrating how learners at different skill levels need different types of support and guidance.]

The expertise reversal effect gets bigger as students get older. The effect was strongest in higher ed; we don’t honestly have lots of studies in primary school.

It also varies somewhat depending on academic discipline. (There’s less of an effect in language and humanities.) But — roughly speaking — we can say that “beginners benefit from higher-structure pedagogy; experts benefit from lower-structure pedagogy.”

Not So Fast

But wait just a minute. Why did I slip the phrase “roughly speaking” into that previous sentence? We’ve got all those data — shouldn’t we just do what they say?

Let me focus on one of several reasons.

Sometimes research considers variables that are either one thing or the other:

  • The switch is either on or off.
  • The school is either in Hawai’i or in Delaware.
  • The subject being studied is either chemistry or Latin.

At other times, research considers variables that fall along a continuum.

  • The research might sort teachers into a “cheerful” group and a “glum” group. But not all “cheerful” teachers are equally cheerful. Heck, on some days, they might be glum.
  • Research might describe playground behavior as “cooperative” or “competitive,” but not all cooperation is equally cooperative. Heck, some cooperation is a kind of competition. (From the little I’ve seen of it, the entire Survivor franchise depends on competitive cooperation.)

And so forth.

Notice that — in my “roughly speaking” summary — ALL OF THE KEY TERMS are continuous.

  • There is no exact line separating LOW prior knowledge from HIGH prior knowledge. Prior knowledge exists on a continuum.
  • There is no exact line separating LOW-structure pedagogy from HIGH-structure pedagogy. Pedagogy exists on a continuum of support.

For that reason, classroom teachers can’t simply “do what the research says.”

I might say to myself:

“My 1st period class has slightly more prior knowledge than my 3rd period class. For that reason, my instruction 3rd period should be somewhat higher structure than during 1st period. But they’re both novice groups compared to my advanced class 5th period, which should have lower-structure than both…”

These delicate — even hunchy — internal monologues can be guided by the expertise reversal effect. But that effect and this meta-analysis don’t translate into step-by-step rules that teachers can follow.

Now, some signs will be obvious.

  • If my students can’t define basic terms — “what’s a noun again?” — they’re clearly novices.
  • If they spontaneously combine ideas in ways we haven’t discussed — “I want to strengthen this sentence by using the participle as a gerund” — they’re well into expert territory.

For this reason, I should begin classes and units with questions designed to identify my students’ place along the low-to-high prior knowledge continuum. And, I should be reasonable about the fuzzy limitations of that placement.

In Sum

This meta-analysis suggests that the expertise reversal effect is a thing.

When possible, teachers should provide beginners with relatively more pedagogical structure, and experts relatively less.

Part of our own developing expertise will be adapting — lesson-plan by lesson-plan, class by class, unit by unit — to those subtle gradations.


Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

Tetzlaff, L., Simonsmeier, B., Peters, T., & Brod, G. (2025). A cornerstone of adaptivity – A meta-analysis of the expertise reversal effect. Learning and Instruction, 98, 102142.

“Video Captions Benefit Everyone”: An Investigation
Andrew Watson

If you’ve watched educational videos lately, you’ve probably noticed that captions are EVERYWHERE. In fact, caption technology has evolved in recent months so that captions now highlight individual words as spoken, rather than present entire phrases or sentences one after another. But what if this well-intentioned practice is actually interfering with learning?

Given their prevalence, I’ve just assumed that we have good reasons to include captions. At the same time, people do LOTS of things that contradict evidence — so perhaps the time has come to investigate my assumption.

I started by casting a wide net. I went to Google Scholar and put in “captions and subtitles.” The first hit sounded a confident tone: “Video Captions Benefit Everyone.”

To be sure we understand the confidence of this study, let’s read the first two sentences:

Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults).

More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video.

If in fact captions promote understanding, focus, and recall for practically everyone who watches, we’ve got as close to a slam dunk as I can imagine.

Let’s check to be sure…

Focus on Definitions

As is so often the case, we should start by defining our terms, questions, and expectations clearly.

Because this blog focuses on education, I’m interested in captioned videos used to help K-12 students learn stuff. That is: research into captions when people watch movies for fun doesn’t fit my question.

For the time being, I’m also going to focus on same-language captions: a video where the narrator speaks in English and the captions show the narrated words in English. Of course, a student who speaks Spanish at home might benefit from seeing Spanish subtitles for a video in English, but that’s a very different research question.

For similar reasons, I’ll start by focusing on research into neurotypical students. I can imagine that students with particular diagnoses might — as part of their learning profile — have different requirements than their peers. It’s probably helpful to start by understanding how most people learn, and then adapt that practice as needed for individuals.

(To be clear: we will ultimately be interested in different-language captions, and in the potential benefits for different categories of learners. To start with, I simply want the most basic question answered.)

I’d like to find several rigorously designed studies all pointing in the same direction. I want sample sizes that rise above the trivial; I’d like plausible control groups; I’d like objective measures — not mere self-report. And so forth. After all, I shouldn’t tell you that captions are (or aren’t) a research-informed instructional practice if all the research I’m citing doesn’t meet basic standards.

Finally, if I’m really lucky, I’d like to have both research and theoretical frameworks pointing in the same direction.

Now that we’ve got some parameters in place, let’s return to that study and see what we find.

Working the Steps

I spend lots of my time double-checking (or triple-checking) “research-based” claims, so I’ve got a process to follow.

I won’t walk you through each step of the journey — it took a few hours — but the results are impressively clear.

First:

We have essentially no research that fits the criteria above.

No, really: we don’t have a pool of persuasive research giving us an answer one way or another.

That “Captions Benefit Everyone” study focuses on foreign-language captions, or on non-neurotypical learners, or on college students, or on self-report. LOTS of self-report.

I should explain, by the way, why self-report data don’t persuade most researchers: people are REALLY bad at knowing what helps them learn. College students might THINK they pay more attention, or remember better, when they see vids with captions. But unless we actually measure their attention, understanding, or learning, we shouldn’t actually make claims about attention, understanding, or learning.

When I asked Elicit.com to research this question, I found the same problem. The studies it summarized focused almost entirely on Chinese students watching videos with English captions. That research helps answer an important question — but it’s not the question I asked.

Second:

The study that comes the closest to answering my question suggests that captions might interfere with reading for not-at-risk students.

This study suggests that captions DO help at-risk 2nd and 3rd graders recognize words. But the not-at-risk students recognized fewer words with the captions on. Obviously we’re glad to have strategies to help at-risk students. But that’s not the big-picture question we started with.

(I’m honestly puzzled that captions benefit struggling readers…but because I don’t teach reading I’m not going to have a strong opinion here.)

Let’s Talk Theory

I noted above that I’d like to have both well-done empirical research AND a theoretical framework to answer my question.

Richard Mayer’s “redundancy principle” tells us that presenting the same information both verbally and visually at the same time increases cognitive load.

In his excellent book Sweller’s Cognitive Load Theory in Action, Oliver Lovell gives a common example: conference presentations.

It’s a common practice for presenters to provide written information on their slides and then to read out that information during the presentation. To conventional audiences, this represents the presentation of redundant information. Only one presentation format is needed, either the written words, or the spoken words. (62)

That example sounds A LOT like captions, no?

A full explanation of the redundancy principle requires a blog post of its own. The short version goes like this:

  • Because I read faster than others speak, I’m constantly reading ahead of the speaker’s current point in the text. I must therefore stop and go back multiple times. All this back-n-forth adds to my cognitive muddle.
  • With captions, I have to focus either on the WORDS that the captions present or the IMAGES in the video — and that back-n-forth also adds to the cognitive work I have to do.

For the dual-coding folks reading this post, remember: dual coding advocates that words and images complement one another — not that they represent precisely the same information.

Putting It All Together

  1. I found NO research with objective measures of neurotypical K-12 learners reading same-language captions. The one study that comes closest hints — but does not say — that captions might interfere with word recognition for early readers.
  2. Mayer’s redundancy principle gives us a good reason to be VERY skeptical of claims saying that “captions benefit everyone.”
  3. If you find research that matches the criteria above, please send it my way. I always want to keep this blog as up-to-date and accurate as possible.

In the meanwhile, here are my suggestions:

a) Be wary of claims that captions benefit most learners — especially neurotypical K-12 learners reading same-language captions.

b) Be ESPECIALLY cautious if the video includes cognitively complicated material — where cognitive load is already high.

c) Be aware of legal requirements, especially for students with diagnosed learning differences. Also, I myself would be more open to the benefits of captions for students watching videos in languages they don’t speak fluently. I haven’t done a deep dive into that research pool, but common sense suggests such captions could have real benefits.


Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1), 195–202. https://doi.org/10.1177/2372732215602130

Linebarger, D., Piotrowski, J. T., & Greenwood, C. R. (2010). On-screen print: The role of captions as a supplemental literacy tool. Journal of Research in Reading, 33(2), 148-167.

Lovell, O., & Sherrington, T. (2020). Sweller’s cognitive load theory in action. John Catt.

Book Review: The End of Trauma, by George Bonanno
Andrew Watson

When I first started teaching — it’s been a few decades now — nobody ever talked about trauma. Quite literally, I do not remember a single faculty meeting or PD day or all-school read or … really … ANYTHING that raised the subject.

[Image: a silhouette of a human head in profile with vibrant, explosive colors - fiery reds, oranges, and yellows blending into cool blues and purples - flowing outward from the head like smoke or paint, suggesting stress and pain.]

If you had asked me back then (the ’80s and ’90s), I would have said: “as far as I can tell, trauma happens to some people somewhere — and that’s terrible. But I don’t know of trauma happening here to our students. It’s just not on the radar.”

In the last twenty years, the world of education has done a 180-degree pivot. We talk about trauma, and trauma-informed education, a lot. We’ve got books and conferences and speakers and data.

Trauma — it seems — is everywhere. Whereas in the ’80s, trauma was a “someone else, somewhere else” problem, today it seems to be a “lots and lots of people, right here” problem.

The transition threatens whiplash. How can we manage it?

A Surprising Perspective

Back in the fall of 2024, Prof. George Bonanno presented his research at our Boston conference. His hour-long keynote included a number of surprising findings, and prompted me to buy his book. At last (!) I’ve had a chance to read it and to understand those surprising findings more deeply.

As I wrote back in December, Bonanno finds that

  • Roughly 10% of people who experience trauma have enduring symptoms;
  • Less than 10% start without symptoms, but symptoms develop over time and persist;
  • Roughly 20% initially experience symptoms, but recover over two years;
  • The rest never respond with serious symptoms.
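Taking those round figures at face value (and treating “less than 10%” as roughly 10%), a quick back-of-the-envelope sum shows how large that last group is:

```latex
% Approximate share of people who never develop serious symptoms
100\% - (10\% + 10\% + 20\%) \approx 60\%
```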

In other words, most people do NOT respond to threatening events with PTSD. And, many who DO initially experience PTSD recover within months.

For these reasons, Bonanno doesn’t speak of “traumatic events” but of “potentially traumatic events.” After all, such events might lead to a trauma response…but most of the time they don’t.

By the way: Bonanno doesn’t arrive at these conclusions by looking at marginally threatening experiences. Two of his data sets come from people who experienced the 9/11 attacks directly — as in, they fled the buildings after the planes hit — and from members of the military who served in combat.

Even in these populations, he finds that people are mostly resilient — both in the short term and over the longer term.

In brief: we can start to manage our whiplash by realizing that PTSD is obviously very bad, but not remotely inevitable. Our students and colleagues and community members are likelier to respond to potentially traumatic events by being stable and resilient.

The Non-Recipe Recipe

This initial insight leads to an important question: exactly WHY are some people more resilient than others? If you and I go through roughly similar “potentially traumatic experiences,” why do I develop PTSD symptoms while you don’t?

To ask the same question another way: is there a formula to follow? A set of steps that leads away from PTSD? A recipe?

Bonanno answers this set of questions with nuance, sympathy, data, wisdom, and humility.

In the first place, he argues that — no — we don’t have a one-size-fits-all series of steps. In fact, he explains in thoughtful detail why no one pathway will work for all people in all circumstances.

In fact, he specifically rejects this approach. Yes: individual research studies show that character trait X or mental habit Y is “associated with a reduction in ultimate PTSD symptoms.” But the list of X, Y, and Z goes on at remarkable length — a few dozen letters at least. (Our poor alphabet taps out at 26.)

Instead, Bonanno’s research says that resilient people have a flexible collection of traits and perspectives that they use in different ways at different times.

The Return of the “Mindset”

Bonanno summarizes this collection of traits with the phrase “flexibility mindset.” He defines the word “flexibility” quite carefully:

I’ve used the word “resilience” throughout this book to describe a pattern of continued good mental health after potential trauma, or, more precisely, a stable trajectory of healthy functioning across time.

Flexibility is not resilience. Flexibility is the process we use to adapt ourselves to traumatic stress so that we can find our way to resilience. (121)

Bonanno’s flexibility mindset rests on three connected beliefs:

  1. optimism about the future,
  2. confidence in our ability to cope, and
  3. a willingness to think about a threat as a challenge.

No one of these beliefs by itself is enough. And, no one of them is a straightforward first-A-then-B-then-C process. But — combined with nuance and maturity — they result, over time, in better mental health outcomes.

Bonanno, in fact, devotes several chapters to specific stories of people who successfully (or not) use a flexibility mindset to manage the potentially traumatic events in their lives.

A Big Caveat

Whenever I write a book review, I always try to include at least one point where I disagree with the author, or think the book could be better. In this case, that’s a surprisingly easy goal to meet.

Here’s why: I think the book’s TITLE is doubly misleading.

In the first place, Bonanno doesn’t for a minute suggest that we can “end” trauma. He in no way claims that you can follow his simple steps to bring trauma to an early end — either for an individual or a society. Quite the contrary, he argues that the process requires endurance, frequently includes grave setbacks, and might not work for everyone.

In other words, The End of Trauma isn’t about the end of trauma. It’s about rethinking the inevitability of trauma, and reframing strategies to cope with trauma.

In the second place, the book’s subtitle includes an equally misleading phrase: “the new science of resilience.”

Bonanno says over and over that he’s NOT proposing anything radically new. His “flexibility mindset,” after all, suggests that we be optimistic, confident, and inclined to think of threats as challenges. None of those insights — or the word “mindset” — is new.

The novelty in Bonanno’s work lies first in his data, which find that PTSD is a relatively unusual response to potentially traumatic events — not, as we’ve heard so often, an inevitable one.

Bonanno also makes a novel argument when he focuses on broad flexible categories (“optimism, confidence”) rather than specific steps (“first do this, then do that, then try t’other”).

I don’t doubt that The End of Trauma is a more saleable title than Rethinking the Inevitability of Trauma and Proposing a Flexible Path to Work Past It within 2 Years or So. But that title would be more accurate.

TL;DR

Bonanno’s book The End of Trauma isn’t about the end of trauma. It does, however, make a compelling — and ultimately optimistic — argument: we’re mostly resilient; we can bounce back from potentially traumatic events; and we’ve got a challenging-but-flexible framework to guide us as we do so.

“Good Ideas” that Might Waste Time
Andrew Watson

I would LOVE to write the following blog posts:

  • Playing with Puppies Improves Learning!
  • A Glass of Expensive Red Wine Each Day Makes You a Better Teacher
  • Research shows: Reading a Good Book on your Couch is the Best Way to Plan Lessons

In other words: if I can say that “research supports a REALLY popular idea,” I make everyone happy.

Alas, my job is often to say the opposite. As in:

Here’s a really popular idea — an idea that sounds like common sense. And: this popular idea probably is not true.

So:

Today I want to explore another “probably not true” idea — one that’s important, and a little bit tricky to understand. Here goes…

Self-Control, and Self-Control

Psychology researchers spend lots of time thinking about “self-control.” Sadly, this compound word has at least two different meanings.

  • Meaning #1: Let’s say you’re the kind of person who — naturally, without really thinking about it, most of the time — resists the tempting thing and does what you ought to do.

We could say that “self-control” is a consistent personality trait you have.

  • Meaning #2: Perhaps you’re standing in front of a plate of donuts. You say to yourself: “I’m SO HUNGRY, and the chocolate-y deliciousness looks amazing. BUT, I’m trying to get in shape, so I’ll take the fruit cup instead…”

We could say that you’re exhibiting “self-control” as a short-term behavior; you’re in a sort of state of self-control.

To distinguish between these two definitions, scholars write about

  • Personality TRAIT self-control, and
  • Short-term STATE self-control.

Although those two descriptors rhyme, they describe different kinds of self-control. One (“trait”) is a stable, ongoing, often unthinking self-control. The other (“state”) is short-term, contingent, self-aware self-control.

We have at least three reasons to care about trait vs. state self-control.

  1. TRAIT self-control correlates with all sorts of good outcomes. People with high trait self-control live longer, get better grades, and earn more money. They floss more regularly and probably are nicer to dogs. (I think I made up that last one, but it could be true.)
  2. It just seems obvious: TRAIT self-control is (probably) just lots-n-lots of STATE self-control moments all strung together. As Michael Inzlicht and Brent W. Roberts write:
    • Conventional wisdom has it that people high in trait self-control reap all these benefits because they engage in more state self-control.
  3. For this reason, we have an obvious, common sense path to follow. If we train students to develop moment-by-moment STATE self-control, the result will be stable TRAIT self-control. “Grit,” anyone? All that gritty training means that students will get the benefits listed above: grades and jobs and cavity-free teeth and the love of dogs.

But WAIT JUST A MINUTE…

What if that popular, commonsense hypothesis just isn’t true?

How Can That Be?

No, really: how could that commonsense hypothesis not be true? It’s just OBVIOUS that trait self-control results from moment-by-moment states of self-control. I mean: isn’t it?

In the brief (highly readable) article linked above, Inzlicht and Roberts give us plenty of reasons to doubt that conclusion.

  1. People who score high on tests of trait self-control don’t spend much of their day exercising self-control. (See, for instance, this study.)
  2. Over longer periods of time (months and years), training in state self-control doesn’t result in all those predicted positive outcomes. People go back to their initial patterns.
    • That is: I might be able to avoid donuts long enough to fit into my wedding suit — but in the months after my wedding, I’m likely to put those pounds back on again.

Inzlicht and Roberts share our puzzlement at this odd set of outcomes. (Their article literally includes the subheading: “What the hell is going on here?”)

They speculate, for instance, that the wording has got us confused. Just because we use the compound word “self-control” for both Trait X and State X doesn’t mean that the same mental process is happening beneath both labels.

They propose this thought experiment: imagine that we had decided to use the word “planfulness” instead of “trait self-control.” Because the words are different, we might not end up so surprised that “planfulness” doesn’t result from “state self-control.”

Whatever the reason for our mistake, we should be prepared to acknowledge that our commonsense belief — “training students in state self-control enhances their trait self-control” — probably isn’t true.

Why The Distinction Matters

If we were confident that state self-control training leads to trait self-control, then we should (almost certainly) take the time to enact that training. So Many Benefits!

But — at least so far — we shouldn’t be confident that frequent bouts of short-term (state) self-control ultimately train up long-term (trait) self-control.

If we mostly want to create trait self-control…well…we just don’t yet know how to do that. And we shouldn’t fool ourselves that state self-control gets the job done.

(Research, in fact, offers this substantial benefit: it often gives us helpful correctives precisely to stop us from fooling ourselves.)

Now, we should also admit that we want students to practice short-term state self-control because that too is useful. State self-control:

  • gets homework done (when students want to be doing something else), and
  • helps students focus in class (ditto), and
  • keeps students off Tik Tok (when they really want to be on Tik Tok).

And so forth. So I don’t think we give up on self-control altogether.

But when we hear someone claim that “grit” can be trained…I think we should ask for good evidence that this claim is true.


Inzlicht, M., & Roberts, B. W. (2024). The fable of state self-control. Current Opinion in Psychology, 101848.

Neuromyths Are Myths, but They Aren’t Neuro
Andrew Watson
Andrew Watson

Few topics lead to greater despair at Learning and the Brain conferences than “neuromyths.” Teachers cling to these “brain-based” beliefs despite the absence of research supporting them — and despite all the zealous debunking from our experts.

No:

  • Learning Styles are not a thing;
  • Left-brain/right-brain distinctions have no teaching implications;
  • Fidget spinners don’t help students attend or learn;
  • Power poses don’t matter;
  • “Enriched environments” don’t benefit students;
  • Brain gym doesn’t help…

This list could go on at wearying length.

Sadly, I’ve got even worse news. Belief in these myths extends across the globe, and persists despite all our best efforts to stamp them out. One recent article reports that 89.1% of teachers believe in the visual/auditory/kinesthetic learning style myth. (Please picture the ‘gnashing teeth’ emoji here.)

[Image: a computer-generated top-down view of a brain, half computer circuitry and half blossoming flowers, evoking the left-brain/right-brain myth.]

For all these reasons, scholars in this field search for the best ways to revise teachers’ misunderstandings. The article I just mentioned — published last year in Mind, Brain, and Education — offers a detailed narrative review of different strategies and their (depressingly small) effects.

Rather than review those strategies, I want to back up a big step and consider a crucial problem in these efforts. I think that MANY efforts won’t work because we’re thinking about the problem the wrong way. Our misunderstanding begins with the word itself: “neuromyths.”

It’s All Stuff In The Head

The field of “Mind, Brain, and Education” brings together at least three different disciplines:

  1. Mind = the study of mental function. We call that “psychology.”
  2. Brain = the study of the physical object of the brain (and nervous system). We call that “neuroscience.”
  3. Education = you know, teaching and schools and chalk dust.

The difference between mind and brain — between “psychology” and “neuroscience” — might seem too technical to merit our attention. After all, they’re both ways of studying “stuff going on inside the head.”

But we really must focus on that difference. If we don’t, the resulting confusion and misdirection make our work less effective.

Let’s start with neuroscience: a kind of biology. If you’re looking at…

  • brain regions — like the amygdala and the pre-frontal cortex, or
  • neurons, or
  • myelin, or
  • neurotransmitters…

…you’re talking neuroscience.

While neuroscience studies biological objects, psychology studies mental function. If you’re studying…

  • attention, or
  • motivation, or
  • memory, or
  • anxiety…

…you’re talking psychology.

For most of the 20th century, these fields regarded each other with deep suspicion, and would be surprised that anyone would have trouble telling them apart. But in truth, the distinction can feel tricky to non-experts.

One way to distinguish between them is to ask: “what is the researcher doing to study the question?”

  • If you can tap it with a scalpel or look at it with a microscope or an fMRI gizmo, it’s neuroscience.
  • If you give students a test to see how much they remembered, or a questionnaire to see whether or not they paid attention, that’s psychology.

(I should be clear: I’m answering a complicated question with a few hundred words. I’m trying not to oversimplify, but reasonable people would want me to rephrase some of those sentences. If you want a deeper dive, check out this podcast where I discuss the question with Zach Groshell and Kris Simmers.)

Why the Difference Matters

Here’s a plausible logical chain:

  1. Neuromyths result from a misunderstanding of neuroscience. (The clue is right there in the name!)
  2. Therefore, we can reduce belief in neuromyths by teaching people accurate neuroscience.

But this logical chain starts with a flaw. Most neuromyths aren’t primarily neuroscience claims. Most “neuromyths” are primarily psychology claims.

For example: claims about “learning styles” rest fundamentally on claims about mental behavior. “Visual learners” learn one way; “auditory learners” learn another way.

In this claim, “learning” is a mental behavior — it’s psychology — because I should test it using psychological methods. For instance, I would “align instruction with students’ learning style” and see how much more they learn. (To be clear: students don’t learn any more. The claim is false. It’s a myth.)

When challenged, champions of learning styles theory might try to defend their claim with neuroscience terminology. But the theory itself isn’t about the electrical or chemical behavior of neurons; it’s fundamentally about the mental behavior of learners.

In brief, learning styles theory isn’t a neuromyth. If anything, it’s a psycho-myth.

To keep this blog post relatively short, I won’t go through each “neuromyth” claim to demonstrate that it’s a “psychomyth.” But almost all of them are — including the ones that have the word “brain” in them. (I’m looking at you, “left-brain/right-brain” and “brain gym.”)

Why does this difference matter?

Because: we can’t debunk “neuromyths” by teaching neuroscience. The myths don’t rest on neuroscience.

Although well-meaning, these neuroscience education efforts almost certainly won’t succeed.

Opportunity Costs Matter

At this moment, I can imagine a reasonable-sounding objection: “teachers benefit from learning neuroscience even if that knowledge doesn’t change their neuromyths.”

I have three concerns with that response:

First: I’m skeptical that neuroscience knowledge does benefit teachers. I can’t think of any classroom teaching advice that derives from neuroscience rather than psychology.

Second: if the goal is to reduce belief in neuromyths, let’s choose a strategy that actually reduces teachers’ beliefs in neuromyths.

Third: we’re talking about an ENORMOUS amount of time here.

  • In one PD intervention — described as “very brief” — teachers spent a “short” fifteen hours learning neuroscience.
  • Another study looked at the effect of neuroscience courses in schools of education. A full term course — over forty hours — “showed no effect on beliefs in neuromyths.”

As a PD presenter in schools, I’m lucky to get three hours with teachers. The idea that we’ll devote fifteen hours — or forty hours — to an approach that probably doesn’t work seems…unwise.

Uplifting Conclusion

I’d love to offer one. Here’s the best I can do:

First: Because neuromyths don’t result from a misunderstanding of neuroscience, we shouldn’t teach neuroscience to debunk them.

Second: In my own work, I don’t actually try to DEBUNK neuromyths; I try to REPLACE them.

That is, I don’t say: “give up on all that foolishness. It’s a myth.”

I do say: “Here’s what we know about working memory. If you start anticipating working memory overload in your lessons, your students will learn much more! Their attention and motivation will also improve!”

My theory is: if teachers are thinking about working memory all the time, then they’re spending less time thinking about psycho-myths. I have no research to support this claim, but it’s the best approach I’ve been able to think of.

If you have a more effective strategy, please let me know!


Rousseau, L. (2024). Dispelling Educational Neuromyths: A Review of In‐Service Teacher Professional Development Interventions. Mind, Brain, and Education, 18(3), 270-287.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

Squishy Toys: Friend or Foe?
Andrew Watson
Andrew Watson

The US Department of Education recently released a video on eX/Twitter encouraging the use of “squishy toys” to calm students. In the video, an earnest school psychologist says that such toys help his students focus without distracting others.

The uproar that followed — “they’re vital!” vs. “they’re dreadful” — followed the usual eX/Twitter pattern.

Of course, here on this blog we’re interested in narrower questions than “are squishy toys good or bad?” We want to know:

  • do we have a consistent body of research giving us useful guidance on using squishy toys in K-12 classrooms?
  • if they have benefits, what are the boundary conditions? That is: perhaps they help some students but not others?
  • what kind of benefits are we talking about here? At a minimum, I think we want students to focus better and learn more. In other words: if the benefit is “students like ’em,” that’s not a good enough reason to consider squishy toys a research-supported classroom tool.

So, what happens when we start investigating those questions?

The Home Team Looks Worried

When people offer teachers “research-based” advice, we’d like them to cite the research. That’s both common courtesy and common sense.

[Image: a hand squeezing a yellow squishy ball with a cartoon face on it; a computer screen and classroom in the background.]

Sadly, the video itself — and the tweet that contains it — doesn’t provide a citation or link to any research. The school psychologist, as far as I can tell, is making an experienced-based claim, not a research-based claim.

Of course, teachers and school leaders rely on experience all the time, so this approach isn’t obviously ridiculous. At the same time, given that we can research a question like this, it’s surprising that the initial advice doesn’t take advantage of the research that is (almost certainly) out there.

So, what happens when we go looking for the research supporting the use of squishy toys?

Let’s imagine the following conversation:

ANDREW: You should take this medicine; research says so.

YOU: Tell me about this research.

ANDREW: It was done by a doctor who bleeds his patients to cure their fevers.

YOU: Wait…he bleeds his patients? That’s medical malpractice…it’s been out-of-date for decades, even centuries!

ANDREW: So what’s your point?

At this point, I suspect, you’d stop taking my medical advice.

When I asked Elicit.com to find research about using squishy toys in K-12 classrooms, the only relevant study it located notes that:

“Kinesthetic learners used the stress balls more consistently and their attention spans increased more when compared to other learners.”

This observation would be interesting if “kinesthetic learners” existed, and if we hadn’t known for decades that learning styles theory isn’t supported by evidence. At a minimum, we shouldn’t trust this study’s conclusion that squishy toys benefited students.

To be clear, the study includes other key limitations:

  • It’s a self-described pilot study, including 29 students.
  • It mostly measures self-reported variables — not an especially persuasive foundation for teaching advice.

It’s not looking good for Team Squishy.

Don’t Stop Yet

I’ve tried two steps to verify the squishy-toy claim:

  1. Check with the person who made the recommendation.
  2. Look on Elicit.com.

Although those attempts didn’t pan out, we’ve still got several other avenues to pursue.

I checked with Scite.ai; it couldn’t even find that kinesthetes-love-squishy-toys study. (It’s REALLY rare for Scite.ai to come up short this way.)

I next went to ConnectedPapers.com to look for other studies surrounding that study; here’s the image it generated. (The study I started with is in the purple circle.)

So, I started poking around at the studies surrounding the first one. Of course, I didn’t look at them all — life is short. But, here’s what I found with some random checks:

  • “Fidget spinners negatively influence young children with ADHD’s attentional functioning, even in the context of an evidence-based classroom intervention.” (This from the Graziano study right in the middle.) 1
  • “Due to a recent surge in popularity, fidget spinners and other self-regulatory occupational therapy toys have yet to be subjected to rigorous scientific research. Thus, their alleged benefits remain scientifically unfounded. Paediatricians should be aware of potential choking hazards with this new fad, and inform parents that peer-reviewed studies do not support the beneficial claims.” 2
  • “Student performance was lower when they were allowed to use fidget spinners than when the fidget spinner was removed. The current study suggests that fidget spinners may cause a deficit in student performance. However, the effect of fidget spinners may actually lessen as the students habituate to the objects.” 3
  • “Using a fidget spinner was associated with increased reports of attentional lapses, diminished judgments of learning, and impaired performance on a memory test for the material covered in the video. The adverse effect on learning was observed regardless of whether the use of fidget spinners was manipulated between‐subjects (Experiment 1) or within‐subjects (Experiment 2), and was observed even when the sample and analysis were limited to participants who came into the study with neutral or positive views on the use of fidget spinners.” 4

I could go on. From this quick investigation, I notice three important patterns.

First:

No one seems to be researching stress balls (a.k.a. “squishy toys”) at all. The ONLY study I found on that topic is the pilot study with the kinesthetic learners.

Second:

Scholars ARE studying fidget spinners … and not finding any good news. Whether you’re teaching K-12 students or college students, neurotypical students or students with ADHD, kinesthetic learners or students who really exist: none of them receives classroom benefits from a fidget spinner.

Third:

To be on the safe side, I looked for meta-analyses — both ones that focus on squishy toys, and ones focusing on fidget spinners.

Unsurprisingly, I didn’t find ANY looking at squishy toys. And — equally unsurprisingly — the fidget-toy meta-analyses sounded consistently discouraging notes. (For example, this one.)

My Current Conclusion

When I do this kind of research deep dive, I usually find conflicting evidence. As a result, I typically write a tentative, partly-yes and partly-no summary: “strategy X seems to work well with these students studying these things, but we don’t have good research outside that small group.” Or something like that.

In this unusual case, the research picture seems unambiguous to me:

  1. We have NO reliable research showing that squishy toys benefit (or harm) students.
  2. We have LOTS of research showing that fidget spinners provide few benefits, and can indeed interfere with learning.

If you have research that contradicts these conclusions, please let me know.


  1. Graziano, P. A., Garcia, A. M., & Landis, T. D. (2020). To fidget or not to fidget, that is the question: A systematic classroom evaluation of fidget spinners among young children with ADHD. Journal of Attention Disorders, 24(1), 163-171.
  2. Schecter, R. A., Shah, J., Fruitman, K., & Milanaik, R. L. (2017). Fidget spinners: Purported benefits, adverse effects and accepted alternatives. Current Opinion in Pediatrics, 29(5), 616-618.
  3. Hulac, D. M., Aspiranti, K., Kriescher, S., Briesch, A. M., & Athanasiou, M. (2021). A multisite study of the effect of fidget spinners on academic performance. Contemporary School Psychology, 25, 582-588.
  4. Soares, J. S., & Storm, B. C. (2020). Putting a negative spin on it: Using a fidget spinner can impair memory for a video lecture. Applied Cognitive Psychology, 34(1), 277-284.