We’ve been hearing A LOT about fidget spinners and squishy toys lately. Our blogger has been on this story for months…
Good news: cognitive science research can be SPECTACULARLY useful in guiding and informing teachers.
Less good news: that guidance isn’t always clear or prompt.
After all:
Research findings often contradict one another.
They can also contradict our deeply-held beliefs, and/or our school’s policies.
Not everyone who quotes research actually understands research, or quotes it accurately.
And so forth.
Another problem with research gets less focus: it takes so much time.
In this case: fidget spinners were a thing back in 2017!
Perhaps you — like me — frequently heard the idea that “we can give fidget spinners to students to help them get the wiggles out!”
More specifically: “fidget spinners will help those students diagnosed with ADHD let off jittery steam without wandering around or distracting others.”
At that time, those claims sounded plausible or implausible — depending on your perspective and experience. But we didn’t have any research to support or contradict them.
Fidget spinners and other self-regulatory occupational therapy toys have yet to be subjected to rigorous scientific research. Thus, their alleged benefits remain scientifically unfounded. Paediatricians should […] inform parents that peer-reviewed studies do not support the beneficial claims.
Well: SEVEN years later, now we do have research!
Worth the Wait
Researchers in Florida worked with a group of 60 children enrolled in a summer program for young children diagnosed with ADHD.
This program offered both academic and behavioral training over eight weeks, to better prepare these children for the upcoming school year.
In both the first two weeks and the final two weeks, the research team gave several children fidget spinners during the 30-minute ELA class. They looked for data on these questions:
Did the fidget spinners change the amount of wandering around?
Did they change the students’ attention to the class work?
And, did they affect the other children who did not get fidget spinners?
Sure enough, the data they gleaned provide helpful classroom guidance.
Good News, Bad News
If you’re in the pro-fidget spinner camp, you’ll be glad to know that the fidget spinners did NOT lead to an increase in problems among the other students who didn’t get one.
They didn’t wander any more than usual; they didn’t pay less attention than usual. (“Area violations” and “attention violations” were the two categories tracked by researchers.)
That’s the good news.
If you’re pro-fidget spinner, the rest of the news won’t encourage you.
First: the spinners reduced wandering a bit at the beginning of the program. But they did NOT reduce wandering at the end.
Second: the spinners raised inattention levels both at the beginning and at the end of the program. And the increases in inattention were greater than the decreases in wandering.
In brief, fidget-spinner champions will not find much support here.
Questions and Caveats
No one study can answer all questions, so we should keep its limitations in mind.
What about older students? This research doesn’t explore that question.
Won’t students get better at using fidget spinners AND paying attention over time? We don’t know. (But: eight weeks is an unusually long research study.)
Don’t they benefit SOME students? Maybe. A 60-person study doesn’t really allow us to look for granular sub-populations.
A later study might show the opposite results! It certainly could.
In other words, passionate fidget-spinner advocates can ask worthwhile questions. And, we shouldn’t be too emphatic based on one study.
But we can say this:
According to this one research study, fidget spinners did not help young students diagnosed with ADHD pay attention; they did more harm than good.
Graziano, P. A., Garcia, A. M., & Landis, T. D. (2020). To fidget or not to fidget, that is the question: A systematic classroom evaluation of fidget spinners among young children with ADHD. Journal of Attention Disorders, 24(1), 163-171.
How does this logic apply to “squishy toys”? We’re glad you asked…
The idea that “gestures improve learning” has become increasingly popular in recent years. I first heard about this idea from Dr. Susan Goldin-Meadow at a Learning and the Brain conference back in 2015, and the claim has gotten more emphatic with every passing year.
This emphasis on gesture — within the broader field of “embodied cognition” — is approaching certainty in the field.
As is so often the case, I myself am less confident than others about many of these claims.
In this post, I want to
Explain my concerns about the “gestures promote learning” claims
Explore a recent study that pushes back against my concerns, and
Highlight the features of that study that support my concerns.
In other words: we’re taking baby steps here. The study I’ll explain does offer modest support for this claim — and offers some useful specifics.
But it certainly does not make the “gestures promote learning” claim a slam dunk. And it flatly contradicts some of the most vehement statements you’ll hear when people champion embodied cognition and gestures.
Initial Concerns
I am certainly open to the idea that gestures might promote learning. No, honestly: it could be true!
At the same time, I want to
understand the SPECIFICS of that claim, and
have confidence in the research that supports them.
These requests feel reasonable to me.
So, specifically:
Who is making the gesture? The teacher? The learner? Both? A video?
Do some kinds of learning get more benefit from gestures than others?
For example, I know how I could use a gesture to reinforce the word “hatchet.” (A hatchet gesture is easy to envisage.)
But: how would I make a gesture to help students learn “protagonist” and “antagonist”? Or, “pleistocene”? Or “tradition”?
What if students learn the same gesture in two different classes? Say: the “hatchet” gesture means “hatchet” in my Spanish class but “guillotine” in my history class.
Do gestures help students of all ages? Are there cultural differences?
In other words: what are the rules to govern the creation and use of gestures? And: what research supports each specific component?
Last year, I wrote about a study that starts answering some of these questions. But: we’ve got a long way to go.
Frankly, the questions listed above fit quite nicely under the heading of “boundary conditions”: we need to know the “boundaries” within which teaching suggestions work, and outside of which they don’t. I don’t quite understand why people look vexed when I ask such questions.
A Recent Study Strengthens the Case
Because I have outlier views on this topic, I’m always on the lookout for research that will change my mind.
Recently, I came across a study from 2021 that offers lots of helpful information.
This study does, in fact, give me some increased confidence in the idea that gestures can help. And, it offers helpful specifics about ways that gestures MIGHT NOT help. That is: it clarifies those boundary conditions.
This research has LOTS of moving parts, so I’ll try to summarize it accurately and briskly.
Twenty-two German adults spent 4 days studying foreign language vocabulary.
Specifically, they learned 90 word pairs: a word in German, and its “translation” in a made-up language called “Vimmi.” (Researchers used made-up words so that the participants’ prior knowledge wouldn’t muddle their results.)
45 of those word pairs were accompanied by a gesture. (The participants saw someone making the gesture; and they also made the gesture themselves.)
The other 45 pairs were accompanied by a drawing of the word; in other words, not a gesture.
This group of non-gesture word pairs gives the researchers a useful comparison point. They can ask not just “did gestures help participants learn?,” but “did the gestures help compared to the pictures?”
Now things get REALLY interesting.
Researchers then used a fancy gizmo to DISRUPT NEURAL PROCESSING in a part of the brain that coordinates gestures.
Specifically, several days after participants learned the word-pairs, researchers asked participants to recall them out loud. While participants did so, researchers used the (perfectly safe) neuro-scrambler for half of the words; they then faked the neuro-scrambling for the other half of the words. The participants, in other words, didn’t know when the scrambler was on or off.
Let’s add two more cool variables — variables that help answer my pesky questions:
Researchers tested participants five days and five months later. So: they could see how long these effects lasted.
And, they also tested two kinds of words: concrete nouns (“window”) and abstract nouns (“benefaction”).
Based on all these moving pieces, you can see that this study might finally convince me that gestures help — or give us some ideas about boundary conditions.
Yup: (Some) Gestures Help (Some) Learning (Somewhat)
When researchers tested the participants five days later, they found that seeing and producing gestures helped participants learn concrete nouns.
Specifically, the neuro-scrambler (presumably) reduced the neural encoding of the gestures for half of the words. And participants took longer to remember those words than the un-scrambled ones.
The likeliest interpretation: part of the memory of the word pair had been encoded in the motor cortex. When we interfere with that part of the memory, we interfere with the memory overall.
So, even though I’m naturally cautious about such things, I’m comfortable saying:
“This research supports the idea that seeing and making gestures about concrete objects helps people learn those words in the short term.”
Baby steps.
Yup: Important Limits Exist
At the same time, you’ve probably noticed the boundary conditions in that statement.
The gestures did help participants remember concrete nouns (“window,” “hatchet”); at the same time, participants did not remember abstract nouns (“benefaction,” “protagonist/antagonist”) as smoothly over time. (I should be clear: this statement summarizes a very complex set of findings; others might look at the findings and disagree with that statement.)
The gestures did help after five days, but not after five months. So: short-term memory got a boost. But long-term memory — not so much.
I should also say that this study design (like ALL study designs) includes some features that require our caution.
22 is not a huge number of participants. I wouldn’t often make big decisions based on such a small group. (Of course, as more and more small studies agree, they might add up to compelling recommendations.)
Also, this study pushes back against one of the more common statements within the broader field of “embodied cognition,” and also in the narrower field of “gestures promote learning.”
“Learning doesn’t just happen in the brain; it happens in the body.”
Now, I spend LOTS of time on this blog warning readers against over-reliance on neuroscience. We spend far too much time saying “ooh, look, BRAIN.” (I think we should focus much more on psychology: the study of the mind.)
This study, however, requires that we take the brain itself seriously. Those brain scramblers, after all, scrambled signals in the motor cortex: a part of the brain. They did not do anything to the arm or the rest of the body.
We can, I think, reasonably say that the learning happened WITH the body. But it didn’t happen IN the body; more precisely, it happened IN the part of the body called “the brain.” The arm didn’t change and learn. The brain changed. The mind learned.
TL;DR
This small study adds to a growing body of research that suggests seeing and making the right kind of gesture can help people learn some things, at least for a few days.
It does not reach grander conclusions than those.
And, it strongly contradicts the idea that “learning happens in the body.” Bodily movements can help the brain change; but the brain itself changes, and the mind itself learns.
A Final Note
This study includes substantially more complicated procedures and calculations than I typically try to summarize on the blog. As a result, reasonable readers might believe I have over-simplified the study and its conclusions; you might even think I have MIS-summarized both.
I have, quite genuinely, tried to do justice to the study and its conclusions. And, I include this link so you can check for yourself.
Mathias, B., Waibel, A., Hartwigsen, G., Sureth, L., Macedonia, M., Mayer, K. M., & von Kriegstein, K. (2021). Motor cortex causally contributes to vocabulary translation following sensorimotor-enriched training. Journal of Neuroscience, 41(41), 8618-8631.
I collaborated on this post with Dr. Cindy Nebel. Her bio appears below.
Everyone in this field agrees: we should begin our work with SKEPTICISM. When someone comes to us with a shiny new “research-informed” teaching suggestion, we should be grateful…and be cautious as well. After all:
The “someone” who gave us the “research-based” suggestion might…
Misunderstand the research; it’s easy to do!
Have found an outlier study; researchers rarely agree on complex subjects — like teaching and learning.
Work in a context that differs from ours, and so offer a suggestion that helps their students but not other students.
Misrepresent the research. Yup, that happens.
And so forth.
For all these reasons, we have to kick the tires when we’re told to change what we do because of research.
Easy Cases, Hard Cases
At times, this advice feels easy to follow. Ask any speaker at a Learning and the Brain conference, and they will assure you that:
Learning Styles are not a thing;
Left Brain/Right Brain distinctions don’t really matter;
The Learning Pyramid (“students remember 5% of what they hear…”) is self-contradictory, and based on no research whatsoever;
The list goes on…
Friends describe these ideas as “zombie beliefs”: no matter how many times we “kill them off” with quality research, they rise from the dead.
As we attack these zombie beliefs with our evidence stakes, we always chant “remember: you have to be SKEPTICAL!”
These cases are easy.
Alas, we often face hard cases. In my experience, those hard cases often combine two key elements:
First: we already believe — and/or WANT to believe — the “research-based” claim; and
Second: the research is neuroscience.
If a new neuro-study confirms a belief I already hold, my ability to be skeptical deserts me completely. I accept the research as obviously true — and obviously beyond criticism. I find myself tweeting: “Only a fool could disagree with this claim, which is now settled…”
Of course, if I fail to be skeptical in these hard cases, I’m abandoning scientific principles just as surely as people who purport to teach left-brained thinkers.
One example: in my experience, people REALLY want to believe that handwritten notes foster learning more surely than notes taken on a laptop. (I have detailed the flaws in this claim many times.)
A study published in 2023 is regularly used to support this “handwriting > laptop” claim. I first read about it in an article entitled “Handwriting promotes memory and learning.”
Notice that this study fits the pattern above:
First: people already believe — and really WANT to believe — the claim.
Second: it’s neuroscience.
LOTS of thoughtful people held this study up to champion handwritten notes.
Alas, because this study is a hard case, the skepticism practices that we typically advocate fell to the side. As it turns out, the flaws in this study are easy to spot.
It’s based on a very small sample;
The laptop note-takers had to type in a really, really unnatural way;
The study didn’t measure how much the students remembered and learned.
No, I’m not making that last one up. People used a study to make claims about memory and learning even though the researchers DIDN’T MEASURE memory and learning.
In other words: in this hard case, even the most basic kinds of skepticism failed us — and by “us” I mean “people who spend lots of time encouraging folks to be skeptical.”
Today’s Hard Case
The most recent example of this pattern erupted on eX/Twitter last week. An EEG study showed that students who used ChatGPT
a) remembered less of what they “wrote,” and
b) experienced an enduring reduction of important kinds of brain-wave activity.
Here’s a sentence from the abstract that captures that second point precisely:
“The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.”
Once again, this study fits into the hard-case pattern:
It confirms a prior belief (for LOTS of people), and
It’s neuroscience.
The unsurprisingly surprising result: this study has been enthusiastically championed as the final word on the harms of using AI in education. And some of that enthusiastic championing comes from my colleagues on Team Skepticism.
I want to propose a few very basic reasons to hesitate before embracing this “AI harms brains” conclusion:
First: the PDF of this study clocks in at 206 pages. To evaluate a study of that length with a critical eye would take hours and hours and hours. Unless I have, in fact, spent all those hours critically evaluating a study, I should not rush to embrace its conclusions.
Second: I’m going to be honest here. Even if I spent hours and hours, I’m simply not qualified to evaluate this neuroscience study. Not many people are. Neuroscience is such an intricately technical field that very few folks have the savvy to double- and triple-check its claims.
Just as you should not fly in a jet simply because I’ve assured you it’s airworthy, you should not trust a neuro study simply because I’ve vetted it. I can’t give a meaningful seal of approval — relatively few people can.
Knowing my own limitations here, I reached out to an actual neuroscientist: Dr. Cindy Nebel*. Here are her thoughts:
Here are my takeaways from this study:
1. Doing two different tasks requires different brain areas.
In this study, participants were explicitly told to write either using AI or on their own. Unsurprisingly, you use a different part of your brain when you are generating your own ideas than when you are looking up content and possibly copying and pasting it. Because participants in the AI group were explicitly encouraged to use AI to write their essay, it’s likely they did — in fact — just copy and paste much of it.
2. When you think back on what you did using different brain areas, you use those same differentiated brain areas again.
When we remember an event from our lives, we actually reactivate the neural network associated with that event. So, let’s say I’m eating an apple while reading a blog post. My neural areas associated with taste, vision, and language will all be used. When I recall this event later, those same areas will be activated. In this study, the people who didn’t use their brains much when they were copy/pasting still didn’t use their brains much when they recalled their copy/pasting. This finding is entirely unsurprising and says nothing about getting “dumber”.
3. It’s harder to quote someone else than it is to quote yourself.
The only learning and memory effect in this study showed that individuals who copied and pasted had a harder time quoting their essays immediately after writing them than those who generated the ideas themselves. Shocking, right?
My neuroscience-informed conclusion from this study is that not using your brain results in less neural activation. [*insert sarcastic jazz hands here*.]
To be clear: I did not spend the requisite hours and hours reading the 206-page article. I did scan all 206 pages, read the methods thoroughly, and took a close look at the memory results in particular. I skipped the bulk of the paper, which is actually a linguistic analysis of the type of language used in prompts and essays written with and without the support of AI. I am very much not an expert in this area and, very importantly, this seemed to make up the most important findings.
Back to you, Andrew.
Customary Caveats
This post might be misunderstood to say: “this study is WRONG; teachers SHOULD use AI with their students.”
I’m not making either of those claims. Instead, I am saying:
Like all studies, this study should be evaluated critically and skeptically before we embrace it. Because it’s so complicated, few people have the chops to confirm its findings. (And not many have time to review 206 pages.)
As for the use of AI in schools, I think the topic resists blanket statements. Probably the best shorthand — as is so often the case — goes back to Dan Willingham’s famous sentence:
“Memory is the residue of thought.”
If we want students to learn something (“memory”), they have to THINK about it. And if they’re using ChatGPT, they’re thinking about — say — high-quality prompts. They’re probably NOT thinking about the content of the essay, or effective essay-writing strategies.
Because we want students to think, we should – in almost all cases – encourage them to write without AI.
(To be clear: I think we could easily create assignments that cause students to think with AI. For instance: they could ask Claude to write a bad essay about The Great Gatsby: one that’s ill organized, ungrammatical, and interpretively askew. They could then correct that essay. VOILA: an AI assignment that results in thinking.)
Ironic Coda
I wrote this blog post based on my own thinking and understanding. I then shared my thoughts with Dr. Nebel, who offered her substantial commentary.
Next — as is my recent habit – I asked Claude to proofread this post, and to make any suggestions for clarity and logical flow. Based on its suggestions, I made a few changes.
In other words: this post has an inherent bias in it.
If I trust Claude — an AI assistant — I’m probably biased against research showing that AI assistants create enduring mental decrements. Although I doubt that this bias has misled me too far, I do think you should know that it exists.
* Dr. Nebel notes: “To the neuroscientists in the audience, Andrew is using that term generously. My degree in Brain, Behavior, and Cognition yes, involved neuroscience courses including a human brain dissection and yes, involved courses and research using fMRI. But I am not a neuroscientist in the strictest sense. I do, however, understand neuroscience better than the average bear.”
Dr. Cynthia Nebel is the Director of Learning Services and Associate Professor of Psychiatry and Behavioral Neuroscience at St. Louis University School of Medicine. She holds a Ph.D. in Brain, Behavior, and Cognition and has held faculty positions at Lindenwood, Washburn, and Vanderbilt Universities. Dr. Nebel has published two influential books on the science of learning and is a leading collaborator with The Learning Scientists, an organization focused on bridging the gap between learning research and educational practice. Dr. Nebel has presented the science of learning nationally and internationally and is dedicated to bridging research and practice to improve educational and organizational outcomes.
Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: a high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.
I spend my days giving research-informed advice to teachers and school leaders. You could summarize my pitch this way:
According to my current best understanding, we’ve got some good research suggesting that X strategy is likely to help most students learn most things.
Sometimes I say:
We’ve got LOTS of research suggesting…
Or occasionally:
We’ve got a few encouraging studies that make me think…
In every case, these research-informed suggestions arise from the CURRENT state of the research.
Of course, researchers haven’t stopped working…they’ve only just begun. Hundreds (thousands?) of graduate students and professors are out there right now. They’re exploring:
working-memory limitations, and
the benefits of mindfulness, and
the relationship between handwriting fluency and reading comprehension
And SO MANY other topics.
For this reason, the advice that I gave yesterday based on “the current state of the research” could be contradicted by today’s research. My job isn’t just to find “what the research has said,” but to keep track of the ongoing conversation that current researchers have with prior conclusions.
In other words: “research-informed” teaching advice evolves over time. Occasionally, I have to retract advice I’ve given in the past.
Prior Conclusions: An Example
I’ve spent the last week talking with teachers from across the country about — among other topics — working-memory limitations, and ways to work around them.
One sensible approach: because stress hampers working-memory function, we can support working memory by reducing stress. Happily, we’ve got a few research-supported strategies for doing so.
I first heard about one strategy at a Learning and the Brain conference many years ago. Researcher Sian Beilock — currently the President of Dartmouth College! — teamed up with Gerardo Ramirez to test a surprising hypothesis. The title of their study says it all: “Writing About Testing Worries Boosts Exam Performance in the Classroom.”
Although the idea sounds counter-intuitive — in fact, both researchers were skeptical of the hypothesis — they found that letting students VENT their stress in writing dissipated the stress. More specifically, students given time to write about their anxiety about an upcoming exam did better on that exam than students who wrote about something else.
When I heard Dr. Beilock’s talk, the suggestion struck me as SO PRACTICAL. It doesn’t take much time. It helps anxious students without harming other students. It costs $0. What’s not to love?
Happily, we don’t have to rely on one study. More recently, DiMenichi and colleagues tried a similar strategy — and got similar results.
For that reason, I’ve been giving this advice for several years now: “we’ve got some good studies suggesting that letting students VENT their stress actually helps them dissipate their stress. The result: they do better on various academic measures.”
Today’s News: It’s Time for a Retraction
Recently, as I was preparing for an upcoming presentation, I looked again at my slide about the Beilock and DiMenichi studies. I realized that I hadn’t kicked those tires in a while, and I thought it would be helpful to go back and see what more researchers have found in this field.
Who knows: perhaps someone had recently done a helpful study that offers a creative new approach to this counter-intuitive suggestion.
I have several different strategies to evaluate research-based claims: you can find a quick guide here.
Recently I’ve been using Claude to jumpstart my searches. I asked it to do a research deep dive on this question: here’s the first paragraph of its answer:
The evidence for helping students manage stress through “venting” shows a complex picture with both promising findings and significant contradictory results.
While foundational studies by Ramirez & Beilock (2011) and DiMenichi et al. (2018) demonstrated large effects for brief expressive writing interventions, subsequent research reveals substantial methodological concerns, replication failures, and evidence that such interventions can be harmful for certain students.
Meta-analyses consistently show only small effect sizes, and the majority of studies with healthy college students find no significant benefits
Now, I didn’t stop here. I looked over this review — and then explored several of those studies on my own. (As always, I relied a lot on ConnectedPapers.com and Elicit.com.) Especially because I’m new to Claude, I didn’t simply accept its guidance without question.
But — sure enough: as I see the CURRENT research picture, we don’t have a consistent and persuasive pool of studies suggesting that students benefit from venting their stress.
Pointing Fingers
If we got a wrong answer, presumably someone is at fault. So: who is it? At whom should I point when I cry j’accuse?
Should we blame Ramirez and Beilock, for leading us astray with that initial study?
Should we blame DiMenichi for confirming their error?
Should we blame ME for spreading erroneous information?
I think the correct answer is: D) none of the above.
This turn of events is an entirely predictable possibility in the world of research-informed teaching advice.
Ramirez and Beilock didn’t do anything wrong. They had a hypothesis. They tested it. They reported their results. That’s what they’re supposed to do.
DiMenichi didn’t do anything wrong. She followed up that initial study. She reported her results. That’s what she’s supposed to do.
I (probably) didn’t do anything wrong. I saw interesting and well-designed studies. The results aligned with each other, and with my teaching experience. I told teachers about these studies. That’s what I’m supposed to do.
(I might have made this mistake: I might not have emphasized enough that the advice was based on a small number of studies — and therefore tentative.)
Those of us who base teaching advice on research should always acknowledge that some of our conclusions will be contradicted by future research. Occasionally, that kind of reversal just happens.
In fact: we can have confidence in research-based suggestions BECAUSE the research cycle will probably reveal false leads sooner or later. It’s not a perfect system. But as long as we stay realistic about its limitations, science can be self-correcting.
What This Means for You
First: when you get “research-based” teaching advice, ask how much research supports the claim. You don’t have to reject ideas with only a little research behind them — especially if the conclusions match your experience, or your school’s philosophy. But: be sure to check back in every now and then to see what subsequent research has found.
Second: although we can be sad that “expressive writing” probably isn’t a good strategy for helping students manage stress, we do have other strategies that can help. I’ll write about “cognitive reframing” in a future post…
Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211-213.
Those of us who study the intersection of research and teaching can get carried away all too easily.
After all: psychology research can provide enormous benefits to teachers and school leaders. More important, it benefits the students and families who depend on us. Because this research can help us understand — say — memory and attention, it can improve the way we teach and think about almost every school-keeping topic.
No wonder we get so excited. “Here’s the research!” we cry. “Now, go use it wisely!”
Of course, the gap between “having the research” and “using it wisely” is…ENORMOUS. If we understand research correctly, it can indeed help us know what to do. But…
… what if we’re relying on flawed research? or
… what if we’re using research that doesn’t apply to the biggest problem in our school? or
… what if we incorporate the research in our thinking, but don’t have a clear system to evaluate its effects, or…
This list goes on.
Those of us who want psychology and neuroscience research to improve education need a better system for “using research wisely” in schools.
Today’s News: Towards a Better System
I’ve been thinking about the broader difficulties of “using research wisely in education” because of a recent book: Evidence-Informed Wisdom: Making Better Decisions in Education. The authors — Bradley Busch, Edward Watson, and Matthew Shaw — see exactly this problem, and have lots of guidance to offer.*
This book includes several quotations and charts and anecdotes worth savoring; I want to focus on a Venn diagram that has really got me thinking. Here’s an incomplete version:
As you can see, Busch, Watson, and Shaw argue that we can find the “sweet spot” when we undertake three complex processes simultaneously.
In the first place, we need to identify the problem correctly.
Second, we should have a plan that relies on evidence and research.
And third, we need to monitor that plan.
Of course, each circle in this Venn diagram requires lengthy exploration. I myself have written a book about evaluating the “evidence” which we might use in the green circle. And Evidence-Informed Wisdom takes several pages to explore the “review cycle” in the blue circle: an idea drawn from Bruce Robertson’s work.
But the core insight here is: while each of these processes merits its own book, we get the most powerful effect from doing all three at the same time.
Mind the Gaps
Venn diagrams help us think because they label the places where circles overlap. Those labels typically emphasize the commonalities that the circles share.
For instance, a humorous Venn diagram considers “bank robbers” and “night club DJs.” Their commonality: “people who tell you to put your hands in the air.”
Busch, Watson, and Shaw have a different approach to that overlap. Check out this more-complete version (from page 80 of their book):
Notice that in this Venn diagram, the labels do NOT highlight the commonality between the two circles. Instead, they name the specific problems created by the absence of the third circle.
So: if I successfully identify the problem (the purple circle) and use evidence to plan a solution (the green circle), I’m wisely accomplishing two key processes. For that reason, I might make meaningful progress for a while.
However — and this is a BIG however — I’ve skipped out on Robertson’s review cycle (the blue circle). Without that additional process, my initiative might…
…lose focus. For example: my colleagues and I might all think we’re using retrieval practice…but we’re not all using it consistently and well.
…lose momentum. We could start out strong, but easily get distracted by the next shiny new thing. (I’m looking at you, AI.)
Or, if our school runs a scrupulous process to identify our most pressing problem and develop a clear “try-review-reflect” cycle, our good work might well help our students.
HOWEVER, because we didn’t use research to inform our decision making, we might be trying, reviewing, and reflecting upon a foolish plan.
In other words: this Venn diagram reminds us of three essential processes to keep in mind. And, it highlights the symptoms we’ll see and feel if we skimp on one of those processes.
As a bonus, it also offers a fun new way to think about creating Venn diagrams.
TL;DR
We absolutely should use research to inform our teaching and our school keeping.
The process of “using research” isn’t straightforward; it requires at least three complex processes used in a judicious and nuanced balance.
Busch, Watson, and Shaw have created a really helpful way to think simultaneously about all three.
When you and your school decide to follow the evidence-informed path, this diagram will guide your exploration.
* Two important notes:
As far as I know, I’m not related to Edward Watson.
I have a policy that I don’t review books written by friends. For that reason, I’m not reviewing this book. I am, instead, writing about a topic that it explores.
Let’s imagine that I — a 10th grade classroom teacher — want to motivate my students. I discover this list of motivational suggestions:
listen to students
give them time for independent work
provide time for students to speak
acknowledge students’ improvement
encourage their effort
offer them hints when they’re stuck
respond to their comments and questions
acknowledge students’ perspectives
Even better, I’m told that this list has RESEARCH behind it.
Before we dive in and adopt these suggestions, let’s pause for just a moment. Please look over that list and ask yourself: “if I were a student, would I feel more academically motivated if my teacher did any or all of those things?”
[I’m pausing here so you can review the list.]
Welcome back. If you like these suggestions, I’ve got some good news for you:
For a few decades now, Edward Deci and Richard Ryan have developed self-determination theory: a theory of academic motivation that gets LOTS of love. The headlines sound like this:
“If we want to improve our students’ academic motivation, we should be sure that they feel
AUTONOMY,
RELATEDNESS, and
COMPETENCE
in the classroom.”
Those three feelings — again: autonomy, relatedness, and competence — foster all sorts of good human outcomes, including academic motivation.
I have even better news. Unlike almost all psychology theories, self-determination theory uses these words in their everyday meaning. I don’t need to offer several paragraphs of translation to describe what Deci and Ryan mean by “autonomy.” They mean EXACTLY what you think they mean.
(This news might not sound like a big deal. But — trust me — in the world of cognitive psychology, few scholars offer clear terminology.)
I’ve described my three-word summary as “the headlines” of Deci and Ryan’s theory. But my blog title says that three words aren’t enough. What happens when we go beyond those three words?
Not So Fast
We should start by admitting that LOTS of people who talk about self-determination theory don’t get past these headlines. You’ll read SDT summaries that list and define those words, and then conclude with uplifting advice: “teachers — just go do that!”
Alas, I think even the headlines themselves raise pressing questions.
For instance: to my ear, “autonomy” and “relatedness” suggest two contrasting vibes. The words don’t exactly contradict one another — they’re not antonyms. But it’s easy to imagine a teaching strategy that INCREASES one but REDUCES the other.
For instance, let’s look at point #6 on the motivational list above: “offer struggling students a hint when they’re stuck.”
On the one hand, that advice could certainly foster a sense of relatedness in the classroom.
A student might think: “I was struggling, and the teacher noticed and helped me. This is a great class!” This student didn’t precisely use the word “relatedness,” but that vibe is in the air.
At the same time, offering hints might lessen another student’s sense of autonomy.
This student might think: “Does this guy think I’m completely helpless? I would have gotten it if he just left me alone. Sheesh.” So much for feelings of autonomy.
Let’s throw “competence” into this mix. If I offer a struggling student a hint, she might think:
“Oh, wow — I can solve this problem now! I feel so happy and successful!” She exudes an aura of joyful competence. Or
“Oh, wow — my teacher thinks I’m so hopeless that he has to offer me the answer on a platter. I must be the biggest loser in this class.” Her feelings of competence have clearly drained away.
Yes, you will hear “autonomy! relatedness!! competence!!!” offered as a formula to enhance student motivation. But I don’t think this formula — or ANY formula — works simply as a formula.
Reading the Fine Print
SDT’s emphasis on autonomy, relatedness, and competence DOES provide an excellent place to start our pedagogical thinking. And, we need to keep going.
Each of the eight strategies listed above has been researched as a way to enhance student autonomy. Before we use any of them, however, I think we should stop and ask reasonable questions:
What are the potential conflicts here? Will this strategy enhance one of the Big Three, but harm another?
What are the individual differences here? Will this student respond to the strategy by feeling more competent, while that student responds by feeling more foolish?
What are the cultural differences here? Will students in — say — Korea find a particular autonomy strategy confounding while their counterparts in — say — Brazil find that same strategy encouraging? (To be clear: I’m being entirely speculative here. I don’t know enough about either of those cultures to even attempt an example.)
I’ll offer one more example, simply to emphasize the concerns that trouble me.
Strategy #2 says that we should “give students time for independent work.”
On the one hand, what could possibly be more foundational to teaching? OF COURSE students need time to work on their own.
As a motivational benefit, all this independent work might make them feel autonomous and competent.
On the other hand, students who lack appropriate prior knowledge could be overwhelmed and demotivated by all that time to work alone. What should they be doing? How do they do it?
Motivationally, they have no partner to rely on (so much for “relatedness”), and feel their own lack of skill all too forcibly (goodbye “competence”).
In other words: if I worry that my students lack motivation, I shouldn’t simply look at that list and pick one that sounds uplifting and research-y. “Oh, yes, I’ll give them independent work time — they’ll feel more motivated!”
Instead, I should look at the list and ask myself those follow-up questions. In brief, does this uplifting and research-y teaching strategy fit my students’ current educational and motivational needs? Have I considered both upsides and downsides?
In fact: these concerns about self-determination theory point to a broader challenge that teachers face when trying to implement research-backed strategies.
The Bigger Picture
In writing this post, I am using self-determination theory as an example of a larger problem. Research and researchers certainly can — and should!! — offer classroom teachers practical guidance.
And: we should always filter that guidance with friendly-but-persistent questions:
How good is this research? How many studies arrive at roughly the same conclusion?
Will this guidance benefit MY students (not just someone’s students)?
Does the benefit over here create a problem over there?
Does the cost — in money and in time — outweigh the potential benefit?
And so forth.
In brief: let’s use research to inform our practice. And: let’s also commit to being educators who dig deeper and ask tougher questions. The best teaching happens not when we follow scripts and formulas, but when we think carefully about the unique circumstances and students in our classrooms.
Reeve, J., & Jang, H. (2006). What teachers say and do to support students’ autonomy during a learning activity. Journal of educational psychology, 98(1), 209.
Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary educational psychology, 61, 101860.
Teachers often hear that we should divide students into different groups and teach them differently.
Most famously, learning styles theory (now thoroughly debunked) says that we can think of students as auditory, visual, or kinesthetic learners. It further suggests we should align our instruction with their natural learning style: visual learners get visual instruction, and so forth.
(To be very clear: visual learners should NOT get visual instruction, because visual learners do not exist. We all learn through our senses, and the best one to prioritize depends on the content being learned.)
Honestly, most advice that sounds like “sort-students-and-teach-them-differently” lacks good research support.
At least one obvious exception to this rule demands attention: prior knowledge.
Imagine this hypothetical:
You and I are taking a class on the Spanish Civil War.
You know considerably more about Spain, fascism, military history, and Picasso than I do.
That is: you are, relatively speaking, an “expert.” I am, relative to you, “a novice.”
In this hypothetical, it just makes sense to suspect that we will benefit from different instructional strategies.
This argument goes by the name of the “Expertise Reversal Effect.” That is: instructional supports that benefit novices instead interfere with learning for experts.
I will need A LOT of instructional support, and my learning will suffer if I don’t get it.
You will probably learn best if you get a fair amount of freedom and intellectual room to knock about on your own.
If you have to sit through all the basics that I find essential (“here is a map of Spain; notice Madrid here, Barcelona here, and Guernica here”), you’ll feel bored and constrained. And you’ll learn less than you would with greater freedom.
So far in this blog post, the expertise reversal effect is a hypothesis. Is it supported by evidence?
Does It Work in Theory?
In a minute, I’ll explore the data. (Preview: yes, this hypothesis has good research support.)
Before we get there, let’s pause to consider WHY the expertise reversal effect might be true.
First: cognitive load theory tells us we want to get rid of needless extra mental work (“extraneous cognitive load”) so that students can focus on the essential mental work (“intrinsic cognitive load”).
In my Spanish Civil War hypothetical, my working memory will be threatened by all the new information. I will learn more if the teacher organizes it and focuses on the essentials.
You, however, have lots of working-memory headroom — because you store more relevant information in long-term memory than I do — so extra information isn’t always extraneous. It might, in fact, help you figure out new and unforeseen ideas.
Second: self-determination theory tells us that student motivation benefits from their feeling autonomy, competence, and relatedness.
For me — the Spanish Civil War novice — I will feel more competent if a supportive teacher helps me out, and more related to my teacher as well.
All that extra support, however, might reduce feelings of autonomy for an expert like you.
Now that we have some reason to think that the expertise reversal effect has a plausible theoretical background, let’s see if we’ve got FACTS to support it.
Crunching the Numbers
You want data; we’ve got data.
More precisely, Tetzlaff, Simonsmeier, Peters, and Brod have data from a recent meta-analysis. This team found 60 relevant studies; those studies yielded 176 relevant effect sizes and included almost 6,000 participants. (“Effect size” means, roughly, what the name suggests: the size of the effect of teaching one way or another.)
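For readers who don’t speak d-value, here is a rough sense of what that number measures. A d-value (Cohen’s d) expresses the difference between two group averages in units of the scores’ spread. What follows is a minimal sketch of the standard formula; the meta-analysis itself uses more elaborate corrections, so treat this as an illustration rather than the authors’ exact calculation:

$$
d \;=\; \frac{\bar{x}_{\text{high-support}} \;-\; \bar{x}_{\text{low-support}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^{2} \;+\; (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
$$

In plain terms: a d-value of roughly 0.5 means the two groups’ average scores differ by about half a (pooled) standard deviation.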
In brief:
Yes: students with a LOW level of prior knowledge learn more from high-support pedagogy than low-support pedagogy. (Stats folks: the d-value here is 0.505.)
Yes: students with a HIGH level of prior knowledge learn more from low-support pedagogy than high-support pedagogy. (D-value: -0.428.)
Those d-values are traditionally in the “medium size” range. I speak d-value, and I pay attention to numbers that size.
The expertise reversal effect gets bigger as students get older. The effect was strongest in higher ed; we don’t honestly have lots of studies in primary school.
It also varies somewhat depending on academic discipline. (There’s less of an effect in language and humanities.) But — roughly speaking — we can say that “beginners benefit from higher-structure pedagogy; experts benefit from lower-structure pedagogy.”
Not So Fast
But wait just a minute. Why did I slip the phrase “roughly speaking” into that previous sentence? We’ve got all those data — shouldn’t we just do what they say?
Let me focus on one of several reasons.
Sometimes research considers variables that are either one thing or the other:
The switch is either on or off.
The school is either in Hawai’i or in Delaware.
The subject being studied is either chemistry or Latin.
At other times, research considers variables that fall along a continuum.
The research might sort teachers into a “cheerful” group and a “glum” group. But not all “cheerful” teachers are equally cheerful. Heck, on some days, they might be glum.
Research might describe playground behavior as “cooperative” or “competitive,” but not all cooperation is equally cooperative. Heck, some cooperation is a kind of competition. (From the little I’ve seen of it, the entire Survivor franchise depends on competitive cooperation.)
And so forth.
Notice that — in my “roughly speaking” summary — ALL OF THE KEY TERMS are continuous.
There is no exact line separating LOW prior knowledge from HIGH prior knowledge. Prior knowledge exists on a continuum.
There is no exact line separating LOW-structure pedagogy from HIGH-structure pedagogy. Pedagogy exists on a continuum of support.
For that reason, classroom teachers can’t simply “do what the research says.”
I might say to myself:
“My 1st period class has slightly more prior knowledge than my 3rd period class. For that reason, my instruction 3rd period should be somewhat higher structure than during 1st period. But they’re both novice groups compared to my advanced class 5th period, which should have lower structure than both…”
These delicate — even hunchy — internal monologues can be guided by the expertise reversal effect. But that effect and this meta-analysis don’t translate into step-by-step rules that teachers can follow.
Now, some signs will be obvious.
If my students can’t define basic terms — “what’s a noun again?” — they’re clearly novices.
If they spontaneously combine ideas in ways we haven’t discussed — “I want to strengthen this sentence by using the participle as a gerund” — they’re well into expert territory.
For this reason, I should begin classes and units with questions designed to identify my students’ place along the low-to-high prior knowledge continuum. And, I should be reasonable about the fuzzy limitations of that placement.
In Sum
This meta-analysis suggests that the expertise reversal effect is a thing.
When possible, teachers should provide beginners with relatively more pedagogical structure, and experts relatively less.
Part of our own developing expertise will be adapting — lesson-plan by lesson-plan, class by class, unit by unit — to those subtle gradations.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
Tetzlaff, L., Simonsmeier, B., Peters, T., & Brod, G. (2025). A cornerstone of adaptivity–A meta-analysis of the expertise reversal effect. Learning and Instruction, 98, 102142.
If you’ve watched educational videos lately, you’ve probably noticed that captions are EVERYWHERE. In fact, caption technology has evolved in recent months so that captions now highlight individual words as spoken, rather than present entire phrases or sentences one after another. But what if this well-intentioned practice is actually interfering with learning?
Given their prevalence, I’ve just assumed that we have good reasons to include captions. At the same time, people do LOTS of things that contradict evidence — so perhaps the time has come to investigate my assumption.
I started by casting a wide net. I went to Google Scholar and put in “captions and subtitles.” The first hit sounded a confident tone: “Video Captions Benefit Everyone.”
To be sure we understand the confidence of this study, let’s read the first two sentences:
Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults).
More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video.
If in fact captions promote understanding, focus, and recall for practically everyone who watches, we’ve got as close to a slam dunk as I can imagine.
Because this blog focuses on education, I’m interested in captioned videos used to help K-12 students learn stuff. That is: research into captions when people watch movies for fun doesn’t fit my question.
For the time being, I’m also going to focus on same-language captions: a video where the narrator speaks in English and the captions show the narrated words in English. Of course, a student who speaks Spanish at home might benefit from seeing Spanish subtitles for a video in English, but that’s a very different research question.
For similar reasons, I’ll start by focusing on research into neurotypical students. I can imagine that students with particular diagnoses might — as part of their learning profile — have different requirements than their peers. It’s probably helpful to start by understanding how most people learn, and then adapt that practice as needed for individuals.
(To be clear: we will ultimately be interested in different-language captions, and in the potential benefits for different categories of learners. To start with, I simply want the most basic question answered.)
I’d like to find several rigorously designed studies all pointing in the same direction. I want sample sizes that rise above the trivial; I’d like plausible control groups; I’d like objective measures — not mere self-report. And so forth. After all, I shouldn’t tell you that captions are (or aren’t) a research-informed instructional practice if the research I’m citing doesn’t meet basic standards.
Finally, if I’m really lucky, I’d like to have both research and theoretical frameworks pointing in the same direction.
Now that we’ve got some parameters in place, let’s return to that study and see what we find.
Working the Steps
I spend lots of my time double-checking (or triple-checking) “research-based” claims, so I’ve got a process to follow.
I won’t walk you through each step of the journey — it took a few hours — but the results are impressively clear.
First:
We have essentially no research that fits the criteria above.
No, really: we don’t have a pool of persuasive research giving us an answer one way or another.
That “Captions Benefit Everyone” study focuses on foreign-language captions, or on non-neurotypical learners, or on college students, or on self-report. LOTS of self-report.
I should explain, by the way, why self-report data don’t persuade most researchers: people are REALLY bad at knowing what helps them learn. College students might THINK they pay more attention, or remember better, when they see vids with captions. But unless we actually measure their attention, understanding, or learning, we shouldn’t actually make claims about attention, understanding, or learning.
When I asked Elicit.com to research this question, I found the same problem. The studies it summarized focused almost entirely on Chinese students watching videos with English captions. That research helps answer an important question — but it’s not the question I asked.
Second:
The study that comes the closest to answering my question suggests that captions might interfere with reading for not-at-risk students.
This study suggests that captions DO help at-risk 2nd and 3rd graders recognize words. But the not-at-risk students recognized fewer words with the captions on. Obviously we’re glad to have strategies to help at-risk students. But that’s not the big-picture question we started with.
(I’m honestly puzzled that captions benefit struggling readers…but because I don’t teach reading I’m not going to have a strong opinion here.)
Let’s Talk Theory
I noted above that I’d like to have both well-done empirical research AND a theoretical framework to answer my question.
Richard Mayer’s “redundancy principle” tells us that presenting the same information both verbally and visually at the same time increases cognitive load.
In his excellent book Sweller’s Cognitive Load Theory in Action, Oliver Lovell gives a common example: conference presentations.
It’s a common practice for presenters to provide written information on their slides and then to read out that information during the presentation. To conventional audiences, this represents the presentation of redundant information. Only one presentation format is needed, either the written words, or the spoken words. (62)
That example sounds A LOT like captions, no?
A full explanation of the redundancy principle requires a blog post of its own. The short version goes like this:
Because I read faster than others speak, I’m constantly reading ahead of the speaker’s current point in the text. I must therefore stop and go back multiple times. All this back-n-forth adds to my cognitive muddle.
With captions, I have to focus either on the WORDS that the captions present or the IMAGES in the video — and that back-n-forth also adds to the cognitive work I have to do.
For the dual-coding folks reading this post, remember: dual coding holds that words and images should complement one another — not that they represent precisely the same information.
Putting It All Together
I found NO research with objective measures of neurotypical K-12 learners reading same-language captions. The one study that comes closest hints — but does not say — that captions might interfere with word recognition for early readers.
Mayer’s redundancy principle gives us a good reason to be VERY skeptical of claims saying that “captions benefit everyone.”
If you find research that matches the criteria above, please send it my way. I always want to keep this blog as up-to-date and accurate as possible.
In the meanwhile, here are my suggestions:
a) Be wary of claims that captions benefit most learners — especially neurotypical K-12 learners reading same-language captions.
b) Be ESPECIALLY cautious if the video includes cognitively complicated material — where cognitive load is already high.
c) Be aware of legal requirements, especially for students with diagnosed learning differences.
Also: I myself would be more open to the benefits of captions for students watching videos in languages they don’t speak fluently. I haven’t done a deep dive into that research pool, but common sense suggests such captions could have real benefits.
Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1), 195–202. https://doi.org/10.1177/2372732215602130
Linebarger, D., Piotrowski, J. T., & Greenwood, C. R. (2010). On-screen print: The role of captions as a supplemental literacy tool. Journal of Research in Reading, 33(2), 148–167.
Lovell, O., & Sherrington, T. (2020). Sweller’s cognitive load theory in action. John Catt.
I’ve been staring at my to-grade pile—essays, exams, books I’ve been meaning to read, skills I want to develop—and honestly, it’s not that I don’t want to begin. I just… can’t. I open a document, then blink, and suddenly it’s dinner time. I’ve read all the Getting Things Done books, but what is it that gets me On Task when I already know what to do? Sound like a familiar question? That tension between desire and initiation is exactly at the heart of David Badre’s On Task: How Our Brain Gets Things Done.
Badre, a cognitive neuroscientist, invites us into that murky space between knowing and doing. He shows how our brains—particularly the prefrontal cortex—juggle goals nested inside other goals (make coffee, generate a lesson plan, grade essays), and why that juggling sometimes comes crashing down. He doesn’t promise a self‑help checklist; instead, he offers compassionate clarity: our executive function is powerful and fragile, built for hierarchies, stability‑vs‑flexibility trade‑offs, and moment‑to‑moment cost‑benefit calculations. Badre is willing to wade into the philosophical and biological depths of what it means to have a mind at all.
Throughout the book, Badre asks: are we really steering our lives, or are we just riding the rails of our biology and past conditioning? When my students and I discuss biopsychology or epigenetics, we circle the same tension: with so much shaped by brain circuitry, classical conditioning, even the hidden influence of our genes and society—what does it mean to choose? Badre is honest about these boundaries. He uses case studies—like patients who, after prefrontal brain injury, can explain their intentions but can’t act on them—to explore the razor-thin margin where knowledge ends and true agency might begin. He draws on neuropsychological history, from Penfield’s sister to the famous EVR, and roots these questions in the living, vulnerable architecture of the brain.
You will get a strong foundation along with these great stories: Badre digs into computational models and the messy, ongoing debates about how cognitive control is organized. He walks us through the brain’s hierarchies—how the prefrontal cortex can set broad, abstract goals and then decompose them into practical action—and then pulls back to ask what these models do (and don’t) explain about everyday life. You get stability and flexibility, multitasking, inhibition and switching, the information retrieval problem, the limits and benefits of control across the lifespan. Some readers call the book demanding or dense in spots, but that’s part of the payoff: Badre trusts us to join the scientific conversation, not just spectate. And even just getting the gist of tough parts will change your thinking.
In my daily life, all this plays out like driving home on autopilot—forgetting the road, feeling the inertia of routine, and ending up somewhere unintended. In classrooms, I see students wrestling with the same forces: caught between knowing what to do, procrastinating, and actually acting. In pandemic classrooms and amid the distractions of current politics, we all feel a deficit in our cognitive systems—our routines unravel, our attention frays, and we realize how much effort it takes to get On Task when the scaffolding disappears. If you’ve ever wondered why a simple act like making coffee can feel so complicated—or what happens in a brain when you try to stop one task and start another—this book offers insight, not just explanation.
What stays with me is that Badre refuses false optimism. He doesn’t say, “just build more willpower.” Instead, he hands us a mirror: our executive function is shaped—and we can shape it too, through environment, practice, small routines. That kind of insight feels hopeful because it’s real. It demands curiosity, not quick fixes.
So here’s my punch at the end, inspired by his closing: On Task feels less like a how-to guide and more like an invitation—to observe our own hidden machinery, to notice how easily routine can slide into unawareness, and to ask: Who am I when my executive function is truly steering? What small moments—making coffee, grading papers, reading a chapter—might I reclaim to bring more awareness, more agency, more grounded action?
If you’re familiar with “I know what I should do, but I just can’t!”—this is a book to read. It doesn’t diagnose you. It doesn’t sell you magic. It helps you see the space where your choices live—and getting to know that space feels like a foundation worth building on.
When I first started teaching — it’s been a few decades now — nobody ever talked about trauma. Quite literally, I do not remember a single faculty meeting or PD day or all-school read or … really … ANYTHING that raised the subject.
If you had asked me back then (the ’80s and ’90s), I would have said: “as far as I can tell, trauma happens to some people somewhere — and that’s terrible. But I don’t know of trauma happening here to our students. It’s just not on the radar.”
In the last twenty years, the world of education has done a 180-degree pivot. We talk about trauma, and trauma-informed education, a lot. We’ve got books and conferences and speakers and data.
Trauma — it seems — is everywhere. Whereas in the ’80s, trauma was a “someone else, somewhere else” problem, today it seems to be a “lots and lots of people, right here” problem.
The transition threatens whiplash. How can we manage it?
A Surprising Perspective
Back in the fall of 2024, Prof. George Bonanno presented his research at our Boston conference. His hour-long keynote included a number of surprising findings, and prompted me to buy his book. At last (!) I’ve had a chance to read it and to understand those surprising findings more deeply.
As I wrote back in December, Bonanno finds that
Roughly 10% of people who experience trauma have enduring symptoms;
Less than 10% start without symptoms, but symptoms develop over time and persist;
Roughly 20% initially experience symptoms, but recover over two years;
The rest never respond with serious symptoms.
In other words, most people do NOT respond to threatening events with PTSD. And, many who DO initially experience PTSD recover within months.
For these reasons, Bonanno doesn’t speak of “traumatic events” but of “potentially traumatic events.” After all, such events might lead to a trauma response…but most of the time they don’t.
By the way: Bonanno doesn’t arrive at these conclusions by looking at marginally threatening experiences. Two of his data sets come from people who experienced the 9/11 attacks directly — as in, they fled the buildings after the planes hit — and from members of the military who served in combat.
Even in these populations, he finds that people are mostly resilient — both in the short term and over the longer term.
In brief: we can start to manage our whiplash by realizing that PTSD is obviously very bad, but not remotely inevitable. Our students and colleagues and community members are likelier to respond to potentially traumatic events by being stable and resilient.
The Non-Recipe Recipe
This initial insight leads to an important question: exactly WHY are some people more resilient than others? If you and I go through roughly similar “potentially traumatic experiences,” why do I develop PTSD symptoms while you don’t?
To ask the same question another way: is there a formula to follow? A set of steps that leads away from PTSD? A recipe?
Bonanno answers this set of questions with nuance, sympathy, data, wisdom, and humility.
In the first place, he argues that — no — we don’t have a one-size-fits-all series of steps. In fact, he explains in thoughtful detail why no one pathway will work for all people in all circumstances.
In fact, he specifically rejects this approach. Yes: individual research studies show that character trait X or mental habit Y is “associated with a reduction in ultimate PTSD symptoms.” But the list of X, Y, and Z goes on at remarkable length — a few dozen letters at least. (Our poor alphabet taps out at 26.)
Instead, Bonanno’s research says that resilient people have a flexible collection of traits and perspectives that they use in different ways at different times.
The Return of the “Mindset”
Bonanno summarizes this collection of traits with the phrase “flexibility mindset.” He defines the word “flexibility” quite carefully:
I’ve used the word “resilience” throughout this book to describe a pattern of continued good mental health after potential trauma, or, more precisely, a stable trajectory of healthy functioning across time.
Flexibility is not resilience. Flexibility is the process we use to adapt ourselves to traumatic stress so that we can find our way to resilience. (121)
Bonanno’s flexibility mindset rests on three connected beliefs:
“Optimism about the future,
confidence in our ability to cope, and
a willingness to think about a threat as a challenge.”
No one of these beliefs by itself is enough. And, no one of them is a straightforward first-A-then-B-then-C process. But — combined with nuance and maturity — they result, over time, in better mental health outcomes.
Bonanno, in fact, devotes several chapters to specific stories of people who successfully (or not) use a flexibility mindset to manage the potentially traumatic events in their lives.
A Big Caveat
Whenever I write a book review, I always try to include at least one point where I disagree with the author, or think the book could be better. In this case, that’s a surprisingly easy goal to meet.
Here’s why: I think the book’s TITLE is doubly misleading.
In the first place, Bonanno doesn’t for a minute suggest that we can “end” trauma. He in no way claims that you can follow his simple steps to bring trauma to an early end — either for an individual or a society. Quite the contrary, he argues that the process requires endurance, frequently includes grave setbacks, and might not work for everyone.
In other words, The End of Trauma isn’t about the end of trauma. It’s about rethinking the inevitability of trauma, and reframing strategies to cope with trauma.
In the second place, the book’s subtitle includes an equally misleading phrase: “the new science of resilience.”
Bonanno says over and over that he’s NOT proposing anything radically new. His “flexibility mindset,” after all, suggests that we be optimistic, confident, and inclined to think of threats as challenges. None of those insights — or the word “mindset” — is new.
The novelty in Bonanno’s work lies first in his data, which find that PTSD is a relatively unusual response to potentially traumatic events — not, as we’ve heard so often, an inevitable one.
Bonanno also makes a novel argument when he focuses on broad flexible categories (“optimism, confidence”) rather than specific steps (“first do this, then do that, then try t’other”).
I don’t doubt that The End of Trauma is a more saleable title than Rethinking the Inevitability of Trauma and Proposing a Flexible Path to Work Past It within 2 Years or So. But that title would be more accurate.
TL;DR
Bonanno’s book The End of Trauma isn’t about the end of trauma. It does, however, make a compelling — and ultimately optimistic — argument: we’re mostly resilient; we can bounce back from potentially traumatic events; and we’ve got a challenging-but-flexible framework to guide us as we do so.