Too Good to be True? The Effect of Tetris on PTSD Symptoms
Andrew Watson

Teachers and school leaders turn to research — in part — because it can help us solve problems. Of course, the bigger the problem, the more urgent the need for a solution.

For this reason, we’re easily drawn to the more dramatic “research-based” claims. When a big problem has an easy solution — an easy solution with RESEARCH behind it — we’re tempted to adopt that new strategy right away.

In this blog post, I want to suggest — first — that we should slow down. And — second — that we should ask some important questions before we make big changes. After all: if the problem is big and urgent, we could make it worse by enacting an ill-considered solution based on flimsy research claims.

First: Slow Down

Carl Sagan famously said: “extraordinary claims require extraordinary evidence.”

Let’s imagine that I tell you: “You can add 20 points to your IQ just by eating raisins every day!” That claim is a biggie. You’ll want LOTS of VERY STRONG evidence to support it. (To be clear: I don’t know of any way to increase IQ at all — much less a raisin-centric diet.)

Dan Willingham made a related point in his book Why Don’t Students Like School? Willingham argues that — over a few hundred years — teachers have tried all sorts of strategies to help students learn. For this reason, it’s difficult to imagine that a simple strategy will have a surprisingly large effect. After all: if a one-step solution works so well, why hasn’t someone tried it — and reported on its benefits?

For this reason, it’s likelier that research will find incremental benefits to a new teaching strategy — and that the new strategy will require more than a simple tweak or two. In other words: we’ve (probably) already figured out the big, easy stuff. Research will help with the small, difficult stuff.

So, let’s consider the claim implied in my title: “playing Tetris can reduce PTSD symptoms.”

That claim pushes up against both Sagan’s and Willingham’s guidance. To me, at least, the idea that Tetris (of all things) helps with trauma — that’s extraordinary all right. And the idea that something as simple as Tetris will have an effect strains credulity. If reducing trauma symptoms is so easy, why haven’t we figured that out yet?

For both of these reasons: slow down.

Second: Questions to Ask

Most researchers take care to speak very precisely about the scope and limitations of their claims. Before we get carried away, we want to be sure we understand exactly what they’re claiming.

Here’s a handy approach. All published research begins with a one-paragraph summary of the study. This summary goes by a rather odd name: the “abstract.” So, read the abstract carefully and focus on the researchers’ precise claim.

I do need to warn you: an abstract is often — well — rather abstract. The researchers are condensing hundreds of hours and thousands of words and data points into a dense paragraph. The abstract will not delight you. But it will tell you what you need to know.

In this study, the researchers claim that a single session of Tetris reduces the frequency of intrusive memories of traumatic events. Notice several limits here:

  • They don’t make broad claims about PTSD. Instead, they make claims about one specific symptom — intrusive memories.
  • They don’t claim to cure PTSD. Instead, they claim to reduce the frequency of intrusive memories.
  • They don’t claim that the strategy works for everyone. Instead, they claim it worked for nurses in Swedish hospitals who had experienced traumatic events while treating Covid-19 patients.

All this specificity will be very helpful for us. A Tetris salesman might show up at our school brandishing this paper and say: “we can cure your students’ PTSD — research says so!” Because we read the abstract carefully, we know this claim just ain’t so.

More Questions: Trust the Process

Researchers typically do not attempt Great Leaps Forward. Instead, they take incremental steps.

So, if one research team shows that “retrieval practice helps college students remember Spanish vocabulary words,” other research teams might investigate these questions: “Does retrieval practice help…

  • college students remember physics vocabulary words?”
  • 5th graders learn English spelling rules?”
  • budding baseball players recall obscure-but-important rules?”

And so forth. They’re unlikely to ask “does retrieval practice help marmosets improve their mindful meditation practice during a full moon?” That’s just too big a leap.

If you see a “too good to be true” claim, it’s often helpful to investigate the research history behind it. If this study right here takes only a small step beyond previous research, its modesty and carefulness inspire greater confidence.

Because research studies can be REALLY tedious to read, I’ve found AI to be a big help in cutting through the glutinous jargon and finding important facts. For this project, I often use chatpdf.com — although other AIs work too. In this case, I asked chatpdf this question:

According to this study, how much research has been done on the tetris-helps-reduce-PTSD-symptoms theory before this study?

Turns out, this study has LOTS of relevant work behind it:

  • a theory about how trauma and PTSD affect sensory processing, then
  • a small-scale study to see if Tetris has any effect, then
  • a “proof-of-concept” study in a psych lab, then
  • a few more steps.

In other words, this “too good to be true” idea didn’t come out of nowhere — it has a well-established foundation supporting it. My level of confidence is increasing…

Check the Field

As I’ve written before, I rely on three websites to see what other researchers are thinking about particular research topics. The first of the three — Scite.ai — shows that no one has yet responded to this specific study. Because this research is both very new and very niche-y, this silence isn’t totally surprising.

My second go-to website is connectedpapers.com. A quick visit there shows that, in fact, other researchers are exploring similar lines of inquiry. They don’t all get the same results; in fact, they don’t all get positive results. But this tetris study isn’t a bizarre outlier. It’s part of a small-but-serious cluster of studies trying this approach.

Finally, Elicit.com helpfully answers direct research questions. When I asked it “Does playing tetris reduce PTSD symptoms over non-trivial periods of time,” it scrounged the research web and thought for about 8 minutes. It then offered this helpful one-sentence summary:

Multiple studies demonstrate that Tetris gameplay combined with memory reactivation significantly decreases PTSD-related intrusive memories, with effects lasting up to six months.

It then listed the studies it had analyzed to arrive at that conclusion, and the various parameters that it found helpful and persuasive.

So, at this point I’m much more confident than I was 20 minutes ago. I know that the researchers are taking slow, careful steps; and that other researchers are taking other careful steps along parallel paths. We’re not alone in this forest.

A Recap, and Two Caveats

The steps so far:

  1. When I see a surprising claim, I SLOW DOWN.
  2. Next, I check the abstract to understand EXACTLY what the researchers are claiming. (Newspaper and blog headlines often misrepresent researchers’ claims quite badly.)
  3. Third, I look for the history behind this study to be sure that the idea has a good foundation.
  4. Finally, I use other web resources to see if this idea is a quirky outlier or part of a meaningful body of research.

I should also highlight two caveats.

  • First: AI is evolving very quickly. As of today’s date (May 12, 2025), these tools do what I’ve described, and are the best ones I know. By May 13, 2025, they could behave very differently, or have been superseded by better tools.
  • Second: I have explored this study as an interesting example of a “too good to be true” claim. That is: when I first saw this research, I assumed the claims would prove to be exaggerated or silly. (I am, at least for now, reassured that this is a serious line of inquiry.)

However, I am NOT suggesting that schools use Tetris to treat students’ trauma symptoms. I’m not qualified to make any recommendations about treating trauma.

And, to be clear, the researchers aren’t making that suggestion either. They’re still trying to figure out if this simple idea might be helpful (to nurses). We need LOTS more research.


Kanstrup, M., Singh, L., Leehr, E. J., Göransson, K. E., Pihlgren, S. A., Iyadurai, L., … & Holmes, E. A. (2024). A guided single session intervention to reduce intrusive memories of work-related trauma: a randomised controlled trial with healthcare workers in the COVID-19 pandemic. BMC Medicine, 22(1), 403.

Will Chess Make Me Better at Baseball?
Andrew Watson

Imagine for a moment that I’m coaching college baseball.


I’ve noticed that my players have lots of specific skills, but lack the ability to make strategic decisions from an above-the-playing-field perspective. How can I help them improve?

Suddenly I think: aha! I’ll have my boys of summer learn chess. What better way to train them up in an above-the-field view? What better way to improve strategic decision making with that view in mind?

After all, the mental skills that they learn playing chess will — no doubt — transfer to playing ball.


I don’t know if a coach has actually tried this experiment. But I do know that a similar thought process drives A LOT of inquiry in the world of cognitive science.

If I want my students to learn history — or science, or tuba — I could teach them history, or science, or tuba.

Or, perhaps, I could boost their underlying brain power in some general way that will ultimately transfer to their history learning.

That is: rather than train their historical knowledge and thinking, I could enhance the cognitive resources with which they do the historical thinking. VOILA! More learning.

In my analogy, I could enhance my baseball players’ strategic vision and thinking (by teaching them chess); they can use their chess-enhanced vision as they play baseball.

So many possibilities…

Where to Begin; How to Proceed

If I want to pursue this path, I have LOTS of “cognitive resources” to choose from. Should I train my students’ attention? Or, one of their executive functions?

A research team has recently tried this approach with “cognitive control”: “a set of processes critical for guiding thoughts, feelings, and actions in a flexible, goal-directed manner.”

For their research method to be persuasive, it should meet several criteria. It should

1. Include enough people to make its results credible:

If a study includes 20 or 30 people, the results might be interesting, but won’t be compelling.

2. Test its results in both the short term and the long term:

When I train my baseball players with chess, I want them to preserve their chess-enhanced vision for a long time. If they lose that vision as soon as they stop playing chess, then they haven’t really improved their cognitive function in a meaningful way.

3. Test those results in meaningful areas:

When I train my first baseman in chess, I’m happy if he gets better at chess. But I really want him to get better at baseball.

To be a little bit technical, I’m glad if I see “near transfer”: that is, chess training helps my players get better at speed chess. But I care about “far transfer”: that is, chess training helps my players spot the best place to force an out during a tricky fielding play.

Better and Better

This research team — led by Nikolaus Steinbeis — included some extra steps as well. I’m DELIGHTED to see that this study…

4. Includes a plausible comparison:

Researchers often take a worrisome shortcut. They try out a Cool New Thing — say, a curriculum, or a pedagogical strategy. When students learn more, they say: “look, our Cool New Thing enhanced learning.”

But this claim disguises a logical flaw. The benefits could come from doing SOME New Thing, not THIS New Thing.

To overcome this logical flaw, researchers should have an “active control group.” That is: some participants try THIS new thing, while another group tries a PLAUSIBLE NEW ALTERNATIVE thing.

If one group benefits more than the other, we can say that — yup — the change came from the curriculum itself, and not just from the newness.

5. Psych + Neuro

Wouldn’t it be wonderful if researchers checked out BOTH psychological measures (“do the students learn more?”) AND neurobiological measures (“do their brains physically change?”)?

Well, I’m happy to report that Team Steinbeis did ALL OF THESE THINGS.

1: The study included 250+ children, ages 6-13. That’s not a HUGE number, but it’s noticeably larger than most studies.

2: They tested participants at the end of the study, and again a YEAR later. In my experience, very few studies have that kind of time horizon.

3: They checked to see if their cognitive control game improved participants’ cognitive control (“near transfer”). AND they also checked whether it helped their learning, mental health, decision making, creativity, and resilience in the face of COVID stressors (“far transfer”).

4: This study included an active control group. Half the students played a video game with instructions that focused on improving their cognitive control. The other half played the same video game with instructions that focused on improving their response time. That’s a plausible alternative, no?

5: Researchers scanned relevant brain regions — inferior frontal gyrus, cingulo-opercular and fronto-parietal networks — to see if the training changed structure or function. (Don’t worry: I have only a dim understanding of what those words mean, and I’ve been in this field since 2008.)

Results, PLEASE

I’ve gone into more detail than usual because I want you to see why I find this study helpful and persuasive. As far as I can tell, this team has done everything right.

If training cognitive control helps students, we should see meaningful differences in far transfer effects — and in brain structure or function — after a year. This study design will let us see that.

So, their results? Close to nothing.

  • Cognitive control training didn’t help students learn more, or make better decisions. (Far transfer)
  • It didn’t make them more creative. (Also far transfer)
  • It didn’t change relevant brain structures, or the function of those structures.

Now, the training did help students do better at tests of cognitive control — even after a year. But we don’t really care about cognitive control on its own — that’s simply near transfer. We care about cognitive control because it usually helps with learning, and creativity, and so forth.

This research tells us: untrained cognitive control might predict academic success. But increasing cognitive control with computer game training does not result in greater academic success — or greater anything else.

In the language of my starting anecdote: my players got better at chess, but they didn’t get better at baseball. And — as the coach — I care about baseball.

The Big Picture

The idea that we can “train our students’ brains” has a lot of intuitive appeal. Perhaps because the claim includes the word “brain,” it gets lots of hopeful attention. (Because it includes the potential for enormous profits, it has lots of economic appeal as well.)

I wanted to focus on this study because it does such a careful job of rebutting that claim: at least as long as “cognitive control” is the particular element we’re trying to train.

In the future, if someone brings a brain training program to your attention, consider this research example. If that someone’s research method doesn’t include all of the steps above, you might hesitate before you invest scarce time and money in this approach.

Better, instead, to focus on teaching history, science, tuba — and baseball.


Ganesan, K., Thompson, A., Smid, C. R., Cañigueral, R., Li, Y., Revill, G., … & Steinbeis, N. (2024). Cognitive control training with domain-general response inhibition does not change children’s brains or behavior. Nature Neuroscience, 27(7), 1364-1375.

Nutrition and Learning: Do Saturated Fats Harm Cognition?
Andrew Watson

This interdisciplinary field draws on expertise from many different branches of research.

Psychologists warn us about working memory overload, and suggest strategies for enhancing students’ motivation.

Neuroscientists describe the changes in neural networks that — as we understand things so far — convert new learning into long-term memory.

Somnologists (!) can tell us about the benefits of REM sleep, while exercise physiologists help us understand the relationship between health and learning. (I’m looking at you, brain-derived neurotrophic factor.)

And so forth.

But, you might ask: what about nutrition? Can our students eat their way to more learning? Surely this field has wisdom and guidance to offer?

I hear different versions of this question frequently — often focusing on blueberries — so I was happy to see recent research exploring important questions about the relationship between diet and cognition.

Here’s the story.

Mazes, Treasures, and Meals

A research team in Australia invited 120 students to make their way through a maze in a VR world. In this VR simulation, a lake is surrounded by two rings of distinctive landmarks: a lighthouse, a windmill, the Statue of Liberty, and so forth.

Study participants had to find their way across a hexagonal grid of planks to the location of a treasure chest. If they didn’t find the chest after 4 minutes, they were transported to its location — and then given a fresh chance to find the treasure from another starting place.

Here’s the key: participants could use the distinctive landmarks to triangulate the chest’s position. That is, they could say: “if the lighthouse is at my 2 o’clock, and the windmill at my 6 o’clock, and Lady Liberty at my 10 o’clock…I’m in the right place!” Using those bearings, they could navigate back to the treasure chest fairly easily.
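
If you enjoy seeing that geometry spelled out, here is a minimal sketch in Python. To be clear: the landmark names, coordinates, and grid-search approach below are my own invented illustration of triangulation-by-bearings — not the study’s actual task or code.

```python
import math

# Invented landmark coordinates, purely for illustration.
LANDMARKS = {
    "lighthouse": (10.0, 0.0),
    "windmill": (0.0, -10.0),
    "statue_of_liberty": (-10.0, 5.0),
}

def bearing(from_pos, to_pos):
    """Angle (in radians) from one point toward another."""
    return math.atan2(to_pos[1] - from_pos[1], to_pos[0] - from_pos[0])

def angle_diff(a, b):
    """Smallest signed difference between two angles."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def locate(observed, extent=15.0, step=0.5):
    """Brute-force search for the spot whose landmark bearings best match
    the observed ones -- a toy stand-in for what navigators do intuitively."""
    best_pos, best_err = None, float("inf")
    n = int(2 * extent / step) + 1
    for i in range(n):
        for j in range(n):
            pos = (-extent + i * step, -extent + j * step)
            err = sum(angle_diff(bearing(pos, LANDMARKS[name]), obs) ** 2
                      for name, obs in observed.items())
            if err < best_err:
                best_pos, best_err = pos, err
    return best_pos

# Take bearings from the (hidden) treasure location, then try to recover it:
treasure = (2.0, 3.0)
observed = {name: bearing(treasure, spot) for name, spot in LANDMARKS.items()}
print(locate(observed))  # approximately (2.0, 3.0)
```

The point of the toy: once you have bearings to a few fixed landmarks, only one spot fits them all — which is exactly the strategy the maze rewards.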

Now comes the nutrition part.

The researchers also asked students about their intake of saturated fats and refined carbs. For the most part, the study doesn’t list the kinds of food it asked about (although it does mention specific questions about adding sugar to food). But the gist is: how much of their diet comes from the bad stuff — like saturated fats and refined carbs?

So here’s the big question: what’s the relationship between unhealthy foods (fats, carbs) and spatial navigation? Are the relatively healthy eaters better at finding their way around this maze?

Answers, and Some Doubts

In a word: “yes.”

The participants who had low scores on the fat/carb inventory learned the maze somewhat better than those who had higher scores.

Said another way: if you know a participant’s fat/carb intake, you could roughly predict how well they would do on the maze.

Here’s the boldest formulation of this finding: “an unhealthy diet interferes with spatial learning.”

Of course, any such blunt sentence requires MANY caveats.

First: for obvious reasons, this study didn’t actually monitor participants’ food intake for a month. The nutrition data are entirely self-reported…and we usually take self-reported data with a dash of skepticism.

Second: this study has an ENORMOUS dropout rate. 120 people signed up for the study, and SIXTY-FIVE dropped out. (Apparently the VR world created a lot of nausea.) A reasonable person might decide not to trust the study for this reason alone.

Of course, the fact that it’s published means that other reasonable people decided not to worry so much about the dropout rate. Apparently nausea is a common problem with VR research.

Third: I’m legally obligated to say: “correlation isn’t causation.” It is possible that people who are bad at spatial learning therefore prefer fats and carbs. Or that a third factor causes both deficits in spatial learning and a preference for unhealthy food. In this case, even a skeptic like me is willing to infer the potential for causality…
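
To make that “third factor” possibility concrete, here is a tiny simulation — plain Python, with invented numbers — in which a hidden variable drives both diet scores and maze scores. The two measures end up correlated even though neither causes the other.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A hidden factor (say, overall health habits) influences BOTH variables;
# diet never touches maze skill here, and vice versa.
n = 1000
hidden = [random.gauss(0, 1) for _ in range(n)]
diet = [-h + random.gauss(0, 1) for h in hidden]  # healthier habits -> less fat/sugar
maze = [h + random.gauss(0, 1) for h in hidden]   # healthier habits -> better navigation

print(round(pearson(diet, maze), 2))  # clearly negative -- with zero causation between diet and maze
```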

As is so often the case, this one study is helpfully interesting. It is not, on its own, entirely persuasive.

Another Perspective

This study suggests a small correlation (and the potential for a causal relationship) between a fatty/carby diet and spatial learning.

In other words: YES, diet and nutrition do matter for learning.

I’d like to propose an alternative framework which — in my experience — simplifies lots of complex questions. Here goes…

We have A LOT of research showing that the brain IS A PART OF the body.

The brain is, in fact, PHYSICALLY ATTACHED to the body.

For that reason, everything that’s good for the body is good for the brain. Because (say it with me), the brain is a part of the body.

Sleep is good for the body; it’s therefore good for the brain.

Exercise is good for the body; it’s therefore good for the brain.

Good nutrition is good for the body…you get the idea.

I don’t honestly think we need a special category of “brain-focused nutrition advice.” All the basic nutrition guidance we already have tells us practically everything we need to know about “brain nutrition.” *

For example: as I wrote above, people get VERY excited about the brain health benefits of blueberries. I myself haven’t been able to find any strong research on this topic. (Here’s one pilot study with 15 participants.)

TL;DR

A recent study from Australia suggests that a high-carb, high-fat diet might interfere with spatial learning.

While this study has some quirks and flaws, it highlights a simple principle:

“If it’s good for the body, it’s almost certainly good for the brain. If it’s bad for the body, it’s almost certainly bad for the brain. Plan your meals accordingly.”


* If you have gotten medical advice about nutrition — even brain nutrition — you should follow it. I’m a teacher and a blogger; not a doctor, and not a nutritionist.


Tran, D. M., Double, K. S., Johnston, I. N., Westbrook, R. F., & Harris, I. M. (2025). Consumption of a diet high in fat and sugar is associated with worse spatial navigation ability in a virtual environment. International Journal of Obesity, 1-9.

(How) Do Emotions Affect Learning?
Andrew Watson

When a conference speaker announces that “a student’s emotions matter for their learning,” few teachers rock back in surprise. OF COURSE emotions matter for learning. Who would have thought otherwise?

At the same time, we’re probably curious to know how emotions influence learning.


In fact, once we ask that question, some sense of surprise might start to creep in. After all, the word “learning” falls squarely in the realm of cognition. And the word “emotion” sounds much more like … well … emotion.

 

Aren’t cognition and emotion two different sets of mental processes? If they are, how does one affect the other?

Here’s where research can be really helpful, if we read it carefully.

One of the best known (and most misunderstood) insights in this field comes from LatB regular Mary Helen Immordino-Yang:

“It is literally neurobiologically impossible to think deeply about things that you don’t care about.”

Why? Because — in the words of a recent study led by Benjamin Hawthorne — “the brain mechanisms that give rise to conscious emotions are not fundamentally different from those that give rise to cognition.”

In other words: the parts of your brain that do the emotional work also do the thinking work. Yes, LOTS of the same neural networks operate in both processes. These two seemingly “different sets of mental processes” share very substantial plots of neural real estate. (I will, by the way, come back to the misunderstanding of Dr. Immordino-Yang’s quotation at the end of this post.)

So, STEP ONE in this multi-step argument: “students’ emotions influence their learning because — at the neurobiological level — ‘emotion’ and ‘cognition’ overlap.”

Step Two

With this neuroscience understanding of the cognition/emotion relationship established, let’s turn to psychology. What mental processes might explain this relationship?

One potential answer: WORKING MEMORY (often abbreviated as WM). If emotions — positive or negative — have an effect on WM, then we can easily understand how those emotions affect learning.

This hypothesis is at the heart of that recent study, led by Dr. Hawthorne, that I quoted a few paragraphs ago.

Hawthorne’s team explored this question through the concept of “cognitive load theory.” The full theory is too complicated to review here, but the headlines are straightforward:

  • Students who can manage a WM task are facing an appropriate cognitive load.
  • When that cognitive load becomes excessive, then they experience WM overload.

Team Hawthorne hypothesized that:

  • negative emotions (or what the researchers call ‘painful’ emotions) might increase cognitive load, and thus result in WM overload. Result: less learning.
  • positive emotions might reduce cognitive load, and thus make WM overload less likely. Result: same (or more) learning.

Because they have this cognitive load theory framework (often abbreviated as CLT), they can rely on all the tools and surveys that CLT uses.

What Students Did; What Researchers Learned

To pursue this line of inquiry, Hawthorne and his team followed a straightforward plan.

Roughly 350 students — 11- to 15-year-olds in Australian schools — went through this process during their math class. In brief, they…

… watched videos teaching increasingly complicated algebra processes (that is: their cognitive load increased over time),

… rated their own experience of cognitive load for each problem,

… rated their positive and negative emotions, and

… took a final test, to see how well they learned the algebra processes.

When Team Hawthorne put all these data into the appropriate graphs and charts, they arrived at an interesting pair of results.

First:

Yes, negative emotions add to the students’ perceived cognitive load. Result: less learning.

Second:

But: positive emotions had no effect on their perceived cognitive load — although happier students did learn more.

And so, third:

Hawthorne’s team speculates that positive emotions might help cognition via another mental process … such as motivation.

What’s a Teacher to Do?

Given these results, we might reasonably ask: “so what? What can we do with these findings?”

Good questions. I have tentative answers.

First: we now have good reasons from two distinct scientific disciplines — neuroscience and psychology — to argue that emotion and cognition aren’t different categories: they overlap a lot.

Second: we know that students experiencing more negative emotion ALSO experience more cognitive load. Potential result: less learning.

Third: because of ambiguity in the study’s language, we can’t say if the negative emotions led to the higher cognitive load, or if the higher load led to negative emotions. (Because the study measured students’ emotions only once, we can’t know if the answer is “both.”)

For that reason, I think we need a rough-n-ready, flexible set of classroom responses.

  • If I see my students are upset, I can predict their WM might be reduced; I’ll need to simplify instruction for a while.
  • If I see my students just can’t get their WM in gear right now, I might wonder if there’s some emotional complexity underlying the problem. So: I should check out that hunch.

Neither of these suggestions is revolutionary, but they do let me think through the two-way relationship between negative emotion and WM.

A Two-Way Street

A few hundred words ago, I wrote that Dr. Immordino-Yang’s well-known quotation is widely misunderstood. When she says:

“It is literally neurobiologically impossible to think deeply about things that you don’t care about.”

Many people hear:

“And therefore we have to start by getting students to CARE about things, because otherwise they won’t learn about them.

In other words: students’ EMOTIONS precede their COGNITION.”

But that conclusion a) violates the research we’ve been looking at, and b) doesn’t follow logically from the original statement. Let’s try another example:

“It is literally biomechanically impossible to walk (normally) without using your ankle joints.”

We should not, I think, extend this statement to say:

“We have to start by getting walkers to focus on their ANKLES, because otherwise they can’t walk.”

The sentence really means: “ankles are an essential sub-component of the walking process. They are one of many body parts that we should be aware of as we’re teaching walkers.”

So too, I think, Dr. Immordino-Yang’s statement means: “emotion and cognition always work together. Rather than prioritize one over the other, we should be aware of their intricate interactions as we make moment-by-moment teaching decisions.”

In other words:

Yes, of course, my emotional state influences my ability to think effectively. If I’m stressed and unhappy, I might well struggle to figure out whatever academic problem faces me.

AND

Yes, of course, my ability to think effectively influences my emotional state. If I accomplish a difficult thinking task — like, say, learning a complex algebra process — I might well feel less bad and more good.

The title of this blog post asks: “how do emotions affect learning?”

I hope I’ve persuaded you that the answer is: “don’t rely on people who offer a simple answer to that question. Emotion and cognition overlap substantially, and we must keep that overlap in mind as we think our way through leading schools and classrooms.”

And also: “at a minimum, we have good reason to think that negative/painful emotions complicate working memory. No wonder they’re bad for learning!”


Hawthorne, B. S., Slemp, G. R., Vella-Brodrick, D. A., & Hattie, J. (2025). The relationship between positive and painful emotions and cognitive load during an algebra learning task. Learning and Individual Differences, 117, 102597.

Book Review: Primary Reading Simplified, by Christopher Such
Guest Post

Today’s guest book review is by Kim Lockhart.


Finding a new favourite book, one that checks all the boxes, is like finding a new favourite drink. You want to devour it without putting it down, while at the same time, you want to savour it so it isn’t finished too quickly. A good book leaves you feeling thirsty for more. And most of all, like all things we love, we want to share it with everyone we know so that they, too, can enjoy it and savour it as much as we do.


Christopher Such’s first book, The Art and Science of Teaching Primary Reading, was that book for me: the book that checked all the boxes, the book I didn’t want to put down, and was disappointed when it was over.

I wanted to share Such’s book with every teacher I knew because I wanted them to feel what I felt while reading it: a sense of relief that there was finally a book that contained the answers I’d been searching for throughout my career.

It was the first comprehensive, no-nonsense book on the science of reading I had come across.

Previously, most of what I knew about reading research I had acquired from complex articles I read while working on my Master of Education. But these research papers were not easy to read, not written in teacher-friendly language, and didn’t always make a direct connection between the research and what it looked like in classroom practice.

But Christopher Such’s book did.

Fast forward three years, and Such has done it again. He has written another can’t-put-it-down-until-it’s-finished book titled Primary Reading Simplified: A Practical Guide to Classroom Teaching and Whole School Implementation.

While Such’s first book focused on what teachers need to teach (phonemic awareness, phonics, reading fluency, spelling, etc.), his new book focuses more on the aspect of how to teach it.

In other words: teaching all the components of the reading process is not always enough. As teachers, we have to ensure that students are learning what we’re teaching. Even the most well-intentioned of teachers does not always meet this goal!

This book tells us exactly how we can work smarter, not harder, to ensure better learning outcomes for our students – making sure they ARE learning what we’re teaching.

Such’s new book includes an important aspect of teaching that is often the missing piece in other professional books for teachers. In teacher-friendly language, he shares the research and explains the reading routines, classroom habits, and evidence-based instructional strategies that are essential for learning to happen.

Take the chapter on reading fluency, for example.

Not only does Such explain why reading fluency is important for reading comprehension. He explicitly and systematically lays out the structure of an effective fluency lesson, including:

  • how to pair students
  • how to choose the “just right text”
  • how to model reading fluency,
  • when to explain new vocabulary, and even
  • how long the fluency practice should be each day (Such suggests less than 30 minutes to allow for repeated reading of the text; too much time could result in disengagement and not be productive).

He leaves no guess-work for teachers.

But Such doesn’t stop at developing stronger, more effective classroom routines and systems for teachers. Part IV of the book is designed for anyone interested in school-wide and district-wide implementation of science-based reading instruction.

Until now, science-based instruction has been happening in specific classrooms, with a few individual teachers, in some school districts. In other words, it isn’t consistent. Such understands that for change to be sustainable, it needs to be implemented at the district level. These changes require fidelity, teacher support, and ongoing monitoring and commitment to change.

Chapters 13, 14, and 15 carefully outline structures for systems to be sustainable across districts. He shares the four-phase implementation model (Sharples et al., 2024, as cited in Such, 2025): Explore, Prepare, Deliver, and Sustain. He also makes it clear that implementation can’t happen all at once. Instead, “implementing change across a school should be seen as an ongoing process rather than a one-off event”.

In this section of the book, Such also addresses the barriers that literacy leaders may face when trying to implement system-wide change: specifically, limited human resources, and time. He also offers practical suggestions for overcoming those barriers, including:

  • very clear expectations and vision,
  • flexible adaptations,
  • and the need to put other, lower-priority changes aside to focus on one priority and sustain it.

Such eloquently concludes this section of the book by explaining, “implementation is most likely to succeed if all involved feel it is something being done with them rather than something done to them.” (Such, 2025, p. 136).

Reviewer Kim Lockhart

Echoing the format of Such’s first book, each chapter of his new book is short and dense with evidence-based information in manageable chunks. I love this format because it is practical for busy teachers like me. If I have only 10 minutes to read a snippet before I have to run outside for recess duty, I can easily read a few paragraphs in a chapter, learn something, and know exactly where to return when I have time again.

Likewise, each chapter of Primary Reading Simplified concludes with an “In a Nutshell” section that reviews and highlights key information from the chapter. Such also includes the section “Further Reading” for science-of-reading-nerds like me who want to learn more. He even includes a retrieval-practice quiz for each chapter. (To be honest, I am often too scared to quiz myself because I fear that I won’t be able to remember as much as I hope to, despite my greatest efforts.) Best of all, each chapter concludes with a section called “Questions for Professional Discussion.”

Because reading proficiency is not the sole responsibility of the classroom teacher, I highly recommend Primary Reading Simplified for all teachers, reading specialists, literacy coaches, and administrators. Reading instruction is our shared responsibility. As Christopher Such says himself, “Our pupils deserve no less.”


Kim Lockhart is a French Immersion classroom teacher and Special Education teacher in Kingston, Ontario. She holds a Master of Education (M.Ed) degree with a research focus on evidence-based practices to support second language learners with reading difficulties. Kim has her Orton-Gillingham Classroom Educator certificate, CERI Structured Literacy Classroom Teacher certification, and was a Structured Literacy coach for the International Dyslexia Association of Ontario for 2 years. In 2022, Kim worked for the Ontario Ministry of Education as a Content Contributor for the new science-based Language Curriculum and has also presented for the Ontario Minister of Education, Stephen Lecce and his team after the release of the OHRC’s Right to Read report. She is currently teaching part-time at Vancouver Island University’s (VIU) Faculty of Education in the Literacy, Language and Learning Graduate program. Kim is passionate about the Science of Reading and strives to empower educators, parents and caregivers to be more knowledgeable, stronger advocates for all children’s right to read across Canada.

How to Present at a Conference…
Andrew Watson

I spend most of my professional life talking in front of groups of people. Specifically, I talk about psychology and neuroscience research — and the (careful!) application of those findings to classrooms.

For that reason, I feel quite comfortable writing blog posts about research-based teaching advice: most recently…

… how goals and feedback might motivate students, and

… the relative importance of enjoyment and/or skill in reading instruction, and

… the potential benefits of PBL.

And so forth.

In recent years, I’ve also been getting questions about presenting in public.


While that topic isn’t central to this blog, it might be helpful. After all, I know MANY people who read this blog present at conferences. And I bet many more people would like to.

So: a few quick thoughts.

1: You Do You

No, seriously.

I won’t present well if I’m trying to do (exactly) what you do, and you won’t present well if you try to do (exactly) what I do. We’re different individuals.

While I do think the advice below is broadly helpful, you should also NOT follow this advice if it encourages you to act like someone you’re not.

That is:

  • If you try to be funny — because someone somewhere said “funny speakers are good!” — that advice will work only if you’re a funny person. If not, your audience will sense your discomfort (and your unfunniness) right away.
  • If I try to get adults up and moving around (“Everybody Conga!”) because John Almarode is GREAT at getting people up and moving, I’ll feel deeply foolish. That’s just not my style. And my audience will know I feel deeply foolish. (They will probably also feel deeply foolish.)

And so forth.

2: You Do What You Say

When I’m giving a talk about avoiding working memory overload, I have to be especially careful to avoid working memory overload. If I don’t, my audience will remember my hypocrisy; they will not remember my content.

If you’re talking about retrieval practice, include retrieval practice. If you don’t, your audience will wonder why you didn’t use the teaching practice that you said was so important.

Ditto if you’re talking about mindfulness, or mini-white-boards, or forming relationships. Speakers who don’t do what they say risk looking like hypocrites.

By the way, this truth creates real problems for presentations on specific pedagogies. If my message is “students best remember ideas they learn through open-ended inquiry,” how can I best make that argument?

  • If I give a presentation, then my medium contradicts my message.
  • If I let teachers open-endedly explore their own pedagogical interests, they might not explore “open-ended inquiry.” Or, if they do, their inquiry might not arrive at the same conclusion I do.

My point here is not to be snarky, but to note a real challenge for champions of more “student-centered” pedagogical styles. If I give a talk about the inherent futility of giving talks…the paradox probably overwhelms my message.

3: Highlight Structure

Most talks condense LOTS of information into relatively short periods of time. In fact, one reason speakers receive invitations is: their expertise allows them to organize many ideas into a coherent (if complex) package.

Alas, the more info I condense into my talk, the harder my audience has to work to follow my argument. With each passing slide, they think more nervously: “Wait — how does THIS set of ideas connect to ALL THOSE PREVIOUS sets of ideas?”

For that reason, I think speakers should include a clear outline very near the beginning of the talk. And they should return to that outline frequently throughout the talk to indicate progress.

For instance, I’ve got an upcoming talk on the subject of “Rethinking Intrinsic Motivation.” That talk will begin with this outline:

Act I: Here’s what everyone thinks about intrinsic motivation.

Act II: David Geary wants us to RETHINK intrinsic motivation.

Act III: Most people think that this other theory contradicts Geary. But I think it aligns with — and adds to — Geary’s theory.

Act IV: In fact, we need to RETHINK the other theory to convert it from a “to do” list into a “to think” list. Here’s how we do that.

Notice, this initial outline stays fairly abstract. I say that Geary “wants us to rethink motivation,” but I don’t get specific. I don’t even name the other theory. And so forth.

As I make my way through the talk, I explicitly return to that outline and add in all those specifics:

“As you saw in Act II, Geary wants us to rethink intrinsic motivation from an evolutionary angle. For that reason, he argues, school should emphasize topics that we didn’t evolve to learn, not those that we do.

Now, in Act III…”

By starting with an outline, and by returning to it, I clarify my ideas. Even more important: I clarify the relationship among the ideas.

4: Presenting Online

Four quick rules to raise your online game.

a) My eyes should not be at the center line of the screen, but 1/3 of the way down from the top of the screen. Film and TV shows are shot with this “rule of thirds,” so my audience expects it. Eyes at the midline look odd.

b) For heaven’s sake, I must NOT let my laptop camera point up at my face — and therefore up my nostrils. That view is really unpleasant. I should prop the laptop on books so that it’s level with my face.

c) In daily conversation, we don’t typically get very close to the people we’re talking with. If my face is too close to the camera, I’m a “close talker.” The look is unsettling. So, I should move the camera back so that most of my torso is visible.

d) My background matters. If the room behind me is cluttered and unkempt, I look unprofessional. I should find a simple, classy look.

None of those guidelines is complicated; all of them improve online presentations.

In Sum…

Although speakers should be themselves, some guidelines improve almost all talks. I hope the list above proves helpful.

Do *Goals* Motivate Students? How about *Feedback*?
Andrew Watson

Motivation has been a HOT TOPIC this year in all the schools I’ve visited. Everywhere I go, I get questions about students’ apathy and indifference, and teachers’ frustration.

So, what can schools and teachers do?

Well, Self-Determination Theory offers a framework to answer that pressing question.

In this post, I’m going to introduce the theory — with a focus on its key concepts.

And then I’ll describe a study which helpfully reveals the complexity of enacting the theory wisely.

And, yes, as this post’s title suggests, that helpful study focuses on goals and feedback as motivational strategies.

Let’s see what researchers have discovered about the motivational benefits of goals and feedback.

Introducing Self-Determination Theory

Like many theories, self-determination theory (SDT) can be easily caricatured. Here’s the caricature:

  • Extrinsic motivation BAD!
  • Intrinsic motivation GOOD!!

These six words fall short in lots of ways, starting with this startling observation. SDT doesn’t contrast intrinsic and extrinsic motivation.

Instead, it defines six (yes, SIX) different motivational states — including four (yes, FOUR) different kinds of extrinsic motivation.

Here’s the kicker:

  • Unsurprisingly, intrinsic motivation is GOOD for learning.
  • Surprisingly, two flavors of extrinsic motivation are ALSO good for learning. (The other two flavors: not so much.)

The scholars who created the theory — Deci and Ryan — have a name for “good flavors of extrinsic motivation”; they call them “autonomous extrinsic motivation.”

At the top of this blog post, I asked: what can teachers do about apathetic students? Deci and Ryan answer: “foster the good kinds of motivation.”

Let’s Get Fostering!

Okay, if “the good kinds of motivation” can help, how do we teachers conjure them?

If I’m understanding SDT correctly, it includes bad news and good news.

  • Bad news: we really can’t create intrinsic motivation (as Deci and Ryan define it).
  • Good news: we really CAN create autonomous extrinsic motivation, which — as you recall — benefits learning.

We foster this good extrinsic motivation by focusing on three internal experiences: autonomy, relatedness, and competence.

That is: the more that my students feel in control (“autonomous”), close to one another (“related”), and effective at dealing with their environment (“competent”), the more autonomous extrinsic motivation they will experience. And: the more they will learn.

The obvious implication of this theory, then: let’s focus on enhancing our students’ autonomy, relatedness, and competence.

Plausible Start

When I talk with teachers about this theory, they can easily start to brainstorm suggestions for creating autonomy, relatedness, and competence — and, presumably, the good kind of extrinsic motivation.

As a thought experiment, we can easily imagine that clear goals will have those results. And, while we’re at it, we might predict that process feedback will do likewise.


But let’s go beyond a thought experiment. Let’s have an experiment experiment — with students and data and calculations and all that good stuff.

What happens?

Happily, a research team in the Netherlands wanted to know. They ran a survey study with almost 600 students — aged 11 to 18 — in PE classes.

They asked two sets of questions.

First: did the teachers clarify the goals during class? That is, did they…

  • … tell the students what they were going to learn, or
  • … how they would be evaluated?

Likewise, did they offer process feedback? That is, did they …

  • … encourage reflection on how to improve, or
  • … discuss how to use the students’ strengths?

And so forth.

Second: they asked if the students experienced greater autonomy, relatedness, and/or competence.

To be thorough, they also asked if the students experienced LESS autonomy, relatedness, and/or competence.

Once they crunched all the numbers, what did this research team find?

Not Surprising, or Surprising?

From one perspective, this study seems to be asking rather obvious questions. I mean: OF COURSE students will feel more autonomous if we tell them what the goals are, or more related if we give them feedback.

What other result would we expect?

Here’s the thing: in the world of research, we don’t just assume; we measure. And, sure enough, those measurements gave us the results we (probably) expected.

Yes: clear goals enhance autonomy, relatedness, and competence.

And yes: process feedback does too.

At the same time, the number crunching also provided surprising results.

In some cases, process feedback reduced two of those classroom experiences: “relatedness” and “competence.”

While this result might seem surprising at first, I think it’s easy to understand the chain of emotional events here.

If I give my students lots of feedback, they might feel like I’m hovering or pestering or interfering.

Of course, “hovering, pestering, and interfering” could quite easily reduce the quality of the teacher/student relationship. And, they might also reduce my students’ feelings of competence.

In other words: all that feedback could suggest the students are not doing very well. And that feeling of incompetence could — in turn — reduce the quality of their relationship with the teacher.

Solving the Conundrum

So, which is it? Should teachers give students process feedback because it enhances autonomy, relatedness, and competence? Or, should we limit process feedback, because it reduces relatedness and competence?

As is so often the case, I think we answer that question by rethinking the relationship between research and classroom practice.

Research can almost never tell teachers what to do. Instead, research is awesome at helping us think about what we do.

In this case, our thought process might sound something like this:

  • I want to create autonomous extrinsic motivation, so I should enhance my students’ sense of competence.
  • [Thinking]
  • I wonder if I can promote competence by giving them lots of feedback during today’s class.
  • [more thinking]
  • Now that I think about it, my feedback could enhance their sense of competence. But if I give too much feedback — or unwanted feedback — students could infer that I don’t have confidence in them.
  • [even more thinking]
  • So, I’ll put a note in my lesson plan to make time for feedback. But first, I need to think about the cues my students give me when the feedback is just too much…

Of course, those cues will look different depending on context.

  • 2nd graders will give different cues than 7th graders.
  • I suspect that — for cultural reasons — students in Japan signal frustration differently than those in New Zealand.
  • Students react differently to the cool, with-it teacher than they do to me. (It’s been a minute since I was the cool, with-it teacher.)

And so forth.

But if I consider self-determination theory as a THOUGHT PROCESS, not a TO-DO LIST, I’m much likelier to get the results I want.

In this case: my feedback is likelier to enhance than reduce competence. It’s therefore likelier to promote autonomous extrinsic motivation.

And my students are likelier to learn.


Krijgsman, C., Mainhard, T., van Tartwijk, J., Borghouts, L., Vansteenkiste, M., Aelterman, N., & Haerens, L. (2019). Where to go and how to get there: Goal clarification, process feedback and students’ need satisfaction and frustration from lesson to lesson. Learning and Instruction, 61, 1-11.

Enjoyment or Skill? The Case of Reading
Andrew Watson

Do we want our students to ENJOY math, or to BE SKILLED AT math?

At first, this question sounds like a false choice. Obviously, we want BOTH.

As an English teacher, I want my students to have fun analyzing the books we read…and I want their analyses to have heft, merit, and substance.

I suspect that most teachers, no matter the subject — Math, English, Chemistry, Religion, Pickleball — want their students to revel in core ideas and arrive at correct answers.

At times, alas, we probably need to prioritize one or the other. Especially at the beginning of a unit, should I focus on …

… ensuring that my students like this stuff (even if they don’t immediately understand it), or on

… ensuring they understand the stuff (even if they don’t immediately like it)?

In teaching as in life: if I try to accomplish both goals simultaneously, I’m likely to accomplish neither.

Reading Research

I’m not surprised to discover in a recent study that students’ enjoyment of reading correlates with their skill at reading.

That is: students who get high scores on various reading tests report enjoying reading more than their low-test-scoring peers.

Of course, correlation (say it with me) isn’t causation.

Does the enjoyment lead to the skill? The skill lead to the enjoyment?

Both?

Neither?

To answer these questions, Elsje van Bergen’s research team looked at twins in Finland — more than 3500 of them.

In theory, if we ask all the right questions, gather the right data, and run the right calculations, we should glean insight into the correlation/causation question.

So: what did Team van Bergen find?

But First…

Before you read the answers to that question, you might pause to make a commitment. Try to decide NOW if you’re inclined to trust this methodology.

That is:

a) you think well-done twin studies are likely to be a good way to answer this question. For that reason, you will be inclined to accept this answer even if you initially disagree with it.

or

b) you think twin studies can’t answer questions about skill and enjoyment. Thus, you will not cite this study to support your beliefs even if it aligns with those beliefs.

If we’re going to use research to make decisions about education, we should be scrupulous about doing so even when research contradicts the conclusions we had initially held.

Answers, and Questions

Now, back to this post’s main narrative…

Unlike many studies, this one can be summarized in a few pithy sentences.


Based on the twin data they analyzed, van Bergen’s team concludes that:

  • reading skill increases reading enjoyment,
  • reading enjoyment has no effect on reading skill,
  • genetics influences both positively.

Unsurprisingly, the stats get all stats-y. But the above-the-fold headlines are that simple.

Because I don’t teach reading, I’ve always hesitated to be too opinionated on the topic. Now that this study is in the wild, I do think it adds a useful perspective while the reading wars rage on.

For instance: teachers whom I like and respect have told me that older methods might not have science behind them, but they’re excellent at “making students feel like readers.”

This claim has always puzzled me. How can a student feel like a reader if s/he can’t read?

Van Bergen’s study, I think, gives me permission to address that point directly: “this study suggests that skill at reading will be the more important place to start in reading instruction.”

Zooming the Camera Back

While this study and this post have focused on reading instruction, I do think there’s a broader message here as well.

We frequently hear about the importance of intrinsic motivation; that is, a motivation that springs from students’ natural interests, not from external encouragement (or pressure).

This study, to the contrary, finds that the work teachers do to improve students’ skill simultaneously enhances their motivation. That motivation might be — in effect — extrinsic; but, it’s working. (Working = students read better, and want to read more.)

Overall, I believe we need a substantial rethink of the (false) intrinsic/extrinsic dichotomy, and the (unhelpful) criticism of motivational strategies that many teachers currently find themselves using.

If you want to join me for just such a rethink, I’m giving a webinar for Learning and the Brain on April 5th. We’ll be talking about several research-informed approaches to intrinsic motivation, and brainstorming strategies to make those ideas fit in our classrooms.

I hope I’ll persuade you that we have better ways to talk about motivation than “intrinsic/extrinsic,” and that those better ways give us useful teacherly guidance.

I hope you’ll join us!


van Bergen, E., Hart, S. A., Latvala, A., Vuoksimaa, E., Tolvanen, A., & Torppa, M. (2023). Literacy skills seem to fuel literacy enjoyment, rather than vice versa. Developmental Science, 26(3), e13325.

Still Doubting My Doubts: The Case of PBL
Andrew Watson
Andrew Watson

Last week, I described my enduring concerns about “embodied cognition.” I’m not sure I understand the concept clearly: what exactly counts as “embodied cognition” — mindfulness? Direct instruction? (No, seriously, a well-known book on the subject says it does!)

And the “best research” supporting some of the claims doesn’t feel persuasive to me.

Could using gestures help learning? SURE. Have I found enough research for me to advocate for this strategy? Not yet…

This week, I wanted to test my doubts about project-based learning (universally known by the acronym PBL). SURPRISE: in this one case, at least, I end up feeling kinda persuaded.

Here’s the story…

Another Steelman

If I’m going to critique a teaching method, I want to be sure to avoid straw men. Neither you nor I learn anything if I point out the flaws in an obviously foolish study or approach. I’m going to learn something if and only if I take on the very best case.

Some thoughtful soul (I’m embarrassed to say I can’t remember who) recommended this PBL study to me.

Given the strength of that recommendation, I thought it worth a read — despite my PBL concerns.

What are those PBL concerns?

As is so often the case for me, I worry about working memory overload. If I ask my students to

  • Film a scene from Hamlet, but re-imagine it in a new setting, or
  • Build a model city that enacts 3 core principles of ecological design, or
  • Write a new law that prevents a problem in our school’s community

I’m certainly giving them a rich cognitive task.

However, they almost certainly don’t have enough foundational knowledge to manage any of those tasks. Heck, graduate students in those fields struggle with such problems.

So, while I find the questing adventurousness of such tasks intriguing, my knowledge of working memory limitations tells me: ain’t gonna happen.

I should also confess: my experience assigning project-y work hasn’t gone well.

In brief: although “constructivist” approaches often sound appealing, my focus on basic cognitive capacities makes me extra skeptical.

(Important note: “constructivism” is an ENORMOUSLY broad category, and it’s inaccurate/unfair to lump so many pedagogies together into one ill-defined word.)

The Goals; The Problems

When I look at research, I’ve got a few desiderata:

One: The study should — as much as possible — isolate the variable. I can’t say that (to take a comic example) “chewing gum improves learning” if the participants both chewed gum and tap-danced.

Another one: the study should have a plausible control group. The question isn’t “did X improve learning?” but “did X improve learning compared to the plausible alternative Y?”

Yet another one: the researchers should try hard to measure what they claim. If I say “PBL helps students learn stuff,” I should have some reliable measurement of what they learned. If researchers make up their own test…well…I worry that they’re (subconsciously) putting a thumb on the scale.

Because I’m a PBL doubter, I read this study with a keen eye on those topics. I’m used to finding such problems. For instance:

Isolate the variable: the study about “using gestures” actually used gestures AND cool tech stuff. I don’t believe claims about X if the students did both X and Y.

Plausible control group: again, the “using gestures” study compared teachers who got something (extra PD; extra curricular materials) with teachers who got nothing (no extra anything).

Measuring the claim: a study claiming that “handwriting helps students learn” didn’t measure learning. (I still can’t get over how many people are citing this study despite this extraordinary flaw.)

So, would this PBL study fall short of these standards?

To be clear — and fair — no study is perfect. Psychology is complicated; teaching is complicated; PEOPLE are complicated. So, I’m not asking that everything be perfect.

But I am asking that the study make a good-faith effort on most of those things.

Envelopes, Please

As a skeptic, I was pleasantly surprised by what I read. Two points stood out in particular:

First: unlike the “gesture” study, the PBL study made an impressive effort to treat teachers in both groups equally.

  • Both groups — not just the PBL group — got extra PD time.
  • Both groups — not just the PBL group — were told that classroom visits were a part of the program.

This kind of care is, in my experience, unusual. I was pleasantly surprised.

Second: the “measurement” sounds (largely) plausible. The researchers did NOT simply make up their own test of the science learning.

Instead, they used the state of Michigan’s standardized test for both the PBL group and the control group. For time reasons, they didn’t use all the questions from that test, so they did have a chance to put that thumb on the scale. But they had less of a chance than if they’d simply created their own test.

Now, don’t get me wrong. I do have some concerns. For instance:

  • Although the teachers in both groups got special treatment, the students didn’t. That is: both groups of teachers got extra PD, but the students in the control group got “same old, same old.” The study would be more persuasive if they too got a new teaching approach.
  • The teachers in both groups got extra stuff, but the teachers in the PBL group got MORE extra stuff. They got more (and more frequent) PD, more curricular support, and class visits. (For scheduling reasons, the promised class visits for the control group largely didn’t happen.)
  • As noted above, the research team didn’t exactly use someone else’s measurement — although it seems they made a good-faith effort to do so.

In brief, I can quibble with the study — but I don’t think its flaws merit easy disqualification.

Final Verdict

The research team measured LOTS of variables, and scrupulously tallied scores for MANY important sub-groups and special circumstances.

If we take the headline number, they found an effect size of 0.277 (technically, “small”) for the amount of additional science knowledge that the students in the PBL group learned compared to the control group.

That is: PBL produced more learning, but not lots-n-lots. We can’t rule out the possibility that all that extra learning resulted from the “shiny new thing,” not from the PBL.
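
If an effect size of 0.277 feels abstract, here’s one way to make it concrete. Cohen’s d expresses the gap between two group averages in standard-deviation units; a d of roughly 0.28 puts the average PBL student at about the 61st percentile of the control-group distribution. A minimal sketch of the arithmetic (my illustration, not the researchers’ code):

```python
# What an "effect size" (Cohen's d) means. My illustration, not the
# researchers' code; the paper reports d = 0.277 directly.
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Difference between group means, in pooled-standard-deviation units."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = sqrt(((n1 - 1) * stdev(treatment) ** 2 +
                      (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Interpreting d = 0.277: assuming roughly normal scores, the average PBL
# student lands at about this percentile of the control-group distribution.
print(f"{NormalDist().cdf(0.277):.0%}")  # -> 61%
```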

At the same time, my concerns about working memory overload were — at least in this one example — calmed. If this PBL program had overwhelmed WM for these 3rd graders, they wouldn’t have learned much at all; instead, they learned a bit more.

I still have lots of questions and concerns. But I’m heartened to see that — done right — this PBL program offers a potential pathway for further exploration.


Krajcik, J., Schneider, B., Miller, E. A., Chen, I. C., Bradford, L., Baker, Q., … & Peek-Brown, D. (2023). Assessing the effect of project-based learning on science learning in elementary schools. American Educational Research Journal, 60(1), 70–102.

Doubting My Doubts; The Case of Gesture and Embodied Cognition
Andrew Watson
Andrew Watson

The more time I spend hearing “research-informed educational advice,” the more I worry about the enticing words “research-informed.”

Many MANY people toss around the phrase “research says…”; all too often, even a brief investigation suggests that research really doesn’t say that.

For this reason, I find myself slower to get excited about new “research-based” teaching ideas than many of my colleagues…even colleagues whom I admire, respect, and generally trust.

For instance: lots of scholars are investigating the field of embodied cognition and — more specifically — of using gestures to promote learning.

I’m certainly open to the idea that combining gestures with words and visuals will improve learning. And: I want to know A LOT more about the specifics of this idea:

  • Who is making these gestures? Teachers? Students? Actors in videos?
  • What kind of gestures are they? “Deictic” or “iconic”? Rehearsed or improvised?
  • Does the strategy work well in all disciplines/grades/cultures?

And so forth.

I’d also love to see some straightforwardly convincing research to support the answers to those questions.

So, for instance, I wrote a post about students using gestures to learn about Brownian motion. While the outline of the study made sense to me, it…

… didn’t have a control group,

… chose a topic easily translated into gestures, and

… measured “learning” 2 days later. (Does 2 days count as learning?)

While I’m glad I read the study, and appreciate some of its nuances, I don’t think it’s a slam dunk.

At the same time, I should turn some of my skeptical energy towards myself.

In other words: given all of my doubts, I should also be ready to doubt my own doubts. Maybe the wisdom of the crowd should outweigh my own habitual caution here. Maybe I’m so invested in my skeptic’s persona that I’m subconsciously unwilling to be persuaded…

Enter the Steelman

Because I doubt my doubts, I’m always on the lookout for EXCELLENT research contradicting my outlier point of view. I genuinely WANT to have my errors pointed out to me.

For that reason, I was delighted to find a highly touted study about teaching physics with embodied cognition.

My source here, published by the Education Endowment Foundation, looks for the very best evidence supporting all sorts of cognitive-science-based teaching advice: interleaving, retrieval practice, schemas, and so forth.

Of the 26 studies they found looking at embodied cognition, one stood out for its excellence. (In their rating system, it’s the only one they rated “high priority.”) If the EEF, and all the wise scholars behind this report, find this study persuasive, it’s likely to be among the best research I can find.

In other words: I’m not analyzing a straw man here. This study is the “steelman.”

Playground Physics

The idea behind this study sounds both sensible and fun. Many of the abstract concepts studied in physics class are acted out quite concretely — that is, they are EMBODIED — when our children get to the playground.

If we could connect abstract classroom physics with embodied playground physics, that approach could be really helpful.

This study begins with a good idea…and an ENORMOUS sample size. Over 3400 (!) students were in the initial sample; after (unusually high) attrition, that number dropped to about 1300 — roughly 800 in the “playground physics” group, and 500 in the control group.

The researchers wanted to see if the students in the playground group would a) learn more physics, b) feel more engaged, and c) feel more motivated — all compared to the control group.

The special “playground physics” program begins with a training session for the teachers, and includes curricular materials.

Crucially, playground physics also includes a phone app that students use to analyze their own motion:

“Using the app, users record videos of themselves and their friends engaging in physical play, and the app generates graphs of distance traveled, speed, direction, and kinetic and potential energy. As users watch the video, they see graphs of their movement unfolding. Users can pause to examine where they are moving fastest or slowest, where a force is pushing or pulling, and where their kinetic and potential energies are at their highest and lowest points. This is intended to support conversations grounded in the children’s physical experience”

Honestly, the whole experience sounds really interesting!
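
For what it’s worth, the physics behind those graphs is straightforward kinematics: speed is distance over elapsed time, kinetic energy is ½ m v², and potential energy is m g h. Here’s a hedged sketch of the kind of calculation such an app presumably performs on video-tracked samples (the function and numbers are mine, not the app’s actual code):

```python
# Rough sketch of the kinematics an app like this presumably computes from
# video-tracked motion. My illustration, not Playground Physics' actual code.

G = 9.81  # gravitational acceleration, m/s^2

def motion_graphs(times, heights, mass_kg):
    """From (time, height) samples, return per-interval vertical speed (m/s),
    kinetic energy (J), and potential energy (J)."""
    rows = []
    for (t0, h0), (t1, h1) in zip(zip(times, heights),
                                  zip(times[1:], heights[1:])):
        speed = abs(h1 - h0) / (t1 - t0)    # speed over the interval
        kinetic = 0.5 * mass_kg * speed**2  # KE = 1/2 m v^2
        potential = mass_kg * G * h1        # PE = m g h
        rows.append((t1, speed, kinetic, potential))
    return rows

# A 30 kg child descending a slide, height sampled every half second:
for t, v, ke, pe in motion_graphs([0.0, 0.5, 1.0, 1.5], [2.0, 1.6, 0.9, 0.0], 30):
    print(f"t={t:.1f}s  v={v:.2f} m/s  KE={ke:.1f} J  PE={pe:.1f} J")
```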

Persistent Doubts

Although I tried to find a Steelman Study to support the case for Team Embodied Cognition, I’m still not persuaded.

I have two substantial concerns:

First:

This study does not measure the benefits of embodied cognition for learning physics.

Instead, it measures the benefits of embodied cognition PLUS cool tech gadgetry for learning physics. In fact, the study is published in a journal that focuses on technology in education.

Yes, the students learned more — but the extra learning could have come from the app (so much fun with video!) or from the embodied cognition (moving is so cool!) or both. We just don’t know.

I am not the only person pointing out this concern. The study’s authors say several times that they don’t know what the “mechanism” is that created the additional learning. In other words: they do not claim that the embodiment mattered more than the tech, or that it mattered at all. They don’t know.

To be persuaded by research into the use of gestures, I want to see a study that singles out the gestures; it should — in the lingo of research — “isolate the variable.” This one doesn’t.

Second:

When we compare two groups, we want them to be close enough to each other to be good proxies for each other. I’m not sure we can say that for this study.

A) The teachers of Playground Physics received extra PD; the teachers in the control group didn’t. Did the PD itself make the difference? We don’t know.

B) The study used a “business-as-usual control group.” That is: control group teachers just did what they always did. Teachers and students in the Playground Physics group got a Shiny New Thing. Was it the novelty that made the difference? We don’t know.

C) The Playground Physics group spent 15.5 hours studying physics; the control group spent 13.2 hours. The study’s authors write that this difference isn’t “statistically significant.” But, as a classroom teacher, I’m thinking two hours and eighteen minutes of additional practice would be significant, even if it isn’t “significant.” *
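
That last point is worth unpacking. “Not statistically significant” means the gap could plausibly be noise, given how much instruction time varied from class to class; it doesn’t mean the gap is small. A quick simulation shows how a 2.3-hour difference in means can fail to clear the p < 0.05 bar when per-class hours are spread out (all numbers below are invented; the paper reports only the group means):

```python
# Invented illustration: a 2.3-hour mean difference that isn't "significant."
# The per-class hours are made up; the paper reports only the group means.
from scipy import stats

playground = [15.5 + d for d in (-6, -3, -1, 0, 1, 2, 4, 3, -2, 2)]  # mean 15.5
control = [13.2 + d for d in (-5, -4, 0, 1, 3, 2, -1, 4, -2, 2)]     # mean 13.2

t, p = stats.ttest_ind(playground, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # here p is about 0.11, i.e., above 0.05
```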

Because the study doesn’t isolate the variable (that’s the first concern) and the two groups don’t sufficiently resemble each other (that’s the second concern), I’m still stuck thinking: “this study doesn’t persuade me that embodied cognition is a thing.”

And — as you recall — I looked at this study because a respected group said it’s the best one they found.

TL;DR

I’m still looking for the study that makes the Embodied Cognition approach to teaching persuasive enough for me to recommend it to others.

I haven’t found it yet…but I haven’t given up hope.

By the way: if you know of such a study, please send it my way!


* I spoke with a stats-whisperer friend, who agrees with me that this simply isn’t a reasonable claim.


Margolin, J., Ba, H., Friedman, L. B., Swanlund, A., Dhillon, S., & Liu, F. (2021). Examining the impact of a play-based middle school physics program. Journal of Research on Technology in Education, 53(2), 125–139.