Book Review: Primary Reading Simplified, by Christopher Such
Guest Post

Today’s guest book review is by Kim Lockhart.


Finding a new favourite book, one that checks all the boxes, is like finding a new favourite drink. You want to devour it without putting it down, while at the same time, you want to savour it so it isn’t finished too quickly. A good book leaves you feeling thirsty for more. And most of all, like all things we love, we want to share it with everyone we know so that they, too, can enjoy it and savour it as much as we do.

Book cover for Primary Reading Simplified, by Christopher Such

Christopher Such’s first book, The Art and Science of Teaching Primary Reading, was that book for me: the book that checked all the boxes, the book I didn’t want to put down, the book I was disappointed to finish.

I wanted to share Such’s book with every teacher I knew because I wanted them to feel what I felt while reading it: a sense of relief that there was finally a book that contained the answers I’d been searching for throughout my career.

It was the first comprehensive, no-nonsense book on the science of reading I had come across.

Previously, most of what I knew about reading research I had acquired from complex articles I read while working on my Master of Education. But these research papers were not easy to read, were not written in teacher-friendly language, and didn’t always make a direct connection between the research and what it looked like in classroom practice.

But Christopher Such’s book did.

Fast forward three years, and Such has done it again. He has written another can’t-put-it-down-until-it’s-finished book titled Primary Reading Simplified: A Practical Guide to Classroom Teaching and Whole School Implementation.

While Such’s first book focused on what teachers need to teach (phonemic awareness, phonics, reading fluency, spelling, etc.), his new book focuses more on how to teach it.

In other words: teaching all the components of the reading process is not always enough. As teachers, we have to ensure that students are learning what we’re teaching. Even the most well-intentioned of teachers does not always meet this goal!

This book tells us exactly how we can work smarter, not harder, to ensure better learning outcomes for our students – making sure they ARE learning what we’re teaching.

Such’s new book includes an important aspect of teaching that is often the missing piece in other professional books for teachers. In teacher-friendly language, he shares the research and explains the reading routines, classroom habits, and evidence-based instructional strategies that are essential for learning to happen.

Take the chapter on reading fluency, for example.

Not only does Such explain why reading fluency is important for reading comprehension; he also explicitly and systematically lays out the structure of an effective fluency lesson, including:

  • how to pair students,
  • how to choose the “just right” text,
  • how to model reading fluency,
  • when to explain new vocabulary, and even
  • how long fluency practice should last each day (Such suggests less than 30 minutes to allow for repeated reading of the text; too much time could result in disengagement and be unproductive).

He leaves no guesswork for teachers.

But Such doesn’t stop at developing stronger, more effective classroom routines and systems for teachers. Part IV of the book is designed for anyone interested in school-wide and district-wide implementation of science-based reading instruction.

Until now, science-based instruction has been happening in specific classrooms, with a few individual teachers, in some school districts. In other words, it isn’t consistent. Such understands that for change to be sustainable, it needs to be implemented at the district level. Such changes require fidelity, teacher support, ongoing monitoring, and commitment to change.

Chapters 13, 14, and 15 carefully outline structures for making systems sustainable across districts. He shares the four-phase implementation model (Sharples et al., 2024, as cited in Such, 2025): Explore, Prepare, Deliver, and Sustain. He also makes it clear that implementation can’t happen all at once. Instead, “implementing change across a school should be seen as an ongoing process rather than a one-off event”.

In this section of the book, Such also addresses the barriers that literacy leaders may face when trying to implement system-wide change: specifically, limited human resources and limited time. He also offers practical suggestions for overcoming those barriers, including:

  • very clear expectations and vision,
  • flexible adaptations, and
  • the need to put other, lower-priority changes aside to focus on one priority and sustain it.

Such eloquently concludes this section of the book by explaining, “implementation is most likely to succeed if all involved feel it is something being done with them rather than something done to them” (Such, 2025, p. 136).

Reviewer Kim Lockhart

Echoing the format of Such’s first book, each chapter of his new book is short and dense with evidence-based information in manageable chunks. I love this format because it is practical for busy teachers like me. If I have only 10 minutes to read a snippet before I have to run outside for recess duty, I can easily read a few paragraphs in a chapter, learn something, and know exactly where to return when I have time again.

Likewise, each chapter of Primary Reading Simplified concludes with an “In a Nutshell” section that reviews and highlights key information from the chapter. Such also includes the section “Further Reading” for science-of-reading-nerds like me who want to learn more. He even includes a retrieval-practice quiz for each chapter. (To be honest, I am often too scared to quiz myself because I fear that I won’t be able to remember as much as I hope to, despite my greatest efforts.) Best of all, each chapter concludes with a section called “Questions for Professional Discussion.”

Because reading proficiency is not the sole responsibility of the classroom teacher, I highly recommend Primary Reading Simplified for all teachers, reading specialists, literacy coaches, and administrators. Reading instruction is our shared responsibility. As Christopher Such says himself, “Our pupils deserve no less.”


Kim Lockhart is a French Immersion classroom teacher and Special Education teacher in Kingston, Ontario. She holds a Master of Education (M.Ed.) degree with a research focus on evidence-based practices to support second language learners with reading difficulties. Kim has her Orton-Gillingham Classroom Educator certificate and CERI Structured Literacy Classroom Teacher certification, and was a Structured Literacy coach for the International Dyslexia Association of Ontario for two years. In 2022, Kim worked for the Ontario Ministry of Education as a Content Contributor for the new science-based Language Curriculum and has also presented for the Ontario Minister of Education, Stephen Lecce, and his team after the release of the OHRC’s Right to Read report. She is currently teaching part-time at Vancouver Island University’s (VIU) Faculty of Education in the Literacy, Language and Learning Graduate program. Kim is passionate about the Science of Reading and strives to empower educators, parents, and caregivers to be more knowledgeable, stronger advocates for all children’s right to read across Canada.

How to Present at a Conference…
Andrew Watson

I spend most of my professional life talking in front of groups of people. Specifically, I talk about psychology and neuroscience research — and the (careful!) application of those findings to classrooms.

For that reason, I feel quite comfortable writing blog posts about research-based teaching advice: most recently…

… how goals and feedback might motivate students, and

… the relative importance of enjoyment and/or skill in reading instruction, and

… the potential benefits of PBL.

And so forth.

In recent years, I’ve also been getting questions about presenting in public.

A close-up of a microphone on a podium, with a blurred auditorium audience behind it: the perspective of a speaker addressing a crowd.

While that topic isn’t central to this blog, it might be helpful. After all, I know MANY people who read this blog present at conferences. And I bet many more people would like to.

So: a few quick thoughts.

1: You Do You

No, seriously.

I won’t present well if I’m trying to do (exactly) what you do, and you won’t present well if you try to do (exactly) what I do. We’re different individuals.

While I do think the advice below is broadly helpful, you should also NOT follow this advice if it encourages you to act like someone you’re not.

That is:

  • If you try to be funny — because someone somewhere said “funny speakers are good!” — that advice will work only if you’re a funny person. If not, your audience will sense your discomfort (and your unfunniness) right away.
  • If I try to get adults up and moving around (“Everybody Conga!”) because John Almarode is GREAT at getting people up and moving, I’ll feel deeply foolish. That’s just not my style. And my audience will know I feel deeply foolish. (They will probably also feel deeply foolish.)

And so forth.

2: You Do What You Say

When I’m giving a talk about avoiding working memory overload, I have to be especially careful to avoid working memory overload. If I don’t, my audience will remember my hypocrisy; they will not remember my content.

If you’re talking about retrieval practice, include retrieval practice. If you don’t, your audience will wonder why you didn’t use the teaching practice that you said was so important.

Ditto if you’re talking about mindfulness, or mini-white-boards, or forming relationships. Speakers who don’t do what they say risk looking like hypocrites.

By the way, this truth creates real problems for presentations on specific pedagogies. If my message is “students best remember ideas they learn through open-ended inquiry,” how can I best make that argument?

  • If I give a presentation, then my medium contradicts my message.
  • If I let teachers open-endedly explore their own pedagogical interests, they might not explore “open-ended inquiry.” Or, if they do, their inquiry might not arrive at the same conclusion I do.

My point here is not to be snarky, but to note a real challenge for champions of more “student-centered” pedagogical styles. If I give a talk about the inherent futility of giving talks…the paradox probably overwhelms my message.

3: Highlight Structure

Most talks condense LOTS of information into relatively short periods of time. In fact, one reason speakers receive invitations is: their expertise allows them to organize many ideas into a coherent (if complex) package.

Alas, the more info I condense into my talk, the harder my audience has to work to follow my argument. With each passing slide, they think more nervously: “Wait — how does THIS set of ideas connect to ALL THOSE PREVIOUS sets of ideas?”

For that reason, I think speakers should include a clear outline very near the beginning of the talk. And they should return to that outline frequently throughout the talk to indicate progress.

For instance, I’ve got an upcoming talk on the subject of “Rethinking Intrinsic Motivation.” That talk will begin with this outline:

Act I: Here’s what everyone thinks about intrinsic motivation.

Act II: David Geary wants us to RETHINK intrinsic motivation.

Act III: Most people think that this other theory contradicts Geary. But I think it aligns with — and adds to — Geary’s theory.

Act IV: In fact, we need to RETHINK the other theory to convert it from a “to do” list into a “to think” list. Here’s how we do that.

Notice, this initial outline stays fairly abstract. I say that Geary “wants us to rethink motivation,” but I don’t get specific. I don’t even name the other theory. And so forth.

As I make my way through the talk, I explicitly return to that outline and add in all those specifics:

“As you saw in Act II, Geary wants us to rethink intrinsic motivation from an evolutionary angle. For that reason, he argues, school should emphasize topics that we didn’t evolve to learn, not those that we did.

Now, in Act III…”

By starting with an outline, and by returning to it, I clarify my ideas. Even more important: I clarify the relationship among the ideas.

4: Presenting Online

Four quick rules to raise your online game.

a) My eyes should not be at the center line of the screen, but 1/3 of the way down from the top of the screen. Film and TV shows are shot with this “rule of thirds,” so my audience expects it. Eyes at the midline look odd.

b) For heaven’s sake, I must NOT let my laptop camera point up at my face — and therefore up my nostrils. That view is really unpleasant. I should prop the laptop on books so that it’s level with my face.

c) In daily conversation, we don’t typically get very close to the people we’re talking with. If my face is too close to the camera, I’m a “close talker.” The look is unsettling. So, I should move the camera back so that most of my torso is visible.

d) My background matters. If the room behind me is cluttered and unkempt, I look unprofessional. I should find a simple, classy look.

None of those guidelines is complicated; all of them improve online presentations.

In Sum…

Although speakers should be themselves, some guidelines improve almost all talks. I hope the list above proves helpful.

Do *Goals* Motivate Students? How about *Feedback*?
Andrew Watson

Motivation has been a HOT TOPIC this year in all the schools I’ve visited. Everywhere I go, I get questions about students’ apathy and indifference, and teachers’ frustration.

So, what can schools and teachers do?

Well, Self-Determination Theory offers a framework to answer that pressing question.

In this post, I’m going to introduce the theory — with a focus on its key concepts.

And then I’ll describe a study which helpfully reveals the complexity of enacting the theory wisely.

And, yes, as this post’s title suggests, that helpful study focuses on goals and feedback as motivational strategies.

Let’s see what researchers have discovered about the motivational benefits of goals and feedback.

Introducing Self-Determination Theory

Like many theories, self-determination theory (SDT) can be easily caricatured. Here’s the caricature:

  • Extrinsic motivation BAD!
  • Intrinsic motivation GOOD!!

These six words fall short in lots of ways, starting with this startling observation: SDT doesn’t contrast intrinsic and extrinsic motivation.

Instead, it defines six (yes, SIX) different motivational states — including four (yes, FOUR) different kinds of extrinsic motivation.

Here’s the kicker:

  • Unsurprisingly, intrinsic motivation is GOOD for learning.
  • Surprisingly, two flavors of extrinsic motivation are ALSO good for learning. (The other two flavors: not so much.)

The scholars who created the theory — Deci and Ryan — have a name for “good flavors of extrinsic motivation”; they call them “autonomous extrinsic motivation.”

At the top of this blog post, I asked: what can teachers do about apathetic students? Deci and Ryan answer: “foster the good kinds of motivation.”

Let’s Get Fostering!

Okay, if “the good kinds of motivation” can help, how do we teachers conjure them?

If I’m understanding SDT correctly, it includes bad news and good news.

  • Bad news: we really can’t create intrinsic motivation (as Deci and Ryan define it).
  • Good news: we really CAN create autonomous extrinsic motivation, which — as you recall — benefits learning.

We foster this good extrinsic motivation by focusing on three internal experiences: autonomy, relatedness, and competence.

That is: the more that my students feel in control (“autonomous”), close to one another (“related”), and effective at dealing with their environment (“competent”), the more autonomous extrinsic motivation they will experience. And: the more they will learn.

The obvious implication of this theory, then: let’s focus on enhancing our students’ autonomy, relatedness, and competence.

Plausible Start

When I talk with teachers about this theory, they can easily start to brainstorm suggestions for creating autonomy, relatedness, and competence — and, presumably, the good kind of extrinsic motivation.

As a thought experiment, we can easily imagine that clear goals will have those results. And, while we’re at it, we might predict that process feedback will do likewise.

Several middle school students eagerly raise their hands to answer questions

But let’s go beyond a thought experiment. Let’s have an experiment experiment — with students and data and calculations and all that good stuff.

What happens?

Happily, a research team in the Netherlands wanted to know. They ran a survey study with almost 600 students — aged 11 to 18 — in PE classes.

They asked two sets of questions.

First: did the teachers clarify the goals during class? That is, did they…

  • … tell the students what they were going to learn, or
  • … how they would be evaluated?

Likewise, did they offer process feedback? That is, did they …

  • … encourage reflection on how to improve, or
  • … discuss how to use the students’ strengths?

And so forth.

Second: they asked if the students experienced greater autonomy, relatedness, and/or competence.

To be thorough, they also asked if the students experienced LESS autonomy, relatedness, and/or competence.

Once they crunched all the numbers, what did this research team find?

Not Surprising, or Surprising?

From one perspective, this study seems to be asking rather obvious questions. I mean: OF COURSE students will feel more autonomous if we tell them what the goals are, or more related if we give them feedback.

What other result would we expect?

Here’s the thing: in the world of research, we don’t just assume; we measure. And, sure enough, those measurements gave us the results we (probably) expected.

Yes: clear goals enhance autonomy, relatedness, and competence.

And yes: process feedback does too.

At the same time, the number crunching also provided surprising results.

In some cases, process feedback reduced two of those classroom experiences: “relatedness” and “competence.”

While this result might seem surprising at first, I think it’s easy to understand the chain of emotional events here.

If I give my students lots of feedback, they might feel like I’m hovering or pestering or interfering.

Of course, “hovering, pestering, and interfering” could quite easily reduce the quality of the teacher/student relationship. And, they might also reduce my students’ feelings of competence.

In other words: all that feedback could suggest the students are not doing very well. And that feeling of incompetence could — in turn — reduce the quality of their relationship with the teacher.

Solving the Conundrum

So, which is it? Should teachers give students process feedback because it enhances autonomy, relatedness, and competence? Or, should we limit process feedback, because it reduces relatedness and competence?

As is so often the case, I think we answer that question by rethinking the relationship between research and classroom practice.

Research can almost never tell teachers what to do. Instead, research is awesome at helping us think about what we do.

In this case, our thought process might sound something like this:

  • I want to create autonomous extrinsic motivation, so I should enhance my students’ sense of competence.
  • [Thinking]
  • I wonder if I can promote competence by giving them lots of feedback during today’s class.
  • [more thinking]
  • Now that I think about it, my feedback could enhance their sense of competence. But if I give too much feedback — or unwanted feedback — students could infer that I don’t have confidence in them.
  • [even more thinking]
  • So, I’ll put a note in my lesson plan to make time for feedback. But first, I need to think about the cues my students give me when the feedback is just too much…

Of course, those cues will look different depending on context.

  • 2nd graders will give different cues than 7th graders.
  • I suspect that — for cultural reasons — students in Japan signal frustration differently than those in New Zealand.
  • Students react differently to the cool, with-it teacher than they do to me. (It’s been a minute since I was the cool, with-it teacher.)

And so forth.

But if I consider self-determination theory as a THOUGHT PROCESS, not a TO-DO LIST, I’m much likelier to get the results I want.

In this case: my feedback is likelier to enhance than reduce competence. It’s therefore likelier to promote autonomous extrinsic motivation.

And my students are likelier to learn.


Krijgsman, C., Mainhard, T., van Tartwijk, J., Borghouts, L., Vansteenkiste, M., Aelterman, N., & Haerens, L. (2019). Where to go and how to get there: Goal clarification, process feedback and students’ need satisfaction and frustration from lesson to lesson. Learning and Instruction, 61, 1-11.

Enjoyment or Skill? The Case of Reading
Andrew Watson

Do we want our students to ENJOY math, or to BE SKILLED AT math?

At first, this question sounds like a false choice. Obviously, we want BOTH.

As an English teacher, I want my students to have fun analyzing the books we read…and I want their analyses to have heft, merit, and substance.

I suspect that most teachers, no matter the subject — Math, English, Chemistry, Religion, Pickleball — want our students to revel in core ideas and arrive at correct answers.

Sometimes, alas, we probably need to prioritize one or the other. Especially at the beginning of a unit, should I focus on …

… ensuring that my students like this stuff (even if they don’t immediately understand it), or on

… ensuring they understand the stuff (even if they don’t immediately like it)?

In teaching as in life: if I try to accomplish both goals simultaneously, I’m likely to accomplish neither.

Reading Research

I’m not surprised to discover in a recent study that students’ enjoyment of reading correlates with their skill at reading.

That is: students who get high scores on various reading tests report enjoying reading more than their low-test-scoring peers.

Of course, correlation (say it with me) isn’t causation.

Does the enjoyment lead to the skill? The skill lead to the enjoyment?

Both?

Neither?

To answer these questions, Elsje van Bergen’s research team looked at twins in Finland — more than 3500 of them.

In theory, if we ask all the right questions, gather the right data, and run the right calculations, we should glean insight into the correlation/causation question.
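
(For the curious, here’s the textbook logic behind twin studies — a sketch only; van Bergen’s team ran far more sophisticated models than this. Researchers compare how strongly identical (MZ) twins resemble each other on a trait with how strongly fraternal (DZ) twins do. In the classic Falconer approach, if rMZ and rDZ are those twin-pair correlations, the trait’s variance splits roughly as:

genetic influence (A) ≈ 2 × (rMZ − rDZ)
shared environment (C) ≈ 2 × rDZ − rMZ
unique environment (E) ≈ 1 − rMZ

Because MZ twins share essentially all their genes and DZ twins about half, a bigger MZ/DZ gap signals a bigger genetic contribution; extensions of this logic let researchers test whether skill plausibly drives enjoyment or vice versa.)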

So: what did Team van Bergen find?

But First…

Before you read the answers to that question, you might pause to make a commitment. Try to decide NOW if you’re inclined to trust this methodology.

That is:

a) you think well-done twin studies are likely to be a good way to answer this question. For that reason, you will be inclined to accept this answer even if you initially disagree with it.

or

b) you think twin studies can’t answer questions about skill and enjoyment. Thus, you will not cite this study to support your beliefs even if it aligns with those beliefs.

If we’re going to use research to make decisions about education, we should be scrupulous about doing so even when research contradicts the conclusions we had initially held.

Answers, and Questions

Now, back to this post’s main narrative…

Unlike many studies, this one can be summarized in a few pithy sentences.

A young student looks at a book open on her desk and scratches her head in confusion

Based on the twin data they analyzed, van Bergen’s team concludes that:

  • reading skill increases reading enjoyment,
  • reading enjoyment has no effect on reading skill,
  • genetics influences both positively.

Unsurprisingly, the stats get all stats-y. But the above-the-fold headlines are that simple.

Because I don’t teach reading, I’ve always hesitated to be too opinionated on the topic. Now that this study is in the wild, I do think it adds a useful perspective while the reading wars rage on.

For instance: teachers whom I like and respect have told me that older methods might not have science behind them, but they’re excellent at “making students feel like readers.”

This claim has always puzzled me. How can a student feel like a reader if s/he can’t read?

Van Bergen’s study, I think, gives me permission to address that point directly: “this study suggests that skill at reading will be the more important place to start in reading instruction.”

Zooming the Camera Back

While this study and this post have focused on reading instruction, I do think there’s a broader message here as well.

We frequently hear about the importance of intrinsic motivation; that is, a motivation that springs from students’ natural interests, not from external encouragement (or pressure).

This study, to the contrary, finds that the work teachers do to improve students’ skill simultaneously enhances their motivation. That motivation might be — in effect — extrinsic; but, it’s working. (Working = students read better, and want to read more.)

Overall, I believe we need a substantial rethink of the (false) intrinsic/extrinsic dichotomy, and the (unhelpful) criticism of motivational strategies that many teachers currently find themselves using.

If you want to join me for just such a rethink, I’m giving a webinar for Learning and the Brain on April 5th. We’ll be talking about several research-informed approaches to intrinsic motivation, and brainstorming strategies to make those ideas fit in our classrooms.

I hope I’ll persuade you that we have better ways to talk about motivation than “intrinsic/extrinsic,” and those better ways give us useful teacherly guidance.

I hope you’ll join us!


van Bergen, E., Hart, S. A., Latvala, A., Vuoksimaa, E., Tolvanen, A., & Torppa, M. (2023). Literacy skills seem to fuel literacy enjoyment, rather than vice versa. Developmental Science, 26(3), e13325.

Still Doubting My Doubts: The Case of PBL
Andrew Watson

Last week, I described my enduring concerns about “embodied cognition.” I’m not sure I understand the concept clearly: what exactly counts as “embodied cognition” — mindfulness? Direct instruction? (No, seriously, a well-known book on the subject says it does!)

And the “best research” supporting some of the claims doesn’t feel persuasive to me.

Could using gestures help learning? SURE. Have I found enough research for me to advocate for this strategy? Not yet…

This week, I wanted to test my doubts about project-based learning (universally acronymed as PBL). SURPRISE: I end up feeling kinda persuaded — at least in this one case.

Here’s the story…

Another Steelman

If I’m going to critique a teaching method, I want to be sure to avoid straw men. Neither you nor I learn anything if I point out the flaws in an obviously foolish study or approach. I’m going to learn something if and only if I take on the very best case.

Some thoughtful soul — I’m embarrassed to say, I can’t remember who — recommended this PBL study to me.

Given the strength of that recommendation, I thought it worth a read — despite my PBL concerns.

What are those PBL concerns?

As is so often the case for me, I worry about working memory overload. If I ask my students to

  • Film a scene from Hamlet, but re-imagine it in a new setting, or
  • Build a model city that enacts 3 core principles of ecological design, or
  • Write a new law that prevents a problem in our school’s community

I’m certainly giving them a rich cognitive task.

However, they almost certainly don’t have enough foundational knowledge to manage any of those tasks. Heck, graduate students in those fields struggle with such problems.

So, while I find the questing adventurousness of such tasks intriguing, my knowledge of working memory limitations tells me: ain’t gonna happen.

I should also confess: my experience assigning project-y work hasn’t gone well.

In brief: although “constructivist” approaches often sound appealing, my focus on basic cognitive capacities makes me extra skeptical.

(Important note: “constructivism” is an ENORMOUSLY broad category, and it’s inaccurate/unfair to lump so many pedagogies together into one ill-defined word.)

The Goals; The Problems

When I look at research, I’ve got a few desiderata:

One: The study should — as much as possible — isolate the variable. I can’t say that (to take a comic example) “chewing gum improves learning” if the participants both chewed gum and tap-danced.

Another one: the study should have a plausible control group. The question isn’t “did X improve learning?” but “did X improve learning compared to the plausible alternative Y?”

Yet another one: the researchers should try hard to measure what they claim. If I say “PBL helps students learn stuff,” I should have some reliable measurement of what they learned. If researchers make up their own test…well…I worry that they’re (subconsciously) putting a thumb on the scale.

Because I’m a PBL doubter, I read this study with a keen eye on those topics. I’m used to finding such problems. For instance:

Isolate the variable: the study about “using gestures” actually used gestures AND cool tech stuff. I don’t believe claims about X if the students did both X and Y.

Plausible control group: again, the “using gestures” study compared teachers who got something (extra PD; extra curricular materials) with teachers who got nothing (no extra anything).

Measuring the claim: a study claiming that “handwriting helps students learn” didn’t measure learning. (I still can’t get over how many people are citing this study despite this extraordinary flaw.)

So, would this PBL study fall short of these standards?

To be clear — and fair — no study is perfect. Psychology is complicated; teaching is complicated; PEOPLE are complicated. So, I’m not asking that everything be perfect.

But I am asking that the study make a good-faith effort on most of those things.

Envelopes, Please

As a skeptic, I was pleasantly surprised by what I read. Two points stood out in particular:

First: unlike the “gesture” study, the PBL study made an impressive effort to treat teachers in both groups equally.

  • Both groups — not just the PBL group — got extra PD time.
  • Both groups — not just the PBL group — were told that classroom visits were a part of the program.

This kind of care is, in my experience, unusual. I was pleasantly surprised.

Second: the “measurement” sounds (largely) plausible. The researchers did NOT simply make up their own test of the science learning.

Instead, they used the state of Michigan’s standardized test for both the PBL group and the control group. For time reasons, they didn’t use all the questions from that test — so they did have a chance to put that thumb on the scale. But they had less of a chance than if they’d simply created their own test.

Now, don’t get me wrong. I do have some concerns. For instance:

  • Although the teachers in both groups got special treatment, the students didn’t. That is: both groups of teachers got extra PD, but the students in the control group got “same old, same old.” The study would be more persuasive if they too got a new teaching approach.
  • The teachers in both groups got extra stuff, but the teachers in the PBL group got MORE extra stuff. They got more (and more frequent) PD, and more curricular support, and class visits. (For scheduling reasons, the promised class visits for the control group largely didn’t happen.)
  • As noted above, the research team didn’t exactly use someone else’s measurement — although it seems they made a good-faith effort to do so.

In brief, I can quibble with the study — but I don’t think its flaws merit easy disqualification.

Final Verdict

The research team measured LOTS of variables, and scrupulously tallied scores for MANY important sub-groups and special circumstances.

A student posed as if flying like a superhero: he is actually lying on his side against a dark gray background, one arm extended forward in a classic “superhero flight” pose.

If we take the headline number, they found an effect size of 0.277 (technically, “small”) for the amount of additional science knowledge that the students in the PBL group learned compared to the control group.
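
(A quick aside, in case “effect size” feels abstract: studies like this typically report a standardized mean difference, Cohen’s d or a close cousin, computed roughly as

effect size ≈ (average score of PBL group − average score of control group) ÷ pooled standard deviation

So, assuming a d-style measure here, 0.277 means the PBL students outscored the control students by a bit more than a quarter of a standard deviation.)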

That is: PBL produced more learning, but not lots-n-lots. We can’t rule out the possibility that all that extra learning resulted from the “shiny new thing,” not from the PBL.

At the same time, my concerns about working memory overload were — at least in this one example — calmed. If this PBL program had overwhelmed WM for these 3rd graders, they wouldn’t have learned much at all; instead, they learned a bit more.

I still have lots of questions and concerns. But I’m heartened to see that — done right — this PBL program offers a potential pathway for further exploration.


Krajcik, J., Schneider, B., Miller, E. A., Chen, I. C., Bradford, L., Baker, Q., … & Peek-Brown, D. (2023). Assessing the effect of project-based learning on science learning in elementary schools. American Educational Research Journal, 60(1), 70-102.

Doubting My Doubts: The Case of Gesture and Embodied Cognition
Andrew Watson

The more time I spend hearing “research-informed educational advice,” the more I worry about the enticing words “research-informed.”

Many MANY people toss around the phrase “research says…”; all too often, even a brief investigation suggests that research really doesn’t say that.

Young girl swinging on a playground swing; a wooden structure behind her

For this reason, I find myself slower to get excited about new “research-based” teaching ideas than many of my colleagues…even colleagues whom I admire, respect, and generally trust.

For instance: lots of scholars are investigating the field of embodied cognition and — more specifically — of using gestures to promote learning.

I’m certainly open to the idea that combining gestures with words and visuals will improve learning. And: I want to know A LOT more about the specifics of this idea:

  • Who is making these gestures? Teachers? Students? Actors in videos?
  • What kind of gestures are they? “Deictic” or “iconic”? Rehearsed or improvised?
  • Does the strategy work well in all disciplines/grades/cultures?

And so forth.

I’d also love to see some straightforwardly convincing research to support the answers to those questions.

So, for instance, I wrote a post about students using gestures to learn about Brownian motion. While the outline of the study made sense to me, it…

… didn’t have a control group,

… chose a topic easily translated into gestures, and

… measured “learning” 2 days later. (Does 2 days count as learning?)

While I’m glad I read the study, and appreciate some of its nuances, I don’t think it’s a slam dunk.

At the same time, I should turn some of my skeptical energy towards myself.

In other words: given all of my doubts, I should also be ready to doubt my own doubts. Maybe the wisdom of the crowd should outweigh my own habitual caution here. Maybe I’m so invested in my skeptic’s persona that I’m subconsciously unwilling to be persuaded…

Enter the Steelman

Because I doubt my doubts, I’m always on the lookout for EXCELLENT research contradicting my outlier point of view. I genuinely WANT to have my errors pointed out to me.

For that reason, I was delighted to find a highly touted study about teaching physics with embodied cognition.

My source here — published by the Education Endowment Foundation — looks for the very best evidence supporting all sorts of cognitive science-based teaching advice: interleaving, retrieval practice, schemas, and so forth.

Of the 26 studies they found looking at embodied cognition, one stood out for its excellence. (In their rating system, it’s the only one they rated “high priority.”) If the EEF, and all the wise scholars behind this report, find this study persuasive, it’s likely to be among the best research I can find.

In other words: I’m not analyzing a straw man here. This study is the “steelman.”

Playground Physics

The idea behind this study sounds both sensible and fun. Many of the abstract concepts studied in physics class are acted out quite concretely — that is, they are EMBODIED — when our children get to the playground.

If we could connect abstract classroom physics with embodied playground physics, that approach could be really helpful.

This study begins with a good idea…and an ENORMOUS sample size. Over 3400 (!) students were in the initial sample; after (unusually high) attrition, that number dropped to about 1300 — roughly 800 in the “playground physics” group, and 500 in the control group.

The researchers wanted to see if the students in the playground group would a) learn more physics, b) feel more engaged, and c) feel more motivated — all compared to the control group.

The special “playground physics” program begins with a training session for the teachers, and includes curricular materials.

Crucially, playground physics also includes a phone app that students use to analyze their own motion:

“Using the app, users record videos of themselves and their friends engaging in physical play, and the app generates graphs of distance traveled, speed, direction, and kinetic and potential energy. As users watch the video, they see graphs of their movement unfolding. Users can pause to examine where they are moving fastest or slowest, where a force is pushing or pulling, and where their kinetic and potential energies are at their highest and lowest points. This is intended to support conversations grounded in the children’s physical experience”.

Honestly, the whole experience sounds really interesting!

Persistent Doubts

Although I tried to find a Steelman Study to support the case for Team Embodied Cognition, I’m still not persuaded.

I have two substantial concerns:

First:

This study does not measure the benefits of embodied cognition for learning physics.

Instead, it measures the benefits of embodied cognition PLUS cool tech gadgetry for learning physics. In fact, the study is published in a journal that focuses on technology in education.

Yes, the students learned more — but the extra learning could have come from the app (so much fun with video!) or from the embodied cognition (moving is so cool!) or both. We just don’t know.

I am not the only person pointing out this concern. The study’s authors say several times that they don’t know what the “mechanism” is that created additional learning. In other words: they do not claim that the embodiment mattered more than the tech — or that it mattered at all. They don’t know.

To be persuaded by research into the use of gestures, I want to see a study that singles out the gestures; it should — in the lingo of research — “isolate the variable.” This one doesn’t.

Second:

When we compare two groups, we want them to be close enough to each other to be good proxies for each other. I’m not sure we can say that for this study.

A) The teachers of Playground Physics received extra PD; the teachers in the control group didn’t. Did the PD itself make the difference? We don’t know.

B) The study used a “business-as-usual control group.” That is: control group teachers just did what they always did. Teachers and students in the Playground Physics group got a Shiny New Thing. Was it the novelty that made the difference? We don’t know.

C) The Playground Physics group spent 15.5 hours studying physics; the control group spent 13.2 hours. The study’s authors write that this difference isn’t “statistically significant.” But — as a classroom teacher — I’m thinking two hours and eighteen minutes of additional practice would be significant, even if it isn’t “significant.” *

Because the study doesn’t isolate the variable (that’s the first concern) and the two groups don’t sufficiently resemble each other (that’s the second concern), I’m still stuck thinking: “this study doesn’t persuade me that embodied cognition is a thing.”

And — as you recall — I looked at this study because a respected group said it’s the best one they found.

TL;DR

I’m still looking for the study that makes the Embodied Cognition approach to teaching persuasive enough for me to recommend it to others.

I haven’t found it yet…but I haven’t given up hope.

By the way: if you know of such a study, please send it my way!


* I spoke with a stats-whisperer friend, who agrees with me that this simply isn’t a reasonable claim.


Margolin, J., Ba, H., Friedman, L. B., Swanlund, A., Dhillon, S., & Liu, F. (2021). Examining the impact of a play-based middle school physics program. Journal of Research on Technology in Education, 53(2), 125-139.

Revisiting the “Handwriting vs. Laptops” Debate: More Moving Goalposts
Andrew Watson

I don’t often repost articles, but I think this one deserves another look — for at least two reasons:

First: The study described below has been cited FREQUENTLY in recent weeks, although it really does not merit the confidence that it inspires, and

Second: The study has in fact drawn a strong rebuttal from other researchers in the field.

For both these reasons, I think you’ll find this post worth another look. (I have, by the way, updated this post to reflect the newly published rebuttal.)


Imagine this conversation that you and I might have:

ANDREW: The fastest way to drive from here to the school is via South Street.

YOU: It is? That seems like a long detour. Why would I go that way?

ANDREW: I didn’t say it was the fastest; I said it was the best because it’s the prettiest.

YOU: You DID say it was fastest…wait, the prettiest? It’s basically junk yards and construction sites.

ANDREW: Yes, but because of all the bakeries, it smells really nice.

YOU: What does that have to do with fastest/prettiest?

ANDREW: Why are you being so unpleasant and difficult? South Street is the best route…

I suspect you would think: “this conversation is very frustrating and unhelpful because the goal posts keep moving.”

That is: I initially claimed that South Street is the fastest…but keep moving my claims as soon as you object. (And, oddly, I’m mad at you for being unreasonable.)

I routinely notice this pattern when I ask questions about the claim that “handwriting is better than laptops for note taking.”

Watch the goalposts move:

CLAIM: Handwriting is better than laptops for note taking. This study says so.

ANDREW: That study starts with the BIZARRE assumption that students can’t learn how to do new things — like, how to take notes correctly. And, research since then has routinely complicated or contradicted it.

CLAIM: I didn’t say handwriting is better because of this study. It’s because writing by hand changes neural networks. This research says so.

ANDREW: You DID make that claim because of that study…wait, that other research says that writing by hand helps students learn to write by hand. Of course it does.

But that doesn’t mean that writing by hand helps students learn other things — like, say, history or chemistry or German. Can you show me research supporting that claim?

CLAIM: I can’t, but when students write on laptops they distract students around them.

ANDREW: Yes, but that’s a completely different claim than the one you started with.

CLAIM: Why are you being so unpleasant and difficult? Writing by hand is better than taking notes on laptops!

Once again, I find this conversation frustrating and unhelpful. SO MANY MOVING GOALPOSTS.

I am entirely open to the idea that handwriting is better. But if someone makes that claim, and says it’s “research-based,” I’d like them to provide research that actually supports the claim.

A bright yellow American football goalpost, above a bright green field, against a dark stadium background

So far, that turns out to be a big ask.

This idea that “handwriting is better than keyboarding” keeps popping up (I suspect because of a recent study), so I want to re-investigate this claim — with a keen eye on those goalposts.

Reasonable Start

If you see a headline that says, “Why Writing by Hand Is Better for Memory and Learning,” you might interpret that claim roughly this way:

Students who take handwritten notes — in their 6th grade history class, say, or their 10th grade science class — remember more of that material after 2 weeks than students who took notes on laptops.

Yes, I conjured up some of those specifics: “6th grade history,” “two weeks later.” But those seem like reasonable extrapolations. What else could the claim substantively mean?

Briefly: plausible goalpost = “students remember more history 2 weeks later.”

So, let’s look at the recent research being used to support this claim.

Here’s a very basic question: “how did the researchers measure how much the students learned and remembered?”

Did the students take a quiz two weeks later? Did they undertake a “brain dump” the following day? How, precisely, do we know what they learned?

The answer is:

The researchers did not measure how much the students learned/remembered.

Honestly. No quiz. No brain dump. Nothing.

And yet, even though the study doesn’t measure memory or learning, it is being used to argue that handwriting enhances memory and learning.

I find this astonishing.

Imagine that I claimed “research shows that this drug will lower your blood pressure!” but I never actually measured anyone’s blood pressure. This study takes a similar logical shortcut.

That is: the study measures activity “in brain regions associated with memory and learning.”

Did you notice something?

Goalpost plausibly was: “students remember more history 2 weeks later.”

Goalpost now is: “more activity in important brain regions.”

Grrr.

Getting Specific

When evaluating “research-based” claims, it’s helpful to know exactly what the participants in the research did.

So, these 36 participants wrote the same fifteen words multiple times. Sometimes they wrote with a stylus on a tablet; sometimes they typed using only their right index finger. (BTW: all the participants were right-handed.)

Now, this insistence on “right index finger” makes sense from a neuro-research perspective. If both “handwriters” and “keyboarders” are using one hand, then the researchers reduce lots of confounding variables.

At the same time, this emphasis also leads to highly artificial circumstances.

Presumably some people type with one finger. But, I’m guessing that most people who want to take laptop notes don’t. I suspect they want to take laptop notes because they have some degree of facility on a keyboard.

So:

Goalpost initially was: “students remember more history 2 weeks later.”

Goalpost then was: “more activity in important brain regions.”

Goalpost now is: “more activity in important brain regions when participants write as they usually do than when they type in a really, really unnatural way.”

Double grrr.

It is, of course, helpful to know about these differences in neural responses. But I don’t think they plausibly add up to “students remember more.” Because — remember — no one measured learning.

I Am Not Alone

Since I published the original version of this article almost a year ago, it has been sharply questioned by other scholars in the very same journal.

These scholars describe the original study’s conclusions as “a logical shortcut.” They share my alarm that research which never measured any learning is being used to make strong claims about learning.

They also note we shouldn’t reach an emphatic verdict about grade-school learners based on college-age students:

“Drawing conclusions on learning processes in children in a classroom from a lab study carried out on a group of university students that did not include any type of learning seems slippery at best.” (Exasperated emphasis added)

This rebuttal also expresses technical concerns about the original study’s neuro-conclusions:

“While theta and alpha oscillations have been functionally related to a variety of cognitive processes it has not been clearly established that increased theta/alpha connectivity creates appropriate conditions for learning.”

I don’t know enough about theta and alpha oscillations to have a strong opinion here — but I think it’s helpful to know that other neuro-experts express reasons to doubt the original study’s confidence.

Lest I Be Misunderstood

In such conversations, I’m often misunderstood to be confident about the right answer. That is: I might seem to be saying “I’m confident that laptops are better than handwriting for learning.”

I am NOT saying that.

Instead, I’m asking for research that directly measures the claim being made.

If I say to you: “research shows that handwriting is better for learning than laptops,” I should be able to show you research that directly measures that claim.

If, instead, I have research showing that handwriting develops neural networks that might be beneficial for learning, I should say that.

My frustration about this point stems from a broader concern.

Over and over, I find that non-teachers cite research — especially neuroscience research — to boss teachers around. While I certainly do believe that teachers should know about pertinent research findings (that’s why I write this blog!), I also believe that we need to acknowledge the limits of our research-based knowledge.

I just don’t think that research (yet) demonstrates that handwritten notes generate more learning than laptop notes.

Overall, I’m inclined to believe:

Practicing fine motor skills (by, say, handwriting) is really important for young learners.

Practicing handwriting makes us better at handwriting — and other word-related skills.

As students get older and more facile with a keyboard, the benefits of handwriting vs. keyboarding will probably depend on the student, the subject, the kind of notes being taken, etc.

And if I see more than one study directly testing the claim that handwriting helps people learn better, I’m entirely open to that possibility.

But at least so far, that claim is not — by any definition that seems reasonable to me — “research-based.”


Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: a high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Pinet, S., & Longcamp, M. (2025). Commentary: Handwriting but not typewriting leads to widespread brain connectivity: a high-density EEG study with implications for the classroom. Frontiers in Psychology, 15, 1517235.

Goals, Failure, and Emotions: a Conceptual Framework
Andrew Watson

Researchers can provide guidance to teachers by looking at specific teaching practices.

In last week’s post, for instance, I looked at a study about learning from mistakes. TL;DR: students learned more from review sessions where they explored their own mistakes than from those where teachers reviewed ideas.

Or,

Back in December, I looked at a study about using “pre-questions” to reduce mind-wandering. Sure enough, students who answered pre-questions about a topic spent less time mind-wandering than those who didn’t.

Obviously, these studies might provide us with lots of useful guidance.

At the same time, this “one-study-at-a-time” approach has its drawbacks. For instance:

What if my students (or class) don’t really resemble the students (or class) in the study?

What if THIS study says that pre-questions reduce mind-wandering, but THAT study says they don’t?

What if THIS study (again) says that pre-questions reduce mind wandering, but THAT study says that mindful meditation reduces mind-wandering? Which strategy should I use?

And so forth.

Because of these complexities, we can — and should — rely on researchers in another way. In addition to all that research, they might also provide conceptual frameworks that help us think through a teaching situation.

These conceptual frameworks don’t necessarily say “do this.” Instead, they say “consider these factors as you decide what to do.”

Because such guidance is both less specific and more flexible, it might be either especially frustrating or especially useful.

Here’s a recent example…

Setting Goals, and Failing…

We spend a lot of time — I mean, a LOT of time — talking about the benefits of short-term failure. Whether the focus is “desirable difficulty” or “productive struggle” or “a culture of error,” we talk as if failure were the best idea since banning smoking on airplanes.

Of course, ask any student about “failure” and you’ll get a different answer. Heck: they might prefer smoking on airplanes.

After all: failure feels really unpleasant — neither desirable nor productive, nor cultured.

In a recent paper, scholars Ryan Carlson and Ayelet Fishbach explore the complexity of “learning from failure”: specifically, how failure interferes with students’ goals.

To create a conceptual framework around this question, Carlson and Fishbach create two concept pairs.

First: they consider the important distinction between goal setting and goal striving.

Happily, those terms mean just what they say.

When I decide that I want to learn Spanish, or strengthen my friendships, or stop drinking caffeine, I am setting a goal.

When I decide to enroll in a Spanish class, schedule more frequent dinners with pals, or purge my kitchen of all my coffee clutter, now I’m goal striving.

This pair helps us think through the big category “goals” in smaller steps.

Second: Carlson and Fishbach consider that both emotional barriers and cognitive barriers can interfere with goal setting and goal striving.

The resulting conceptual possibilities look like this:

A 2x2 grid, with "goal setting" and "goal striving" as two columns, and "emotional barriers" and "cognitive barriers" as two rows.

The grid created by these conceptual pairs allows us to THINK differently about failure: both about the problems that students face, and the solutions that we might use to address them.

Troubling Examples

Having proposed this grid, Carlson and Fishbach explore research into its four quadrants. I’ll be honest: the resulting research and insights frequently alarmed me.

For instance, let’s look at the top-left quadrant: “emotional barriers during goal setting.”

Imagine that one of my students contemplates an upcoming capstone project. She wants to set an ambitious goal, but fears that this ambitious target will lead to failure.

Her emotional response during goal setting might prompt her to settle for an easier project instead.

In this case, her emotional response shuts down her thinking before it even starts. As Carlson and Fishbach pithily summarize this situation: “people do not need to fail for failure to undermine learning.”

YIKES. (Suddenly, the whole “desirable difficulties” project sounds much less plausible…)

Or, top right (emotional barriers/goal striving): it turns out that “information avoidance” is a thing.

People often don’t want to learn results of medical tests — their emotions keep them from getting to work solving a potential health problem.

So, too, I can tell you from painful experience that students often don’t read the comments on their papers. When they’re disappointed with a grade, they don’t consistently react by considering the very feedback that would help them improve — that is, “strive to meet the goal of higher grades.”

Or, lower right (cognitive barriers/goal striving). Carlson and Fishbach describe a study — intriguingly called “The Mystery Box Game.”

Long story short: in this game, learning how to fail is more beneficial than learning about one path to success. Yet about 1/3 of participants regularly choose the less beneficial path — presumably because “learning how to fail” feels too alarming.

Problems Beget Solutions?

So far, this blog post might feel rather glum: so much focus on failure!

Yet Carlson and Fishbach conclude their essay by contemplating solutions. Specifically, they use a version of that grid above to consider solutions to the cognitive and emotional barriers during goal setting and goal striving.

For example:

  • “Vicarious learning”: people learn more from negative feedback when it’s directed at someone else.
  • “Giving advice”: counter-intuitively, people who give advice benefit from it at least as much as those who receive it. So, students struggling in one of the quadrants above (say: cognitive barriers during goal striving) might be asked for advice on how to help another student in a similar situation. The advice they give will help them.
  • “Counter-factual thinking”: students who ask “what if” questions (“what if I had studied with a partner? what if I had done more practice problems?”) bounce back from negative feedback more quickly and process it more productively.

Because I’ve only recently come across this article, I’m still pondering how helpful it will be in thinking through all these questions.

Given the optimism of “desirable difficulty/productive struggle” in our Learning and the Brain conversations, I think it offers a helpful counterweight: an extra dose of realism to understand and manage.


Carlson, R. W., & Fishbach, A. (2024). Learning from failure. Motivation Science.

“Learning from Mistakes” vs. “Learning from Explanations”
Andrew Watson

As I wrote last week, thinkers in edu-world often make strong claims at the expense of nuanced ones.

For example:

  • “A growth mindset undergirds all learning” vs. “growth mindset is an obvious boondoggle.”
  • “AI will transform education for the better” vs. “AI will make people dumber and schools worse.”
  • “Be the sage on that stage!” vs. “get off the stage to guide from the side!”

The list goes on (and gets angrier).

A closeup of a young student leaning his face up against a chalkboard, his eyes closed in frustration.

When researchers start digging into specifics, however, the daily experience of teaching and learning gets mightily complicated, and mighty fast.

All those strong claims start to look…well…too strong for their own good.

One extraordinary example of “digging into the specifics” can be found in Graham Nuthall’s The Hidden Lives of Learners. Nuthall put cameras and mics on students in New Zealand classrooms, and arrived at all sorts of astonishing conclusions.

Another recent study looks quite specifically — no, really specifically — at 4 teachers. The goal: to understand what part of their work helped students learn.

Here’s the story.

Time to Review

A group of scholars, led by Dr. Janet Metcalfe, wondered if students learned more from teachers’ responses to their mistakes than from teachers’ direct instruction. (You can learn more about the study here.)

A few important points merit attention right away.

First: the classroom sessions I’m about to describe are REVIEW sessions. The students have ALREADY learned the math covered in these lessons; the teachers are helping them review in preparation for a high-stakes exam.

In other words: this study does not focus on initial instruction. It focuses on subsequent review.

Second: the students involved VOLUNTEERED to take part. They are, presumably, atypically motivated to learn math.

Keep these points in mind as you think about applying the ideas described below.

In this study, 4 teachers helped 175 8th grade students prepare for upcoming state math exams.

For half of the students, the teachers taught 8 lessons (“explicit instruction”) reviewing core math concepts that would be on that exam.

For the other half, the teachers responded to the mistakes that students made on practice tests. That is: during 4 sessions, students took 45-minute math tests. And after each of those sessions, the teachers

“were instructed […] to focus on the students’ errors and to do whatever they deemed appropriate to ensure that the issues underlying the errors would not reoccur and that the students would learn from their errors.”

So, which review approach proved more helpful — the explicit instruction, or the learn-from-mistakes instruction? And, why?

An Envelope, and LOTS of Questions…

That first question — which kind of review proved more helpful? — is easy to answer.

Students in both groups learned math; they did better on the post-test than the pre-test.

The students in the “learn-from-mistakes” group learned more.

This straightforward finding leads to obvious questions. And — alas — those obvious questions are VERY tricky to answer.

For instance, “how much more did the students in the learn-from-mistakes group learn?” That’s a reasonable question. The answer takes some careful parsing.

Roughly speaking, students in the explicit instruction group increased their scores by about 2% per hour of instruction.

For those in the learn-from-mistakes group, the answer depended on the teacher.

The least successful teacher helped students in this group improve by 2% per hour of instruction. The most successful teacher helped students improve by 5% per hour of instruction.

Of course, that last paragraph prompts another reasonable question: what was different about those two teachers? Why did one teacher benefit her students more than twice as much as her colleague did?
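
To make the size of that gap concrete, here’s a minimal arithmetic sketch (in Python). Only the per-hour rates come from the study as I’ve summarized it; the six hours of total review time is my own assumption, chosen purely to show how the difference compounds:

    # Hypothetical illustration -- only the per-hour rates reflect the study.
    hours_of_review = 6          # assumed total review time, for illustration
    explicit_rate = 0.02         # explicit instruction: ~2% gain per hour
    mistakes_rate_best = 0.05    # learn-from-mistakes, most successful teacher

    explicit_gain = explicit_rate * hours_of_review       # 0.12
    mistakes_gain = mistakes_rate_best * hours_of_review  # 0.30

    print(f"Explicit instruction: +{explicit_gain:.0%}")  # +12%
    print(f"Learn-from-mistakes:  +{mistakes_gain:.0%}")  # +30%

Same students, same exam prep — yet with the most effective teacher, the learn-from-mistakes review more than doubles the total gain.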

Let the Sleuthing Commence…

The study’s authors spend a great deal of time — and crunch a great many equations — to answer that question.

For instance:

Maybe the teacher whose students learned more (let’s call her Teacher M) is just a better teacher than the one whose students learned less (Teacher L).

As the researchers point out, that explanation doesn’t make much sense. After all, in their explicit instruction sessions, both Teacher M and Teacher L helped their students equally.

(By the way: to simplify this blog post, I’m leaving out the two other teachers for now.)

Okay, maybe Teacher M did a better job of focusing on students’ mistakes, whereas Teacher L spent too much time focusing on questions that students got right.

Nope. This study includes quite an eye-watering graph to show that both teachers focused on students’ mistakes to about the same degree.

As the researchers write: “all of the teachers taught to the errors of their students, and … the extent to which they did so did not predict student learning.”

So, what was the secret sauce?

The Perfect Combination

After a few more false leads, the study focuses on two moment-by-moment variables: the teachers’ focus, and the kind of interaction with the student.

Focus: did the teachers

“[dwell] upon how to solve the problem correctly,” or

“[delve] into the nature of the errors – why the students had made them, what the difficulty in the logic was, and/or how to recognize and circumvent such mistakes in the future”?

Kind of interaction: did the teachers explain/lecture, or did they discuss/interact?

With this pair of questions, at last, the study struck gold.

Teacher L — whose students learned relatively little — focused almost all her time on “how to solve the problem correctly.” While pursuing that goal, she divided her time equally between lecture and discussion.

Teacher M — whose students improved more quickly — spent almost all her time in discussion, with almost no time in lecture. While in this interactive mode, she divided her time more-or-less equally between solving problems and understanding the nature of the mistake.

This final insight allows us to make this claim:

Highly motivated 8th grade math students,

reviewing in preparation for a high-stakes exam,

learn less from explicit instruction and more from making and reviewing their mistakes,

as long as the teacher keeps those review sessions interactive,

and equally focused on “getting the answer right” and “understanding the nature of the mistake.”

Notice, by the way, all the nuance in this statement.

To emphasize just one point here: this study does NOT argue that “learning from mistakes” is better than “direct instruction” in all circumstances.

It argues that students learn more from mistakes when reviewing, as long as the teacher follows a very particular formula.

A Final Note

Heated battles in this field often get hung up on specific labels.

As I’ve written before, we do a LOT of arguing about the benefits of “desirable difficulty” vs. “productive struggle” — an odd set of arguments, given that both phrases seem to mean the same thing.

This study was co-authored by (among other scholars) Robert Bjork — who helped coin the phrase “desirable difficulty.” For that reason, you might be surprised to learn that this study touts the benefits of “productive struggle.”

That is: the students took a test, they made mistakes, they wrestled with those mistakes, and they learned more. Their struggle (trying to understand what they did wrong) was productive (they improved on their test scores — and probably their understanding of math).

Of course, I could just as easily describe that process as “desirable difficulty.” The difficulties these students faced here — the test, the mistakes, the analysis — turned out to be beneficial — that is, “desirable.”

My own view is: don’t get hung up on the label. The question is: are the students both thinking harder and ultimately succeeding? If “yes” and “yes,” then this teaching approach will benefit students.


Metcalfe, J., Xu, J., Vuorre, M., Siegler, R., Wiliam, D., & Bjork, R. A. (2024). Learning from errors versus explicit instruction in preparation for a test that counts. British Journal of Educational Psychology.

“All People Learn the Same Way”: Exploring a Debate
Andrew Watson

Over on eX/Twitter, a debate has been raging — with all the subtlety and nuance of your typical Twitter debate. The opening salvo was something like:

“Despite what you’ve heard, all people learn the same way.”

You can imagine what happened next. (Free advice: look away.)

Despite all the Twitter mishegas, the underlying question is useful and important — so I’ll do my best to find the greys among the black-vs-white thinking.

Here goes.

Useful…

I suspect that this claim — “all people learn the same way” — got started as a rebuttal to various myths about “meaningful sub-categories of learners.” Alas, most of those proposed sub-categories turn out not to be true or useful.

  • No, learning styles theory has not held up well.
  • No, the theory of “multiple intelligences” has no useful teaching implications. (And Howard Gardner didn’t claim that it did.)
  • No, “left-brain, right-brain” dichotomies don’t give us insights into teaching and learning.
  • No, the Myers-Briggs Type Indicator doesn’t tell us how to manage classrooms or lesson plans. *
  • My British friends tell me about some system to sort students according to different colored hats. (I do not think I’m making this up.)
  • (I’ve written about these claims so many times that I’m not going to rehash the evidence here.)

Whenever anyone says “we can usefully divide students into THIS kind of learner and THAT kind of learner,” we should be highly suspicious and ask to see lots of research. (If you want to evaluate that research critically, I can recommend a good book.)

A graphic of two heads facing each other in conversation: one with a lightbulb inside, the other with a question mark.

Well, the shortest rebuttal to this sort of claim is: “Those sub-categories don’t exist. ALL PEOPLE LEARN THE SAME WAY.”

Now, any time someone makes an absolute claim about teaching and learning in six words and seven syllables, you know that claim is oversimplified.

But you can understand the temptation to cut off all those untrue claims with a brusque rejoinder. That temptation pulses all the stronger because those untrue claims persist so stubbornly. (In 2025, schools of education are STILL teaching learning styles.)

…and (substantially) True

This claim (“all people…”) isn’t simply useful; it’s also largely accurate.

For example:

At the neuro-biological level — neurons, neurotransmitters, synapses, myelin, etc. — long-term memories form the same way for everyone.

As far as we know…

  • men and women
  • tall people and short people
  • introverts and extroverts
  • people who think cilantro tastes like soap, and the rest of us

… everyone forms new neural networks (that is: “learns”) the same way. (I should emphasize that our understanding of this neural process is still VERY basic. We’ve still got SO MUCH to learn.)

When we switch our analysis from neuroscience to psychology, the claim still holds up well.

For instance:

  • Everyone uses working memory to combine new information from the environment with concepts and facts stored in long-term memory.
  • Everyone depends on a complex of systems that we call “attention” to control the flow of all that information.
  • Everyone responds simultaneously with emotion and cognition to any given set of circumstances. (These two systems overlap so much that distinguishing between them creates lots o’ challenges.)

And so forth.

Given all these similarities, cognitive science research really can offer up advice that applies to almost everyone in almost all circumstances.

Yes: we really must manage working memory load so that students can build concepts effectively.

Yes: retrieval practice helps almost all learners consolidate and transfer almost all school learning. (Yes, “retrieval-induced forgetting” is a concern, but it can be managed if we strategize effectively.)

Yes: spacing and interleaving enhance learning in most circumstances.

And so on…

Given the broad usefulness and truth of the “we-all-learn-the-same” claim, I certainly understand why it’s tempting to make it — and to defend it.

Exceptions Matter

I’ve written that the claim is “broadly” useful and true; but I don’t think it’s ALWAYS true.

For example:

Students with diagnosable learning differences really might learn differently.

For instance: dyslexic readers recruit distinctive neural networks to get their reading done. Those readers almost certainly benefit from distinct teaching strategies. In other words: by any reasonable definition, they “learn differently.”

Another example:

All learning depends on prior knowledge.

That claim — which sounds like “all people learn the same way” — also suggests that people learn differently.

Let’s imagine that you know A LOT more about opera than I do. (This assumption is almost certainly true.) If you and I both attend an advanced lecture about an obscure opera — “Der Häusliche Krieg” — your learning will function quite differently from mine. Because you’re an expert and I’m a novice, we will learn differently.

Lots of individual differences will bring teachers to this same point.

Because I teach English, I teach grammar — and MANY of my students simply hate grammar. Their prior experience tells them it’s boring, useless, and impossible to understand.

On the one hand, those enduring cognitive principles listed above (working memory, retrieval practice, etc.) do apply to them. But their emotional response to the content will in fact shape the way they go about learning it.

Core principles of learning apply, and my students’ prior experience means that their learning process might well be different.

Beyond Twitter Rage

Twitter generates lots of extreme debates because complex ideas can’t be boiled down into its trivializing format.

So it’s not surprising that a nuanced understanding of “individual differences within important, broad, and meaningful similarities” doesn’t work in Twitter-ville.

At the same time, I do think our discussions of learning should be able to manage — and to focus on — that nuance.

Our students will learn more when we recognize BOTH the broad cognitive principles that shape instruction, AND the individual variation that will be essential within those principles.


Back in 2019, Paul Kirschner wrote a blog post on this same point. His “digestive system” analogy is VERY helpful.


* A few years back, I emailed the MBTI people to ask for research supporting their claims. They did not send me any. They did, however, sign me up for their newsletter.