Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Are Science Videos Really the Answer for Struggling Readers?
Andrew Watson

Obviously, we want our students to learn science. But: what if they struggle to learn the science by reading?

One solution: we could use VIDEOS to teach science. Videos can transform the written words into an audio track. Voila — no reading required!

A recent study explores this very question. To paraphrase its conclusion: “YUP — students learn more from science videos, so we can/should use them in science classes.”

While the study itself strikes me as well done in a number of ways, I want to push back against this conclusion. Ultimately, this disagreement might reveal the tension between a researcher’s perspective and a teacher’s perspective.

The outline for this post:

  • I’ll start by summarizing the study, and noting its strengths
  • I’ll then try to explain why I’m not persuaded that this study should make science teachers favor videos — even for weaker-than-average readers.

All The Right Steps

Doing education research well requires enormous care, because it can go wrong in so many different ways. Consider all the expectations that we have — and should have! — for research.

  1. We want to have enough participants in the study for the results to be meaningful.
  2. We want researchers to compare plausible alternatives — not just show that “doing something” is better than “doing nothing.”
  3. We’d like reports on meaningful results — not just “did the students have fun” or something like that.
  4. We’d be REALLY happy if the research took place in typical school conditions — not in some hermetically sealed zone that bears little resemblance to actual classrooms.
  5. We expect lots of statsy/mathy results (even if we don’t promise to read them carefully).

And so on. If researchers skip any of these steps, we can complain from the sidelines: “this study doesn’t include an ‘active control group,’ so we shouldn’t rely on its results.” (Honestly, I’m one of the people who object to studies without active control groups.)

Because I’m so aware of these expectations — and lots more — I felt more and more impressed as I made my way through this study. This team has clearly thought through many of the possible objections and found ways to anticipate and mollify them.

  • Enough participants? More than 100.
  • Active controls? Students learned by watching videos, or by reading an illustrated text (with identical words!). In my view, that’s an entirely plausible comparison.
  • Meaningful results? Researchers measured how well the students remembered and transferred their learning…up to a week later!
  • Within a school? Yup. In fact, students had RECESS in the middle of the study, because that’s how they roll in Finland.
  • All the stats? Yes. (More on this point in a minute.)

Thus, I was inclined to be persuaded that, as the abstract says:

The results indicate that videos are beneficial to most children across reading skill levels, especially those with weaker reading skills. This suggests that incorporating videos into primary school science instruction supports diverse learning needs associated with weaker reading skills.

By the way, in this case, “primary school” includes 5th and 6th grade.

A Teacher Has Doubts

Despite this study’s strengths — and I’m being quite serious when I compliment them — I was struck by the actual statistical findings.

The research team focused on three results:

  • how much cognitive load did the students experience while watching videos or reading illustrated texts?
  • how much difference did video vs. illustrated text make for remembering the information?
  • how much difference did it make for using information in another context — that is, for transfer?

To answer these questions, they focused on a statistical measure called “R²ₘₐᵣ”. As is always true with stats, it’s tricky to explain exactly what it means. But here’s a rough-n-ready explanation.


Let’s say that when I study for a quiz using Method A I score 0 points, and when I study using Method B I get 100 points. The R²ₘₐᵣ tells me how much of that difference comes from the two different methods.

So, if R²ₘₐᵣ = .25, that means 25% of the difference between the two scores came from the difference in the two study methods. The other 75 points came from other stuff.

Typically, according to this measure:

  • any R²ₘₐᵣ bigger than 0.25 is “large,”
  • a value between .09 and .25 is “medium,” and
  • a value between .01 and .09 is “small.”

Now that we have a very introductory understanding of this measurement, how meaningful were the results in this study?

  • The “cognitive load” R²ₘₐᵣ was .046: right in the middle of “small.”
  • The R²ₘₐᵣ for remembering information was .016: barely above the bottom of the scale. And
  • R²ₘₐᵣ for transferring information was .003. That’s too small even to register as small.
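To make those benchmarks concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the study; the function and variable names are mine, while the three values and the cut-offs come straight from the figures above.

```python
# Illustrative sketch only: apply the rough "small / medium / large"
# benchmarks described above to the three R²-marginal values the study reports.
# Function and variable names are my own, not from the paper.

def effect_size_label(r2_marginal):
    """Classify a marginal R² value against the conventional benchmarks."""
    if r2_marginal >= 0.25:
        return "large"
    if r2_marginal >= 0.09:
        return "medium"
    if r2_marginal >= 0.01:
        return "small"
    return "below even the 'small' threshold"

# The three outcomes reported in the study.
reported_r2 = {
    "cognitive load": 0.046,
    "remembering the information": 0.016,
    "transferring the information": 0.003,
}

for outcome, r2 in reported_r2.items():
    # Rough reading, following the explanation above: the share of the
    # difference attributable to video vs. illustrated text.
    print(f"{outcome}: R² = {r2:.3f} (~{r2 * 100:.1f}%) -> {effect_size_label(r2)}")
```

Run as written, this prints “small,” “small,” and “below even the 'small' threshold,” which previews the doubts in the next section.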

In brief: did the researchers find STATISTICALLY significant results? It seems they did. Did they find MEANINGFUL differences between videos and illustrated texts? I’m not so sure.

Opportunity Costs

My objection at this point might reflect this difference between a researcher’s perspective and a teacher’s perspective.

The researchers can — entirely reasonably — say: “we ran a scrupulous experiment, and came up with statistically significant results. The data show that videos helped students learn science content better than illustrated texts. Teachers should at least think about using videos to teach science — especially for weak readers.”

As a teacher, my perspective has this additional variable: one of school’s core functions is to teach students to read. And: they get better at reading by — among other strategies — practicing reading.

In other words: according to this study, the benefits of video are so small as to be statistically almost invisible. The benefits of reading practice are — over time — likely to be quite important. I hesitate to give up on one of school’s essential functions (reading) to get such a small benefit (marginal increase in science knowledge) in return.

TL;DR

Someone might say to you — as a friend said to me — “this study shows that we should use videos to teach science content.”

If you hear that claim, be aware that this well-executed study found only minute differences between videos and illustrated texts. We should consider this finding alongside a clear understanding of our broader educational mission: teach ALL students to read.


Haavisto, M., Lepola, J., & Jaakkola, T. (2025). The “simple” view of learning from illustrated texts and videos. Learning and Instruction, 100, 102200.

The Shelf-Life of Research: When Education Theories Evolve…
Andrew Watson

To most of us, research conclusions have an air of finality about them. If research shows that … say … prequestions help students learn, well then: case closed. Teachers MUST start class with prequestions!

But wait…we have many reasons to push back against this kind of blunt finality.

  • We could have other strategies that help more: retrieval practice might provide more benefits than prequestions.
  • Perhaps prequestions help these students in this discipline: say — 6th graders learning science concepts. But they might not help 2nd graders practicing phonics.
  • In some cases, initial research findings don’t hold up over time. I used to cite research about “expressive writing,” but I’ve lost confidence in that suggestion because later studies don’t consistently support it.

But there’s another crucial limitation we often overlook: the theories themselves change over time.

Here’s the story.

Once Upon a Time

Scientific researchers rarely arrive at theories in a single great leap. Instead, those theories build up incrementally: study after study, year after year, decade after decade.

In one well-known example: Dr. Carol Dweck started her research in the early 1970s. For most of the subsequent decades, she and her fellow researchers focused on “entity theorists” and “incremental theorists.” Only in the early 2000s did she rebrand to “Mindset.” That is:

  • An “entity theorist” thinks that intelligence is a specific entity, a completed mental capacity. It can’t be changed. These days, we call that a “fixed mindset.”
  • An “incremental theorist” thinks that intelligence can change incrementally. It’s not complete. Today, we talk about a “growth mindset.”

Dweck took, quite literally, more than 30 years to develop the theory to the point that we recognize it today. And since the publication of her book — almost 20 years ago now!! — she has developed it still further.

In other words: mindset theory circa 2005 differs meaningfully from mindset theory circa 2025.

Crucially, this evolution does not mean that Dweck has done anything wrong. Instead, she and her colleagues have been doing what they should be doing. They have a hypothesis; they test it; they adjust the theory based on the results. That updated theory in turn prompts a new hypothesis, so Team Dweck repeats the cycle.

The evolution of these theories isn’t a bug in the system; it’s a feature of the system.

Buggy Features

Although I’ve described this evolution as a feature (not a bug), this feature does allow for some bugs to creep into advice based on theories. A recent example comes quickly to mind.

Cognitive load theory helps many thinkers organize their ideas about working memory in the classroom. (In my view, few topics could be more important than working memory in the classroom.)

For all the normal reasons, cognitive load theory has evolved over time. For instance, it used to distinguish among “intrinsic load,” “extraneous load,” and “germane load.”

However, in recent years, the scholars who have championed the theory have removed “germane load” from the theoretical framework. It now focuses on “intrinsic” and “extraneous” only.

The reasons for the change don’t really matter here. What does matter is: cognitive load theory in 2025 doesn’t say exactly what it did in 2015 — and this kind of change is normal.

What to Do?

This insight — “theories do change over time” — offers at least two practical suggestions to teachers and school leaders.

FIRST: Evaluate your own understanding.

That is: if you’re thinking about going all in on a particular theory, be sure you’re planning to enact its most up-to-date version.

In the case of growth mindset, you might start by reading Dweck’s book to understand this foundational text. At the same time, you will no doubt notice that the book was written almost 20 years ago. Next step: take time to see what’s happened to the theory since then.

In the case of mindset, this investigation should be fairly simple. LOTS of people have written books on growth mindset, and research in this field regularly gets attention.

In other cases, double-checking the currency of your information might require some creative sleuthing.

For instance, I asked Claude (the LLM) how Mindset Theory and Cognitive Load Theory have changed over the years; in both cases, Claude’s answer hit the relevant highlights. No doubt other strategies can help you navigate this kind of research history.

Features and Filters

A SECOND practical suggestion also arises from this insight: use it to evaluate the “research-based” advice you hear from someone else.

That is: if a PD speaker says “cognitive load theory tells you to do XYZ to manage your students’ germane load,” you’ve gained an important data point. Although you might find cognitive load theory itself persuasive, you probably shouldn’t take this speaker’s advice about it — because his/her understanding of the theory is more than five years out of date.

In other words: our understanding that theories evolve helps us filter out advice from self-styled experts whose understanding stops with a long-ago version of the theory.

To be fair, I could become overly obsessed with having the very most recent update. Few people can be fully current on all the nuances.

But if a PD speaker says that “a growth mindset is all about effort,” that’s a (mis)interpretation that Dweck has corrected many times. (The thoughtful quality of the effort, and focus on the process, matter more than sheer determination.)

TL;DR

Because education is complicated, research-based theories about education are complicated — and they change over time.

To ensure that our students benefit from the work we do, we need to make sure that our own mental models haven’t gone beyond their expiration dates.

Enjoyment or Skill? The Case of Reading [Repost]
Andrew Watson

Student motivation has always been a concern; never more so than since the pandemic. What guidance do we get from recent research into reading?


Do we want our students to ENJOY math, or to BE SKILLED AT math?

At first, this question sounds like a false choice. Obviously, we want BOTH.

As an English teacher, I want my students to have fun analyzing the books we read…and I want their analyses to have heft, merit, and substance.

I suspect that most teachers, no matter the subject  — Math, English, Chemistry, Religion, Pickleball — want our students to revel in core ideas and arrive at correct answers.

At some times, alas, we probably need to prioritize one or the other. Especially at the beginning of a unit, should I focus on …

… ensuring that my students like this stuff (even if they don’t immediately understand it), or on

… ensuring they understand the stuff (even if they don’t immediately like it)?

In teaching as in life: if I try to accomplish both goals simultaneously, I’m likely to accomplish neither.

Reading Research

I’m not surprised to discover in a recent study that students’ enjoyment of reading correlates with their skill at reading.

That is: students who get high scores on various reading tests report enjoying reading more than their low-test-scoring peers.

Of course, correlation (say it with me) isn’t causation.

Does the enjoyment lead to the skill? The skill lead to the enjoyment?

Both?

Neither?

To answer these questions, Elsje van Bergen’s research team looked at twins in Finland — more than 3500 of them.

In theory, if we ask all the right questions, gather the right data, and run the right calculations, we should glean insight into the correlation/causation question.

So: what did Team van Bergen find?

But First…

Before you read the answers to that question, you might pause to make a commitment. Try to decide NOW if you’re inclined to trust this methodology.

That is:

a) you think well-done twin studies are likely to be a good way to answer this question. For that reason, you will be inclined to accept this answer even if you initially disagree with it.

or

b) you think twin studies can’t answer questions about skill and enjoyment. Thus, you will not cite this study to support your beliefs even if it aligns with those beliefs.

If we’re going to use research to make decisions about education, we should be scrupulous about doing so even when research contradicts the conclusions we had initially held.

Answers, and Questions

Now, back to this post’s main narrative…

Unlike many studies, this one can be summarized in a few pithy sentences.


Based on the twin data they analyzed, van Bergen’s team concludes that:

  • reading skill increases reading enjoyment,
  • reading enjoyment has no effect on reading skill,
  • genetics influences both positively.

Unsurprisingly, the stats get all stats-y. But the above-the-fold headlines are that simple.

Because I don’t teach reading, I’ve always hesitated to be too opinionated on the topic. Now that this study is in the wild, I do think it adds a useful perspective while the reading wars rage on.

For instance: teachers whom I like and respect have told me that older methods might not have science behind them, but they’re excellent at “making students feel like readers.”

This claim has always puzzled me. How can a student feel like a reader if s/he can’t read?

Van Bergen’s study, I think, gives me permission to address that point directly: “this study suggests that skill at reading will be the more important place to start in reading instruction.”

Zooming the Camera Back

While this study and this post have focused on reading instruction, I do think there’s a broader message here as well.

We frequently hear about the importance of intrinsic motivation; that is, a motivation that springs from students’ natural interests, not from external encouragement (or pressure).

This study, to the contrary, finds that the work teachers do to improve students’ skill simultaneously enhances their motivation. That motivation might be — in effect — extrinsic; but, it’s working. (Working = students read better, and want to read more.)

Overall, I believe we need a substantial rethink of the (false) intrinsic/extrinsic dichotomy, and the (unhelpful) criticism of motivational strategies that many teachers currently find themselves using.

If you want to join me for just such a rethink, I’m giving a webinar for Learning and the Brain on April 5th. We’ll be talking about several research-informed approaches to intrinsic motivation, and brainstorming strategies to make those ideas fit in our classrooms.

I hope I’ll persuade you that we have better ways to talk about motivation than “intrinsic/extrinsic,” and those better ways give us useful teacherly guidance.

I hope you’ll join us!


van Bergen, E., Hart, S. A., Latvala, A., Vuoksimaa, E., Tolvanen, A., & Torppa, M. (2023). Literacy skills seem to fuel literacy enjoyment, rather than vice versa. Developmental Science, 26(3), e13325.

Book Review: 10 to 25, by David Yeager
Andrew Watson

As long as humans have lived into their teens, adults have complained about their behavior. Aristotle famously described this age cohort as “fickle, passionate, irascible, and apt to be carried away by their impulses.” If your experience aligns with mine, you’ve seen your share of grouchy selfishness and demotivation in your classroom — especially before 10 am.

Although complaints about adolescents have been around as long as adolescence, those gripes have grown sharper in recent months. PANDEMIC this and CELL PHONE that and AI CHEATING to boot – all these additional concerns have ramped up adult exhaustion with teenage too-muchness.

Given this bleak outlook – both historic and contemporary – what’s a middle-school or high-school teacher to do?

Happily, noted researcher Dr. David Yeager has wise thoughts – and LOTS of research – to give us hope. His recent book 10 to 25: The Science of Motivating Young People will guide and inspire teachers and school leaders.

START HERE

Before getting to specific suggestions, Yeager wants us to rethink our starting point. Whereas Aristotle and others start with a “deficit mindset” – seeing only bad behavior and gloomy prognosis – Yeager wants us to see the strengths and potentials in humans between the ages of 10 and 25.

So, for instance: you’ll often hear that “the human pre-frontal cortex isn’t fully wired until people reach their mid to late 20s!” The implication of this claim: without this “self-control center” fully developed, young ‘uns are doomed to erratic and immature behavior.

Yeager, however, has a different perspective. This additional time for important brain regions to develop gives growing youth the chance to adapt to the ever-changing world. As the computer people say: the drawn-out schedule of brain development “isn’t a bug, it’s a feature!”

In Yeager’s analysis, most adults respond to this time period with some blend of expectations and support. But most of us get that blend wrong.

Specifically:

  • Some of us default to HIGH expectations – but don’t offer much by way of support. It’s my way or the highway – and teens often find themselves out on that road. Yeager calls this blend the “enforcer” mindset.
  • Others offer HIGH support – and don’t worry too much about expectations. Not wanting to make a difficult time even harder, we soften standards when they seem too onerous. Yeager thinks of these folks as having a “protector” mindset.

Perhaps you can anticipate Yeager’s next move.

  • Adults with a “mentor” mindset balance HIGH expectations with HIGH support. This combination, in Yeager’s view, offers the best path to help adolescents navigate the 10-to-25 stage with optimal growth and development.

One of Yeager’s many strengths in this book: he doesn’t claim to be the only person to advocate for this “mentor mindset” balance. In fact, he carefully and respectfully charts the terminology that other scholars – Lewin, Baumrind, Scott – have used to describe these perspectives. I myself am likelier to take advice from scholars who conspicuously acknowledge their debts to others in the field; Yeager does just that.

WHAT TO DO

To help us translate this mentor’s mindset into specific action, Yeager outlines five broad approaches to help teens through these years: transparency, questioning, stress, purpose, and belonging. He devotes a chapter to each, offering both uplifting stories and scrupulous research to specify and support his case.

For instance, let’s look at the problem of stress – familiar to anyone dealing with emerging adults. We might, by way of shorthand, define stress as a situation where the DEMANDS of the moment exceed a student’s RESOURCES.

How might we respond?

  • Those with an enforcer mindset say: “Deal with it. The demands are the demands, so get the job done – with whatever resources you have.”
  • Those with a protector mindset say: “Well, that looks stressful. Let’s reduce the demands until they align with the resources you’ve got.”
  • A mentor’s mindset would encourage a different approach: “Let me help you increase your resources so that they match the demands that you face.” The mentor doesn’t reduce the requirements of the moment, but helps develop the skills or knowledge necessary to face it.

Yeager then explores a body of research (by Jamieson and others) showing how to boost resources.

Specifically, students who understand the evolutionary history of stress responses recognize that all those seemingly unpleasant symptoms – the sweaty palms, the stomach butterflies – signal helpful physiological responses.

Once students have this additional resource – the knowledge to reframe their physiological responses to stress – they can face the demands of the current situation.

Voila: a mentor’s mindset, combined with research, helps us coach and support a student through an age-appropriate challenge.

HESITATIONS?

I always think a book review should include a caveat or two, but 10 to 25 makes this belief something of a challenge. Yeager writes conversationally, keeps the jargon to a minimum, and modestly acknowledges the difficulties of applying research-based ideas to real-life situations. He even includes a chapter of guided practice to ensure the book’s suggestions take root.

My only consistent concern: as noted above, the book draws on both research and “uplifting” stories. More than most readers, perhaps, I find myself reacting to uplift with suspicion. An uplifting story is – from another perspective – simply an anecdote. I don’t think we should rely on anecdotes to make teaching decisions. And I especially resist stories about life far outside of school – say, in Microsoft training programs.

(I should say: I’m probably an outlier here. Many people find anecdotes a helpful way to translate research-y principles into real-life situations.)

This modest critique aside, Yeager’s book both explains and organizes lots of research. In this way, it helps teachers think differently about real challenges in our work, and gives us clear guidance about what to do.

If you wish you had a clearer set of principles to help motivate and support an emerging adult, Yeager’s book is for you.

The Best Way to Teach: When Clarity Leads to Muddle [Repost]
Andrew Watson

Our blogger read this claim: “research shows that active learning is better than direct instruction.” Back in November, he responded to that research, and that claim.


Most teachers want to be better teachers. You’re probably reading this blog for research-based guidance on doing so.


I recently read a study that offers emphatic — and paradoxical — guidance. Exploring this research — as well as its paradoxes — might be helpful as we think about being better teachers.

Here’s the story.

A research team, led by Louis Deslauriers, worked with students in an introductory physics class at Harvard. This class was taught by an experienced professor who mostly lectured; he also supplemented the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

Let’s call this approach “interactive lecture.”

In Deslauriers’s study, students also attended two additional classes. One was taught with Method A and the other with Method B.

In Method A, an experienced professor:

  • presented slides
  • gave explanations
  • solved sample problems
  • strove for fluency of presentation

What about Method B? Another experienced teacher:

  • used principles of deliberate practice
  • instructed students to solve sample problems together in small groups
  • circulated through the room to answer questions
  • ultimately provided a full and correct answer to the problems

The researchers strove, as much as possible, to make the class content identical; only the pedagogy differed.

What did the researchers learn about the relative benefits of Methods A and B?

Paradox #1: Effective and Unloved

First off, the students learned more from Method B.

That is: when they solved problems in small groups, wrestled with the content, and ultimately heard the right answer, students scored relatively higher on an end-of-class multiple choice test. When they experienced Method A (the prof explained all the info and solved all the problems), they scored relatively lower.

But — paradoxically — the students preferred Method A, and believed that they learned more from it. They even suggested that all their classes be taught according to Method A — the method that resulted in less learning.

The researchers offer several explanations for this paradox. The headlines sound like this:

  • When students hear straightforward explanations and see clear/successful demonstrations of solution strategies (Method A), the content seems easy and clear. Students think they understand.
  • But, when they have to do the cognitive heavy lifting (Method B), class feels more difficult. Result: students worry they didn’t understand.
  • Because the students are — relatively speaking — novices, they don’t know enough to know when they understand.

Team Deslauriers, sensibly enough, suggests that we can help students appreciate and accept the more challenging methods — like Method B — if we explain the reasoning behind them.

I, by the way, take this suggestion myself. For instance: I explain the benefits of retrieval practice to my students. They don’t always love RP exercises, because retrieval practice feels harder than simple review. But they understand the logic behind my approach.

Paradox #2: Clarity vs. Muddle

Up to this point, Deslauriers and Co. pursue a sensible path.

They know that MOST college profs use Method A (the bad one), so they want those profs to change. To encourage that change, they undertake a study showing a better option: Method B!

Given these research results, Deslauriers and Co. offer two clear and emphatic suggestions:

First: teachers should use Method B teaching strategies, not Method A strategies.

Second: to counteract students’ skepticism about Method B, we should explain the logic behind it.

What could be more helpful?

Alas, these clear suggestions can lead to another muddle. This muddle results from the freighted NAMES that this study gives to Methods A and B.

Method B — the good one — is called “active.”

Method A — the bad one — is called (inevitably) “passive.”

So, this study summarizes its findings by saying that “active” teaching is better than “passive” teaching.

These labels create real problems with the research conclusions.

Because these labels lack precision, I can apply them quite loosely to any teaching approach that I believe to be good or bad.

For instance: recall the experienced professor who regularly teaches this physics course. He mostly lectures; he also supplements the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

If I disapprove of that combination, I can call it “passive” — he mostly lectures!

If I approve, I can call it “active” — consider all those demonstrations, interactions, and conceptual questions!!

These labels, in other words, are both loaded and vague — a perilous combination.

The peril arises here: literally no one in the world of cognitive science champions Method A.

EVERYONE who draws on cognitive science research — from the most ardent “constructivist” to the most passionate advocate for direct instruction — believes that students should actively participate in learning by problem solving, discussion, creation, and so forth.

Advocates for those two groups have different names for this mental activity: “desirable difficulties,” “productive struggle.” They think quite differently about the best way to achieve all that active thinking. But they all agree that students must struggle with some degree of difficulty.

Slippery Logic

This naming muddle creates unfortunate logical slips.

The study certainly suggests that Method B benefits students more than Method A. But, it doesn’t suggest that Method B is better than other methods that might reasonably be called by the open-ended name “active.”

For instance: it doesn’t necessarily mean that “constructivism” is better than direct instruction. And yet — because of those highly flexible labels — the study can be misinterpreted to support that claim.

My concern isn’t hypothetical. Someone sent me this study precisely to support the argument that inquiry learning promotes more learning than direct instruction.

But: “Method B” isn’t inquiry learning. And direct instruction isn’t Method A.

The Big Picture

I said at the beginning of this post that teachers might draw on research to be better teachers.

I worry that readers will draw this inaccurate conclusion based on this study:

“Research proves that ‘active learning’ (like projects and inquiry) is better than ‘passive learning’ (like direct instruction).”

Instead, this study suggests that asking students to do additional, productive mental work results in more learning than reducing their mental work.

Champions of both projects/inquiry and direct instruction want students to do additional, productive mental work.

Those schools of thought have sharply different ideas of the best ways to accomplish that goal. But dismissing one of them as “passive” — and therefore obviously bad — obscures the important insights of that approach.


Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251-19257.

Executive Functions “Debunked”? [Reposted]
Andrew Watson

A recent study claims to have “debunked” the idea of executive function. Back in December, our blogger explored the research, and the claim…


As long as I’ve been in this field – heck, as long as I’ve been a teacher – the concept of executive function has floated around as a core way to discuss students’ academic development.

Although the concept has a technical definition – in fact, more than one — it tends to be presented as a list of cognitive moves: “prioritizing, switching, planning, evaluating, focusing, deliberately ignoring…”


I myself have tended to think of executive functions this way: all the cognitive skills that don’t include academic content, but matter in every discipline. So, if I’m trying to execute a lab in science class, I need to …

… focus on this thing, not that thing,

… decide where to begin,

… decide when to switch to the next step,

…realize that I’ve made a mistake,

…evaluate options to fix my mistake,

And so forth.

Crucially, that list applies to almost any academic task: writing an essay, or evaluating the reliability of a historical source, or composing a sentence in Spanish using a new verb tense…

So: these executive functions help students in school – no matter the class that they are in.

To say all this in another way: EFs resist easy definition but are mightily important in schools and classrooms. (Truthfully, they’re important in life, but that broader range lies outside of this blog’s focus.)

Today’s News

I recently saw an enthusiastic response to a newly-published study that explores, reconceptualizes — and debunks? — EFs. Because EFs “are mightily important,” such reconceptualization & debunkage merits our thoughtful attention.

Here’s the story.

A research team led by Andreas Demetriou wanted to see if they could translate that long list (“prioritizing, switching, evaluating,” etc.) into a core set of mental processes.

So: a carbon atom might look different from an iron atom, but both are different ways of putting protons, neutrons, and electrons together. Likewise, “prioritizing” and “switching” might seem like two different processes, but they could instead be different arrangements of the same mental elements.

Demetriou’s team focuses on two core mental processes – their “protons and electrons.” Roughly, those mental processes are:

  • Forming and holding a mental model of the goal, and
  • Mapping that mental model onto the situation or problem.

For complicated reasons, Team D combines these two processes with a label: the AACog mechanism. They then run a lengthy series of studies using a GREAT variety of different tests (Stroop this, Raven’s that) across a wide range of ages.

When they run all the calculations, sure enough: the AACog mechanism underlies all those other EFs we’ve been taught about over the years.

As they write: “AACog is the common core running through all executive functions.” (That’s an extraordinary claim, no?)

And, development of the AACog mechanisms explains all sorts of increasing mental capacities: symbolic exploration, drawing inferences, using deductive reasoning, and so forth. (The concentric circles representing this argument challenge all of my AACog mechanisms!)

In other words, this model explains an ENORMOUS amount of human cognitive processing by focusing on two elements.

What It All Means

I wrote above that this study received an “enthusiastic response” when it came out.

In my twitter feed at least, it was packaged with basically this message:

“All those people who were nattering on about EF were having you on. Look: we can boil it down to basically one thing. No need to make it so complicated!”

I can understand why Twitter responded this way: the title of the Demetriou et al. study is: “Executive function: Debunking an overprized construct.” No wonder readers think that the idea of EFs has been debunked!

At the same time, I’m not so sure. I have three reasons to hesitate:

First:

Quoth Dan Willingham: “One study is just one study, folks.” Until MANY more people test out this idea in MANY more ways, we shouldn’t suddenly stop thinking one thing (“EFs exist!”) and start thinking another (“EFs are the AACog mechanism in disguise!”).

We need more research — LOTS — before we get all debunky.

Second:

Let’s assume for a moment that the AACog mechanism hypothesis is true. What effect will that have on discussions in schools?

Honestly, I doubt very much.

The “AACog mechanism” is itself so abstract — as are the “modeling” and “mapping” functions that go into it — that I doubt they’ll usefully replace “executive functions” in daily conversations.

Imagine that a learning specialist says to me: “This student has a diagnosed problem with her AACog mechanism.”

I’ll probably respond: “I don’t understand. What does that mean?”

The learning specialist will almost certainly respond: “Well, she has difficulty with prioritizing, task switching, initiating, and so forth.”

We’re back to EF language in seconds.

Third:

I’m not sure I buy the argument that the “AACog mechanism” DEBUNKS “executive function.”

Imagine this logical flow:

  • Carbon and iron are made up of the same sub-elements: protons, neutrons, and electrons.
  • Therefore, carbon and iron don’t really exist.
  • Voila: we’ve debunked the idea of carbon and iron.

Well, that logic just doesn’t hold up. Carbon and iron DO exist, even as meaningfully different arrangements of sub-particles.

So too:

  • EFs all boil down to the AACog mechanism, which is itself just “mental modelling” and “mapping of models onto reality.”
  • Therefore, EFs don’t really exist.
  • Mission Debunk Accomplished!

I just don’t track that logic.

We understand human cognitive complexity better, but the complexity hasn’t gone away. (We understand carbon and iron better now that we know about protons and neutrons, but the periodic table is still complicated.)

This model helps us think differently about mental functions across academic disciplines. Those new thought patterns might indeed be helpful — especially to people who create conceptual diagrams of cognition.

But I don’t think it will radically change the way teachers think and talk about our students.

TL;DR

A group of thoughtful scholars have put together a new model of cognition explaining executive functions (and a whole lot more).

What does this mean for us?

  1. In ten or fifteen years, EF experts might be talking to us differently about understanding and promoting these cognitive moves.
  2. In the meantime, don’t let oversimplifications on the interwebs distract you. Yes: “executive function” is a mushy and complicated category — and yes, people do go too far with this label. But something like EFs exists, and we do need to understand their complexity.

Demetriou, A., Kazali, E., Spanoudis, G., Makris, N., & Kazi, S. (2024). Executive function: Debunking an overprized construct. Developmental Review, 74, 101168.

Do Fidget Spinners Help Children with ADHD? [Reposted]
Andrew Watson

We’ve been hearing A LOT about fidget spinners and squishy toys lately. Our blogger has been on this story for months…


Good news: cognitive science research can be SPECTACULARLY useful in guiding and informing teachers.

Less good news: that guidance isn’t always clear or prompt.

After all:

Research findings often contradict one another.

They can also contradict our deeply-held beliefs, and/or our school’s policies.

Not everyone who quotes research actually understands research, or quotes it accurately.

And so forth.

Another problem with research gets less focus: it takes so much time.

In this case: fidget spinners were a thing back in 2017!

Perhaps you — like me — frequently heard the idea that “we can give fidget spinners to students to help them get the wiggles out!”

More specifically: “fidget spinners will help those students diagnosed with ADHD let off jittery steam without wandering around or distracting others.”

At that time, those claims sounded plausible or implausible — depending on your perspective and experience. But we didn’t have any research to support or contradict them.

As one group of pediatricians wrote in 2017:

Fidget spinners and other self-regulatory occupational therapy toys have yet to be subjected to rigorous scientific research. Thus, their alleged benefits remain scientifically unfounded. Paediatricians should […] inform parents that peer-reviewed studies do not support the beneficial claims.

Well: SEVEN years later, now we do have research!

Worth the Wait

Researchers in Florida worked with a group of 60 children enrolled in a summer program for young children diagnosed with ADHD.


This program offered both academic and behavioral training over eight weeks, to better prepare these children for the upcoming school year.

Both in the first two weeks and the final two weeks, the research team gave several children fidget spinners during the 30-minute ELA class. They looked for data on these questions:

Did the fidget spinners change the amount of wandering around?

Did they change the students’ attention to the class work?

And, did they affect the other children who did not get fidget spinners?

Sure enough, the data they gleaned provide helpful classroom guidance.

Good News, Bad News

If you’re in the pro-fidget spinner camp, you’ll be glad to know that the fidget spinners did NOT lead to an increase in problems among the other students who didn’t get one.

They didn’t wander any more than usual; they didn’t pay less attention than usual. (“Area violations” and “attention violations” were the two categories tracked by researchers.)

That’s the good news.

If you’re pro-fidget spinner, the rest of the news won’t encourage you.

First: the spinners reduced wandering a bit at the beginning of the program. But they did NOT reduce wandering at the end.

Second: the spinners raised inattention levels both at the beginning and at the end of the program. And the increases in inattention were greater than the decreases in wandering.

In brief, fidget-spinner champions will not find much support here.

Questions and Caveats

No one study can answer all questions, so we should keep its limitations in mind.

What about older students? This research doesn’t explore that question.

Won’t students get better at using fidget spinners AND paying attention over time? We don’t know. (But: eight weeks is an unusually long research study.)

Don’t they benefit SOME students? Maybe. A 60-person study doesn’t really allow us to look for granular sub-populations.

A later study might show the opposite results! It certainly could.

In other words, passionate fidget-spinner advocates can ask worthwhile questions. And, we shouldn’t be too emphatic based on one study.

But we can say this:

According to this one research study, fidget spinners did not help young students diagnosed with ADHD pay attention; they did more harm than good.


Graziano, P. A., Garcia, A. M., & Landis, T. D. (2020). To fidget or not to fidget, that is the question: A systematic classroom evaluation of fidget spinners among young children with ADHD. Journal of Attention Disorders, 24(1), 163-171.


How does this logic apply to “squishy toys”? We’re glad you asked

Do Gestures Improve Learning? Baby Steps Toward a Conclusion [Reposted]
Andrew Watson

This post first appeared in October of 2025.  


 The idea that “gestures improve learning” has been increasingly popular in recent years. I first heard about this idea from Dr. Susan Goldin-Meadow at a Learning and the Brain conference back in 2015, and the claim has gotten more emphatic with every passing year.


This emphasis on gesture — within the broader field of “embodied cognition” — is approaching certainty in the field.

As is so often the case, I myself am less confident than others about many of these claims.

In this post, I want to

  • Explain my concerns about the “gestures promote learning” claims
  • Explore a recent study that pushes back against my concerns, and
  • Highlight the features of that study that support my concerns.

In other words: we’re taking baby steps here. This study that I’ll explain does offer modest support to this claim — and offers some useful specifics.

But it certainly does not make the “gestures promote learning” claim a slam dunk. And it flatly contradicts some of the most vehement statements you’ll hear when people champion embodied cognition and gestures.

Initial Concerns

I am certainly open to the idea that gestures might promote learning. No, honestly: it could be true!

At the same time, I want to

  • understand the SPECIFICS of that claim, and
  • have confidence in the research that supports them.

These requests feel reasonable to me.

So, specifically:

Who is making the gesture? The teacher? The learner? Both? A video?

Does some kind of learning get more benefit from gestures than others?

For example, I know how I could use a gesture to reinforce the word “hatchet.” (A hatchet gesture is easy to envisage.)

But: how would I make a gesture to help students learn “protagonist” and “antagonist”? Or, “pleistocene”? Or “tradition”?

What if students learn the same gesture in two different classes? Say: the “hatchet” gesture means “hatchet” in my Spanish class but “guillotine” in my history class.

Do gestures help students of all ages? Are there cultural differences?

In other words: what are the rules to govern the creation and use of gestures? And: what research supports each specific component?

Last year, I wrote about a study that starts answering some of these questions. But: we’ve got a long way to go.

Frankly, the questions listed above fit quite nicely under the heading of “boundary conditions”: we need to know the “boundaries” within which teaching suggestions work, and outside of which they don’t. I don’t quite understand why people look vexed when I ask such questions.

A Recent Study Strengthens the Case

Because I have outlier views on this topic, I’m always on the lookout for research that will change my mind.

Recently, I came across a study from 2021 that offers lots of helpful information.

This study does, in fact, give me some increased confidence in the ideas that gestures can help. And, it offers helpful specifics about ways that gestures MIGHT NOT help. That is: it clarifies those boundary conditions.

This research has LOTS of moving parts, so I’ll try to summarize it accurately and briskly.

Twenty-two German adults spent 4 days studying foreign language vocabulary.

Specifically, they learned 90 word pairs: a word in German, and its “translation” in a made-up language called “Vimmi.” (Researchers used made-up words so that the participants’ prior knowledge wouldn’t muddle their results.)

45 of those word pairs were accompanied by a gesture. (The participants saw someone making the gesture; and they also made the gesture themselves.)

The other 45 pairs were accompanied by a drawing of the word; in other words, not a gesture.

This group of non-gesture word pairs gives the researchers a useful comparison point. They can ask not just “did gestures help participants learn?,” but “did the gestures help compared to the pictures?”

Now things get REALLY interesting.

Researchers then used a fancy gizmo to DISRUPT NEURAL PROCESSING in a part of the brain that coordinates gestures.

Specifically, several days after participants learned the word-pairs, researchers asked participants to recall them out loud. While participants did so, researchers used the (perfectly safe) neuro-scrambler for half of the words; they then faked the neuro-scrambling for the other half of the words. The participants, in other words, didn’t know when the scrambler was on or off.

Let’s add two more cool variables — variables that help answer my pesky questions:

Researchers tested participants five days and five months later. So: they could see how long these effects lasted.

And, they also tested two kinds of words: concrete nouns (“window”) and abstract nouns (“benefaction”).

Based on all these moving pieces, you can see that this study might finally convince me that gestures help — or give us some ideas about boundary conditions.

Yup: (Some) Gestures Help (Some) Learning (Somewhat)

When researchers tested the participants five days later, they found that seeing and producing gestures helped participants learn concrete nouns.

Specifically, the neuro-scrambler (presumably) reduced the neural encoding of the gestures for half of the words. And participants took longer to remember those words than the un-scrambled ones.

The likeliest interpretation: part of the memory of the word pair had been encoded in the motor cortex. When we interfere with that part of the memory, we interfere with the memory overall.

So, even though I’m naturally cautious about such things, I’m comfortable saying:

“This research supports the idea that seeing and making gestures about concrete objects helps learn those words in the short term.”

Baby steps.

Yup: Important Limits Exist

At the same time, you’ve probably noticed the boundary conditions in that statement.

  • The gestures did help participants remember concrete nouns (“window,” “hatchet”); at the same time, participants did not remember abstract nouns (“benefaction”, “protagonist/antagonist”) as smoothly over time. (I should be clear: this statement summarizes a very complex set of findings; others might look at the findings and disagree with that statement.)
  • The gestures did help after 5 days, but not after five months. So: short-term memory got a boost. But long-term memory — not so much.

I should also say that this study design (like ALL study designs) includes some features that require our caution.

  • 22 is not a huge number of participants. I wouldn’t often make big decisions based on such a small group. (Of course, as more and more small studies agree, they might add up to compelling recommendations.)

Also, this study pushes back against one of the more common statements within the broader field of “embodied cognition,” and also in the narrower field of “gestures promote learning.”

“Learning doesn’t just happen in the brain; it happens in the body.”

Now, I spend LOTS of time on this blog warning readers against over-reliance on neuroscience. We spend far too much time saying “ooh, look, BRAIN.” (I think we should focus much more on psychology: the study of the mind.)

This study, however, requires that we take the brain itself seriously. Those brain scramblers, after all, scrambled signals in the motor cortex: a part of the brain. They did not do anything to the arm or the rest of the body.

We can, I think, reasonably say that the learning happened WITH the body. But it didn’t happen IN the body; more precisely, it happened IN the part of the body called “the brain.” The arm didn’t change and learn. The brain changed. The mind learned.

TL;DR

This small study adds to a growing body of research that suggests seeing and making the right kind of gesture can help people learn some things, at least for a few days.

It does not reach grander conclusions than those.

And, it strongly contradicts the idea that “learning happens in the body.” Bodily movements can help the brain change; but the brain itself changes, and the mind itself learns.


 

A Final Note

This study includes substantially more complicated procedures and calculations than I typically try to summarize on the blog. As a result, reasonable readers might believe I have over-simplified the study and its conclusions; you might even think I have MIS-summarized both.

I have, quite genuinely, tried to do justice to the study and its conclusions. And, I include this link so you can check for yourself.


Mathias, B., Waibel, A., Hartwigsen, G., Sureth, L., Macedonia, M., Mayer, K. M., & von Kriegstein, K. (2021). Motor cortex causally contributes to vocabulary translation following sensorimotor-enriched training. Journal of Neuroscience, 41(41), 8618-8631.

The Neuroscience Trap: When Skeptics Stop Being Skeptical
Andrew Watson

I collaborated on this post with Dr. Cindy Nebel. Her bio appears below.


Everyone in this field agrees: we should begin our work with SKEPTICISM. When someone comes to us with a shiny new “research-informed” teaching suggestion, we should be grateful…and be cautious as well. After all:

The “someone” who gave us the “research-based” suggestion might…

  1. Misunderstand the research; it’s easy to do!
  2. Have found an outlier study; researchers rarely agree on complex subjects — like teaching and learning.
  3. Work in a context that differs from ours, and so offer a suggestion that helps their students but not other students.
  4. Misrepresent the research. Yup, that happens.

And so forth.

For all these reasons, we have to kick the tires when we’re told to change what we do because of research.

Easy Cases, Hard Cases

At times, this advice feels easy to follow. Ask any speaker at a Learning and the Brain conference, and they will assure you that:

  • Learning Styles are not a thing;
  • Left Brain/Right Brain distinctions don’t really matter;
  • The Learning Pyramid (“students remember 5% of what they hear…”) is self-contradicting, and based on no research whatsoever;
  • The list goes on…

Friends describe these ideas as “zombie beliefs”: no matter how many times we “kill them off” with quality research, they rise from the dead.

As we attack these zombie beliefs with our evidence stakes, we always chant “remember: you have to be SKEPTICAL!”

These cases are easy.

Alas, we often face hard cases. In my experience, those hard cases often combine two key elements:

  • First: we already believe — and/or WANT to believe — the “research-based” claim; and
  • Second: the research is neuroscience.

If a new neuro-study confirms a belief I already hold, my ability to be skeptical deserts me completely. I accept the research as obviously true — and obviously beyond criticism. I find myself tweeting: “Only a fool could disagree with this claim, which is now settled…”

Of course, if I fail to be skeptical in these hard cases, I’m abandoning scientific principles just as surely as people who purport to teach left-brained thinkers.

One example: in my experience, people REALLY want to believe that handwritten notes foster learning more surely than notes taken on a laptop. (I have detailed the flaws in this claim many times.)

A study published in 2023 is regularly used to support this “handwriting > laptop” claim. I first read about it in an article entitled “Handwriting promotes memory and learning.”

Notice that this study fits the pattern above:

  • First: people already believe — and really WANT to believe — the claim.
  • Second: it’s neuroscience.

LOTS of thoughtful people held this study up to champion handwritten notes.

Alas, because this study is a hard case, the skepticism practices that we typically advocate fell to the side. As it turns out, the flaws in this study are easy to spot.

  • It’s based on a very small sample;
  • The laptop note-takers had to type in a really, really unnatural way;
  • The study didn’t measure how much the students remembered and learned.

No, I’m not making that last one up. People used a study to make claims about memory and learning even though the researchers DIDN’T MEASURE memory and learning.

In other words: in this hard case, even the most basic kinds of skepticism failed us — and by “us” I mean “people who spend lots of time encouraging folks to be skeptical.”

Today’s Hard Case

The most recent example of this pattern irrupted on eX/Twitter last week. An EEG study showed that students who used ChatGPT

a) remembered less of what they “wrote,” and

b) experienced an enduring reduction of important kinds of brain-wave activity.

Here’s a sentence from the abstract that captures that second point precisely:

“The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.”

Once again, this study fits into the hard-case pattern:

  • Confirms a prior belief (for LOTS of people), and
  • Neuroscience

The unsurprisingly surprising result: this study has been enthusiastically championed as the final word on the harms of using AI in education. And some of that enthusiastic championing comes from my colleagues on Team Skepticism.

I want to propose a few very basic reasons to hesitate before embracing this “AI harms brains” conclusion:

  • First: the PDF of this study clocks in at 206 pages. To evaluate a study of that length with a critical eye would take hours and hours and hours. Unless I have, in fact, spent all those hours critically evaluating a study, I should not rush to embrace its conclusions.
  • Second: I’m going to be honest here. Even if I spent hours and hours, I’m simply not qualified to evaluate this neuroscience study. Not many people are. Neuroscience is such an intricately technical field that very few folks have the savvy to double- and triple-check its claims.

Just as you should not fly in a jet merely because I’ve assured you it’s airworthy, you should not trust a neuro study merely because I’ve vetted it. I can’t give a meaningful seal of approval — relatively few people can.

Knowing my own limitations here, I reached out to an actual neuroscientist: Dr. Cindy Nebel*. Here are her thoughts:


Here are my takeaways from this study:

  1. Doing two different tasks requires different brain areas.

In this study, participants were explicitly told to write using AI or on their own. Unsurprisingly, you use a different part of your brain when you are generating your own ideas than when you are looking up content and possibly copying and pasting it. Because participants were explicitly encouraged to use AI to write their essays, it’s likely they did — in fact — just copy and paste much of it.

  2. When you think back on what you did using different brain areas, you use those same differentiated brain areas again.

When we remember an event from our lives, we actually reactivate the neural network associated with that event. So, let’s say I’m eating an apple while reading a blog post. My neural areas associated with taste, vision, and language will all be used. When I recall this event later, those same areas will be activated. In this study, the people who didn’t use their brains much when they were copy/pasting still didn’t use their brains much when they recalled their copy/pasting. This finding is entirely unsurprising and says nothing about getting “dumber”.

  3. It’s harder to quote someone else than it is to quote yourself.

The only learning and memory effect in this study showed that individuals who copied and pasted had a harder time quoting their essays immediately after writing them than those who generated the ideas themselves. Shocking, right?

My neuroscience-informed conclusion from this study is that not using your brain results in less neural activation. [*insert sarcastic jazz hands here*.]

To be clear: I did not spend the requisite hours and hours reading the 206-page article. I did scan all 206 pages, read the methods thoroughly, and take a close look at the memory results in particular. I skipped the bulk of the paper, which is actually a linguistic analysis of the kind of language used in prompts and in essays written with and without the support of AI. I am very much not an expert in that area, and — importantly — that analysis seemed to contain the paper’s most important findings.

Back to you, Andrew.


Customary Caveats

This post might be misunderstood to say: “this study is WRONG; teachers SHOULD use AI with their students.”

I’m not making either of those claims. Instead, I am saying:

  1. Like all studies, this study should be evaluated critically and skeptically before we embrace it. Because it’s so complicated, few people have the chops to confirm its findings. (And not many have time to review 206 pages.)
  2. As for the use of AI in schools, I think the topic resists blanket statements. Probably the best shorthand — as is so often the case — goes back to Dan Willingham’s famous sentence:

“Memory is the residue of thought.”

If we want students to learn something (“memory”), they have to THINK about it. And if they’re using ChatGPT, they’re thinking about — say — high-quality prompts. They’re probably NOT thinking about the content of the essay, or effective essay-writing strategies.

Because we want students to think, we should – in almost all cases – encourage them to write without AI.

(To be clear: I think we could easily create assignments that cause students to think with AI. For instance: they could ask Claude to write a bad essay about The Great Gatsby: one that’s ill-organized, ungrammatical, and interpretively askew. They could then correct that essay. VOILA: an AI assignment that results in thinking.)

Ironic Coda

I wrote this blog post based on my own thinking and understanding. I then shared my thoughts with Dr. Nebel, who offered her substantial commentary.

Next — as is my recent habit – I asked Claude to proofread this post, and to make any suggestions for clarity and logical flow. Based on its suggestions, I made a few changes.

In other words: this post has an inherent bias in it.

If I trust Claude — an AI assistant — I’m probably biased against research showing that AI assistants create enduring mental decrements. Although I doubt that this bias has misled me too far, I do think you should know that it exists.


* Dr. Nebel notes: “To the neuroscientists in the audience, Andrew is using that term generously. My degree in Brain, Behavior, and Cognition did, yes, involve neuroscience courses (including a human brain dissection), and did, yes, involve courses and research using fMRI. But I am not a neuroscientist in the strictest sense. I do, however, understand neuroscience better than the average bear.”

Dr. Cynthia Nebel is the Director of Learning Services and Associate Professor of Psychiatry and Behavioral Neuroscience at St. Louis University School of Medicine. She holds a Ph.D. in Brain, Behavior, and Cognition and has held faculty positions at Lindenwood, Washburn, and Vanderbilt Universities. Dr. Nebel has published two influential books on the science of learning and is a leading collaborator with The Learning Scientists, an organization focused on bridging the gap between learning research and educational practice. Dr. Nebel has presented on the science of learning nationally and internationally and is dedicated to bridging research and practice to improve educational and organizational outcomes.


Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: A high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872.

I Take It All Back: Research on “Expressive Writing” Has Evolved
Andrew Watson

I spend my days giving research-informed advice to teachers and school leaders. You could summarize my pitch this way:

According to my current best understanding, we’ve got some good research suggesting that X strategy is likely to help most students learn most things.

Sometimes I say:

We’ve got LOTS of research suggesting…

Or occasionally:

We’ve got a few encouraging studies that make me think…

In every case, these research-informed suggestions arise from the CURRENT state of the research.

Of course, researchers haven’t stopped working…they’ve only just begun. Hundreds (thousands?) of graduate students and professors are out there right now. They’re exploring:

  • working-memory limitations, and
  • the benefits of mindfulness, and
  • the relationship between handwriting fluency and reading comprehension

And SO MANY other topics.

For this reason, the advice that I gave yesterday, based on “the current state of the research,” could be contradicted by today’s research. My job isn’t just to find “what the research has said,” but to keep track of the ongoing conversation that current researchers have with prior conclusions.

In other words: “research-informed” teaching advice evolves over time. Occasionally, I have to retract advice I’ve given in the past.

Prior Conclusions: An Example

I’ve spent the last week talking with teachers from across the country about — among other topics — working-memory limitations, and ways to work around them.

A sign reading "Your GPS is wrong; turn around."

One sensible approach: because stress hampers working-memory function, we can support working memory by reducing stress. Happily, we’ve got a few research-supported strategies for doing so.

I first heard about one strategy at a Learning and the Brain conference many years ago. Researcher Sian Beilock — currently the President of Dartmouth College! — teamed up with Gerardo Ramirez to test a surprising hypothesis. The title of their study says it all: “Writing About Testing Worries Boosts Exam Performance in the Classroom.”

Although the idea sounds counter-intuitive — in fact, both researchers were skeptical of the hypothesis — they found that letting students VENT their stress in writing dissipated the stress. More specifically, students given time to write about their anxiety about an upcoming exam did better on that exam than students who wrote about something else.

When I heard Dr. Beilock’s talk, the suggestion struck me as SO PRACTICAL. It doesn’t take much time. It helps anxious students without harming other students. It costs $0. What’s not to love?

Happily, we don’t have to rely on one study. More recently, DiMenichi and colleagues tried a similar strategy — and got similar results.

For that reason, I’ve been giving this advice for several years now: “we’ve got some good studies suggesting that letting students VENT their stress actually helps them dissipate their stress. The result: they do better on various academic measures.”

Today’s News: It’s Time for a Retraction

Recently, as I was preparing for an upcoming presentation, I looked again at my slide about the Beilock and DiMenichi studies. I realized that I hadn’t kicked those tires in a while, and I thought it would be helpful to go back and see what more researchers have found in this field.

Who knows: perhaps someone had recently done a helpful study that offers a creative new approach to this counter-intuitive suggestion.

I have several different strategies for evaluating research-based claims: you can find a quick guide here.

Recently I’ve been using Claude to jumpstart my searches. I asked it to do a research deep dive on this question: here’s the first paragraph of its answer:

The evidence for helping students manage stress through “venting” shows a complex picture with both promising findings and significant contradictory results.

While foundational studies by Ramirez & Beilock (2011) and DiMenichi et al. (2018) demonstrated large effects for brief expressive writing interventions, subsequent research reveals substantial methodological concerns, replication failures, and evidence that such interventions can be harmful for certain students.

Meta-analyses consistently show only small effect sizes, and the majority of studies with healthy college students find no significant benefits.

Now, I didn’t stop here. I looked over this review — and then explored several of those studies on my own. (As always, I relied a lot on ConnectedPapers.com and Elicit.com.) Especially because I’m new to Claude, I didn’t simply accept its guidance without question.

But — sure enough: as I see the CURRENT research picture, we don’t have a consistent and persuasive pool of studies suggesting that students benefit from venting their stress.

Pointing Fingers

If we got a wrong answer, presumably someone is at fault. So: who is it? At whom should I point when I cry j’accuse?

  • Should we blame Ramirez and Beilock, for leading us astray with that initial study?
  • Should we blame DiMenichi for confirming their error?
  • Should we blame ME for spreading erroneous information?

I think the correct answer is: D) none of the above.

This turn of events is an entirely predictable possibility in the world of research-informed teaching advice.

  • Ramirez and Beilock didn’t do anything wrong. They had a hypothesis. They tested it. They reported their results. That’s what they’re supposed to do.
  • DiMenichi didn’t do anything wrong. She followed up that initial study. She reported her results. That’s what she’s supposed to do.
  • I (probably) didn’t do anything wrong. I saw interesting and well-designed studies. The results aligned with each other, and with my teaching experience. I told teachers about these studies. That’s what I’m supposed to do.
  • (I might have made this mistake: I might not have emphasized enough that the advice was based on a small number of studies — and therefore tentative.)

Those of us who base teaching advice on research should always acknowledge that some of our conclusions will be contradicted by future research. Occasionally, that kind of reversal just happens.

In fact: we can have confidence in research-based suggestions BECAUSE the research cycle will probably reveal false leads sooner or later. It’s not a perfect system. But as long as we stay realistic about its limitations, science can be self-correcting.

What This Means for You

First: when you get “research-based” teaching advice, ask how much research supports the claim. You don’t have to reject ideas with only a little research behind them — especially if the conclusions match your experience, or your school’s philosophy. But: be sure to check back in every now and then to see what subsequent research has found.

Second: although we can be sad that “expressive writing” probably isn’t a good strategy for helping students manage stress, we do have other strategies that can help. I’ll write about “cognitive reframing” in a future post…


Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211-213.