Can Stress BENEFIT Teens? A Surprising New Approach
Andrew Watson

“Stress” has a branding problem: EVERYONE hates it. Other than snake venom and nuclear waste, few experiences are labeled “toxic” more often than stress.

And yet, we have good reason to pause and reconsider this perspective. Let me propose a few hypotheticals:

  • What if stressors are a perfectly normal part of life and development?
  • What if it’s not stressors, but our response to potentially stressful events, that causes harm?
  • What if–properly considered–stressful events and circumstances could benefit us?

The evolutionary logic goes like this.

A zebra sees a lion. That’s a stressful moment, for sure. The zebra needs all its physiological resources to escape. It needs–say–lots of oxygen, dilated pupils, extra glucose in the brain, and so forth. So zebras have evolved to have all those bodily reactions.

Now, the experience of those physical changes might not be pleasant. If you’ve ever felt your heart pounding or your palms sweating, you know that you’d rather not feel that way very long.

But–here’s the key point–all these stress responses are signs that our bodies have rapidly adjusted to help us meet important challenges.

Let’s switch from predation to the everyday life of a teen. Although they rarely face lions, adolescents certainly face many age-appropriate challenges.

  • They have to negotiate new, individuated identities outside of their family structure. (We don’t want 30-year-olds living in parents’ basements.)
  • And they have to learn the challenging material that schools teach.

In brief, we want teens to face the age-appropriate challenges of individuation and school effectively. And, all those unpleasant physiological symptoms can help them do so. For instance, a teen will probably do better on an exam if their heart is pumping extra glucose to their brain.

So let’s ask two big questions:

First: is this logical chain true?

Second: if yes, how can we help teens think differently about stress?

Let’s explore.

A Powerful Partnership

Because we’re asking BIG questions about an ESSENTIAL topic, we’d like to know that the research-informed answers we get have some heft behind them. Well: good news. The study I’m about to summarize includes six experiments with more than 4000 participants. (Not a typo.) I occasionally see meta-analyses including that many people, but almost no actual studies.

Equally compelling, this study is published in the journal Nature. When it comes to scientific research, it’s as rigorous and respected as it gets.

In this study, a research team (including Dr. David Yeager and Dr. Jeremy Jamieson) explored several hypotheses. The core question:

If we change the way students think about stress, does their new thought process have a beneficial effect?

To explore this question, the researchers invited students to complete a 30-minute online exercise. These students read passages, and did some thinking and writing about how those ideas applied to their own lives and experiences. Crucially, these passages combined two psychological approaches:

  1. They provided information–like that outlined above–explaining the physiological benefits of our stress responses. And
  2. They helped students think about the fact that their abilities can change. If we work hard at something–for instance, during a stressful experience–that hard mental work will make us better.

If point #2 sounds a lot like “growth mindset”…well…it is EXACTLY growth mindset. Yeager studied with Dr. Carol Dweck; they have worked together for years.

So, here’s the big question: when students complete this online exercise–combining a new perspective on stress with a growth mindset–what effect does it have?

What Changed? Everything.

If you run 6 experiments with 4000 people, you’re going to get A LOT of data. I’ll spare you a laundry list of the findings; if you want the full rundown, check out the study here. But the headlines all tell the same story: the combo strategy worked.

That is, students who completed the online exercise:

  • Said that they found stressful events less threatening
  • Showed healthier measures of physiological data (e.g.: cardiovascular measurements, cortisol levels)
  • Reported lower stress levels during the day
  • Had higher levels of academic success (more precisely: lower levels of academic failure)
  • Coped better with COVID stress

I think you’ll agree, that’s an impressive–even comprehensive–list of results! By the way, these researchers found that the combination mattered. They didn’t get the results that they wanted by doing one or the other; students benefitted from both new perspectives.

What should teachers do with this information?

First, I’ve found a link where you can request access to the intervention itself. I’m not 100% sure it’s still publicly available–I’ve sent an email to ask. But if you’d like to learn more, that website is a great place to start.

Second, I think these findings encourage high-school teachers to build both these lessons into our own teaching practices.

  • The more that students believe that the right kind of hard work can change their abilities;
  • The more that they understand that the unpleasant physiology of stress response in fact helps them succeed in difficult circumstances;
  • The likelier they are to get that lengthy list of benefits above.

I’ve written before that I think it’s easy to get mindset wrong in the classroom. Most of us have been encouraged to teach students about mindset, and then to put up posters. I suspect that approach has no effect whatsoever. Instead, we need to change our policies and procedures to align with a growth mindset. The best way to teach a growth mindset isn’t posters; instead, it’s embodying the principle in the way we teach and work with students.

Third, we should be aware of two caveats.

  • The authors emphasize that they’ve researched this approach with high-school and college students only. We don’t know whether or not it will benefit other groups of learners.
  • While reappraisal can help for everyday stressors–an argument with a friend, a challenging assignment–more serious problems such as trauma should not be framed in this way. (For a further discussion of such complexities, read more here.)

In Sum

A groundbreaking study in Nature shows that:

  • A single 30-minute online intervention—teaching BOTH that abilities can grow AND that stress responses help us perform—helped teens in all the ways that researchers measured.
  • The combination of both mindsets matters: neither works as well alone.

High-school teachers can–and probably should–incorporate these ideas in our schools and classrooms.


Yeager, D. S., Bryan, C. J., Gross, J. J., Murray, J. S., Krettek Cobb, D., HF Santos, P., … & Jamieson, J. P. (2022). A synergistic mindsets intervention protects adolescents from stress. Nature, 607(7919), 512-520.

The Next Frontier: The Right Questions to Ask?
Andrew Watson

I recently sat on a panel exploring the question “what’s next in our field?” Where should we be going as we try to apply cognitive science (broadly) to the field of education (broadly)? Perhaps it’s helpful to share some ideas we discussed…

But First…

Before we think about next steps, I wonder if we have a clear enough idea of the current state of “our field.” I myself am not always clear which topics are within our purview and which are not.

For example: I recently came across an article exploring the ASTONISHING cognitive benefits of improving air quality in schools. In this research pool, cleaner air led to higher test scores (as well, of course, as important health benefits).

  • On the one hand, this question connects research with academic and cognitive performance: it seems like an obvious part of “the field.”
  • On the other, the research here focuses on lung function, and requires lots of technical knowledge about HEPA filters and “parts per million.” None of my training in education, psychology, and neuroscience research prepares me to evaluate the relationship between mold spores and math performance.
  • On the other other hand: neuroscience is itself a highly specialized kind of biology. Why shouldn’t our field include other kinds of highly specialized biology — like, say, research into lung function and air quality?

So, my first concern is: I don’t know what’s next in our field because I’m not sure what our field currently IS.

A related problem follows on this first one: how do all these topics connect to each other? The topic of mindfulness clearly fits within the field. So does reading instruction for dyslexic students. But: how should I think about connecting those two topics? Where does generative drawing fit? Or cultural differences in the relationship between students and teachers? The best way to ask questions? Or, technology and AI? Given the enormous number of topics, how can we think about the almost infinite number of intersections and interactions?

My wise colleague Glenn Whitman recently gathered together several representations of different parts of “the field.” We could start with a comprehensive LIST put together by Evidence Based Education.

We could consider Oliver Caviglioli’s revision of Dan Willingham’s “simple model of cognition”:

Here’s my own attempt to add affective processes (emotion and motivation) to a more cognitive model (attention and memory).

Or we could reframe these concepts with Stephen Chew’s model, that looks at difficulties at each stage in this process:

Or Efrat Furst’s representation of memory and schema formation:

But none of these models envisions human development. Or the role of the physical body in learning. Or differences among disciplines. Or technology…

Broadly speaking, before we start considering where we go, I’d like to have a better idea of where we are. And: how “we” all relate to each other.

But Wait: Another Thought…

The question “what’s next” implies that we’re nearly done with the tasks on our to-do list. It has a vibe: “We’ve got this project almost wrapped up–what should we move on to now?”

I myself think we are nowhere near being done with the work we’ve already got. As far as I can tell:

  • Relatively few teachers know much about working memory, or think about it as they plan;
  • Many (MANY) teachers still believe in various neuromyths/psychomyths;
  • We don’t often talk about the goal of education; so we have no common basis from which to think about the practice of teaching;
  • The precise way to use retrieval practice with 6th graders learning science in a Montessori program in Madrid is probably not the right way to use retrieval practice to help 12th graders with ADHD learn calculus at a military academy in Reykjavik.

If teachers at my school chose even one of those topics, we would need several months of work to make lasting progress and change — because we have LOTS of other stuff to do! I’m so busy helping my students understand the poetic debate between Langston Hughes and Countee Cullen (and grading all their analytical paragraphs) that I have very little bandwidth left to invest in new approaches.

I hope that we shift from thinking about this work as something that we do, cross off the list, and move on to the next thing. Instead, we need to steep in any one of these topics for…well…months at a minimum.

Here’s an example. I’ve done some work over the years with a school out in Western Massachusetts. Each year, they pick a PD topic and add it to a list of three. They then keep that topic under explicit discussion and practice for THREE YEARS. When they’ve had that much time to think it over, they cycle that one out and add a new one.

I don’t know if that precise schedule will work for everyone, but I do think that time horizon will be much likelier to have an effect than a more typical approach: “we talked about AI last month, so THIS month we’ll be thinking about trauma…”

Okay, NOW we can talk about next frontiers.

Sorry, Hold Up for a Second…

Before we move on to the next topic, I think we should get better at persuading people to join our team.

Because you attend Learning and the Brain conferences, and you’re reading this blog, you probably already believe that a cognitive science + research approach to education makes lots of sense. But let’s be honest: many, MANY people do NOT share our belief.


And: the teaching practices supported by cognitive science provoke genuine alarm–even revulsion–in other educational spheres. Direct instruction and retrieval practice and cold calling may make sense within a psychology-focused approach to instruction, but they often prompt real dismay among colleagues who do not share our starting point. (Heck: not every speaker at a Learning and the Brain conference would champion even these approaches.)

I honestly don’t know where or how to change people’s minds; that’s not my specialty at all.

I do worry, however, that the best known public venues for changing minds–say, social media platforms–mostly promote angry shouting and name calling. I myself don’t think that we can insult people enough to cause them to join our team. We need to welcome and befriend them; doing so probably requires lots of listening and curiosity. The more often I call someone a grifter, the less likely I am to persuade them to think the way I think.

If we can’t get people to join us where we are, I don’t know that thinking about the “next frontier” will provide us with much additional benefit.

Back to Where We Started

Although I sat on a panel about “next frontiers” in this field, I don’t really know much about those next frontiers. Instead, I think we should focus on:

  1. Defining and mapping the field as it currently stands,
  2. Shifting our timescale: meaningful change takes months and years, and
  3. Persuading others to join our work with welcoming curiosity.

I don’t doubt that others will have excellent new ideas. Me: I’m still pondering and processing the ideas we’ve already had…

Executive Functions: Setting the Record Straight
Guest Post

If you’ve been in education long enough, you’ve probably heard the term executive functions. Maybe you’ve seen it in IEPs, professional development sessions, or even on social media, where it’s often tossed around as shorthand for “kids’ lack of organization or self-control.”

The problem? Executive functions are far more complex than simply being organized or staying in control. They are also one of the most misunderstood and misused concepts in education.

Let’s establish some clarity on what executive functions really are.

What Executive Functions Really Are

Executive functions are not a buzzword, a diagnosis, or just another catch-all for learning and behavior problems. They are a set of cognitive processes that act as the mind’s control center, making goal-oriented behavior possible, such as finishing a task, following directions, managing materials, listening and paying attention, or adapting when plans change.

Executive functions can be divided into two sets: core and higher order.

The three core executive functions are:

  • Working Memory – the mental “scratchpad” where we hold and manipulate information in the moment.
    • Examples: remembering the steps of a math problem or keeping track of multi-step directions.
  • Inhibition – the ability to pause, resist impulses, and filter out distractions.
    • Examples: not blurting out an answer or staying focused despite a tempting distraction.
  • Cognitive Flexibility – the capacity to shift perspectives, adapt to change, or try new approaches.
    • Examples: adjusting when classroom routines suddenly change or trying a new strategy after a mistake.

These core processes make it possible for students to develop higher-order executive functions, which include:

  • Reasoning – making sense of information, identifying patterns, and drawing logical conclusions.
  • Planning – setting goals, mapping out the steps needed to achieve them, and monitoring progress along the way.
  • Problem-Solving – figuring out what to do when things don’t go as expected or when obstacles arise.

Together, these core and higher-order functions shape how students think, behave, and succeed in school.  If knowledge is the orchestra, executive functions are the conductor.

Executive functions develop gradually over time. The core functions emerge and strengthen from roughly ages 3 to 12, while the higher-order functions mature from about 13 to 18. These abilities continue refining and level off in early adulthood (around ages 19 to 25) as the brain’s prefrontal cortex fully matures. There is also evidence that executive function performance may decline in later adulthood.

Why Teachers Should Care

Executive functions are at the heart of what it means to “do school.” Paying attention, following directions, transitioning between activities, managing time, keeping track of materials, and interacting with peers and adults appropriately all draw on the successful coordination of executive functions.

These skills are sometimes called the hidden curriculum, the unspoken rules and routines of school life. Students who have compromised or underdeveloped executive functions may appear “lazy,” “defiant,” or as if they’re struggling to understand the material. In reality, their developing executive functions may be to blame.

When teachers understand this alternative explanation, they can not only recognize these patterns as potential signals but also use that knowledge to protect students’ developing executive functions by being intentional about how they create learning experiences.

A Quick Classroom Vignette

Picture this: you’ve just given directions for a simple math activity. Most students get started, but one child is staring at the board, clearly lost. Another is wandering the room looking for a pencil. A third is loudly asking, “Wait, what are we doing again?” Meanwhile, a ‘fast finisher’ has blown through the assignment but made every mistake possible.

It might be tempting to chalk their behavior up to inattention or lack of motivation. But what you’re possibly seeing are students at different points in their executive function development. One child’s working memory couldn’t hold the directions long enough. Another’s inhibition couldn’t resist the pull of distraction. A third lacked the cognitive flexibility to restart when things didn’t go smoothly.

The takeaway: these kids don’t necessarily lack interest or effort. They are students whose minds are still developing the control processes needed for goal-oriented behavior, and who benefit from structures, routines, and supports that make thinking easier to manage.

The Keys to Supporting Executive Functions

When it comes to supporting executive functions in the classroom, two guiding principles matter most:

1. Decrease Ambiguity.
Students can’t regulate themselves or direct their attention if they’re unsure what’s expected. Clear routines, explicit instructions, and modeled examples take the guesswork out of school. These practices free up valuable working memory that might otherwise be spent decoding unclear directions or figuring out the rules.

2. Increase Automaticity.
The more students can do certain routines and tasks automatically–such as unpacking, collaborating with peers, or setting up materials–the less mental energy those tasks require. Automaticity preserves cognitive bandwidth so students can devote their best thinking to the actual learning.

These two keys don’t require special programs or extra curriculum. They are about designing classrooms that lighten the mental load so executive functions can do their job: supporting learning.

What Teachers Can Do

The good news is that executive functions can be strengthened in every classroom regardless of grade, content area, tier, or related service. Research and practice point to three powerful teacher levers.

Most teachers are already implementing these strategies in some form, but having accurate information about executive functions provides the professional knowledge to be intentional about how we approach the following three areas:

Establish Targeted Routines – Consistent, predictable routines reduce ambiguity and free up mental space for learning, lessening the demand on cognitive flexibility.

Implement Strategic Instruction – Chunk directions, use visuals, and pace instruction volume thoughtfully to protect students’ working memory.

Design Supportive Environments – Minimize unnecessary visual and auditory distractions and provide accessible resources, tools, and supports that allow students to offload cognitive demands.

These approaches are not about lowering expectations. They are about aligning them with how the mind actually works.

The Bottom Line

Executive functions are not just a special education concern or a miscellaneous label for problematic or underdeveloped learning behaviors. They are the mental capacities every learner draws on every day. When teachers understand and intentionally support them, classrooms become more equitable, accessible, and effective for all students.

Want to know more?  Check out the following:

A. Diamond (2013)

Serpell & Esposito (2016)

Siregar & Colleagues (2021) 


Intrigued? We’ve got an ENTIRE CONFERENCE focused on “Teaching Executive Skills” in New York City, April 16-18, 2026. We hope to see you there (or online).


Dr. Sarah Oberle authored this post to share insights from her work on executive functions in education. Along with co-author Mitch Weathers, she has written a forthcoming book, Executive Functions for Every K–3 Classroom, which will be published in April 2026. Stay tuned for this practical guide, packed with evidence-based strategies to help K–3 educators support students’ developing executive functions in the classroom.

ChatGPT and the Future of AI—The Deep Language Revolution by Terrence Sejnowski
Erik Jahner, PhD

Every semester, a few essays arrive so polished they almost sparkle, and yet they provoke a visceral discomfort, like a poorly animated movie that comes too close to reality but not quite there. Graceful syntax, perfect grammar, not a single false note—and somehow, no soul. That uncanny smoothness feels less like a student voice and more like the hum of a machine operating in the “uncanny valley.” Terrence Sejnowski positions ChatGPT and the Future of AI—The Deep Language Revolution right there, in that uneasy space between fluency and understanding.

Sejnowski isn’t an outsider gazing at the AI spectacle; he’s one of its architects. A pioneer of computational neuroscience and deep learning, he helped lay the foundations that made ChatGPT possible. This book extends his earlier Deep Learning Revolution into a new frontier—the “deep language” era, where machines have not only learned to recognize patterns but to converse with us in our own language, feeling like a new friend.

What makes this volume distinctive is its method. Sejnowski doesn’t simply describe large language models (LLMs); he talks with them. Throughout the text, unedited AI dialogues appear as miniature laboratories—moments where he prods a model, exposes a glitch, or lets it surprise him. These exchanges give the book an experimental rhythm, turning abstract theory into lived conversation. It’s storytelling through inquiry: part lecture, part dialogue, part field report from the edge of cognition.

At its heart, the book asks how we might live with LLMs rather than fear or worship them. The “deep” in Sejnowski’s title refers both to the neural architectures driving these systems and to their deepening impact on human culture. The early chapters read like a chronicle of adaptation—teachers revising curricula, programmers gaining new copilots, writers negotiating inspiration and imitation. He captures the dawning normalcy of a world where thinking now includes a digital partner. He also addresses the ethics of actor, artist, and writer replacement.

Sejnowski’s tone is what makes the technical accessible. When he explains transformers—the attention-based models at AI’s core—he does so with the warmth of a teacher, using metaphor rather than jargon. He reminds us that prompting a model resembles prompting a student: clarity, tone, and curiosity determine the quality of response.

His insights land close to home. In my own classroom, I have regularly used ChatGPT as a thinking coach—a partner for brainstorming, counter-arguing, and reflection. But my students must practice prompt literacy: learning to frame, question, and verify. The reality is they compete in a world where AI is the new baseline, and they need to determine what more they can add. The goal isn’t to outsource thinking but to observe it more closely.

As the book progresses, Sejnowski tightens the lens. He unpacks the mathematics of attention, scaling laws, and embedding with clarity, then widens again to ask what this means for work, education, and the mind itself. His tone is steady—curious, not alarmed. If the Industrial Revolution amplified muscle, he writes, this one amplifies mind. The challenge is adaptation, not extinction. There’s an echo of Vygotsky here: language, whether human or machine-generated, becomes a tool of thought, extending cognition beyond the skull.

He’s careful, too, about risk: hallucination, bias, misplaced trust. His solution is epistemic rather than technical—verification as a daily habit, curiosity as our best defense. The danger isn’t deception from machines but complacency in ourselves. AI can extend the mind, but only if we resist the temptation to let it stand in for one.

Stylistically, Sejnowski writes like a scientist who enjoys teaching. The book moves from conversational to analytic to philosophical, mirroring the dialogue he champions and instigating discussion. The AI transcripts punctuate the text with moments of surprise, proof that discovery often begins with a question.

For educators, this book is essential reading to stay updated with the developing and culturally embedded mind. For those drawn to cognitive science or the inner workings of language models, it offers a rare synthesis of neuroscience, computation, and pedagogy. Even readers who’ve followed every AI headline will find something new here: a deeper understanding of how these systems “think,” and what that reveals about how we do.

Sejnowski ends with quiet but unmistakable optimism. The revolution he maps is one of shared cognition—language itself becoming a bridge between human and machine minds. For teachers, researchers, and anyone studying how thought extends beyond the individual, this book is both a compass and a challenge: to stay curious, critical, and conversational at the edge of intelligence.

Interleaving Meets Prequestioning: A Powerful Combination for Learning?
Andrew Watson

A few weeks ago, I wrote about an intriguing research question. We know that retrieval practice helps students learn. And we know that generative drawing helps students learn. So: what happens if we combine those two research-informed strategies?

As I wrote back then, I think we have three answers to that question:

  • Answer #1: combining strategies didn’t help. Students remembered the same amount whether or not they retrieved as they drew.
  • Answer #2: Wait a minute: I’m actually not so sure about answer #1. The “retrieval” in this study didn’t really match the typical definition of “retrieval practice.” The students recalled something they had just read, not something they had learned a few days before. In other words: they were “retrieving” from working memory, not from long-term memory. That’s not really retrieval practice.
  • Answer #3: although we haven’t learned the answer to the question–at least in my opinion–we have learned that research into combining strategies gets tricky.

I’ve just come across another study asking a similar “is two better than one?” question. Here goes:

  • We know that jumbling practice on related topics together helps students learn. That is: rather than practice “adjectives” and “adverbs” separately, students benefit from doing practice questions on both topics together. (We call that approach “interleaving.”)
  • And we know that asking students to answer questions before they have studied the answer helps students learn. (We call that technique “prequestioning” or “pretesting.”)
  • Dr. Steven Pan and his colleagues wanted to know: does interleaving + prequestioning result in more learning than either strategy on its own? Or, the same amount of learning? Less learning?

Let’s explore…

“What Is Bipolar Disorder?”

Using an online research platform, Pan and his colleagues asked adults to learn about six different mental conditions: ADHD, bipolar disorder, etc. To learn this information, participants read short “case study” passages about three people with each condition.

  • Half of the participants read three consecutive passages about any one condition. That is: they read 3 passages about a person with schizophrenia; then 3 about someone with autism; and so forth. (This structure is the opposite of “interleaving”; we call it “blocking.”)
  • The other half of the participants read those same passages–but all jumbled together. So, they might read a passage about someone with ADHD, then another about someone with schizophrenia, then another about someone with bipolar disorder. (Yup: this is “interleaving.”)

So: all the participants read the same 18 passages in groups of 3 passages. But half of the participants read them in “blocked” groups; the other half read them jumbled up in an “interleaving” structure.

So far, this research explores the relative benefits of blocking or interleaving. But what about the “prequestions”?

  • Within each of those groups, half of the participants saw the list of technical names for the six conditions that they’re learning about: “cyclothymic affect disorder,” “resonance development disorder,” “schismic cognition disorder,” and so forth. (The researchers deliberately chose obscure technical names so that participants couldn’t rely on prior knowledge—they had to learn the associations from scratch.) Participants had to guess: which one of these six is the passage about?
    • They had to answer these questions without having yet learned about the conditions. That’s a “prequestion.”
  • The other half saw the technical name at the top of each passage they were reading. So: a case study about bipolar disorder would be labeled “cyclothymic affect disorder.”
    • No prequestion here.

With these groups and subgroups, Pan and Co. can now discover which combination helps students learn the most:

  • It could be BLOCKING, with or without PREQUESTIONS
  • It could be INTERLEAVING, with or without PREQUESTIONS

To be extra careful, the researchers checked to see how much the participants learned five minutes later. And then — with a completely different group of participants — they checked to see how much they learned 48 hours later.

We’ve Got Answers (I’ve Got Questions)

Team Pan wanted to know: does combining interleaving (a structure) with prequestioning (a technique) increase learning?

In a word: YES.

  • Both right away and 48 hours later, these adults learned more about identifying cognitive disorders if they read jumbled case studies (“interleaving”) and had to guess what each one was (“prequestioning”).
  • The group that read the case studies all clumped together (“blocking”) without prequestions learned the least.
  • And the groups that either studied with interleaved case studies or answered prequestions scored in the middle — roughly the same as each other.

On the one hand, this helpfully direct answer gives teachers useful guidance. Sure enough, using an interleaving structure with a prequestioning technique helps students learn.

I do, however, have two questions.

First: A quiz after two days gives us more useful information than a quiz right away. But “two days” isn’t yet learning. In fact, we’ve got all sorts of reasons to worry that “short term performance does not accurately predict long-term learning.” (Check out this research review.) I will feel MUCH more confident offering this guidance if we have studies confirming these results after noticeably longer periods of time.

While I’m at it, I should say: adult learners might benefit from different combinations than K-12 learners. For this reason, I’d also like to see replications with different age groups.

To be clear, Pan and his colleagues haven’t done anything wrong by limiting their study to two days, and working with adults. No one study answers all questions. They’ve given us a helpful set of data points. I myself think we need several more collections of data points before we offer strong advice. They themselves raise this point in their study.

Second: The procedure described above uses a version of “prequestioning”; but it’s not exactly “prequestioning.” With typical prequestions, students almost certainly don’t know the answer because they haven’t yet studied the material. In this case, the students (probably) don’t know the answer at the beginning of the study. However, over time, they have more and more reason to know the answer.

To my way of thinking, this study STARTS by combining interleaving with prequestioning, and ENDS UP combining interleaving with a version of retrieval practice. After all: by the final round or two, the students can start recalling the correct answers that they’ve been studying up to now.

Here again, I’m not describing a flaw in the study. At the same time, I do think it’s helpful to have our terminology as clear as possible.

To Combine Or Not

Based on this research, we have a tentative reason to believe that starting with questions that students can’t yet answer (“prequestioning”) and jumbling practice problems together (“interleaving”) helps students learn.

As teachers, we can start trying out this combination of research-supported strategies.

And: we can keep an eager eye out for more research into strategy combinations. A whole new fruitful field awaits…


Pan, S. C., Selvarajan, G., & Murphy, C. S. (2024). Interleaved pretesting enhances category learning and classification skills. Journal of Applied Research in Memory and Cognition, 13(3), 393.

From Facts to Application: The Surprising Power of Repeated Retrieval
Andrew Watson

The more time you’ve spent in Learning and the Brain world, the more you’ve heard about retrieval practice.

  • If students practice by looking back over work they have done, we call that “simple review.”
    • “Simple review” doesn’t benefit learning very much, if at all.
  • If students practice by trying to recall information and concepts, we call that “retrieval practice.”
    • Compared to “simple review,” “retrieval practice” enhances learning A LOT.

So: rather than reviewing their notes or rereading the textbook, students should ask themselves and each other questions. The additional mental struggle of trying to answer those questions will — in the long run — help them learn the material.

This basic idea has been around for decades, and has gotten lots of research love in the last 20 years, so there’s plenty of prior work to brush up on.

Of course, researchers constantly explore new questions within this well-established field. One perfectly reasonable question sounds like this:

“Okay, we know that retrieval practice helps students recall facts and concepts. But does it help them APPLY those facts and concepts? We want students to USE information, not just HAVE information.”

Good point!

A recent study by Corral and Carpenter explores this question across three experiments, and offers a few striking conclusions.

The Setup

Corral and Carpenter ran three experiments with several hundred college students. These students studied slides that defined terminology relevant for a research methodology course: e.g., “reverse causation.”

After looking at the slides:

  • Some students simply reviewed the information; they looked at the slides again.
  • Some took a short-answer quiz about it (that’s retrieval practice).
  • Some studied the quiz questions and answers, but didn’t have to answer them. (A different kind of review.)
  • And some did none of those things; that is, they saw the information only once.

As a final step, students took a test. Some test questions checked to see if they recalled the information. Students could get these answers right if they simply memorized the slides. Crucially, other test questions checked to see if they could apply the information in a novel situation. For these questions, simple memorization wouldn’t get the job done — students had to transfer their knowledge to a new situation.

For instance:

  • A RECALL question about reverse causation:
    • “When it is unclear whether variable X causes variable Y or whether variable Y causes variable X, what kind of causal inference problem do we have?”
  • An APPLICATION question about reverse causation:
    • Lisa just received her master’s for her work in astronomy. Lisa found that planets that are closer to their moons rotate faster than planets that are farther away from their moons. However, Lisa is not sure how to interpret her data because it is unclear whether a planet that rotates faster draws its moons in closer to the planet or whether moons that are closer to a planet cause the planet to rotate faster. Ignoring the possibility that other factors may be involved, what kind of problem does Lisa have?

Headline Number One: A Big Nuthin’

In the short term, the review strategy didn’t matter.

That is: when there was only an eight-minute gap between the review and the final test, then (on average) none of the review strategies helped students score better than their peers who didn’t review. Not retrieval practice; not quiz review; not slide study. Nothing.

And, by the way, they found this same result for both the recall questions and the application questions.

This finding reminds us of an important truth: we’re looking for a study strategy that helps students learn more over the long term. Of course, EIGHT MINUTES isn’t the long term. You can be in and out of Starbucks in under eight minutes. Heck, you can probably read this blog post in under eight minutes.

I’m emphasizing this point because it answers an essential question: “if retrieval practice is so great, why didn’t we teachers and students figure it out on our own? If it’s obviously better than review, why didn’t we stumble across this strategy centuries ago?”

We didn’t figure it out because its benefits take a while to appear. Retrieval practice doesn’t help right away. It helps long-term learning.

Headline Number Two: More Reps + More Time = Better Application

Corral and Carpenter repeated their experiment twice: this time with two important changes.

  • First: students used their review strategy — either “simple review” or “retrieval practice” or “quiz review” — THREE TIMES, not just once.
  • Second: they waited a week to take that final test. (You have already noticed: one week is a lot more time than eight minutes.)

Sure enough, once a meaningful amount of time had passed, students who took three retrieval practice quizzes scored higher on the recall questions. AND, they scored higher on the application questions.

In other words: repeated retrieval practice helps students recall facts and definitions. But, it doesn’t only do that. It also helps students transfer that knowledge and apply it in new ways.

This finding merits attention because it pushes back against one of the most common concerns about retrieval practice: “it’s just for memorizing disconnected facts! It doesn’t help students use their learning!”

Well, if Corral and Carpenter are correct, retrieval practice does help students use their learning. Of course, retrieval practice might not help right away — after 8 minutes. But repeated retrieval practice does help students over the long run.

Teaching Implications

First, in language proposed by Agarwal and Bain:

  • Rather than ask students to put information back into their brains (that’s “simple review”),
  • Ask them to try to take information out of their brains (that’s retrieval practice).

Second: the exact form of retrieval practice isn’t terribly important. According to an ENORMOUS meta-analysis I describe here,

  • Multiple choice and short-answer questions work just fine. (Contrary to popular advice, “brain dumps” are less effective.)
  • Combining retrieval practice with correct-answer feedback is good, but retrieval practice helps even without feedback.
  • Graded or ungraded: doesn’t matter. (Me: I prefer ungraded.)
  • Repeated retrieval practice helps students learn and transfer more than one-time retrieval practice. (No doubt there’s a limit here, but I suspect most students benefit from “more than they are currently doing.”)

Third: don’t be misled by the critics. Repeated retrieval practice helps students remember facts, ideas, and procedures; and it helps them use those facts, ideas, and procedures.

This research offers real encouragement: when we build repeated retrieval practice into our teaching, we’re equipping students with knowledge they can actually use.


Corral, D., & Carpenter, S. K. (2025). Effects of retrieval practice on retention and application of complex educational concepts. Learning and Instruction, 100, 102219.

From Lab to Life: Testing Study Strategies with 2,500+ Real Students
Andrew Watson

Research-based conclusions often appeal to us because of their tidy clarity. For example, research shows that…

  • Retrieval practice helps students learn, compared to simple review;
  • Spreading practice out (“spacing”) helps students learn better than doing all the practice at once (“massing”);
  • A quiet study environment fosters concentration and learning;
  • Study strategies like rereading and highlighting are almost certainly a waste of time.

These suggestions — and many, many others — have LOTS of research behind them. We’ve got numbers and graphs and equations and peer review and replication. All that science-y good stuff.

Of course, the actual gritty world of classroom learning rarely achieves this level of regulated clarity. Students want to play with their friends. A spider will appear, seemingly with fangs. The copying machine got glitchy, and we have only half as many handouts as we need.

For all these reasons, we need to do research BOTH in psychology labs AND in actual classrooms. This second kind of research lets us know what happens when scientific rigor meets day-to-day school muddle. (A classic of this genre: Powerful Teaching by Dr. Pooja Agarwal and Patrice Bain.)

Alas, even classroom-based research doesn’t tell the whole story. Once students start doing academic work at home, then their own beliefs and practices add an additional layer of research complexity. If classrooms are “messy” compared to psychology labs, imagine how much “MESSIER” an individual student’s academic life could be.

A recent study surveys Spanish high school students — from 7th to 10th grade — to understand exactly what they do to study at home, and exactly what they believe about learning. It then correlates those practices and beliefs with students’ academic success. If everything works out just right, we can discover:

  • Are students DOING all those research-based study practices, like retrieval practice and spacing?
  • Are they AVOIDING all the useless practices, like highlighting?
  • Are their grades confirming the researchers’ predictions? That is: do students who study in quiet rooms learn more than those who don’t?

In other words: do students (in their cluttered lives) get the same results from study practices that researchers (in their pristine psychology labs) do?

Good News/Bad News, Part I

I read A LOT of studies for my work, and this one stands out to me for its combination of research chops and common sense. The easiest point to notice: the authors report data on almost 2500 students! Given this very large sample size, we can have some confidence that flukey outliers are not skewing the results.

Although I won’t go into the methodological calculations, I also admire the care with which the authors developed their question list.

All this procedural good news does come with some caveats.

First: This study asks students what they do and think. Alas, self-report data don’t inspire lots of confidence. If I ask you what you ate for lunch on August 15, you might remember. But your self-report of that menu isn’t as reliable as most scientific variables. So too: when we ask students how they study, they might not remember — or might not admit that they study with music playing.

Second: the researchers use grades to measure academic success — although they admit that grades don’t measure learning very well. We have many reasons to doubt the relationship between grades and learning. For example:

  • Imagine that a student uses ineffective study strategies at home, but her teacher uses effective teaching practices in the classroom. The effective teaching might mask the effects of the ineffective studying.
  • Although cramming is bad for long-term learning, it’s good for short-term performance. If I cram for a test, I might get an A on that test…even though I won’t remember those ideas over the long term.

To make the broader point a different way:

  • Research in a psychology lab is useful, but not perfect;
  • Research in classrooms is useful, but not perfect;
  • Research into students’ study practices is useful, but not perfect.

This study can give us helpful data, but it doesn’t provide some kind of comprehensive “final answer” to questions about study practices. (No one research method can provide such a comprehensive answer.)

Good News/Bad News, Part II

If you ask 2500 students lots of questions, you’re going to get LOTS of data. For that reason, I will touch on a few highlights, but I’m not aiming for a comprehensive survey of this study’s findings. (If you want to know more, you can read the study itself here. It’s open access!)

The good news:

  1. Sure enough, research-supported practices (largely) support learning. So:
    • students who study in a quiet atmosphere get higher grades than those who listen to music while they study.
    • (by the way: most students report that they don’t listen to music while they study!)
    • students who use retrieval practice score higher than those who don’t.
    • those who “elaborate” on their learning — by creating concept maps, for example, or by generating their own examples — learn more than those who don’t.
  2. Sure enough, research-discouraged practices don’t help learning. So:
    • highlighting, copying, and rereading didn’t really do much to enhance learning.
  3. Sure enough: high “self-efficacy” belief correlates with more learning (although we don’t know if it enhances learning, or results from learning, or both).

The bad news:

  1. Compared to college students, high-school students use much less retrieval practice. (This result might not be surprising. Students who don’t use retrieval practice get lower grades…and are therefore less likely to go to college.)
  2. More puzzling: students who report spacing their practice don’t see benefits in higher grades. In other words, this research-supported practice doesn’t produce the touted benefits.

So, what’s going on with spacing?

As noted above, the authors have a potential explanation here. Cramming doesn’t help long-term learning, but it does improve short-term performance. Students who cram for a test might get the same high score as those who spaced out their practice. For this reason, the long-term learning benefits of spacing might not show up in research that uses grades as a proxy for learning.


I myself have a second hypothesis. I think that students CAN control some study behaviors: retrieval practice, studying in a quiet space, not wasting time highlighting, and so forth. But some study practices are really OUTSIDE student control.

For spacing to work as researchers intended, a particular topic should be re-opened and re-practiced at unpredictable intervals over long periods of time. Spacing, in other words, requires syllabus-level planning. I just don’t think it’s reasonable to ask students — especially young high-school students — to do that. In brief: I suspect that some students believe that they’re “spacing,” and say so on those surveys. Alas, their “spacing” might not match researchers’ definitions of spacing. If students are not really spacing, they don’t get the benefit of spacing.

TL;DR

We have good research — both in psychology labs and in classrooms — to guide students’ study habits. In all that research, retrieval practice and elaboration help; highlighting and underlining don’t. Quiet study spaces foster learning; so do “self-efficacy” beliefs.

This study from Spain — with 2500 high-school students! — shows that those strategies probably work in real life as well as in more tightly controlled research settings.

Although teachers should probably be responsible for spacing, students can — and SHOULD — rely on those other study practices. When teachers give this advice to students, our guidance has the backing of multiple layers of research: in labs, in classrooms, and in students’ real lives.


Ruiz-Martín, H., Blanco, F., & Ferrero, M. (2024). Which learning techniques supported by cognitive research do students use at secondary school? Prevalence and associations with students’ beliefs and achievement. Cognitive Research: Principles and Implications, 9(1), 44.

Research Reality Check: ‘More Is Better’ Doesn’t Always Apply to Learning Strategies
Andrew Watson

We’ve got lots of strong research suggesting that drawing can improve memory for specific words.

And

We’ve got LOTS of strong research showing that retrieval practice enhances learning.

So

Simple logic suggests that combining drawing with retrieval practice will super-duper strengthen memory.

To capture this specific suggestion, let’s propose a general equation:

  • [Research-supported technique A] + [research-supported technique B] = even MORE benefit

That equation just sounds like common sense. Of course, in the world of science, we always have to stop and check our common-sense assumptions. (It just makes sense to believe that the earth is flat. But because we stopped to check that assumption, we now know it to be untrue — knowledge that has proven HIGHLY useful.)

I’ve found a study that explores the specific question raised above: “do students learn more when they combine drawing with retrieval practice?”

Of course, it also invites us to consider that general equation: [technique A] + [technique B] = MORE benefit.

Let’s dig in.

Drawing Biology; Drawing Conclusions

The researchers here wanted to know if students who combined drawing with retrieval practice learned more than those who simply drew. To do so, they had some students [retrieve + draw], while others [drew without retrieval].

Specifically: 8th and 9th grade students in Germany read a 670-word passage for biology class. This five-paragraph passage discussed the human immune response to the flu.

The [retrieve + draw] students followed a multi-step procedure. (If you want to skim these details, that’s ok. I’ve skipped a few steps to keep things simple.)

  1. They read the five-paragraph passage.
  2. After each paragraph, they drew a picture to capture the key ideas that they just read. Note: they could NOT look back at the paragraph while they did so. In other words: they had to retrieve the important ideas as they drew.
  3. Once they had completed the full passage and generated five drawings, they then looked at the passage while they revised and corrected those earlier drawings.

The [draw without retrieval] group did all that, but they could always see the passages while they drew. So: lots of drawing, but no retrieving.

To measure the potential benefits of [retrieving + drawing], the researchers measured three outcomes:

  • How well did students remember the information — as measured by a written quiz?
  • How well did students remember the information — as measured by drawings?
  • Did the students understand other ideas — for instance, HIV infection — better based on learning these related concepts?

The research team asked these questions TWICE: both 2 weeks later and 8 weeks later. This point merits attention, because research only rarely looks for results so long after an experiment. Because we have data gathered two months after the students combined [retrieving + drawing], we can have a plausible basis to start talking about “learning.”

So: how much better did the [retrieve + draw] group do than the [draw only] group?

When a Bust is Not a Bust

We can summarize the findings in three words:

  • “Nope,”
  • “Nope,” and
  • “Nope.”

That is:

  • [Retrieving + drawing] didn’t improve scores on a written quiz after 2 weeks. Or 8 weeks.
  • It didn’t improve scores on drawings.
  • And, it didn’t improve scores on learning related new information.

These results might feel like a bust. The researchers did ALL THAT WORK, and they found nothing. But I think this “what a bust” response misses several key points.

First: the researchers DID find something. They found that combining [retrieving + drawing] DIDN’T produce extra learning. That information gives us useful guidance — because it tells us not to devote our scarce resources to this extra work.

Second: These results help us rethink our general equation: “[research-supported technique A] + [research-supported technique B] = even MORE benefit.”

While that equation initially sounded like common sense, we can see that it’s not necessarily true. (Just as: it’s common sense to think that the earth is flat, but…)

As we enter the world of research-based teaching advice, we should be careful how far we extrapolate research findings.

Intriguing Possibilities

This research suggests that [reading followed immediately by drawing] doesn’t improve learning.

BUT…

I myself don’t think that IMMEDIATE retrieval is, in fact, “retrieval practice.” As it’s typically understood, retrieval practice involves attempts to recall from long-term memory. But in this case, the information the students just read hasn’t gotten anywhere near long-term memory. It’s probably awash in some working-memory-ish zone.

So, drawing right after reading might not help learning. But

  • Maybe [drawing the next day as retrieval practice] might help. Or,
  • Maybe [reading] then [immediate generative drawing] then [retrieving the drawing the next day] might help. Or,
  • Perhaps [an alternating schedule of drawing and retrieving], or…

In other words: I don’t think this study rules out the benefits of [drawing + retrieval practice]. I think it shows that IMMEDIATE drawing doesn’t produce the benefits of retrieval practice — because it isn’t retrieval practice. (By the way: I have checked with my friend Dr. Cindy Nebel — one of the Learning Scientists — to confirm my suspicion that immediate recall isn’t technically retrieval practice.)

In Sum

This study provides a specific “no”: drawing as retrieval practice doesn’t improve learning (if the drawing happens immediately after reading).

It also provides a broader “no”: we shouldn’t assume that two research-informed teaching strategies interact positively.

Finally, it offers an intriguing possibility: we might get the benefits of both drawing and retrieval practice if that retrieval practice happens after some time. We don’t know yet…but it’s a question worth asking.

After all: research-based teaching strategies don’t always enhance each other, but they might!


Kim, S., Abel, R., Leutner, D., Schmiemann, P., & Roelle, J. (2025). Can the effects of generative drawing on lasting learning be optimized through integration of retrieval practice? Learning and Instruction, 98, 102143.

The Teach Like a Champion Guide to the Science of Reading by Lemov, Driggs, and Woolway
Erik Jahner, PhD

It’s not often that a podcast reshapes classrooms across the country, but Emily Hanford’s Sold a Story did exactly that. It exposed how decades of misguided reading programs left children unable to decode the very words on the page. The revelations were damning, and rightly so: “the makers of those faulty early reading programs,” as Lemov and his co-authors remind us, “were not just wrong about how to teach, they were wrong in the face of clear evidence to the contrary.” If that podcast sounded the alarm, this book steps in with the map for what to do next.

Doug Lemov, Colleen Driggs, and Erica Woolway pick up where the phonics debate leaves off. Once students crack the code of decoding, what then? For too long, the answer in many classrooms has been a bag of vague strategies — “find the main idea,” “make an inference,” “use context clues.” Strategies that sound sensible but float untethered from how comprehension actually works. The Teach Like a Champion Guide to the Science of Reading argues that reading beyond phonics isn’t mystery or magic, and certainly not guesswork. It’s science, and it can be taught.

What’s striking here is how seamlessly the book marries cognitive science with classroom craft. Each chapter revolves around a single sturdy idea — attention, fluency, knowledge, vocabulary, writing, books, close reading — and together they form a framework that feels less like theory and more like a map teachers can actually use. It’s rigorous, yes, but also deeply human: a vision of reading that honors both the science of learning and the lived experience of students in front of us. Read-alouds aren’t treated as pleasant extras but as fuel for fluency. Knowledge organizers aren’t dusty handouts but living guides to be revisited and tested. Vocabulary isn’t a guessing game but a body of knowledge that must be taught, used, and returned to. Page by page, the book insists on one thing: comprehension has to be built with intention.

The prose itself carries the urgency of a coach on the sidelines. Lemov and his colleagues write with conviction, sometimes prescriptive, but rarely heavy-handed. Their tone says: we’ve seen what works, the evidence is clear, now let’s not waste time. That kind of clarity may feel brisk to some readers, but it’s hard to argue with the payoff — guidance distilled from thousands of hours in real classrooms, sharpened by research, and delivered without apology.

Perhaps the book’s most powerful thread is its argument for equity. Students from language-rich homes accumulate background knowledge and vocabulary almost without noticing. Others do not. And when schools offer only fragmented passages and hollow strategies, the gap only widens. But put whole books at the center, layer them with explicit word work and purposeful writing, and suddenly access to knowledge — and the doors it opens — becomes available to all. Literacy, the authors argue, is not just an academic skill but a matter of justice.

This is not a handbook of quick tricks. It is, instead, a coherent framework for teaching reading once decoding is in place. The argument is simple but compelling: attention is trainable, fluency is lifelong, knowledge is power, vocabulary is cumulative, writing clarifies thinking, and books are not optional add-ons but the very heart of the curriculum. Close reading belongs in the mix, but as a spotlight — not the whole show.

The Science of Reading movement has rightly restored phonics to its place at the center of early literacy. But as Sold a Story made plain, fixing what went wrong in the early years is only half the battle. Lemov and his colleagues now push us to ask the harder question: once the code is cracked, how do we create readers who can think deeply, critically, and with joy? Their answer is demanding, evidence-driven, and — most importantly — actionable.

In the end, this book is both a manifesto and a manual. It reminds us that literacy is not just about unlocking words, but about unlocking worlds — and that requires attention, fluency, knowledge, vocabulary, writing, and, above all, books. In a field awash with buzzwords and quick fixes, The Teach Like a Champion Guide to the Science of Reading stands out as something rarer: a guide worth following.

Are Science Videos Really the Answer for Struggling Readers?
Andrew Watson

Obviously, we want our students to learn science. But: what if they struggle to learn the science by reading?

One solution: we could use VIDEOS to teach science. Videos can transform the written words into an audio track. Voila — no reading required!

A recent study explores this very question. To paraphrase its conclusion: “YUP — students learn more from science videos, so we can/should use them in science classes.”

While the study itself strikes me as well done in a number of ways, I want to push back against this conclusion. Ultimately, this disagreement might reveal the tension between a researcher’s perspective and a teacher’s perspective.

The outline for this post:

  • I’ll start by summarizing the study, and noting its strengths
  • I’ll then try to explain why I’m not persuaded that this study should make science teachers favor videos — even for weaker-than-average readers.

All The Right Steps

Doing education research well requires enormous care, because it can go wrong in so many different ways. Consider all the expectations that we have — and should have! — for research.

  1. We want to have enough participants in the study for the results to be meaningful.
  2. We want researchers to compare plausible alternatives — not just show that “doing something” is better than “doing nothing.”
  3. We’d like reports on meaningful results — not just “did the students have fun” or something like that.
  4. We’d be REALLY happy if the research took place in typical school conditions — not in some hermetically sealed zone that bears little resemblance to actual classrooms.
  5. We expect lots of statsy/mathy results (even if we don’t promise to read them carefully).

And so on. If researchers skip any of these steps, we can complain from the sidelines: “this study doesn’t include an ‘active control group,’ so we shouldn’t rely on its results.” (Honestly, I’m one of the people who object to studies without active control groups.)

Because I’m so aware of these expectations — and lots more — I felt more and more impressed as I made my way through this study. This team has clearly thought through many of the possible objections and found ways to anticipate and address them.

  • Enough participants? More than 100.
  • Active controls? Students learned by watching videos, or by reading an illustrated text (with identical words!). In my view, that’s an entirely plausible comparison.
  • Meaningful results? Researchers measured how well the students remembered and transferred their learning…up to a week later!
  • Within a school? Yup. In fact, students had RECESS in the middle of the study, because that’s how they roll in Finland.
  • All the stats? Yes. (More on this point in a minute.)

Thus, I was inclined to be persuaded that, as the abstract says:

The results indicate that videos are beneficial to most children across reading skill levels, especially those with weaker reading skills. This suggests that incorporating videos into primary school science instruction supports diverse learning needs associated with weaker reading skills.

By the way, in this case, “primary school” includes 5th and 6th grade.

A Teacher Has Doubts

Despite this study’s strengths — and I’m being quite serious when I compliment them — I was struck by the actual statistical findings.

The research team focused on three results:

  • how much cognitive load did the students experience while watching videos or reading illustrated texts?
  • how much difference did video vs. illustrated text make for remembering the information?
  • how much difference did it make for using information in another context — that is, for transfer?

To answer these questions, they focused on a statistical measure called “R²ₘₐᵣ”. As is always true with stats, it’s tricky to explain what it means. But here’s a rough-n-ready explanation.


Let’s say that when I study for a quiz using Method A I score 0 points, and when I study using Method B I get 100 points. The R²ₘₐᵣ tells me how much of that difference comes from the two different methods.

So, if R²ₘₐᵣ = .25, that means 25% of the difference between the two scores came from the difference in the two study methods. The other 75% came from other stuff.

Typically, according to this measure:

  • any R²ₘₐᵣ bigger than 0.25 is “large,”
  • a value between .09 and .25 is “medium,” and
  • a value between .01 and .09 is “small.”
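
For readers who like to see the arithmetic, here’s a minimal sketch — not from the study, and using made-up numbers — of how an ordinary R² for a two-condition comparison gets computed and labeled with the rough benchmarks above. (The study itself reports a marginal R² from a mixed model, a close cousin; plain R² is a simpler stand-in for illustration.)

```python
# A minimal sketch, NOT the study's analysis: simulated quiz scores for two
# hypothetical conditions, then the share of score variance explained by
# condition, labeled with the rough benchmarks listed above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 0 = illustrated text, 1 = video, with a tiny "video" bump
condition = np.repeat([0, 1], 50)
scores = 70 + 2 * condition + rng.normal(0, 10, size=100)

# R^2: 1 minus (leftover variance after using each group's mean as prediction)
# divided by (total variance around the overall mean)
predicted = np.where(condition == 1,
                     scores[condition == 1].mean(),
                     scores[condition == 0].mean())
ss_total = np.sum((scores - scores.mean()) ** 2)
ss_residual = np.sum((scores - predicted) ** 2)
r_squared = 1 - ss_residual / ss_total

# Label it with the rough benchmarks quoted above
if r_squared > 0.25:
    label = "large"
elif r_squared >= 0.09:
    label = "medium"
elif r_squared >= 0.01:
    label = "small"
else:
    label = "negligible"

print(f"R^2 = {r_squared:.3f} ({label})")
```

Run with these made-up numbers, the sketch lands in “small” territory — which is exactly the kind of judgment call we now have to make about the study’s real values.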

Now that we have a very introductory understanding of this measurement, how meaningful were the results in this study?

  • The “cognitive load” R²ₘₐᵣ was .046: right in the middle of “small.”
  • The R²ₘₐᵣ for remembering information was .016: barely above the bottom of the scale. And
  • R²ₘₐᵣ for transferring information was .003. That’s too small even to register as small.

In brief: did the researchers find STATISTICALLY significant results? It seems they did. Did they find MEANINGFUL differences between videos and illustrated texts? I’m not so sure.

Opportunity Costs

My objection at this point might reflect this difference between a researcher’s perspective and a teacher’s perspective.

The researchers can — entirely reasonably — say: “we ran a scrupulous experiment, and came up with statistically significant results. The data show that videos helped students learn science content better than illustrated texts. Teachers should at least think about using videos to teach science — especially for weak readers.”

As a teacher, my perspective has this additional variable: one of school’s core functions is to teach students to read. And: they get better at reading by — among other strategies — practicing reading.

In other words: according to this study, the benefits of video are so small as to be statistically almost invisible. The benefits of reading practice are — over time — likely to be quite important. I hesitate to give up on one of school’s essential functions (reading) to get such a small benefit (marginal increase in science knowledge) in return.

TL;DR

Someone might say to you — as a friend said to me — “this study shows that we should use videos to teach science content.”

If you hear that claim, be aware that this well executed study found only minute differences between videos and illustrated texts. We should consider this finding alongside a clear understanding of our broader educational mission: teach ALL students to read.


Haavisto, M., Lepola, J., & Jaakkola, T. (2025). The “simple” view of learning from illustrated texts and videos. Learning and Instruction, 100, 102200.