March 2024 – Education & Teacher Conferences
To Insta or Not to Insta: That Is the Memory Question
Andrew Watson

Here in the US, we’re having something of a national debate about the benefits/harms of social media.

The potential ban of TikTok is just the most visible example of the current fervor on the topic.

A serious-looking college student examining her phone

When we consider such heated questions, research offers us several benefits.

Specifically, it helps us get past vast and unprovable assertions:

“Social media is destroying a generation!”

“No! Social media will transform education and allow discovery and dialogue across the globe!”

When we turn to a research-based approach, we start asking narrow questions, measuring precise variables, and following well-established protocols.

So, let’s ask one of those precise questions: “does time on social media help consolidate new learning, or does it interfere with new learning?”

A research team in Germany wanted to know just that.

Let’s Get Quizzical

Happily, memory researchers have lots of experience in measuring this kind of question. So, they could follow well-established procedures.

In this study, sixty-seven college students in Germany learned Icelandic-German word pairs.

Immediately after doing so, half of them chatted away on Facebook or Instagram for eight minutes.

The other half put their heads down and rested quietly for eight minutes.

The research team measured their memory for those word pairs the following day.

Did the social-media users remember more, or fewer, word pairs?

I’m so glad you asked…

Possibilities, Possibilities

Before we open that envelope, let’s consider possible outcomes.

We could predict that social media usage would distract these students from the word pairs that they just learned. So much tweeting and ticking and toking will naturally interfere with memory formation.

Or, we could predict that social media will let students explore and extend their thinking. They might be intrigued by a particular Icelandic word, and start looking up cool Icey stuff and sharing Viking-sounding words with their online friends. All these connections might strengthen memories.

So, which is it?

In this case, the answer was clear: time on social media distracted students and reduced learning. (How much? The Cohen’s d was 0.33; not huge, but certainly noticeable.)
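For readers who want to see what an effect size like that means in practice, here is a minimal sketch of the Cohen's d calculation: the standardized difference between two group means. The recall scores, standard deviations, and group sizes below are hypothetical, chosen only to show how a d of roughly 0.33 arises; they are not the study's actual numbers.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical next-day recall (word pairs remembered):
# quiet-rest group vs. social-media group.
d = cohens_d(mean1=20.0, sd1=6.0, n1=34,   # rested quietly
             mean2=18.0, sd2=6.0, n2=33)   # used Facebook/Instagram
print(round(d, 2))  # 0.33
```

In other words: a d of 0.33 means the rested group outscored the social-media group by about a third of a standard deviation, which is the "not huge, but certainly noticeable" range.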

When we set our passions aside, ask a precise question, and measure the answer — we get helpful new data.

Not So Fast Now…

Although I (for one) am glad to have these data, we always have to acknowledge the limits of our research-based conclusions.

This research was done with German college students learning word pairs.

Would we get the same results with, say, Brazilian 2nd graders learning math?

Or, Japanese students on the autism spectrum practicing art?

We can’t say for sure, because this study didn’t examine those combinations of participants and disciplines. I don’t see any obvious reasons why the results would be different, but we should remain open to those possibilities.

More substantively, this one study does NOT allow us to conclude…

… students who use MORE social media are worse students than those who use LESS, or

… social media destroys children’s ability to focus, or

… parents should forbid their children from using Instagram or Facebook.

Other research studies might answer the first two of those questions; parental judgment will have to take on the third.

All those caveats being duly registered, I think we can draw this reasonable conclusion:

Common sense suggests that social media use will distract students from stuff they might want to remember — especially if they turn to Instagram right away. This research supports that presumption.

Keeping social media away from active learning experiences is — in most cases we can reasonably foresee — almost certainly a good idea.


Martini, M., Heinz, A., Hinterholzer, J., Martini, C., & Sachse, P. (2020). Effects of wakeful resting versus social media usage after learning on the retention of new memories. Applied Cognitive Psychology, 34(2), 551-558. https://doi.org/10.1002/acp.3641

I Am a Doctrinaire Extremist; S/he Is a Thoughtful Moderate
Andrew Watson

I recently had an email exchange with an educational thinker and leader who has spent several decades in the field.

After some back and forth, he dismissed my “tenacious belief in the centrality of memorization and retrieval” as ultimately missing the point of learning.

This summary struck me for a number of reasons:

First: it’s true (as far as it goes). I certainly do think that, under some circumstances, memorization can be helpful. And — supported by piles o’ research — I think that retrieval practice helps students form, consolidate, and transfer long-term memories.

Second: this summary implies that I’m in favor ONLY of memorization and retrieval practice. It suggests that I — like Dickens’s Gradgrind — want my students to know facts, facts, facts. (No doubt, someone is aching to use the verb “regurgitate” to capture my purported obsession with facts.)

Third: it further implies that I genuinely don’t care about the meaning behind the facts, my students’ interest in them, or the future usefulness or flexibility of them.

I am, simply put, a doctrinaire extremist.

Crowds surround a burning mansion at night

Because I see myself quite differently — heck, I recently wrote a book with the name “Goldilocks” in the title — I was taken aback by this rhetorical move.

I’ve been thinking about my new Gradgrind Status since receiving this email, and have arrived at a few tentative conclusions about the nature of educational debates.

We’re Mostly Moderates (?)

As implied above, I see myself as seeking out a reasonable middle ground in many educational debates.

For instance, as I’ve written repeatedly, I think that working memory limitations suggest that novices will benefit from “high-structure” pedagogies more than from “low-structure” pedagogies. (See this recent blog post for the difficulties in summarizing this “high-vs-low” debate simply, fairly, and accurately.)

At the same time, as I’ve also emphasized, I think students’ increasing expertise should promote them from high- to low-structure pedagogies.

That is: the more my students know, the more they should be challenged with open-ended, creative, quest-like assignments that will help them consolidate, connect, and extend their knowledge. (If you know Adam Boxer’s book Teaching Secondary Science, you know he makes the same argument.)

Given these three paragraphs — so earnest in their moderation — you can see why I’m puzzled (and amused) to see myself reduced to a pitchfork carrier.

At the same time — and here I’m guessing — I suspect almost everyone in an educational debate believes they’ve staked out the most reasonable position: probably one in the middle of some continuum.

For instance: my interlocutor explicitly champions a stem-to-stern overhaul of the US educational system.

From his perspective, the system we currently have is so disastrously out of synch with the needs of human flourishing and the genuine truths behind human cognitive and emotional functioning that its wholesale replacement is the only logical option.

That is: although “stem-to-stern overhaul” might sound radical, it is — in fact — an entirely moderate and sensible position given the extremity of the crisis we face.

Just as I think I’m a sensible moderate, he (I suspect) thinks his position is sensible and moderate-given-the-dreadful-circumstances.

We’re all moderates here.

We Are Moderates, but Extremes Exist

When someone accuses me of being a “high-structure extremist,” I have an easy rejoinder at hand: “oh, come on; NO ONE believes any such thing.”

As in: NO ONE follows the Gradgrind method and stuffs students with (facts)³.

In an early draft of that Goldilocks book I was just talking about, I made that very argument. I found a study that contrasts two teaching methods.

Method A: to understand what functions bones serve, students test chicken-bone strength by using vinegar to remove calcium from them.

Method B: students copy down the names of 206 bones from the chalkboard.

I argued — in this early draft — that “no one in the history of the planet has asked students to copy down the names of 206 bones. That’s an absurd straw man.”

A colleague who read this draft took me aside one day and assured me that — sure enough — some schools do exactly that. She, in fact, had taught at such a school.

Now, I’m probably right that no cognitive science research supports this method. But I do have to admit that some people distort cognitive science research to champion this method.

My approach is moderate, but extreme versions of my moderation do exist. In other words: my interlocutor is wrong about me (I think), but not entirely wrong about the world of education.

The Double Flip

This insight, in turn, invites two more aha! moments.

When I worry about the dangers of “low-structure” pedagogy, I might be tempted to highlight examples where teachers throw students overboard into a stormy ocean of cognitive stuff — and ask them to swim to shore. (“In your groups, figure out how to cure rabies …”)

Folks who champion low-structure pedagogies have a handy rejoinder: “NO ONE could misunderstand us to be in favor of such nonsense. That’s an absurd straw man extreme; I’m a sensible moderate.”

And — here’s the first aha! — I suspect low-structure advocates are entirely sincere in this claim. They see this approach as a moderate one, and I’m yoking them to an extreme version of it.

That rhetorical move is as unfair as is my interlocutor’s attempt to make me into Gradgrind.

And yet — here’s the second aha! — those extreme examples do exist; just as extreme versions of direct instruction do.

This tangle of circumstances leads to (at least) two prohibitions:

Low-structure proponents should not say: “those extreme versions of our pedagogy don’t exist!”

Why not? Because they do.

And I should not say: “because those extremes exist, your pedagogy is obviously unsound!”

Why not? Because those extremes are — almost certainly — misunderstandings of their plausibly moderate position.

Honestly, all this moderation is making me a little dizzy.

The Gradgrind Perch

From my new Gradgrindian vantage point, I see two conclusions:

One: although I see myself as a reasonable moderate, others easily perceive me as an extremist — because extreme versions of my way of thinking do exist,

and

Two: although I occasionally see other approaches as extreme, it’s possible/likely that their most thoughtful advocates champion a moderate version of them.

At this point, I’ve maxed out on the even-handed moderation that I can muster. To recover my equilibrium, I’m going to write the names of 206 bones on a chalkboard…

The Neuroscience of You by Chantel Prat
Erik Jahner, PhD

What a blast! Despite diving into countless neuroscience introductions, I found this journey uniquely enjoyable, resonating with me both as an educator and as an eager neuroscience explorer. Chantel Prat’s The Neuroscience of You: How Every Brain Is Different and How to Understand Yours effortlessly blends captivating storytelling with profound insights into the emerging understandings and mysteries of the human brain. Prat’s background as a professor of neuroscience at the University of Washington, her expertise in a variety of cross-disciplinary fields, and her translation of psychology and neuroscience for several popular science outlets make her book feel deeply personal and conversational. Through personal anecdotes and reflective questions and surveys that make the book relevant to you, Prat makes neuroscience accessible and appealing to newcomers and seasoned enthusiasts alike.

The first part of the book introduces neuroscience basics in relatable terms, avoiding overwhelming terminology while still challenging experts with nuanced concepts. Prat ensures the content remains current, reflecting modern understandings rather than outdated perspectives. Delving into familiar core topics like brain localization, hemisphere specialization, neurochemical dynamics, and brain rhythms, she leaves readers feeling empowered to explore their own brain’s unique mix. And she adds some amazing flavor to the topic, as is clear from her reference to the neurochemical makeup of the brain as “mixology.” In each section, as you apply the concepts, there is a necessary and important qualification: “it depends.” You walk away feeling ready to be your own mixologist, realizing the importance of the environmental, developmental, and genetic variation involved in the process of designing you. You are set up for some fun life designing and biohacking. Along with this preparation, you are ready to participate in understanding the developing field of neuroscience. She emphasizes that you are right in the middle of this scientific journey and prepares you to take your own journeys into the wide field of emerging studies.

Building on this foundation, the second part of the book delves deeply and personally into essential cognitive skills, offering robust theory and engaging narratives to help readers understand and utilize their brain’s inner workings. From focusing and adapting to navigating, predicting, and building curiosity, the book covers key aspects of cognitive function, concluding with a fascinating exploration of the brain’s social nature and its role in human interaction. This second part is wonderfully up to date, adding nuance and understanding to the science currently being evaluated. You will likely walk away from these sections with new questions and ideas, as you will better understand your interaction with the world and become more curious about it.

One of the most important themes of this book is that each individual is unique. Our unique mix of genes and environment has prepared each of us to interact with our world in our own way. But Prat also points out that it is this same mix that unites us.

This book deals artfully with the tension between freedom and determinism, between scientific terminology and layperson accessibility, and between abstract theory and personal relevance. It is well-rounded, so regardless of your expertise level you are bound to get something enjoyable from this text.

Above all, Prat’s passion for neuroscience shines through and is contagious, infusing the book with the excitement of a favorite theme park ride, promising endless returns for those eager to deepen their understanding. Additionally, the book serves as an excellent introduction to cutting-edge research and notable researchers in the field, making it an invaluable resource for anyone curious about the latest developments in neuroscience.

Does Mind-Wandering Harm Learning?
Andrew Watson

If you teach children for several hours a day, you just know that sometimes they’re with you…and sometimes not.

Side view of a student girl enjoying a summer breeze, smiling with eyes closed

They might be focused on your description of the “angle-side-angle” theorem; or, they might be thinking about the Oscars. (What a speech!)

So we might reasonably ask: “is their mind-wandering a problem? Do they learn less?”

We might be tempted by an uplifting answer: “mind-wandering allows students to make fresh and helpful connections.” If they link angle-side-angle to the Oscars, after all, they have made connections that will help them consolidate this new geometry information.

Or, we might be worried about a tough-minded answer: “it seems sort of obvious that if students aren’t focusing, they almost certainly aren’t learning.”

Which is it?

We’ve got a fair amount of research with adolescents and adults; for them, mind-wandering hampers learning.

But, what about younger students?

Pharaohs and Dinosaurs

As always, research details matter.

In this recent study, researchers asked 8-9 year olds to listen to two stories: one about pharaohs, the other about dinos.

These stories — about 12 minutes long — were interrupted every 90 seconds or so. The students answered whether they were …

… focusing on the story,

… thinking about something unrelated to the story (“It was fun being at the zoo yesterday”),

… thinking about their interest in — or abilities relative to — the story (“I’m not very good at this,” “I’m really interested in this”), or

… distracted by the environment (a slamming door).

Researchers also asked the students how interested they were in the content of the stories.

And — of especial interest — they measured the students’ understanding of the stories both immediately after the story and also one week later.

I’d Rather Know than Not Know

The results include lots of useful information: some surprising, some not.

First: unsurprisingly (to me), students who mind-wandered remembered less.

And, crucially, they remembered less both right away AND a week later.

This point really matters. We know from Nick Soderstrom’s work that initial performance isn’t a reliable indicator of long-term learning.

If we had only short-term results, we might optimistically think that short-term memory problems would give way to long-term improvements.

But: nope.

Students who reported more mind wandering didn’t learn as much.

Second: surprisingly (to me), the students’ interest level didn’t matter.

That is: even the students who REALLY LIKE DINOS didn’t learn as much if they mind-wandered.

Interest doesn’t protect students from the dangers of mind-wandering.

Third: horrifyingly (to me), students lose focus roughly 25% of the time.

In this study, they spent…

… about 10% of their time thinking about something else (“the zoo”),

… about 10% of their time thinking about their ability/interest (“I bet I won’t remember this part”), and

… about 5% of the time distracted by the environment (the slamming door).

If we want students to learn 100% of the material, and they’re mentally elsewhere for 25% of the time…well, that distraction puts a firm cap on what they can learn.
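The arithmetic behind that "firm cap" can be made explicit. A tiny sketch, using the approximate percentages reported above (the category names below are my own shorthand, not the study's labels):

```python
# Approximate shares of listening time (in percent) that students
# reported being off-task, per the summary above.
off_task = {
    "thinking about something else": 10,   # "the zoo"
    "thoughts about ability/interest": 10, # "I bet I won't remember this"
    "distracted by the environment": 5,    # the slamming door
}

total_off_task = sum(off_task.values())   # 25
on_task_ceiling = 100 - total_off_task    # 75

print(f"Mentally elsewhere ~{total_off_task}% of the time,")
print(f"so at most ~{on_task_ceiling}% of the material was even attended to.")
```

That ceiling is an upper bound, of course: attending to material is necessary for learning it, not sufficient.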

To be clear: this study took place during the pandemic, so students were at home and participating on Microsoft Teams. We therefore can’t take this finding as an entirely reliable measurement of their off-task thoughts during class.

However, I honestly worry that they might be mentally off task even more during school hours. The average classroom has LOTS more people in it, and features fewer dinosaur videos…

Teaching Implications

I think this study (especially if others confirm its findings) encourages us toward several tough-minded conclusions:

Mind-wandering really does interfere with learning.

It happens A LOT.

Students’ intrinsic interest doesn’t mitigate this problem.

Each of us will respond to those conclusions differently, but they do offer several suggestions:

First: reduce classroom distractions with energy and purpose.

Second: check for understanding even more frequently than we might think necessary. Doing so will a) help us know if they are mind-wandering, and b) help keep them focused.

Third: remain vigilant even if the topic seems intrinsically interesting. I might think that dinosaurs will keep students’ focus…but in this study they didn’t.

More broadly: I might spend some time looking in the mirror. How distracted am I? How much mind-wandering is a part of my thought routine?

After all: if mind-wandering hampers my own understanding, that result is as bad for me as for my students.


Cherry, J., McCormack, T., & Graham, A. J. (2023). Listen up, kids! How mind wandering affects immediate and delayed memory in children. Memory & Cognition, 1-17.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

“Writing By Hand Fosters Neural Connections…”
Andrew Watson

Imagine this conversation that you and I might have:

ANDREW: The fastest way to drive from here to the school is via South Street.

YOU: It is? That seems like a long detour. Why would I go that way?

ANDREW: I didn’t say it was the fastest; I said it was the best because it’s the prettiest.

YOU: You DID say it was fastest…wait, the prettiest? It’s basically junk yards and construction sites.

ANDREW: Yes, but because of all the bakeries, it smells really nice.

YOU: What does that have to do with fastest/prettiest?

ANDREW: Why are you being so unpleasant and difficult? South Street is the best route…

I suspect you would think: “this conversation is very frustrating and unhelpful because the goal posts keep moving.”

That is: I initially claimed that South Street is the fastest…but keep moving my claims as soon as you object. (And, oddly, I’m mad at you for being unreasonable.)

I routinely notice this pattern when I ask questions about the claim that “handwriting is better than laptops for note taking.”

Watch the goalposts move:

CLAIM: Handwriting is better than laptops for note taking. This study says so.

ANDREW: That study starts with the BIZARRE assumption that students can’t learn how to do new things — like, how to take notes correctly. And, research since then has routinely complicated or contradicted it.

CLAIM: I didn’t say handwriting is better because of this study. It’s because writing by hand changes neural networks. This research says so.

ANDREW: That research says that writing by hand helps students learn to write by hand. Of course it does.

But that doesn’t mean that writing by hand helps students learn other things — like, say, history or chemistry or German. Can you show me research supporting that claim?

CLAIM: I can’t, but when students write on laptops they distract students around them.

ANDREW: Yes, but that’s a completely different claim than the one you started with.

CLAIM: Why are you being so unpleasant and difficult? Writing by hand is better than taking notes on laptops!

Once again, I find this conversation frustrating and unhelpful. SO MANY MOVING GOALPOSTS.

I am entirely open to the idea that handwriting is better. But if someone makes that claim, and says it’s “research-based,” I’d like them to provide research that actually shows this claim to be true.

A bright yellow American football goalpost, above a bright green field and against dark stadium

So far, that turns out to be a big ask.

This idea that “handwriting is better than keyboarding” has popped up again (I suspect because of a recent study), so I want to re-investigate this claim — with a keen eye on those goalposts.

Reasonable Start

If you see a headline that says, “Why Writing by Hand Is Better for Memory and Learning,” you might interpret that claim roughly this way:

Students who take handwritten notes — in their 6th grade history class, say, or their 10th grade science class — remember more of that material after 2 weeks than students who took notes on laptops.

Yes, I conjured up some of those specifics: “6th grade history,” “two weeks later.” But those seem like reasonable extrapolations. What else could the claim substantively mean?

Briefly: plausible goalpost = “students remember more history 2 weeks later.”

So, let’s look at the recent research being used to support this claim.

Here’s a very basic question: “how did the researchers measure how much the students learned and remembered?”

Did the students take a quiz two weeks later? Did they undertake a “brain dump” the following day? How, precisely, do we know what they learned?

The answer is:

The researchers did not measure how much the students learned/remembered.

Honestly. No quiz. No brain dump. Nothing.

And yet, even though the study doesn’t measure memory or learning, it is being used to argue that handwriting enhances memory and learning.

I find this astonishing.

Instead, the study measures activity “in brain regions associated with memory and learning.”

Did you notice something?

Goalpost plausibly was: “students remember more history 2 weeks later.”

Goalpost now is: “more activity in important brain regions.”

Grrr.

Getting Specific

When evaluating “research-based” claims, it’s helpful to know exactly what the participants in the research did.

So, these 36 participants wrote the same fifteen words multiple times. Sometimes they wrote with a stylus on a tablet; sometimes they typed using only their right index finger. (BTW: all the participants were right handed.)

Now, this insistence on “right index finger” makes sense from a neuro-research perspective. If both “handwriters” and “keyboarders” are using one hand, then the researchers reduce lots of confounding variables.

At the same time, this emphasis also leads to highly artificial circumstances.

Presumably some people type with one finger. But, I’m guessing that most people who want to take laptop notes don’t. I suspect they want to take laptop notes because they have some degree of facility on a keyboard.

So:

Goalpost initially was: “students remember more history 2 weeks later.”

Goalpost then was: “more activity in important brain regions.”

Goalpost now is: “more activity in important brain regions when participants write as they usually do than when they type in a really, really unnatural way.”

Double grrr.

It is, of course, helpful to know about these differences in neural responses. But I don’t think they plausibly add up to “students remember more.” Because — remember — no one measured learning.

Lest I Be Misunderstood

In such conversations, I’m often misunderstood to be confident about the right answer. That is: I might seem to be saying “I’m confident that laptops are better than handwriting for learning.”

I am NOT saying that.

Instead, I’m asking for research that directly measures the claim being made.

If I say to you: “research shows that handwriting is better for learning than laptops,” I should be able to show you research that directly measures that claim.

If, instead, I have research showing that handwriting develops neural networks that might be beneficial for learning, I should say that.

My frustration about this point stems from a broader concern.

Over and over, I find that non-teachers cite research — especially neuroscience research — to boss teachers around. While I certainly do believe that teachers should know about pertinent research findings (that’s why I write this blog!), I also believe that we need to acknowledge the limits of our research-based knowledge.

I just don’t think that research (yet) demonstrates that handwritten notes generate more learning than laptop notes.

Overall, I’m inclined to believe:

Practicing fine motor skills (by, say, handwriting) is really important for young learners.

Practicing handwriting makes us better at handwriting — and other word-related skills.

As students get older and more facile with a keyboard, the benefits of handwriting vs. keyboarding will probably depend on the student, the subject, the kind of notes being taken, etc.

And if I see more than one study directly testing the claim that handwriting helps people learn better, I’m entirely open to that possibility.

But at least so far, that claim is not — by any definition that seems reasonable to me — “research-based.”


Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: A high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Weather Forecasting and Cognitive Science
Andrew Watson

I live in Boston, and we just had an ENORMOUS snow storm. TWELVE INCHES of snow fell in just a few hours. It was, as we say, “a monstah.”

Oh, wait a minute, that didn’t happen.

A winter scene: cars covered in a foot of snow, and two pedestrians walking away from the camera, shoulders hunched against the cold snow

The FORECAST said we’d get a monstah. In reality, by the end of the day, exactly 0.0 inches of snow had accumulated on my sidewalk. It was as close to “nothing” as was the Patriots’ chance of winning the Super Bowl this year.

You can imagine the public response:

Hah! All the “experts” with all their science-y equipment and equations and models and colorful images … they all got it wrong. AGAIN!

That’s it: I’m done with all this weather forecasting nonsense. I’ll rely on my trick knee to tell me when the weather is a-changing.

While that response is predictable, I also think it’s unfair. In fact, believe it or not, it reminded me of the work we do at Learning and the Brain.

In most ways, weather forecasting has almost nothing to do with cognitive science. But the few similarities might help explain what psychology and neuroscience research can (and can’t do) for teachers.

I want to focus on three illustrative similarities.

Spot the Butterfly

First, both meteorologists and cognitive scientists focus on fantastically complex systems.

In the world of weather:

As the butterfly effect reminds us, small changes over here (a butterfly flapping its wings in my backyard) could cause enormous changes over there (a typhoon in Eastern Samar).

In the world of education:

Whether we’re looking at neurons or IEPs or local school budgets or working memory systems or mandated annual testing, we’ve got an almost infinite number of interconnected variables.

Research might tell us to “do this thing!”, but the effect of that recommendation will necessarily depend on all those other variables.

We should not be shocked, therefore, that a one-step intervention (e.g.: growth mindset training) doesn’t have exactly the effect we want it to. That one intervention interacts with all those other complex systems.

The research-based suggestion isn’t necessarily wrong, but it also can’t briskly overcome all the other forces that influence learning.

Possibilities and Probabilities

Second: like weather forecasts, research-based suggestions focus on probabilities.

That is: the weather channel didn’t say “Boston is going to get 12 inches of snow!”

If you looked past the simplified headline, it said:

“We’ve seen conditions more-or-less like this 100 times before.

2 of those times, we got less than 2 inches

8 times, we got 2-6 inches

25 times, 6-10 inches

45 times, 10-14 inches

15 times, 14-18 inches

5 times, more than 18 inches.

Make plans accordingly.”
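Read as data, that forecast is an empirical probability distribution. Here is a small sketch, using the made-up counts from the example above, of how 100 prior cases turn into headline probabilities:

```python
# Hypothetical historical outcomes for "conditions like these,"
# copied from the illustrative forecast above: (snowfall range, count).
cases = [
    ("under 2", 2),
    ("2-6",     8),
    ("6-10",   25),
    ("10-14",  45),
    ("14-18",  15),
    ("over 18", 5),
]

total = sum(count for _, count in cases)  # 100 prior cases

# Probability of a "monstah" (10+ inches): 45 + 15 + 5 of 100 cases.
monstah = sum(count for label, count in cases
              if label in ("10-14", "14-18", "over 18"))
print(f"P(10+ inches) = {monstah / total:.0%}")          # 65%

# Probability the storm "busts" with under 2 inches --
# which is exactly what Boston got.
print(f"P(under 2 inches) = {cases[0][1] / total:.0%}")  # 2%
```

So a forecast of "probably a foot" and an outcome of "nothing" aren't contradictory: the nothing-happened outcome was in the distribution all along, just down in the tail.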

They don’t know for sure; they’re making predictions based on previous cases — and those previous cases provide a range of possibilities.

Notice, by the way, that the forecasters weren’t exactly wrong. New York and Philly got pounded; they got the “monstah” we were expecting.

But — because a butterfly somewhere flapped its wings — the storm went slightly south and left us alone.

So, too, with psychology and neuroscience research aimed at the classroom.

Researchers can say: “this strategy helped students score 5% higher on the end-of-year exam … ON AVERAGE.”

That means the strategy (probably) helped more students than it hurt. But the effects were different student-by-student.

Who knows: the strategy could have made learning harder for some students.

We’re looking at probabilities, not panaceas.

The Bigger the Claim…

Third: expert forecasters get their predictions right more often than they get them wrong. And — this is crucial — the “wrong” results come more often for big, outlier events.

Sunny days in June? Glum rain in November?

Relatively easy to predict.

A once-in-a-generation hurricane? A monstah snow storm?

MUCH harder to predict. We just have less data about unusual events because…they’re unusual!

So too in the world of research-based teaching advice.

I honestly think that researchers get their advice “right” much of the time — at least within the narrow confines of the context they describe.

That is: a large collection of well-designed studies probably merits careful consideration.

At the same time, if researchers loudly announce a big, outlier conclusion, we should be ready for that claim to collapse upon further investigation.

Imagine that researchers claim…

… dancing a hornpipe helps students learn fractions, or

… standing in a “power pose” does something worthwhile/important, or

… teachers don’t need to know anything about a topic to teach it well.

In each of these cases, the extremity of the claim should prepare us for doubts.

Equally true, let’s say “research shows” that a particular teaching strategy has a HUGE effect on learning.

It’s possible, but honestly kinda rare.

For instance, as I wrote recently, I found a meta-analysis claiming that the “jigsaw” method has a Cohen’s d value of 1.20. As stats people know, that’s simply ENORMOUS.

It’s possible…but I wasn’t at all surprised to find very little support for that claim. I honestly can’t think of any teaching intervention that makes that much of a difference on its own.

TL;DR

Like weather forecasters, psychology and neuroscience research…

… looks at enormously complicated systems,

… offers conclusions best understood as probabilities, and

… is likeliest to be right when it makes modest claims.

In brief: this field can be fantastically useful to classroom teachers, as long as we understand its challenges and limitations.

Our teacherly “trick knee” might be right from time to time. But wisely considered research will probably be better.