The Hidden Lives of Learners
Andrew Watson

Many times over the last several years, I’ve heard enthusiastic reviews of a seemingly-magical book called The Hidden Lives of Learners, by Graham Nuthall.

Book Cover for The Hidden Lives of Learners by Graham Nuthall. The cover shows a mountain range in front of a blue and cloudy sky.

Here’s the magic: Nuthall’s frankly astonishing research method.

Working in New Zealand classrooms in the 1980s, he put mics on all students and teachers. And, he had cameras in the classroom.

He and his team also broke down the teachers’ unit plans into granular learning goals. For instance, a unit on Antarctica might have 80 specific facts or concepts that the students should learn.

Finally, Nuthall’s team tested students both before and after these units.

Given this quite extraordinary data set, Team Nuthall could look at remarkably specific questions:

How much information about each topic did students already know before the unit began?

How much did they learn?

What, very specifically, did each student do and say to learn each specific new concept?

You can see why readers have responded so strongly to Nuthall’s method.

So, based on all his data, what did Nuthall conclude?

The Magic Number

Regular blog readers already know about the Spacing Effect.

That is: students learn more when they spread practice out than when they do the same amount of practice all at once.

In my experience, this research finding started getting broader notice in … say … 2015 or so. (I completed my grad program in 2012, and I don’t remember the spacing effect getting much — or any — attention at that time.)

Well, Nuthall’s research led him to a very similar conclusion more than a decade before.

That is: in Hidden Lives, Nuthall writes…

We discovered that a student needed to encounter, on at least three different occasions, the complete set of the information she or he needed to understand a concept.

If the information was incomplete, or not experienced on three different occasions, the student did not learn the concept. (63)

Similar to research into the spacing effect, Nuthall’s research shows that students must devote brain space to an idea several times — spread out over more than one class meeting — to consolidate that idea in long-term memory.

Later in Hidden Lives (p. 126), Nuthall suggests that students should “encounter the complete set of information” on four occasions — not three.

For me, the precise number (is it 4? is it 3?) is less important than the broader concept: teachers should build curricula that ensure students delve into an idea several times. One or two encounters can’t create enough momentum to change memory systems.
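As a toy illustration of that planning principle — the gap lengths below are my own invented placeholders, not values prescribed by Nuthall or the spacing-effect literature — a teacher sketching "several spread-out encounters" into a unit calendar might do something like this:

```python
from datetime import date, timedelta

def encounter_dates(first_encounter, gaps_days=(2, 7, 21)):
    """Return dates for repeated encounters with one concept.

    The default gaps (2, 7, 21 days) are illustrative only; the
    point is simply that a concept gets 3-4 complete encounters
    spread across multiple class meetings, not crammed into one.
    """
    schedule = [first_encounter]
    current = first_encounter
    for gap in gaps_days:
        current += timedelta(days=gap)
        schedule.append(current)
    return schedule

# Plan four encounters with, say, one Antarctica concept:
plan = encounter_dates(date(2023, 1, 2))
```

Again: the specific intervals are arbitrary here. What Nuthall's data support is the count and the spread, not any particular gap structure.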

I think that Nuthall’s method provides substantial support for translating the spacing effect research into classroom practice. Both psychology research AND Nuthall’s deep classroom investigation arrive independently at substantially similar ideas.

Changing the Focus

Most research in this field focuses on what teachers do. Nuthall — wisely — insists that we focus on what students do.

His methodology — all those microphones, all those transcripts — helps him recognize all those “encounters” with ideas. And, crucially, students often “encounter” ideas in their conversations and projects with other students.

This observation leads to several important insights.

First, students often have prior knowledge about a topic.

When that prior knowledge is incorrect, it BOTH hinders their understanding of new ideas AND hampers their classmates’ efforts to learn correct ideas.

For this reason — I’m extrapolating from Nuthall here — teachers really should focus on students’ prior misconceptions.

Unless we know what our students (wrongly) think they know, their misinformation will substantially muddle the learning process.

Second, building classroom culture matters.

This seemingly obvious statement comes from one of Nuthall’s most alarming findings (well: alarming to me).

The students in these classes were AMAZINGLY unkind to one another. Casual insults — even racial epithets — made up a regular part of classroom dialogue.

Nuthall proposes two solutions to this problem.

Option A: “Teachers therefore need to know who is in which friendship groups, who wants to be liked by whom, who has status, who is rejected.

They also need to know the kinds of beliefs and culture — about music, clothes, curriculum, learning, co-operating, and the like — that hold students’ relationships together.” (p. 37)

While I understand the logic behind this statement, it strikes me as frankly impossible. As I think over my various sophomore and senior English classes, it’s simply inconceivable to me that I would know — with any level of consistent detail — what the exact relationships are among all these people.

I might have a dim idea that this student is especially popular, or that those two are dating, or that some song or another has everyone’s attention. But for that knowledge to be broad and current: no way.

In fact, I think it would be inappropriate for me to know such things. Inquiring too closely into students’ personal and romantic lives does not strike me as healthy or appropriate.

A Better Way?

Happily, Nuthall proposes Option B:

“Some teachers have tried to deal with this problem [peer-to-peer unkindness] by creating an alternative culture within their classrooms — a culture of mutual respect and cooperation, a culture in which everyone is expected to succeed in some significant aspect of classroom activities.” (p. 37)

Now, this approach seems healthy, appropriate, and necessary.

Yes, I want my students to learn about Macbeth and topic sentences, but I also insist that they know how to treat one another well.

Nuthall’s findings about casual peer cruelty have reminded me how much happens in my classroom that I can’t see (“hidden lives of learners”), and how important it is that I address those invisible problems.

The Very Big Picture

One final point stood out for me in Nuthall’s book, although my interpretation of it might not persuade you. Here’s the story…

Because Nuthall measured how much students already knew, and what they did to learn new information, he could track important patterns. One pattern went like this:

Students who didn’t know much about the topic learned most from the teacher.

Students who already knew a lot learned most by working on their own, or with peers. (pp. 86-7)

I think this finding might help us see past a controversial binary in the field of education.

Current schooling debates have encouraged us to pick sides. Either we believe in direct instruction, or we believe in project pedagogies. (This sentence oversimplifies a very complex debate, but is a useful shorthand at this moment.)

Nuthall’s findings (and my own reading of schema theory) suggest an alternative viewpoint. Perhaps

Students who don’t know much about a topic (a.k.a. “novices”) learn most from the teacher (a.k.a. “direct instruction”), whereas

Students who already know a lot (a.k.a. “relative experts”) learn most by working on their own, or with peers (a.k.a. “project pedagogies”).

That is: before we say whether direct instruction or independent investigation is better for a student, we have to know where the student lies on the novice/expert continuum.

Novices need lots of guidance; relative experts benefit from more open-ended, self-driven exploration.

To be clear: I’ve been quietly advocating for this view for a few years now. It seems to me — although I could be wrong — that Nuthall’s data roughly support it.

Read This Book If…

…You’re intrigued by the possibility of extremely granular classroom research, focusing directly on the students’ experience,

…you want to see how the spacing effect plays out in the classroom,

…perhaps you want to know more about how students actually treat each other in day-to-day interactions.

…you want to hear an inventive and thoughtful researcher think aloud about his findings.

I don’t agree with everything that Nuthall has written. For instance, his account of working memory is not at all in line with current models of this cognitive function.

But, gosh: he and his book have given me lots to think about, and new ways to think about old ideas.

To 600, and Beyond…
Andrew Watson

Photograph of the author, wearing a blue shirt, pink tie, and glasses, smiling at the camera

WordPress informs me that this is the 601st article I’ve posted on this blog. That’s a few hundred thousand words since 2015 or so.

I’ve been honored over the years to meet so many of you who read this blog, and who think aloud with me about its topics. (If you see me at a Learning and the Brain conference, I hope you’ll introduce yourself!)

And I’ve thoroughly enjoyed the opportunities I’ve had to chat with researchers and other scholars as I try to understand their arguments.

As I look back over these years, some emerging themes stand out to me:

A Man, a Plan

When I attended my first Learning and the Brain conference in 2008, I knew what was going to happen:

Step 1: the “brain researchers” would tell me what to do.

Step 2: I would do it.

I would, thus, be practicing “brain-based teaching.” My students would learn SO MUCH MORE than they had in the past.

How hard could it be to follow researchers’ instructions? It turns out: it’s extremely hard simply to “follow researchers’ instructions.”

In the years since that conference, I’ve realized — over and over — how little I knew about what I didn’t know.

Surprise #1: One Size Does Not Fit

The first problem with my 2-step plan: I almost certainly SHOULDN’T DO what the researchers did.

Why?

Let’s say researchers studying the spacing effect asked college students to study three math topics.

Those students did five practice problems once a week for five weeks.

Voila: those students learned more than students who just did all 25 problems at once.

So, I should have my students do five practice problems once a week for five weeks, right?

Hmmm.

I’m a high school teacher. I teach English. I might not teach only three topics at a time. I might have more than 25 practice problems.

So, I can’t simply use the researchers’ formula for my own teaching plan.

Instead of doing what the researchers did, I should think the way the researchers thought.

The researchers’ successes resulted — in part — from the goodness of fit between their method, their students, and their topic.

To get those same successes in my classroom, I have to adapt their ideas to my particular context.

And: all teachers have to do exactly that kind of adapting.

In Step 2 above, I can’t just do what the researchers did. I always have to tailor their work to my teaching world.

Surprise #2: People Are COMPLICATED

Back in 2008, I assumed that “brain-research” would consistently show the same correct answers.

If I knew a correct answer, I could simply do the correct thing.

Alas, it turns out that research studies don’t always arrive at the same answer — because PEOPLE ARE COMPLICATED.

So: is focusing on Growth Mindset a good idea?

Although Mindset Theory has been VERY popular, it also generates lots of controversy.

Should schools require mindful meditation?

Ditto.

How much classroom decoration is too much?

If you look at the comments on this post, you’ll see that many teachers REALLY don’t like research-based answers to that question.

In other words, I can’t just “do what the research tells me to do,” because research itself comes up with contradictory (and unpopular) answers.

Surprise #3: “Brain Research” Isn’t (Exactly) One Thing

Throughout this post, I’ve been putting the words “brain research” in quotation marks.

Why?

Well, I was surprised to discover that researchers study the “brain” in at least two different ways.

If you really like biology, and want to study the “brain” as a physical object, you’ll go into a field called “neuroscience.”

You’ll look at neurons and neurotransmitters and glial cells and fMRI and EEG and myelination and the nucleus accumbens.

You’ll look at cells under microscopes, and prod them with pointy things while wearing gloves.

BUT

If you really like thoughts and emotions, and want to study the “brain” according to its mental processes, you’ll go into a field called “psychology.”

You’ll look at attention and memory and stress and learning and perception.

Notice: psychologists don’t look at attention under a (literal) microscope. They can’t pick up “stress” the way they can pick up a brain or an amygdala. They don’t need to wear gloves. Nothing damply biological is happening.

Yes, these days neuroscience and psychology are blurring together. We have people interested in “neuro-psychology”: the biological underpinnings of all those mental processes — memory, curiosity, generosity.

But that blurring is very recent — a couple of decades at most.

And most people in those fields don’t blur. They stick to one team or the other. (For most of the 20th century, these two fields eyed each other with disapproval and suspicion.)

Surprise #4: Psychology First

I don’t like the sentences I’m about to type, but I do think they’re true.

Back in 2008, when I first got into this field, I was REALLY interested in the neuroscience.

The very first question I asked at a Learning and the Brain conference was “where does attention happen in the brain?”

But, the more time I spend in this field, the more I think that teachers need information from psychology more than from neuroscience.

Yes, the neuro is fascinating. But, it almost never helps me teach better.

For instance:

I don’t need to know where long-term memories are stored in the physical brain. (That’s a question that neuroscientists try to answer.)

I do need to know what teaching strategies help students form new long-term memories. (That’s a question that psychologists try to answer.)

I focus on this topic — the relative importance of psychology for teachers — because so many people use neuroscience to boss teachers around.

Heck, I recently wrote a post about the bizarre claim that “neurotransmitters are happiness chemicals“: a claim that uses neuroscience to tell teachers what to do.

I myself think that anyone who wants to tell teachers “do this new thing!” should have tested that new thing directly with students. We call that research “psychology.”

TL;DR

Here’s what I would tell my 2008 self:

“This field you’re entering will help you and your students SO MUCH!

And, you should know:

You’ll always be translating research findings to your own classroom.

Because researchers and teachers disagree, you’ll always sort through controversy before you know what to do.

Neuroscience research is fascinating (fMRI is SO COOL), but psychology research will provide specific and practical suggestions to improve your teaching and help your students learn.”

I hope this blog has helped make some of those ideas clear and interesting over the years. And: I’m looking forward to exploring them with you even more…

My Detective Adventure: “VR Will Transform Education”
Andrew Watson

A friend recently sent me a link to an article with a click-baity headline: something like “Virtual Reality Will Change Education Forever.”

Man wearing Virtual Reality goggles, making gestures in the air

Her pithy comment: “This is obviously nonsense.” (It’s possible she used a spicier word than ‘nonsense.’)

On the one hand, I’m skeptical that ANYTHING will change education forever. Heck, if Covid didn’t transform education, who knows what will.

More specifically, ed-tech claims about “transforming education” have been around for a long time. Their track record doesn’t dazzle. (Smart boards, anyone?)

On the other hand, I always like to find research that challenges my long-held beliefs. After all, if I can’t learn from people who disagree with me, who can I learn from?

So, I followed my usual process.

In essence, I switched into Detective Mode, and started asking lots of questions.

If I ask the right questions, I thought, I’ll get a much clearer picture of potential educational benefits of VR.

Act I: The Investigation Begins

When I reviewed the article my friend sent, I noticed a troubling gap: the article didn’t link to underlying research.

As I’ve written in the past, this absence creates a red flag. If the article champions “research-based innovation,” why not link to the research?

So, I asked my first detective question. I emailed the author of the article and asked to see the research.

How simple is that?

Obviously, any resistance to this request — “sorry, we can’t share that at this moment” — would underline my friend’s skeptical verdict: “nonsense.”

However, the author responded immediately with a link to a research summary.

A promising development…

The Plot Thickens

This research summary showed real promise.

In brief:

Some college students in an introductory Biology course followed the typical path — readings, lectures, labs. (That’s the “control group.”)

Other students in the same course followed an alternative path: readings, lectures, supplementary Virtual Reality experience, alternative labs based on the VR experience.

When researchers looked at all sorts of results, they found that students on the alternative VR path did better.

That is: not only did the students enjoy the VR experiences; not only did they engage more with the material; they (on average) learned more.

However — and this is a BIG however — this research didn’t look like it was published.

In fact, when I asked that direct question, the article author confirmed that the research hadn’t yet been published in a peer-reviewed journal.

Now, the topic of peer review creates LOTS of controversy. The peer-review system has MANY troubling flaws.

However, that system probably reduces the amount of deceptive nonsense that gets published.

I almost never blog about research that hasn’t been peer reviewed, and so I thought my detecting was at its logical end. The VR claim might not be ‘nonsense,’ but it didn’t yet have enough published evidence to strengthen it.

And then, an AMAZING thing happened: the lead researcher emailed me to say she would be happy to talk with me about the study.

Over the years, I have occasionally reached out to researchers to be sure I understand their arguments.

But no researcher has EVER straight-up volunteered for such a meeting. And I mean: EVER.

The Payoff

Honestly, I’d love to transcribe my conversation with Dr. Annie Hale and Lisa Fletcher (“Chief of Realm 4”) — both at Arizona State University — because it was both fascinating and inspiring.

Because you’re busy, I will instead boil it into three key points:

First:

Hale and Fletcher have done — and continue to do — incredibly scrupulous research.

For instance, in the description above, I put the words “control group” in quotations marks.

I did so because of Hale and Fletcher’s insistence. The two groups of Biology students had somewhat similar academic experiences.

But the research paradigm required enough differences to make the words “control group” technically inappropriate.

Hale and Fletcher insisted on this precision throughout our discussion. For instance, they regularly pointed out that a particular calculation suggested a positive result, but didn’t reach statistical significance.

In other words, they highlighted both the strengths and weaknesses of their own argument.

This habit, in my view, makes them MUCH more reliable guides in this field.

Second:

Here’s a shocker: Hale and Fletcher do not claim that virtual reality will transform education.

No, really, they don’t.

The headline of the article my friend sent me made that claim, but the researchers themselves don’t.

Instead, they make a very different claim. The alternative Biology path included at least three big changes from the typical path:

Change #1: students had the VR experience (and their lab was based on that experience)

Change #2: the key underlying biology concepts had been translated into stories. For instance, this “narratively-driven virtual reality” includes an imaginary species called the Astelar. (Some of the students got QUITE protective of these imaginary creatures.)

Change #3: the TAs in these alternative path classes got special training, inspired by Doug Lemov’s Teach Like a Champion.

We can’t know — and, Hale and Fletcher don’t say they know — which of these three parts made the biggest difference.

We can tentatively suspect that these three elements working together produced all those learning benefits. And, Hale and Fletcher are planning lots of further research to confirm this tentative belief.

But, they’re not trying to get VR goggles on every forehead.

Key Point #3

Here’s one of my mantras:

Researchers isolate variables. Teachers combine variables.

In other words: research — as much as possible — looks at the effect of just one thing.

For instance: “mid-lecture aerobic movement improves learning in college students.”

However, teachers juggle hundreds of variables at every second. All those isolated variables studied by researchers might not provide me with useful guidance.

For instance: if I teach in a business school, my formally-dressed students might not appreciate my insistence that they do jumping jacks in the middle of the lecture hall.

My particular combination of variables doesn’t helpfully align with that isolated exercise variable.

Here’s my point: Hale and Fletcher seem to be changing the research half of this paradigm.

In their research, notice that they aren’t isolating variables. They are, instead, looking at combinations of variables.

VR + stories + Lemov training –> more learning

In fact, if I understand their argument right, they don’t really think that isolating variables can produce the most useful results — at least not in education research.

After all (and here I’m adding my own perspective), if teachers combine variables, shouldn’t research also look at combinations?

An Early Verdict

I set out on this detective adventure feeling quite skeptical. Both the initial claim (“transform education!”) and the absence of links made me all-but-certain that the strong claim would implode. (Example here.)

However, by persistently asking reasonable detective questions, I’ve arrived at a very different place:

VR + [concepts as stories] + [Lemov-inspired TA training] just might produce big learning gains, at least for some students.

And — crucially — a thoughtful, precise, imaginative, and cautious group of scholars is exploring this possibility in detail.

As I said back at the beginning, I’ve always got something to learn.


This post was edited on April 7, 2023 to correct Lisa Fletcher’s title.

How Teachers Can Use Neuroscience in Education
Andrew Watson

I recently saw two very different looks at neuroscience and learning, and I thought they made a useful pairing for this blog. Here goes…

 

Regular readers know that I’ve recently been exploring research into movement and learning. That is: does walking around – especially outside – help us think, learn, attend, and create?

An image of a brain in a human head, with EEG waves in the background

Because I really want the answer to be “yes,” I force myself to be extra skeptical when I look at the research. And even with all that extra skepticism, the answer is – for the most part – YES!

How do we know?

Well, various researchers have people walk around – or sit still – and then do various mental tasks. Often (although not always), they do better after walking than after sitting.

BOOM.

But wait! Wouldn’t it be great to have more evidence than walkers’ “performance on mental tasks”? Wouldn’t it be great to know what’s going on in their brains?

Beyond “Mental Tasks”

I recently read a Twitter post about this study:

Researchers at the University of Illinois at Urbana-Champaign had several 9- and 10-year-olds take various tests in reading comprehension, spelling, and math.

Researchers also had these students take tests on “attentional control” — which means, more or less, what it sounds like.

Students took these various tests once after sitting still for 20 minutes, and another time after walking at a moderate pace for 20 minutes.

Sure enough, these young students controlled their attention more effectively after walking than after sitting. And, they did better on the reading comprehension test after walking than after sitting.

Now: here’s the brain part.

Researchers also hooked students up to an electroencephalography (EEG) array while they took those tests.

EEG measures electrical activity on the outer-most layer of the brain, so – VERY roughly – it shows how various brain surfaces are acting at particular moments in time.

Here’s where things get very technical. (Neuroscience is ALWAYS very technical.)

EEGs produce up-and-down squiggles; they look a bit like lie detector tests in the movies.

Research with adults has consistently shown that exercise produces a change at the third squiggle in various brain regions. Because that squiggle (sort of) goes up, it’s called the “third positivity,” or P3.

This P3 (third positive squiggle) correlates with better attentional control in adults. Researchers hypothesized that they would get the same result with these young children.
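To make the "third positive squiggle" idea concrete, here's a minimal sketch of how a P3-style measurement works: average many stimulus-locked EEG trials into an ERP, then take the peak voltage in a post-stimulus window. Everything here (the sampling rate, the window, the function name) is my own illustrative assumption, not the Hillman team's actual analysis pipeline:

```python
import numpy as np

def p3_amplitude(epochs, sfreq=250, window=(0.25, 0.5)):
    """Estimate P3 amplitude from single-channel EEG epochs.

    epochs: array of shape (n_trials, n_samples), time-locked to
            stimulus onset at t = 0.
    sfreq:  sampling rate in Hz (250 Hz is an assumed example value).
    window: search window in seconds; the P3 typically appears
            roughly 250-500 ms after the stimulus.
    """
    erp = epochs.mean(axis=0)          # averaging cancels random noise,
                                       # leaving the event-related potential
    start = int(window[0] * sfreq)
    stop = int(window[1] * sfreq)
    return erp[start:stop].max()       # peak positivity in the window
```

The researchers' claim, in these terms, is that the walking condition produces a larger P3 than the sitting condition — and that this difference tracks better attentional control.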

Results, Please

Here’s the big neuroscience news – researchers DID get the same results for children as adults.

Changes in P3, induced by walking, took place when the students did better at attentional control.

So, why does this research finding matter?

If students’ minds behave differently after walking – they perform better at attentional control – we would expect that their brains behave differently.

Now we know: they do!

In the field, we call this pattern “converging evidence.” Two very different kinds of research — psychology AND neuroscience — support the same conclusion.

Now we can be even more confident that walking benefits cognition – even though, as you remember, I’m trying to be extra skeptical.

So, here we have the FIRST way that teachers can use neuroscience to support their teaching:

After psychology research suggests that a teaching suggestion might be beneficial, neuroscience can provide converging evidence to make this idea even more persuasive.

FANTASTIC. (By the way: I’ll come back to this study about walking and attentional control at the end of this blog post.)

The Matrix Could Be Real?

I said that I’d seen two articles about neuroscience worth sharing. The first – as you’ve seen – is very specific and researchy.

The second article – pointed out to me by my friend Rob McEntarffer — spends time speculating, musing, and wondering.

 

Crudely speaking, this article wonders if something Matrix-like could happen. Could Laurence Fishburne ever download kung fu into Keanu Reeves?

The article, in WIRED Magazine, opens with a fascinating scene. Doctors have implanted electrodes in a patient’s fusiform face area – the FFA. (Most neuroscientists think that the FFA helps the brain identify and recognize human faces.)

When the researchers stimulate the FFA, this patient – very briefly – sees human features on a blank box: an ear, a sideways smile, an eye.

In other words, electrical current applied to the brain surface created bits of a face. THE MATRIX EXISTS.

Wait. [Sound of record scratch.] Nope. No it doesn’t.

This article does a great job pointing out all the extraordinary complexities going from this tiny baby step to actually “implanting learning in the brain.”

As in, we are nowhere near being able to do anything remotely like that.

Glitches in the Matrix

The idea itself seems plausible. As Adam Rogers writes:

The brain is salty glop that turns sensory information into mind; you ought to be able to harness that ability, to build an entire world in there.

However, all sorts of problems get in the way.

At a very basic level, there are just too many neurons for us to be able to control precisely — something like 50,000 to 100,000 in an area the size of a grain of rice.

To make anything like perception happen, we’d have to get thousands of those stimuli just right. (Imagine how complex LEARNING would be.)

The proto-matrix also faces a timing problem:

Perception and cognition are like a piano sonata: the notes must sound in a particular order for the harmonies to work.

Get that timing wrong and adjacent electrical pings don’t look like shapes — they look like one big smear, or like nothing at all.

Finally — and this point especially merits attention:

The signals you see when a brain is doing brain things aren’t actually thought; they’re the exhaust the brain emits while it’s thinking.

In other words: all those cool brain images can’t necessarily be reverse engineered. We can measure electrical activity when a brain does something — but artificially recreating such electrical activity won’t guarantee the same underlying thought process.

So, here’s the SECOND way to use neuroscience in teaching:

When teachers understand how fantastically complicated neuroscience — and the underlying neurobiology of thought and learning — truly are, we can see through hype and extravagant claims often made about this field.

Rogers’s article does a GREAT job highlighting that complexity.

An Example

I promised to return to that study about walking and attention, so here goes:

I do think that this study offers some converging neuroscientific evidence that movement prior to learning enhances attentional control.

However, the Twitter post citing this study implied that it reaches a different conclusion: movement during learning is good for attention, creativity, etc.

That is: it claimed that teachers should design lessons that get students up and moving, and that this research requires this conclusion.

In particular, it highlights this image to show changes in brain activity between walking and sitting.

Rogers’s article in WIRED encourages us to think about all the neural complexity underlying this blithe suggestion.

After all, that image is simply a representation of a few dozen P3 graphs:

Many graphs showing electroencephalography results at the 3rd positivity.

Unless we have a clear idea what those squiggles mean, we shouldn’t be too confident about that image showing “changes in brain activity.”

And, by the way, people are often much too confident in interpreting such images. As in: it happens EVERY DAY.

To be clear: I think some movement during class often makes sense — although, as always, the students’ age and the school’s culture will influence this decision.

And this neuroscience research does provide “converging evidence” that movement built into the school day is a good idea.

But it certainly doesn’t require teachers to have students walking from place to place during lessons; that’s not what any of these researchers measured, and it’s not what they claim.

TL;DR

Neuroscience research focusing on the brain can benefit teachers by supporting — or contradicting — psychology research focusing on the mind.

If both kinds of research point the same direction, we can be especially confident that a teaching suggestion makes sense.

And a deep understanding of the complexity of neuroscience (a la Rogers’s WIRED article) can help us resist overconfident advice that seems to have (but really does not have) neuroscientific backing.


Hillman, C. H., Pontifex, M. B., Raine, L. B., Castelli, D. M., Hall, E. E., & Kramer, A. (2009). The effect of acute treadmill walking on cognitive control and academic achievement in preadolescent children. Neuroscience, 159(3), 1044–1054.

Warning: Misguided Neuroscience Ahead
Andrew Watson

I recently ran across a version* of this chart:

An (inaccurate) chart listing neurotransmitters: their effects and activities that enhance them

As you can see, this chart lists several neurotransmitters and makes recommendations based on their purported roles.

If you want to feel love, you should increase oxytocin. To do so, play with your dog.

If you want to feel more stable, you should boost serotonin. To do so, meditate, or go for a run.

And so forth.

On the one hand, this chart seems harmless enough. It recommends that we do quite sensible things — who can argue against “self-care,” or “hugging your children”? — and so can hardly provoke much controversy.

I, however, see at least two reasons to warn against it.

Willingham’s Razor

Most everyone has read Dan Willingham’s Why Don’t Students Like School?  (If you haven’t: RUN, don’t walk, to do so.)

Professor Willingham has also written a less well known book called When Can You Trust the Experts?, which offers lots of wise advice on seeing through bad “expert” advice.

One strategy he recommends:

Reread the “brain-based” teaching advice, and mentally subtract all the brainy words. If the advice makes good sense without them, why were they there in the first place? **

In the lists above, do we really need the names of the neurotransmitters for that advice to make sense?

To feel a sense of accomplishment, accomplish something.

If you want to feel better, eat chocolate.

To calm down, walk (or run) outdoors.

Who could object to these suggestions? Do we need multi-syllable words to embrace them?

I worry, in fact, that such charts create bad mental habits for teachers. Those habits sound like this:

If someone knows complicated neuro-terminology, then their teaching advice must be accurate. When a blogger uses the phrases “corpus callosum” and “research says,” therefore, I have to take their teaching advice.

No, you really DON’T have to take their advice. LOTS of people use the language of neuroscience to make their suggestions sound more authoritative.

As I’ve written elsewhere, neuroscience rarely produces classroom-ready teaching advice.

PSYCHOLOGY gives teachers great ideas about memory and attention and learning and motivation.

A biological understanding of what’s happening during those mental functions (i.e., neuroscience) is fascinating, but doesn’t tell teachers what to do.

In brief: beware people who use neuro-lingo to advise you on practical, day-to-day stuff. Like, say, that chart about “happiness chemicals.”

When Simplification Leads to Oversimplification

My first concern: the chart misleadingly implies that neuroscientific terminology makes advice better.

My second concern: the chart wildly oversimplifies fantastically complicated brain realities.

For instance, this chart — like everything else on the interwebs — calls oxytocin “the love hormone.”

A smiley face with the word "oxytocin" as the smile

However, that moniker doesn’t remotely capture its complexity. As best I understand it (and my understanding is very tentative), oxytocin makes social interactions more intense — in both positive AND NEGATIVE directions.

So: when we add oxytocin, love burns brighter, hatred smoulders hotter, jealousy rages more greenly.

To call it the “love hormone” is like saying “the weather is good.” Well, the weather can be good — but there are SO MANY OTHER OPTIONS.

The statement isn’t exactly wrong. But its limited representation of the truth makes it a particular kind of wrong.

So too the idea that dopamine is a “reward chemical.” Like oxytocin’s function, dopamine’s function includes such intricate nuance as to be difficult to describe in paragraphs — much less a handy catchphrase. ***

By the way: the most comprehensive and useful description of neurotransmitters I know comes in Robert Sapolsky’s book Behave. As you’ll see, they’re REALLY complicated. (You can meet professor Sapolsky at our conference in February.)

TL;DR

Yes, walking outside and hugging children and exercising are all good ideas for mental health.

No, we don’t need the names of neurotransmitters to make that advice persuasive.

We might worry about taking advice from people who imply that neuro-lingo does make it more persuasive.

And we can be confident that neurotransmitters are much, MUCH more complicated than such simplistic advice implies.


* I’ve made my own modified version of this chart. The point of this blog post is not to criticize the individuals who created the original, but to warn against the kind of thinking that produced it. “Name and shame” isn’t how we roll.

** I’m paraphrasing from memory. I’m on vacation, and the book is snug at home.

*** [Update on 12/30/22] I’ve just come across this study, which explores some of the contradictions and nuances in the function of serotonin as well.

The Limitations of Retrieval Practice (Yes, You Read That Right)
Andrew Watson
Andrew Watson

Last week, I wrote that “upsides always have downsides.”

African American student wearing a bow tie, hand to forehead, looking frustrated and disappointed

That is: anything that teachers do to foster learning (in this way) might also hamper learning (in that way).

We should always be looking for side effects.

So, let me take a dose of my own medicine.

Are there teaching suggestions that I champion that have both upsides and conspicuous downsides?

Case in Point: Retrieval Practice

This blog has long advocated for retrieval practice.

We have lots (and LOTS) of research showing that students learn more when they study by “taking information out of their brains” than by “putting information back into their brains.” (This phrasing comes from Agarwal and Bain.)

So:

Students shouldn’t study vocabulary lists; they should make flash cards.

They shouldn’t review notes; instead, they should quiz one another on their notes.

Don’t reread the book; try to outline its key concepts from memory.

In each of these cases (and hundreds more), learners start by rummaging around in their memory banks to see if they can remember. All that extra mental work results in more learning.

SO MUCH UPSIDE.

But wait: are there any downsides?

Let the Buyer Beware: Retrieval-Induced Forgetting

Sure enough, some researchers have focused on “retrieval-induced forgetting.”

Yup. That means remembering can cause forgetting.

How on earth can that be? Here’s the story…

Step 1: Let’s say I learn the definitions of ten words.

Step 2: I use retrieval practice to study the definitions of five of them. So, I practice remembering five words.

Step 3: Good news! Retrieval practice means I’ll remember the five words that I practiced better.

Step 4: Bad news! Retrieval-induced forgetting means I’ll remember the five words I didn’t practice worse. Yes: worse than if I hadn’t practiced those other five words.

In brief: when I remember part of a topic, I’m likelier to FORGET the part I didn’t practice. (Although, of course, I’m likelier to REMEMBER the part I did practice.)

So, retrieving induces forgetting. Now that’s what I call a downside.

Potential solution?

How do our students get the good stuff (memories enhanced by retrieval practice) without the bad stuff (other memories inhibited by retrieval practice)?

Here’s an obvious solution: tell our students about retrieval-induced forgetting.

Heck, let’s go one step further: tell them about it, and encourage them to resist its effects.

One research group — led by Dr. Jodi Price — tried just this strategy.

The research design here gets quite complicated, but the headline is:

They ran the same “retrieval-induced forgetting” study that others had run, and this time added a brief description of the problem.

In some cases, they added encouragement on how to overcome this effect.

So, what happened when they warned students?

Nothing. Students kept right on forgetting the un-practiced information (although they kept right on remembering the practiced information).

In brief: warnings about retrieval-induced forgetting just didn’t help. (Heck: in some cases, they seemed to promote even more forgetting.)

Alternative Solutions?

Much of the time, we benefit our students by telling them about research in cognitive science.

I routinely tell my high-school students about retrieval practice. I show them exactly the same studies and graphs that I show teachers in my consulting work.

In this case, however, it seems that sharing the research doesn’t help. Telling students about retrieval-induced forgetting didn’t stop retrieval induced forgetting.

Conclusion: it’s up to teachers to manage this side effect.

How? We should require retrieval of all essential elements.

For example:

When I teach my students about comedy and tragedy, the definitions of those terms include lots of moving pieces.

I know that ALL THE PIECES are equally important. So I need to ensure that my retrieval practice exercises include ALL THE PARTS of those definitions.

Students don’t need to remember everything I say. But if I want them to remember, I need to ensure retrieval practice happens.

Each of us will devise different strategies to accomplish this goal. But to get the upside (from retrieval practice) we should mitigate the downside (from retrieval-induced forgetting).

TL;DR

Retrieval practice is great, but it might cause students to forget the parts they don’t retrieve.

Alas, we can’t solve this problem simply by warning our students.

So, we should structure our review sessions so that students do in fact retrieve EVERYTHING we want them to remember.

If we create such comprehensive retrieval, students can get the upsides without the downsides.



Price, J., Jones, L. W., & Mueller, M. L. (2015). The role of warnings in younger and older adults’ retrieval-induced forgetting. Aging, Neuropsychology, and Cognition, 22(1), 1-24.

Upsides Always Have Downsides: “Side Effects” in Education Research
Andrew Watson
Andrew Watson

Here at Learning and the Brain, we believe that research can improve education.

Young man wearing a tie, showing thumbs up in one image and thumbs down in the other

Specifically, research into psychology (“how the mind works”) and neuroscience (“how the brain works”) can help teachers and schools. After all, we spend all day working with students’ minds and brains!

Every now and then, we should stop and look for flaws in our assumptions.

Are there ways that research might not help learning? Might it actually limit learning?

A recent article by Yong Zhao explores this important — and troubling — question.

The Medical Model

Doctors have long relied on research to test their hypotheses.

Does this treatment work better than that treatment? Let’s run a “randomized control trial” to find out.

Notably, medical research always includes this important question: what side effects* does a treatment produce?

That is:

Any treatment might produce specific benefits.

And, it might also produce specific harms.

Medical research looks for and reports on BOTH.

Sadly — and this is Zhao’s key point — education research tends to skip the second question.

Researchers look for benefits:

Does mindfulness reduce stress?

Can retrieval practice enhance learning?

Should students exercise mid-class?

When they measure the potential upsides of those “treatments,” they don’t always look equally scrupulously for downsides.

And yet: almost everything has downsides.

What to Measure, and When?

Why do we overlook the downsides?

Zhao offers two hypotheses.

First, we all agree that education is good.

If doing X helps students learn, then X is good! Its obvious goodness makes potential badness invisible.

Second, downsides take time — and alternative methods — to discover.

An example. I hypothesize a particular method will help students sing better. So, I test my method in a randomized control trial.

Sure enough, students with the new method sang better! My method worked!

However, my new teaching method just might make students hate singing.

To discover this “side effect,” I have to measure different variables. That is:

I need to check how well they sing (one set of measurements),

AND how much they like singing (a different set of measurements).

It’s also possible that the downside takes longer to arise. The improvement (right now) results in less enjoyment of singing (later on). If I don’t keep measuring, I’ll miss this “side effect.”

New Habits

As Zhao argues, our habit of overlooking potential downsides creates real problems.

For instance, Zhao takes the example of Direct Instruction.

Its proponents can show lots of research suggesting its strengths. Its detractors likewise.

How can these contradictory realities exist?

Well, any complex teaching method will have benefits and detriments. If we focus only on one — if we measure only one — we’ll necessarily miss the other.

Instead, Zhao argues, we should develop the rigorous habit of looking for both: the benefits of any teaching strategy, and also its downsides.

This more realistic, complex picture will allow us to make better decisions in classrooms and schools.

One More Step

Although Zhao doesn’t mention “opportunity costs,” I think they’re an important part of this conceptual re-think.

That is:

Every time I use one particular teaching strategy, I don’t use another.

If I take time for this stress-reducing technique, I don’t have time for that stress-reducing technique.

Even if a strategy has good research behind it, even if it has relatively few “side effects,” I always want to know: have I given up a better strategy to make time for this merely good strategy?

For example, this point often comes up in discussion of Learning Styles Theory.

If you’ve spent any time in this field, you know: Learning Styles Theory simply doesn’t have good research support behind it.

Alas: it has LOTS of popular support, even among teachers.

When I show teachers the comprehensive research reviews contradicting the theory, they occasionally respond this way:

“Okay, but what harm is it doing? It might be true, so why not teach to my students’ learning style?”

For me, the clear answer is opportunity cost.

If we teachers ARE spending time on teaching methods that have no research support, we ARE NOT spending time on those that do.

If students ARE studying on the treadmill because they’re “kinesthetic learners,” they ARE NOT using study strategies with research support behind them.

Measuring opportunity cost requires subtle and humble calculations. We just might have to give up a long-prized approach to make time for an even better one.

If our students learn more, that sacrifice will have been worth it.

TL;DR

Like medical researchers, we should look both for benefits and for potential harms of any teaching suggestion.

This balanced perspective might take additional time, and might require consideration of  opportunity costs.

It will, however, result in a more realistic and useful understanding of teaching and learning.


*  Many years ago, I read that the phrase “side effects” is misleading. It makes unwanted effects seem unlikely, even though they’re just as likely as the wanted effects.

For that reason, I’m putting the words “side effects” in quotations throughout this post.

I believe it was Oliver Sacks who made this point, but I can’t find the citation so I’m not sure.

If you know the correct source of this insight, please let me know!


Zhao, Y. (2017). What works may hurt: Side effects in education. Journal of Educational Change, 18(1), 1-19.

Getting the Details Just Right: Retrieval Practice
Andrew Watson
Andrew Watson

Can we ever research a topic too much? Can we reach a point where, well, there’s nothing really more to say about teaching better and learning more?

Perhaps, for instance, we’ve reached peak retrieval practice.

Blog readers – and conference attendees – know that actively recalling information results in greater learning than simple review.

For example: rather than reminding my students of yesterday’s discussion of the Harlem Renaissance, I can ask them to write down the key details from memory. When they make the mental effort to remember, they learn more.

This blog and many authors have written about this topic at length. What more is there to say?

I recently found a study that reminds us: there’s always more to say. If we want to combine teaching experience with researcher insight, we need to take time to get the details just right.

Here’s the story.

A Problem, a Solution, Another Problem

One problem with retrieval practice: it takes time.

I ask the question.

The students write their answers to the question.

I check their answers.

The minutes tick by.

Wouldn’t it be great if we could skip a few steps? How about this abbreviated version:

I ask the question.

The students think about their answers to the question.

I move on.

If my students truly think about the answers, then they’ll get the retrieval practice benefit in much less time.

This solution, however, creates its own problem.

If my students don’t write anything down, how can I know they actually think about the answers? Couldn’t they just nod and look earnest?

After all, what’s their motivation to do the thinking?

Let’s Check

A respected research team in this field has explored this set of problems, and their potential solutions.

In a recently published study, Megan Sumeracki and Johanny Castillo wanted to see if that first problem exists.

They had college students read a short passage. Some wrote answers to review questions; some were instructed to think about answers to those questions.

What happened a few days later?

Sure enough, the students who just thought about (but did not write down) answers were relatively confident that they’d remember information. (That is: they were more confident than those who wrote answers down.)

However, the thinkers actually remembered less than the writers.

Sure enough, as we predicted, students don’t always follow instructions to think about answers.

In other words: when I solve the first problem (retrieval practice takes time) by asking students simply to think, I create a second problem (students don’t really think).

How do we solve this conundrum? Can I solve BOTH problems?

Despair Not

Sumeracki and Castillo had an idea.

They repeated the “think about it” strategy, but this time with an additional ingredient: cold calling.

That is: they asked students to try retrieval practice by thinking about the answer. AND then they cold-called one student at random. (That is, they called on one student who hadn’t raised a hand.)

The researchers hoped to communicate this message: when told to think about the answer, students really should think about it – because they might actually have to answer the question.

What did they find?

Sure enough: students who thought about the answer now remembered as much as the students who wrote down their answers – presumably because they really did the thinking.

This two-part strategy – “retrieval practice by thinking” plus “cold calling” – takes less time AND produces the learning benefits of retrieving.

Problem solved!

One More Problem?

Some readers will have noticed that I raced past a potential controversy.

Truthfully, people do worry about cold calling.

Teachers worry that it creates a hostile, punitive environment. One grad school professor told me that cold calling ramps up stress, and stress destroys the hippocampus, so cold calling is malpractice.

Honestly, we don’t have lots and lots of research here.

One study I’ve found pushes back on the “ramps up stress” narrative. Others support that narrative.

And, as far as I know, we just don’t have good research in K-12 classrooms.

My own instincts say: yes, cold calling can be done badly. But, anything can be done badly. The key point is that cold calling can be done well.

If we create a classroom environment where making mistakes is an entirely normal part of the class routine – an environment that Doug Lemov calls a “culture of error” – then the potential stress of cold calling shouldn’t be a problem.

But, until we have actual research in many different classrooms, I can’t make that recommendation too emphatically.

The Sumeracki and Castillo strategy strikes me as a sensible solution to a real problem. More research on cold calling will make it more persuasive still.


Sumeracki, M. A., & Castillo, J. (2022). Covert and overt retrieval practice in the classroom. Translational Issues in Psychological Science.

Walking Promotes Creativity? A Skeptic Weighs In…
Andrew Watson
Andrew Watson

When teachers try to use psychology research in the classroom, we benefit from a balance of optimism and skepticism.

Family walking toward camera in autumn woods

I confess, I’m often the skeptic.

When I hear that – say – “retrieval practice helps students learn,” I hope that’s true, but I want to see lots of research first.

No matter the suggestion…

… working memory training!

… dual coding!

… mindfulness!

… exercise breaks!!!

… I’m going to check the research before I get too excited. (Heck, I even wrote a book about checking the research, in case you want to do so as well.)

Here’s one surprising example.

Because I really like the outdoors (summer camp, here I come!), I’d LOVE to believe that walking outside has cognitive benefits.

When I get all skeptical and check out the research…it turns out that walking outside DOES have cognitive benefits.

As I wrote back in May, we’ve got enough good research to persuade me, at least for now, that walking outdoors helps with cognition.

Could anything be better?

Yup, Even Better

Yes, reader, I’ve got even better news.

The research mentioned above suggests that walking restores depleted levels of both working memory and attention.

“Yes,” I hear you ask, “but we’ve got other important mental functions. What about creativity? What does the research show?”

I’ve recently found research that looks at that very question.

Alas, studying creativity creates important research difficulties.

How do you define “creativity”?

How do you measure it?

This research, done by Oppezzo and Schwartz, defines it thus: “the production of appropriate novelty…which may be subsequently refined.”

That is: if I can come up with something both new and useful, I’ve been creative – even if my new/useful thing isn’t yet perfect.

Researchers have long used a fun test for this kind of creativity: the “alternative uses” test.

That is: researchers name an everyday object, and ask the participants to come up with alternative uses for it.

For example, one participant in this study was given the prompt “button.” For alternative uses, s/he came up with…

“as a doorknob for a dollhouse, an eye for a doll, a tiny strainer, to drop behind you to keep your path.”

So much creativity!

Once these researchers had a definition and a way to measure, what did they find?

The research; the results

This research team started simple.

Participants – students in local colleges – sat for a while, then took a creativity test. Then they walked for a while, and took a second version of that test.

Sure enough, students scored higher on creativity after they walked than after they sat.

How much higher? I’m glad you asked: almost 60% higher! That’s a really big boost for such a simple change.

However, you might see a problem. Maybe students did better on the 2nd test (after the walking) because they had had a chance to practice (after the sitting)?

Oppezzo and Schwartz spotted this problem, and ran three more studies to confirm their results.

So, they had some students sit then walk, while others walked then sat.

Results? Walking still helps.

In another study, they had some students walk or sit indoors, and walk or sit outdoors.

Results: walking promotes creativity both indoors and out.

Basically, they tried to find evidence against the hypothesis that walking boosts creativity…and they just couldn’t do it. (That’s my favorite kind of study.)

Just One Study?

Long-time readers know what’s coming next.

We teachers should never change our practice based on just one study – even if that study includes 4 different experiments.

So, what happens when we look for more research on the topic?

I’ve checked out my go-to sources: scite.ai and connectedpapers.com. (If you like geeking out about research, give them a try – they’re great!)

Sure enough, scite.ai finds 13 studies that support this conclusion, and 3 that might contradict it. (In my experience, that’s a good ratio.)

Connectedpapers.com produces fewer on-point results. However, the most recent study seems like a very close replication, and arrived at similar findings.

In brief: although I’m usually a skeptic, I’m largely persuaded.

TL;DR

Walking outdoors helps restore working memory and attention; walking either indoors or outdoors enhances creativity (at least as measured by the “alternative uses”  test).

I’d love to see some studies done in schools and classrooms. For the time being, I think we have a persuasive foundation for this possible conclusion.

Our strategies for putting this research to good use will, of course, be different for each of us. But it’s good to know: simply walking about can help students think more creatively.


Oppezzo, M., & Schwartz, D. L. (2014). Give your ideas some legs: The positive effect of walking on creative thinking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(4), 1142.

The Most Important 5 Minutes in Class: The Primacy/Recency Effect
Andrew Watson
Andrew Watson

As we put our lesson plans together, we teachers want to know: are some minutes more valuable than others?

Student Holding Clock

That is:

Do students remember most at the 10-minute mark of the lesson, because they’re mentally revved up?

Or, perhaps they remember most from the final five minutes, because the whole class has led to this grand conclusion.

Or, perhaps some other time slot generates the most learning, because psychology reasons.

What does the research tell us?

Start Here

I occasionally see teaching advice that seeks to answer this question. That advice typically begins with a fascinating research pool.

Here’s the story.

Researchers present students with — say — a list of 15 words. After distraction, how many  words do students remember? And, can we predict which ones?

Several studies suggest a consistent answer.

Students tend to remember words from the beginning of the list. Researchers call that the “primacy” effect.

And, they remember words from the end of the list. That result gets the moniker “recency effect.”

Going all the way back to 1962, this primacy/recency effect has a lot of research behind it. (For a more recent study, click here.)

Lab to Classroom

So, how should teachers plan our lessons based on this particular finding?

Let’s imagine that I tell my students a list of 8 instructions. Because of the primacy/recency effect, I suspect they’ll remember the early and late instructions better than the ones in the middle. (Hint: maybe I should write down a long list of instructions…)

But: what does this effect tell us about the most valuable teaching time during a class period as a whole?

From time to time, scholars who translate psychology research for classroom teachers make this argument:

The primacy/recency effect suggests that the first several minutes of class, and the final several minutes of class, have the greatest effect on learning.

That is: For the same reason that students remember the first and last instruction from my list of 8, they’ll learn the most during the first and last minutes of class.

Voila: a research-based answer to the question.

I confess, however, that I myself have doubts.

The argument says, in effect:

Rules governing mental processes for 60-120 seconds also govern mental processes for 45-80 minutes.

Honestly, I’m just not sure that’s plausible. My doubts spring from two sources.

Doubts, and More Doubts

In the first place, I doubt this advice because it extrapolates so far beyond the initial research conditions.

If research tells me something about — say — college students, that conclusion might also apply to 1st graders. But it might not. 1st graders aren’t college students.

If research tells me something about adolescents in Iceland, that conclusion might apply to teens in Brazil. But it might not. Icelandic culture differs from Brazilian culture.

And, if research tells me about mental functions over one minute, that conclusion might apply to 20 minutes. (Or 45, or 80.) But IT MIGHT NOT. One minute isn’t twenty.

Long-time readers know I always focus on “boundary conditions.” From my perspective, this advice goes WAY beyond the boundaries of the initial research.

By the way: I’ve asked SEVERAL wise people if they know of primacy/recency research that goes beyond a minute or two. So far, the answer is “no.”

The second reason I doubt this advice: the specific mental functions involved.

As far as I can tell, researchers explain the primacy/recency effect by talking about short-term memory and working memory.

Both of these mental faculties describe very short-term mental functions. In my grad-school classes, our profs typically said that working memory holds information somewhere between 5 and 30 seconds.

If, in fact, the primacy/recency effect results from short-term and working memory functions, then those findings almost certainly won’t apply to mental processes that take 30+ minutes.

Like, say, our classes.

Just Answer the Question

If this advice doesn’t hold, what can research tell us about the “most important five minutes in class”?

I’ve got two answers.

Answer #1:

I’ve asked lots of people if they have a research-informed answer to this question. So far, no one has a strong “yes.” But, if I hear of one, I’ll pass it along.

And, btw, a friend has answered “we really have to research that question!” So, I’ll let you know if/when his results come through.

Answer #2:

Long-time readers know my mantra: “don’t just do this thing; instead, think this way.”

In this case, I don’t think we can plausibly identify any one time slot that consistently generates the most learning.

Instead, we want to use core ideas from cognitive science to structure lesson plans effectively.

Use retrieval practice.

Beware working-memory overload.

Foster attention.

Activate prior knowledge.

And so forth.

If we follow this approach, every minute will build ultimately — and more-or-less equally — toward students’ learning.


Castel, A. D. (2008). Metacognition and learning about primacy and recency effects in free recall: The utilization of intrinsic and extrinsic cues when making judgments of learning. Memory & Cognition, 36(2), 429-437.