Goals, Failure, and Emotions: a Conceptual Framework
Andrew Watson

Researchers can provide guidance to teachers by looking at specific teaching practices.

In last week’s post, for instance, I looked at a study about learning from mistakes. TL;DR: students learned more from review sessions where they explored their own mistakes than from sessions where teachers reviewed the ideas.

Or,

Back in December, I looked at a study about using “pre-questions” to reduce mind-wandering. Sure enough, students who answered pre-questions about a topic spent less time mind-wandering than those who didn’t.

Obviously, these studies might provide us with lots of useful guidance.

At the same time, this “one-study-at-a-time” approach has its drawbacks. For instance:

What if my students (or class) don’t really resemble the students (or class) in the study?

What if THIS study says that pre-questions reduce mind-wandering, but THAT study says they don’t?

What if THIS study (again) says that pre-questions reduce mind-wandering, but THAT study says that mindfulness meditation reduces mind-wandering? Which strategy should I use?

And so forth.

Because of these complexities, we can — and should — rely on researchers in another way. In addition to all that research, they might also provide conceptual frameworks that help us think through a teaching situation.

These conceptual frameworks don’t necessarily say “do this.” Instead, they say “consider these factors as you decide what to do.”

Because such guidance is both less specific and more flexible, it might be either especially frustrating or especially useful.

Here’s a recent example…

Setting Goals, and Failing…

We spend a lot of time — I mean, a LOT of time — talking about the benefits of short-term failure. Whether the focus is “desirable difficulty” or “productive struggle” or “a culture of error,” we talk as if failure were the best idea since banning smoking on airplanes.

Of course, ask any student about “failure” and you’ll get a different answer. Heck: they might prefer smoking on airplanes.

After all: failure feels really unpleasant — neither desirable nor productive, nor cultured.

In a recent paper, scholars Ryan Carlson and Ayelet Fishbach explore the complexity of “learning from failure”: specifically, how failure interferes with students’ goals.

To build a conceptual framework around this question, Carlson and Fishbach propose two concept pairs.

First: they consider the important distinction between goal setting and goal striving.

Happily, those terms mean just what they say.

When I decide that I want to learn Spanish, or strengthen my friendships, or stop drinking caffeine, I am setting a goal.

When I decide to enroll in a Spanish class, schedule more frequent dinners with pals, or purge my kitchen of all my coffee clutter, now I’m goal striving.

This pair helps us think through the big category “goals” in smaller steps.

Second: Carlson and Fishbach consider that both emotional barriers and cognitive barriers can interfere with goal setting and goal striving.

The resulting conceptual possibilities look like this:

A 2x2 grid: “goal setting” and “goal striving” as the two columns, and “emotional barriers” and “cognitive barriers” as the two rows.

The grid created by these conceptual pairs allows us to THINK differently about failure: both about the problems that students face, and the solutions that we might use to address them.

Troubling Examples

Having proposed this grid, Carlson and Fishbach explore research into its four quadrants. I’ll be honest: the resulting research and insights frequently alarmed me.

For instance, let’s look at the top-left quadrant: “emotional barriers during goal setting.”

Imagine that one of my students contemplates an upcoming capstone project. She wants to set an ambitious goal, but fears that this ambitious target will lead to failure.

Her emotional response during goal setting might prompt her to settle for an easier project instead.

In this case, her emotional response shuts down her thinking before it even starts. As Carlson and Fishbach pithily summarize this situation: “people do not need to fail for failure to undermine learning.”

YIKES. (Suddenly, the whole “desirable difficulties” project sounds much less plausible…)

Or, top right (emotional barriers/goal striving): it turns out that “information avoidance” is a thing.

People often don’t want to learn results of medical tests — their emotions keep them from getting to work solving a potential health problem.

So, too, I can tell you from painful experience that students often don’t read the comments on their papers. When they’re disappointed with a grade, they don’t consistently react by considering the very feedback that would help them improve — that is, “strive to meet the goal of higher grades.”

Or, lower right (cognitive barriers/goal striving). Carlson and Fishbach describe a study — intriguingly called “The Mystery Box Game.”

Long story short: in this game, learning how to fail is more beneficial than learning about one path to success. Yet about 1/3 of participants regularly choose the less beneficial path — presumably because “learning how to fail” feels too alarming.

Problems Beget Solutions?

So far, this blog post might feel rather glum: so much focus on failure!

Yet Carlson and Fishbach conclude their essay by contemplating solutions. Specifically, they use a version of that grid above to consider solutions to the cognitive and emotional barriers during goal setting and goal striving.

For example:

  • “Vicarious learning”: people learn more from negative feedback when it’s directed at someone else.
  • “Giving advice”: counter-intuitively, people who give advice benefit from it at least as much as those who receive it. So, students struggling with the phases above (say: cognitive barriers during goal striving) might be asked for advice on how to help another student in a similar situation. The advice they give will help them.
  • “Counter-factual thinking”: students who ask “what if” questions (“what if I had studied with a partner? what if I had done more practice problems?”) bounce back from negative feedback more quickly and process it more productively.

Because I’ve only recently come across this article, I’m still pondering its helpfulness in thinking about all these questions.

Given the optimism of “desirable difficulty” and “productive struggle” in our Learning and the Brain conversations, I think this framework offers a helpful balance: it helps us understand and manage these extra levels of realism.


Carlson, R. W., & Fishbach, A. (2024). Learning from failure. Motivation Science.

The Downsides of Desirable Difficulties
Andrew Watson

For several years now, we’ve been talking about the benefits of “desirable difficulties.”

For instance, we know that spreading practice out over time helps students learn more than does doing all the practice at once.

Why? Because that schedule creates greater mental challenges. Our students must think harder.

In other words: “spacing” creates “desirable difficulty.”

Likewise, we know that jumbling many topics together during practice helps students learn more than does practicing only one thing at a time.

Why? Students face greater cognitive challenges as they try to figure out which strategy to use or topic to notice.

Desirable difficulty.

And: requiring students to use retrieval practice helps them lots more than simple review.

Yup, you guessed it: d_______ d__________.
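
If it helps to make that contrast concrete, here is a minimal sketch of my own (not from any study; the topics and problem labels are invented) showing how a blocked practice schedule differs from an interleaved one:

```python
import random

# Hypothetical practice problems for three math topics.
topics = {
    "fractions": ["f1", "f2", "f3", "f4"],
    "decimals":  ["d1", "d2", "d3", "d4"],
    "percents":  ["p1", "p2", "p3", "p4"],
}

# Blocked practice: finish every problem on one topic before moving on.
blocked = [problem for problems in topics.values() for problem in problems]

# Interleaved practice: mix the topics together, so each new problem
# forces students to decide WHICH strategy applies.
interleaved = blocked.copy()
random.shuffle(interleaved)

print(" ".join(blocked))      # f1 f2 f3 f4 d1 d2 d3 d4 p1 p2 p3 p4
print(" ".join(interleaved))  # e.g., d2 p4 f1 d3 f3 p1 ...
```

Same problems, same total practice time; the interleaved version simply adds the (desirable) difficulty of deciding which strategy each problem calls for.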

A theory that is simultaneously counter-intuitive and common-sense. What’s not to love?

Not So Desirable

I’ll tell you what’s not to love: the big silence.

The phrase “desirable difficulty” implies, obviously, that our students might face UNdesirable difficulties.

And yet, very few people ever discuss — much less research — this topic.

So, what exactly would an undesirable difficulty be? How can I predict or spot one?

I discuss this question with teachers quite often, and I have two sets of suggestions.

The First Strategy

At a Learning and the Brain conference a few years ago, Dr. Robert Bjork (who coined the phrase “desirable difficulty” with his wife, Dr. Elizabeth Ligon Bjork) explained that DDs have two core features.

First: they require students to think harder about the material.

Second: despite the difficulties, students ultimately succeed.

By implication, difficulties that don’t meet those criteria aren’t desirable.

For instance, I’ve just assigned a final project on Macbeth to my sophomores: they must think about the play, create a new something (a set design, a costume plot, a new scene, etc.), and then explain their thinking.

I’ve warned my students quite strictly: they may use technology, but they should NOT get carried away with all the cool tech possibilities at hand.

If they know how to edit videos and want to shoot a scene, that’s fine. But they should not simply throw in 1001 cool editing effects. Those edits would make them think harder, perhaps, but not think harder about the play.

The work would be difficult, but not desirably difficult.

So, too, I might ask my students to write a sentence that uses noun clauses both as the subject of the verb and as an appositive, and also uses an introductory subordinate clause as an adverb.

In this case, my students would think harder (that’s Bjork’s first criterion), but they almost certainly wouldn’t succeed (Bjork’s second criterion).

Again: a difficulty, but not a desirable one.

In other words, we want to ramp up the difficulty — but not too far — without letting the focus subtly shift to another topic.

The Second Strategy

So, difficulties aren’t desirable if they don’t meet both of Bjork’s criteria.

Another way to recognize UNdesirable difficulties: MOST difficulties are undesirable.

So, I can make attention more challenging by — say — playing loud music while students read.

More difficult? Yes. Desirable? No.

I can vex my students’ working memory by giving them ten verbal instructions to remember and follow.

More difficult? Still yes. Desirable? Still no.

I could fiendishly reduce my students’ motivation by inculcating a fixed mindset.

You know the answer. That additional difficulty would in no way be desirable.

In other words, a few specific difficulties (spacing, interleaving, retrieval practice) can be desirable. Most others, however, simply are not.

TL;DR

Desirable difficulties — which require students to think harder before they succeed at their work — can foster deeper learning.

However, most classroom difficulties don’t meet that definition, and therefore aren’t desirable.

Whenever we champion desirable difficulties, we should be sure to mention and guard against the undesirable ones that imperil students’ learning.

Conflicting Advice: What to Do When Cognitive Science Strategies Clash?
Andrew Watson

Teachers like research-informed guidance because it offers a measure of certainty.

“Why do you run your classes that way?”

“Because RESEARCH SAYS SO!”

Alas, we occasionally find that research encourages AND DISCOURAGES the same strategy simultaneously.

What to do when expert advice differs?

In fact, I got this question on Thursday during a Learning and the Brain Summer Institute. Here’s the setup.

“Difficult” Can be Good

Regular readers know that desirable difficulties help students learn. As explained by Bob Bjork and Elizabeth Ligon Bjork — and researched by countless scholars — some degree of cognitive challenge enhances long-term memory formation.

In brief: “easy learning doesn’t stick.”

And so: why do spacing and interleaving help students learn? Because they ramp up desirable difficulty.

Why is retrieval practice better than simple review? Because (among other reasons) review isn’t difficult enough. Retrieval practice, done correctly, adds just the right amount of challenge.

And so, if you attend Learning and the Brain conferences (like this one on “Teaching Thinking Brains”), or if you read any of the great books about long-term memory formation, you’ll hear a lot about desirable difficulty.

Memory at Work

Cognitive scientists who don’t focus on long-term memory might instead focus on a distinct mental capacity: working memory. 

Working memory allows us to gather information — facts, procedures, etc. — into a mental holding space, and then to reorganize and combine them into new patterns and ideas.

In other words: it’s absolutely vital for thinking and learning. If students are learning academic information, they are using their working memory.

Alas, all this good news comes with some bad news: we don’t have much working memory. And our students probably have less than we do. (For evidence, try this mental exercise: alphabetize the workdays of the week. No problem alphabetizing 5 words? Now try alphabetizing the twelve months of the year. OUCH.)

For this reason, effective teachers pay scrupulous attention to working memory load. Every time we go beyond working memory constraints, we make learning MUCH HARDER.

In fact, I think working memory is so important that I wrote a lengthy series of blog posts on the topic. I’m kind of obsessed. (Heck: I even wrote a book on the topic, called Learning Begins.)

Trouble in CogSci Paradise

Because both topics — desirable difficulties and working memory — provide teachers with important and powerful insights, I devoted much of last week’s workshop to them. Almost every day, in fact, we talked about both.

On Thursday, one participant asked this wise and provocative question:

Wait a minute. You’ve told us that desirable difficulties help learning. And you’ve told us that working memory overload hinders learning.

But: isn’t desirable difficulty a potential working memory overload? Don’t those two pieces of advice conflict with each other? Won’t “spacing” and “interleaving” vex working memory?

Yes, reader, they certainly might.

So, what’s a research-focused teacher to do? Team Desirable Difficulty tells us to space and interleave practice. Team Working Memory tells us to beware overload. How can we make sense of this conflicting advice?

This (entirely reasonable) question has two answers: one specific, one general.

A Specific Answer

When we consider the tension between “working memory” and “desirable difficulty,” we can focus for a moment on the adjective “desirable.”

In almost every case, working memory overload is UNdesirable.

So, if our teaching strategy — spacing, interleaving, retrieval practice, metacognition — results in overload, we shouldn’t do it: it’s not desirably difficult. We should, instead, back off on the difficulty until students can manage that cognitive load.

How do we get that balance just right?

We use our teacherly experience and insight. If I create a homework assignment with lots of interleaved practice AND ALL MY STUDENTS DO TERRIBLY, then interleaving wasn’t desirably difficult. (Or, perhaps, I taught the concepts ineffectively.)

In this case, I know the next night’s assignment should be working-memory-friendlier.

No research can tell us exactly what the best balance will be. Our expertise as teachers will guide us.

The General Answer

Researchers and teachers have different goals, and follow different practices. In brief: researchers isolate variables; teachers combine variables.

We think about stress and about working memory and about alertness and about technology and about spacing and

That list goes on almost infinitely.

For that reason, I chant my mantra: when adopting cognitive science approaches to teaching, “don’t just do this thing; instead, think this way.”

That is: don’t just DO “spacing and interleaving” because research tells us they’re good ideas. Instead, we have to THINK about the ideas that guide spacing and interleaving, and be sure they make sense at this particular moment.

Should we have students meditate at the beginning of each class? It depends on our students, our school, our schedule, our culture, our curriculum, our goals, and … too many other variables to list here.

Should we ban laptops from classrooms? Ditto.

Should high schools start later? Ditto.

Should 3rd graders learn by doing projects? Ditto.

Should students read on exercycles? Ditto.

One isolated piece of research advice can’t effectively guide teaching and school-keeping decisions. We have to combine the variables, and think about them in our specific context.

Simply put: we can’t just “do what the research says.” It’s not possible; different research pools almost certainly conflict.

Instead, we’re doing something more challenging, more interesting, and more fun.

Let the adventure begin!

The Limits of “Desirable Difficulties”: Catching Up with Sans Forgetica
Andrew Watson

We have lots of research suggesting that “desirable difficulties” enhance learning.

That is: we want our students to think just a little bit harder as they practice concepts they’re learning.

Why is retrieval practice so effective? Because it requires students to think harder than mere review.

Why do students learn more when they space practice out over time? Because they have to think back over a longer stretch — and that’s more difficult.

We’ve even had some evidence for a very strange idea: maybe the font matters. If students have to read material in a hard-to-read font, perhaps their additional effort/concentration involved will boost their learning.

As I wrote last year, a research team has developed a font designed for exactly that reason: Sans Forgetica. (Clever name, no?) According to their claims, this font creates the optimal level of reading difficulty and thereby could enhance learning.

However — as noted back then — their results weren’t published in a peer-reviewed journal. (All efforts to communicate with them go to their university’s publicity team. That’s REALLY unusual.)

So: what happens when another group of researchers tests Sans Forgetica?

Testing Sans Forgetica

Testing this question is unusually straightforward.

Researchers first asked participants to read passages in Sans Forgetica and similar passages in Arial. Sure enough, they rated Sans Forgetica harder to read.

They then ran three more studies.

First, they tested participants’ memory of word pairs.

Second, they tested memory of factual information.

Third, they tested conceptual understanding.

In other words, they were SUPER thorough. This research team didn’t just measure one thing and claim they knew the answer. To ensure they had good support behind their claims, they tested the potential benefits of Sans Forgetica in many ways.

So, after all this thorough testing, what effect did Sans Forgetica have?

Nada. Bupkis. Nuthin.

For example: when they tested recall of factual information, participants remembered 74.73% of the facts they read in Sans Forgetica. They remembered 73.24% of the facts they read in Arial.

When they tested word pairs, Sans Forgetica actually produced worse results. Participants remembered 40.26% of the Sans Forgetica word pairs, and 50.51% of the Arial word pairs.

In brief, this hard-to-read font certainly doesn’t help, and it might hurt.

Practical Implications

First, don’t use Sans Forgetica. As the study’s authors write:

If students put their study materials into Sans Forgetica in the mistaken belief that the feeling of difficulty created is benefiting them, they might forgo other, effective study techniques.

Instead, we should encourage learners to rely on the robust, theoretically-grounded techniques […] that really do enhance learning.

Second, to repeat that final sentence: we have LOTS of study techniques that do work. Students should use retrieval practice. They should space practice out over time. They should manage working memory load. Obviously, they should minimize distractions — put the cell phone down!

We have good evidence that those techniques work.

Third, don’t change teaching practices based on unpublished research. Sans Forgetica has a great publicity arm — they were trumpeted on NPR! But publicity isn’t evidence.

Now more than ever, teachers should keep this rule in mind.

Best Font Name Ever: “Sans Forgetica”
Andrew Watson

For well over a decade, teachers have heard that we should strive for the right level of “desirable difficulty.”

In brief: easy learning doesn’t stick. If we want to ensure our students learn material in lasting ways, we need to be sure they wrestle with the material just the right amount.

(Of course, getting to “just the right amount” requires lots of teacherly thought, experience, and wisdom.)

Many years ago, a Princeton undergraduate had an intriguing idea. Maybe we could increase desirable difficulty by using a difficult-to-read font.

His theory went like this. If readers have to concentrate just a little bit more to make sense of what they’re reading, that extra measure of concentration will be a “desirable difficulty.” The result just might be more learning.

He tested his theory in a psych lab. And then — being a thorough sort — he tested it for ten weeks in a nearby high school. The result: students learned more when they read material in a hard-to-read (aka, “disfluent”) font.

Amazing.

Today’s News

Researchers in Australia wanted to take this idea to the next level. They wanted to design an optimally difficult font.

They tried out several different strategies, including:

leaving out parts of letters,

having letters slant the wrong way,

even having parts of letters misalign with each other.

By testing different combinations of these potentially desirable difficulties, they came up with a winner — which they have deliciously dubbed “sans forgetica.”

In two different experiments, students remembered word pairs better when they studied them in sans forgetica, rather than in a typically “fluent” font or in other, excessively “disfluent” fonts.

If you’re keen to play with typefaces, you can download that font at the link above.

You can check out their video here:

https://www.youtube.com/watch?v=PO2Eo6D5tNQ

Reasons to be Cautious

Of course, we should look before we leap.

First: later studies into disfluent fonts have led to decidedly mixed results. According to this meta-analysis, the results average out to zero.

My own hypothesis, as I’ve written here, is that disfluent fonts help only in particular circumstances.

If the cognitive challenge of a problem is already high, then a disfluent font might make it too hard. If the cognitive challenge is quite low, then a disfluent font might raise it to just the right level.

(As far as I know, no one has tested that hypothesis.)

Second: the Australian researchers haven’t published their findings. So, this research hasn’t yet been vetted in the way that research usually gets vetted. (The link above — like all news about sans forgetica — goes to a university press release.)

Third: common sense suggests that disfluent fonts suffer from an important flaw: the more students read a particular font, the more fluent that font will become.

In other words: sans forgetica might start out optimally disfluent. However, over time, students will get used to the font. It will be increasingly fluent the more they use it.

If you want to try disfluent fonts, therefore, I suggest you use them sparingly. You should, I imagine, use them for particularly important information and assignments.

But, to ensure they remain disfluent, you should not have them be a regular part of your students’ reading experience.

To be clear, we have no research guidance at this granular level. As with phrases like “desirable difficulty,” teachers must translate the helpful concept into the specifics of our daily classroom lives.

Is Failure Productive? (Hint: We Should Ask a Better Question)
Andrew Watson

Do students learn better after they experience failure? Two recent studies over at The Science of Learning help us answer that question.

In the first study, professors in a Canadian college wanted to help their Intro Bio students learn difficult concepts more effectively. (Difficult concepts include, for example, the “structural directionality of genetic material.”)

They had one Intro Biology section follow a “Productive Failure” model of pedagogy. It went like this.

First, students wrestled with conceptual problems on these difficult topics.

Second, they got in-class feedback on their solutions.

Third, they heard the professor explain how an expert would think through those topics.

Another Intro Bio section followed these same steps but in a different order:

First, they heard the professor explain how an expert would think.

Second, students wrestled with conceptual problems.

Third, they got in-class feedback on their solutions.

So, all students did the same steps. And, they all followed an “active pedagogy” model. But, one group struggled first, whereas the other group didn’t.

Who Learned More?

This answer proves unusually complicated to determine. The researchers had to juggle more variables than usual to come up with a valid answer. (If you want the details, click the link above.)

The headlines are:

On the next major test, students who experienced productive failure learned more.

On the final exam, however, only the “low performing” students did better after productive failure. For the middle- and upper-tier students, both strategies worked equally well.

Conclusion #1:

So, we can’t really conclude that productive failure helps students learn.

Instead, we’re on safer ground to say that – over the longer term – productive failure helps “low performing” students learn (compared to other kinds of active learning).

But Wait, There’s (Much) More

Two weeks after publishing the study about Canadian college students in biology classes, Science of Learning published a study about German fifth graders learning fractions.

(As we discussed in this post, watching students learn fractions helps researchers measure conceptual updating.)

In particular, these researchers wanted to know if students learned better after struggling for a while. (Again, for details click the link.)

In this case, the answer was: nope.

So, we arrive at Conclusion #2:

Some college students, but not most, learned more from productive failure in a biology class – compared to those who learned via other active learning strategies.

However, fifth graders did not learn more about fractions – compared to those who learned via direct instruction.

Got that?

The Biggie: Conclusion #3

When teachers come to research-world, we can be tempted to look for grand, once-and-for-all findings.

A particular study shows that – say – students learn better when they use an iPad to study astronomical distances. Therefore, we should equip all our students with iPads.

But, that’s NOT what the study showed. Instead, it showed that a particular group of students studying a particular topic with a particular technology got some benefit – compared to a particular alternate approach.

So, Conclusion #3:

Teachers can often find helpful research on teaching strategies.

We should assume that results vary depending on lots of highly specific conditions. And therefore, we should seek out research that includes students (and classroom subjects) as much like our own as possible.

And so: if you teach biology to college students, you might give the first study a close look to see if its methods fit your students well. (Given that it worked particularly well with struggling students, that variable probably matters to you.)

If, however, you teach fractions to fifth graders, you should probably hold off on productive failure – unless you find several other studies that contradict this one.

In other words: teachers can learn the most from psychology and education research when we investigate narrow and specific questions.


A final thought. I’ve only recently come across the website that published these studies. Congratulations to them for emphasizing the complexity of these research questions by publishing these studies almost simultaneously.

I’m sure it’s tempting to make research look like the last word on a particular topic. Here, they’ve emphasized that boundary conditions matter. Bravo.

Escaping the “Inquiry vs. Direct Instruction” Debate
Andrew Watson

If you’d like to stir up a feisty argument at your next faculty meeting, lob out a casual observation about direct instruction.

Almost certainly, you’ll hear impassioned champions (“only direct instruction leads to comprehension”) and detractors (“students must construct their own understandings”) launch into battle.

For Example…

Back in September, I reviewed two studies contrasting these approaches.

One study, looking at science instruction with 4th graders, found that direct instruction led to more learning. The second study argued for a constructivist approach — yet lacked a remotely plausible control group.

So, in that post at least, it made sense to tell students what experts had already concluded.

One Study, Two Perspectives

I’ve found another study that helpfully reopens this debate.

Daniel Schwartz and colleagues helped 8th grade science students understand concepts like density, speed, and surface pressure.

Crucially, all these concepts share an underlying “deep structure”: ratio.

That is: “speed” is distance divided by time. “Density” is mass divided by volume.

Schwartz wanted to see if students learned each concept (density, spring constant) AND the underlying deep structure (ratio).

Half of the 8th graders in this study heard a brief lecture about each concept — and about the underlying structure they shared. They had a chance to practice the formulas they learned.

That is: this “tell and practice” paradigm is one kind of direct instruction.

The rest of the 8th graders were given several related problems to solve, and asked to figure out how best to do so.

This “invent with contrasting cases” paradigm enacts constructivist principles.

Findings, and Conclusions

Schwartz and Co. found that both groups learned to solve word problems equally well.

However — crucially — the contrasting cases method led to deeper conceptual understanding.

When these students were given a new kind of ratio to figure out, they recognized the pattern more quickly and solved problems more accurately.

So, the obvious conclusion: constructivist teaching is better. Right?

Not so fast. Schwartz’s study includes this remarkable pair of sentences:

“There are different types of learning that range from skill acquisition to identity formation, and it seems unlikely that a single pedagogy or psychological mechanism will prove optimal for all types of learning.

Inventing with contrasting cases is one among many possible ways to support students in learning deep structure.”

That is: in this very particular set of circumstances, a constructivist approach helped these students learn this concept — at least, in the way it was tested.

What Next?

If the purists have it wrong — if both direct instruction and constructivist pedagogies might have appropriate uses — what’s a teacher to do?

Schwartz himself suggests that different approaches make sense for different kinds of learning.

For instance, he wonders if direct instruction helps students learn complex procedures, whereas constructivist methods help with deep structures (like ratio).

Perhaps, instead, the essential question is the level of difficulty. We have lots of research that says the appropriate level of cognitive challenge enhances learning.

So: perhaps the “tell and practice” method of this study was just too easy; only a more open-ended investigation required enough mental effort.

However, perhaps the study with the 4th graders (mentioned above) included a higher base level of conceptual difficulty. In that case, hypothetically, direct instruction allowed for enough mental work, whereas the inquiry method demanded too much.

Two Conclusions

First: the right pedagogical approach depends on many variables — including the content to be learned. We teachers should learn about the strengths and weaknesses of various approaches, but only we can decide what will work best for these students and this material on this day.

Second: purists who insist that we must always follow one (and ONLY one) pedagogy are almost certainly wrong.

Can Quiet Cognitive Breaks Help You Learn?
Andrew Watson

We write a lot on the blog about “desirable difficulties” (for example, here and here). Extra cognitive work during early learning makes memories more robust.

Retrieval practice takes more brain power than simple review — that is, it’s harder. But, it helps students remember much more.

Wouldn’t it be great if some easy things helped too?

How about: doing nothing at all?

Cognitive Breaks: The Theory

When a memory begins to form, several thousand neurons begin connecting together. The synapses linking them get stronger.

Everything we do to help strengthen those synapses, by definition, helps us remember.

We know that sleep really helps in this process. In fact, researchers can see various brain regions working together during sleep. It seems that they’re “rehearsing” those memories.

If sleep allows the brain to rehearse, then perhaps a short cognitive break would produce the same result.

Cognitive Breaks: The Research

Michaela Dewar and colleagues have been looking into this question.

They had study participants listen to two stories. After one story, participants had to do a distracting mental task. (They compared pictures for subtle differences.)

After the other, they “rest[ed] quietly with their eyes closed in the darkened testing room for ten minutes.”

Sure enough, a week later, the quiet rest led to better memory. As a rough calculation, participants remembered 10% more than without the quiet rest.

10% more learning with essentially 0% extra cognitive effort: that’s an impressive accomplishment!

Classroom Questions

A finding like this raises LOTS of practical questions.

Dewar’s study didn’t focus on K-12 learners. (In fact, in this study, the average age was over 70.) Do these findings apply to our students?

Does this technique work for information other than stories? For instance: mathematical procedures? Dance steps? Vocabulary definitions?

Does this finding explain the benefits of mindfulness? That is: perhaps students can get these memory benefits without specific mindfulness techniques. (To be clear: some mindfulness researchers claim benefits above and beyond memory formation.)

Can this finding work as a classroom technique? Can we really stop in the middle of class, turn out the lights, tell students to “rest quietly for 10 minutes,” and have them remember more?

Would they instead remember more if we tried a fun fill-in-the-blank review exercise?

I’ll be looking into this research pool, and getting back to you with the answers I find.

Cognitive Breaks: The Neuroscience

If you’d like to understand the brain details of this research even further, check out the video at this website. (Scroll down just a bit.) [Edit 11/4/19: This link no longer works; alas, I can’t find the video.]

The researchers explain a lot of science very quickly, so you’ll want to get settled before you watch. But: it covers this exact question with precision and clarity.

(By the way: you’ll hear the researchers talk about “consolidation.” That’s the process of a memory getting stronger.)

If you do watch the video, you might consider resting quietly after you do. No need to strain yourself: just let your mind wander…

hat tip: Michael Wirtz

Don’t Just Do This Thing; Think This Way
Andrew Watson

Teachers love hearing about brain research because it offers us specific and persuasive guidance.

The researcher says: when I DID THIS THING, students learned more than when I DID THAT THING.

As a thoughtful teacher, I draw the obvious conclusion. I too should DO THIS THING.

And yet, you might reach a different conclusion. If you’re interested in using research well, you might even reach a better conclusion.

Using Research Well: Finding the Right Font?

Here’s a specific example.

Back in 2011, Connor Diemand-Yauman published a study about unusual fonts. (You read that right. Font. As in: typeface.)

He had students learn some information in an easy-to-read font (Arial, 100% black). They learned other information in a harder-to-read font (for example, Bodoni MT, 60% black).

When retested, they remembered more information in the hard to read font.

Being a thorough researcher, Diemand-Yauman tried this hypothesis out in a high school. He had teachers use the Arial font in one of their sections, and the Bodoni MT in another.

Sure enough, the hard-to-read font (called “disfluent”) led to greater learning.

We teachers might take this study as an instruction to DO THIS THING. Given Diemand-Yauman’s results, that is, we should start using unusual fonts.

Using Research Well: Finding the Right Difficulty

Instead of DOING THIS THING, however, I think Diemand-Yauman’s research should inspire us to THINK THIS WAY.

Specifically, we should think about finding the right level of difficulty.

When students take on relatively simple material, we can help them learn it better by adding a bit of challenge.

We might — for example — print that information in a disfluent font.

We might space out practice further than usual.

Or, we might interleave this topic with other, similar kinds of information.

But: when students learn complex material, we don’t want to make it any more difficult. In this case, the font should be as fluent as possible. We would space practice out, but not so far. We would interleave, but not so much.

In other words: Diemand-Yauman’s research doesn’t tell us to use quirky fonts (“do this thing”).

Instead, it gives us another option for creating desirable difficulty (“think this way”).

But Wait: Does That Font Thing Really Work?

A just-published meta-analysis says: not so much. In the authors’ words:

“there is not enough evidence to show that it [using disfluent fonts] either stimulates analytic processing or increases extraneous cognitive load.”

In other words: hard-to-read fonts aren’t a desirable difficulty. And, they don’t stress working memory too much.

Although I haven’t looked at the nitty-gritty of this study (it’s behind a paywall), I have an alternate interpretation.

Perhaps in some cases disfluent fonts are a desirable difficulty. And, in other cases they stress working memory. In this case, those two findings would offset each other in a meta-analysis. The result would be — as this study finds — no consistent effect.

Who Decides?

If I’m right, a disfluent font might improve learning. Or, it might hinder learning.

So: who decides when to use one?

The answer is clear: THE TEACHER DECIDES. Only you know if the material is already hard enough (in which case, use a fluent font). Only you know if it needs some extra cognitive challenge to make it stick (in which case, think about a disfluent font).

No researcher can answer that question, because no researcher knows your curriculum, your school, and your students.

Rather than asking researchers to tell you what to do, let them guide you in thinking about teaching problems in more effective ways.

 

If you’re especially interested in desirable difficulties, here’s an article about a potentially desirable difficulty that turns out to be…not.

Andrew Watson

Unless you’ve been napping under a LatB rock, you’ve heard about the importance of research-based study habits.

In particular, you know that students should spread practice out over time rather than bunching practice all together. (The benefits are called the spacing effect.)

And, you know that students should not simply look over what they already know. Instead, they should quiz themselves to see what they can actively retrieve from memory. (That’s called retrieval practice.)
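
As a concrete illustration of spacing, here is a sketch of my own (the expanding intervals are invented for the example, not a research-backed prescription): a spaced study plan schedules several short reviews rather than one long cram session.

```python
from datetime import date, timedelta

def spaced_review_dates(first_study, intervals=(1, 3, 7, 14, 30)):
    """Return review dates spread out after an initial study session."""
    return [first_study + timedelta(days=d) for d in intervals]

# Five short review sessions spread over a month, instead of one cram session.
for review_day in spaced_review_dates(date(2024, 9, 1)):
    print(review_day)  # 2024-09-02, 2024-09-04, 2024-09-08, 2024-09-15, 2024-10-01
```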

Here’s a little secret you might not know: most of the research about the spacing effect and retrieval practice takes place in psychology labs.

What happens in the real world? Do students who use these techniques actually learn more than those who don’t?

Asking Students about their Study Habits

In a recent study, Fernando Rodriguez and colleagues surveyed students about their study practices.

Do these students space practice over time? Do they do all of their studying all in one session?

Perhaps they quiz themselves on what they know? Or, perhaps they reread the textbook?

Rodriguez & Co. then compared these answers to the students’ grades in the class. By this method, they could tease out the effects of spacing and retrieval practice on actual learning.

So: did these research-endorsed study habits translate into classroom learning?

No. And, Yes.

Rodriguez found mixed results.

Study habits that spaced practice out didn’t make any difference. Students who crammed and students who studied material in several brief sessions got the same final grade.

(I’ll propose an explanation for this finding below.)

However, retrieval practice made a clearly measurable difference. Students who reviewed material averaged a B-. Those who self-tested averaged a B.

Given that both study techniques take the same amount of time, it obviously makes sense to self-test. Students who do so learn more. Retrieval practice just works.

Spacing Doesn’t Help? Or, Spacing Already Helped?

If we’ve got so much research showing the benefits of spacing, why didn’t it help students in this class?

We don’t know for sure, but one answer stands out as very probable: the professor already did the spacing for the students.

That is: the syllabus included frequent review sessions. It had several cumulative tests. The class structure itself required students to think about the material several times over the semester.

Even if students wanted to cram, they couldn’t wait until the last moment to review. The test schedule alone required them to review multiple times.

So: the students’ own additional spacing study habits didn’t help.

However, in a class where the professor hadn’t required spacing, it most likely would have done so.

The Bigger Picture

This possibility, in my view, underlines a bigger point about spacing and retrieval practice:

For the most part, students have primary responsibility for retrieval practice, whereas teachers have primary responsibility for spacing.

That is: students — especially older students — should learn to review by using retrieval practice strategies. (Of course, especially with younger students, teachers should teach RP strategies. And, offer frequent reminders.)

Teachers — in our turn — should design our courses to space practice out. (Of course, students should do what they can to space practice as well.)

In other words: retrieval practice is largely a study habit. Spacing is largely a teaching habit.

Students will get the most benefit from this research when we divide up responsibility this way.