What (and Why) Should Students Memorize? Confidence and Fluency for the Win
Andrew Watson

In our profession, memorization has gotten a bad name.

The word conjures up alarming images: Dickensian brutes wielding rulers, insisting on “facts, facts, facts!”

In a world where students “can look up anything on the interwebs,” why do we ask students to memorize at all?

One answer from cognitive science: we think better with information we know well.

Even when we can find information on the internet, we don’t use that information very effectively. (Why? Working memory limitations muddle our processing of external information.)

A second answer appears in intriguing recent research.

Reasonable Plans, Unstated Assumptions

As a classroom teacher, I might operate on this reasonable plan:

Step one: we discuss ideas and information in class.

Step two: students write down the important parts.

And, step three: when students need that information later, they look at their notes.

This plan — the core of most high school classes I know — relies on unstated assumptions:

Assumption one: students’ notes are largely correct.

Assumption two: if students write down information INcorrectly, they’ll recognize that mistake. After all, we discussed the correct information in class.

But what if that second assumption isn’t true?

What if students trust external information (their notes) more than internal information (their memories)?

Assumptions Thwarted

In 2019, Risko, Kelly, & Gaspar studied one version of this question.

They had students listen to word lists and type them into a storable file. After a distraction task, students got to review their lists. They were then tested on those words.

On the final list, however, these scholars did a sneaky thing: they added a word to the stored list. Sure enough, 100% of their students wrote down the additional word, even though it hadn’t in fact been on the initial word list.

Students trusted their written document (external “memory”) more than their own actual memory. When tested even later, students still included the additional word, even though it wasn’t a part of their initial learning.

In other words: the “reasonable plan” that teachers often rely on includes an assumption that — at least in this research — isn’t true.

Ugh.

Classroom Implications

This research, I think, reminds us that the right kind of memorization has great value for students.

We want students to know certain bedrock facts and processes with absolute certainty. We want them, for instance, to define key terms and ideas fluently. Crucially, we want them to reject — with confidence born of certain knowledge — inaccurate claims.

For instance:

I just completed a unit on tragedy. My sophomores read August Wilson’s Fences and Shakespeare’s Macbeth.


On the very first day of the term, I went over a four-part definition of “tragedy.” (It’s a quirky definition, I admit, but it works really well.)


We reviewed that definition almost daily, increasingly relying on retrieval practice. For instance, I might give them this handout to fill in. Or we might do that work together on the board.

Over time, I started including inaccurate prompts in my questions: “So, tragedy ends in death or marriage, right?”

By this point, my students knew the definition so well that they confidently rejected my falsehoods: “No, you’re trying to trick us! Tragedy ends in death or banishment!”

For an even trickier approach, I encouraged students to correct one another’s (non-existent) mistakes:

Me: “T: what does comedy represent, and why?”

T: “The marriage (and implied birth) at the end of a comedy implies the continuity of society, and in that way contrasts tragedy’s death and banishment, which represent the end of society.”

Me: “M: what did T get wrong?”

M [confidently]: “Nothing. That was exactly right.”

Me [faking exasperation]: “S, help me out here. What did T and M miss?”

S [not fooled]: “Nothing. I agree with them both.”

Me: “Congrats to T for getting the answer just right. And congrats to M and S for not letting me fool you. It’s GREAT that you’re all so confident about this complex idea.”

Because these students knew this complex definition cold — because they had memorized it — they could stand firm when questioned skeptically. As a result, they did a great job when asked to apply that definition at the end of the term:

“How does Wilson’s Fences fit the definition of tragedy AND of comedy?”

To Sum Up

Despite all the bad press, the right kind of memorization can enhance learning.

When students know foundational information and processes by heart, they

BOTH process questions more fluently

AND resist misleading information from “external memory” sources.

Greater cognitive fluency + greater confidence in their knowledge = enduring learning.

Prior Knowledge: Building the Right Floor [Updated]
Andrew Watson

When I first published this post back in 2019, the underlying research wasn’t publicly available; now it is!


Take a gander at this passage from Michael Jones’s recent biography of The Black Prince:

“In the fourteenth century England used a silver standard of currency. The unit of account was the pound sterling (£) which was equal to one and a half marks of silver. The pound was divided into twenty shillings (s), each of twelve pence (d). There was also, from 1344, a gold coinage based on the noble, which was conventionally worth 6s 8d, but was rarely used. It would, however, be significant in the calculation of the ransom of King John II and also in the introduction of gold coinage into Gascony and then the principality of Aquitaine by the Black Prince.”

Many readers, I suspect, felt tempted to give up relatively quickly. (Don’t blame yourself if you did.) Unless you’re really up to speed on 14th century English currency–both silver and gold!–the paragraph quickly becomes overwhelming.

The vocabulary in this passage probably doesn’t strain our cognition. Except for the phrase “marks of silver,” I know what all those words mean. (And, I can guess from context that a “mark” is some unit of measurement.)

However, the passage does place several mental demands on the reader.

First, it invites you to undertake several rapid mathematical calculations. (Quick: how many shillings in a mark? There’s a worked answer in the sketch below.)

Second, it requires you to learn abbreviations as you go. To understand the fourth sentence, you need to remember the (wildly counter-intuitive) abbreviation of “pence” as “d” from the third sentence.

Third, it assumes you recall several events and places unfamiliar–I suspect–to most Americans. Who was King John II? Why was he ransomed…was he kidnapped? Where are Gascony and Aquitaine? They don’t sound very English — why did an English prince introduce coinage to them? Actually: why is a prince empowered to introduce new currency?
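About that shillings-in-a-mark question: here’s the arithmetic, worked as a short Python sketch. Every conversion rate in it comes straight from the quoted passage (£1 = 20s, 1s = 12d, £1 = 1.5 marks, one noble = 6s 8d).

```python
# Conversion rates taken directly from the quoted passage.
PENCE_PER_SHILLING = 12     # 1s = 12d
SHILLINGS_PER_POUND = 20    # £1 = 20s
MARKS_PER_POUND = 1.5       # £1 = 1.5 marks of silver

# How many shillings in a mark?
shillings_per_mark = SHILLINGS_PER_POUND / MARKS_PER_POUND
print(shillings_per_mark)   # 13.33..., i.e. 13s 4d

# Bonus: the noble (6s 8d) is exactly half a mark, and a third of a pound.
noble_in_pence = 6 * PENCE_PER_SHILLING + 8                 # 80d
mark_in_pence = shillings_per_mark * PENCE_PER_SHILLING     # 160d
pound_in_pence = SHILLINGS_PER_POUND * PENCE_PER_SHILLING   # 240d
print(mark_in_pence / noble_in_pence, pound_in_pence / noble_in_pence)  # 2.0 3.0
```

If working that out felt effortful, that’s the point: the passage expects you to do this kind of calculation on the fly, while you’re also doing everything else.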

Essential Background Knowledge

I thought of this paragraph when I read a recent article by Robert Pondiscio. In it, Pondiscio summarizes a study trying to determine how much background knowledge is necessary for comprehension.

In this study, students who scored higher than 59% on a background knowledge test understood a reading passage substantially better than those who scored below 59%.

As summarized by Pondiscio, the study’s authors see some clear teaching implications here.

First, we can meaningfully measure our students’ relevant background knowledge.

Second, students who fall short on that measure will benefit A LOT if we provide them with the essentials.

For instance, students who understood that “habitat,” “species,” and “ecosystems” were relevant vocabulary for the study of ecology understood the reading passage more deeply. (The study included 3500 students, so I believe they controlled for various confounds. I haven’t read the study itself–it’s behind a paywall.)

I think those conclusions point to another:

Third: models of teaching that focus on “pure discovery” will create substantial challenges for students who lack background knowledge. Students who don’t know the basics of a topic simply can’t understand the field of inquiry within which they’re meant to discover.

And, they won’t feel motivated by curiosity to find out. They’ll feel discouraged by their confusion. (Few readers, I suspect, were motivated by the paragraph above to learn more about medieval English currency.)

A Final Thought

This study finds that 59% was the essential tipping point. Students who scored lower than 59% on the prior knowledge test found themselves in a different cognitive category than those who scored above.

However, that percentage does not necessarily apply to all circumstances.

In other words: we shouldn’t give our students prior-knowledge tests, and focus only on those who score 58% and below.

Instead, we should plan our lessons and units knowing that some floor-level of knowledge will be crucial for learning most things.

In every case–as you hear me say so often–we’ll have to rely on the teacher’s judgment to discover that level.

Researchers can remind us that the floor exists. But they can’t identify it for every teacher in every classroom. Ultimately, with that research guidance in mind, we’ll find the right place for the floor. And, we’ll build it.

“Soft” vs. “Hard” Skills: Which Create a Stronger Foundation?
Andrew Watson

As teachers, should we focus on our students’ understanding of course content, or on our students’ development of foundational academic skills?

Do they benefit more from learning history (or chemistry or spelling or flute), or from developing the self-discipline (grit, focus, executive skills) to get the work — any work — done?

I’ve found a recent study that explores this question. It stands out for the rigor of its methodology, and the tough-mindedness of its conclusions.

Here’s the setup:

Daunting Problems; Clever Solutions

Researchers struggle to answer these questions because student choice can complicate the data.

When college students choose courses and professors, when they opt out of one section and opt into another, we can’t tell if the professor’s quality or the students’ preferences led to particular research results.

How to solve this problem? We find a school where students get no choices.

They must take the same courses.

They can’t change sections.

Students start the year randomly distributed, and they stay randomly distributed.

Where shall we find such a school? Here’s a possibility: the United States Naval Academy. All students take the same courses. They can’t switch. They can’t drop. Sir, yes sir!

Even better: several USNA courses are sequential. We can ask this question: how does the student’s performance in the first semester affect his/her performance in the second semester?

Do some 1st semester teachers prepare their students especially well — or especially badly — for the 2nd semester?

We can even fold in extra data. The website Rate My Professors lets students grade professors on many qualities — including the difficulty of the course, and their overall rating. Perhaps those data can inform our understanding of teacher effectiveness.

Provocative Conclusions

A research team has followed this logic and recently published their conclusions.

In their findings:

Easygoing teachers — who don’t demand lots of work, who don’t communicate high standards, who routinely give lots of high grades — harm their students. 

How so? Their students — quite consistently — do badly in subsequent courses in the field.

In other words: if I have an easygoing teacher for Calculus I, I’m likely to do badly in Calculus II — compared to my identical twin brother who had a different teacher.

On the other hand, tough-minded teachers — who insist on deadlines, who require extra work, who remain stingy with high grades — benefit their students.

How so? These students — like my identical twin — do better in subsequent courses than I do.

This research team calls such executive function topics — getting work done, even if it’s dull; prioritizing; metacognition — “soft skills.” In their analysis, professors who are tough-minded about these soft skills ultimately help their students learn more.

More Provocative Still

This logic certainly makes sense; we’re not shocked that students learn more when we insist that they work hard, focus, and set high standards.

Of course, professors who DON’T insist that their students work hard get lots of student compliments (on average). We teachers know that — all things being equal — students are happier when they get less work. Their RateMyProfessor scores average higher than those of their tough-minded peers.

In turn, colleges notice student popularity ratings. School leaders feel good when students praise particular teachers. They give them awards and promotions and citations. Why wouldn’t they? After all, those highly-praised professors give the college a good reputation.

In other words: according to this research team, colleges are tempted to honor and promote teachers who get high student ratings — even though those very professors harm their students’ long term learning, and thereby diminish the quality of the academic program.

That’s a scathing claim indeed.

Caveats

Like everything I write about here, this finding comes with caveats.

First: although these students were randomly assigned once they got to the Naval Academy, admission to that Academy is very challenging indeed. (Google tells me that 8.3% of their applicants get in.)

So, a tough-minded approach might benefit this extremely narrow part of the population — who, let’s be honest, signed up for a rigorous academic program, rigorously delivered.

However, that finding doesn’t necessarily mean that this approach works for younger students, or a broader swath of the population, or students who didn’t apply for such demanding treatment.

It might. But, this study by itself shouldn’t persuade us to change our work dramatically. (Unless we work in a similar academic setting.)

Second: this report’s authors define “soft” and “hard” in a very specific way (see their page 3).

Your school might use these terms quite differently, so their claims might not apply directly to your terminology.

Equally important, the strategies they use to distinguish between “tough-minded” and “easy-going” professors require lots of intricate parsing.

I myself don’t have the stats skills to interrogate their process; I can imagine a more expert reader asking sharp questions about their methods.

Conclusion

In many parts of life, short-term challenges lead to long-term benefits.

We might not like exercise, but it helps us as we get older.

We might like bacon and ice cream, but leeks and salmon keep us fitter.

This research report suggests that we help our students in the long run by maintaining tough-minded high standards right now.

Doing so might not make us popular. Our administrative leaders don’t always recognize our wisdom. But if our students learn more, their strong “soft-skills” foundation really does help them thrive.

Seriously: What Motivates Teachers to Be Funny?
Andrew Watson

To start 2021 in the right spirit, let’s think about humor in the classroom.

It seems that, obviously, humor might be a good classroom strategy. When the lesson slows down, a joke or two might brighten the mood.

Once we begin studying this question the way researchers study things, well, it gets much more complicated. (I once heard the claim that “laughter improves learning 44%!” Unsurprisingly, so vague a statement doesn’t hold up to much scrutiny. For starters, the cited research says 28%, not 44%…)

We might study, for instance:

What kind of humor do teachers use?

Are there differences between K-12 teachers’ and college professors’ use of humor?

Are there gender differences in use of humor? (Believe it or not, there’s a controversy here.)

What motivates teachers to attempt humor?

I’ve recently found research trying to answer this question:

What motivation traits prompt college professors to try content-relevant humor? (“Content relevant” means they’re not just telling jokes; they’re being funny about the topic they’re discussing.)

What did the researchers find?

Kinds of Motivation

Psychologists often divide behavior up into plausible categories for analysis.

For instance, if you know Carol Dweck’s work on mindset, you know that some people feel motivated to learn more, while others feel motivated to demonstrate what they already know.

In this case, college professors might want to improve at professing, or they might want to demonstrate that they’re already top-notch professors.

Also, motivations can be either positive or negative.

In this case, a professor might want to demonstrate that they’re good (positive), or not reveal that they’re bad (negative).

Researchers have other motivational categories as well.

In this study, they wonder if professors use humor to improve relationships with students.

And, they wonder if a prof’s desire to avoid work influences their use of humor.

To start answering these questions, the researchers had more than 250 professors fill out surveys that give insight into their motivation. (Important: these data, in other words, come from self-report. Not everyone is persuaded by such data.)

They also asked students to rate — on a scale of 1 to 5 — their agreement with this statement: “Instructor enhances presentations with the use of humor.”
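To make the analytic logic concrete: the core question is whether professors’ self-reported motivation scores predict students’ humor ratings. Here’s a minimal sketch of that kind of analysis, with entirely invented numbers; the real study used validated survey instruments and more sophisticated statistical models.

```python
# Toy illustration only: every number below is invented.
from statistics import correlation  # Python 3.10+

# Hypothetical professor self-report scores for a relational goal
# ("build amicable and valued relationships with students"), 1-5 scale:
relational_goals = [4.5, 2.0, 3.8, 1.5, 4.9, 3.0]

# Hypothetical mean student agreement with "Instructor enhances
# presentations with the use of humor," same six professors, 1-5 scale:
humor_ratings = [4.0, 2.5, 3.5, 1.0, 4.5, 3.0]

print(f"r = {correlation(relational_goals, humor_ratings):.2f}")
# A clearly positive r would match the pattern reported below.
```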

The Envelope, Please

So, what did they learn?

For instance: when did students endorse the statement that their professor “enhances presentations with the use of humor?”

Answer: when those professors themselves said they wanted to “build amicable and valued relationships with students.” That is: profs with relational goals use humor to build those relationships.

How about the reverse? When did students consistently reject that statement?

Answer: When their professors said they wanted to avoid looking bad. (If you’re keeping score, that’s a “negative performance” goal.)

In brief: professors who want to connect with students make content-related jokes. Those who fear looking incompetent remain humorless.

Three Interesting Points

First: the researchers here are scrupulous to distinguish between college professors and K-12 teachers. They don’t claim that these findings apply to earlier grades.

Second: This research team — contradicting others — finds that women use humor more often than men. (The controversy continues!)

Third: One word in particular jumps out at me: “enhances.” Students didn’t say that the professors “tried” to be funny, but that they “enhanced presentations with humor.”

That is: the students suggest that — for teachers who want to build relationships — humor really does make lectures better. The researchers don’t address that question directly, but — at least to me — that conclusion flows directly from this research.

Oh, what the heck. Let’s have another.

Fourth: In humor as in all things, personality matters. If you’re not a funny teacher, don’t feel that you have to start telling jokes to build relationships. You — almost certainly — have your own ways to do so. Use your own authentic strategies to connect with your students.

I can’t cite research, but I’m almost certain: your own honest self-presentation will be MUCH more effective at building relationships than forced humor.

The Best Teaching Advice We’ve Got
Andrew Watson

You want to improve your teaching with psychology research?

We’ve got good news, and bad news.

And more good news.

Good News: we have lots and LOTS of research. We can talk about attention, or working memory, or the spacing effect, or motivation, or stress…the list is long. And super helpful.

So much practical advice!

Bad News: actually, the bad news is the same as the good news. We’ve got SO MUCH good research that it’s honestly hard to keep track of it all.

I mean, seriously. Should you start by looking at attention research? Or stress research?

Should we think about the motivational effects of student-teacher relationships, or the perils of working memory overload, or the benefits of desirable difficulty?

Which is most important?

Honestly, I think our next priority is not so much finding out new truths about learning, but organizing all the information we already have.

More Good News

If you agree that we really need someone to sort all these suggestions into a coherent system, you’ll be delighted to read this article by Stephen Chew (Twitter handle: @SChewPsych) and William Cerbin (@BillCerbin).

Other scholars — for instance, Barak Rosenshine — have put together a coherent system based on learning principles. Chew and Cerbin, instead, organize their system around cognitive challenges.

That is:

If students feel anxiety about a topic or discipline, that emotion will interfere with their learning.

If students have prior misconceptions, those misconceptions will distort their understanding.

If classroom work or assignments go beyond working memory limits, students won’t learn effectively (or, at all).

When planning a course or a lesson or an assignment, teachers can think their way through these specific challenges. By contemplating each one, we can design our work to best facilitate learning.

Getting the Emphasis Right

If you’re thinking “this is such excellent news! It just can’t get any better!” — well — I’ve got some news: it gets better.

Chew and Cerbin write:

There is no single best teaching strategy for all students, topics, and situations. The proposed framework is not prescriptive … and can guide adaptation of teaching practice.

In other words, they’re not saying: here’s a list of things to do.

Instead, they are saying: here are several topics/problems to consider.

Teaching advice should not include “best practices.” (That’s a business concept.) It should include “best questions to ponder as we make decisions.” Chew and Cerbin make this point repeatedly.

Frequent readers know that I’ve been banging on for years with this mantra: “Don’t just do this thing; instead, think this way.”

We should think about our students’ working memory limitations. The strategies we use might differ for 1st graders and 8th graders.

We should think about the importance of transfer. A Montessori school and a KIPP school will (almost certainly) use differing strategies to reach that goal.

We should think about our students’ prior knowledge. The best way to measure that knowledge might be different for students with diagnosed learning differences.

Yes: we should consider these nine topics. But the ways we answer them must depend on our students, our schools, our curriculum, and ourselves.

For all these reasons, I recommend Chew and Cerbin’s article with great enthusiasm.

And, happily, you can meet Dr. Chew at our online conference in February! (In case you’re wondering: I was planning to write about this article before I knew he was joining the conference. A happy synchronicity.)

“But How Do We Know If It Works in the Classroom?”: The Latest on Retrieval Practice
Andrew Watson

We’ve heard so much about retrieval practice in the last two years that it seems like we’ve ALWAYS known about its merits.

But no: this research pool hasn’t been widely known among teachers until recently.

We can thank Agarwal and Bain’s wonderful Powerful Teaching for giving it a broad public audience. (If you had been attending Learning and the Brain conferences, of course, you would have heard about it a few years before that.)

Of course, we should stop every now and then to ask ourselves: how do we know this works?

In this case, we’ve got several answers.

In addition to Agarwal and Bain’s book, both Make it Stick (by Brown, Roediger, and McDaniel) and How We Learn (by Benedict Carey) offer helpful surveys of the research.

You could also check out current research. Ayanna Kim Thomas recently published a helpful study about frequent quizzing in college classrooms. (It helps!)

All these ways of knowing help. Other ways of knowing would be equally helpful.

For instance: I might want to know if retrieval practice helps in actual classrooms, not just in some psychology lab somewhere.

Yes, yes: Agarwal and Bain’s research mostly happened in classrooms. But if you’ve met them you know: it might work because they’re such engaging teachers! What about teachers like me — who don’t quite live up to their energy and verve?

Today’s News

A recent meta-analysis looked at the effect of retrieval practice in actual classrooms with actual students. (How many students? Almost 8000 of them…)

Turns out: retrieval practice helps when it’s studied in psychology labs.

And, it helps when vivacious teachers (like Agarwal and Bain) use it.

And, it helps when everyday teachers (like me) use it.

It really just helps. As in: it helps students learn.
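(A quick aside on what a meta-analysis actually does: it pools many studies’ results into one weighted estimate. Here’s a deliberately simplified sketch with invented numbers, not Sotola and Crede’s data; real meta-analyses typically weight by inverse variance and model moderators, but the intuition is the same.)

```python
# Toy meta-analysis arithmetic: pool per-study effect sizes into one
# estimate, letting bigger studies count for more. All numbers invented.
studies = [
    # (effect size d, number of students) for four hypothetical studies
    (0.70, 120),
    (0.35, 450),
    (0.55, 80),
    (0.40, 2000),
]

pooled_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(f"pooled d = {pooled_d:.2f}")  # 0.41: the big study dominates
# Simplification alert: weighting by raw sample size stands in here for
# the standard inverse-variance weighting.
```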

A few interesting specifics from this analysis:

First: retrieval practice quizzes helped students learn more when they were counted for a final grade than when they weren’t. (Although: they did help when not counted toward the grade.)

Second: they helped more when students got feedback right away than when feedback was delayed. (This finding contradicts the research I wrote about last week.)

Third: short answer quizzes helped learning more than multiple choice (but: multiple choice quizzes did produce modest benefits).

Fourth: announced quizzes helped more than unannounced quizzes.

and, by the way

Fifth: retrieval practice helped middle-school and high-school students more than college students. (Admittedly: based on only a few MS and HS studies.)

In brief: all that good news about retrieval practice has not been oversold. It really is among the most robustly researched and beneficial teaching strategies we can use.

And: it’s EASY and FREE.

A Final Note

Because psychology research can be — ahem — written for other psychology researchers (and not for teachers), these meta-analyses can be quite daunting. I don’t often encourage people to read them.

In this case, however, authors Sotola and Crede have a straightforward, uncomplicated prose style.

They don’t hold back on the technical parts — this is, after all, a highly technical kind of writing.

But the explanatory paragraphs are unusually easy to read. If you can get a copy — ask your school’s librarian, or see if it shows up on Google Scholar — you might enjoy giving it a savvy skim.

“Sooner or Later”: What’s the Best Timing for Feedback?
Andrew Watson

Given the importance of feedback for learning, it seems obvious teachers should have well-established routines around its timing.

In an optimal world, would we give feedback right away? 24 hours later? As late as possible?

Which option promotes learning?

In the past, I’ve seen research distinguishing between feedback given right this second and that given once students are done with the exercise: a difference of several seconds, perhaps a minute or two.

It would, of course, be interesting to see research into longer periods of time.

Sure enough, Dan Willingham recently tweeted a link to this study, which explores exactly that question.

The Study Plan

In this research, a team led by Dr. Hillary Mullet gave feedback to college students after they finished a set of math problems. Some got that feedback when they submitted the assignment; others got it a week later.

Importantly, both groups got the same feedback.

Mullet’s team then looked at students’ scores on the final exams. More specifically, if the students got delayed feedback on “Fourier Transforms” — whatever those are — Mullet checked to see how they did on the exam questions covering Fourier.

And: they also surveyed the students to see which timing they preferred — right now vs. one week later.

The Results

I’m not surprised to learn that students strongly preferred immediate feedback. Students who got delayed feedback said they didn’t like it. And: some worried that it interfered with their learning.

Were those students’ worries correct?

Nope. In fact, just the opposite.

To pick one set of scores: students who got immediate feedback scored 83% on that section of an exam. Students who got delayed feedback scored 94%.

Technically speaking, that’s HUGE.

Explanations and Implications

I suspect that delayed feedback benefitted these students because it effectively spread out the students’ practice.

We have shed loads of research showing that spacing practice out enhances learning more than doing it all at once.

So, if students got feedback right away, they did all their Fourier thinking at the same time. They did that mental work all at once.

However, if the feedback arrived a week later, they had to think about it an additional, distinct time. They spread that mental work out more.

If that explanation is true, what should teachers do with this information? How should we apply it to our teaching?

As always: boundary conditions matter. That is, Mullet worked with college students studying — I suspect — quite distinct topics. If they got delayed feedback on Fourier Transforms, that delay didn’t interfere with their ability to practice “convolution.”

In K-12 classrooms, however, students often need feedback on yesterday’s work before they can undertake tonight’s assignment.

In that case, it seems obvious that we should get feedback to them ASAP. As a rule: we shouldn’t require new work on a topic until we’ve given them feedback on relevant prior work.

With that caveat, Mullet’s research suggests that delaying feedback as much as reasonably possible might help students learn. The definition of “reasonably” will depend on all sorts of factors: the topic we’re studying, the age of my students, the trajectory of the curriculum, and so forth.

But: if we do this right, feedback helps a) because feedback is vital, and b) because it creates the spacing effect. That double-whammy might help our students in the way it helped Mullet’s. That would be GREAT.


“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply?
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed only the black-and-white diagram to the left; for the other half, the video showed the colors and dots.

Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students in both groups learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: if you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

“Before You Change Your Teaching, Change Your Thinking”
Andrew Watson

When I attended my first Learning and the Brain conference, more than a decade ago, I had a simple plan:

Step 1: Listen to the researcher’s advice.

Step 2: Do what the researcher told me to do.

Step 3: Watch my students learn more.

Step 4: Quietly glow in the satisfaction that my teaching is research-based.

In fact, I tried to follow that plan for several years. Only gradually did I discover that it simply couldn’t work.

Why?

Because researchers’ advice almost always applies to a very specific, narrow set of circumstances.

The teaching technique they use to help — say — college students learn calculus might not help my 10th graders write better Macbeth essays.

Or: their teaching strategy encourages a technology that my Montessori school forbids.

Or: research on American adolescents might not yield results that help teens raised in other cultures.

In other words: psychology and neuroscience research don’t provide me a handy checklist. I don’t just need to change what I do; I need to change how I think. I really wish someone had said to me:

“Before you change your teaching, change your thinking.”

Example the First

I thought of this advice when I saw a recent Twitter post by Otto Warman (@MrOWarman), a math teacher in Britain.

Warman has gone WAY beyond following a researcher’s checklist. Instead, he has synthesized an impressive amount of research, and reorganized it all into a lesson-planning system that works for him.

As you can see, his lesson plan form (which he has generously shared) prompts him to begin class with retrieval practice, then to introduce new information, then to check for understanding, and so forth. (You can click on the image to expand it.)

Each circle and slice of the diagram includes helpful reminders about the key concepts that he’s putting into action.

That is: he’s not simply enacting someone else’s program in a routinized way. He has, instead, RETHOUGHT his approach to lesson planning in order to use research-supported strategies most appropriately and effectively.

To be clear: I DO NOT think you should print up this sheet and start using it yourself. That would be a way to change what you do, not necessarily a way to change what you think. The strategies that he has adopted might not apply to your students or your subject.

Instead, I DO THINK you should find inspiration in Warman’s example.

What new lesson plan form would you devise?

Are there cognitive-science concepts you should prioritize in your teaching?

Will your students benefit especially from XYZ, but not so much from P, Q, or R?

The more you reorganize ideas to fit your particular circumstances, the more they will help your teaching and your students.

Example the Second

Over on his blog (which you should be reading), Adam Boxer worries that we might be making a mess of retrieval practice.

Done correctly, retrieval practice yields all sorts of important benefits. Done badly, however, it provides few benefits. And takes up time.

For that reason, he explains quite specifically how his school has put retrieval practice to work. As you’ll see when you review his post, this system probably won’t work if teachers simply go through the steps.

Instead, we have to understand the cognitive science behind retrieval practice. Why does it work? What are the boundary conditions limiting its effectiveness? How do we ensure that the research-based practice fits the very specific demands of our classes, subjects, and students?

Retrieval practice isn’t just something to do; it’s a way to think about creating desirable difficulty. Without the thinking, the doing won’t help.

To Sum Up

What’s the best checklist for explaining a concept clearly? There is no checklist: think differently about working memory and schema theory.

What’s the best daily schedule for a school? There is no best schedule: think differently about attention.

What steps are most powerful to help students manage stress? Before we work through any steps, we have to think differently about students’ emotional and cognitive systems.

To-do lists are straightforward and easy. Teaching is complex and hard. Think different.

“Successive Relearning”: 1 + 1 = +10%
Andrew Watson

We teachers get LOTS of advice from cognitive science. Research tells us to…

…monitor and manage our students’ stress levels.

…use mid-class exercise to enhance attention.

…interleave topics to create desirable difficulties.

Each individual suggestion has lots of research behind it, and we’re glad to get these ideas.

But: what happens when we start thinking about combinations?

If we use more than one technique at a time, do the benefits add onto each other? Do they conflict with each other? How can we link up research-informed strategies to create the best overall learning experience?

Focus on Memory

In the last ten years, we’ve seen a real explosion in research about long-term memory formation (aka, learning).

We’ve seen that retrieval practice takes no more time than simple review, but results in lots more learning.

We’ve seen that spreading practice out (aka, spacing) helps students learn better than bunching practice together.

So, here’s the big question: what happens if we do both? Is retrieval practice + spacing more powerful than each technique by itself? Is 1+1 greater than 2?

A research team at Kent State recently explored this question.

In this study, researchers developed a study paradigm that combined retrieval practice with spacing. Unlike some retrieval practice exercises — which simply require students to try to remember the answer — this paradigm required students to get each question correct before they were done.

Researchers called this combination successive relearning. Students used successive relearning for some of the topics they learned in an advanced biopsychology course. They used their regular (“business-as-usual”) study techniques for the others.
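To make that paradigm concrete, here’s a minimal sketch of its two ingredients. This is my own illustration, assuming a simple flashcard-style drill, not the researchers’ actual materials: within a session, every item must be retrieved correctly before the session ends; across sessions, practice is spaced out.

```python
import random

def relearning_session(items, attempt_retrieval):
    """One session: cycle through the items until each has been retrieved
    correctly, i.e. the 'get it right before you're done' criterion."""
    remaining = list(items)
    random.shuffle(remaining)
    while remaining:
        item = remaining.pop(0)
        if not attempt_retrieval(item):   # one retrieval attempt, with feedback
            remaining.append(item)        # missed items come back later

def successive_relearning(items, attempt_retrieval, num_sessions=3):
    """Successive relearning = retrieval to criterion + spacing:
    the session repeats on separate, spread-out occasions."""
    for session in range(num_sessions):
        print(f"Session {session + 1}: spaced apart from the previous one")
        relearning_session(items, attempt_retrieval)

# Hypothetical usage: attempt_retrieval would show a prompt, collect the
# student's answer, give corrective feedback, and return True if correct.
```

Note that the spacing lives in the schedule (when the sessions happen), not in the loop itself; the code only enforces the retrieval criterion.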

Did successive relearning help students learn?

Answers, and More Questions

To some degree, the answer to that question is: it depends on what you compare to what.

Should the researchers compare this year’s students to last year’s students in the same course?

Should they compare students’ memory of topics where they did use successive relearning to topics where they didn’t?

Being thorough, this research team compared lots of variables to lots of other variables. Quite consistently, they found that “successive relearning” helped.

That is: this year’s students learned more than last year’s. Among this year’s students, successive relearning helped students remember more than their “business-as-usual” strategies.

Roughly speaking, students remembered at least 10% more using successive relearning than other strategies.

So: 1 + 1 = +10%

Case closed.

Case Reopened?

But wait just a minute here.

If you’ve got a good memory, this blog post might be ringing some bells.

Back in February of this year, I wrote about a study in which spacing helped students learn, but — in the long run — retrieval practice didn’t.

If you’ve got an AMAZING memory, you might recall a blog post from June of 2018. Researchers polled students about their study habits. They found that students did benefit from their own retrieval practice, but did not benefit from their spacing — the exact opposite result.

What’s going on here? Why did the two studies differ from each other? Why are they different from this study I’ve just described?

My hypothesis: specifics matter.

In those previous studies, the classes already included one of these techniques.

That is: the study I wrote about in February looked at a math class. Math classes already include lots of retrieval practice, because problem solving is a kind of RP. So, additional retrieval practice didn’t help. But the spacing did.

And, in the class I wrote about in 2018, the professor’s syllabus already included lots of spacing: cumulative review sheets and cumulative exams. So additional spacing done by the students didn’t help. But their retrieval practice did.

In this most recent study, students benefitted from both because the biopsychology course didn’t include either.

In other words: the best combination of retrieval practice and spacing will depend — in part — on the structure and content of the course you’re teaching.

Final Thoughts

Here’s how I concluded my post back in February:

In my own view, we can ask/expect our students to join us in retrieval practice strategies. Once they reach a certain age or grade, they should be able to make flashcards, or use quizlet, or test one another.

However, I think spacing requires a different perspective on the full scope of a course. That is: it requires a teacher’s perspective. We have the long view, and see how all the pieces best fit together.

For those reasons, I think we can (and should) ask students to do retrieval practice (in addition to the retrieval practice we create). But, we ourselves should take responsibility for spacing. We — much more than they — have the big picture in mind. We should take that task off their to-do list, and keep it squarely on ours.

That’s an opinion, not a research conclusion. But I still think it’s true.