Practical Advice for Students: How to Make Good Flashcards
Andrew Watson

Flashcards feel to me like a research sweet-spot.

In the first place: for the most part, students believe that they help — and are even willing to make them!

In the second place: flashcards should help. After all, flashcards promote retrieval practice. And as you know, research shows that retrieval practice really helps students learn.

So, if we can find specific research about flashcards, it should be especially useful in our work.


It would be even better if one of the researchers were Mark McDaniel — who co-authored Make It Stick, one of the great books on memory research for teachers.

If you agree with me on these points, I’ve got some good news for you today!

Starting with Questions

Far and away the most common flashcard question I hear is: “does it matter if students make the flashcards themselves?”

The logic behind this question makes sense. When students think about the material in order to make good flashcards, then that thought might promote learning.

In other words: flashcard making isn’t just the bad kind of “active learning” (students are BUSY!) but the good kind of “active learning” (students are THINKING!).

I have two doubts about this thought process.

First: students might not know enough to make good flashcards.

If their cards prompt them to recall relatively unimportant ideas and processes, then the subsequent retrieval practice won’t really help.

Second: making flashcards takes time.

If students have access to good flashcards — ones that highlight the core concepts, procedures, and facts — then studying with those cards will (perhaps) be more efficient than taking time to make their own.

Two other questions also suggest themselves:

What kind of questions should be on the flashcards?

Most students make detail flashcards. That is: flashcards that focus on facts, definitions, dates, and so forth.

They might also — or instead — make “conceptual” flashcards. That is: flashcards that combine details in compare/contrast patterns, or process steps.*

Question #3:

Do flashcards help some students more than others?

The good news: a recent study explores all those questions.

The First Question

Researchers started with a straightforward experiment. They had students read textbook passages – one about biological anthropology, the other about geology – and then study for a quiz.

The students were divided into four groups, based on how they studied:

Group A studied however they chose.

Group B received 12 flashcards prepared by the textbook makers.

Group C made their own 12 flashcards. They didn’t get any instructions about them.

Group D made their own 12 flashcards. They did get special instructions: “include 4 ‘conceptual’ questions” – that is, questions that compare/contrast, or that highlight several steps in a process.

Let’s return to the first question I asked above: did the students who made their own flashcards learn more than those who studied with pre-made flashcards?

Nope.

That is: students in Groups C & D (who made their own cards) did NOT learn more than those in Group B (who received pre-made flashcards).

Even worse: they DID spend more time.

So, at least in this experiment, asking students to make their own flashcards just isn’t very efficient. They DO spend more time, but DON’T learn more. A bad combination.

Okay, but what about the second question I asked above?

Did the students who made “conceptual” flashcards learn more than those who got no special instructions?

Again, Nope.

Students in Group C — who got no special instructions — mostly made “detail” flashcards. Students in Group D — who were instructed to make 4 “conceptual” flashcards – basically followed those instructions; they made 4 “conceptual” and 8 detail flashcards.

But: both groups spent the same amount of time, and got the same score on the quiz.

Digging Deeper

This research team had hypothesized that the “conceptual” flashcards would benefit learning, and were therefore surprised by the results of this first experiment.

However, they quickly saw a plausible explanation.

C-Group students – who got no instructions – made 12 flashcards. On average, 10 of them were detail flashcards, and the other 2 were “conceptual.”

D-Group students – instructed to make 4 conceptual flashcards – did so: 8 detail cards and 4 “concept” cards.

So you can see, not much of a difference there.

In their next experiment, these researchers doubled down on these two different strategies.

Two new groups of students read the same two passages.

Group E received detail-only flashcards.

Group F received “conceptual”-only flashcards.

Now is there a difference?

By George I think they’ve got it.

Sure enough, when they make up a high enough share of the deck, “conceptual” flashcards do help students learn more.

Now We’re Getting Somewhere

Based on these first two experiments, I think we have some useful answers to our initial questions:

First: at least so far, we don’t have good evidence that students learn more when they make their own flashcards. Alas, this strategy seems inefficient, based on experiment #1.

Second: conceptual flashcards do help students more than detail flashcards, as long as there are enough of them.

I do think this second conclusion requires further nuance.

In the first place, some disciplines really do require lots of detail knowledge. When I studied Czech, I had (literally) HUNDREDS of vocabulary flashcards. Other fields might require a similarly detail-heavy flashcard deck.

In the second place, I do think K-12 students might need detail flashcards more than college students. After all, college students already know more details than younger students do – especially at the highly selective college where this study was performed.

Finally, the distinction between “detail” and “conceptual” might be overdrawn. Here’s a technique I’ve used in my own work.

Step 1: ask a student to draw two vocabulary flashcards, and to define those words.

In my English class, the student might define the words “protagonist” and “sympathetic.”

Step 2: ask the student to make a connection between the two words.

Now the student might say: “Well, a protagonist is typically a sympathetic character – like Janie Mae Crawford. But not always: Macbeth certainly is the protagonist, and certainly isn’t a sympathetic character.”

With this technique, two “detail” flashcards combine to require “conceptual” thinking – at least as defined by the researchers.

TL;DR

As long as we allow for nuance, and the possibility that future research will invite us to rethink these conclusions, this study suggests:

A: Students don’t benefit from making their own flashcards – as long as we provide them with good ones, and

B: Students DO benefit from flashcards that ask them to combine and organize information, not simply recall free-standing facts.

These suggestions – and others that have good research support – give us useful pointers to pass along to our students.


A Final Note:

Eagle-eyed readers will have noticed that I didn’t answer my third question: “do flashcards benefit some students more than others?”

This study does point to an answer, but … I don’t fully understand it.

In brief, “high structure-building” students don’t benefit as much from conceptual flashcards, because they already do better than “low structure-building” students – who DO benefit from conceptual flashcards.

Sadly, I don’t understand exactly what “high and low structure-building” means here. Those words sound like a proxy for “high scoring” and “low scoring,” but not exactly. Rather than pretend I know, I’m simply fessing up that there’s an extra variable here.

If you figure it out, please let me know!


* The authors of the study I’m describing acknowledge that this definition of “conceptual” is incomplete. They’re using that word as a handy shorthand for “flashcards that go beyond single details.”  In this blog post, I put the word “conceptual” in quotation marks to highlight the narrow specificity of their definition.


Lin, C., McDaniel, M. A., & Miyatsu, T. (2018). Effects of flashcards on learning authentic materials: The role of detailed versus conceptual flashcards and individual differences in structure-building ability. Journal of Applied Research in Memory and Cognition, 7(4), 529-539.

“Seductive Details” meet “Retrieval Practice”: A Match Made in Cognitive Heaven
Andrew Watson

Here’s a common problem: your job today is to teach a boring topic. (You don’t think it’s boring, but your students always complain…)

What’s a teacher to do?

One plausible strategy: You might enliven this topic in some entertaining way.

You’ve got a funny video,

or a clever cartoon,

or a GREAT anecdote about a colleague’s misadventure.

Okay, so this video/cartoon/anecdote isn’t one of today’s learning objectives. BUT: it just might capture your students’ interest and help them pay attention.

However tempting, this strategy does create its own problems. We’ve got lots of research showing that these intriguing-but-off-topic details can get in the way of learning.

That is: students remember the seductive details (as they’re known in the research literature), but less of the actual content we want them to know.

Womp womp.

Some time ago, I wrote about a meta-analysis showing that — yup — seductive details ACTUALLY DO interfere with learning: especially for beginners, especially in shorter lessons.

What could we do to fix this problem? If we can’t use our anecdotes and cartoons, do we just have to bore our students?

“Right-Sized” Retrieval Practice

Here’s one approach we might try: right-sized retrieval practice.

What does “right-sized” mean? Here goes:

One retrieval practice strategy is a brain dump. The instructions sound something like this: “write down everything you remember about today’s grammar lesson.”

Another retrieval practice strategy calls for more specific questions: “what’s the difference between a gerund and a participle?” “How might a participle create a dangling modifier?”

A group of scholars in Germany studied this hypothesis:

If teachers use the brain dump approach, students will remember the seductive detail — and it will become a part of their long-term memory.

If, on the other hand, teachers ask specific questions, students will remember the important ideas of the lesson — and not consolidate memory of the seductive detail.

They ran a straightforward study, considering a topic close to every teacher’s heart: coffee.

100+ college students in Germany read a lengthy passage on coffee: information about the coffee plant, its harvesting, its preparation, and its processing.

Half of them read a version including fun-but-extraneous information. For instance: do you know how coffee was discovered?

Turns out: goat herders noticed that their goats ate the coffee beans and then did a kind of happy dance. Those herders wondered: could we get the same happy effects? Thus was born today’s coffee industry…

Remembering the GOAT

After reading these coffee passages — with or without seductive details — students answered retrieval practice questions.

Some got a “brain dump” prompt: “What do you remember about coffee?”

Others got the specific questions: “What harvesting methods do you remember, and how do they differ?”

So, what effect did those specific questions have on memory of seductive details one week later?

Sure enough, as the researchers had hypothesized, students who answered specific retrieval practice questions remembered MORE of the lesson’s meaningful content.

And, they remembered LESS (actually, NONE) of the seductive details. (Of course, the details get complicated, but this summary captures the main idea.)

BOOM.

So, what’s a classroom teacher to do?

As is so often the case, we should remember that researchers ISOLATE variables and teachers COMBINE variables.

We always have to think about many (many!) topics at once, while research typically tries to find out the importance of exactly one thing.

Putting all these ideas together, I’d recommend the following path:

If I have to teach a topic my students find dull, I can indeed include some seductive details (Ha ha! Goats!) to capture their interest — as long as I conclude that lesson with some highly specific retrieval practice questioning.

And, based on this earlier post on seductive details, this extra step will be especially important if the lesson is short, or the students are beginners with this topic.

TL;DR

Seductive details can capture students’ interest, but also distract them from the important topics of the lesson.

To counteract this problem, teachers should plan for retrieval practice including specific questions — not just a brain dump.


By the way: I first heard about this “retrieval practice vs. seductive details” study from Bradley Busch (Twitter: @BradleyKBusch) and Jade Pearce (Twitter: @PearceMrs). If you’re not familiar with their work, be sure to look them up!


Eitel, A., Endres, T., & Renkl, A. (2022). Specific questions during retrieval practice are better for texts containing seductive details. Applied Cognitive Psychology, 36(5), 996-1008.

Sundararajan, N., & Adesope, O. (2020). Keep it coherent: A meta-analysis of the seductive details effect. Educational Psychology Review, 32(3), 707-734.

Starting Class with “Prequestions”: Benefits, Problems, Solutions
Andrew Watson

We’ve known for many years now that retrieval practice works.


That is: after we have introduced students to a topic, we might REVIEW it with them the next day. However, they’ll remember it better if we ask them to try to RETRIEVE ideas and procedures about it.

As Dr. Pooja Agarwal and Patrice Bain write, we want students to “pull information out of their brains” (retrieve) not “put information back into their brains” (review).

Sadly, we know that students’ intuition contradicts this guidance. They really want to reread or review their notes, rather than ask themselves questions.

In this (very sad) study, for instance, Dr. Nate Kornell and Dr. Lisa Son found that students think review works better than retrieval even when they do better on quizzes following retrieval!

Yes, even the experience of learning more doesn’t persuade students that they learned more.

YIKES.

The More Things Change…

Let’s take this retrieval practice idea one step further.

I wrote above that answering questions helps students learn AFTER they have been introduced to a topic.

But: does answering questions help students learn a topic even BEFORE they study it?

On the one hand, this suggestion sounds very strange. Students can’t get these “prequestions” right, because they haven’t yet studied the topic.

On the other hand, we’ve got research showing that this strategy works!

In one of my favorite studies ever, Dr. Lindsay Richland found that “prequestions” help students learn. And, she then worked really hard to disprove her own findings. When she couldn’t explain away her conclusions, she finally accepted them. *

Similarly, a more recent study suggests that learning objectives framed as questions (“Where are mirror neurons located?”) help students learn more than LOs framed as statements (“You will learn where mirror neurons are located.”).

Although this prequestion strategy hasn’t been studied as much as retrieval practice, I do think it has enough research behind it to merit teachers’ respectful attention.

However, I do think this approach has a practical classroom problem…

Sustaining Motivation

For the most part, my high-school students are an amiable lot. If I ask them to do something … say, answer retrieval practice questions … they’ll give it a go.

And, they almost certainly want to get those questions right.

In a class discussion about Their Eyes Were Watching God, for instance, we might compare Janie’s three “husbands.” If I ask a student the following day to list some points of comparison from memory (retrieval practice!), they’ll feel that they ought to remember an answer or two.

Let’s try this logic with prequestioning.

Imagine I ask my students this prequestion: “Why do you think the novel’s protagonist will have the nickname ‘Alphabet’?”

My students will gamely try some answers.

However, I worry that – over time – they’ll start losing interest.

They almost never get these answers right.

And, there’s no “penalty” for getting them wrong, or reward for getting them right. (We don’t want students to focus on rewards and penalties, but schools typically work this way…)

From the student perspective, in other words, the whole prequestion strategy feels like an exercise in futility.

Why should they bother to think seriously about these un-answerable questions? They feel like wasted mental effort…

Two Solutions

First: I’ve tried in the past to solve this problem by using the strategy infrequently.

If my students don’t experience this quirky frustration too often, I hope, they won’t mind participating in this odd ritual.

Recent research, however, offers a second solution – a more honorable solution than mine.

In this study, by Dr. Steven Pan and Dr. Michelle Rivers, prequestions consistently helped students learn.

However, students didn’t really notice the benefit of prequestions – even when they learned more from answering them. (This result sounds a lot like the Kornell and Son study about retrieval practice; students don’t register the benefits they experience.)

So, Pan and Rivers tried several solutions. Specifically, they found benefits to a multi-step approach:

Step 1: have students learn some info with prequestions, and some without.

Step 2: give them a no-stakes quiz on the info.

Step 3: let them see that they remembered information better after prequestions.

Step 4: next time, ask students to recall how well they remembered after answering prequestions.

In other words: students need to experience the benefits and to have them repeatedly pointed out. This combination, probably, helps students believe that prequestions really do help.

This insight (probably?) helps with the motivation problem that has been troubling me in the past.

In other words: students who believe that prequestions will help are much likelier to participate in the curious mental exercise of trying to answer questions whose answer they can’t yet know.

TL;DR

When students answer questions about information they’re about to learn, they remember that information better – even if they get the answers wrong.

This strategy might be effective in the short term, but hamper motivation over time. After all, why should students even try to answer questions if they’re unlikely to know the answer?

To counteract this motivational problem, take students through Pan & Rivers’s procedure for them to experience and remember the benefits that prequestions provide.

We don’t have LOTS of research on this strategy, but we do have enough to make it a plausible approach.


* Sadly, the “prequestion” strategy has frequently been called “pretesting.” Of course, the presence of the stem “test” both confuses the strategy (there’s no testing!) and disinclines people from participating (who wants more testing?).

So, let me emphasize: “prequestions” are simply questions. They’re not a test.

BTW: I’ve recently seen the word “pretrieval” as a way to avoid the “pretest” moniker. You might like it better than “prequestions.”


Agarwal, P. K., & Bain, P. M. (2019). Powerful teaching: Unleash the science of learning. John Wiley & Sons.

Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493-501.

Pan, S. C., & Rivers, M. L. (2023). Metacognitive awareness of the pretesting effect improves with self-regulation support. Memory & Cognition, 1-20.

Richland, L. E., Kornell, N., & Kao, L. S. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243.

Sana, F., Forrin, N. D., Sharma, M., Dubljevic, T., Ho, P., Jalil, E., & Kim, J. A. (2020). Optimizing the efficacy of learning objectives through pretests. CBE—Life Sciences Education, 19(3), ar43.

The Hidden Lives of Learners
Andrew Watson

Many times over the last several years, I’ve heard enthusiastic reviews of a seemingly-magical book called The Hidden Lives of Learners, by Graham Nuthall.


Here’s the magic: Nuthall’s frankly astonishing research method.

Working in New Zealand classrooms in the 1980s, he put mics on all students and teachers. And, he had cameras in the classroom.

He and his team also broke down the teachers’ unit plans into granular learning goals. For instance, a unit on Antarctica might have 80 specific facts or concepts that the students should learn.

Finally, Nuthall’s team tested students both before and after these units.

Given this quite extraordinary data set, Team Nuthall could look at remarkably specific questions:

How much information about each topic did students already know before the unit began?

How much did they learn?

What, very specifically, did each student do and say to learn each specific new concept?

You can see why readers have responded so strongly to Nuthall’s method.

So, based on all his data, what did Nuthall conclude?

The Magic Number

Regular blog readers already know about the Spacing Effect.

That is: students learn more when they spread practice out than when they do the same amount of practice all at once.

In my experience, this research finding started getting broader notice in … say … 2015 or so. (I completed my grad program in 2012, and I don’t remember the spacing effect getting much — or any — attention at that time.)

Well, Nuthall’s research led him to a very similar conclusion more than a decade before.

That is: in Hidden Lives, Nuthall writes…

We discovered that a student needed to encounter, on at least three different occasions, the complete set of the information she or he needed to understand a concept.

If the information was incomplete, or not experienced on three different occasions, the student did not learn the concept. (63)

Similar to research into the spacing effect, Nuthall’s research shows that students must devote brain space to an idea several times — spread out over more than one class meeting — to consolidate that idea in long-term memory.

Later in Hidden Lives (p. 126), Nuthall suggests that students should “encounter the complete set of information” on four occasions — not three.

For me, the precise number (is it 4? is it 3?) is less important than the broader concept: teachers should build curricula that ensure students delve into an idea several times. One or two encounters can’t create enough momentum to change memory systems.

I think that Nuthall’s method provides substantial support for translating the spacing effect research into classroom practice. Both psychology research AND Nuthall’s deep classroom investigation arrive independently at substantially similar ideas.

Changing the Focus

Most research in this field focuses on what teachers do. Nuthall — wisely — insists that we focus on what students do.

His methodology — all those microphones, all those transcripts — helps him recognize all those “encounters” with ideas. And, crucially, students often “encounter” ideas in their conversations and projects with other students.

This observation leads to several important insights.

First, students often have prior knowledge about a topic.

When that prior knowledge is incorrect, it BOTH hinders their understanding of new ideas AND hampers their classmates’ efforts to learn correct ideas.

For this reason — I’m extrapolating from Nuthall here — teachers really should focus on students’ prior misconceptions.

Unless we know what our students (wrongly) think they know, their misinformation will substantially muddle the learning process.

Second, building classroom culture matters.

This seemingly obvious statement comes from one of Nuthall’s most alarming findings (well: alarming to me).

The students in these classes were AMAZINGLY unkind to one another. Casual insults — even racial epithets — made up a regular part of classroom dialogue.

Nuthall proposes two solutions to this problem.

Option A: “Teachers therefore need to know who is in which friendship groups, who wants to be liked by whom, who has status, who is rejected.

They also need to know the kinds of beliefs and culture — about music, clothes, curriculum, learning, co-operating, and the like — that hold students’ relationships together.” (p. 37)

While I understand the logic behind this statement, it strikes me as frankly impossible. As I think over my various sophomore and senior English classes, it’s simply inconceivable to me that I would know — with any level of consistent detail — what the exact relationships are among all these people.

I might have a dim idea that this student is especially popular, or that those two are dating, or that some song or another has everyone’s attention. But for that knowledge to be broad and current: no way.

In fact, I think it would be inappropriate for me to know such things. Inquiring too closely into students’ personal and romantic lives does not strike me as healthy or appropriate.

A Better Way?

Happily, Nuthall proposes Option B:

“Some teachers have tried to deal with this problem [peer-to-peer unkindness] by creating an alternative culture within their classrooms — a culture of mutual respect and cooperation, a culture in which everyone is expected to succeed in some significant aspect of classroom activities.” (p. 37)

Now, this approach seems healthy, appropriate, and necessary.

Yes, I want my students to learn about Macbeth and topic sentences, but I also insist that they know how to treat one another well.

Nuthall’s findings about casual peer cruelty has reminded me how much happens in my classroom that I can’t see (“hidden lives of learners”), and how important it is that I solve those invisible problems.

The Very Big Picture

One final point stood out for me in Nuthall’s book, although my interpretation of it might not persuade you. Here’s the story…

Because Nuthall measured how much students already knew, and what they did to learn new information, he could track important patterns. One pattern went like this:

Students who didn’t know much about the topic learned most from the teacher.

Students who already knew a lot learned most by working on their own, or with peers. (pp. 86-7)

I think this finding might help us see past a controversial binary in the field of education.

Current schooling debates have encouraged us to pick sides. Either we believe in direct instruction, or we believe in project pedagogies. (This sentence oversimplifies a very complex debate, but is a useful shorthand at this moment.)

Nuthall’s findings (and my own reading of schema theory) suggest an alternative viewpoint. Perhaps

Students who don’t know much about a topic (a.k.a. “novices”) learn most from the teacher (a.k.a. “direct instruction”), whereas

Students who already know a lot (a.k.a. “relative experts”) learn most by working on their own, or with peers (a.k.a. “project pedagogies”).

That is: before we say whether direct instruction or independent investigation is better for a student, we have to know where the student lies on the novice/expert continuum.

Novices need lots of guidance; relative experts benefit from more open-ended, self-driven exploration.

To be clear: I’ve been quietly advocating for this view for a few years now. It seems to me — although I could be wrong — that Nuthall’s data roughly support it.

Read This Book If…

…You’re intrigued by the possibility of extremely granular classroom research, focusing directly on the students’ experience,

…you want to see how the spacing effect plays out in the classroom,

…perhaps you want to know more about how students actually treat each other in day-to-day interactions.

…you want to hear an inventive and thoughtful researcher think aloud about his findings.

I don’t agree with everything that Nuthall has written. For instance, his account of working memory is not at all in line with current models of this cognitive function.

But, gosh: he and his book have given me lots to think about, and new ways to think about old ideas.

The Most Important 5 Minutes in Class: The Primacy/Recency Effect
Andrew Watson

As we put our lesson plans together, we teachers want to know: are some minutes more valuable than others?


That is:

Do students remember most at the 10-minute mark of the lesson, because they’re mentally revved up?

Or, perhaps they remember most from the final five minutes, because the whole class has led to this grand conclusion.

Or, perhaps some other time slot generates the most learning, because psychology reasons.

What does the research tell us?

Start Here

I occasionally see teaching advice that seeks to answer this question. That advice typically begins with a fascinating research pool.

Here’s the story.

Researchers present students with — say — a list of 15 words. After distraction, how many words do students remember? And, can we predict which ones?

Several studies suggest a consistent answer.

Students tend to remember words from the beginning of the list. Researchers call that the “primacy” effect.

And, they remember words from the end of the list. That result gets the moniker “recency effect.”

Going all the way back to 1962, this primacy/recency effect has a lot of research behind it. (For a more recent study, click here.)

Lab to Classroom

So, how should teachers plan our lessons based on this particular finding?

Let’s imagine that I tell my students a list of 8 instructions. Because of the primacy/recency effect, I suspect they’ll remember the early and late instructions better than the ones in the middle. (Hint: maybe I should write down a long list of instructions…)

But: what does this effect tell us about the most valuable teaching time during a class period as a whole?

From time to time, scholars who translate psychology research for classroom teachers make this argument:

The primacy/recency effect suggests that the first several minutes of class, and the final several minutes of class, have the greatest effect on learning.

That is: For the same reason that students remember the first and last instruction from my list of 8, they’ll learn the most during the first and last minutes of class.

Voila: a research-based answer to the question.

I confess, however, that I myself have doubts.

The argument says, in effect:

Rules governing mental processes for 60-120 seconds also govern mental processes for 45-80 minutes.

Honestly, I’m just not sure that’s plausible. My doubts spring from two sources.

Doubts, and More Doubts

In the first place, I doubt this advice because it extrapolates so far beyond the initial research conditions.

If research tells me something about — say — college students, that conclusion might also apply to 1st graders. But it might not. 1st graders aren’t college students.

If research tells me something about adolescents in Iceland, that conclusion might apply to teens in Brazil. But it might not. Icelandic culture differs from Brazilian culture.

And, if research tells me about mental functions over one minute, that conclusion might apply to 20 minutes. (Or 45, or 80.) But IT MIGHT NOT. One minute isn’t twenty.

Long-time readers know I always focus on “boundary conditions.” From my perspective, this advice goes WAY beyond the boundaries of the initial research.

By the way: I’ve asked SEVERAL wise people if they know of primacy/recency research that goes beyond a minute or two. So far, the answer is “no.”

The second reason I doubt this advice is the specific mental functions involved.

As far as I can tell, researchers explain the primacy/recency effect by talking about short-term memory and working memory.

Both of these mental faculties describe very short-term mental functions. In my grad-school classes, our profs typically said that working memory holds information somewhere between 5 and 30 seconds.

If, in fact, the primacy/recency effect results from short-term and working memory functions, then those findings almost certainly won’t apply to mental processes that take 30+ minutes.

Like, say, our classes.

Just Answer the Question

If this advice doesn’t hold, what can research tell us about the “most important five minutes in class”?

I’ve got two answers.

Answer #1:

I’ve asked lots of people if they have a research-informed answer to this question. So far, no one has a strong “yes.” But if I hear of one, I’ll pass it along.

And, btw, a friend has answered “we really have to research that question!” So, I’ll let you know if/when his results come through.

Answer #2:

Long-time readers know my mantra: “don’t just do this thing; instead, think this way.”

In this case, I don’t think we can plausibly identify any one time slot that consistently generates the most learning.

Instead, we want to use core ideas from cognitive science to structure lesson plans effectively.

Use retrieval practice.

Beware working-memory overload.

Foster attention.

Activate prior knowledge.

And so forth.

If we follow this approach, every minute will build ultimately — and more-or-less equally — toward students’ learning.


Castel, A. D. (2008). Metacognition and learning about primacy and recency effects in free recall: The utilization of intrinsic and extrinsic cues when making judgments of learning. Memory & Cognition, 36(2), 429-437.

“No Cameras Allowed:” Does Taking Pictures During Lectures Benefit Learning?
Andrew Watson
Andrew Watson

Should students use cameras to take pictures of boardwork?

My high school students know my fierce anti-cell-phone policy. Nonetheless, they do occasionally ask if they may take a quick picture. (I typically say yes, and then check to be sure the phone goes back in the bag.)

When I do PD work at schools, or present at conferences, teachers take pictures of my slides ALL THE TIME.

Of course, the fact that students and teachers want to take those pictures doesn’t automatically mean that it’s a good idea to do so.

In fact, we have several reasons to think it’s a bad idea.

First reason: those photos might serve as a subtle hint to our brain’s memory systems: “you don’t need to remember this, because you’ve got a photo.”

Second reason: the act of taking a photo might distract students (and teachers) from the content we’re discussing.

For example: If my students are thinking about framing the photo correctly (and using a cool filter), they’re NOT thinking about the ways that Fences combines both comic and tragic symbols.

Third reason: we’ve got research!

Check this out…

Prior Knowledge

Researchers have looked at this question for several years now.

In several studies, for instance, researchers asked participants to tour a museum and take pictures of various works of art.

Sure enough, later tests revealed that people remembered more about the artwork they didn’t photograph than the artwork they did photograph.

As predicted above, something about taking a photograph made it harder – not easier – to remember the content.

For all these reasons, it seems, teachers might reasonably discourage students from taking photos.

At the same time, we should probably keep asking questions.

In particular, we should acknowledge that museum photography probably isn’t a good stand-in for classroom photography.

That is: my students (and teachers during PD) probably take photographs to help themselves remember important ideas, concepts, and examples. In museums, people might take pictures because that statue is so cool and beautiful!

The museum research offers a useful and interesting baseline, but we’d love some research into … say … actual classrooms.

Cheesemaking, and Beyond!

Well, I’ve got good news. A research team — led by Dr. Annie Ditta at the University of California, Riverside — has indeed started exploring exactly these questions.

In their studies, Team Ditta had students watch 3 short online video lectures about obscure topics. (Like, cheesemaking. No, I’m not joking.)

Participants took pictures of half of the slides.

Here’s the primary question: did students remember more information from the photographed slides, or the unphotographed slides?

SURPRISE! Taking pictures helped students remember the information on the slide.

For the reasons listed above, I did not expect that result. In fact, the researchers didn’t either.

But, those photos really helped.

In one study, students got 39% of the questions right for the slides they photographed, and 29% right for the ones they didn’t. (Stats folks: Cohen’s d was 0.41.)
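For readers curious about that stat: Cohen’s d expresses a difference between two group means in units of their pooled standard deviation. Here’s a minimal sketch of the calculation. The means (0.39 vs. 0.29) come from the study as reported above, but the standard deviations and group sizes below are hypothetical placeholders, since the post doesn’t report them.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical SDs and group sizes chosen so the reported means
# (39% vs 29% correct) yield an effect size of roughly d = 0.41:
d = cohens_d(0.39, 0.24, 30, 0.29, 0.25, 30)
print(round(d, 2))
```

By convention, a d around 0.4 counts as a small-to-medium effect, which is why a ten-point difference from such an easy strategy is worth noticing.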

Given how EASY this strategy is, we should really pay attention to this finding.

By the way, Dr. Ditta’s study explored some other questions as well.

First: students remembered info from photographed slides better both when they decided which slides to photograph and when they were told which ones to photograph.

So, if we tell students to “photograph this information,” we (probably) don’t disrupt the benefit.

Second: what about spoken information?

Common sense suggests that taking a picture won’t help remember spoken ideas (if those ideas aren’t written on the slide). In fact, taking that picture might distract students from the spoken words.

Strangely, in this research, Team Ditta came up with mixed – and surprising – results. In one study, taking a picture made no difference in memory of spoken material. In the other, it benefitted memory of spoken material.

WOW.

So, What Should Teachers Do?

Before we rush to apply research in our classrooms, we always want to ask a few questions.

In this case, I think we should have LOTS of questions.

First: Dr. Ditta’s research looked at brief, online lectures for college students.

Do these conclusions apply to longer classes? To in-person classes? To K-12 students? To students who aren’t neurotypical?

We just don’t (yet) know.

Second: participants in these studies didn’t do anything with the photos. They simply took them.

Would we find the same pattern for students who reviewed their photos, compared to – say – reviewing their notes?

We don’t (yet) know.

Third: participants were tested on their knowledge 5 minutes after the videos were done.

We’ve got LOTS of research showing that short-term gains don’t necessarily result in long-term learning.

So, would these findings hold a week later? A month later?

We don’t (yet) know.

 

Given all the things we don’t know, how can this research benefit us?

For me, these studies open up new possibilities.

In the past, as I described above, I permitted students (and teachers) to take photos. But I tried to discourage them.

I would even – on occasion – explain all the reasons above why I thought taking photos would reduce learning.

Well, I’m no longer going to discourage.

Instead, I’ll explain the complex possibilities.

Perhaps taking photos helps memory because it signals that THIS INFORMATION DESERVES ATTENTION.

Or, perhaps taking photos helps only if students DON’T review before tests. But, taking notes would help more … especially the students who DO review before tests.

And perhaps, just perhaps, this research team got flukey results because even well-done research sometimes produces flukey results. Future classroom research about taking photos of slides might ultimately suggest that (despite this thoughtful work), it really is a bad idea.

I wish the answer were simpler, but it just isn’t.

TL;DR

Surprising new research suggests that taking photos of lecture slides helps college students remember slide contents – even when students don’t review those photos.

Before we teachers rush to make dramatic changes, we should think carefully how this research fits our classrooms and contexts.

And, we should weigh this memory strategy against lots of other strategies – like retrieval practice.

Finally: let’s all watch this space!


Ditta, A. S., Soares, J. S., & Storm, B. C. (2022). What happens to memory for lecture content when students take photos of the lecture slides? Journal of Applied Research in Memory and Cognition.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

 

How Students (Think They) Learn: The Plusses and Minuses of “Interleaving”
Andrew Watson
Andrew Watson

As the school year begins, teachers want to know: can mind/brain research give us strategies to foster learning?

We might also wonder: what will our students think of those strategies?


It seems plausible — even likely — that students will prefer the strategies that help them learn. If those strategies help, why wouldn’t students like them?

Strategies to Foster Learning

Some classroom truths seem almost too basic to say out loud. For instance:

#1: We want our students to learn several different sub-topics within any particular topic.

And

#2: Students need to practice to learn.

When teachers think about those basic truths at the same time, we often adopt a specific strategy.

We ask students to practice (that’s #2) each individual subtopic (that’s #1) on its own. So:

Students practice identifying nouns, and then they practice identifying verbs, and then they practice identifying adjectives.

Or, angles, then circumferences, then areas.

Or, backhand, then forehand, then serve.

We could represent this strategy this way: AAA, BBB, CCC. Each sub-topic gets its own discrete practice session.

But, would a different strategy be better? How about: ABC, CBA, BCA?

In other words: should students jumble different topics together when they practice?
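The two schedules above are easy to picture in code. Here’s a toy sketch; the sub-topic names and problem labels are made up for illustration.

```python
import random

def blocked_schedule(problems_by_topic):
    """AAA, BBB, CCC: each sub-topic gets its own discrete run of practice."""
    schedule = []
    for topic, problems in problems_by_topic.items():
        schedule.extend(problems)
    return schedule

def interleaved_schedule(problems_by_topic, seed=0):
    """ABC, CBA, BCA: problems from all sub-topics jumbled together."""
    schedule = [p for probs in problems_by_topic.values() for p in probs]
    random.Random(seed).shuffle(schedule)
    return schedule

practice = {
    "nouns": ["n1", "n2", "n3"],
    "verbs": ["v1", "v2", "v3"],
    "adjectives": ["a1", "a2", "a3"],
}
print(blocked_schedule(practice))      # sub-topics in discrete blocks
print(interleaved_schedule(practice))  # same problems, mixed together
```

Note that both schedules contain exactly the same problems; only the ordering differs. That’s the whole intervention.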

Interleaving: Old Research, and New

The answer to that question is YES: students SHOULD jumble different sub-topics together when they practice.

For research confirmation, you can check out this study by Rohrer and Pashler.

Or, for a broader synthesis, explore Agarwal and Bain’s great book, Powerful Teaching.

Or, you might ask a pointed question: “has this strategy been tested in actual classrooms, not just in psychology research labs?”

The answer to that question is also YES.

A recently published study by Samani and Pan tried this strategy in a college physics class.

Sure enough, students learned more when their homework problems were interleaved than when sub-topics were practiced one at a time.

That is: students whose practice problems covered Coulomb’s Law by itself learned less than those whose practice problems also included capacitors and composite wires.

So, we arrive at this tentative teaching advice:

No doubt, you have your students practice — either in class, or with homework, or both.

When students practice, they should work on a few sub-topics at a time, not just one.

So far, so good.

Paradox: Teaching Solutions Create Studying Problems

Let’s return to the question that opened this blog post: do students prefer the study strategy that fosters learning? (They should; after all, it helped them learn!)

Reader, they do not.

Why?

In Samani and Pan’s study (and many others), students found that effective learning strategies are more difficult.

That is: they require more thought, and frequently lead to more short-term mistakes. (Students did relatively badly on the homework before they did relatively well on the tests.)

From one perspective, this finding makes perfect sense.

If we do difficult mental work, we will struggle and fail more often. And yet, all that extra hard thinking will ultimately lead to more learning. (Soderstrom and Bjork have written a GREAT review article on this topic.)

That encouraging perspective, however, runs into a perfectly understandable alternative: most people don’t like struggle and failure.

We shouldn’t blame students for disliking the interleaving. It hurt their heads. They did badly on the homework. YUCK.

As teachers, we have the long-term perspective. We know that short-term struggle leads ultimately to greater learning.

But, most students lack that perspective. They feel the struggle and the pain, but don’t recognize the long-term benefits.

Teaching Advice 2.0

Given all these findings, how should we structure students’ practice?

I think all these findings add up to this guidance:

First: interleave practice.

Second: tell students that you are doing so, and explain why.

The language you use and the level of explanation will, of course, vary by the age of the student. But, let them know.

Third: structure grading systems to value ultimate learning more than immediate understanding.

After all, if we both require interleaved practice (which is quite difficult) and grade students on the success of their practice, we will — in effect — force them to have lower grades. They will rightly feel the injustice of this instructional paradigm.

In other words: this practice strategy — in my view — does imply a grading policy as well.

TL;DR

Students, of course, must practice to learn.

Teachers should structure their practice to cover a few sub-topics simultaneously.

We should explain why we’re doing so; “interleaving” ultimately results in more learning.

We should create grading structures that account for the initial difficulty of interleaved practice.

If we get this balance right, students will willingly face early learning challenges, and ultimately learn more.


Rohrer, D., & Pashler, H. (2010). Recent research on human learning challenges conventional instructional strategies. Educational Researcher, 39(5), 406-412.

Agarwal, P. K., & Bain, P. M. (2019). Powerful teaching: Unleash the science of learning. John Wiley & Sons.

Samani, J., & Pan, S. C. (2021). Interleaved practice enhances memory and problem-solving ability in undergraduate physics. npj Science of Learning, 6(1), 1-11.

Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.

The Best Book on Cognitive Load Theory: Ollie Lovell to the Rescue
Andrew Watson
Andrew Watson

Teaching ought to be easy.

After all, we have a functionally infinite amount of long-term memory. You don’t have to forget one thing to learn another thing — really.

So: I should be able to shovel information and skills into your infinite long-term memory. Voila! You’d know everything.

Alas, to get to your long-term memory, “information and skills” have to pass through your working memory. This very narrow bottleneck makes learning terribly difficult — as teachers and students well know.

If only someone would come up with a theory to explain this bottleneck. If only that theory would help teachers and students succeed despite its narrow confines.

Good News, with a Twist

Happily, that theory exists. It’s called “cognitive load theory,” and several scholars in Australia (led by John Sweller) have been developing it for a few decades now.

It explains the relationship between infinite long-term memory and limited working memory. It explores practical classroom strategies to solve the problems created by this relationship.

Heck, it even muses upon evolutionary explanations for some quirky exceptions to its rules.

In other words, it has almost everything a teacher could want.

Alas — [warning: controversial opinion] — it does include one glaring difficulty.

Cognitive load theory helps educational psychologists talk with other educational psychologists about these topics.

However, it relies on a long list of terms, each of which describes complex — sometimes counter-intuitive — concepts.

If you start reading articles based on cognitive load theory, you might well discover that …

… a particular teaching practice works this way because of the “split attention effect” (which doesn’t mean exactly what it sounds like),

… but it works that way because of the “expertise reversal effect,”

… and “element interactivity” might explain these contradictory results.

For this reason, paradoxically, teachers who try to understand and apply cognitive load theory often experience cognitive overload.

As a result, teachers would really benefit from a book that explains cognitive load theory so clearly as not to overwhelm our working memory.

Could such a book exist?

Ollie Lovell To The Rescue

Yes, reader, it exists. Oliver Lovell has written Sweller’s Cognitive Load Theory In Action (as part of Tom Sherrington’s “In Action” series).

Lovell’s book does exactly what teachers want it to do: explain cognitive load theory without overloading our cognitive faculties.

Lovell accomplishes this feat with three strategies.

First, he has an impressive ability to explain cognitive load theory concepts with bracing clarity.

For instance, let’s go back to that “expertise reversal effect.” Why might a teaching strategy benefit a novice but not an expert?

Lovell’s answer: redundancy. Redundant information taxes working memory. And, crucially:

“What is redundant for an expert is not redundant for the novice, and instructional recommendations are reversed accordingly.”

That’s the “expertise reversal effect.” Pithy, clear, sensible.

Because he writes and explains so clearly, Lovell helps teachers understand all that cognitive load theory terminology without feeling overwhelmed.

Second, Lovell gives examples.

SO MANY CLASSROOM EXAMPLES.

Whatever grade you teach, whatever topic you teach, you’ll find your discipline, your grade, and your interests represented. (I believe Lovell is a math teacher; as a high-school English teacher, I never felt slighted or ignored.)

Geography, piano, computer programming. It’s all there.

Knowing that clear explanations of worked examples can reduce working memory load, he provides plenty.

Practicing What He Preaches

Third, Lovell simplifies needless complexities.

Students of cognitive load theory will notice that he more-or-less skips over “germane” cognitive load: a category that has (ironically) created all sorts of “extraneous” working memory load for people trying to understand the theory.

He describes the difference between biologically primary and biologically secondary learning. And he explains the potential benefits this theory offers school folk.

However, Lovell doesn’t get bogged down in this niche-y (but fascinating) topic. He gives it just enough room, but not more.

Heck, he even keeps footnotes to a minimum, so as not to split the reader’s attention. Now that’s dedication to reducing working memory load!

Simply put: Lovell both explains and enacts strategies to manage working memory load just right.

In Brief

No doubt your pile of “must read” books is intimidatingly large.

If you want to know how to manage working memory load (and why doing so matters), Lovell’s Cognitive Load Theory in Action should be on top of that pile.


A final note:

I suspect Lovell’s explanations are so clear because he has lots of experience explaining.

Check out his wise, thoughtful, well-informed podcasts here.

Do Classroom Decorations Distract Students? A Story in 4 Parts… [Reposted]
Andrew Watson
Andrew Watson

As we prepare for the upcoming school year, how should we think about decorating our classrooms?

Can research give us any pointers?

This story, initially posted in March of 2022, paints a helpfully rich research picture.


Teacher training programs often encourage us to brighten our classrooms with lively, colorful, personal, and uplifting stuff:

Inspirational posters.

Students’ art work.

Anchor charts.

Word walls.

You know the look.

We certainly hope that these decorations invite our students in and invigorate their learning. (We might even have heard that “enriched environments promote learning.”)

At the same time, we might worry that all those decorations could distract our students from important cognitive work.

So, which is it? Do decorations distract or inspire? Do they promote learning or inhibit learning? If only we had research on this question…

Part I: Early Research

But wait: we DO have research on this question.

Back in 2014, a team led by Dr. Anna Fisher asked if classroom decorations might be “Too Much of a Good Thing.”

They worked with Kindergarten students, and found that — sure enough — students who learned in highly-decorated rooms paid less attention and learned less than others in “sparsely” decorated classrooms.

Since then, other researchers have measured students’ performance on specific mental tasks in busy environments, or in plain environments.

The results: the same. A busy visual field reduced working memory and attention scores, compared to plain visual environments.

It seems that we have a “brain-based” answer to our question:

Classroom decorations can indeed be “too much of a good thing.”

Taken too far, they distract students from learning.

Part II: Important Doubts

But wait just one minute…

When I present this research in schools, I find that teachers have a very plausible question.

Sure: those decorations might distract students at first. But, surely the students get used to them.

Decorations might make learning a bit harder at first. But ultimately students WON’T be so distracted, and they WILL feel welcomed, delighted, and inspired.

In this theory, a small short-term problem might well turn into a substantial long-term benefit.

And I have to be honest: that’s a plausible hypothesis.

Given Fisher’s research (and that of other scholars), I think the burden of proof is on people who say that decorations are not distracting. But I don’t have specific research to contradict those objections.

Part III: The Researchers Return

So now maybe you’re thinking: “why don’t researchers study this specific question”?

I’ve got good news: they just did.

In a recently-published study, another research team (including Fisher, and led by Dr. Karrie Godwin, who helped in the 2014 study) wondered if students would get used to the highly decorated classrooms.

Research isn’t research if we don’t use fancy terminology, so they studied “habituation.” As in: did students habituate to the highly decorated classrooms?

In the first half of their study, researchers again worked with Kindergarteners. Students spent five classes studying science topics in plainly decorated classrooms. (The visual material focused only on the topic being presented.)

Then they spent ten classes studying science topics in highly decorated classrooms. (These decorations resembled typical classroom decorations: posters, charts, artwork, etc.)

Unsurprisingly (based on the 2014 study), students were more distractible in the decorated classroom.

But: did they get used to the decorations? Did they become less distractable over time? Did they habituate?

The answer: a little bit.

In other words: students were less distractible than they initially were in the decorated classroom. But they were still more distractible than in the sparsely decorated room.

Even after ten classes, students hadn’t fully habituated.

Part IV: Going Big

This 2-week study with kindergarteners, I think, gives us valuable information.

We might have hoped that students will get used to decorations, and so benefit from their welcoming uplift (but not be harmed by their cognitive cost). So far, this study deflates that hope.

However, we might still hold out a possibility:

If students partially habituate over two weeks, won’t they fully habituate eventually? Won’t the habituation trend continue?

Team Godwin wanted to answer that question too. They ran yet another study in primary school classrooms.

This study had somewhat different parameters (the research nitty-gritty gets quite detailed). But the headline is: this study lasted 15 weeks.

Depending on the school system you’re in, that’s between one-third and one-half of a school year.

How much did the students habituate to the visual distractions?

The answer: not at all.

The distraction rate was the same after fifteen weeks as it was at the beginning of the year.

To my mind, that’s an AMAZING research finding.

Putting It Together

At this point, I think we have a compelling research story.

Despite our training — and, perhaps, despite our love of decoration — we have a substantial body of research suggesting that over-decorated classrooms interfere with learning.

The precise definition of “over-decorated” might take some time to sort out. And, the practical problems of putting up/taking down relevant learning supports deserves thought and sympathetic exploration.

However: we shouldn’t simply hope away the concern that young students can be distracted by the environment.

And we shouldn’t trust that they’ll get used to the busy environment.

Instead, we should deliberately create environments that welcome students, inspire students, and help students concentrate and learn.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of repeated exposure to the visual environment on young children’s attention. Cognitive Science, 46(2), e13093.

The Best Teaching Advice We’ve Got
Andrew Watson
Andrew Watson

I’m on my annual vacation during this month, so I’ll be posting some articles that got attention during the last year.

This post, initially from December of 2021, looks at a proposed different way to “put all the research pieces together.”


You want to improve your teaching with psychology research?

We’ve got good news, and bad news.

And more good news.

Good News: we have lots and LOTS of research. We can talk about attention, or working memory, or the spacing effect, or motivation, or stress…the list is long. And super helpful.

So much practical advice!

Bad News: actually, the bad news is the same as the good news. We’ve got SO MUCH good research that it’s honestly hard to keep track of it all.

I mean, seriously. Should you start by looking at attention research? Or stress research?

Should we think about the motivational effects of student-teacher relationships, or the perils of working memory overload, or the benefits of desirable difficulty?

Which is most important?

Honestly, I think our next priority is not so much finding out new truths about learning, but organizing all the information we already have.

More Good News

If you agree that we really need someone to sort all these suggestions into a coherent system, you’ll be delighted to read this article by Stephen Chew (Twitter handle: @SChewPsych) and William Cerbin (@BillCerbin).

Other scholars — for instance, Barak Rosenshine — have put together a coherent system based on learning principles. Chew and Cerbin, instead, organize their system around cognitive challenges.

That is:

If students feel anxiety about a topic or discipline, that emotion will interfere with their learning.

If students have prior misconceptions, those misconceptions will distort their understanding.

If classroom work or assignments go beyond working memory limits, students won’t learn effectively (or, at all).

When planning a course or a lesson or an assignment, teachers can think their way through these specific challenges. By contemplating each one, we can design our work to best facilitate learning.

Getting the Emphasis Right

If you’re thinking “this is such excellent news! It just can’t get any better!” — well — I’ve got some news: it gets better.

Chew and Cerbin write:

There is no single best teaching strategy for all students, topics, and situations. The proposed framework is not prescriptive … and can guide adaptation of teaching practice.

In other words, they’re not saying: here’s a list of things to do.

Instead, they are saying: here are several topics/problems to consider.

Teaching advice should not include “best practices.” (That’s a business concept.) It should include “best questions to ponder as we make decisions.” Chew and Cerbin make this point repeatedly.

Frequent readers know that I’ve been banging on for years with this mantra: “Don’t just do this thing; instead, think this way.”

We should think about our students’ working memory limitations. The strategies we use might differ for 1st graders and 8th graders.

We should think about the importance of transfer. A Montessori school and a KIPP school will (almost certainly) use differing strategies to reach that goal.

We should think about our students’ prior knowledge. The best way to measure that knowledge might be different for students with diagnosed learning differences.

Yes: we should consider these nine topics. But the ways we address them must depend on our students, our schools, our curriculum, and ourselves.

For all these reasons, I recommend Chew and Cerbin’s article with great enthusiasm.

And, happily, you can meet Dr. Chew at our online conference in February! (In case you’re wondering: I was planning to write about this article before I knew he was joining the conference. A happy synchronicity.)