Last week, I wrote that “upsides always have downsides.”
That is: anything that teachers do to foster learning (in this way) might also hamper learning (in that way).
We should always be looking for side effects.
So, let me take a dose of my own medicine.
Are there teaching suggestions that I champion that have both upsides and conspicuous downsides?
Case in Point: Retrieval Practice
This blog has long advocated for retrieval practice.
We have lots (and LOTS) of research showing that students learn more when they study by “taking information out of their brains” than “putting information back into their brains.” (This phrasing comes from Agarwal and Bain.)
So:
Students shouldn’t study vocabulary lists; they should make flash cards.
They shouldn’t review notes; instead, they should quiz one another on their notes.
Don’t reread the book; try to outline its key concepts from memory.
In each of these cases (and hundreds more), learners start by rummaging around in their memory banks to see if they can remember. All that extra mental work results in more learning.
SO MUCH UPSIDE.
But wait: are there any downsides?
Let the Buyer Beware: Retrieval-Induced Forgetting
Sure enough, some researchers have focused on “retrieval-induced forgetting.”
Yup. That means remembering can cause forgetting.
How on earth can that be? Here’s the story…
Step 1: Let’s say I learn the definitions of ten words.
Step 2: I use retrieval practice to study the definitions of five of them; that is, I practice remembering those five words.
Step 3: Good news! Retrieval practice means I’ll remember the five words that I practiced better.
Step 4: Bad news! Retrieval-induced forgetting means I’ll remember the five words I didn’t practice worse. Yes: worse than if I hadn’t practiced those other five words.
In brief: when I remember part of a topic, I’m likelier to FORGET the part I didn’t practice. (Although, of course, I’m likelier to REMEMBER the part I did practice.)
So, retrieving induces forgetting. Now that’s what I call a downside.
Potential Solution?
How do our students get the good stuff (memories enhanced by retrieval practice) without the bad stuff (other memories inhibited by retrieval practice)?
Here’s an obvious solution: tell our students about retrieval-induced forgetting.
Heck, let’s go one step further: tell them about it, and encourage them to resist its effects.
The research design here gets quite complicated, but the headline is:
Price and colleagues ran the same “retrieval-induced forgetting” study that others had run, and this time added a brief description of the problem.
In some cases, they added encouragement on how to overcome this effect.
So, what happened when they warned students?
Nothing. Students kept right on forgetting the un-practiced information (although they kept right on remembering the practiced information).
In brief: warnings about retrieval-induced forgetting just didn’t help. (Heck: in some cases, they seemed to promote even more forgetting.)
Alternative Solutions?
Much of the time, we benefit our students by telling them about research in cognitive science.
I routinely tell my high-school students about retrieval practice. I show them exactly the same studies and graphs that I show teachers in my consulting work.
In this case, however, it seems that sharing the research doesn’t help. Telling students about retrieval-induced forgetting didn’t stop retrieval-induced forgetting.
Conclusion: it’s up to teachers to manage this side effect.
How? We should require retrieval of all essential elements.
For example:
When I teach my students about comedy and tragedy, the definitions of those terms include lots of moving pieces.
I know that ALL THE PIECES are equally important. So I need to ensure that my retrieval practice exercises include ALL THE PIECES of those definitions.
Students don’t need to remember everything I say. But if I want them to remember, I need to ensure retrieval practice happens.
Each of us will devise different strategies to accomplish this goal. But to get the upside (from retrieval practice) we should mitigate the downside (from retrieval-induced forgetting).
TL;DR
Retrieval practice is great, but it might cause students to forget the parts they don’t retrieve.
Alas, we can’t solve this problem simply by warning our students.
So, we should structure our review sessions so that students do in fact retrieve EVERYTHING we want them to remember.
If we create such comprehensive retrieval, students can get the upsides without the downsides.
Price, J., Jones, L. W., & Mueller, M. L. (2015). The role of warnings in younger and older adults’ retrieval-induced forgetting. Aging, Neuropsychology, and Cognition, 22(1), 1-24.
If we want students to remember what we teach – and what teacher doesn’t? – we’ve got a vital strategy: spread practice out over time.
We’ve got scads of research showing that the same number of practice problems results in a lot more learning if those problems are spread out over days and weeks, compared to being done all at once.
We call this the spacing effect, and it’s as solid a finding as we’ve got in the field of educational psychology.
As teachers interested in psychology research, we should always be asking: “Yes, but does that work in my specific context?”
For instance: if research shows that college students learn stoichiometry better in a flipped-classroom model, that doesn’t necessarily mean that my 3rd graders will learn spelling better that way.
In the language of psychology research, we’re looking for “boundary conditions.” What are the limits of any particular technique?
The Spacing Effect Meets Critical Thinking
Researchers in Canada wanted to know: does the spacing effect apply to the teaching of critical thinking?
Of course, we want our students to be effective critical thinkers. But, there’s heated debate about the best way to teach this skill.
Lots of people doubt that critical thinking can be taught as a free-standing skill. Instead, they believe it should be nested in a specific curriculum.
That is: we can be critical thinkers about sonnets, or about football play-calling strategy, or about the design of bridges. But, we can’t learn to think critically in an abstract way.
The Canadian researchers start with that perspective, and so they teach critical thinking about a specific topic: the reliability of websites. And, they go further to ask: will the spacing effect help students be better critical thinkers?
In other words: if we spread out practice in critical thinking, will students ultimately practice their critical craft more effectively?
The Research; The Results
To answer this question, researchers used a 3-lesson curriculum exploring the credibility of websites. This curriculum asked 17 questions within 4 categories: the authority of the website’s authors, the quality of the content, the professionalism of the design, and so forth.
Half of the 4th-6th graders in this study learned this curriculum over 3 days. The other half learned it over 3 weeks.
Did this spacing matter? Were those who spread their practice out more proficient critical website thinkers than those who bunched their practice together?
In a word: yup.
When tested a month later, students who spread practice out were much likelier to use all four categories when analyzing websites’ reliability. And, they used more of the 17 questions to explore those four categories.
To Sum Up
This research leads us to two encouraging, and practical, conclusions.
First: we can help our students be better critical thinkers when they analyze websites. (Heaven knows that will be a useful skill throughout their lives.)
Second: we can improve their ability by relying on the spacing effect. As with so many kinds of learning, we get better at critical thinking when we practice over relatively long periods of time.
You’ve probably heard of the “method of loci,” or — more glamorously — the “memory palace.”
Here’s how the strategy works. If I want to remember several words, I visualize them along a path that I know well: say, the walk from my house to the square where I do all my shopping.
To recall the words, I simply walk along that path again in my mind. This combination of visuals — the more striking the better — will help me remember even a long list of unrelated words.
This method gets lots of love, most famously in Joshua Foer’s Moonwalking with Einstein.
Surely we should teach it to our students, no?
Palace Boundaries
We always look for boundary conditions here on the blog. That is, even good teaching ideas have limits, and we want to know what’s outside those limits.
So, for the “method of loci,” one question goes like this: how often do you ask your students to memorize long lists of unrelated words?
If the answer is, “not often,” then I’m not sure how much they’ll benefit from building a memory palace.
Dr. Christopher Sanchez wondered about another limit.
The “method of loci” relies on visualization. Not everyone is equally good at that. Does “visuospatial aptitude” influence the usefulness of building a memory palace?
One Answer, Many Questions
The study to answer this question is quite straightforward. Sanchez had several students memorize words. Some were instructed to use a memory palace; some were not. All took tests of their visuospatial aptitude.
Sure enough, as Sanchez predicted, students who used a memory palace remembered more words than those who didn’t.
And, crucially, palace builders with HIGH visuospatial aptitude recalled more words than those with LOW aptitude.
In fact, those with low aptitude said the memory-palace strategy made the memory task much harder.
This research finding offers a specific example of a general truth. Like all teaching strategies, memory palaces may help some students — but they don’t help all students equally.
This finding also leads to some important questions.
First: If a student has low visuospatial aptitude, how can we tell?
At this point, I don’t have an easy way to diagnose that condition. (I’ve asked around, but so far no luck.)
My best advice is: if a student says to you, “I tried that memory palace thing, but it just didn’t work for me. It’s so HARD!” believe the student.
Second: does this finding apply to other visualization strategies? More broadly, does it apply to dual coding theory?
Again, I think the answer is “probably yes.” Making information visual will help some students…but probably not all of them.
The Big Question (I Can’t Look…)
This next question alarms me a little; I hardly dare write it down. But, here goes…
Might Sanchez’s research imply a kind of learning “anti-style”?
That is, no one is a “visual learner.” But, perhaps some people don’t learn well from visual cues, and rely more on other ways of taking in information?
In other words: some students might have a diagnosed learning difference. Others might not have a serious enough difference to merit a diagnosis — but nonetheless struggle meaningfully to process information a particular way.
Those students, like Sanchez’s students with low visuospatial aptitude, don’t process information well in one particular way, and so prefer to use alternate means.
So, again, that’s not so much a “learning style” as a “learning anti-style”: “I prefer anything but visual, please…”
I haven’t seen this question asked, much less investigated. I’ll let you know what I find as I explore it further.
Blake Harvard teaches psychology and coaches soccer at James Clemens High School. For three years now, he’s been actively at work trying out teaching strategies derived from cognitive psychology. And, he blogs about his work at The Effortful Educator.
I spoke with Blake about his work, hoping to learn more about the classroom strategies he finds most helpful and effective. (This transcript has been edited for clarity and brevity.)
Andrew Watson
Blake, thank you for taking the time to chat with me.
I always enjoy reading your blog posts, and learning about your strategies to connect psychology research with the teaching of psychology.
Can you give an example of research you read, and then you tried it out in your classroom? Maybe you tinkered with it along the way?
Blake Harvard
Well, first: retrieval practice and spacing. Research tells us that we forget things very rapidly. Forgetting information and then retrieving that information again strengthens ties in the brain. It promotes long-term memory of that information.
So, I’m very conscious of different ways that my students elaborate on information and generate information.
What am I doing to have my kids review? Or, how am I spacing out the information that we were learning yesterday versus what we were learning a week ago versus what we were learning months ago? What are the ties among those things? How are they related?
In the past, when we completed a unit of study, it was in the past. We moved on. Now I’m very careful to revisit. I space out their practice and provide the opportunity for my students to think about material we’ve covered in the past.
And second, dual coding.
I think every teacher does some activity where they have students draw something. But dual coding is more than just about drawing things. It’s about organizing the information: how does it link up?
So, using those general concepts of retrieval practice, spaced practice, and dual coding, and applying them to my class specifically, I’m constantly trying to get my kids to think – to think more.
Andrew Watson
Can you give an example of a strategy you use to be sure they do?
Blake Harvard
Sure. One example is, I use an unusual template with multiple-choice questions.
In a normal multiple-choice question, you have a kid read it. They answer “B.” You think, “okay B’s correct, let’s go to the next thing.”
Well, I’ve got this template where kids have to use – have to think about – A through E.
If B’s the right answer, they have to tell me why B’s the right answer. That is, they have to think about B.
But, then, they also have to take A, C, D, and E, and think about those too.
Why is C the wrong answer?
Or, how could you make D into the right answer?
Or, what question could you ask to make E the right answer?
Even, why is A tricky?
Andrew Watson
That seems both simple and extraordinarily powerful at the same time.
Blake Harvard
I don’t want to boil all of cognitive psychology down to that, but that’s really central, I think. There’s no elaborate trick. You don’t need any new technology. At the end of the day, you’re just getting those kids’ brains thinking more with the information.
Andrew Watson
Are there some teaching strategies that you read research about, and you tried them out, and you thought: I understand why this works in a psychology lab, but it actually just doesn’t work in my classroom. I’m not gonna do it anymore.
Blake Harvard
Well, I just recently did something with flexible seating. I have an AP Psychology student who wanted to try this out in my classroom, so I said sure.
I have first block and second block class, and they’re both AP Psychology classes, and they’re both on the same pace, doing the same stuff.
We took the first block class, and we put them in a flexible seating classroom. This classroom had beanbags, it had a couch, it had comfortable chairs, it had only one or two tables with traditional chairs.
With my second block class, we kept them in more traditional seating: sitting at tables, facing the front.
And then I taught a unit, which is about seven or eight days, to both classes. I tried to keep everything the same as much as possible, and at the end we took our unit exam and then we compared the data.
So: how did the seating affect the grades, right?
The people in the flexible seating classroom did worse than the people in the traditional seating.
And then I took the grades and compared them to people who took the same course and the same test in years past. I got the same results. The flexible seating in that one classroom was worse than all of the other classes.
I know it’s not perfect methodology. Nothing is perfect “in the wild,” so to speak. But, I gave it a go. And I’ve decided that that’s not what I want to do.
Now, my student was focused more on the emotional part of it: “how did the kids feel about it?”
She had them fill out a survey: “Do you think you did better?” “Did you feel more comfortable in class?” – those sorts of things. And I haven’t seen those surveys yet; she’s compiling information herself. I am interested to see those too.
I heard some of the comments, and it’s interesting. Some of the comments on the first day of the class that was in the flexible seating classroom were like, “Oh my gosh! This is great!” And then by the end it was, “When is this over?”
Andrew Watson
I’m wondering if your students take the strategies you use to their other classes? Do they study history with retrieval practice? Or science? Or do you find it stays pretty local to the work you do with them?
Blake Harvard
The short answer is: I don’t know. But I definitely impress upon them that this is how you should be studying.
Rereading your notes is not the most effective way to study. Going back over your notes and highlighting them is not effective. If you’re not thinking about the information, if you’re not actually trying to do something with it, you’re probably not being as effective as you should be.
In fact, it’s not just about effectiveness; the right study strategies actually save you time. If you’ve tested yourself on this concept two or three times, and you get the same things right, you’re probably pretty good. You got it. Focus on the other things that you haven’t gotten right.
It doesn’t matter if it’s math, it doesn’t matter if it’s biology, it doesn’t matter what it is. The brain works the way the brain works. If you can’t use the information, if you can’t answer this question, you don’t know it. And you need to study it, because if you did know it, you would have answered the question. It’s as simple as that.
Andrew Watson
Yes. So, we talked about whether or not students use these strategies in other classes. Are there things you encourage them to do that have research support, but they’re particularly resistant to?
Blake Harvard
That’s an interesting question. Nothing off the top of my head is coming to me…
You know: those who don’t think they’re great artists – at first, don’t want to use dual coding. Because they think “my drawing’s bad.” And I’ll say: “you know, it’s not about how good your drawing is. It’s about what it represents to you, in your mind.”
Andrew Watson
The mental practice that goes into it.
Blake Harvard
Exactly. Once you explain that to them, they’re much more receptive to it.
Andrew Watson
One of the tricky parts of our field is that there are many teaching strategies that people say have “a whole lot of research support.” And part of our job is to be good at sifting the good stuff from the not good stuff.
Do you have any advice for teachers who are trying to figure out what really is valid and valuable, not just trending on Twitter?
Blake Harvard
It’s never easy, you know.
Often, I look for multiple cases of a particular teaching strategy. Did they test 20 kids in one classroom? Or was this tested across the country?
You also want to think about the people you have in your class. If researchers test a particular demographic, but you don’t teach that demographic, perhaps their conclusion doesn’t apply to your class. Something that might work in an elementary classroom: there’s a chance it could work in my AP Psychology classroom, but I’ve got to really look at it.
To be fair, this is something I’m figuring out myself.
Andrew Watson
I know that you are a coach as well as a teacher. I wonder if you use any of these strategies in your coaching world as well as your teaching world.
Blake Harvard
I want to show my soccer players what a skill should look like, what the strategy does on the field, why it works.
We want to start small. I want each player individually working on it, and perfecting it or getting better at it. Then we go into a small-sided game: maybe two-versus-two or three-versus-three. And then, let’s work it into a bigger scenario.
Eventually, obviously the goal is that they use it in a real-world game.
Just like in the classroom, I’m not a huge fan of inquiry-based learning. I think that there are much more effective ways of teaching than that. I want to explain each new concept to them very clearly, in a very organized way, so that they have a good understanding of what it is. Then we try to apply it to real life. But I don’t start off there.
Andrew Watson
So, you follow the coaching version of direct instruction.
Blake Harvard
Right, yes.
Andrew Watson
Are there questions I ought to have asked you which I haven’t asked you?
Blake Harvard
It’s an interesting journey to get to where I am right now. I graduated with my master’s degree in 2006, and up until about 2016 I was just doing normal professional development: whatever the school had for me to do.
Sometimes I was really excited about it; sometimes I was sitting in there barely paying attention. But now that I’ve found these different types of professional development opportunities, I see they can really improve you, and improve your students and your classroom.
You don’t have to think “I’ll just do the PD that I’m supposed to do and then I go back to my classroom.” There are ways – simple ways, easy ways – to improve your classroom, to improve learning for your students.
Andrew Watson
It’s interesting you say that, because you’ve described my journey as well. I had been a classroom teacher for decades when I found Learning and the Brain, and those conferences completely changed my professional trajectory.
Do students learn better after they experience failure? Two recent studies over at The Science of Learning help us answer that question.
In the first study, professors in a Canadian college wanted to help their Intro Bio students learn difficult concepts more effectively. (Difficult concepts include, for example, the “structural directionality of genetic material.”)
They had one Intro Biology section follow a “Productive Failure” model of pedagogy. It went like this.
First, students wrestled with conceptual problems on these difficult topics.
Second, they got in-class feedback on their solutions.
Third, they heard the professor explain how an expert would think through those topics.
Another Intro Bio section followed these same steps but in a different order:
First, they heard the professor explain how an expert would think through those topics.
Second, students wrestled with conceptual problems.
Third, they got in-class feedback on their solutions.
So, all students did the same steps. And, they all followed an “active pedagogy” model. But, one group struggled first, whereas the other group didn’t.
Who Learned More?
The answer proves unusually complicated to determine. The researchers had to juggle more variables than usual to come up with a valid answer. (If you want the details, click the link above.)
The headlines are:
On the next major test, students who experienced productive failure learned more.
On the final exam, however, only the “low performing” students did better after productive failure. For the middle- and upper-tier students, both strategies worked equally well.
Conclusion #1:
So, we can’t really conclude that productive failure helps students learn.
Instead, we’re on safer ground to say that – over the longer term – productive failure helps “low performing” students learn (compared to other kinds of active learning).
But Wait, There’s (Much) More
Two weeks after they published the study about Canadian college students in Biology classes, Science of Learning then published a study about German fifth graders learning fractions.
(As we discussed in this post, watching students learn fractions helps researchers measure conceptual updating.)
In particular, these researchers wanted to know if students learned better after they struggle for a while. (Again, for details click the link.)
In this case, the answer was: nope.
So, we arrive at Conclusion #2:
Some college students, but not most, learned more from productive failure in a biology class – compared to those who learned via other active learning strategies.
However, fifth graders did not learn more about fractions – compared to those who learned via direct instruction.
Got that?
The Biggie: Conclusion #3
When teachers come to research-world, we can be tempted to look for grand, once-and-for-all findings.
A particular study shows that – say – students learn better when they use an iPad to study astronomical distances. Therefore, we should equip all our students with iPads.
But, that’s NOT what the study showed. Instead, it showed that a particular group of students studying a particular topic with a particular technology got some benefit – compared to a particular alternate approach.
So, Conclusion #3:
Teachers can often find helpful research on teaching strategies.
We should assume that results vary depending on lots of highly specific conditions. And therefore, we should seek out research that includes students (and classroom subjects) as much like our own as possible.
And so: if you teach biology to college students, you might give the first study a close look to see if its methods fit your students well. (Given that it worked particularly well with struggling students, that variable probably matters to you.)
If, however, you teach fractions to fifth graders, you should probably hold off on productive failure – unless you find several other studies that contradict this one.
In other words: teachers can learn the most from psychology and education research when we investigate narrow and specific questions.
A final thought. I’ve only recently come across the website that published these studies. Congratulations to them for emphasizing the complexity of these research questions by publishing these studies almost simultaneously.
I’m sure it’s tempting to make research look like the last word on a particular topic. Here, they’ve emphasized that boundary conditions matter. Bravo.
If we want our students to think creatively, should they listen to music? If yes, does the timing matter?
Intuition might lead us either to a “yes” or to a “no.”
Yes: music might get students’ creative juices flowing. Especially if it’s upbeat, energetic, and particularly creative in itself, music might spark parallel creativity in our students’ thought processes.
No: on the other hand, music just might be a serious distraction. Students might focus so keenly on the music — or on trying to ignore the music — that they can’t focus on the creative work before them.
Do You Smell a CRAT?
Researcher Emma Threadgold used a common creativity test – with the unlikely acronym of CRAT – to answer this question.
Here’s how a CRAT works:
I give you three words: “dress,” “dial,” and “flower.”
You have to think of another word that – when combined with each of those words – produces a real word or phrase.
To solve a CRAT, you have to rifle through your word bank and try all sorts of combinations before – AHA! – you pull the correct answer up from the depths of your brain.
In this case, the correct answer is “sun”: as in, sundress, sundial, and sunflower.
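For the curious, the CRAT’s logic is easy to sketch in code. This is a hypothetical illustration (the function name, cue list, and tiny stand-in dictionary are mine, not Threadgold’s actual materials):

```python
# A candidate word solves a CRAT item if attaching it to every cue
# word produces a real compound word or phrase.

def solves_crat(candidate, cues, valid_words):
    return all(candidate + cue in valid_words for cue in cues)

# Tiny stand-in for a real dictionary (illustration only)
valid_words = {"sundress", "sundial", "sunflower", "moonflower"}
cues = ["dress", "dial", "flower"]

print(solves_crat("sun", cues, valid_words))   # True: sundress, sundial, sunflower
print(solves_crat("moon", cues, valid_words))  # False: "moondress" isn't a word
```

Of course, the human solver has no such lookup table; the rummaging through memory is exactly what makes the task creative.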
Threadgold’s team played music with English lyrics, with foreign lyrics, and with no lyrics. They played upbeat, happy music.
They even played library noise – with the sound of a photocopier thrown in for good measure.
In every case, music made it harder to solve CRAT problems.
To put that in stark terms: music interfered with listeners’ creative thinking.
(For those of you interested in statistics, the Cohen’s d values here are astonishing. In one of the three studies, the difference between music and no music clocked in at d = 2.86. That’s easily the highest d value I’ve seen in a psychology study. We’re typically impressed by a value above 0.67.)
Case Closed?
Having done such an admirably thorough study, has Threadgold’s team answered this question for good?
Nope.
As always, teachers should look not for one definitive study, but for several findings that point in the same direction.
And, we should also look for boundary conditions. This research might hold up for these particular circumstances. But: what other circumstances might apply?
For me, one obvious answer stands out: timing.
Other researchers have studied creativity by playing music before the creative task, not during it.
For instance, this study by Schellenberg found that upbeat music produces higher degrees of creativity in Canadian undergraduates AND in Japanese five-year-olds. (Unsurprisingly, the five-year-olds were especially creative after they sang songs themselves.)
In this study, crucially, they listened to the music before, not during, the task.
Threadgold’s study, in fact, cites other work where pre-test music enhanced creativity as well.
More Questions
Doubtless you can think of other related questions worth exploring.
Do people who learn to play music evince higher degrees of creativity in other tasks?
How about courses in music composition?
Music improvisation training?
Does this effect vary by age, by culture, by the kind of music being played?
For the time being, based on what I know about human attention systems, this study persuades me that playing music during the creative task is likely to be distracting.
Depending on what you want your students to do, you might investigate other essential variables.
__________________
On a related topic: for Dan Willingham’s thoughts on listening to music while studying, click here.
If a brain expert offers me a teaching suggestion, I might respond: “Well, I know my students, and that technique just wouldn’t work with them.”
Alas, this rebuttal simply removes me from the realm of scientific discussion.
Scientific research functions only when a claim can be disproven. Yet the claim “I know my students better than you do” can’t be disproven.
Safe in this “I know my students” fortress, I can resist all outside guidance.
As Didau writes:
If, in the face of contradictory evidence, we [teachers] make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please.
It’s important to note: Didau isn’t arguing with a straw man. He’s responding to a tweet in which a former teacher proudly announces: “I taught 20 years without evidence or research…I chose to listen to my students.”
(Didau’s original post is a few years old; he recently linked to it to rebut this teacher’s bluff boast.)
Beware Teachers’ Judgment, Part 2
In their excellent book Understanding How We Learn, the Learning Scientists Yana Weinstein and Megan Sumeracki make a related pair of arguments.
They perceive in teachers “a huge distrust of any information that comes ‘from above’” … and “a preference for relying on [teachers’] intuitions” (p. 22).
And yet, as they note,
There are two major problems that arise from a reliance on intuition.
The first is that our intuitions can lead us to pick the wrong learning strategies.
Second, once we land on a learning strategy, we tend to seek out “evidence” that favors the strategy we have picked. (p. 23)
Weinstein and Sumeracki cite lots of data supporting these concerns.
For instance, college students believe that rereading a textbook leads to more learning than does retrieval practice — even when their own experience shows the opposite.
The Problems with the Problem
I myself certainly agree that teachers should listen to guidance from psychology and neuroscience. Heck: I’ve spent more than 10 years making such research a part of my own teaching, and helping others do so too.
And yet, I worry that this perspective overstates its case.
Why? Because as I see it, we absolutely must rely on teachers’ judgment — and even intuition. Quite literally, we have no other choice. (I’m an English teacher. When I write “literally,” I mean literally.)
At a minimum, I see three ways that teachers’ judgments must be a cornerstone in teacher-researcher conversations.
Judgment #1: Context Always Matters
Researchers arrive at specific findings. And yet, the context in which we teach a) always matters, and b) almost never matches the context in which the research was done.
And therefore, we must rely on teachers’ judgments to translate the specific finding to our specific context.
For example: the estimable Nate Kornell has shown that the spacing effect applies to study with flashcards. In his research, students learned more by studying 1 pile of 20 flashcards than 4 piles of 5 flashcards. The bigger pile spaced out practice of specific flashcards, and thus yielded more learning.
So, clearly, we should always tell our students to study with decks of 20 flashcards.
No, we should not.
Kornell’s study showed that college students reviewing pairs of words learned more from 20-flashcard piles than 5-flashcard piles. But, I don’t teach college students. And: my students simply NEVER learn word pairs.
So: I think Kornell’s research gives us useful general guidance. Relatively large flashcard decks will probably result in more learning than relatively small ones. But, “relatively large” and “relatively small” will vary.
Doubtless, 2nd graders will want smaller decks than 9th graders.
Complex definitions will benefit from smaller decks than simple ones.
Flashcards with important historical dates can be studied in larger piles than flashcards with lengthy descriptions.
In every case, we have to rely on … yes … teachers’ judgments to translate a broad research principle to the specific classroom context.
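Kornell’s deck-size point is, at bottom, simple arithmetic about spacing. A tiny sketch (my own illustration for this blog, not Kornell’s materials) makes the gap concrete: cycling through one big pile puts many more cards between repeated reviews of any single card than cycling through small piles does.

```python
# Illustration only (not Kornell's actual study materials): why a bigger
# flashcard deck automatically spaces out practice of each individual card.
def gap_between_reviews(deck_size: int, passes: int = 3) -> int:
    """Cycle through a deck `passes` times; return how many OTHER cards
    a student sees between consecutive reviews of any one card."""
    order = list(range(deck_size)) * passes  # simple cycled deck
    # Find the first two times card 0 comes up, and count what falls between.
    first, second = [i for i, card in enumerate(order) if card == 0][:2]
    return second - first - 1

print(gap_between_reviews(20))  # one pile of 20 -> 19 cards between reviews
print(gap_between_reviews(5))   # piles of 5    -> only 4 cards between reviews
```

The bigger deck builds in roughly four times the spacing per card, which is the mechanism behind the “1 pile of 20 beats 4 piles of 5” result. Whether 19-card gaps are right for *your* students is exactly the teacherly judgment call the section above describes.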
Judgment #2: Combining Variables
Research works by isolating variables. Classrooms work by combining variables.
Who can best combine findings from various fields? Teachers.
So: we know from psychology research that interleaving improves learning.
We also know from psychology research that working memory overload impedes learning.
Let’s put those findings together and ask: at what point does too much interleaving lead to working memory overload?
It will be simply impossible for researchers to explore all possible combinations of interleaving within all levels of working memory challenge.
The best we can do: tell teachers about the benefits of interleaving, warn them about the dangers of WM overload – and let them use their judgment to find the right combination.
Judgment #3: Resolving Disputes
Some research findings point consistently in one direction. But, many research fields leave plenty of room for doubt, confusion, and contradiction.
For example: the field of retrieval practice is (seemingly) rock solid. We’ve got all sorts of research showing its effectiveness. I tell teachers and students about its benefits all the time.
And yet, we still don’t understand its boundary conditions well.
As I wrote last week, we do know that RP improves memory of specifically tested facts and processes. But we don’t know if it improves memory of facts and processes adjacent to the ones that got tested.
So: what should the teacher do right now, before we get a consistent research answer? We should hear about the current research, and then use our best judgment.
One Final Point
People who don’t want to rely on teacherly judgment might respond thus: “well, teachers have to be willing to listen to research, and to make changes to their practice based upon it.”
For example, that teacher who boasted about ignoring research is no model for our work.
I heartily – EMPHATICALLY – agree with that point of view.
At the same time, I ask this question: “why would teachers listen to research-based guidance if those offering it routinely belittle our judgment in the first place?”
If we start by telling teachers that their judgment is not to be trusted, we can’t be surprised that they respond with “a huge distrust of any information that comes ‘from above’.”
So, here’s my suggestion: the field of Mind, Brain, Education should emphasize equal partnership.
Teachers: listen respectfully to relevant psychology and neuroscience research. Be willing to make changes to your practice based upon it.
Psychology and neuroscience researchers: listen respectfully to teachers’ experience. Be up front about the limits of your knowledge and its applicability.
Made wiser by these many points of view, we can all trust each other to do our best within our fields of expertise.
As we regularly emphasize here on the blog, attempts to recall information benefit learning.
That is: students might study by reviewing material. Or, they might study with practice tests. (Or flashcards. Perhaps Quizlet.)
Researchers call this technique “retrieval practice,” and we’ve got piles of research showing its effectiveness.
Learning Untested Material?
How far do the benefits of this technique go?
For instance, let’s imagine my students read a passage on famous author friendships during the Harlem Renaissance. Then they take a test on its key names, dates, and concepts.
We know that retrieval practice helps with facts (names and dates) and concepts.
But: does retrieval practice help with the names, dates, and concepts that didn’t appear on the practice test?
For instance, my practice test on Harlem Renaissance literature might include this question: “Zora Neale Hurston befriended which famous Harlem poet?”
That practice question will (probably) help my students do well on this test question: “Langston Hughes often corresponded with which well-known Harlem Renaissance novelist?”
After all, the friendship between Hurston and Hughes was retrieved on the practice test, and therefore specifically recalled.
But: will that question help students remember that…say…Carl van Vechten took famous photos of poet and novelist Countee Cullen?
After all, that relationship was in the unit, but NOT specifically tested.
So, what are the limits of retrieval practice benefits?
In his study, Eva asked pharmacology students to study a PowerPoint deck about acid reflux and peptic ulcers: just the sort of information pharmacists need to know. In fact, this PPT deck would be taught later in the course – so students were getting a useful head start.
Half of them spent 30 minutes reviewing the deck. The other half spent 20 minutes reviewing, and 10 minutes taking a practice test.
Who remembered the information better 2 weeks later?
Sure enough: the students who took the practice test. And, crucially, they remembered more information on which they had been tested AND other information from the PPT that hadn’t been specifically tested.
That is, they would be likelier to remember information about Zora Neale Hurston and Langston Hughes (the tested info) AND about van Vechten and Cullen (the UNtested info).
However, Eva’s tested students did NOT remember more general pharmacology info than their untested peers. In other words: retrieval practice helped with locally related information, but not with the entire discipline.
But Wait! There’s More! (Or, Less…)
About 2 months ago, I posted on the same topic – looking at a study by Cindy Nebel (née Wooldridge) and Co.
You may recall that they reached the opposite finding. That is, in their research paradigm, retrieval practice helped students remember the information they retrieved, and only the information they retrieved.
Whereas retrieval practice helped students on later tests if the questions were basically the same, it didn’t have that effect if the questions were merely “topically related.”
For instance, a biology quiz question about “the fossil record” didn’t help students learn about “genetic differences,” even though both questions focus on the topic of “evolution.”
What Went Wrong?
If two psychology studies looked at (basically) the same question and got (basically) opposite answers, what went wrong?
Here’s a potentially surprising answer: nothing.
In science research, we often find contradictory results when we first start looking at questions.
We make progress in this field NOT by doing one study and concluding we know the answer, but by doing multiple (slightly different) studies and seeing what patterns emerge.
Only after we’ve got many data points can we draw strong conclusions.
In other words: the fact that we’ve got conflicting evidence isn’t bad news. It shows that the system is working as it should.
OK, but What Should Teachers Do?
Until we get those many data points, how should teachers use retrieval practice most effectively?
As is so often the case, we have to adapt research to our own teaching context.
If we want to ensure that our students learn a particular fact or concept or process, we should be sure to include it directly in our retrieval practice. In this case, we do have lots (and LOTS) of data points showing that this approach works. We can use this technique with great confidence.
Depending on how adventurous we feel, we might also use retrieval practice to enhance learning of topically related material. We’re on thinner ice here, so we shouldn’t do it with core content.
But, our own experiments might lead us to useful conclusions. Perhaps we’ll find that…
Older students can use RP this way better than younger students, or
The technique works for factual learning better than procedural learning, or
Math yes, history no.
In brief: we can both follow retrieval practice research and contribute to it.
If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.
After all, we don’t want just any research. We want the good stuff.
But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?
Over at TES, Bridget Clay and David Weston have four suggestions.
Seek out review articles.
Don’t be impressed by lists.
Look for disagreement.
Don’t be impressed by one shiny new study.
Their post is clear and thoughtful; I encourage you to read it all.
Second Look
I want to go back to their third suggestion: “look for disagreement.” This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.
Here’s what I mean.
When I first started in brain-research world, I wanted to hear the enduring truths that researchers discovered about learning.
I would then (nobly, heroically) enact those truths in my classroom.
As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)
Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.
Right?
Research Reality
Well, maybe. But, maybe not.
Researchers simply don’t discover “the truth about learning.” Instead, they try to disprove a particular claim in a particular way. If they can’t disprove it, then that claim seems slightly more plausible.
But, someone else might disprove it in some other way. Or, under some other conditions.
Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.
When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.
So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?
I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.
Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.
How do I do that?
Skeptical Research, with Boundaries
Believe it or not, start by going to Google.
Use words like “controversy” or “debate” or “untrue.”
So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)
You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)
When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.
So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?
Perhaps such light hampered learning of math facts. What about critical thinking?
Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.
Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.
Back to Beginnings
Clay & Weston say: seek out disagreement.
I say: AMEN!
Science works by asking incremental questions and coming to halting, often-contradictory findings.
Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.
The headlines: highlighting helps students if they highlight the right amount of the right information.
Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.
Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.
Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.
Sometimes students think that highlighting itself is studying. Instead, the review of highlighted material produces the benefits. (Along with the decision-making beforehand.)
Right information: unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.
Suggestions and Solutions
Surma & Co. include several suggestions to help students highlight more effectively.
For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.
(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)
And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.
Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.
You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.
For other suggestions about highlighting, especially as a form of retrieval practice, click here.