Retrieval Grids: The Good, the Bad, and the Potential Solutions
Andrew Watson

Retrieval Practice is all the rage these days — for the very excellent reason that it works.

Study after study after study suggests that students learn more when they pull information out of their brains (“retrieval practice”) than by putting information back into their brains (“mere review”).

So, teachers naturally want to know: what specific teaching and studying strategies count as retrieval practice?

We’ve got lots (and lots) of answers.

Flashcards — believe it or not — prompt retrieval practice.

Short-answer quizzes, even if ungraded. (Perhaps especially if ungraded.)

Individual white boards ensure that each student writes his/her own answer.

So, you can see this general research finding opens many distinct avenues for classroom practice.

Retrieval Grids: The Good

One strategy in particular has been getting lots of Twitter love recently: the retrieval grid.

You can quickly see the benefits.

In the retrieval-grid quiz below, students answer short questions about Macbeth. Crucially, the grid includes questions from this week (in yellow), last week (in blue), and two weeks ago (in red). 

Because students get more points for answering older/harder questions, the form encourages retrieval of weeks-old information.

So much retrieval-practice goodness packed into so little space. (By the way: this “quiz” can be graded, but doesn’t need to be. You could frame it as a simple review exercise.)

Retrieval Grids: My Worries

Although I like everything that I’ve said so far, I do have an enduring concern about this format: the looming potential for working memory overload.

Students making their way through this grid must process two different streams of information simultaneously.

In part of their working memory, they’re processing answers to Macbeth questions.

And, with other parts of their working memory, they’re processing and holding the number of points that they’ve earned.

And, of course, those two different processing streams aren’t conceptually related to each other. One is Macbeth plot information; the other is math/number information.

As you know from my summer series on working memory, that’s a recipe for cognitive overload.

Retrieval Grids: Solutions?

To be clear: I’m not opposed to retrieval grids. All that retrieval practice could help students substantially.

I hope we can find ways to get the grid’s good parts (retrieval practice) without the bad parts (WM overload).

I don’t know of any research on this subject, but I do have some suggestions.

First: “if it isn’t a problem, it isn’t a problem.” If your students are doing just fine on retrieval grids, then obviously their WM isn’t overwhelmed. Keep on keepin’ on.

But, if your students do struggle with this format, try reducing the demands for simultaneous processing. You could…

Second: remove the math from the process. Instead of requiring 15 points (which requires addition), you could simply require that they answer two questions from each week. You could even put all the week-1 questions in the same row or column, in order to simplify the process. Or,

Third: include the math on the answer sheet. If they write down the points that they’ve earned at the same time they answer the question, they don’t have to hold that information in mind. After all, it’s right there on the paper. So, a well-designed answer sheet could reduce WM demands. (See the sketch below for one way this might look.)

Fourth: no doubt, as you read this, you are already coming up with your own solutions. If you have an idea that sounds better than these — try it! (And, I hope you’ll share it with me.)
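Here’s a minimal sketch of the third suggestion above, just to make it concrete. The point values and questions are my own hypothetical examples, not from any published grid; the idea is simply that each answer immediately adds its points to a visible running total, so nothing has to be held in working memory.

```python
# A toy answer sheet that keeps the running total on the page,
# so students never hold the score in working memory.
# Point values and questions are hypothetical examples.

POINTS = {"this week": 1, "last week": 2, "two weeks ago": 3}

def answer_sheet(answered):
    """Print each answered question with its points and a running total."""
    total = 0
    for question, week in answered:
        total += POINTS[week]
        print(f"{question:<30} {week:<14} +{POINTS[week]}  total: {total}")

answer_sheet([
    ("Who kills Macbeth?", "this week"),
    ("What does Lady Macbeth see?", "last week"),
    ("Where is Duncan murdered?", "two weeks ago"),
])
```

On paper, the same design is just a points column plus a running-total column; the point is that the arithmetic lives on the sheet, not in the student’s head.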

To Sum Up

Researchers work by isolating variables. Teachers and classrooms work by combining variables.

So: researchers who focus on long-term memory will champion retrieval practice and retrieval grids.

Researchers who focus on working memory might worry about them.

By combining our knowledge of both topics, we teachers can reduce the dangers of WM overload in order to get all the benefits of retrieval practice.

That’s a retrieval-grid win-win.

Can Multiple-Choice Tests Really Help Students?
Andrew Watson

Multiple-choice tests have a bad reputation. They’re easy to grade, but otherwise seem…well…hard to defend.

After all, the answer is RIGHT THERE. How could the student possibly get it wrong?

Given that undeniable objection, could multiple-choice tests possibly be good for learning?

The Benefits of Distraction

A multiple-choice test includes one correct answer, and other incorrect answers called “distractors.” Perhaps the effectiveness of a multiple-choice question depends on the plausibility of the distractors.

So, a multiple choice question might go like this:

“Who was George Washington’s Vice President?”

a) John Adams

b) Mickey Mouse

c) Tom Brady

d) Harriet Tubman

In this case, none of the distractors could possibly be true. However, I could ask the same question a different way:

“Who was George Washington’s Vice President?”

a) John Adams

b) Thomas Jefferson

c) Alexander Hamilton

d) James Madison

In THIS case, each of the distractors could reasonably have held that role. In fact, all three worked closely with, and deeply admired, Washington. One of the three (Jefferson) later served as vice president. (And another, Hamilton, was killed by a VP.)

Why would the plausibility of the distractor matter?

We know from the study of retrieval practice that pulling information out of my brain benefits memory more than repeatedly putting information into it.

So, we might hypothesize this way:

If the distractors are implausible, a student doesn’t have to think much to figure out the correct answer. No retrieval required.

But, if the distractors are plausible, then the student has to think about each one to get the answer right. That’s lots of retrieval right there.

In other words: plausible distractors encourage retrieval practice, and thereby might enhance learning.

Better and Better

This line of reasoning leads to an even more delicious possibility.

To answer that question about Washington’s VP, the student had to think about four people: Adams, Jefferson, Hamilton, Madison.

Presumably she’ll learn the information about Adams, who was the correct answer to the question.

Will she also learn more about the other three choices? That is: will she be likelier to answer a question about Alexander Hamilton correctly? (“Who created the first US National Bank as Washington’s Secretary of the Treasury?”)

If the answer to that question is YES, then one multiple-choice question can help students consolidate learning about several different facts or concepts.

And, according to recent research, the answer is indeed YES.

The research paradigm used to explore this question requires lots of complex details, and goes beyond the scope of a blog post. If you’re interested, check out the link above.

Classroom Implications

If this research holds up, we might well have found a surprisingly powerful tool to help students acquire lots of factual knowledge.

A well-designed multiple-choice question – that is, one whose plausible distractors require lots of careful thought – helps students learn four distinct facts or concepts.

In other words:

“Multiple-choice questions…

a) are easy to grade

b) help students learn the correct answer

c) help students learn information about the incorrect answers

or

d) all of the above.”

Me: I’m thinking d) sounds increasingly likely…

Study Advice for Students: Getting the Specifics Just Right
Andrew Watson

If you follow research-based teaching advice, you’ve heard a lot about retrieval practice in recent months.

The headline: if students want to remember information, they shouldn’t review it. That is: they shouldn’t just look it over. (“Ah, yes, the book says here that the Ideal Gas law is PV=nRT.”)

Instead, they should try to remember it first. That is: they should try a mini mental quiz. (“Hmm. What is the Ideal Gas law again? PV = something…let me think for a moment.”)

One great benefit of this research finding: students can do it themselves. All those online testing programs (most famously, Quizlet) can help students self-test rather than simply review.

Timing is Everything

Two days ago, I presented this research to (quite splendid) teachers in Fukuoka, Japan. As they pondered this guidance, one teacher asked a question I’d never heard before. Here’s a paraphrase:

I understand that retrieval practice might promote learning. But, it also might be really discouraging.

If students keep testing themselves, and keep getting the answers wrong, they’ll feel helpless and frustrated.

So: this strategy might increase learning for some students, but paradoxically for other students it might decrease motivation to study.

At the time, my response was: that’s an entirely plausible hypothesis, but I haven’t seen any research into that question. If you, the teacher, see that retrieval practice is demotivating, you’ll know best when (and how) to switch to something else.

Entirely by coincidence, I found research that addresses that question the very next day.

Kalif Vaughn and Nate Kornell wondered: how does retrieval practice influence motivation? Specifically, does a student’s fear of getting the answer wrong discourage her from relying on retrieval practice?

If yes, can we redirect those motivational processes? And, crucially, can we redirect motivation without sacrificing the benefits of retrieval practice?

The Power of Hints

Vaughn and Kornell started researching the effect of hints. Here’s their thought process:

If I’m nervous about getting a retrieval-practice answer wrong, I might choose simply to review the material instead. (Rather than struggling to remember that PV=something something something, I’ll just look in the book.)

But if I know I’ll get a hint, then I might be willing to try retrieval practice. That is: the hint makes retrieval practice less scary, and so increases my motivation to try it out.

Sure enough, people who had to choose between straight-up retrieval practice and simple review strongly preferred the review. Something like 80% of the time, they reviewed the correct answer. Only 20% of the time did they dare retrieval practice.

However, when they could get a hint, they reviewed only 30% of the time. The other 70%, they tried some form of hint-informed retrieval practice.

That is: by including the hint option, teachers can more than triple the likelihood that students will try retrieval practice. Hints reduce the likelihood of failure, and thereby increase motivation.

The Danger of Hints?

But wait just a minute here.

Past research shows that pure retrieval practice helps students learn and remember. We should admit that hints just might undermine that effect.

In other words, hints could entice students to try self-quizzing, but could reduce the effectiveness of the technique. Ugh.

Happily, Vaughn and Kornell spotted that potential problem, and investigated it.

Their findings: hints didn’t hurt.

In other words: students who did pure retrieval practice, and those who got small hints, and those who got big hints all remembered new information better than students who simply reviewed information.

Based on these findings, the researchers write:

We recommend giving students the option to get hints when they are testing themselves. It will make them choose [retrieval practice] more often, which should increase their learning, and it will also make learning more fun, which might increase their motivation to study. We envision instructors making more use of hints in worksheets, questions at the end of textbook chapters, flashcards, and a variety of digital study aides that resemble Quizlet. The students themselves might also benefit by finding ways to give themselves hints as they test themselves.

Vaughn and Kornell also suggest that the hint option will be more beneficial early in the review process. After a while, students shouldn’t need them anymore to feel confident enough to try retrieval practice.

A final note: the word “hint” here should be interpreted quite broadly. Vaughn & Kornell let students see a few letters of the correct answer; that was their version of “hint.” As teachers, we’ll adapt that general concept to the specifics of our classroom work.

As I say so often: teachers needn’t do what researchers do. Instead, we should think the way they think. That thought process will bring us to our own version of the right answer in our classrooms.

The Best Teaching Book to Read This Summer: Powerful Teaching
Andrew Watson

Let’s describe a perfect book for a Learning and the Brain conference goer:

First: it should begin with solid science. Teachers don’t want advice based on hunches or upbeat guesswork. We’d like real research.

Second: it should include lots of classroom specifics. While research advice can offer us general guidance, we’d like some suggestions on adapting it to our classroom particulars.

Third: it should welcome teachers as equal players in this field. While lots of people tell teachers to “do what research tells us to do” – that is, to stop trusting our instincts – we’d like a book that values us for our experience. And, yes, for our instincts.

And, while I’m making this list of hopes for an impossibly perfect book, I’ll add one more.

Fourth: it should be conspicuously well-written. We’d like a lively writing voice: one that gets the science right, but sounds more like a conversation than a lecture.

Clearly, such a book can’t exist.

Except that it does. And: you can get it soon.

Memory researcher Pooja Agarwal and teacher Patrice Bain have written Powerful Teaching: Unleash the Science of Learning. Let’s see how their book stacks up against our (impossible) criteria:

First: Begins with Research

If you attend Learning and the Brain conferences, you prioritize brain research.

We’re not here for the fads. We’re here for the best ideas that can be supported by psychology and neuroscience.

Happily, Powerful Teaching draws its classroom guidance from extensive research.

Citing dozens of studies done over multiple decades, Agarwal and Bain champion four teaching strategies: retrieval practice, spacing, interleaving, and metacognition.

(As frequent blog readers, you’ve read lots about these topics.)

Agarwal herself did much of the research cited here. In fact, (researcher) Agarwal did much of the on-the-ground research in (teacher) Bain’s classrooms.

And Agarwal studied and worked with many of the best-known memory researchers in the field: “Roddy” Roediger, Mark McDaniel, and Kathleen McDermott, among others.

(BTW: McDaniel will be speaking at the LatB conference this fall in Boston.)

In short: if you read a recommendation in Powerful Teaching, you can be confident that LOTS of quality research supports that conclusion.

Second: Offers Classroom Specifics

Powerful Teaching is written by two teachers. Bain taught grades 6-8 for decades. And Agarwal is currently a psychology professor.

For this reason, their book BOTH offers research-based teaching advice AND gives dozens of specific classroom examples.

What does retrieval practice look like in the classroom? No worries: they’ve got you covered.

This strength merits particular attention, because it helps solve a common problem in our field.

Teachers often hear researchers say, “I studied this technique, and got a good result.” We infer that we should try that same technique.

But, most research takes place in college classrooms. And, the technique that works with that age group just might not work with our students.

How should we translate these research principles to our classrooms? Over and over again — with specific, practical, and imaginative examples — Bain and Agarwal show us how.

Third: Welcomes Teachers

Increasingly in recent months, I’ve seen scholars argue that teacherly instincts should not be trusted. We should just do what research tells us to do.

As I’ve written elsewhere, I think this argument does lots of damage—because we HAVE to use our instincts.

How exactly do research-based principles of instruction work in thousands of different classrooms? Teachers have to adapt those principles, and we’ll need our experience —and our instincts—to do so.

Powerful Teaching makes exactly this point. As Bain and Agarwal write:

You can use Power Tools your way, in your classroom. From preschool through medical school, and biology to sign language, these strategies increase learning for diverse students, grade levels, and subject areas. There are multiple ways to use these strategies to boost students’ learning, making them flexible in your classroom, not just any classroom.

Or, more succinctly:

The better you understand the research behind the strategies, the more effectively you can adapt them in your classroom – and you know your classroom best.

By including so many teachers’ experiences and suggestions, Agarwal and Bain put teacherly insight at the center of their thinking. They don’t need to argue that teachers should have a role; they simply show us that it’s true.

Fourth: Lively Voice

Scientific research offers teachers lots of splendid guidance … but if you’ve tried to read the research, you know it can be dry. Parched, even.

Happily, both Bain and Agarwal have lively writing voices. Powerful Teaching doesn’t feel like a dry lecture, but a friendly conversation.

For example:

Learning is complex and messy, it’s not something we can touch, and it’s really hard to define. You might even say that the learning process looks more like a blob than a flowchart.

Having tried to draw many learning flowcharts, only to end up with blobs, I appreciate this honest and accurate advice.

What’s Not to Love?

As a reviewer, I really should offer at least some criticism of Powerful Teaching. Alas, I really don’t have much – at least not much substantive.

Once or twice, I thought that the research behind a particular finding is more muddled than PT lets on. For example, as I’ve written about before, we’ve got contradictory evidence about the benefits of retrieval practice for unstudied material.

But, as noted above, Agarwal is an important researcher in this field, and so I’m inclined to trust her judgment.

Mostly, I think you should put Powerful Teaching at the top of your summer reading list. You might sign up for the summer book club. Keep an eye on the website for updates.

A Rose by Any Other Name Would Smell as Confusing
Andrew Watson

We have to admit it: when it comes to naming things, the field of psychology has no skills.

In many professions, we can easily distinguish between key terms.

The difference between a kidney and a pancreas? Easy.

The difference between a 2×4 and a 1×6? Easy.

The difference between an altimeter and a speed indicator? Easy.

But:

The difference between grit and resilience?

Between self-control and self-regulation?

Between an adolescent and a teen-ager? Um….

And, if we can’t define and distinguish among concepts easily, we’ll struggle to talk with each other sensibly about the work we’re doing.

I think of naming problems in several categories:

Sales-Pitch Names

Occasionally, psychologists come up with a name that seems to have been market tested for maximum sales.

Take, for instance, “wise feedback.”

Many researchers have explored a particular feedback structure that combines, first, an explicit statement of high standards, and second, an explicit statement of support.

For instance:

“I’ve made these suggestions on your essay because we have very high standards in the history department. And, I’m quite confident that – with the right kind of revision – this essay will meet those standards.”

(You can find research into this strategy here.)

I myself find the research quite persuasive. The strategy couldn’t be easier to implement. It couldn’t cost any less – it’s free! And, it’s particularly helpful for marginalized students.

But the phrase “wise feedback” rankles. Whenever I talk with teachers about this strategy, I feel like I’m participating in a late-night cable TV sales pitch.

Couldn’t we find a more neutral name? “Two-step feedback”? “Supportive standards feedback”?

Another example: “engagement.” Blake Harvard recently posted about this word, worrying that it’s too hard to define.

I agree. But, I also worry the name itself tries to prohibit debate. Who could be opposed to “engagement”?

In science world, however, we should always look for opposing viewpoints on any new suggestion. If a brand name – like “engagement” – feels too warm and fuzzy to oppose, the name itself inhibits scientific thinking.

By the way, almost everything that includes the word “brain” in it is a sales-pitch name: “Brain Gym.” “Brain Break.”

Of course, the right kind of exercise and activity do benefit learning. Short cognitive breaks do benefit learning. We don’t need to throw the word “brain” at those sentences to improve those strategies.

Poaching Names

If I’ve got a new idea, and no one pays attention to it, how might I get eyeballs on my website?

I know! I can use a pre-existing popular name, and staple it on to my concept – even if the two aren’t factually related to one another!

That way, readers will think that my new idea has links to that other well-known idea. Voilà – instant credibility.

This “poaching” happens most often with “Mindset.”

You’ve probably read about an “empathy” mindset. Or a “technology” mindset. Or a “creative” mindset. Maybe, an “international” mindset. Or a “your product name here” mindset.

To be clear, these ideas might in fact help students learn. Empathy and creativity and an international perspective can certainly improve schools.

But, Dweck’s word “mindset” has a very particular meaning. She has done quite specific research to support a handful of quite specific theories.

Calling my new thing “a Watson mindset” implies that my work links with Dweck’s. But, that implication needs careful, critical investigation. If you trust Dweck, you don’t have to believe everything called “mindset.”

(Of course, not everyone does trust Dweck. But: that’s a different post.)

Confusing Names

These names make sense to the people who coin and use them. But, they’re not obviously connected to the concepts under discussion – especially to visitors in the field.

Here’s a crazy example: entity theorists.

Believe it or not, one of the best-known concepts in educational psychology used to distinguish between entity theorists and (not joking here) incremental theorists.

But then, in the late 1990s, Carol Dweck started a rebranding project, and now calls those things a fixed mindset and a growth mindset.

I rather suspect her ideas wouldn’t have gotten such traction without the new names.

(Imagine teachers earnestly encouraging their students: “remember to adopt an incremental theory!” I don’t see it…)

A Really Good Name

In the bad old days (the 2000s), psychologists did a lot of research into “the testing effect.” It’s a terrible name. No one in schools wants anything to do with more testing.

Let’s rebrand. How about “retrieval practice”?

That name has many strengths:

First: far from being confusing, it tells you exactly what it means. Practice by retrieving, not by reviewing. Couldn’t be clearer.

Second: far from being a sales pitch, it remains comfortably neutral. It’s not “awesome practice” or “perfect practice.” You get to investigate research pro and con, and decide for yourself.

Third: rather than poaching (“students should develop a practice mindset!”), it stands on its own.

I don’t know who came up with this phrase. But, I tip my hat to a modest, clear, straightforward name.

We should all try to follow this clear and neutral example.

 

Meet Blake Harvard, “Effortful Educator”
Andrew Watson

Blake Harvard teaches psychology and coaches soccer at James Clemens High School. For three years now, he’s been actively at work trying out teaching strategies derived from cognitive psychology. And, he blogs about his work at The Effortful Educator.

I spoke with Blake about his work, hoping to learn more about the classroom strategies he finds most helpful and effective. (This transcript has been edited for clarity and brevity.) 


Andrew Watson

Blake, thank you for taking the time to chat with me.

I always enjoy reading your blog posts, and learning about your strategies to connect psychology research with the teaching of psychology.

Can you give an example of research you read, and then you tried it out in your classroom? Maybe you tinkered with it along the way?

Blake Harvard

Well, first: retrieval practice and spacing. Research tells us that we forget things very rapidly. Forgetting information and then retrieving that information again strengthens ties in the brain. It promotes long term memory of that information.

So, I’m very conscious of different ways that my students elaborate on information and generate information.

What am I doing to have my kids review? Or, how am I spacing out the information that we were learning yesterday versus what we were learning a week ago versus what we were learning months ago? What are the ties among those things? How are they related?

In the past, when we completed a unit of study, it was in the past. We moved on. Now I’m very careful to revisit. I space out their practice and provide the opportunity for my students to think about material we’ve covered in the past.

And second, dual coding.

I think every teacher does some activity where they have students draw something. But dual coding is more than just about drawing things. It’s about organizing the information: how does it link up?

So, using those general concepts of retrieval practice, spaced practice, and dual coding, and applying them to my class specifically, I’m constantly trying to get my kids to think – to think more.

Andrew Watson

Can you give an example of a strategy you use to be sure they do?

Blake Harvard

Sure. One example is, I use an unusual template with multiple-choice questions.

In a normal multiple-choice question, you have a kid read it. They answer “B.” You think, “okay B’s correct, let’s go to the next thing.”

Well, I’ve got this template where kids have to use – have to think about – A through E.

If B’s the right answer, they have to tell me why B’s the right answer. That is, they have to think about B.

But, then, they also have to take A, C, D, and E, and think about those too.

Why is C the wrong answer?

Or, how could you make D into the right answer?

Or, what question could you ask to make E the right answer?

Even, why is A tricky?

Andrew Watson

That seems both simple and extraordinarily powerful at the same time.

Blake Harvard

I don’t want to boil all of cognitive psychology down to that, but that’s really central, I think. There’s no elaborate trick. You don’t need any new technology. At the end of the day, you’re just getting those kids’ brains thinking more with the information.

Andrew Watson

Are there some teaching strategies that you read research about, and you tried them out, and you thought: I understand why this works in a psychology lab, but it actually just doesn’t work in my classroom. I’m not gonna do it anymore.

Blake Harvard

Well, I just recently did something with flexible seating. I have an AP Psychology student who wanted to try this out in my classroom, so I said sure.

I have a first block class and a second block class; they’re both AP Psychology classes, and they’re both on the same pace, doing the same stuff.

We took the first block class, and we put them in a flexible seating classroom. This classroom had beanbags, it had a couch, it had comfortable chairs, it had only one or two tables with traditional chairs.

With my second block class, we kept them in more traditional seating: sitting at tables, facing the front.

And then I taught a unit, which is about seven or eight days, to both classes. I tried to keep everything the same as much as possible, and at the end we took our unit exam and then we compared the data.

So: how did the seating affect the grades, right?

The people in the flexible seating classroom did worse than the people in the traditional seating.

And then I took the grades and compared them to people who took the same course and the same test in years past. I got the same results: the flexible-seating class did worse than all of the other classes.

I know it’s not perfect methodology. Nothing is perfect “in the wild,” so to speak. But, I gave it a go. And I’ve decided that that’s not what I want to do.

Now, my student was focused more on the emotional part of it: “how did the kids feel about it?”

She had them fill out a survey: “Do you think you did better?” “Did you feel more comfortable in class?” – those sorts of things. And I haven’t seen those surveys yet; she’s compiling information herself. I am interested to see those too.

I heard some of the comments, and it’s interesting. Some of the comments on the first day of the class that was in the flexible seating classroom were like, “Oh my gosh! This is great!” And then by the end it was, “When is this over?”

Andrew Watson

I’m wondering if your students take the strategies you use to their other classes? Do they study history with retrieval practice? Or science? Or do you find it stays pretty local to the work you do with them?

Blake Harvard

The short answer is: I don’t know. But I definitely impress upon them that this is how you should be studying.

Rereading your notes is not the most effective way to study. Going back over your notes and highlighting them is not effective. If you’re not thinking about the information, if you’re not actually trying to do something with it, you’re probably not being as effective as you should be.

In fact, it’s not just about effectiveness; the right study strategies actually save you time. If you’ve tested yourself on this concept two and three times, and you get the same things right, you’re probably pretty good. You got it. Focus on the other things that you haven’t gotten right.

It doesn’t matter if it’s math, it doesn’t matter if it’s biology, it doesn’t matter what it is. The brain works the way the brain works. If you can’t use the information, if you can’t answer this question, you don’t know it. And you need to study it, because if you did know it, you would have answered the question. It’s as simple as that.

Andrew Watson

Yes. So, we talked about whether or not students use these strategies in other classes. Are there things you encourage them to do that have research support, but they’re particularly resistant to?

Blake Harvard

That’s an interesting question. Nothing off the top of my head is coming to me…

You know: those who don’t think they’re great artists don’t, at first, want to use dual coding. Because they think “my drawing’s bad.” And I’ll say: “you know, it’s not about how good your drawing is. It’s about what it represents to you, in your mind.”

Andrew Watson

The mental practice that goes into it.

Blake Harvard

Exactly. Once you explain that to them, they’re much more receptive to it.

Andrew Watson

One of the tricky parts of our field is that there are many teaching strategies that people say have “a whole lot of research support.” And part of our job is to be good at sifting the good stuff from the not good stuff.

Do you have any advice for teachers who are trying to figure out what really is valid and valuable, not just trending on Twitter?

Blake Harvard

It’s never easy, you know.

Often, I look for multiple cases of a particular teaching strategy. Did they test 20 kids in one classroom? Or was this tested across the country?

You also want to think about the people you have in your class. If researchers test a particular demographic, but you don’t teach that demographic, perhaps their conclusion doesn’t apply to your class. Something that might work in an elementary classroom: there’s a chance it could work in my AP Psychology classroom, but I’ve got to really look at it.

To be fair, this is something I’m figuring out myself.

Andrew Watson

I know that you are a coach as well as a teacher. I wonder if you use any of these strategies in your coaching world as well as your teaching world.

Blake Harvard

Yes, I do, definitely. For me, it has to do with how I structure practice.

I want to show my soccer players what a skill should look like, what the strategy does on the field, why it works.

We want to start small. I want each player individually working on it, and perfecting it or getting better at it. Then we go into a small-sided game: maybe two-versus-two or three-versus-three. And then, let’s work it into a bigger scenario.

Eventually, obviously the goal is that they use it in a real-world game.

Just like in the classroom, I’m not a huge fan of inquiry-based learning. I think that there are much more effective ways of teaching than that. I want to explain each new concept to them very clearly, in a very organized way, so that they have a good understanding of what it is. Then we try to apply it to real life. But I don’t start off there.

Andrew Watson

So, you follow the coaching version of direct instruction.

Blake Harvard

Right, yes.

Andrew Watson

Are there questions I ought to have asked you which I haven’t asked you?

Blake Harvard

It’s an interesting journey to get to where I am right now. I graduated with my Master’s Degree in 2006, and up until about 2016 I was just doing normal professional development: whatever the school had for me to do.

Sometimes I was really excited about it; sometimes I was sitting in there barely paying attention. But now that I’ve found these different types of professional development opportunities, I see they can really improve you, and improve your students and your classroom.

You don’t have to think “I’ll just do the PD that I’m supposed to do and then I go back to my classroom.” There are ways – simple ways, easy ways – to improve your classroom, to improve learning for your students.

Andrew Watson

It’s interesting you say that, because you’ve described my journey as well. I had been a classroom teacher for decades when I found Learning and the Brain, and those conferences completely changed my professional trajectory.

Well, thank you Blake for talking with me today.

 

Check out the Effortful Educator blog here.

The Better Choice: Open- or Closed-Book Quizzes
Andrew Watson

Psychology research offers lots of big ideas for improving student learning: self-determination theory, or the spacing effect, or cognitive load theory.

Once we make sense of that research, we teachers work to translate those big ideas into practical classroom strategies.

In some cases, we can simply do what the researcher did. In most cases, however, we have to adapt their test paradigm to our specific classroom world.

So, for example, Nate Kornell explored the spacing effect with flashcards. He found that 1 deck of 20 cards produced more learning than 4 decks of 5 cards. Why: a deck with 20 cards spaces practice out more than a deck with five cards.

That “big idea” gives teachers a direction to go.

But: we should not conclude that 20 is always the right number. Instead, we should adapt the concept to our circumstances. 20 flashcards might be WAY TOO MANY for 1st graders. Or, if the concepts on the cards are quite simple, that might be too few for college students studying vocabulary.
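If it helps to see that spacing mechanism in miniature, here’s a toy sketch. The code is my own illustration of the principle, not Kornell’s actual procedure: when you cycle through a deck, the gap between two reviews of the same card equals the deck size, so bigger decks automatically space practice further apart.

```python
# Why bigger decks space practice further apart: in a cycled deck,
# the gap between two reviews of the same card equals the deck size.
# (An illustration of the principle, not Kornell's actual procedure.)

def gap_between_reviews(deck_size, passes=2):
    """Cycle through a deck; return the trials between reviews of card 0."""
    schedule = list(range(deck_size)) * passes   # review order, two passes
    appearances = [i for i, card in enumerate(schedule) if card == 0]
    return appearances[1] - appearances[0]

print(gap_between_reviews(20))  # 20 trials between reviews of a given card
print(gap_between_reviews(5))   # only 5 trials between reviews
```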

Translating Retrieval Practice

We know from many (many) studies that retrieval practice boosts learning.

In brief, as summarized by researcher Pooja Agarwal, we want students to pull ideas out of their brains, not put them back in.

So, students who study by rereading their notes don’t learn much; that’s putting ideas back in. Instead, they should quiz themselves on their notes; that’s pulling ideas out.

This big idea makes lots of sense. But, what exactly does that look like in our classrooms?

Over the years, teachers and researchers have developed lots of suggestions. (You can check out Dr. Agarwal’s site here for ideas.)

Thinking about retrieval practice, researchers in Germany asked a helpful question. In theory, closed-book quizzes ought to generate more learning than open-book quizzes.

After all: if my book is closed, I have to pull the information out of my brain. That’s retrieval practice.

If my book is open, I’m much likelier simply to look around until I find the right answer. That’s not retrieval practice.

These researchers wanted to know: does this sensible prediction come true?

The Results Please

Sure enough, closed-book quizzes do produce more learning. This research team retested students on information twice: one week after, and eight weeks after, they heard information in a lecture.

Sure enough, the students who took closed-book quizzes did substantially better than those who took open-book quizzes. (The Cohen’s d values were above 0.80.)
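(If Cohen’s d is unfamiliar: it expresses the difference between two group means in units of their pooled standard deviation, and values around 0.8 are conventionally read as large effects. One standard formula, with the closed- and open-book groups as the two samples:)

```latex
d = \frac{\bar{x}_{\text{closed}} - \bar{x}_{\text{open}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```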

In brief: we now have one more research-supported strategy for creating retrieval practice.

As always, I think we should be careful to think about limits on such research.

In the first place, this study took place with college students. If you teach younger students, and your experience tells you that an open-book strategy will work better under particular circumstances, you might ask a trusted colleague for a second opinion. Research like this gives us excellent guidance, but it can’t answer all questions.

In the second place, other variables might come strongly into play. For instance: stress. If your school culture has always allowed open-book quizzes, your students might freak out at the prospect of a closed-book alternative. If so, the benefits of retrieval practice might be lost to anxiety overload.

In this case, you’ll need to take the time to explain your reasoning, and to ease your students into new learning habits.

In any case, we can be increasingly confident that many varieties of retrieval practice produce the desirable difficulties that help students learn. (For a fun exception to this rule, click here.)

 

The Problem with Teachers’ Judgment
Andrew Watson

In a blog post, David Didau raises concerns about “the problem with teachers’ judgment.”

Here goes:

If a brain expert offers me a teaching suggestion, I might respond: “Well, I know my students, and that technique just wouldn’t work with them.”

Alas, this rebuttal simply removes me from the realm of scientific discussion.

Scientific research functions only when a claim can be disproven. Yet the claim “I know my students better than you do” can’t be disproven.

Safe in this “I know my students” fortress, I can resist all outside guidance.

As Didau writes:

If, in the face of contradictory evidence, we [teachers] make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please.

It’s important to note: Didau isn’t arguing with a straw man. He’s responding to a tweet in which a former teacher proudly announces: “I taught 20 years without evidence or research…I chose to listen to my students.”

(Didau’s original post is a few years old; he recently linked to it to rebut this teacher’s bluff boast.)

Beware Teachers’ Judgment, Part 2

In their excellent book Understanding How We Learn, the Learning Scientists Yana Weinstein and Megan Sumeracki make a related pair of arguments.

They perceive in teachers “a huge distrust of any information that comes ‘from above’”… and “a preference for relying on [teachers’] intuitions” (p. 22).

And yet, as they note,

There are two major problems that arise from a reliance on intuition.

The first is that our intuitions can lead us to pick the wrong learning strategies.

Second, once we land on a learning strategy, we tend to seek out “evidence” that favors the strategy we have picked. (p. 23)

Weinstein and Sumeracki cite lots of data supporting these concerns.

For instance, college students believe that rereading a textbook leads to more learning than does retrieval practice — even when their own experience shows the opposite.

The Problems with the Problem

I myself certainly agree that teachers should listen to guidance from psychology and neuroscience. Heck: I’ve spent more than 10 years making such research a part of my own teaching, and helping others do so too.

And yet, I worry that this perspective overstates its case.

Why? Because as I see it, we absolutely must rely on teachers’ judgment — and even intuition. Quite literally, we have no other choice. (I’m an English teacher. When I write “literally,” I mean literally.)

At a minimum, I see three ways that teachers’ judgments must be a cornerstone in teacher-researcher conversations.

Judgment #1: Context Always Matters

Researchers arrive at specific findings. And yet, the context in which we teach a) always matters, and b) almost never matches the context in which the research was done.

And therefore, we must rely on teachers’ judgments to translate the specific finding to our specific context.

For example: the estimable Nate Kornell has shown that the spacing effect applies to study with flashcards. In his research, students learned more by studying 1 pile of 20 flashcards than 4 piles of 5 flashcards. The bigger pile spaced out practice of specific flashcards, and thus yielded more learning.

So, clearly, we should always tell our students to study with decks of 20 flashcards.

No, we should not.

Kornell’s study showed that college students reviewing pairs of words learned more from 20-flashcard piles than 5-flashcard piles. But, I don’t teach college students. And: my students simply NEVER learn word pairs.

So: I think Kornell’s research gives us useful general guidance. Relatively large flashcard decks will probably result in more learning than relatively small ones. But, “relatively large” and “relatively small” will vary.

Doubtless, 2nd graders will want smaller decks than 9th graders.

Complex definitions will benefit from smaller decks than simple ones.

Flashcards with important historical dates can be studied in larger piles than flashcards with lengthy descriptions.

In every case, we have to rely on … yes … teachers’ judgments to translate a broad research principle to the specific classroom context.

Judgment #2: Combining Variables

Research works by isolating variables. Classrooms work by combining variables.

Who can best combine findings from various fields? Teachers.

So: we know from psychology research that interleaving improves learning.

We also know from psychology research that working memory overload impedes learning.

Let’s put those findings together and ask: at what point does too much interleaving lead to working memory overload?

It will be simply impossible for researchers to explore all possible combinations of interleaving within all levels of working memory challenge.

The best we can do: tell teachers about the benefits of interleaving, warn them about the dangers of WM overload – and let them use their judgment to find the right combination.

Judgment #3: Resolving Disputes

Some research findings point consistently in one direction. But, many research fields leave plenty of room for doubt, confusion, and contradiction.

For example: the field of retrieval practice is (seemingly) rock solid. We’ve got all sorts of research showing its effectiveness. I tell teachers and students about its benefits all the time.

And yet, we still don’t understand its boundary conditions well.

As I wrote last week, we do know that RP improves memory of specifically tested facts and processes. But we don’t know if it improves memory of facts and processes adjacent to the ones that got tested.

This study says it does. This one says it doesn’t.

So: what should the teacher do right now, before we get a consistent research answer? We should hear about the current research, and then use our best judgment.

One Final Point

People who don’t want to rely on teacherly judgment might respond thus: “well, teachers have to be willing to listen to research, and to make changes to their practice based upon it.”

For example, that teacher who boasted about ignoring research is no model for our work.

I heartily – EMPHATICALLY – agree with that point of view.

At the same time, I ask this question: “why would teachers listen to research-based guidance if those offering it routinely belittle our judgment in the first place?”

If we start by telling teachers that their judgment is not to be trusted, we can’t be surprised that they respond with “a huge distrust of any information that comes ‘from above’.”

So, here’s my suggestion: the field of Mind, Brain, Education should emphasize equal partnership.

Teachers: listen respectfully to relevant psychology and neuroscience research. Be willing to make changes to your practice based upon it.

Psychology and neuroscience researchers: listen respectfully to teachers’ experience. Be up front about the limits of your knowledge and its applicability.

Made wiser by these many points of view, we can all trust each other to do our best within our fields of expertise.

Good News! Contradictory Research on Desirable Difficulties…
Andrew Watson

As we regularly emphasize here on the blog, attempts to recall information benefit learning.

That is: students might study by reviewing material. Or, they might study with practice tests. (Or flashcards. Perhaps Quizlet.)

Researchers call this technique “retrieval practice,” and we’ve got piles of research showing its effectiveness.

Learning Untested Material?

How far do the benefits of this technique go?

For instance, let’s imagine my students read a passage on famous author friendships during the Harlem Renaissance. Then they take a test on its key names, dates, and concepts.

We know that retrieval practice helps with facts (names and dates) and concepts.

But: does retrieval practice help with the names, dates, and concepts that didn’t appear on the practice test?

For instance, my practice test on Harlem Renaissance literature might include this question: “Zora Neale Hurston befriended which famous Harlem poet?”

That practice question will (probably) help my students do well on this test question: “Langston Hughes often corresponded with which well-known Harlem Renaissance novelist?”

After all, the friendship between Hurston and Hughes was retrieved on the practice test, and therefore specifically recalled.

But: will that question help students remember that…say…Carl van Vechten took famous photos of poet and novelist Countee Cullen?

After all, that relationship was in the unit, but NOT specifically tested.

So, what are the limits of retrieval practice benefits?

Everything You Wanted to Know about Acid Reflux

Kevin Eva and Co. have explored this question, and found encouraging results.

In his study, Eva asked pharmacology students to study a PowerPoint deck about acid reflux and peptic ulcers: just the sort of information pharmacists need to know. In fact, this PPT deck would be taught later in the course – so students were getting a useful head start.

Half of them spent 30 minutes reviewing the deck. The other half spent 20 minutes reviewing, and 10 minutes taking a practice test.

Who remembered the information better 2 weeks later?

Sure enough: the students who took the practice test. And, crucially, they remembered more information on which they had been tested AND other information from the PPT that hadn’t been specifically tested.

That is, they would be likelier to remember information about Zora Neale Hurston and Langston Hughes (the tested info) AND about van Vechten and Cullen (the UNtested info).

However, Eva’s tested students did NOT remember more general pharmacology info than their untested peers. In other words: retrieval practice helped with locally related information, but not with the entire discipline.

But Wait! There’s More! (Or, Less…)

About 2 months ago, I posted on the same topic – looking at a study by Cindy Nebel (née Wooldridge) and Co.

You may recall that they reached the opposite finding. That is, in their research paradigm, retrieval practice helped students remember the information they retrieved, and only the information they retrieved.

Whereas retrieval practice helped students on later tests if the questions were basically the same, it didn’t have that effect if the questions were merely “topically related.”

For instance, a biology quiz question about “the fossil record” didn’t help students learn about “genetic differences,” even though both questions focus on the topic of “evolution.”

What Went Wrong?

If two psychology studies looked at (basically) the same question and got (basically) opposite answers, what went wrong?

Here’s a potentially surprising answer: nothing.

In science research, we often find contradictory results when we first start looking at questions.

We make progress in this field NOT by doing one study and concluding we know the answer, but by doing multiple (slightly different) studies and seeing what patterns emerge.

Only after we’ve got many data points can we draw strong conclusions.

In other words: the fact that we’ve got conflicting evidence isn’t bad news. It shows that the system is working as it should.

OK, but What Should Teachers Do?

Until we get those many data points, how should teachers use retrieval practice most effectively?

As is so often the case, we have to adapt research to our own teaching context.

If we want to ensure that our students learn a particular fact or concept or process, we should be sure to include it directly in our retrieval practice. In this case, we do have lots (and LOTS) of data points showing that this approach works. We can use this technique with great confidence.

Depending on how adventurous we feel, we might also use retrieval practice to enhance learning of topically related material. We’re on thinner ice here, so we shouldn’t do it with core content.

But, our own experiments might lead us to useful conclusions. Perhaps we’ll find that…

Older students can use RP this way better than younger students, or

The technique works for factual learning better than procedural learning, or

Math yes, history no.

In brief: we can both follow retrieval practice research and contribute to it.

The Limits of Retrieval Practice, Take II…
Andrew Watson

Just two weeks ago, I posted about a study showing potential boundary conditions for retrieval practice: one of the most robustly supported classroom strategies for enhancing long-term memories.

As luck would have it, the authors of that study wrote up their own description of it over at The Learning Scientists blog. Those of you keeping score at home might want to see their description of the study, and their thoughts on its significance.

The short version: boundary conditions always matter.

We should assume they exist, and look for them.

A teaching practice that works with some students — even most students — just might not work with my students.

In that case: I’m happy it helps the others, but I need to find the strategy that will work with mine.