Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M.Ed. from the “Mind, Brain, and Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.”
Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."
Given the importance of feedback for learning, it seems obvious teachers should have well-established routines around its timing.
In an optimal world, would we give feedback right away? 24 hours later? As late as possible?
Which option promotes learning?
In the past, I’ve seen research distinguishing between feedback given right this second and that given once students are done with the exercise: a difference of several seconds, perhaps a minute or two.
It would, of course, be interesting to see research into longer periods of time.
Sure enough, Dan Willingham recently tweeted a link to this study, which explores exactly that question.
The Study Plan
In this research, a team led by Dr. Hillary Mullet gave feedback to college students after they finished a set of math problems. Some got that feedback when they submitted the assignment; others got it a week later.
Importantly, both groups got the same feedback.
Mullet’s team then looked at students’ scores on the final exams. More specifically, if the students got delayed feedback on “Fourier Transforms” — whatever those are — Mullet checked to see how they did on the exam questions covering Fourier.
And: they also surveyed the students to see which timing they preferred — right now vs. one week later.
The Results
I’m not surprised to learn that students strongly preferred immediate feedback. Students who got delayed feedback said they didn’t like it. And: some worried that it interfered with their learning.
Were those students’ worries correct?
Nope. In fact, just the opposite.
To pick one set of scores: students who got immediate feedback scored 83% on that section of an exam. Students who got delayed feedback scored 94%.
Technically speaking, that’s HUGE.
Explanations and Implications
I suspect that delayed feedback benefitted these students because it effectively spread out the students’ practice.
We have shed loads of research showing that spacing practice out enhances learning more than doing it all at once.
So, if students got feedback right away, they did all their Fourier thinking at the same time. They did that mental work all at once.
However, if the feedback arrived a week later, they had to think about it an additional, distinct time. They spread that mental work out more.
If that explanation is true, what should teachers do with this information? How should we apply it to our teaching?
As always: boundary conditions matter. That is, Mullet worked with college students studying — I suspect — quite distinct topics. If they got delayed feedback on Fourier Transforms, that delay didn’t interfere with their ability to practice “convolution.”
In K-12 classrooms, however, students often need feedback on yesterday’s work before they can undertake tonight’s assignment.
In that case, it seems obvious that we should get feedback to them ASAP. As a rule: we shouldn’t require new work on a topic until we’ve given them feedback on relevant prior work.
With that caveat, Mullet’s research suggests that delaying feedback as much as reasonably possible might help students learn. The definition of “reasonably” will depend on all sorts of factors: the topic we’re studying, the age of our students, the trajectory of the curriculum, and so forth.
But: if we do this right, feedback helps a) because feedback is vital, and b) because it creates the spacing effect. That double-whammy might help our students in the way it helped Mullet’s. That would be GREAT.
I’m usually an easy-going guy. But if you want to see me frantic with frustration, tell me about the superiority of handwriting for taking notes.
Here’s the story.
Back in 2014, two Princeton researchers did a study which concluded that handwritten notes lead to better learning than notes taken on laptops.
That’s a helpful question to have answered, and so I read their study with a mixture of curiosity and gratitude.
Imagine my surprise when I found that their conclusion rests on the assumption that students can’t learn to do new things. (That’s a VERY weird belief for a teacher to have.)
If you believe a student CAN learn to do new things, then the researchers’ data strongly suggest that laptop notes will be better.
Despite these glaring flaws, people still cite this study — and look at me with pity (contempt?) when I try to convince them otherwise. “But research says so,” they say wearily. I seethe, but try to do so politely.
Today’s Exciting News
When I try to explain my argument, my interlocutor often says something like “handwriting engages more neural processing through kinesthetic yada yada,” and therefore boosts learning.
In the first place, that’s NOT the argument that the Princeton researchers make. It might be true, but that’s changing the subject — never a good way to prove a point.
In the second place, where is the evidence of that claim? I’d love to review it.
To date, no one has taken me up on that offer.
But — [sound of trumpets blaring] — I recently found a post at Neuroscience News with this splendid headline: “Why Writing by Hand Makes Kids Smarter.”
Here’s the first sentence of the article:
Children learn more and remember better when writing by hand, a new study reports. The brains of children are more active when handwriting than typing on a computer keyboard.
“Learn more.” “Remember better.” That’s impressive. At last: the research I’ve been asking for all these years!
Believe it or not, I rather enjoy finding research that encourages me to change my mind. That process reminds me of the power of the scientific method. I believe one thing until I see better evidence on the other side of the argument. Then I believe the other thing.
So, AT LAST, I got to read the research showing that handwriting helps students learn more and remember better.
Want to know what I found?
The Study
The researchers did not test anyone’s learning or memory.
You read that right. The article claims that handwriting improves learning and memory, but the researchers never tested those claims.
This research team asked 24 participants — twelve adults and twelve 12-year-olds — to write by hand, or write on a laptop. They then observed the neural regions involved in those tasks.
Based on what they saw, they inferred that handwriting ought to result in better learning.
But they did not test that hypothesis.
So, based on a tiny sample size and a huge leap of neuro-faith, they have concluded that handwriting is better. (And, astonishingly, some big names in the field have echoed this claim.)
The Bigger Picture
Believe it or not, I’m entirely open to the possibility that handwritten notes enhance learning more than laptop notes do.
I’m even open to the possibility that kinesthetic yada yada is the reason.
To take one example, Jeffrey Wammes has done some splendid research showing that — in specific circumstances — drawing pictures helps students remember words and concepts.
If drawing boosts learning, maybe handwriting does too. That’s plausible.
But here’s the thing: before Wammes made his claim, he tested the actual claim he made.
He did not — as the Princeton researchers did — start from the assumption that students can’t learn to do new things.
He did not — as this current research does — extrapolate from neural patterns (of 24 people!) to predict how much learning might happen later on.
Wammes designed a plausible study to measure his hypothesis. In fact, he worked hard to disprove his interpretation of the data. Only when he couldn’t did he admit that — indeed — drawing can boost learning.
Before I believe in the superiority of either handwritten notes or laptop notes, I want to see the study that works hard to disprove its own claims. At present, the best known research on the topic conspicuously fails to meet that test.
Do you know of research that meets this standard? If yes, please let me know!
“The Theory of Enchantment is a social-emotional learning program that teaches individuals how to develop character, develop tools for resiliency…but more importantly, to learn how to love oneself.”
Intrigued?
Meet Chloé Valdary in this TED Talk, and at our conference, November 7-8.
Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?
Or, should they be simple, black and white, somewhat abstract?
We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.
We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.
In other words: richly decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly decorated diagrams might be the better pedagogical choice.
A study from Team Menendez explores exactly this question. Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)
For half the students, that video showed only the black-and-white diagram to the left; for the other half, the video showed the colors and dots.
Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?
Results, Please…
No, and yes. Well, mostly yes.
In other words: students in both groups learned about ladybug metamorphosis equally well.
But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.
In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.
In sum: to encourage transfer, we should use “bland,” abstract diagrams.
By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.
So: whether you’re teaching face-to-face or remotely, this research can guide your thinking.
Some Caveats
First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.
Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?
As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!
Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?
What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?
When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.
In Sum
Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.
Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.
To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.
If you’re as excited for our November conference as I am, you might want to know more about our speakers.
Mary Helen Immordino-Yang is an affective neuroscientist and an educational psychologist.
That means: she studies how “children’s emotional and social relationships shape their LEARNING, and also shape the BRAIN DEVELOPMENT that undergirds their learning.”
Yes: her work is that interesting.
https://www.youtube.com/watch?v=DEeo350WQrs
I got to interview Dr. Immordino-Yang back in 2018; she’s practical and funny and insightful. And she KNOWS SO MUCH.
We all agree, I suspect, that students should learn math. And reading. They should learn history. And science. SO MANY other topics.
What’s the best way to meet these goals?
If I want my students to learn math, is math teaching the best way to go? If I want them to understand history, should I teach more history?
Or, instead, is there a handy shortcut?
If I could help students improve their reading by teaching something other than reading, that alternate approach just might be more efficient and motivating.
In fact, two candidates get lots of attention as “alternative approaches.” If either or both pan out, they would offer us more choices. Maybe even a higher chance of success.
Music and Math
I don’t remember where I first heard that music education improves math learning. Specifically: learning to play the violin ultimately makes students better at learning calculus.
The explanation focused on “strengthened neural circuits” “repurposed” for “higher cognitive function.” Something like that. That string of words sounded quite impressive, and inclined me to believe.
Given the complexity of calculus, that would be really helpful!
But: is it true?
A recent meta-analysis looked at 54 relevant studies, including just under 7,000 participants.
Their findings? Let me quote key points from their summary:
Music training has repeatedly been claimed to positively impact children’s cognitive skills and academic achievement (literacy and mathematics).
This claim relies on the assumption that engaging in intellectually demanding activities fosters particular domain-general cognitive skills, or even general intelligence.
The present meta-analytic review shows that this belief is incorrect.
Once the quality of study design is controlled for, the overall effect of music training programs is null.
It gets worse:
Small statistically significant overall effects are obtained only in those studies implementing no random allocation of participants and employing non-active controls.
In other words: you get this result only if the study isn’t correctly designed.
And worse:
Interestingly, music training is ineffective regardless of the type of outcome measure (e.g., verbal, non-verbal, speed-related, etc.), participants’ age, and duration of training.
That is: no matter what you measure, the answer is still “no.”
Violin training sure strengthened some neural circuits. But that additional strength doesn’t get “repurposed for ‘higher’ cognitive function.”
If I want my students to learn math, I should teach them math.
Chess and Intelligence
If you watch The West Wing, you know that President Bartlet is smarter than everyone else because he won a Nobel Prize, and he plays chess frequently. He says things like “rook takes queen in five.” And then Leo nods appreciatively.
So smart.
It might be true that being smart makes you better at chess. (Although Anders Ericsson says “no.”)
Is it true that playing chess makes you smarter? If we want our students to learn math and reading and science, should we teach them more chess? Would some neural circuitry get repurposed?
So, let’s go to the tape:
In contrast to much of the existing literature, we find no evidence of an effect of chess instruction upon children’s mathematics, reading or science test scores.
In this case, by the way, the “tape” is a randomized controlled trial with more than 4,000 students in it. So: that result seems impressively well established.
So far, it seems that if I want my students to be better at X, I should teach them X. Teaching them Y and hoping that Y makes them better at X hasn’t panned out well…
Social Studies and Reading
Reading might be an interesting exception to this rule. On the one hand, reading is a skill that students must acquire.
On the other hand, students have to apply that skill to the content being read. The more students know about the content, the better they might do at reading.
In any case, that’s a plausible hypothesis.
A recently released report from the Thomas B. Fordham Institute crunches the numbers, and finds that additional time devoted to social studies instruction ultimately improves reading scores.
Two key sentences from the executive summary:
Instead of devoting more class time to English language arts, we should be teaching elementary school children more social studies — as in, rich content about history, geography, and civics.
…
Literacy gains are more likely to materialize when students spend more time learning social studies.
In fact, they find that social studies instruction most benefits students from lower-income households, and from non-English speaking homes.
For a variety of reasons, this study looks at correlation, and so can’t demonstrate causation.
However, the underlying theory makes sense. If students can decode the sounds of the words “Berlin” and “Wall,” but don’t know the geography of Germany or cold-war history, they’re unlikely to make much sense of a reading passage about that in/famous border.
In Sum
Students improve at the skills they practice. Those skills — alas — rarely transfer to distant, unrelated disciplines.
To help students learn math, teach them math. To help them read, teach them to read — and also about the scientific, historical, geographic, and philosophical concepts that make reading so important and so worthwhile.
Dr. Neil Lewis thinks a lot about science communication: in fact, his appointment at Cornell is in both the Psychology AND the Communications departments. (For a complete bio, click here.)
He and Dr. Jonathan Wai recently posted an article focusing on a troubling communication paradox:
Researchers are encouraged to “give science away”; however, because of the “replication crisis,” it’s hard to know what science is worth being given.
Here at Learning and the Brain, we think about that question frequently — so I was delighted that Dr. Lewis agreed to chat with me about his article.
In this conversation, we talk about…
… how teachers can ask psychologists good questions
… the dangers of “eminence”
… what we should think about growth mindset research
… the research “hype cycle.”
I hope you enjoy this conversation as much as I did.
Andrew Watson:
Thank you, Dr. Lewis, for sharing your ideas with our readers.
In your recent article, you and Dr. Wai write about tensions between two imperatives in the field of psychology.
First, psychologists are being asked to “give research away.” And second, our field worries about the “replication crisis.”
Both of those phrases mean more or less what they say. Could you define them a little more precisely, and talk about the tensions that these imperatives are creating?
Dr. Lewis:
There has been a long-standing call in psychology—going back, really, to the 1960s, when George Miller first issued this call—to “give psychology away.”
As scholars, we spend our time doing all this research: we should try to communicate it with the world so that people can use it and improve lives.
Professional psychology societies and organizations really encourage researchers to “get our work out there.”
But at the same time, over the past decade or so, there has been a movement to reflect on what we really know in psychology.
A “replication crisis” has occurred—not only in psychology, it’s been happening in many areas.
We are having a hard time replicating many research findings. And that [failure] is making us, the scientists, wrestle with: what do we know? How do we know it? How robust are some of our findings?
And so there’s a tension here. We’re supposed to be “giving our findings away,” but at the same time we’re not sure which ones are robust enough to be worth giving away.
Andrew Watson:
That does sound like a problem. In that tension, do you see any special concerns about the field of education?
Dr. Lewis:
One of the things I’ve been thinking about for education researchers is: how do we know what we know? We have to look very closely at the details of the paper to figure those things out.
Which students are being studied in the papers you’re reading?
What kinds of schools?
What kind of teachers?
At least in the US, there’s so much segregation in our school systems that schools look very different.
If studies are run—let’s say—with kids in the Ithaca school district where I live in upstate New York: those kids, those parents, those schools are very different from those in studies run—let’s say—in the Detroit public school district, which is the district I thought a lot about during my graduate training when I lived in Michigan.
There are big differences between these districts. We have to figure out: are the schools we’re trying to intervene in similar to the schools in the studies that were run? Or are they different?
Andrew Watson:
I have a question about that process.
Here’s a problem: to know what questions teachers ought to be asking, we need expert knowledge. Because we’re teachers, not psychologists, it’s hard to know the right questions.
So: what’s the best question that a nonspecialist teacher can ask of a researcher, in order to get an answer that we can genuinely understand?
Dr. Lewis:
I think there are some basic things that teachers can ask of researchers.
Teachers can ask what kinds of schools these studies were run in. Are they urban schools, rural schools?
What percentage of the students are on free lunch? (That’s an indicator of poverty levels of the school. Research findings are often influenced by background characteristics about the students.)
What do we know about the kinds of students that were involved in studies?
What do we know about the teachers?
Those are basic things that the researchers should be able to tell you. And then you can figure out whether those are similar to:
the students that you’re working with,
the kinds of schools that you have,
the kind of leadership in your school district, and the like.
Those basic characteristics about how the study was done will help you figure out whether or not you can use it.
Andrew Watson:
I spend a lot of time talking with teachers about this concern. Most psychology research is done with college undergraduates. That research is obviously important. But if you’re teaching reading to third graders, maybe that research translates to your context and maybe it doesn’t.
Dr. Lewis:
Right.
Andrew Watson:
One of the more intriguing points you made in the article has to do with the idea of eminence.
In the world of education, we’re often drawn to Big Names. You argue that the things scholars do to achieve eminence don’t necessarily help them produce high quality research.
As teachers, how do we sort through this paradox? How can we be wise when we think about that?
Dr. Lewis:
We brought up eminence to reinforce what I just noted. Look at the details of the study and don’t rely on the “cue” of eminence as your signal that research must be good.
Researchers are judged by many metrics. Once you put those metrics in place, people do what they can to… I hesitate to use the word “game,” but to optimize their standing in those metrics.
Andrew Watson:
Which is a lot like “gaming,” isn’t it?
Dr. Lewis:
Yes. In the research world, there are a few metrics that don’t necessarily help [produce meaningful results]. One of them, for instance, is that researchers are incentivized to publish as much as we can.
Unfortunately, publishing fast is the way to rise up the ranks. But sometimes figuring out these differences that I have been talking about—like, between contexts and samples—it takes some time. It slows you down from churning out papers; and unfortunately, researchers often aren’t incentivized to take that slower, more careful approach.
And so there’s that tension again too. I don’t want to leave the impression that we just shouldn’t trust eminent people. That’s not the point I want to make.
The point is: eminence in and of itself is not a useful signal of quality. You have to look very closely at the details of the studies in front of you. Then compare those details to your own situation and judge the work on that. Judge the work, don’t judge based on how famous the person is.
Andrew Watson:
It occurs to me, as you’re explaining this, that there’s a real problem with the emphasis on rapid publication. One of the consistent findings in education research is that short-term performance isn’t a good indicator of long-term learning.
But if scholars are incentivized to publish quickly, they’re incentivized to study the short term, which doesn’t tell us much about what we really want to know: learning that lasts.
Dr. Lewis:
Absolutely right. As I’ve written in other articles, we don’t have enough longitudinal studies for the very reasons we’re talking about: longitudinal studies take forever—and, again, the incentive is to publish fast, publish often.
The outcomes that are often measured in psychology studies are these shorter term things. You have the student do something, and you measure at the end of the session. Maybe you look again at the end of the semester.
But [we should] look next year, two years, three years, because we know some of these effects take time to accumulate.
Some older studies have looked at long-term outcomes. I’ve seen a few fascinating studies showing, initially, no significant findings. But if you look far enough down the road, you start to see meaningful effects. It just takes time for the benefits to accumulate.
In education, we shouldn’t assume that research results “generalize.” [Editor: That is, we shouldn’t assume that research with 1st graders applies to 10th graders; or that short term findings will also be true in the long term.]
Now, until I see more evidence, I assume findings are context-specific. [Editor: That is, research with 1st graders applies to 1st graders—but not much beyond that age/grade. Research from the United States applies to the US cultural context, but not—perhaps—to Korea.]
For instance: “growth mindset.” In recent studies, authors have been looking at how much the effect varies by context and by population. Those details matter in thinking about mindset studies.
Andrew Watson:
Yes, I think mindset is a really interesting case study for the topic we’re talking about. My impression is that teachers got super excited about growth mindset. We went to a highly simplistic “poster-on-the-wall” version of the theory.
And in the last 18 months or so, there has been a real backlash. Now we hear: “growth mindset means nothing whatsoever! Why are you wasting your time?”
We need to find our way to a nuanced middle ground. No, growth mindset is not a panacea. But nothing is a panacea. At the same time, in a specific set of circumstances, mindset can help certain students in specific ways.
That balanced conclusion can be a hard place to get the conversation to go.
Dr. Lewis:
Yes, issues like that motivated us to write our paper.
If we [researchers] are able to communicate those nuances clearly, then I think we avoid these misunderstandings. It’s not that mindset is useless; instead, mindset will have a small effect under certain conditions. We should just say that.
We have a problem with the “hype cycle.”
If something is over-hyped one day, then you’re really setting people’s expectations unreasonably high. Later, when the research doesn’t meet those expectations, teachers are disappointed.
And so researchers should set expectations appropriately. Mindset is not a panacea. We shouldn’t expect enormous impacts. And that’s fine. Let’s just say that.
Andrew Watson:
I think this “hype cycle” is part of the challenge that we’re facing.
For instance, with learning styles, teachers thought that it had a lot of scientific backing. We embraced it because it was “research based.”
Now the message is: “no, research got that wrong; learning styles aren’t a thing. But here’s another research-based thing instead.”
And teachers are saying: “wait, if I shouldn’t have followed research about learning styles, why should I believe new research about new teaching suggestions?”
Dr. Lewis:
That’s a tricky problem.
One way to think about science is: science is a way of reducing uncertainty.
We had this idea about learning styles. We gathered some initial evidence about it. It seemed like a good idea for a while.
But as we continued studying it, we realized, well, maybe there is not as much good evidence as we thought.
And that’s part of the scientific process. I think it’s important to explain that.
But: that shift without an explanation naturally leads teachers to be suspicious.
Teachers think: “Why are you telling me to just make this change? You have to explain to me what is going on and why I should make that change.”
This explanation does take more time. But that’s what is necessary to get people to update their understanding of the world.
Something that we all have to keep in mind: just as every year teachers are learning new ways to teach the new generations of students, scientists are doing the same thing too. We’re constantly trying to update our knowledge.
So there will be changes in the recommendations over time. If there weren’t changes, none of us would be doing our best. So we’re learning and improving constantly.
But we have to have that conversation. How are we updating our knowledge? And what are ways that we can implement that new knowledge into curriculum?
And, the conversation has to go both ways. Researchers communicate things to teachers, but teachers also need to be telling things to researchers. So we can keep that real classroom context in mind as we’re developing research advice.
Andrew Watson:
In your article, you and Dr. Wai remind researchers that they’re not communicating with one undifferentiated public. They are talking with many distinct, smaller audiences—audiences which have different interests and needs.
Are there difficulties that make it especially hard to communicate with teachers about psychology research? Is there some way that we’re an extra challenging audience? Or maybe, an especially easy audience?
Dr. Lewis:
I think what’s hard for presenters is not knowing details about the audience, where they’re coming from. That section of the paper is really about getting to know your audience, and tailoring your message from there.
If I’m going to go explain psychology findings to a group of STEM teachers, that talk might be different than if the audience is a broader cross-section of teachers.
In the university setting, it’s easier to figure out those distinctions because you know which department invited you to speak.
In broader K-12 settings you don’t always know. A school district invites you. You can do some Googling to try to figure something out about the district. But you don’t know who’s going to be in the room, and what is happening [in that district]. So you might end up giving too broad a talk, one that’s less informative than if you had gotten more information.
Andrew Watson:
Are there questions I haven’t asked that I ought to have asked?
Dr. Lewis:
The key point for me is: when we communicate about science in the world, we really have to look at key research details and have serious conversations about them. Nuances matter, and we just can’t gloss over them.
Andrew Watson:
Dr. Lewis, I very much appreciate your taking the time to talk with me today.
When I attended my first Learning and the Brain conference, more than a decade ago, I had a simple plan:
Step 1: Listen to the researcher’s advice.
Step 2: Do what the researcher told me to do.
Step 3: Watch my students learn more.
Step 4: Quietly glow in the satisfaction that my teaching is research-based.
In fact, I tried to follow that plan for several years. Only gradually did I discover that it simply couldn’t work.
Why?
Because researchers’ advice almost always applies to a very specific, narrow set of circumstances.
The teaching technique they use to help — say — college students learn calculus might not help my 10th graders write better Macbeth essays.
Or: their teaching strategy encourages a technology that my Montessori school forbids.
Or: research on American adolescents might not yield results that help teens raised in other cultures.
In other words: psychology and neuroscience research don’t provide me a handy checklist. I don’t just need to change what I do; I need to change how I think. I really wish someone had said to me:
“Before you change your teaching, change your thinking.”
Example the First
I thought of this advice when I saw a recent Twitter post by Otto Warman (@MrOWarman), a math teacher in Britain.
Warman has gone WAY beyond following a researcher’s checklist. Instead, he has synthesized an impressive amount of research, and reorganized it all into a lesson-planning system that works for him.
As you can see, his lesson plan form (which he has generously shared) prompts him to begin class with retrieval practice, then to introduce new information, then to check for understanding, and so forth.
Each circle and slice of the diagram includes helpful reminders about the key concepts that he’s putting into action.
That is: he’s not simply enacting someone else’s program in a routinized way. He has, instead, RETHOUGHT his approach to lesson planning in order to use research-supported strategies most appropriately and effectively.
To be clear: I DO NOT think you should print up this sheet and start using it yourself. That would be a way to change what you do, not necessarily a way to change what you think. The strategies that he has adopted might not apply to your students or your subject.
Instead, I DO THINK you should find inspiration in Warman’s example.
What new lesson plan form would you devise?
Are there cognitive-science concepts you should prioritize in your teaching?
Will your students benefit especially from XYZ, but not so much from P, Q, or R?
The more you reorganize ideas to fit your particular circumstances, the more they will help your teaching and your students.
Example the Second
Over on his blog (which you should be reading), Adam Boxer worries that we might be making a mess of retrieval practice.
Done correctly, retrieval practice yields all sorts of important benefits. Done badly, however, it provides few benefits. And takes up time.
For that reason, he explains quite specifically how his school has put retrieval practice to work. As you’ll see when you review his post, this system probably won’t work if teachers simply go through the steps.
Instead, we have to understand the cognitive science behind retrieval practice. Why does it work? What are the boundary conditions limiting its effectiveness? How do we ensure that the research-based practice fits the very specific demands of our classes, subjects, and students?
Retrieval practice isn’t just something to do; it’s a way to think about creating desirable difficulty. Without the thinking, the doing won’t help.
To Sum Up
What’s the best checklist for explaining a concept clearly? There is no checklist: think differently about working memory and schema theory.
What’s the best daily schedule for a school? There is no best schedule: think differently about attention.
What steps are most powerful in helping students manage stress? Before we work through any steps, we have to think differently about students’ emotional and cognitive systems.
To-do lists are straightforward and easy. Teaching is complex and hard. Think different.
We teachers get LOTS of advice from cognitive science. Research tells us to…
…monitor and manage our students’ stress levels.
…use mid-class exercise to enhance attention.
…interleave topics to create desirable difficulties.
Each individual suggestion has lots of research behind it, and we’re glad to get these ideas.
But: what happens when we start thinking about combinations?
If we use more than one technique at a time, do the benefits add onto each other? Do they conflict with each other? How can we link up research-informed strategies to create the best overall learning experience?
Focus on Memory
In the last ten years, we’ve seen a real explosion in research about long-term memory formation (aka, learning).
We’ve seen that retrieval practice takes no more time than simple review, but results in lots more learning.
We’ve seen that spreading practice out (aka, spacing) helps students learn better than bunching practice together.
So, here’s the big question: what happens if we do both? Is retrieval practice + spacing more powerful than each technique by itself? Is 1+1 greater than 2?
In this study, researchers developed a complex study paradigm that created both retrieval practice and spacing. Unlike some retrieval practice exercises — which simply require students to try to remember the answer — this paradigm required students to get questions correct before they were done.
Researchers called this combination successive relearning. Students used successive relearning for some of the topics they learned in an advanced biopsychology course. They used their regular (“business-as-usual”) study techniques for the others.
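For readers who like to see the logic laid out, here’s a minimal sketch of that “retrieve until correct, across spaced sessions” idea. To be clear: this is my own illustration, not the researchers’ materials; the function names, the 60% recall odds, and the three-session schedule are all invented for the example.

```python
import random

def quiz(item: str) -> bool:
    """Stand-in for a real retrieval attempt; here, a weighted coin flip."""
    return random.random() < 0.6  # invented: assume ~60% recall odds per attempt

def relearning_session(items):
    """Cycle through items until each one is retrieved correctly."""
    remaining = list(items)
    while remaining:
        item = remaining.pop(0)
        if not quiz(item):          # retrieval failed...
            remaining.append(item)  # ...so the item comes back around

def successive_relearning(items, sessions=3):
    """Run several spaced sessions: retrieval practice plus spacing."""
    for day in range(sessions):
        print(f"Session {day + 1} (spaced out, e.g., across days)")
        relearning_session(items)

successive_relearning(["synapse", "amygdala", "Fourier transform"])
```

The key point of the design: items that aren’t retrieved correctly cycle back within a session (that’s the retrieval practice), and whole sessions repeat after a delay (that’s the spacing).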
Did successive relearning help students learn?
Answers, and More Questions
To some degree, the answer to that question is: it depends on what you compare to what.
Should the researchers compare this year’s students to last year’s students in the same course?
Should they compare students’ memory of topics where they did use successive relearning to topics where they didn’t?
Being thorough, this research team compared lots of variables to lots of other variables. Quite consistently, they found that “successive relearning” helped.
That is: this year’s students learned more than last year’s. Among this year’s students, successive relearning helped students remember more than their “business-as-usual” strategies.
Roughly speaking, students remembered at least 10% more using successive relearning than other strategies.
So: 1 + 1 = +10%
Case closed.
Case Reopened?
But wait just a minute here.
If you’ve got a good memory, this blog post might be ringing some bells.
Back in February of this year, I wrote about a study in which spacing helped students learn, but — in the long run — retrieval practice didn’t.
If you’ve got an AMAZING memory, you might recall a blog post from June of 2018. Researchers polled students about their study habits. They found that students did benefit from their own retrieval practice, but did not benefit from their spacing — the exact opposite result.
What’s going on here? Why did the two studies differ from each other? Why are they different from this study I’ve just described?
My hypothesis: specifics matter.
In those previous studies, the classes already included one of these techniques.
That is: the study I wrote about in February looked at a math class. Math classes already include lots of retrieval practice, because problem solving is a kind of RP. So, additional retrieval practice didn’t help. But the spacing did.
And, in the class I wrote about in 2018, the professor’s syllabus already included lots of spacing: cumulative review sheets and cumulative exams. So additional spacing done by the students didn’t help. But their retrieval practice did.
In this most recent study, students benefitted from both because the biopsychology course didn’t include either.
In other words: the best combination of retrieval practice and spacing will depend — in part — on the structure and content of the course you’re teaching.
Final Thoughts
Here’s how I concluded my post back in February:
In my own view, we can ask/expect our students to join us in retrieval practice strategies. Once they reach a certain age or grade, they should be able to make flashcards, or use Quizlet, or test one another.
However, I think spacing requires a different perspective on the full scope of a course. That is: it requires a teacher’s perspective. We have the long view, and see how all the pieces best fit together.
For those reasons, I think we can (and should) ask students to do retrieval practice (in addition to the retrieval practice we create). But, we ourselves should take responsibility for spacing. We — much more than they — have the big picture in mind. We should take that task off their to-do list, and keep it squarely on ours.
That’s an opinion, not a research conclusion. But I still think it’s true.