
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Meet the Keynotes: Chloé Valdary
Andrew Watson

“The Theory of Enchantment is a social-emotional learning program that teaches individuals how to develop character, develop tools for resiliency…but more importantly, to learn how to love oneself.”

Intrigued?

Meet Chloé Valdary in this TED Talk, and at our conference, November 7-8.

https://www.youtube.com/watch?v=dB7gsp_zDZc

“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply?
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed only the black-and-white diagram to the left; for the other half, the video showed the colors and dots.

Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students who watched either video learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seemed to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: whether you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

Meet the Keynotes: Mary Helen Immordino-Yang
Andrew Watson

If you’re as excited for our November conference as I am, you might want to know more about our speakers.

Mary Helen Immordino-Yang is an affective neuroscientist and an educational psychologist.

That means: she studies how “children’s emotional and social relationships shape their LEARNING, and also shape the BRAIN DEVELOPMENT that undergirds their learning.”

Yes: her work is that interesting.

https://www.youtube.com/watch?v=DEeo350WQrs

I got to interview Dr. Immordino-Yang back in 2018; she’s practical and funny and insightful. And she KNOWS SO MUCH.

You can read more here.

If you want to learn more about Rebuilding SEL Skills in the Age of COVID-19, we hope you’ll join us, and Dr. Immordino-Yang.

“If I Want My Students to Learn Math, Should I Teach Them More Math?”
Andrew Watson

We all agree, I suspect, that students should learn math. And reading. They should learn history. And science. SO MANY other topics.

What’s the best way to meet these goals?

If I want my students to learn math, is math teaching the best way to go? If I want them to understand history, should I teach more history?

Or, instead, is there a handy shortcut?

If I could help students improve their reading by teaching something other than reading, that alternate approach just might be more efficient and motivating.

In fact, two candidates get lots of attention as “alternative approaches.” If either or both pan out, they would offer us more choices. Maybe even a higher chance of success.

Music and Math

I don’t remember where I first heard that music education improves math learning. Specifically: learning to play the violin ultimately makes students better at learning calculus.

The explanation focused on “strengthened neural circuits” “repurposed” for “higher cognitive function.” Something like that. That string of words sounded quite impressive, and inclined me to believe.

Given the complexity of calculus, that would be really helpful!

But: is it true?

A recent meta-analysis looked at 54 relevant studies, including just under 7,000 participants.

Their findings? Let me quote key points from their summary:

Music training has repeatedly been claimed to positively impact children’s cognitive skills and academic achievement (literacy and mathematics).

This claim relies on the assumption that engaging in intellectually demanding activities fosters particular domain-general cognitive skills, or even general intelligence.

The present meta-analytic review shows that this belief is incorrect.

Once the quality of study design is controlled for, the overall effect of music training programs is null.

It gets worse:

Small statistically significant overall effects are obtained only in those studies implementing no random allocation of participants and employing non-active controls.

In other words: you get this result only if the study isn’t correctly designed.

And worse:

Interestingly, music training is ineffective regardless of the type of outcome measure (e.g., verbal, non-verbal, speed-related, etc.), participants’ age, and duration of training.

That is: no matter what you measure, the answer is still “no.”

Violin training sure strengthened some neural circuits. But that additional strength doesn’t get “repurposed for ‘higher’ cognitive function.”

If I want my students to learn math, I should teach them math.

Chess and Intelligence

If you watch The West Wing, you know that President Bartlet is smarter than everyone else because he won a Nobel Prize, and he plays chess frequently. He says things like “rook takes queen in five.” And then Leo nods appreciatively.

So smart.

It might be true that being smart makes you better at chess. (Although, Anders Ericsson says “no.”)

Is it true that playing chess makes you smarter? If we want our students to learn math and reading and science, should we teach them more chess? Would some neural circuitry get repurposed?

Let’s go to the tape:

In contrast to much of the existing literature, we find no evidence of an effect of chess instruction upon children’s mathematics, reading or science test scores.

In this case, by the way, the “tape” is a randomized controlled trial with more than 4,000 students in it. So: that result seems impressively well established.

So far, it seems that if I want my students to be better at X, I should teach them X. Teaching them Y and hoping that Y makes them better at X hasn’t panned out well…

Social Studies and Reading

Reading might be an interesting exception to this rule. On the one hand, reading is a skill that students must acquire.

And, at the same time, they have to apply the skill of reading to the content being read. The more that students know about the content, maybe the better they’ll do at reading.

In any case, that’s a plausible hypothesis.

A recently released report from the Thomas B. Fordham Institute crunches the numbers, and finds that additional time devoted to social studies instruction ultimately improves reading scores.

Two key sentences from the executive summary:

Instead of devoting more class time to English language arts, we should be teaching elementary school children more social studies — as in, rich content about history, geography, and civics.

Literacy gains are more likely to materialize when students spend more time learning social studies.

In fact, they find that social studies instruction most benefits students from lower-income households, and from non-English speaking homes.

For a variety of reasons, this study looks at correlation, and so can’t demonstrate causation.

However, the underlying theory makes sense. If students can decode the sounds of the words “Berlin” and “Wall,” but don’t know the geography of Germany or cold-war history, they’re unlikely to make much sense of a reading passage about that in/famous border.

In Sum

Students improve at the skills they practice. Those skills — alas — rarely transfer to distant, unrelated disciplines.

To help students learn math, teach them math. To help them read, teach them to read — and also about the scientific, historical, geographic, and philosophical concepts that make reading so important and so worthwhile.

How Psychologists and Teachers Can Talk about Research Most Wisely
Andrew Watson

Dr. Neil Lewis thinks a lot about science communication: in fact, his appointment at Cornell is in both the Psychology AND the Communications departments. (For a complete bio, click here.)

He and Dr. Jonathan Wai recently posted an article focusing on a troubling communication paradox:

Researchers are encouraged to “give science away”; however, because of the “replication crisis,” it’s hard to know which science is worth giving.

Here at Learning and the Brain, we think about that question frequently — so I was delighted that Dr. Lewis agreed to chat with me about his article.

In this conversation, we talk about…

… how teachers can ask psychologists good questions

… the dangers of “eminence”

… what we should think about growth mindset research

… the research “hype cycle.”

I hope you enjoy this conversation as much as I did.


Andrew Watson:

Thank you, Dr. Lewis, for sharing your ideas with our readers.

In your recent article, you and Dr. Wai write about tensions between two imperatives in the field of psychology.

First, psychologists are being asked to “give research away.” And second, our field worries about the “replication crisis.”

Both of those phrases mean more or less what they say. Could you define them a little more precisely, and talk about the tensions that these imperatives are creating?

Dr. Lewis:

There has been a long-standing call in psychology—going back, really, to the 1960s when George Miller first issued this call—to “give psychology away.”

As scholars, we spend our time doing all this research: we should try to communicate it with the world so that people can use it and improve lives.

Professional psychology societies and organizations really encourage researchers to “get our work out there.”

But at the same time, over the past decade or so, there has been a movement to reflect on what we really know in psychology.

A “replication crisis” has occurred—not only in psychology, it’s been happening in many areas.

We are having a hard time replicating many research findings. And that [failure] is making us, the scientists, wrestle with: what do we know? How do we know it? How robust are some of our findings?

And so there’s a tension here. We’re supposed to be “giving our findings away,” but at the same time we’re not sure which ones are robust enough to be worth giving away.

Andrew Watson:

That does sound like a problem. In that tension, do you see any special concerns about the field of education?

Dr. Lewis:

One of the things I’ve been thinking about for education researchers is: how do we know what we know? We have to look very closely at the details of the paper to figure those things out.

Which students are being studied in the papers you’re reading?

What kinds of schools?

What kind of teachers?

At least in the US, there’s so much segregation in our school systems that schools look very different.

If studies are run—let’s say—with kids in the Ithaca school district where I live in upstate New York: those kids, those parents, those schools are very different than studies run—let’s say—in the Detroit public school district, which is the district I thought a lot about during my graduate training when I lived in Michigan.

There are big differences between these districts. We have to figure out: are the schools that we’re trying to intervene in, similar to the studies that were run? Or are they different?

Andrew Watson:

I have a question about that process.

Here’s a problem: to know what questions teachers ought to be asking, we need expert knowledge. Because we’re teachers, not psychologists, it’s hard to know the right questions.

So: what’s the best question that a nonspecialist teacher can ask of a researcher, in order to get an answer that we can genuinely understand?

Dr. Lewis:

I think there are some basic things that teachers can ask of researchers.

The teachers can ask what kinds of schools were these studies run in. Are they urban schools, rural schools?

What percentage of the students are on free lunch? (That’s an indicator of poverty levels of the school. Research findings are often influenced by background characteristics about the students.)

What do we know about the kinds of students that were involved in studies?

What do we know about the teachers?

Those are basic things that the researchers should be able to tell you. And then you can figure out whether those are similar to:

the students that you’re working with,

the kinds of schools that you have,

the kind of leadership in your school district, and the like.

Those basic characteristics about how the study was done will help you figure out whether or not you can use it.

Andrew Watson:

I spend a lot of time talking with teachers about this concern. Most psychology research is done with college undergraduates. That research is obviously important. But if you’re teaching reading to third graders, maybe that research translates to your context and maybe it doesn’t.

Dr. Lewis:

Right.

Andrew Watson:

One of the more intriguing points you made in the article has to do with the idea of eminence.

In the world of education, we’re often drawn to Big Names. You argue that the things scholars do to achieve eminence don’t necessarily help them produce high quality research.

As teachers, how do we sort through this paradox? How can we be wise when we think about that?

Dr. Lewis:

We brought up eminence to reinforce what I just noted. Look at the details of the study and don’t rely on the “cue” of eminence as your signal that research must be good.

Researchers are judged by many metrics. Once you put those metrics in place, people do what they can to… I hesitate to use the word “game,” but to optimize their standing in those metrics.

Andrew Watson:

Which is a lot like “gaming,” isn’t it?

Dr. Lewis:

Yes. In the research world, there are a few metrics that don’t necessarily help [produce meaningful results]. One of them, for instance, is that researchers are incentivized to publish as much as we can.

Unfortunately, publishing fast is the way to rise up the ranks. But sometimes figuring out these differences that I have been talking about—like, between contexts and samples—it takes some time. It slows you down from churning out papers; and unfortunately, researchers often aren’t incentivized to take that slower, more careful approach.

And so there’s that tension again too. I don’t want to leave the impression that we just shouldn’t trust eminent people. That’s not the point I want to make.

The point is: eminence in and of itself is not a useful signal of quality. You have to look very closely at the details of the studies in front of you. Then compare those details to your own situation and judge the work on that. Judge the work, don’t judge based on how famous the person is.

Andrew Watson:

It occurs to me as you’re explaining this, there’s a real problem with the emphasis on rapid publication. One of the consistent findings in education research is that short-term performance isn’t a good indicator of long-term learning.

But if scholars are incentivized to publish quickly, they’re incentivized to study the short term, which doesn’t tell us much about what we really want to know: learning that lasts.

Dr. Lewis:

Absolutely right. As I’ve written in other articles, we don’t have enough longitudinal studies for the very reasons we’re talking about: longitudinal studies take forever—and, again, the incentive is to publish fast, publish often.

The outcomes that are often measured in psychology studies are these shorter-term things. You have the student do something, and you measure at the end of the session. Maybe you look again at the end of the semester.

But [we should] look next year, two years, three years, because we know some of these effects take time to accumulate.

Some older studies have looked at long-term outcomes. I’ve seen a few fascinating studies showing, initially, no significant findings. But if you look far enough down the road, you start to see meaningful effects. It just takes time for the benefits to accumulate.

In education, we shouldn’t assume that research results “generalize.” [Editor: That is, we shouldn’t assume that research with 1st graders applies to 10th graders; or that short term findings will also be true in the long term.]

Now, until I see more evidence, I assume findings are context-specific. [Editor: That is, research with 1st graders applies to 1st graders—but not much beyond that age/grade. Research from the United States applies to the US cultural context, but not—perhaps—to Korea.]

For instance: “growth mindset.” In recent studies, authors have been looking at how much the effect varies by context and by population. Those details matter in thinking about mindset studies.

Andrew Watson:

Yes, I think mindset is a really interesting case study for the topic we’re talking about. My impression is that teachers got super excited about growth mindset. We went to a highly simplistic “poster-on-the-wall” version of the theory.

And in the last 18 months or so, there has been a real backlash. Now we hear: “growth mindset means nothing whatsoever! Why are you wasting your time?”

We need to find our way to a nuanced middle ground. No, growth mindset is not a panacea. But nothing is a panacea. At the same time, in a specific set of circumstances, mindset can help certain students in specific ways.

That balanced conclusion can be a hard place to get the conversation to go.

Dr. Lewis

Yes, issues like that motivated us to write our paper.

If we [researchers] are able to communicate those nuances clearly, then I think we avoid these misunderstandings. It’s not that mindset is useless; instead, mindset will have a small effect under certain conditions. We should just say that.

We have a problem with the “hype cycle.”

If something is over-hyped one day, then you’re really setting people’s expectations unreasonably high. Later, when the research doesn’t meet those expectations, teachers are disappointed.

And so researchers should set expectations appropriately. Mindset is not a panacea. We shouldn’t expect enormous impacts. And that’s fine. Let’s just say that.

Andrew Watson:

I think this “hype cycle” is part of the challenge that we’re facing.

For instance, with learning styles, teachers thought that it had a lot of scientific backing. We embraced it because it was “research based.”

Now the message is: “no, research got that wrong; learning styles aren’t a thing. But here’s another research-based thing instead.”

And teachers are saying: “wait, if I shouldn’t have followed research about learning styles, why should I believe new research about new teaching suggestions?”

Dr. Lewis:

That’s a tricky problem.

One way to think about science is: science is a way of reducing uncertainty.

We had this idea about learning styles. We gathered some initial evidence about it. It seemed like a good idea for a while.

But as we continued studying it, we realized, well, maybe there is not as much good evidence as we thought.

And that’s part of the scientific process. I think it’s important to explain that.

But: that shift without an explanation naturally leads teachers to be suspicious.

Teachers think: “Why are you telling me to just make this change? You have to explain to me what is going on and why I should make that change.”

This explanation does take more time. But that’s what is necessary to get people to update their understanding of the world.

Something that we all have to keep in mind: just as every year teachers are learning new ways to teach the new generations of students, scientists are doing the same thing too. We’re constantly trying to update our knowledge.

So there will be changes in the recommendations over time. If there weren’t changes, none of us would be doing our best. So we’re learning and improving constantly.

But we have to have that conversation. How are we updating our knowledge? And what are ways that we can implement that new knowledge into curriculum?

And, the conversation has to go both ways. Researchers communicate things to teachers, but teachers also need to be telling things to researchers. So we can keep that real classroom context in mind as we’re developing research advice.

Andrew Watson:

In your article, you and Dr. Wai remind researchers that they’re not communicating with one undifferentiated public. They are talking with many distinct, smaller audiences—audiences which have different interests and needs.

Are there difficulties that make it especially hard to communicate with teachers about psychology research? Is there some way that we’re an extra challenging audience? Or maybe, an especially easy audience?

Dr. Lewis:

I think what’s hard for presenters is not knowing details about the audience, where they’re coming from. That section of the paper is really about getting to know your audience, and tailoring your message from there.

If I’m going to go explain psychology findings to a group of STEM teachers, that talk might be different than if the audience is a broader cross-section of teachers.

In the university setting, it’s easier to figure out those distinctions because you know which department invited you to speak.

In broader K-12 settings you don’t always know. A school district invites you. You can do some Googling to try to figure something out about the district. But you don’t know who’s going to be in the room, and what is happening [in that district]. So you might end up giving too broad a talk, one that might be less informative than if you did get some more information.

Andrew Watson:

Are there questions I haven’t asked that I ought to have asked?

Dr. Lewis:

The key point for me is: when we communicate about science in the world, we really have to look at key research details and have serious conversations about them. Nuances matter, and we just can’t gloss over them.

Andrew Watson:

Dr. Lewis, I very much appreciate your taking the time to talk with me today.

Dr. Lewis:

Thank you.

“Before You Change Your Teaching, Change Your Thinking”
Andrew Watson

When I attended my first Learning and the Brain conference, more than a decade ago, I had a simple plan:

Step 1: Listen to the researcher’s advice.

Step 2: Do what the researcher told me to do.

Step 3: Watch my students learn more.

Step 4: Quietly glow in the satisfaction that my teaching is research-based.

In fact, I tried to follow that plan for several years. Only gradually did I discover that it simply couldn’t work.

Why?

Because researchers’ advice almost always applies to a very specific, narrow set of circumstances.

The teaching technique they use to help — say — college students learn calculus might not help my 10th graders write better Macbeth essays.

Or: their teaching strategy encourages a technology that my Montessori school forbids.

Or: research on American adolescents might not yield results that help teens raised in other cultures.

In other words: psychology and neuroscience research don’t provide me a handy checklist. I don’t just need to change what I do; I need to change how I think. I really wish someone had said to me:

“Before you change your teaching, change your thinking.”

Example the First

I thought of this advice when I saw a recent Twitter post by Otto Warman (@MrOWarman), a math teacher in Britain.

Warman has gone WAY beyond following a researcher’s checklist. Instead, he has synthesized an impressive amount of research, and reorganized it all into a lesson-planning system that works for him.

As you can see, his lesson plan form (which he has generously shared) prompts him to begin class with retrieval practice, then to introduce new information, then to check for understanding, and so forth. (You can click on the image to expand it.)

Each circle and slice of the diagram includes helpful reminders about the key concepts that he’s putting into action.

That is: he’s not simply enacting someone else’s program in a routinized way. He has, instead, RETHOUGHT his approach to lesson planning in order to use research-supported strategies most appropriately and effectively.

To be clear: I DO NOT think you should print up this sheet and start using it yourself. That would be a way to change what you do, not necessarily a way to change what you think. The strategies that he has adopted might not apply to your students or your subject.

Instead, I DO THINK you should find inspiration in Warman’s example.

What new lesson plan form would you devise?

Are there cognitive-science concepts you should prioritize in your teaching?

Will your students benefit especially from XYZ, but not so much from P, Q, or R?

The more you reorganize ideas to fit your particular circumstances, the more they will help your teaching and your students.

Example the Second

Over on his blog (which you should be reading), Adam Boxer worries that we might be making a mess of retrieval practice.

Done correctly, retrieval practice yields all sorts of important benefits. Done badly, however, it provides few benefits. And takes up time.

For that reason, he explains quite specifically how his school has put retrieval practice to work. As you’ll see when you review his post, this system probably won’t work if teachers simply go through the steps.

Instead, we have to understand the cognitive science behind retrieval practice. Why does it work? What are the boundary conditions limiting its effectiveness? How do we ensure that the research-based practice fits the very specific demands of our classes, subjects, and students?

Retrieval practice isn’t just something to do; it’s a way to think about creating desirable difficulty. Without the thinking, the doing won’t help.

To Sum Up

What’s the best checklist for explaining a concept clearly? There is no checklist: think differently about working memory and schema theory.

What’s the best daily schedule for a school? There is no best schedule: think differently about attention.

What steps are most powerful to help students manage stress? Before we work through the steps, we have to think differently about students’ emotional and cognitive systems.

To-do lists are straightforward and easy. Teaching is complex and hard. Think different.

“Successive Relearning”: 1 + 1 = +10%
Andrew Watson

We teachers get LOTS of advice from cognitive science. Research tells us to…

…monitor and manage our students’ stress levels.

…use mid-class exercise to enhance attention.

…interleave topics to create desirable difficulties.

Each individual suggestion has lots of research behind it, and we’re glad to get these ideas.

But: what happens when we start thinking about combinations?

If we use more than one technique at a time, do the benefits add onto each other? Do they conflict with each other? How can we link up research-informed strategies to create the best overall learning experience?

Focus on Memory

In the last ten years, we’ve seen a real explosion in research about long-term memory formation (aka, learning).

We’ve seen that retrieval practice takes no more time than simple review, but results in lots more learning.

We’ve seen that spreading practice out (aka, spacing) helps students learn better than bunching practice together.

So, here’s the big question: what happens if we do both? Is retrieval practice + spacing more powerful than each technique by itself? Is 1+1 greater than 2?

A research team at Kent State recently explored this question.

In this study, researchers developed a complex study paradigm that created both retrieval practice and spacing. Unlike some retrieval practice exercises — which simply require students to try to remember the answer — this paradigm required students to get questions correct before they were done.

Researchers called this combination successive relearning. Students used successive relearning for some of the topics they learned in an advanced biopsychology course. They used their regular (“business-as-usual”) study techniques for the others.

Did successive relearning help students learn?

Answers, and More Questions

To some degree, the answer to that question is: it depends on what you compare to what.

Should the researchers compare this year’s students to last year’s students in the same course?

Should they compare students’ memory of topics where they did use successive relearning to topics where they didn’t?

Being thorough, this research team compared lots of variables to lots of other variables. Quite consistently, they found that “successive relearning” helped.

That is: this year’s students learned more than last year’s. Among this year’s students, successive relearning helped students remember more than their “business-as-usual” strategies.

Roughly speaking, students remembered at least 10% more using successive relearning than other strategies.

So: 1 + 1 = +10%

Case closed.

Case Reopened?

But wait just a minute here.

If you’ve got a good memory, this blog post might be ringing some bells.

Back in February of this year, I wrote about a study in which spacing helped students learn, but — in the long run — retrieval practice didn’t.

If you’ve got an AMAZING memory, you might recall a blog post from June of 2018. Researchers polled students about their study habits. They found that students did benefit from their own retrieval practice, but did not benefit from their spacing — the exact opposite result.

What’s going on here? Why did the two studies differ from each other? Why are they different from this study I’ve just described?

My hypothesis: specifics matter.

In those previous studies, the classes already included one of these techniques.

That is: the study I wrote about in February looked at a math class. Math classes already include lots of retrieval practice, because problem solving is a kind of RP. So, additional retrieval practice didn’t help. But the spacing did.

And, in the class I wrote about in 2018, the professor’s syllabus already included lots of spacing: cumulative review sheets and cumulative exams. So additional spacing done by the students didn’t help. But their retrieval practice did.

In this most recent study, students benefitted from both because the biopsychology course didn’t include either.

In other words: the best combination of retrieval practice and spacing will depend — in part — on the structure and content of the course you’re teaching.

Final Thoughts

Here’s how I concluded my post back in February:

In my own view, we can ask/expect our students to join us in retrieval practice strategies. Once they reach a certain age or grade, they should be able to make flashcards, or use Quizlet, or test one another.

However, I think spacing requires a different perspective on the full scope of a course. That is: it requires a teacher’s perspective. We have the long view, and see how all the pieces best fit together.

For those reasons, I think we can (and should) ask students to do retrieval practice (in addition to the retrieval practice we create). But, we ourselves should take responsibility for spacing. We — much more than they — have the big picture in mind. We should take that task off their to-do list, and keep it squarely on ours.

That’s an opinion, not a research conclusion. But I still think it’s true.

The Benefits of “Testing” Depend on the DEFINITION of “Testing.” And the TIMING. And…
Andrew Watson
Andrew Watson

Whenever I want to start a food fight at a faculty meeting (admit it: you know the feeling), I contemplate shouting: “What do we all think about testing?”

Almost certainly, several colleagues will decry the use of discriminatory high-stakes tests that stress and mis-categorize our students.

Others will angrily retort that research into retrieval practice shows that well-designed tests help students learn. Heck, “retrieval practice” yields great benefits because of “the testing effect.”

Still others will heatedly contrast formative assessment with summative assessment.

Like the Goddess of Discord with my Golden Apple, I will have inspired an enduring battle.

Of course, this battle results from confusion about definitions.

Scholars who champion retrieval practice or formative assessment might be said to “favor testing,” but they favor very specific kinds of testing. The testing they champion (probably) looks nothing like the high-stakes test that my first respondents so loathe.

In other words: my colleagues don’t necessarily disagree with each other. Because this one word has different meanings, they’re probably arguing about different topics without realizing it.

Confusion Gets Clearer

Even if I narrow my question to “pre-testing,” I’ve still created many opportunities for confusion.

I might, for instance, “pre-test” my students about the myth of the Golden Apple because I want to know what they already know.

My goal, in other words, isn’t to evaluate them. Instead, I want to align my lessons with their current knowledge. After all, I need one lesson plan for a class which has never heard of Zeus, Aphrodite, or Troy; and a completely different lesson plan for a class that read The Iliad last year.

I’m “pre-testing” as an early step in my own lesson planning.

On the other hand, we have a fascinating research pool suggesting that “pre-testing” itself might help students learn. That is: the act of taking that pretest can improve their ultimate understanding of the material later in the unit.

Amazingly, according to this research pool, these pre-tests benefit students even if they get all the answers wrong. (Of course they get the answers wrong. They haven’t yet done the unit with me.)

For instance, Dr. Lindsay Richland and Co. pre-tested students on a passage about color-blindness. They found that students who took a pretest (and got all the answers wrong) ultimately learned more than students who used that extra time to study the passage. (As I’ve written earlier, I love this study because Richland works SO HARD to disprove her own hypothesis.)

Getting the Specifics JUST RIGHT

So far, we’ve seen that the benefits of testing depend on the definition of testing. While we know that some tests stress and demotivate students, we’ve also got research suggesting that a very specific kind of pre-testing might help students learn.

Of course, we know that psychology research always includes boundary conditions. A teaching technique that works in one set of circumstances might not work in others.

So, for instance, a teaching technique that helps 3rd graders learn might not help college students. Or, a strategy to help students write well might not help them learn to pirouette in dance class.

We know that there will be boundaries for this (very specific kind of) pre-testing. What are they?

As is so often the case, this question has led to complexity and controversy. For instance: several scholars have found that pretesting helps students make new connections only if they already know a little bit about the material.

A study from 2014, however, suggests that pre-testing helps students even if they don’t know anything about the material.

For teachers, this distinction matters.

If students need at least a little prior knowledge for pretesting to be helpful, we will use this technique less often. If, however, they don’t need that prior knowledge, then our decision to limit its use robs them of a chance to learn.

To use this technique correctly, we really need to know the right answer.

Today’s Research: Activating Prior Knowledge

A recent study, led by Dr. Tina Seabrooke, tries to sort through this intricate and important question. In fact, because the details require such nuanced distinctions, she ended up running five different experiments. Only by considering all five together could she and her team reach a conclusion.

Of course, with five different experiments underway, Seabrooke’s work has lots of nooks and subtleties to explore. Rather than tour them all, let me cut to the chase:

TL; DR

Pre-testing will probably help students learn and understand a topic better if they already know something about it.

If students don’t know anything about the subject, pretests don’t help much.

More specifically: pretests might help them remember some words from the questions, and some words from the answers. But — crucially — pretests won’t help students make connections between the questions and the answers.

Of course, we really want students to make those connections. Any definition of “understanding” a topic will include the ability to answer meaningful questions about it.

You might think about pretesting this way:

Pretests help students activate useful prior knowledge. If they don’t have relevant prior knowledge, then pretests don’t have anything useful to draw upon.

Final Word

Research into “pretesting” is still ongoing, and we’re still learning new and useful information.

I suggest that teachers use this technique from time to time as a way to activate prior knowledge. I wouldn’t require it as part of my daily routine.

And: I would keep my eyes peeled for the next relevant study. We’ve still got lots more to learn on this subject…

Can We Improve Our Students’ Executive Function? Will That Help Them Read Better?
Andrew Watson
Andrew Watson

Here’s a fun exercise. I’m going to give you a list of words. You try to sort them into two groups, based on the sound they begin with:

cup, bag, bread, can, box, cookie, cake, bucket, corn, beans, crate, banana

Presumably you came up with /k/ sounds and /b/ sounds:

cup, can, crate, cookie, cake, corn

beans, bread, banana, box, bag, bucket

Okay, now go back and RESORT those words into two groups, based on the category they belong to.

Presumably you came up with containers and foods:

cup, can, crate, box, bag, bucket

cookie, cake, corn, beans, banana, bread

If you succeeded, BRAVO! You demonstrated cognitive flexibility: an executive function that allows you to change your thought process mid-stream.
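If you like, you can think of the rule switch in that exercise the way a programmer would: same items, two different sorting keys. Here’s a minimal Python sketch of the two rules (the initial-sound rule is simplified to “starts with c = /k/,” which happens to hold for every word in this particular list, and the container/food labels are supplied for the illustration):

```python
words = ["cup", "bag", "bread", "can", "box", "cookie",
         "cake", "bucket", "corn", "beans", "crate", "banana"]

# Rule 1: group by initial sound (/k/ vs. /b/).
by_sound = {"/k/": [], "/b/": []}
for w in words:
    key = "/k/" if w.startswith("c") else "/b/"
    by_sound[key].append(w)

# Rule 2: RE-sort the same words by category (container vs. food).
# This is the "cognitive flexibility" step: new rule, same items.
containers = {"cup", "can", "crate", "box", "bag", "bucket"}
by_category = {"containers": [], "foods": []}
for w in words:
    key = "containers" if w in containers else "foods"
    by_category[key].append(w)
```

The point of the exercise (for humans, not computers) is that switching from Rule 1 to Rule 2 requires inhibiting the first sorting key and adopting a new one mid-stream.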

Believe it or not, we have to learn this particular skill.

In the video below, for instance, 3-year-olds sort cards according to their color (“red goes here, blue goes there”). They’re usually good at that. However, when the rules change to focus on shape (“trucks go here, flowers go there”), they struggle to follow the different instructions.

https://www.youtube.com/watch?v=tXZau5VIIvU

Why? Because they haven’t yet developed the executive function of cognitive flexibility.

New Research: Improving Reading

For a number of reasons, we might think that this general executive function (cognitive flexibility) might support a specific academic skill (reading).

If that’s true, then maybe we can help struggling readers by training their cognitive flexibility. (This possibility relies on several assumptions; the scholars who did this work have lots of research supporting each one.)

To test this possibility, Kelly Cartwright & Co. had teachers spend several weeks training a group of 2nd–5th graders in cognitive flexibility.

Basically, those students repeated that word-sorting/resorting exercise you did at the top of this post. And, they tried a more complicated fill-in-the-blank version of that task as well.

The results?

Compared with other struggling readers, these students got better at cognitive flexibility. And — here’s the big news — they got better at reading as well. (More specifically: they didn’t get better at individual word recognition, but they got better at reading comprehension and grade level reading.)

So, in this research, Cartwright’s team found that training a particular executive function helps struggling readers do better.

Technically speaking, that’s awesome.

As Always, the Caveats

First: as Dan Willingham says in his Twitter bio, “One study is just one study, folks.” Even if Cartwright and Co. did everything right, it’s possible their results are a fluke. We won’t know until many other scholars succeed in replicating and extending this finding.

Second: We shouldn’t extrapolate too far based on this study. We don’t know if training other executive functions would help struggling readers. We don’t know if training EF benefits typical readers, helps people first learning to read, or improves the performance of sophisticated readers.

Those questions are important — but not addressed directly by this research.

Third: Both reading instruction and executive function are hotly controversial topics. (Heck, I wrote a post about a month ago questioning the very idea of a “general” executive function.) I wouldn’t be surprised if this research (or my summary of it) prompted stern rebukes from scholars/practitioners with a different understanding of reading/EF processes.

I wouldn’t even be surprised if those stern rebukes were correct. If you’ve got an alternative perspective (and some research behind it), please let me know.

But, with those caveats in mind, this research strikes me as exciting and potentially powerful. Any strategy to help struggling readers should get our attention. One that a) costs essentially no money, b) doesn’t take very long, and c) can be done so easily might be a real boon to schools, students, and readers.

Watch this space…

The Best Length of Time for a Class [Repost]
Andrew Watson
Andrew Watson

Quite consistently, this post has been among the most searched for and most popular on the blog.

Teachers and administrators REALLY want to know: What is the optimal amount of time for our students to meet? What’s the very best schedule?

Here’s the best answer I have:


I met yesterday with several thoughtful teachers who had resonant questions about education research.


How do we balance factual learning and deep thinking?

What’s “the right amount of stress” during a test?

How can we promote collaboration while honoring individual differences?

And:

What’s the optimal class length?

This question comes up often. Should we have lots of short classes, so every subject meets every day? Should we have a few longer classes, so that we can dig deeply into a particular topic without interruption?

Debates sometimes fall along disciplinary lines. Foreign language and math teachers often want frequent class meetings; English and History teachers typically like bigger chunks of time for discussions.

Science teachers just gotta have 80 minutes to run a lab well.

But: what does research show?

Class Length: What Research Tells Us

As far as I know, we just don’t have a clear answer to that question.

Over at the Education Endowment Foundation, for example, they’ve investigated the benefits of block scheduling: that is, a few long periods rather than several short ones.

The finding: we can’t really say. Or, to quote EEF: “There is no consistent pattern in the evidence.”

More precisely:

The evidence suggests that how teachers use the time they are allocated is more important than the length of lesson or the schedule of lessons, and hence that the introduction of block scheduling is unlikely to raise attainment by itself.

By implication, a change away from block scheduling shouldn’t raise attainment either.

The point is not how long we teach but how well we teach with the time we’ve got.

For this reason, I often counsel schools and teachers: before you change your schedule, study human attention systems.

Once teachers know how attention works — and, it’s A LOT more complicated than we might have thought — we’ll be much better at helping students learn. (If you have the chance to attend a Learning and the Brain session about attention: RUN, don’t walk.)

Class Length: What Research Can’t Tell Us

Research doesn’t answer this question, I think, because it can’t. There’s no one correct answer.

If you teach 2nd graders or 7th graders or 11th graders, you’ll probably find that different lengths of time work better.

If you teach in cultures that inculcate patience and concentration, longer classes will work better than in cultures with a more get-up-and-go kind of pace.

The number of students in the class might matter.

The experience of the teacher almost certainly matters.

When your school starts investigating schedules, therefore, I suggest you start with these essentials:

First: study human attention.

Second: don’t design “the optimal schedule.” Design the optimal schedule for your school and your students. It might not work at anyone else’s school, but it doesn’t need to.

A schedule that works for you and your students is the closest to optimal that you can get.