Let’s Talk! How Teachers & Researchers Can Think and Work Together
Andrew Watson

Once you say it out loud, it’s so obvious:

Teachers benefit from learning about psychology and neuroscience.

AND, psychologists and neuroscientists (in certain fields) benefit from learning more about classroom teaching.

These beliefs inspire our conferences and seminars and summer institutes, and they motivate this blog.

However — and this is a big however — conversations among these disciplines can prove a real challenge.

Why? So many reasons…

… These conversations often start with the assumption that teachers should be junior partners in this collaborative work. (Hint: we’re equal partners.)

… Each of these disciplines — including ours — starts with its own assumptions, builds off its own traditions, and papers over its own shortcomings.

… We all use our own complex terminology and vexing acronyms. (Quick: does ToM result from activity in the vmPFC, and should we discuss it in our IEPs?)

Given all these muddles (and many more), it’s impressive these conversations happen at all.

Today’s Resource

Dr. Cindy Nebel invited me to discuss these questions for a podcast over at The Learning Scientists.

We explore all these problems, along with dual coding, working memory overload, the importance of boundary conditions, and the complexities of motivation research.

We agree about many topics, disagree about a few, and solve as many problems as possible. (As a bonus, the link has a discount code for my newest book: The Goldilocks Map: A Teacher’s Quest to Evaluate ‘Brain-Based’ Teaching Advice.)

I’ve known Dr. Nebel for several years now. She and the other Learning Scientists do great work in this translation field, and they DON’T start with the assumption that teachers are junior partners.

I hope you enjoy our conversation!

A Beacon in the Mindset Wilderness
Andrew Watson

For a few years now, I’ve been in the Mindset wilderness.

Three years ago, I spent lots of time tapping the brakes.

“Yes,” I’d say, “we do have plenty of good research behind this strategy. HOWEVER, let’s be realistic. A wall covered in upbeat slogans (‘YET!’) just isn’t going to revolutionize education.”

I got a lot of side-eyes.

In 2018, several careful scholars published a blockbuster pair of meta-analyses, throwing doubt on the whole mindset enterprise. Their grim conclusions:

First: students’ mindset has little effect on their academic performance, and

Second: mindset intervention programs don’t provide much benefit.

Suddenly, I started sounding like a mindset enthusiast.

“Yes,” I’d say, “a focus on mindset won’t revolutionize education. HOWEVER: incremental increases in motivation can add up over time. We have SO FEW strategies to help with motivation, we shouldn’t ignore the ones that provide even modest benefits.”

I got even more side-eyes.

The Stickiest Wicket

In these conversations, one point has consistently created the greatest difficulties for my position.

Several mindset researchers have championed the efficacy of “one-shot interventions.”

That is: if students experience one carefully designed mindset-reshaping experience — a webinar, a presentation, an exercise of some kind — that “one shot” alone can help them transform a fixed mindset into a growth mindset.

I gotta say: I just don’t believe that.

My doubts stem not from research, but from experience. Having taught high-school students for thousands of years, I don’t think it ever happens that telling them something once meaningfully changes anything.

I don’t doubt the integrity of the researchers or the process they use. But their conclusion defies too much of my experience (and common sense) for me to take it on board.

Rarely do I use the “my experience trumps your research” veto; in this case, I’m really tempted.

What’s That? “A Beacon,” You Say?

A soon-to-be-published study — run by several of Team Mindset’s leading scholars — offers some support for this skepticism.

These scholars asked a perfectly sensible question: “can a one-shot mindset intervention help students whose teachers demonstrate a fixed mindset?”

That is: must the classroom context echo the explicit message of that one-shot intervention?

Or — in the words of the study — can the mindset “seed” grow in inhospitable “soil”? Are students (on average) independent agents who can overcome implicit classroom messages and act on their explicit mindset training?

To answer this question, the authors reviewed data from a very large study with more than 9000 high school students.

This study takes great procedural care to get the details right: students are randomly assigned to groups; teachers don’t know which student is in which group; teachers don’t know the hypothesis of the study — and so forth.

After a one-shot intervention at the beginning of 9th grade, researchers tracked students’ math grades at the end of the year.

The researchers also asked questions to learn about the teachers’ mindsets. They wanted to know: did the teachers’ mindsets shape the students’ response to the intervention?
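For readers who like to see the statistics: a question like this is usually tested as an interaction (moderation) effect. Here is a minimal sketch in Python, with hypothetical column names and a hypothetical data file; the study’s actual models are more elaborate.

```python
# Minimal moderation sketch -- column names and file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mindset_study.csv")  # hypothetical extract of the data

# The interaction term asks: does the intervention's effect on math
# grades depend on the teacher's mindset?
model = smf.ols("math_grade ~ intervention * teacher_mindset", data=df).fit()
print(model.summary())
```

If the interaction coefficient is reliably positive, the intervention helps more when teachers hold a growth mindset; that is exactly the “seed and soil” question.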

The results?

Context Always Matters

Initially, no.

Immediately after the one-shot intervention, students who saw the growth-mindset messages expressed higher degrees of growthiness. Those in the control condition did not. And the teachers’ mindsets didn’t influence those early results.

However — this is a big however — at the end of the year, that final sentence was no longer true.

Students who BOTH heard the growth-mindset messages AND had growth-mindset teachers saw higher math grades.

Students who heard the growth mindset message BUT had fixed-mindset teachers did not.

And, to repeat, those results came months after the intervention itself.

To me, these results make perfect sense. A one-shot message won’t help if the daily classroom routine constantly undermines it; that message might sink in if classroom routines reinforce it.

After all, as the authors wisely write, “no psychological phenomenon works the same way for all people in all contexts.” *

Next Question

This research suggests that teachers’ classroom work can sustain explicit mindset interventions.

Here’s my question: do students need that intervention in the first place? Is the teacher’s classroom practice enough?

I do share LOTS of research with my students: research into retrieval practice, and multitasking, and spacing. I DON’T even mention mindset research, or exhort them to embrace their inner growth mindset.

Instead, I simply enact the mindset strategies.

The classroom rewrite policy encourages and rewards multiple drafts.

I frequently comment on the benefits of cognitive struggle. (“Good news! If you got some questions wrong on that retrieval practice exercise, you’re likelier to learn the answers in the future. The right kind of practice will help you learn.”)

I regularly emphasize what I don’t know, and am excited when I learn something new. (I recently told my sophomores that I have NO IDEA how to interpret the symbolism of Tea Cake’s rabies in Their Eyes Were Watching God. One of my students promptly offered up an explanation; I’m genuinely enthusiastic to have his insight — and the class knows that!)

As I see it, growth mindset isn’t something to talk about. It’s something we demonstrate: quietly, un-fussily, daily.

I’m hoping that — someday — research will support this belief as well.


* Most psychology studies can put off even the most determined reader, but this one has been written (it seems) with a lay reader in mind. Although the technical sections are indeed quite technical, the early sections are easy to read: clear, logical, straightforward. If you’re interested in the topic, I recommend giving these early sections a read.

“Compared to What”: Is Retrieval Practice Really Better?
Andrew Watson

When teachers turn to brain research, we want to know: which way is better?

Are handwritten notes better than laptop notes?

Is cold-calling better than calling on students who raise their hands?

Is it better to spread practice out over time, or concentrate practice in intensive bursts?

For that reason, we’re excited to discover research that shows: plan A gets better results than plan B. Now we know what to do.

Right?

Better than What?

More often than not, research in this field compares two options: for instance, retrieval practice vs. rereading.

Often, research compares one option to nothing: starting class WITH learning objectives, or starting class WITHOUT learning objectives.

These studies can give us useful information. We might find that, say, brief exercise breaks help students concentrate during lectures.

However, they DON’T tell us what the best option is. Are exercise breaks more helpful than retrieval practice? How about video breaks? How about turn-n-talks?

When research compares two options, we get information only about the relative benefits of those two options.

For that reason, we’re really excited to find studies that compare more than two.

Enriching Encoding

A recent podcast* highlighted this point for me.

A 2018 study compared THREE different study strategies: rereading, enriched encoding, and retrieval practice.

Participants studied word pairs: say, “moon-galaxy.” Some of them studied by reviewing those pairs. Some studied with retrieval practice (“moon-__?__”).

Some studied with enriched encoding. This strategy urges students to connect new information to ideas already in long-term memory. In this case, they were asked, “What word do you associate with both ‘moon’ and ‘galaxy’?”

My answer to that question: “planet.” Whatever answer you came up with, you had to think about those two words and their associated ideas. You enriched your encoding.

Because this experiment looked at three different study strategies, it gives us richer insights into teaching and learning.

For instance, students who reviewed remembered 61% of the word pairs, whereas those who enriched their encoding remembered 75% (Cohen’s d = 0.72). Clearly, enriched encoding is better.
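(A quick aside on that statistic: Cohen’s d is simply the difference between the group means divided by their pooled standard deviation. Here is a minimal sketch, with invented standard deviations and group sizes, since the study reports only the means and the d value itself.)

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Effect size: mean difference divided by the pooled standard deviation.
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Invented SDs and group sizes for illustration -- the study reports
# the means (75% vs. 61%) and d = 0.72, not these exact numbers.
print(round(cohens_d(75, 61, 20, 19, 30, 30), 2))  # 0.72
```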

But wait, what about students who used retrieval practice?

Even Richer

Students in the retrieval practice group remembered 84% of their word pairs.

So, yes: “research shows” that enriched encoding is “better than review.” But it’s clearly not better than retrieval practice. **

In fact, this point may sound familiar if you read last week’s blog post about learning objectives. As that post summarized Dr. Faria Sana’s research:

Starting class with traditional learning objectives > starting class without traditional learning objectives

but

Starting class with learning objectives phrased as questions > starting class with learning objectives phrased as statements

In fact, Sana looked at a fourth choice:

Teachers immediately answer the questions posed in the learning objectives >?< teachers don’t immediately answer the questions posed in the learning objectives.

It turns out: providing answers right away reduces students’ learning.

Because Sana studied so many different combinations, her research really gives us insight into our starting question: which way is better?

Friendly Reminders

No one study can answer all the questions we have. We ALWAYS put many studies together, looking for trends, patterns, exceptions, and gaps.

For instance, boundary conditions might limit the applicability of a study. Sana’s research took place in a college setting. Do her conclusions apply to 10th graders? 6th graders? 1st graders? We just don’t know (yet).

Or, if you teach in a school for children with a history of trauma, or in a school for students with learning differences, or in a culture with different expectations for teachers and students, those factors might shape the usefulness of this research.

By comparing multiple studies, and by looking for studies that compare more than two options, we can gradually uncover the most promising strategies to help our students learn.


* If you’re not following The Learning Scientists — their website, their blog, their podcast — I HIGHLY recommend them.

** To be clear: this study focuses on a further question, the participants’ “judgments of learning” as a result of those study practices. Those results are interesting and helpful, but not my primary interest here.

Making “Learning Objectives” Explicit: A Skeptic Converted?
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on students’ reports of their own learning; in the past, such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five passages.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” — asking students questions about an upcoming reading passage, even though they don’t know the answers yet — yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

Students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats minded, Cohen’s d was 0.84! That’s HUGE!)

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these three studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.

Introducing “Interteaching” (Works Online Too!)
Andrew Watson

Have you heard of “interteaching” before? Me neither.

The headlines for this post sound like this:

“INTERTEACHING” HELPFULLY BALANCES TEACHER AND STUDENT EFFORT/RESPONSIBILITY

and

“INTERTEACHING” WORKS ONLINE AND ASYNCHRONOUSLY, according to recent research.

Let’s take those headlines one at a time.

Headline 1: A Helpful Balance of Work and Responsibility

Few battles rage hotter in education than the “teacher-centered” vs. “student-centered” debate. Should teachers be “sages” who explain ideas and procedures directly and precisely? Should we be “guides” who help students as they puzzle their way to discovery and understanding?

We can be tempted to think that one approach is always right, the other always wrong.

Interteaching, as I understand it, strikes a useful middle ground.

Teachers do plenty of organizing/presenting. And, students devote lots of mental energy to figuring out key ideas. As explained in this study by Gayman, Hammonds, and Rost, interteaching includes four steps:

First: As students do assigned readings, they answer questions included in “prep-guides.” In other words: teachers guide and signal with the questions they ask, but students do the mental work to figure out the answers.

Second: Students meet in small groups (2-4) to compare answers and resolve their disagreements. Here, again, students must do lots of useful mental work.

Third: Students tell teachers (in “record sheets”) which concepts they struggled most to understand. Notice: time for student metacognition.

Fourth: Guided by that feedback, teachers prepare and present a brief lecture to explain the trickiest ideas. After several steps focused on student effort, this one invites teachers to do the heavy lifting.

By the way, an optional fifth step creates a grade incentive to encourage student participation. According to Gayman and Co., the method works with or without those incentives.

As you can see, interteaching combines “student-centered” discussions and metacognition with “teacher-centered” prep-guides and lectures. We end up being both guides and sages, depending on the step we’re in.

Headline 2: Interteaching Helps Online/Asynchronous Learners

Gayman’s study cites prior research showing the benefits of interteaching. (Short version: students learn more.)

She and her colleagues wanted to know: does this approach also help students learn online, especially in asynchronous classes?

To find out, Gayman used one of her own psychology courses as a testing ground. In one section, she taught topics A, C, and E the traditional lecturey way, and topics B, D, and F with interteaching.

With the other section, she flipped that: B, D, and F got traditional lectures, and A, C, and E got interteaching love.

With this method, she could determine, within the same student, which method worked better.
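For the statistically curious: a counterbalanced design like this supports a within-student (paired) comparison. Here is a minimal sketch with invented numbers (the study’s actual analysis may differ):

```python
# Paired comparison sketch -- the scores below are invented illustrations.
from scipy import stats

# Each student's average quiz score under each condition:
interteaching = [88, 91, 76, 84, 95, 79]
lecture       = [81, 85, 74, 79, 90, 77]

# Paired test: every student serves as their own control.
t_stat, p_value = stats.ttest_rel(interteaching, lecture)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```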

Turns out: students learned more from interteaching as measured by unit-end quizzes. And on the final exam. (Those exams yielded more As and Bs for inter-taught material, and fewer Cs, Ds, and Fs.)

And, 82% of the students said they preferred interteaching classes to lectures.

So: yes, interteaching helps online learners, even in asynchronous classes.

Always with the Nuance

I haven’t yet found any research looking at interteaching in K-12 classrooms.

I suspect that students need well-developed academic skills to manage the more independent parts of this approach. That is: I would hypothesize that the method works better in high school and college than in earlier grades.

In particular, the metacognitive step — the “record sheets” — could be very challenging for younger students. Even my high-school sophomores don’t reliably understand what they don’t understand. (Dunning-Kruger, I’m looking at you.)

At the same time, I imagine that the underlying concepts (a balance of “student- and teacher-centeredness”) could be wisely adapted for students at various stages in their academic careers.

I’ll also be curious to hear how this approach works with different subjects (math, history, art, dance) and in different cultural contexts.

In brief: I’m intrigued by this approach, and look forward to exploring it in future posts. If only I had known about it a year ago!

The Rise and Fall and Rise of Growth Mindset
Andrew Watson

Few theories in education have had a more dramatic story arc than Carol Dweck’s “Mindset.”

Based on research she started in the early 1970s, Dweck published her sumptuously titled book Mindset: The New Psychology of Success: How We Can Learn to Fulfill Our Potential in 2006. By the time I entered this field in 2008, it had gone beyond “must read” status to the “what do you mean you haven’t read it?” pantheon.

Like millions of other teachers, I read it. (In fact, I wrote my own book about Mindset: Learning Grows.)

Across the country, Growth Mindset posters went up on classroom walls. Grading standards changed to include the words “not yet.”

Like any big target, Dweck’s work attracted detractors. Doubts reached their pinnacle in 2018, when Sisk and Burgoyne published two meta-analyses. Their findings:

Growth mindset doesn’t really make much of a difference for student learning.

Programs designed to enhance growth mindset have little meaningful effect.

Other large-scale studies, including this one from Argentina, reported similar doubts.

Mindset’s potential, contrary to Dweck’s subtitle, remained unfulfilled.

Fresh Plot Twist?

Since the Sisk & Burgoyne meta-analyses, it has become fashionable to say “Successful mindset interventions have one variable in common: Carol Dweck did them.”

This critique is both untrue — lots of other researchers have found positive results — and unprofessional: it implies (without directly accusing) that Dweck either has been sloppy or has cooked her data.

Indeed, anyone who reads Dweck’s research over the years would hesitate to throw such shade.

A freshly released heap o’ data, in fact, might restore some interest in Mindset.

Every three years, the Programme for International Student Assessment (PISA) tests 15-year-olds in science, math, and reading. The programme gathers all sorts of data and publishes the results.

In fact, recently released data from the 2018 sitting connects a Growth Mindset with higher academic achievement. (You can read a summary article here, and see the full chapter here.)

As Sarah Sparks summarizes the data:

After controlling for students’ and schools’ socioeconomic differences, students with a strong growth mindset scored significantly higher on all subjects—31.5 points in reading, 27 points in science, and 23 points in math—compared with students who believed their intelligence was fixed.

Unsurprisingly, Sparks reports, teachers matter:

Students with supportive teachers—for example, those who show interest in every student learning and a willingness to provide extra help and explanation until a student understands—were 4 percentage points more likely to have a growth mindset than those without a supportive instructor.

In other words: when we look past the shade and the snark, we find that growth mindsets might help learning, and that teachers can help foster them.
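For readers who want to see what “controlling for socioeconomic differences” means in practice: here is a minimal sketch in Python, with hypothetical column names and data file (PISA’s actual models are far more elaborate).

```python
# "Controlling for" sketch -- column names and file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

pisa = pd.read_csv("pisa_2018_extract.csv")  # hypothetical extract

# With SES covariates in the model, the growth_mindset coefficient
# estimates the mindset-related score gap at equal levels of SES.
model = smf.ols(
    "reading_score ~ growth_mindset + student_ses + school_ses",
    data=pisa,
).fit()
print(model.params["growth_mindset"])
```

Note that a coefficient like this still describes a correlation, not a cause; the next section takes up that point.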

Stop the Pendulum

Our profession, alas, tends to extremes. We might embrace Mindset Theory as our school’s shining mission; we might reject it as fashionable pseudo-science.

I hope this time we can aim for a modest middle ground. A few points to keep in mind:

First: the PISA data show correlation, not causation.

Second: they come from self-report.

Third: they show wide differences across country and culture. (For instance: this graph caught my eye.)

Rather than put all our energies into this one strategy (or, into denigrating this one strategy), I think we can adopt a sensible logical chain:

A: Motivated students learn more, but teachers can struggle to motivate students. (Let’s admit it: much of what we study in schools isn’t intrinsically motivating for most students.)

B: On average, a growth mindset offers many students a motivational boost.

C: On average, specific teaching practices make it somewhat likelier that students will adopt a growth mindset.

D: If we can easily adopt — and easily maintain — culturally-appropriate teaching practices that enhance a growth mindset, our efforts will help some students learn.

E: Therefore, let’s do so.

Do I think a one-shot mindset intervention will help? Probably not. (I don’t think a one-shot intervention of anything will help.)

Do I think that Mindset strategies — consistently and modestly applied — will help? I do.

Should those strategies be accompanied by many other research-supported approaches (retrieval practice, metacognition, cognitive-load monitoring, attention-fostering, stress-reduction)? Indeed they should.

A True Story

I did some consulting at a summer camp two years ago. When I went to the archery department, they asked if I wanted to try my hand with a bow.

NO, reader, I DID NOT.

As a camper at this very camp decades before, I had experienced repeated humiliation; I only rarely hit the target, and often missed comically/catastrophically. Honestly, it was dreadful — one of those experiences that, 40 years later, can STILL make me blush.

After a moment of terror, I said to myself:

“Okay, Andrew, you talk about Growth Mindset all the time. Give it a try. Your goal shouldn’t be to get a perfect score. Just try to learn a bit and improve. That’s what you tell your students. Practice what you preach.”

What happened next was an archery miracle.

It turns out that I am right-handed, but I sight with my left eye. I had been humiliated all those years ago because I was shooting with the wrong bow.

Once they got a lefty bow into my hand and taught me the stance and a few breathing tricks, I found that I’m a passable archer.

I’m no Robin Hood, but I felt like I hit the bullseye.

Proxy Battles: The Value of Handshakes at the Door
Andrew Watson

Should teachers welcome students to the classroom with elaborate individual handshakes?

Or — in these COVIDian days of ours — with elaborate dances? (If you’re on Twitter, you can check out @thedopeeducator’s post from March 17 of 2021 for an example.)

This question generates a surprising amount of heat. Around here that heat means: let’s look for research!

What Does “The Research” Say?

Truthfully, I can’t find much research on this question. Teachers have strong feelings on the subject, but the topic hasn’t gotten much scholarly attention.

The exception to this rule: Dr. Clayton Cook’s study on “Positive Greetings at the Door” from 2018.

As I described this study back in 2019, researchers trained teachers in a two-step process:

First: greet each student positively at the door: “Good morning, Dan — great hat!”

Second: offer “precorrective” reminders: “We’re starting with our flashcards, so be sure to take them out right away.”

The researchers trained five teachers (in sixth, seventh, and eighth grades) in these strategies.

Their results — compared to an “active” control group — were encouraging:

For the control group, time on task hovered in the mid-to-high 50-percent range, while disruptive behaviors took place about 15% of the time.

For the positive greeting group, researchers saw big changes.

Time on task rose from the high-50-percent range to more than 80% of the time.

Disruptive behaviors fell from ~15% to less than 5% of the time.

All that from positive greetings.

A Clear Winner?

Handshake advocates might be tempted to read this study and declare victory. However, we have many good reasons to move more deliberately.

First: although handshakes are a kind of “positive greeting,” they’re not the only “positive greeting.” Researchers didn’t specify handshakes; they certainly didn’t require elaborate dances.

So, we can’t use this research to insist on either of those approaches. Teachers’ greetings should be specific and positive, but needn’t be handshake-y or dance-y.

Second: the “positive greetings” strategy requires an additional step — “precorrective guidance.” Once the handshake/greeting is complete, the teacher should offer specific directions about the next appropriate step…

… start the exercise on the board,

… take out your notebook and write the date,

… remember the definitions of yesterday’s key words.

Handshakes alone don’t match this research strategy. We need to do more to get these results.

Third: this research took place in a very specific context. Researchers asked principals to nominate classes that had seen higher-than-average levels of disruption.

That is: if your class is already well behaved, you might not see much of a change. (Of course, if your class is already well behaved, you don’t really need much of a change.)

And One More Thing (Well, TWO More Things)

I think Dr. Cook’s study is helpful, clear, and well-done. However, as far as I know, it’s one of a kind. His research hasn’t been replicated (or, for that matter, contradicted). According to both Scite.ai and ConnectedPapers.com, this one study is everything we know from a research perspective.

In brief: the little research we have is encouraging. But: it doesn’t require elaborate choreography. It does require “precorrective guidance.” And, as Daniel Willingham says: “One study is just one study, folks.”

A final thought:

I suspect that “handshakes at the door” generate so much controversy because they’re a proxy for a wider battle of extremes.

Extreme #1: “If teachers put all our energy into forming relationships, students will inevitably learn more!”

Extreme #2: “That’s obviously dreadful nonsense.”

That is: “handshakes at the door” stand in for “relationships-first” teaching. Hence all the passion on Twitter.

This battle, I think, largely sacrifices sensible nuance to passionate belief.

On the one hand: of course, students (on average) learn more when they feel a sense of safety, respect, and connection. Some students (especially those who have experienced trauma) might struggle to learn without those things.

And, on the other hand: of course students can learn from teachers they don’t really like, and from teachers with whom they have no real connection. Lecture-based college courses depend on that model completely. So do military academies.

Handshakes at the door might help us connect with students if they feel comfortable and fun for us. But: plenty of individual teachers would feel awkward doing such a thing. Many school or cultural contexts would make such handshakes seem weird or silly or discomforting.

If such handshakes strengthen relationships, they might be a useful tool. If your relationships are already quite good, or if you’d feel foolish doing such a thing, or if your cultural context looks askance at such rituals, you can enhance relationships in other ways.

As is so often the case, we don’t need to get pulled onto a team — championing our side and decrying the other. We can, instead, check out available research, see how its conclusions apply to our context, and understand that optimal teaching practices might vary from place to place.

The 10-Minute Rule: Is The Lecture Dead?
Andrew Watson

The “10-minute rule” offers teachers practical guidance. It typically sounds something like this:

If students aren’t intrinsically interested in material, they can pay attention to it for no more than 10 minutes.

Ergo: teachers should do something different every ten minutes.

Ergo: the lecture is dead.

I first heard the “10-minute rule” at a conference in 2008, and I still run across it frequently when I work with teachers. They too, it seems, heard it at a conference.

Any rule that gets so much love at teaching conferences must be true, right?

Research-Aligned Teaching Advice

If you’re reading this blog, you want your teaching to have research behind it. So, what exactly is the research behind the “10-minute rule?”

Neil Bradbury is glad you asked. He looked into its history, and came up with some astonishing results: results that would be funny if they weren’t so alarming.

Let’s start with a Johnstone and Percival study from 1976, where two researchers visited 90 lecture classes (!). By comparing observations, they agreed that attention started to wane within the first five minutes (!!), with another decrease in the 10-18 minute range (!!!).

As Bradbury reports, however, this conclusion gets murky quickly:

First: they visited only 13% of those lectures together. In other words: 87% of their data come from one lone observer.

Second: they don’t report how they measured attention, or — for that matter — lapses in attention.

That student looking out the window: is she distracted by a bird, or concentrating on the professor’s complex argument?

That student looking keenly at the slides: is he engrossed in the topic, or trying to remember his lines for tonight’s rehearsal?

Johnstone and Percival have no way to know.

In other words: the “10-minute rule” rests on the hunchy sense of two visitors who were — as far as we can tell — simply relying on their guts. Whatever we call that, we don’t call it “research.”

And, whatever we do with their hunches, we shouldn’t change our teaching because of them.

Measuring Attention

This study highlights a complex problem. Attention, of course, takes place inside our heads. How can we measure it?

One solution: keep track of students’ note taking. Perhaps students take more notes when they pay attention, and fewer notes when they don’t?

If that hypothesis is true, then students who write less are paying less attention. When we find a steep decline in note taking, we’ve found the moment when attention has dropped off. Sure enough: a 10-minute increment turns out to be crucial.

Alas, as Bradbury points out, this approach also collapses.

First: students take notes relatively consistently throughout a lecture. Their note taking falls off in the final ten minutes, not after the first ten minutes.

Second: in fact, the quantity of note taking results from the professor’s lecture, not from the point in the lecture. When the speaker makes key points, students write more. When the professor is recapping, or simply winding down — as she might do at the end of a lecture — they take fewer notes.

As Bradbury pithily summarizes this approach:

Note taking is not a good proxy for attention whatsoever, and even if it were, it does not support a 10- to 15-minute limit on student engagement.

BOOM.

Let’s Get Physical

If note-taking doesn’t measure attention, perhaps we can use biological measures instead.

Research by Bligh used a pulsemeter to measure students’ alertness. This study found that their pulses dropped roughly 14% over the course of the class.

At last: research confirmation of the “10-minute rule”?

Alas, Bligh’s research found the same results during a discussion class as during a lecture.

We might think that a lower pulse suggests less attention. If it does, then neither class format sustains attention.

Classroom Implications

In brief, the “10-minute rule” isn’t a rule, and doesn’t last ten minutes.

More precisely: we have no research suggesting it’s a rule with a consistent time limit.

Given that truth, what should we teachers do?

First: focus on the obvious truth that people are different.

Older students can (probably) pay attention longer than younger ones.

Hungry students (probably) pay less attention than less-hungry ones. (Except right after lunch.)

Some cultures prioritize focused attention more than others.

Some lecturers know how to hold an audience better than others.

Your approach to teaching should vary based on your specific circumstances, not be dictated by an arbitrary rule (which sounds authoritative but has no research backing).

For instance: I’m currently teaching two sections of the same class — one in person and the other online. I teach them differently because attention can be more difficult online. (And because the online section meets earlier in the day — a real challenge for high school students.)

Second: study the science of attention.

Surprisingly, attention isn’t one thing.

Instead, attention is a behavioral combination of three distinct mental processes.

The more teachers understand that complex mix, the more successful we can be in creating the behavior by managing the mental processes.

I’ve written a book on this subject: Learning Begins: A Classroom Teacher’s Guide to Working Memory and Attention. (Ryan Reynolds will play me in the movie, I’m sure.)

Or, you can read LOTS of great articles: here’s one place to start.

Whichever approach you take, don’t let implausible absolute rules shape your thinking. Pay attention to your students, and to attention itself. Those two beacons will guide you on your classroom journey.


In the past, I’ve cited Wilson and Korn’s 2007 discussion of this topic. My thanks to Zach Groshell (Twitter handle: @MrZachG) for pointing to Bradbury’s wonderful article.

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored?
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

What (and Why) Should Students Memorize? Confidence and Fluency for the Win
Andrew Watson

In our profession, memorization has gotten a bad name.

The word conjures up alarming images: Dickensian brutes wielding rulers, insisting on “facts, facts, facts!”

In a world where students “can look up anything on the interwebs,” why do we ask students to memorize at all?

One answer from cognitive science: we think better with information we know well.

Even when we can find information on the internet, we don’t use that information very effectively. (Why? Working memory limitations muddle our processing of external information.)

A second answer appears in intriguing recent research.

Reasonable Plans, Unstated Assumptions

As a classroom teacher, I might operate on this reasonable plan:

Step one: we discuss ideas and information in class.

Step two: students write down the important parts.

And, step three: when students need that information later, they look at their notes.

This plan — the core of most high school classes I know — relies on unstated assumptions:

Assumption one: students’ notes are largely correct.

Assumption two: if students write down information INcorrectly, they’ll recognize that mistake. After all, we discussed the correct information in class.

But what if that second assumption isn’t true?

What if students trust external information (their notes) more than internal information (their memories)?

Assumptions Thwarted

In 2019, Risko, Kelly, & Gaspar studied one version of this question.

They had students listen to word lists and type them into a storable file. After a distraction task, students got to review their lists. They were then tested on those words.

On the final list, however, these scholars did a sneaky thing: they added a word to the stored list. Sure enough, 100% of their students wrote down the additional word, even though it hadn’t in fact been on the initial word list.

Students trusted their written document (external “memory”) more than their own actual memory. When tested even later, students still included the additional word, even though it wasn’t a part of their initial learning.

In other words: the “reasonable plan” that teachers often rely on includes an assumption that — at least in this research — isn’t true.

Ugh.

Classroom Implications

This research, I think, reminds us that the right kind of memorization has great value for students.

We want students to know certain bedrock facts and processes with absolute certainty. We want them, for instance, to define key terms and ideas fluently. Crucially, we want them to reject — with confidence born of certain knowledge — inaccurate claims.

For instance:

I just completed a unit on tragedy. My sophomores read August Wilson’s Fences and Shakespeare’s Macbeth.

On the very first day of the term, I went over a four-part definition of “tragedy.” (It’s a quirky definition, I admit, but it works really well.)

We reviewed that definition almost daily, increasingly relying on retrieval practice. For instance, I might give them this handout to fill in. Or we might do that work together on the board.

Over time, I started including inaccurate prompts in my questions: “So, tragedy ends in death or marriage, right?”

By this point, my students knew the definition so well that they confidently rejected my falsehoods: “No, you’re trying to trick us! Tragedy ends in death or banishment!”

For an even trickier approach, I encouraged students to correct one another’s (non-existent) mistakes:

Me: “T: what does comedy represent, and why?”

T: “The marriage (and implied birth) at the end of a comedy implies the continuity of society, and in that way contrasts tragedy’s death and banishment, which represent the end of society.”

Me: “M: what did T get wrong?”

M [confidently]: “Nothing. That was exactly right.”

Me [faking exasperation]: “S, help me out here. What did T and M miss?”

S [not fooled]: “Nothing. I agree with them both.”

Me: “Congrats to T for getting the answer just right. And congrats to M and S for not letting me fool you. It’s GREAT that you’re all so confident about this complex idea.”

Because these students knew this complex definition cold — because they had memorized it — they could stand firm when questioned skeptically. As a result, they did a great job when asked to apply that definition at the end of the term:

“How does Wilson’s Fences fit the definition of tragedy AND of comedy?”

To Sum Up

Despite all the bad press, the right kind of memorization can enhance learning.

When students know foundational information and processes by heart, they

BOTH process questions more fluently

AND resist misleading information from “external memory” sources.

Greater cognitive fluency + greater confidence in their knowledge = enduring learning.