Handwriting Improves Learning, Right?
Andrew Watson

Here’s a good rule for research: if you believe something, look for research that contradicts your belief.

So, if you think that retrieval practice helps students learn, see if you can find research showing the opposite.

If you disapprove of cold-calling, see if any studies support its use.

If you think that hand-written notes help students more than notes taken on a laptop, try to find research that disagrees with you.

In this last case, you might even find me. Most teachers I know believe that handwritten notes are superior, and they cite a well-known study to support that belief.

I’ve argued for years that this research assumes students can’t learn how to do new things – a very odd belief for a teacher to have. If you believe students can learn how to do new things, well, this study actually suggests that laptop notes will help more than handwritten notes.

However, the “good rule” described above applies to me too. If I believe that we don’t know whether handwriting or keyboarding is better for learning, I should look for evidence that contradicts my belief.

For that reason, I pounced on a recent science news headline. The gist: new research by Robert Wiley and Brenda Rapp shows that students who wrote by hand learned more than those who used laptops.

So, does their research finally contradict my belief?

Learning Arabic Letters

Wiley and Rapp had college-age adults learn Arabic letters.

12 of them learned by pressing the right key on a keyboard.

12 learned by looking at the letters closely and confirming they were the same.

And, 12 learned by writing the letters.

Did these distinct learning strategies make a difference several days later?

YES THEY DID.

The hand-writers learned a lot more, and learned a lot faster.

In fact – here’s a cool part – their learning transferred to new, related skills.

These participants practiced with letters. When Wiley and Rapp tested them on WORDS, the hand-writers did better than the other two groups – even though they hadn’t practiced with words.

So: sure enough, handwriting helped students learn more.

Boundary Conditions

Given the strength and clarity of these findings, you might think that I’m going to change my mind.

Reader, I am not. Here’s why:

This research shows that writing by hand helps people learn how to write by hand. It also helps people learn to do things immediately related to writing by hand – like, say, saying and writing words.

We should notice the narrow boundaries around that conclusion.

People who write by hand learn how to write by hand.

That research finding, however, does NOT demonstrate that writing by hand helps people learn things unrelated to handwriting itself.

For instance: do handwritten notes help people learn more about history or psychology or anatomy than laptop notes? This research does not answer that question, because that question falls outside the boundaries of the research.

In a similar way: practicing scales on the piano surely helps you play piano scales better than – say – watching someone else do so.

But: does practicing piano scales make me better at other tasks requiring manual dexterity? Knitting? Keyboarding? Sculpting?

To answer those questions, we have to research those questions. We can’t extrapolate from piano scales to knitting and sculpting. (Well: we can, but we really shouldn’t.)

So, What’s The Answer?

Is handwriting really a better way to learn than keyboarding?

Honestly, I just don’t think we know. (In fact, Wiley and Rapp don’t claim that handwriting helps anywhere other than learning and reading letters and words.)

In fact, I suspect we need to explore MANY other variables:

the content being learned,

the teacher’s strategy for presenting it,

the student’s preference,

the student’s age –

perhaps even the relative complexity of writing vs. keyboarding. (I’m not an expert in this topic, but I understand that some languages require very intricate steps for accurate keyboarding.)

We can say – thanks to Wiley and Rapp – that handwriting helps students learn how to write by hand. But until we explore those other questions precisely, we shouldn’t offer strong answers as if they have research support.

 

Let’s Get Practical: What Works Best in the Classroom?
Andrew Watson

At times, this blog explores big-picture hypotheticals — the “what if” questions that can inspire researchers and teachers.

And, at times, we just want practical information. Teachers are busy folks. We simply want to know: what works? What really helps my students learn?

That question, in fact, implies a wise skepticism. If research shows a teaching strategy works well, we shouldn’t just stop with a study or two.

Instead, we should keep researching and asking more questions.

Does this strategy work with …

… older students as well as younger students?

… history classes as well as music classes as well as sports practice?

… Montessori classrooms, military academies, and public school classrooms?

… this cultural context as well as that cultural context?

And so forth.

In other words, we want to know: what have you got for me lately?

Today’s News

Long-time readers know of my admiration for Dr. Pooja Agarwal.

Her research into retrieval practice has helped clarify and deepen our understanding of this teaching strategy.

Her book, written with classroom teacher Patrice Bain, remains one of my favorites in the field.

And she’s deeply invested in understanding the complexity of translating research into the classroom.

That is: she doesn’t just see if a strategy works in the psychology lab (work that’s certainly important). Instead, she goes the next step to see if that strategy works with the messiness of classrooms and students and schedule changes and school muddle.

So: what has she done for us lately? I’m glad you asked.

Working with two other scholars, Agarwal asked all of those questions I listed above about retrieval practice.

That is: we think that retrieval practice works. But: does it work with different ages, and various subjects, in different countries?

Agarwal and Co. wanted to find out. They went through an exhaustive process to identify retrieval practice research in classrooms, and studied the results. They found:

First: yup, retrieval practice really does help. In 57% of the studies, the Cohen’s d value was 0.50 or greater. That’s an impressively large result for such a simple, low-cost strategy. (If Cohen’s d is unfamiliar, there’s a quick sketch of the calculation right after this list.)

Second: yup, it works in different fields. By far the most research is done in science and psychology (19 and 16 studies), but it works in every discipline where we look — including, say, history or spelling or CPR.

Third: yup, it works at all ages. Most research is done with college students (and, strangely, medical students), but it works in K-12 as well.

Fourth: most retrieval practice research is done with multiple choice. (Yes: a well-designed multiple choice test can be retrieval practice. “Well-designed” = “students have to THINK about the distractors.”)

Fifth: we don’t have enough research to know the optimal gap between retrieval practice and the final test.

Sixth: surprisingly, not enough classroom research focused on FEEDBACK. You’d think that would be an essential component…but Team Agarwal didn’t find enough research here to draw strong conclusions.

Seventh: Of the 50 studies, only 3 were from “non-Western” countries. So, this research gap really stands out.
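Since Cohen’s d does a lot of work in that first finding, here’s a minimal sketch of how the statistic is computed: the difference between two group means, divided by their pooled standard deviation. (The exam scores below are invented for illustration; they are not data from Agarwal’s review.)

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Invented final-exam scores: one class that used retrieval practice, one that only reviewed.
retrieval_group = [78, 85, 90, 72, 88, 81, 94, 76]
review_group = [70, 75, 82, 68, 80, 73, 85, 71]

print(round(cohens_d(retrieval_group, review_group), 2))  # 1.09 for these made-up scores
```

Read this way, a d of 0.50 means the average student in the retrieval-practice condition outscored the typical review-only student by half a pooled standard deviation, which is a substantial gain for such a low-cost strategy.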

In brief: if we want to know what really works, we have an increasingly clear answer: retrieval practice works. We had good evidence before; we’ve got better evidence now.

Examples Please

If you’re persuaded that retrieval practice is a good idea, you might want to be sure you know exactly what it is.

You can always use the “tags” menu on the right; we blog about retrieval practice quite frequently, so you’ve got lots of examples.

But, here’s a handy description (which I first heard in Agarwal and Bain’s book):

When students review, they put information back into their brains. So: “rereading the textbook” = “review,” because students try to redownload the book into their memory systems.

When students use retrieval practice, they take information out of their brains. So, “flashcards” = “retrieval practice,” because students have to remember what that word means.

So:

Reviewing class notes = review.

Outlining the chapter from memory = retrieval practice.

Short answer questions = retrieval practice.

Watching a lecture video = review.

When you strive for retrieval practice, the precise strategy is less important than the cognitive goal. We want students to try to remember before they get the correct answer. That desirable difficulty improves learning.
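If it helps to see that “attempt recall first, then check” loop made concrete, here’s a toy sketch: just a command-line flashcard drill with invented cards, not classroom software.

```python
# Toy flashcard drill: the learner attempts retrieval BEFORE seeing the answer.
cards = {  # invented example cards
    "bildungsroman": "a novel tracing a protagonist's growth from youth to maturity",
    "retrieval practice": "pulling information out of memory rather than re-reading it",
}

def drill(cards):
    recalled = 0
    for prompt, answer in cards.items():
        input(f"Define '{prompt}' (press Enter after your attempt): ")  # retrieval attempt first...
        print(f"Answer: {answer}")                                      # ...then feedback
        if input("Did you recall it? (y/n) ").strip().lower() == "y":
            recalled += 1
    print(f"Recalled {recalled} of {len(cards)} cards.")

drill(cards)
```

The point isn’t the program; it’s the order of operations: the attempt to remember comes before the answer appears.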

And, yes, retrieval practice works.

“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply? [Reposted]
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed only a simple black-and-white diagram; for the other half, the video showed a version with colors and dots.

Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students in both groups learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seemed to tie the concept of metamorphosis to this one specific species. An abstract representation allowed for more successful transfer of the concept to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: if you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

How to Foster New Friendships in School? Seating Plans! (We’ve Got Research…)
Andrew Watson

In schools, we want students to learn many topics: math, and history, and reading, and health, and robotics…

And, especially at the beginning of the year, we’d like them to make friends along the way.

Can we help?

One research team tried a reasonable approach. They wondered if students might form new friendships when they sit next to classmates they don’t yet know well.

Here’s the story:

The Plan

Julia Rohrer and colleagues worked with 182 teachers in 40 schools in Hungary. Their study included 3rd through 8th graders — almost 3000 of them!

In these schools, students sat at “freestanding forward-facing 2-person desks.” (It sounds to me like Little House on the Prairie, but in rural Hungary.) Researchers assigned students to these paired desks randomly.

And, they tracked the friendships that formed.

So: what happened? Did students befriend their deskmates?

The Prediction & the Speculation

Unsurprisingly, we tend — on average — to form friendships with people who are like us. In schools, that means:

boys typically befriend boys, while girls befriend girls;

academic achievers connect with other achievers;

members of racial and ethnic groups often form friendships within those groups. (In this study, researchers kept track of Roma and non-Roma Hungarian identities.)

Researchers predicted that this pattern (called “homophily”) would continue.

And they speculated that the new seating plans might shake things up a bit. That is: perhaps more friendships would form outside of those usual patterns.

The Results

So, what happened with these new seating plans?

First: Randomly seating students next to each other did modestly increase the likelihood of mutual friendships forming: from 15% to 22%.

Second: These new friendships did mostly fit the expected patterns. As homophily suggests, friendships largely formed within gender, achievement, and ethnic groups.

Third: Random seating DID foster new friendships across those divides as well — although to a smaller degree. That is: some girls did form mutual friendships with boys, and so forth.

In brief: researchers wondered if random seating patterns might expand friendship circles — and they did!

The Big Picture

We should, of course, remember that this study is just one study. We’ll need more research to be increasingly certain of these conclusions.

And, honestly, this seating plan didn’t make a huge difference.

At the same time: teachers know that every little bit counts. If we can help students form new friendships — and help them form friendships that might not otherwise have started — that’s a powerful way to start a new school year.

You will, of course, adapt this idea to your own teaching context. As you contemplate your routine at the beginning of a new year, this strategy might be a useful way to open new friendship vistas.
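If you’d like to experiment with your own roster, here’s a minimal sketch of the basic move the researchers made: random assignment to two-person desks. (The names are invented, and the function is just an illustration, not the study’s actual procedure.)

```python
import random

def assign_desk_pairs(students, seed=None):
    """Shuffle the roster and seat students in random pairs at 2-person desks."""
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)
    # Pair off neighbors in the shuffled list; with an odd roster, the last student sits alone.
    return [tuple(roster[i:i + 2]) for i in range(0, len(roster), 2)]

# Invented roster
roster = ["Anna", "Bence", "Csilla", "Dávid", "Eszter", "Ferenc", "Gréta", "Hanna"]
for desk, pair in enumerate(assign_desk_pairs(roster, seed=1), start=1):
    print(f"Desk {desk}: {' & '.join(pair)}")
```

The study can’t tell us how often to reshuffle; it only suggests that letting chance pick deskmates opens doors that student choice might keep closed.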

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored? [Repost]
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

Making “Learning Objectives” Explicit: A Skeptic Converted? [Reposted]
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on student reports of their learning. Sadly, in the past, such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five paragraphs.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” — asking students questions about an upcoming reading passage, even though they don’t know the answers yet — yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

Students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats-minded, Cohen’s d was 0.84! That’s HUGE!)

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these three studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.

Proxy Battles: The Value of Handshakes at the Door
Andrew Watson

Should teachers welcome students to the classroom with elaborate individual handshakes?

Or — in these COVIDian days of ours — with elaborate dances? (If you’re on Twitter, you can check out @thedopeeducator’s post from March 17 of 2021 for an example.)

This question generates a surprising amount of heat. Around here that heat means: let’s look for research!

What Does “The Research” Say?

Truthfully, I can’t find much research on this question. Teachers have strong feelings on the subject, but the topic hasn’t gotten much scholarly attention.

The exception to this rule: Dr. Clayton Cook’s study on “Positive Greetings at the Door” from 2018.

As I described this study back in 2019, researchers trained teachers in a two-step process:

First: greet each student positively at the door: “Good morning, Dan — great hat!”

Second: offer “precorrective” reminders: “We’re starting with our flashcards, so be sure to take them out right away.”

The researchers trained five teachers (in sixth, seventh, and eighth grades) in these strategies.

Their results — compared to an “active” control group — were encouraging:

For the control group, time on task hovered in the mid-to-high 50-percent range, while disruptive behaviors took place about 15% of the time.

For the positive greeting group, researchers saw big changes.

Time on task rose from the high-50-percent range to more than 80% of the time.

Disruptive behaviors fell from ~15% to less than 5% of the time.

All that from positive greetings.

A Clear Winner?

Handshake advocates might be tempted to read this study and declare victory. However, we have many good reasons to move more deliberately.

First: although handshakes are a kind of “positive greeting,” they’re not the only “positive greeting.” Researchers didn’t specify handshakes; they certainly didn’t require elaborate dances.

So, we can’t use this research to insist on either of those approaches. Teachers’ greetings should be specific and positive, but needn’t be handshake-y or dance-y.

Second: the “positive greetings” strategy requires an additional step — “precorrective guidance.” Once the handshake/greeting is complete, the teacher should offer specific directions about the next appropriate step…

… start the exercise on the board,

… take out your notebook and write the date,

… remember the definitions of yesterday’s key words.

Handshakes alone don’t match this research strategy. We need to do more to get these results.

Third: this research took place in a very specific context. Researchers asked principals to nominate classes that had seen higher-than-average levels of disruption.

That is: if your class is already well behaved, you might not see much of a change. (Of course, if your class is already well behaved, you don’t really need much of a change.)

And One More Thing (Well, TWO More Things)

I think Dr. Cook’s study is helpful, clear, and well done. However, as far as I know, it’s one of a kind. His research hasn’t been replicated (or, for that matter, contradicted). According to both Scite.ai and ConnectedPapers.com, this one study is everything we know from a research perspective.

In brief: the little research we have is encouraging. But: it doesn’t require elaborate choreography. It does require “precorrective guidance.” And, as Daniel Willingham says: “One study is just one study, folks.”

A final thought:

I suspect that “handshakes at the door” generate so much controversy because they’re a proxy for a wider battle of extremes.

Extreme #1: “If teachers put all our energy into forming relationships, students will inevitably learn more!”

Extreme #2: “That’s obviously dreadful nonsense.”

That is: “handshakes at the door” stand in for “relationships-first” teaching. Hence all the passion on Twitter.

This battle, I think, largely sacrifices sensible nuance to passionate belief.

On the one hand: of course, students (on average) learn more when they feel a sense of safety, respect, and connection. Some students (especially those who have experienced trauma) might struggle to learn without those things.

And, on the other hand: of course students can learn from teachers they don’t really like, and from teachers with whom they have no real connection. Lecture-based college courses depend on that model completely. So do military academies.

Handshakes at the door might help us connect with students if they feel comfortable and fun for us. But: plenty of individual teachers would feel awkward doing such a thing. Many school or cultural contexts would make such handshakes seem weird or silly or discomforting.

If such handshakes strengthen relationships, they might be a useful tool. If your relationships are already quite good, or if you’d feel foolish doing such a thing, or if your cultural context looks askance at such rituals, you can enhance relationships in other ways.

As is so often the case, we don’t need to get pulled onto a team — championing our side and decrying the other. We can, instead, check out available research, see how its conclusions apply to our context, and understand that optimal teaching practices might vary from place to place.

The 10-Minute Rule: Is The Lecture Dead?
Andrew Watson

The “10-minute rule” offers teachers practical guidance. It typically sounds something like this:

If students aren’t intrinsically interested in material, they can pay attention to it for no more than 10 minutes.

Ergo: teachers should do something different every ten minutes.

Ergo: the lecture is dead.

I first heard the “10-minute rule” at a conference in 2008, and I run across it frequently when I work with teachers. They too, it seems, heard it at a conference.

Any rule that gets so much love at teaching conferences must be true, right?

Research-Aligned Teaching Advice

If you’re reading this blog, you want your teaching to have research behind it. So, what exactly is the research behind the “10-minute rule?”

Neil Bradbury is glad you asked. He looked into its history, and came up with some astonishing results: results that would be funny if they weren’t so alarming.

Let’s start with a Johnstone and Percival study from 1976, where two researchers visited 90 lecture classes (!). By comparing observations, they agreed that attention started to wane within the first five minutes (!!), with another decrease in the 10-18 minute range (!!!).

As Bradbury reports, however, this conclusion gets murky quickly:

First: they visited only 13% of those lectures together. In other words: 87% of their data come from one lone observer.

Second: they don’t report how they measured attention, or — for that matter — lapses in attention.

That student looking out the window: is she distracted by a bird, or concentrating on the professor’s complex argument?

That student looking keenly at the slides: is he engrossed in the topic, or trying to remember his lines for tonight’s rehearsal?

Johnstone and Percival have no way to know.

In other words: the “10-minute rule” rests on the hunchy sense of two visitors who were — as far as we can tell — simply relying on their guts. Whatever we call that, we don’t call it “research.”

And, whatever we do with their hunches, we shouldn’t change our teaching because of them.

Measuring Attention

This study highlights a complex problem. Attention, of course, takes place inside our heads. How can we measure it?

One solution: keep track of students’ note taking. Perhaps, students take more notes when they pay attention, and fewer notes when they don’t?

If that hypothesis is true, then students who write less are paying less attention. When we find a steep decline in note taking, we’ve found the moment when attention has dropped off. Sure enough: a 10-minute increment turns out to be crucial.

Alas, as Bradbury points out, this approach also collapses.

First: students take notes relatively consistently throughout a lecture. Their note taking falls off in the final ten minutes, not after the first ten minutes.

Second: in fact, the quantity of note taking results from the professor’s lecture, not from the point in the lecture. When the speaker makes key points, students write more. When the professor is recapping, or simply winding down — as she might do at the end of a lecture — they take fewer notes.

As Bradbury pithily summarizes this approach:

Note taking is not a good proxy for attention whatsoever, and even if it were, it does not support a 10- to 15-minute limit on student engagement.

BOOM.

Let’s Get Physical

If note-taking doesn’t measure attention, perhaps we can use biological measures instead.

Research by Bligh used a pulsemeter to measure students’ alertness. This study found that their pulses dropped roughly 14% over the course of the class.

At last: research confirmation of the “10-minute rule”?

Alas, Bligh’s research found the same results during a discussion class as during a lecture.

We might think that a lower pulse suggests less attention. If it does, then neither class format sustains attention.

Classroom Implications

In brief, the “10-minute rule” isn’t a rule, and doesn’t last ten minutes.

More precisely: we have no research suggesting it’s a rule with a consistent time limit.

Given that truth, what should we teachers do?

First: focus on the obvious truth that people are different.

Older students can (probably) pay attention longer than younger ones.

Hungry students (probably) pay less attention than less-hungry ones. (Except right after lunch.)

Some cultures prioritize focused attention more than others.

Some lecturers know how to hold an audience better than others.

Your approach to teaching should vary based on your specific circumstances, not be dictated by an arbitrary rule (which sounds authoritative but has no research backing).

For instance: I’m currently teaching two sections of the same class — one in person and the other online. I teach them differently because attention can be more difficult online. (And because the online section meets earlier in the day — a real challenge for high school students.)

Second: study the science of attention.

Surprisingly, attention isn’t one thing.

Instead, attention is a behavioral combination of three distinct mental processes.

The more teachers understand that complex mix, the more successful we can be in creating the behavior by managing the mental processes.

I’ve written a book on this subject: Learning Begins: A Classroom Teacher’s Guide to Working Memory and Attention. (Ryan Reynolds will play me in the movie, I’m sure.)

Or, you can read LOTS of great articles: here’s one place to start.

Whichever approach you take, don’t let implausible absolute rules shape your thinking. Pay attention to your students, and to attention itself. Those two beacons will guide you on your classroom journey.


In the past, I’ve cited Wilson and Korn’s 2007 discussion of this topic. My thanks to Zach Groshell (Twitter handle: @MrZachG) for pointing to Bradbury’s wonderful article.
