Proxy Battles: The Value of Handshakes at the Door
Andrew Watson

Should teachers welcome students to the classroom with elaborate individual handshakes?

Or — in these COVIDian days of ours — with elaborate dances? (If you’re on Twitter, you can check out @thedopeeducator’s post from March 17 of 2021 for an example.)

This question generates a surprising amount of heat. Around here that heat means: let’s look for research!

What Does “The Research” Say?

Truthfully, I can’t find much research on this question. Teachers have strong feelings on the subject, but the topic hasn’t gotten much scholarly attention.

The exception to this rule: Dr. Clayton Cook’s study on “Positive Greetings at the Door” from 2018.

As I described this study back in 2019, researchers trained teachers in a two-step process:

First: greet each student positively at the door: “Good morning, Dan — great hat!”

Second: offer "precorrective" reminders: "We're starting with our flashcards, so be sure to take them out right away."

The researchers trained five teachers (in sixth, seventh, and eighth grades) in these strategies.

Their results — compared to an “active” control group — were encouraging:

For the control group, time on task hovered in the mid-to-high 50-percent range, while disruptive behaviors took place about 15% of the time.

For the positive greeting group, researchers saw big changes.

Time on task rose from the high-50-percent range to more than 80% of the time.

Disruptive behaviors fell from ~15% to less than 5% of the time.

All that from positive greetings.
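If you like seeing the comparison laid out, here's a minimal Python sketch using rounded figures from the summary above (my approximations, not Cook's raw numbers):

```python
# Rounded figures from the summary above; not Cook's raw data.
control  = {"time on task": 57, "disruptive behavior": 15}   # percent of class time
greeting = {"time on task": 80, "disruptive behavior": 5}

for measure in control:
    delta = greeting[measure] - control[measure]
    print(f"{measure}: {control[measure]}% -> {greeting[measure]}% ({delta:+d} points)")
```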

A Clear Winner?

Handshake advocates might be tempted to read this study and declare victory. However, we have many good reasons to move more deliberately.

First: although handshakes are a kind of “positive greeting,” they’re not the only “positive greeting.” Researchers didn’t specify handshakes; they certainly didn’t require elaborate dances.

So, we can’t use this research to insist on either of those approaches. Teachers’ greetings should be specific and positive, but needn’t be handshake-y or dance-y.

Second: the "positive greetings" strategy requires an additional step — "precorrective guidance." Once the handshake/greeting is complete, the teacher should offer specific directions about the next appropriate step…

… start the exercise on the board,

… take out your notebook and write the date,

… remember the definitions of yesterday’s key words.

Handshakes alone don’t match this research strategy. We need to do more to get these results.

Third: this research took place in a very specific context. Researchers asked principals to nominate classes that had seen higher-than-average levels of disruption.

That is: if your class is already well behaved, you might not see much of a change. (Of course, if your class is already well behaved, you don’t really need much of a change.)

And One More Thing (Well, TWO More Things)

I think Dr. Cook's study is helpful, clear, and well done. However, as far as I know, it's one of a kind. His research hasn't been replicated (or, for that matter, contradicted). According to both Scite.ai and ConnectedPapers.com, this one study is everything we know from a research perspective.

In brief: the little research we have is encouraging. But: it doesn’t require elaborate choreography. It does require “precorrective guidance.” And, as Daniel Willingham says: “One study is just one study, folks.”

A final thought:

I suspect that “handshakes at the door” generate so much controversy because they’re a proxy for a wider battle of extremes.

Extreme #1: “If teachers put all our energy into forming relationships, students will inevitably learn more!”

Extreme #2: “That’s obviously dreadful nonsense.”

That is: “handshakes at the door” stand in for “relationships-first” teaching. Hence all the passion on Twitter.

This battle, I think, largely sacrifices sensible nuance to passionate belief.

On the one hand: of course, students (on average) learn more when they feel a sense of safety, respect, and connection. Some students (especially those who have experienced trauma) might struggle to learn without those things.

And, on the other hand: of course students can learn from teachers they don’t really like, and from teachers with whom they have no real connection. Lecture-based college courses depend on that model completely. So do military academies.

Handshakes at the door might help us connect with students if they feel comfortable and fun for us. But: plenty of individual teachers would feel awkward doing such a thing. Many school or cultural contexts would make such handshakes seem weird or silly or discomforting.

If such handshakes strengthen relationships, they might be a useful tool. If your relationships are already quite good, or if you’d feel foolish doing such a thing, or if your cultural context looks askance at such rituals, you can enhance relationships in other ways.

As is so often the case, we don’t need to get pulled onto a team — championing our side and decrying the other. We can, instead, check out available research, see how its conclusions apply to our context, and understand that optimal teaching practices might vary from place to place.

The 10-Minute Rule: Is The Lecture Dead?
Andrew Watson

The “10-minute rule” offers teachers practical guidance. It typically sounds something like this:

If students aren’t intrinsically interested in material, they can pay attention to it for no more than 10 minutes.

Ergo: teachers should do something different every ten minutes.

Ergo: the lecture is dead.

I first heard the "10-minute rule" at a conference in 2008, and have run across it frequently when I work with teachers. They too, it seems, heard it at a conference.

Any rule that gets so much love at teaching conferences must be true, right?

Research-Aligned Teaching Advice

If you’re reading this blog, you want your teaching to have research behind it. So, what exactly is the research behind the “10-minute rule?”

Neil Bradbury is glad you asked. He looked into its history, and came up with some astonishing results: results that would be funny if they weren’t so alarming.

Let’s start with a Johnstone and Percival study from 1976, where two researchers visited 90 lecture classes (!). By comparing observations, they agreed that attention started to wane within the first five minutes (!!), with another decrease in the 10-18 minute range (!!!).

As Bradbury reports, however, this conclusion gets murky quickly:

First: they visited only 13% of those lectures together. In other words: 87% of their data come from one lone observer.

Second: they don’t report how they measured attention, or — for that matter — lapses in attention.

That student looking out the window: is she distracted by a bird, or concentrating on the professor’s complex argument?

That student looking keenly at the slides: is he engrossed in the topic, or trying to remember his lines for tonight’s rehearsal?

Johnstone and Percival have no way to know.

In other words: the “10-minute rule” rests on the hunchy sense of two visitors who were — as far as we can tell — simply relying on their guts. Whatever we call that, we don’t call it “research.”

And, whatever we do with their hunches, we shouldn’t change our teaching because of them.

Measuring Attention

This study highlights a complex problem. Attention, of course, takes place inside our heads. How can we measure it?

One solution: keep track of students’ note taking. Perhaps, students take more notes when they pay attention, and fewer notes when they don’t?

If that hypothesis is true, then students who write less are paying less attention. When we find a steep decline in note taking, we’ve found the moment when attention has dropped off. Sure enough: a 10-minute increment turns out to be crucial.

Alas, as Bradbury points out, this approach also collapses.

First: students take notes relatively consistently throughout a lecture. Their note taking falls off in the final ten minutes, not after the first ten minutes.

Second: in fact, the quantity of note taking results from the professor’s lecture, not from the point in the lecture. When the speaker makes key points, students write more. When the professor is recapping, or simply winding down — as she might do at the end of a lecture — they take fewer notes.

As Bradbury pithily summarizes this approach:

Note taking is not a good proxy for attention whatsoever, and even if it were, it does not support a 10- to 15-minute limit on student engagement.

BOOM.

Let’s Get Physical

If note-taking doesn’t measure attention, perhaps we can use biological measures instead.

Research by Bligh used a pulsemeter to measure students’ alertness. This study found that their pulses dropped roughly 14% over the course of the class.

At last: research confirmation of the “10-minute rule”?

Alas, Bligh’s research found the same results during a discussion class as during a lecture.

We might think that a lower pulse suggests less attention. If it does, then neither class format sustains attention.

Classroom Implications

In brief, the “10-minute rule” isn’t a rule, and doesn’t last ten minutes.

More precisely: we have no research suggesting it’s a rule with a consistent time limit.

Given that truth, what should we teachers do?

First: focus on the obvious truth that people are different.

Older students can (probably) pay attention longer than younger ones.

Hungry students (probably) pay less attention than less-hungry ones. (Except right after lunch.)

Some cultures prioritize focused attention more than others.

Some lecturers know how to hold an audience better than others.

Your approach to teaching should vary based on your specific circumstances, not be dictated by an arbitrary rule (which sounds authoritative but has no research backing).

For instance: I’m currently teaching two sections of the same class — one in person and the other online. I teach them differently because attention can be more difficult online. (And because the online section meets earlier in the day — a real challenge for high school students.)

Second: study the science of attention.

Surprisingly, attention isn’t one thing.

Instead, attention is a behavioral combination of three distinct mental processes.

The more teachers understand that complex mix, the more successful we can be in creating the behavior by managing the mental processes.

I’ve written a book on this subject: Learning Begins: A Classroom Teacher’s Guide to Working Memory and Attention. (Ryan Reynolds will play me in the movie, I’m sure.)

Or, you can read LOTS of great articles: here’s one place to start.

Whichever approach you take, don’t let implausible absolute rules shape your thinking. Pay attention to your students, and to attention itself. Those two beacons will guide you on your classroom journey.


In the past, I’ve cited Wilson and Korn’s 2007 discussion of this topic. My thanks to Zach Groshell (Twitter handle: @MrZachG) for pointing to Bradbury’s wonderful article.

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored?
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.
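For the visually inclined, here's a minimal Python sketch of those anxiety numbers (rounded figures from above, not Khanna's raw data); both groups land below the scale's neutral midpoint:

```python
# Figures from the summary above; 1 = strongly disagree, 9 = strongly agree.
anxiety = {"graded quizzes": 4.20, "ungraded quizzes": 2.96}
NEUTRAL = 5.0   # midpoint of the 1-9 agreement scale

for group, score in anxiety.items():
    print(f"{group}: {score} ({NEUTRAL - score:.2f} points below neutral)")
```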

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.
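If you'd like to mimic that interleaving, here's a minimal sketch; the topic banks below reuse my example questions, and the structure is just one plausible way to build such a starter:

```python
import random

# Invented topic banks, reusing the example questions above;
# substitute your own course material.
banks = {
    "vocabulary": ["Define 'bildungsroman.'"],
    "grammar":    ["Write a sentence with a participle."],
    "reading":    ["Give an example of Janie exercising agency in last night's reading."],
}

# One question per topic, shuffled, so retrieval is interleaved across topics.
quiz = [random.choice(questions) for questions in banks.values()]
random.shuffle(quiz)
for number, question in enumerate(quiz, start=1):
    print(f"{number}. {question}")
```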

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

What (and Why) Should Students Memorize? Confidence and Fluency for the Win
Andrew Watson

In our profession, memorization has gotten a bad name.

The word conjures up alarming images: Dickensian brutes wielding rulers, insisting on “facts, facts, facts!”

In a world where students "can look up anything on the interwebs," why do we ask students to memorize at all?

One answer from cognitive science: we think better with information we know well.

Even when we can find information on the internet, we don’t use that information very effectively. (Why? Working memory limitations muddle our processing of external information.)

A second answer appears in intriguing recent research.

Reasonable Plans, Unstated Assumptions

As a classroom teacher, I might operate on this reasonable plan:

Step one: we discuss ideas and information in class.

Step two: students write down the important parts.

And, step three: when students need that information later, they look at their notes.

This plan — the core of most high school classes I know — relies on unstated assumptions:

Assumption one: students’ notes are largely correct.

Assumption two: if students write down information INcorrectly, they’ll recognize that mistake. After all, we discussed the correct information in class.

But what if that second assumption isn’t true?

What if students trust external information (their notes) more than internal information (their memories)?

Assumptions Thwarted

In 2019, Risko, Kelly, & Gaspar studied one version of this question.

They had students listen to word lists and type them into a storable file. After a distraction task, students got to review their lists. They were then tested on those words.

On the final list, however, these scholars did a sneaky thing: they added a word to the stored list. Sure enough, 100% of their students wrote down the additional word, even though it hadn’t in fact been on the initial word list.

Students trusted their written document (external “memory”) more than their own actual memory. When tested even later, students still included the additional word, even though it wasn’t a part of their initial learning.

In other words: the “reasonable plan” that teachers often rely on includes an assumption that — at least in this research — isn’t true.

Ugh.

Classroom Implications

This research, I think, reminds us that the right kind of memorization has great value for students.

We want students to know certain bedrock facts and processes with absolute certainty. We want them, for instance, to define key terms and ideas fluently. Crucially, we want them to reject — with confidence borne of certain knowledge — inaccurate claims.

For instance:

I just completed a unit on tragedy. My sophomores read August Wilson’s Fences and Shakespeare’s Macbeth.


On the very first day of the term, I went over a four-part definition of “tragedy.” (It’s a quirky definition, I admit, but it works really well.)


We reviewed that definition almost daily, increasingly relying on retrieval practice. For instance, I might give them this handout to fill in. Or we might do that work together on the board.

Over time, I started including inaccurate prompts in my questions: “So, tragedy ends in death or marriage, right?”

By this point, my students knew the definition so well that they confidently rejected my falsehoods: “No, you’re trying to trick us! Tragedy ends in death or banishment!”

For an even trickier approach, I encouraged students to correct one another’s (non-existent) mistakes:

Me: “T: what does comedy represent, and why?”

T: “The marriage (and implied birth) at the end of a comedy implies the continuity of society, and in that way contrasts tragedy’s death and banishment, which represent the end of society.”

Me: "M: what did T get wrong?"

M [confidently]: “Nothing. That was exactly right.”

Me [faking exasperation]: “S, help me out here. What did T and M miss?”

S [not fooled]: “Nothing. I agree with them both.”

Me: “Congrats to T for getting the answer just right. And congrats to M and S for not letting me fool you. It’s GREAT that you’re all so confident about this complex idea.”

Because these students knew this complex definition cold — because they had memorized it — they could stand firm when questioned skeptically. As a result, they did a great job when asked to apply that definition at the end of the term:

“How does Wilson’s Fences fit the definition of tragedy AND of comedy?”

To Sum Up

Despite all the bad press, the right kind of memorization can enhance learning.

When students know foundational information and processes by heart, they

BOTH process questions more fluently

AND resist misleading information from “external memory” sources.

Greater cognitive fluency + greater confidence in their knowledge = enduring learning.

Prior Knowledge: Building the Right Floor [Updated]
Andrew Watson

When I first published this post back in 2019, the underlying research wasn’t publicly available; now it is!


Take a gander at this passage from Michael Jones’s recent biography of The Black Prince:

“In the fourteenth century England used a silver standard of currency. The unit of account was the pound sterling (£) which was equal to one and a half marks of silver. The pound was divided into twenty shillings (s), each of twelve pence (d). There was also, from 1344, a gold coinage based on the noble, which was conventionally worth 6s 8d, but was rarely used. It would, however, be significant in the calculation of the ransom of King John II and also in the introduction of gold coinage into Gascony and then the principality of Aquitaine by the Black Prince.”

Many readers, I suspect, felt tempted to give up relatively quickly. (Don't blame yourself if you did.) Unless you're really up to speed on 14th-century English currency (both silver and gold!), the paragraph quickly becomes overwhelming.

The vocabulary in this passage probably doesn’t strain our cognition. Except for the phrase “marks of silver,” I know what all those words mean. (And, I can guess from context that a “mark” is some unit of measurement.)

However, the passage does place several mental demands on the reader.

First, it invites you to undertake several rapid mathematical calculations. (Quick: how many shillings in a mark? The sketch after this list works it out.)

Second, it requires you to learn abbreviations as you go. To understand the fourth sentence, you need to remember the (wildly counter-intuitive) abbreviation of “pence” as “d” from the third sentence.

Third, it assumes you recall several events and places unfamiliar, I suspect, to most Americans. Who was King John II? Why was he ransomed…was he kidnapped? Where are Gascony and Aquitaine? They don't sound very English — why did an English prince introduce coinage to them? Actually: why is a prince empowered to introduce new currency?
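To untangle the arithmetic, here's a minimal Python sketch that converts everything to pence, using only the conversions the passage itself supplies:

```python
# All values in pence (d), using only the conversions from the passage:
# 1 pound = 20 shillings; 1 shilling = 12 pence; 1 pound = 1.5 marks.
SHILLING = 12                 # 12d
POUND = 20 * SHILLING         # 240d
MARK = POUND / 1.5            # 160d, since a pound is one and a half marks
NOBLE = 6 * SHILLING + 8      # 6s 8d = 80d

print(MARK / SHILLING)        # 13.33 -> a mark is 13s 4d
print(NOBLE / MARK)           # 0.5  -> the noble is exactly half a mark
```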

Essential Background Knowledge

I thought of this paragraph when I read a recent article by Robert Pondiscio. In it, Pondiscio summarizes a study trying to determine how much background knowledge is necessary for comprehension.

In this study, students who scored higher than 59% on a background-knowledge test understood a reading passage substantially better than those who scored below 59%.
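Here's a minimal sketch of that split; the scores below are invented for illustration, not the study's data:

```python
# Invented (background-knowledge score, comprehension score) pairs --
# illustrative only, not the study's actual data.
students = [(0.45, 0.52), (0.52, 0.55), (0.61, 0.78), (0.70, 0.84), (0.88, 0.90)]

CUTOFF = 0.59
below = [comp for bg, comp in students if bg < CUTOFF]
above = [comp for bg, comp in students if bg >= CUTOFF]

print(f"below {CUTOFF:.0%}: mean comprehension {sum(below) / len(below):.2f}")
print(f"at or above {CUTOFF:.0%}: mean comprehension {sum(above) / len(above):.2f}")
```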

As summarized by Pondiscio, the study’s authors see some clear teaching implications here.

First, we can meaningfully measure our students’ relevant background knowledge.

Second, students who fall short on that measure will benefit A LOT if we provide them with the essentials.

For instance, students who understood that "habitat," "species," and "ecosystems" were relevant vocabulary for the study of ecology understood the reading passage more deeply. (The study included 3500 students, so I believe they controlled for various confounds. I haven't read the study itself; it's behind a paywall.)

I think those conclusions point to another:

Third: models of teaching that focus on “pure discovery” will create substantial challenges for students who lack background knowledge. Students who don’t know the basics of a topic simply can’t understand the field of inquiry within which they’re meant to discover.

And, they won’t feel motivated by curiosity to find out. They’ll feel discouraged by their confusion. (Few readers, I suspect, were motivated by the paragraph above to learn more about medieval English currency.)

A Final Thought

This study finds that 59% was the essential tipping point. Students who scored lower than 59% on the prior knowledge test found themselves in a different cognitive category than those who scored above.

However, that percentage does not necessarily apply to all circumstances.

In other words: we shouldn’t give our students prior-knowledge tests, and focus only on those who score 58% and below.

Instead, we should plan our lessons and units knowing that some floor-level of knowledge will be crucial for learning most things.

In every case, as you hear me say so often, we'll have to rely on the teacher's judgment to discover that level.

Researchers can remind us that the floor exists. But they can’t identify it for every teacher in every classroom. Ultimately, with that research guidance in mind, we’ll find the right place for the floor. And, we’ll build it.

“Soft” vs. “Hard” Skills: Which Create a Stronger Foundation?
Andrew Watson

As teachers, should we focus on our students’ understanding of course content, or on our students’ development of foundational academic skills?

Do they benefit more from learning history (or chemistry or spelling or flute), or from developing the self-discipline (grit, focus, executive skills) to get the work — any work — done?

I’ve found a recent study that explores this question. It stands out for the rigor of its methodology, and the tough-mindedness of its conclusions.

Here’s the setup:

Daunting Problems; Clever Solutions

Researchers struggle to answer these questions because student choice can complicate the data.

When college students choose courses and professors, when they opt out of one section and opt into another, we can’t tell if the professor’s quality or the students’ preferences led to particular research results.

How to solve this problem? We find a school where students get no choices.

They must take the same courses.

They can’t change sections.

Students start the year randomly distributed, and they stay randomly distributed.

Where shall we find such a school? Here’s a possibility: the United States Naval Academy. All students take the same courses. They can’t switch. They can’t drop. Sir, yes sir!

Even better: several USNA courses are sequential. We can ask this question: how does the student’s performance in the first semester affect his/her performance in the second semester?

Do some 1st semester teachers prepare their students especially well — or especially badly — for the 2nd semester?
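In spirit, the analysis boils down to a grouped comparison. Here's a toy sketch (instructor labels and grades invented; the team's actual method involves much fancier statistics):

```python
from collections import defaultdict

# Hypothetical records: (Calculus I instructor, the student's later Calc II grade).
records = [
    ("Easygoing", 2.3), ("Easygoing", 2.5),
    ("Tough", 3.1), ("Tough", 3.4), ("Tough", 3.0),
]

by_instructor = defaultdict(list)
for instructor, grade in records:
    by_instructor[instructor].append(grade)

# Because students are randomly assigned and can't switch sections, differences
# in these group means point at the first-semester instructor's effect.
for instructor, grades in by_instructor.items():
    print(f"{instructor}: mean Calc II grade {sum(grades) / len(grades):.2f}")
```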

We can even fold in extra data. The website Rate My Professors lets students grade professors on many qualities — including the difficulty of the course, and their overall rating. Perhaps those data can inform our understanding of teacher effectiveness.

Provocative Conclusions

A research team has followed this logic and recently published their conclusions.

In their findings:

Easygoing teachers — who don’t demand lots of work, who don’t communicate high standards, who routinely give lots of high grades — harm their students. 

How so? Their students — quite consistently — do badly on subsequent courses in the field.

In other words: if I have an easygoing teacher for Calculus I, I’m likely to do badly in Calculus II — compared to my identical twin brother who had a different teacher.

On the other hand, tough-minded teachers — who insist on deadlines, who require extra work, who remain stingy with high grades — benefit their students.

How so? These students — like my identical twin — do better in subsequent courses than I do.

This research team calls such executive function topics — getting work done, even if it's dull; prioritizing; metacognition — "soft skills." In their analysis, professors who are tough-minded about these soft skills ultimately help their students learn more.

More Provocative Still

This logic certainly makes sense; we’re not shocked that students learn more when we insist that they work hard, focus, and set high standards.

Of course, professors who DON’T insist that their students work hard get lots of student compliments (on average). We teachers know that — all things being equal — students are happier when they get less work. Their RateMyProfessor scores average higher than those of their tough-minded peers.

In turn, colleges notice student popularity ratings. School leaders feel good when students praise particular teachers. They give them awards and promotions and citations. Why wouldn’t they? After all, those highly-praised professors give the college a good reputation.

In other words: according to this research team, colleges are tempted to honor and promote teachers who get high student ratings — even though those very professors harm their students’ long term learning, and thereby diminish the quality of the academic program.

That’s a scathing claim indeed.

Caveats

Like everything I write about here, this finding comes with caveats.

First: although these students were randomly assigned once they got to the Naval Academy, admission to that Academy is very challenging indeed. (Google tells me that 8.3% of their applicants get in.)

So, a tough-minded approach might benefit this extremely narrow part of the population — who, let’s be honest, signed up for a rigorous academic program, rigorously delivered.

However, that finding doesn’t necessarily mean that this approach works for younger students, or a broader swath of the population, or students who didn’t apply for such demanding treatment.

It might. But, this study by itself shouldn’t persuade us to change our work dramatically. (Unless we work in a similar academic setting.)

Second: this report’s authors define “soft” and “hard” in a very specific way (see their page 3).

Your school might use these terms quite differently, so their claims might not apply directly to your terminology.

Equally important, the strategies they use to distinguish between “tough-minded” and “easy-going” professors require lots of intricate parsing.

I myself don't have the stats skills to interrogate their process; I can imagine a more expert reader asking sharp questions about their methods.

Conclusion

In many parts of life, short-term challenges lead to long-term benefits.

We might not like exercise, but it helps us as we get older.

We might like bacon and ice cream, but leeks and salmon keep us fitter.

This research report suggests that we help our students in the long run by maintaining tough-minded high standards right now.

Doing so might not make us popular. Our administrative leaders don’t always recognize our wisdom. But if our students learn more, their strong “soft-skills” foundation really does help them thrive.

Seriously: What Motivates Teachers to Be Funny?
Andrew Watson

To start 2021 in the right spirit, let’s think about humor in the classroom.

It seems that, obviously, humor might be a good classroom strategy. When the lesson slows down, a joke or two might brighten the mood.

Once we begin studying this question the way researchers study things, well, it gets much more complicated. (I once heard the claim that “laughter improves learning 44%!” Unsurprisingly, so vague a statement doesn’t hold up to much scrutiny. For starters, the cited research says 28%, not 44%…)

We might study, for instance:

What kind of humor do teachers use?

Are there differences between K-12 teachers’ and college professors’ use of humor?

Are there gender differences in use of humor? (Believe it or not, there’s a controversy here.)

What motivates teachers to attempt humor?

I’ve recently found research trying to answer this question:

What motivation traits prompt college professors to try content-relevant humor? (“Content relevant” means they’re not just telling jokes; they’re being funny about the topic they’re discussing.)

What did the researchers find?

Kinds of Motivation

Psychologists often divide behavior up into plausible categories for analysis.

For instance, if you know Carol Dweck’s work on mindset, you know that some people feel motivated to learn more, while others feel motivated to demonstrate what they already know.

In this case, college professors might want to improve at professing, or they might want to demonstrate that they’re already top-notch professors.

Also, motivations can be either positive or negative.

In this case, a professor might want to demonstrate that they’re good (positive), or not reveal that they’re bad (negative).

Researchers have other motivational categories as well.

In this study, they wonder if professors use humor to improve relationships with students.

And, they wonder if a prof’s desire to avoid work influences their use of humor.

To start answering these questions, the researchers had more than 250 professors fill out surveys that give insight into their motivation. (Important: these data, in other words, come from self report. Not everyone is persuaded by such data.)

They also asked students to rate — on a scale of 1 to 5 — their agreement with this statement: “Instructor enhances presentations with the use of humor.”

The Envelope, Please

So, what did they learn?

For instance: when did students endorse the statement that their professor “enhances presentations with the use of humor?”

Answer: when those professors themselves said they wanted to "build amicable and valued relationships with students." That is: profs with relational goals use humor to build those relationships.

How about the reverse? When did students consistently reject that statement?

Answer: When their professors said they wanted to avoid looking bad. (If you’re keeping score, that’s a “negative performance” goal.)

In brief: professors who want to connect with students make content-related jokes. Those who fear looking incompetent remain humorless.

Three Interesting Points

First: the researchers here are scrupulous to distinguish between college professors and K-12 teachers. They don’t claim that these findings apply to earlier grades.

Second: This research team — contradicting others — finds that women use humor more often than men. (The controversy continues!)

Third: One word in particular jumps out at me: “enhances.” Students didn’t say that the professors “tried” to be funny, but that they “enhanced presentations with humor.”

That is: the students suggest that — for teachers who want to build relationships — humor really does make lectures better. The researchers don’t address that question directly, but — at least to me — that conclusion flows directly from this research.

Oh, what the heck. Let’s have another.

Fourth: In humor as in all things, personality matters. If you’re not a funny teacher, don’t feel that you have to start telling jokes to build relationships. You — almost certainly — have your own ways to do so. Use your own authentic strategies to connect with your students.

I can't cite research, but I'm almost certain: your own honest self-presentation will be MUCH more effective at building relationships than forced humor.

The Best Teaching Advice We’ve Got
Andrew Watson

You want to improve your teaching with psychology research?

We’ve got good news, and bad news.

And more good news.

Good News: we have lots and LOTS of research. We can talk about attention, or working memory, or the spacing effect, or motivation, or stress…the list is long. And super helpful.

So much practical advice!

Bad News: actually, the bad news is the same as the good news. We’ve got SO MUCH good research that it’s honestly hard to keep track of it all.

I mean, seriously. Should you start by looking at attention research? Or stress research?

Should we think about the motivational effects of student-teacher relationships, or the perils of working memory overload, or the benefits of desirable difficulty?

Which is most important?

Honestly, I think our next priority is not so much finding out new truths about learning, but organizing all the information we already have.

More Good News

If you agree that we really need someone to sort all these suggestions into a coherent system, you’ll be delighted to read this article by Stephen Chew (Twitter handle: @SChewPsych) and William Cerbin (@BillCerbin).

Other scholars — for instance, Barak Rosenshine — have put together a coherent system based on learning principles. Chew and Cerbin, instead, organize their system around cognitive challenges.

That is:

If students feel anxiety about a topic or discipline, that emotion will interfere with their learning.

If students have prior misconceptions, those misconceptions will distort their understanding.

If classroom work or assignments go beyond working memory limits, students won’t learn effectively (or, at all).

When planning a course or a lesson or an assignment, teachers can think their way through these specific challenges. By contemplating each one, we can design our work to best facilitate learning.

Getting the Emphasis Right

If you’re thinking “this is such excellent news! It just can’t get any better!” — well — I’ve got some news: it gets better.

Chew and Cerbin write:

There is no single best teaching strategy for all students, topics, and situations. The proposed framework is not prescriptive … and can guide adaptation of teaching practice.

In other words, they’re not saying: here’s a list of things to do.

Instead, they are saying: here are several topics/problems to consider.

Teaching advice should not include “best practices.” (That’s a business concept.) It should include “best questions to ponder as we make decisions.” Chew and Cerbin make this point repeatedly.

Frequent readers know that I’ve been banging on for years with this mantra: “Don’t just do this thing; instead, think this way.”

We should think about our students’ working memory limitations. The strategies we use might differ for 1st graders and 8th graders.

We should think about the importance of transfer. A Montessori school and a KIPP school will (almost certainly) use differing strategies to reach that goal.

We should think about our students’ prior knowledge. The best way to measure that knowledge might be different for students with diagnosed learning differences.

Yes: we should consider these nine topics. But the ways we answer them must depend on our students, our schools, our curriculum, and ourselves.

For all these reasons, I recommend Chew and Cerbin’s article with great enthusiasm.

And, happily, you can meet Dr. Chew at our online conference in February! (In case you’re wondering: I was planning to write about this article before I knew he was joining the conference. A happy synchronicity.)

“But How Do We Know If It Works in the Classroom?”: The Latest on Retrieval Practice
Andrew Watson

We’ve heard so much about retrieval practice in the last two years that it seems like we’ve ALWAYS known about its merits.

But no: this research pool hasn’t been widely known among teachers until recently.

We can thank Agarwal and Bain’s wonderful Powerful Teaching for giving it a broad public audience. (If you had been attending Learning and the Brain conferences, of course, you would have heard about it a few years before that.)

Of course, we should stop every now and then to ask ourselves: how do we know this works?

In this case, we’ve got several answers.

In addition to Agarwal and Bain’s book, both Make it Stick (by Brown, Roediger, and McDaniel) and How We Learn (by Benedict Carey) offer helpful surveys of the research.

You could also check out current research. Ayanna Kim Thomas recently published a helpful study about frequent quizzing in college classrooms. (It helps!)

All these ways of knowing help. Other ways of knowing would be equally helpful.

For instance: I might want to know if retrieval practice helps in actual classrooms, not just in some psychology lab somewhere.

Yes, yes: Agarwal and Bain’s research mostly happened in classrooms. But if you’ve met them you know: it might work because they’re such engaging teachers! What about teachers like me — who don’t quite live up to their energy and verve?

Today’s News

A recent meta-analysis looked at the effect of retrieval practice in actual classrooms with actual students. (How many students? Almost 8000 of them…)

Turns out: retrieval practice helps when it's studied in psychology labs.

And, it helps when vivacious teachers (like Agarwal and Bain) use it.

And, it helps when everyday teachers (like me) use it.

It really just helps. As in: it helps students learn.

A few interesting specifics from this analysis:

First: retrieval practice quizzes helped students learn more when they were counted for a final grade than when they weren’t. (Although: they did help when not counted toward the grade.)

Second: they helped more when students got feedback right away than when feedback was delayed. (This finding contradicts the research I wrote about last week.)

Third: short answer quizzes helped learning more than multiple choice (but: multiple choice quizzes did produce modest benefits).

Fourth: announced quizzes helped more than unannounced quizzes.

and, by the way

Fifth: retrieval practice helped middle-school and high-school students more than college students. (Admittedly: based on only a few MS and HS studies.)
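Pulled into one place, those findings sketch a rough quiz-design checklist. The labels below are mine, not Sotola and Crede's, and each choice carries the caveats noted above:

```python
# Design choices associated with larger benefits in the meta-analysis.
# Labels are mine; see Sotola and Crede for the actual effect sizes.
quiz_design = {
    "counts_toward_grade": True,      # ungraded quizzes still helped, just less
    "feedback_timing": "immediate",   # contradicts last week's delayed-feedback finding
    "format": "short answer",         # multiple choice still produced modest benefits
    "announced_in_advance": True,     # announced quizzes beat unannounced ones
}

for choice, value in quiz_design.items():
    print(f"{choice}: {value}")
```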

In brief: all that good news about retrieval practice has not been oversold. It really is among the most robustly researched and beneficial teaching strategies we can use.

And: it’s EASY and FREE.

A Final Note

Because psychology research can be — ahem — written for other psychology researchers (and not for teachers), these meta-analyses can be quite daunting. I don’t often encourage people to read them.

In this case, however, authors Sotola and Crede have a straightforward, uncomplicated prose style.

They don’t hold back on the technical parts — this is, after all, a highly technical kind of writing.

But the explanatory paragraphs are unusually easy to read. If you can get a copy — ask your school’s librarian, or see if it shows up on Google Scholar — you might enjoy giving it a savvy skim.

“Sooner or Later”: What’s the Best Timing for Feedback?
Andrew Watson

Given the importance of feedback for learning, it seems obvious teachers should have well-established routines around its timing.

In an optimal world, would we give feedback right away? 24 hours later? As late as possible?

Which option promotes learning?

In the past, I’ve seen research distinguishing between feedback given right this second and that given once students are done with the exercise: a difference of several seconds, perhaps a minute or two.

It would, of course, be interesting to see research into longer periods of time.

Sure enough, Dan Willingham recently tweeted a link to this study, which explores exactly that question.

The Study Plan

In this research, a team led by Dr. Hillary Mullet gave feedback to college students after they finished a set of math problems. Some got that feedback when they submitted the assignment; others got it a week later.

Importantly, both groups got the same feedback.

Mullet’s team then looked at students’ scores on the final exams. More specifically, if the students got delayed feedback on “Fourier Transforms” — whatever those are — Mullet checked to see how they did on the exam questions covering Fourier.

And: they also surveyed the students to see which timing they preferred — right now vs. one week later.

The Results

I’m not surprised to learn that students strongly preferred immediate feedback. Students who got delayed feedback said they didn’t like it. And: some worried that it interfered with their learning.

Were those students’ worries correct?

Nope. In fact, just the opposite.

To pick one set of scores: students who got immediate feedback scored 83% on that section of an exam. Students who got delayed feedback scored 94%.

Technically speaking, that’s HUGE.

Explanations and Implications

I suspect that delayed feedback benefitted these students because it effectively spread out the students’ practice.

We have shed loads of research showing that spacing practice out enhances learning more than doing it all at once.

So, if students got feedback right away, they did all their Fourier thinking at the same time.  They did that mental work all at once.

However, if the feedback arrived a week later, they had to think about it an additional, distinct time. They spread that mental work out more.
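Here's a minimal sketch of that logic; the one-week default mirrors Mullet's design, while the helper name and dates are mine:

```python
from datetime import date, timedelta

def feedback_date(submitted: date, delay_days: int = 7) -> date:
    """Schedule feedback so it creates a second, spaced exposure.

    delay_days=0 reproduces the 'immediate' condition (massed practice);
    delay_days=7 mirrors Mullet's one-week delay (spaced practice).
    """
    return submitted + timedelta(days=delay_days)

print(feedback_date(date(2021, 3, 1)))      # 2021-03-08: a distinct second session
print(feedback_date(date(2021, 3, 1), 0))   # 2021-03-01: all the work lands in one day
```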

If that explanation is true, what should teachers do with this information? How should we apply it to our teaching?

As always: boundary conditions matter. That is, Mullet worked with college students studying — I suspect — quite distinct topics. If they got delayed feedback on Fourier Transforms, that delay didn’t interfere with their ability to practice “convolution.”

In K-12 classrooms, however, students often need feedback on yesterday’s work before they can undertake tonight’s assignment.

In that case, it seems obvious that we should get feedback to them ASAP. As a rule: we shouldn’t require new work on a topic until we’ve given them feedback on relevant prior work.

With that caveat, Mullet’s research suggests that delaying feedback as much as reasonably possible might help students learn. The definition of “reasonably” will depend on all sorts of factors: the topic we’re studying, the age of my students, the trajectory of the curriculum, and so forth.

But: if we do this right, feedback helps a) because feedback is vital, and b) because it creates the spacing effect. That double-whammy might help our students in the way it helped Mullet’s. That would be GREAT.