How Do Experts Think?
Andrew Watson

Perhaps you’ve heard the saying: “To a hammer, everything looks like a nail.”

It means, more or less, we see what we’re trained to see.

If I bring a problem to a plumber, she’ll think about it like a plumbing problem. An economist, like an economics problem. A general, a military problem.

What does research tell us about this insight? And, does that research give us guidance about teaching and learning?

The Geoscientists and the Balloon

A research team led by Dr. Micah Goldwater wanted to explore this topic.

So, they asked a few hundred people these questions:

“A balloon floating is like _________ because _________.”

“Catching a cold is like _________ because _________.”

Those who answered these questions fell into four distinct groups:

Expert geoscientists — who had an MA or PhD in geoscience

Intermediate geoscientists — who were studying geoscience

Expert vision scientists — who had an MA or PhD in vision science

Non-expert adults — who had not studied science in college

Goldwater’s team wanted to know: how often would people offer causal analogies? “A balloon floating is like hot water rising in a cold sea because they result from the same underlying causal principle.”

Deeper still, they wanted to know how often people offer those causal analogies spontaneously, and how often they need to be prompted to do so. (The research details get tricky here, so I’m simplifying a bit.)

Archimedes Catches a Cold

Sure enough, expert geoscientists spontaneously offered causal analogies for the balloon question — because they have a relevant geoscientific rule, called “Archimedes’ principle.”
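(A quick refresher, not part of Goldwater’s study: Archimedes’ principle says that the buoyant force on an object equals the weight of the fluid the object displaces. In symbols:)

```latex
% Archimedes' principle (standard physics; not drawn from Goldwater's paper):
% buoyant force = fluid density x displaced volume x gravitational acceleration
F_b = \rho_{\mathrm{fluid}} \, V_{\mathrm{displaced}} \, g
% A balloon floats when F_b exceeds the balloon's weight, i.e., when the
% balloon's average density is lower than that of the surrounding air.
```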

However, expert vision scientists did not spontaneously give causal analogies, because their branch of science does not include a causally relevant analogy.

And neither group spontaneously proposed many causal analogies for “catching a cold,” because neither field includes an underlying principle relevant to colds.

This finding — along with other parts of Goldwater’s research — suggests this conclusion: hammers typically see nails.

That is: experts spontaneously perceive, contemplate, and understand new information (“floating balloons”) through core principles of their field (“Archimedes’ principle”) — even though balloons don’t come up very often in geoscience.

Teaching Implications: Bad News, and Good

As I visit schools, I often hear teachers say “I want my students to think like historians” or “think like scientists” or “think like artists.” To accomplish this goal, some pedagogies encourage us to give students “expert tasks.”

Alas, Goldwater’s findings (and LOTS of other research) suggest that this bar might be MUCH too high. It takes years — decades? — to “think like a researcher” or “think like a coach.”

Even people with PhDs in vision science don’t think causally about floating balloons unless explicitly prompted to do so.

As Dan Willingham writes in Why Don’t Students Like School?, “cognition early in training is fundamentally different from cognition late in training” (127).

This message often feels like bad news.

All those authentic tasks we’ve been giving students might not have the results we had hoped for. It’s extraordinarily difficult for students to think like a mathematician, even when we give them expert math tasks.

However, I see glimmers of hope in this gloomy conclusion.

My students (I teach high school English) won’t think like literary critics. However, I think they can and do become “experts” in much smaller sub-sub-sub-fields of English. (Warning: I’m about to switch from summarizing research to speculating about a classroom anecdote.)

When Comedy is Tragic

For instance: I recently gave my students a fairly complex definition of “comedy and tragedy.” This section of the unit required LOTS of direct instruction and LOTS of retrieval practice. After all: I’m the expert, and they’re novices.

My students then read a short story by Jhumpa Lahiri called “A Temporary Matter.” I asked them to look for elements of comedy and tragedy in that story.

Not only did they find those elements, they SPONTANEOUSLY pointed out Lahiri’s daring: she uses traditionally comic symbols (food, music, celebration, childbirth) as indicators of tragedy (“death and banishment”).

And, since then, they’ve been pouncing on tragic/comic symbolism, and other potentially innovative uses thereof.

These students aren’t (yet) expert literary critics. But on this very narrow topic, they’re starting to be flexible and inventive — a sign of budding expertise.

As long as I define the goal narrowly enough, a focused kind of pre-expertise is indeed reasonable and achievable.

In Sum

Like lots of research in the field of “novices and experts,” Goldwater’s study warns us that experts really do think differently from novices, and that true expertise takes years to develop.

However, that insight shouldn’t scare us away from well-defined tasks that build up very local subsections of developing expertise. Our students aren’t yet capital-E Experts. And, the right-sized educational goals can move them towards ultimate Expertise.


Teachers’ Gestures Can Help Students Learn
Andrew Watson

Over the years, I’ve written about the importance of “embodied cognition.”

In other words: we know with our brains, and we know with and through our bodies.

Scholars such as Dr. Susan Goldin-Meadow and Dr. Sian Beilock have done splendid and helpful work in this field.

Their research suggests that students might learn more when they make the right kind of gesture.

Other scholars have shown that — in online lectures — the right kind of pointing helps too.

What about teachers’ gestures? Can we help students learn through the way we use our hands?

Dr. Celeste Pilegard wanted to find out.

Steamboats, East and West

Pilegard invited college students to watch brief video lectures. The topic: the differences between Eastern and Western steamboats. (You think I’m joking. I’m not joking.)

These students watched one of four versions:

In the first version, the teacher’s gestures focused on the surface features of the steamboats themselves (how deep they sit in the water, for instance).

In the second version, the gestures focused on the structure of the lesson (“Now I’m talking about Eastern steamboats, and NOW I’m talking about Western steamboats.”).

Third version: gestures emphasized BOTH surface AND structural features.

Fourth version: a control group saw a video with neutral, content-free gestures.

Did those gestures make a difference for learning?

Pilegard, in fact, measured learning in two ways:

Did the students remember the facts?

Could the students apply those facts by drawing inferences?

So, what did she discover?

No, but Yes

Researchers typically make predictions about their findings.

In this case, Pilegard predicted that neither the surface gestures (about steamboats) nor the structural gestures (about the logic of the lesson) would help students remember facts.

But, she predicted that the structural gestures would help students draw inferences. (“If a steamboat operates on a shallow river, what does that tell you about the pressure of the steamboat’s engine?”) Surface gestures, she predicted, would not improve inferences.

Sure enough, Pilegard was 2 for 2.

Watching gestures didn’t help students remember facts any better. However, students who watched structural gestures (but not surface gestures) did better on inference questions. (Stats types: the Cohen’s d was 0.39; an impressive bonus for such a small intervention.)
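(For stats newcomers: Cohen’s d measures the difference between two group means in units of their pooled standard deviation, so d = 0.39 means the structural-gesture group outscored the control by roughly four-tenths of a standard deviation. Here is a minimal sketch of the calculation in Python; the scores are invented for illustration, not Pilegard’s data.)

```python
# Minimal sketch of Cohen's d. The scores below are invented for
# illustration; they are NOT Pilegard's data, and this toy example
# produces a larger d than real classroom effects like her 0.39.
import statistics

structural_gestures = [8, 7, 9, 6, 8, 7]  # hypothetical inference scores
neutral_gestures    = [6, 7, 7, 5, 6, 6]  # hypothetical control scores

n1, n2 = len(structural_gestures), len(neutral_gestures)
s1 = statistics.stdev(structural_gestures)
s2 = statistics.stdev(neutral_gestures)

# Pooled standard deviation across both groups
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

# Cohen's d: the difference in means, in units of pooled SD
d = (statistics.mean(structural_gestures) - statistics.mean(neutral_gestures)) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```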

When Pilegard repeated the experiment with a video on “innate vs. acquired immunity,” she got the same results.

Implications and Cautions

As teachers, we know that every little bit helps. When we use gestures to reinforce the underlying logical structure of our explanations, doing so might help students learn more.

As we plan, therefore, we should be consciously aware of our lesson’s logical structure, and think a bit about how gestures might reinforce that structure.

At the same time, regular readers know that all the usual cautions apply:

We should look at groups of studies, not just one study.

Pilegard’s research focused on college students. Will this strategy work with other students? We don’t know for sure.

These video lessons were quite short: under two minutes each. Will this strategy work over longer periods of time? We don’t know for sure.

In other words — this research offers a promising strategy. To have greater confidence, we need more research: with students who resemble our own, and with lessons that last longer.

I myself do plan to think about gestures for upcoming lessons. But I won’t ignore all the other teaching strategies (retrieval practice, cognitive load management, etc.). Here’s hoping that future research can point the way…


By the way:

Teachers often ask how they can get copies of research to study it for themselves.

Easy answer #1: Google Scholar.

If that doesn’t work, I recommend easy answer #2: email the researcher.

In this case, I emailed Dr. Pilegard asking for a copy of the study — and she emailed it to me 11 minutes later.

In honor of her doing so, I’m creating the Pilegard Award for Prompt Generosity in Sharing Research with People who Email You Out of the Blue.

No doubt it will be much coveted.


Handwriting Improves Learning, Right?
Andrew Watson

Here’s a good rule for research: if you believe something, look for research that contradicts your belief.

So, if you think that retrieval practice helps students learn, see if you can find research showing the opposite.

If you disapprove of cold-calling, see if any studies support its use.

If you think that hand-written notes help students more than notes taken on a laptop, try to find research that disagrees with you.

In this last case, you might even find me. Most teachers I know believe that handwritten notes are superior, and they cite a well-known study to support that belief.

I’ve argued for years that this research assumes students can’t learn how to do new things – a very odd belief for a teacher to have. If you believe students can learn how to do new things, well, this study actually suggests that laptop notes will help more than handwritten notes.

However, the “good rule” described above applies to me too. If I believe that we don’t know whether handwriting or keyboarding is better for learning, I should look for evidence that contradicts my belief.

For that reason, I pounced on a recent science news headline. The gist: recent research by Robert Wiley and Brenda Rapp shows that students who wrote by hand learned more than those who used laptops.

So, does their research finally contradict my belief?

Learning Arabic Letters

Wiley and Rapp had college-age adults learn Arabic letters.

12 of them learned by pressing the right key on a keyboard.

12 learned by looking at the letters closely and confirming they were the same.

And, 12 learned by writing the letters.

Did these distinct learning strategies make a difference several days later?

YES THEY DID.

The hand-writers learned a lot more, and learned a lot faster.

In fact – here’s a cool part – their learning transferred to new, related skills.

These participants practiced with letters. When Wiley and Rapp tested them on WORDS, the hand-writers did better than the other two groups – even though they hadn’t practiced with words.

So: sure enough, handwriting helped students learn more.

Boundary Conditions

Given the strength and clarity of these findings, you might think that I’m going to change my mind.

Reader, I am not. Here’s why:

This research shows that writing by hand helps people learn how to write by hand. It also helps people learn to do things immediately related to writing by hand – like saying and writing words.

We should notice the narrow boundaries around that conclusion.

People who write by hand learn how to write by hand.

That research finding, however, does NOT demonstrate that writing by hand helps people learn things unrelated to handwriting itself.

For instance: do handwritten notes help people learn more about history or psychology or anatomy than laptop notes? This research does not answer that question, because that question falls outside the boundaries of the research.

In a similar way: practicing scales on the piano surely helps you play piano scales better than – say – watching someone else do so.

But: does practicing piano scales make me better at other tasks requiring manual dexterity? Knitting? Keyboarding? Sculpting?

To answer those questions, we have to research those questions. We can’t extrapolate from piano scales to knitting and sculpting. (Well: we can, but we really shouldn’t.)

So, What’s The Answer?

Is handwriting really a better way to learn than keyboarding?

Honestly, I just don’t think we know. (In fact, Wiley and Rapp don’t claim that handwriting helps anywhere other than learning and reading letters and words.)

In fact, I suspect we need to explore MANY other variables:

the content being learned,

the teacher’s strategy for presenting it,

the student’s preference,

the student’s age –

perhaps even the relative complexity of writing vs. keyboarding. (I’m not an expert in this topic, but I understand that some languages require very intricate steps for accurate keyboarding.)

We can say – thanks to Wiley and Rapp – that handwriting helps people learn how to write by hand. But until we explore those other precise questions precisely, we shouldn’t offer strong answers as if they have research support.


Why Don’t My High-School Students Just Follow My Advice?
Andrew Watson

I’ve been teaching for several centuries now. You’d think my students would believe me when I tell them how to make their sentences better. Or how to interpret literary passages. Or how to succeed in life.

Why don’t they?

Recent research suggests one potential answer: because my advice isn’t very good.

Here’s the story…

London Calling

A research team in London, led by PhD student Madeleine Moses-Payne, studied adolescent metacognition: adolescents’ ability to assess the correctness of their own judgments.

And, they looked at teens’ willingness to accept advice — good and bad — from adults.

In this case, the metacognition and “advice” were about a kind of space-themed video game. The participants had to determine — as quickly as possible — if there were more of species X or species Y on a planet.

The species were simply blobs in different colors. So, the participants made a snap judgment: are there more blue or more yellow blobs on the screen? (You can see some images from the study here.)

After the participants made their guess, they then rated their own confidence in their judgment; that’s the metacognition part.

And occasionally they got guidance from a “space advisor,” saying either “there are more blue blobs” or “more yellow blobs.” Most of the time (70%), the advisor was correct; 30% of the time, it was wrong.

What did researchers learn by putting all these variables together?

This Depends on That

Moses-Payne’s methodology included 3 age groups: children (8-9 years old), early adolescents (12-13), and late adolescents (16-17).

She wanted to know if data patterns changed with time. Here’s what she found:

First: adolescents (both early and late) were better at metacognition. That is, their confidence in their judgment aligned more precisely with the quality of their guesses, compared to the children.

Second: adolescents rejected more adult advice than children did.

And, here’s the kicker:

Third: adolescents rejected more bad advice.

That is: children lacked metacognitive certainty in the correctness of their judgments. Therefore, they let adult advice — even bad advice — sway their decision making.

However, adolescents had more accurate metacognitive confidence in their judgment. Therefore, they accepted good advice when they weren’t certain, but rejected bad advice when they were certain.

In Moses-Payne’s pithy summary:

adolescents, in contrast to children, take on others’ advice less often, but only when the advice is misleading.

So: why do my students resist my advice? Maybe they resist it when I’m wrong.

Not So Fast

So far, this research design makes a lot of sense, and leads to a helpful — and usefully provocative — conclusion.

At the same time, I think we should notice the important limitations of its conclusions.

In this research, the “advice” was either a correct or an incorrect answer about perceiving the relative number of colored blobs on a screen.

It was not, say, advice about career choice, or about the best strategy to use when solving a math problem, or about when to listen to your mother. (ALWAYS listen to your mother.)

Most of the time, in fact, we don’t use the word “advice” to describe information that’s factually correct or incorrect. “Advice” is usually an experience-based opinion, not the correct answer to a question.

And so: this research does provide a helpful look at adolescent development.

Teens improve their metacognitive awareness of their own right/wrong answers.

They can use that information to guide decision making effectively.

It does NOT, however, give us a comprehensive new framework for thinking about advising teens (“Don’t worry if they reject your advice — it must have been wrong if they did!”).

I suspect adults will still give teens advice. And, they’ll accept some and reject some. And we’ll still be puzzled when they do.

And — if we’re high school teachers — we’ll still think they’re awesome anyway.

Let’s Get Practical: What Works Best in the Classroom?
Andrew Watson

At times, this blog explores big-picture hypotheticals — the “what if” questions that can inspire researchers and teachers.

And, at times, we just want practical information. Teachers are busy folks. We simply want to know: what works? What really helps my students learn?

That question, in fact, implies a wise skepticism. If research shows a teaching strategy works well, we shouldn’t just stop with a study or two.

Instead, we should keep researching and asking more questions.

Does this strategy work with …

… older students as well as younger students?

… history classes as well as music classes as well as sports practice?

… Montessori classrooms, military academies, and public school classrooms?

… this cultural context as well as that cultural context?

And so forth.

In other words, we want to know: what have you got for me lately?

Today’s News

Long-time readers know of my admiration for Dr. Pooja Agarwal.

Her research into retrieval practice has helped clarify and deepen our understanding of this teaching strategy.

Her book, written with classroom teacher Patrice Bain, remains one of my favorites in the field.

And she’s deeply invested in understanding the complexity of translating research into the classroom.

That is: she doesn’t just see if a strategy works in the psychology lab (work that’s certainly important). Instead, she goes the next step to see if that strategy works with the messiness of classrooms and students and schedule changes and school muddle.

So: what has she done for us lately? I’m glad you asked.

Working with two other scholars, Agarwal asked all of those questions I listed above about retrieval practice.

That is: we think that retrieval practice works. But: does it work with different ages, and various subjects, in different countries?

Agarwal and Co. wanted to find out. They went through an exhaustive process to identify retrieval practice research in classrooms, and studied the results. They found:

First: yup, retrieval practice really does help. In 57% of the studies, the Cohen’s d value was 0.50 or greater. That’s an impressively large result for such a simple, low-cost strategy.

Second: yup, it works in different fields. By far the most research is done in science and psychology (19 and 16 studies, respectively), but it works in every discipline where we look — including, say, history or spelling or CPR.

Third: yup, it works at all ages. Most research is done with college students (and, strangely, medical students), but it works in K-12 as well.

Fourth: most retrieval practice research is done with multiple choice. (Yes: a well-designed multiple choice test can be retrieval practice. “Well-designed” = “students have to THINK about the distractors.”)

Fifth: we don’t have enough research to know the optimal gap between retrieval practice and the final test.

Sixth: surprisingly, not enough classroom research focused on FEEDBACK. You’d think that would be an essential component…but Team Agarwal didn’t find enough research here to draw strong conclusions.

Seventh: Of the 50 studies, only 3 were from “non-Western” countries. So, this research gap really stands out.

In brief: if we want to know what really works, we have an increasingly clear answer: retrieval practice works. We had good evidence before; we’ve got better evidence now.

Examples Please

If you’re persuaded that retrieval practice is a good idea, you might want to be sure exactly what it is.

You can always use the “tags” menu on the right; we blog about retrieval practice quite frequently, so you’ve got lots of examples.

But, here’s a handy description (which I first heard in Agarwal and Bain’s book):

When students review, they put information back into their brains. So: “rereading the textbook” = “review,” because students try to redownload the book into their memory systems.

When students use retrieval practice, they take information out of their brains. So, “flashcards” = “retrieval practice,” because students have to remember what that word means.

So:

Reviewing class notes = review.

Outlining the chapter from memory = retrieval practice.

Short answer questions = retrieval practice.

Watching a lecture video = review.

When you strive for retrieval practice, the precise strategy is less important than the cognitive goal. We want students to try to remember before they get the correct answer. That desirable difficulty improves learning.

And, yes, retrieval practice works.

How Can We Help Students Study Better? [Repost]
Andrew Watson

This story might sound familiar:

You attend a Learning and the Brain conference (like, say, our upcoming conference about Teaching During a Pandemic) and come away with FANTASTIC ideas.

You go back to your classrooms — in person, online, asynchronous — and tell your students all about the amazing research you saw. (Perhaps you discuss the importance of retrieval practice, which helps much more than old-fashioned review.)

Your students sound thrilled!

And yet, the very next day they ignore your retrieval practice suggestion, and go right back to rereading their notes. Ugh.

SO FRUSTRATING!

What can we do to help our students study correctly — which is to say: how can we help them learn more, and more effectively?

In a recent article, Mark McDaniel and Gilles Einstein offer a 4-step framework to help change students’ study behavior.

Called KBCP — which stands for “Knowledge, Belief, Commitment, and Planning” — this framework could make a real difference for long-term learning.

The Short Version

In brief:

Knowledge: we should tell students about the study strategy or technique that research has shown to be effective: say, spacing, or generative learning strategies.

Belief: students then undertake an exercise that demonstrates the benefits of this strategy.

Commitment: students get onboard with the idea. They don’t just know and believe; they buy in.

Planning: next, they make a specific and measurable plan to enact their commitment.

As McDaniel and Einstein’s article shows, each of these steps has good research behind it. Their contribution to this field: they bring them all together in a coherent system.

McDaniel and Einstein emphasize that teachers shouldn’t rely on just one or two of these steps. They all work together to help students learn more:

Our central premise is that all four components must and can be explicitly targeted in a training program to maximize self-guided transfer of effective learning strategies.

The problem with the story that began this blog post, in other words, is that it targets only the first of these four steps. To help our students learn, we need to do more and better.

One Example

This article makes for such compelling reading because the authors both explain the research behind each step and offer specific classroom examples to show what they mean.

For instance: the “belief” step encourages teachers to design an exercise that helps students really believe that the technique will work. What would such an exercise look like?

If, for instance, we want to encourage students to “generate explanations” as a memory strategy, what exercise would persuade them that it works?

M&E describe a strategy they’ve often used.

First: have students learn several simple sentences. For instance: “The brave man ran into the house.”

Second: for half of those sentences, encourage students to (silently) generate an explanation: perhaps, “to rescue the kitten from the fire.”

Third: when we test students on those sentences later, they will (almost certainly) remember the second group better than the first. That is: they’ll have reason to believe the strategy works because they experienced it themselves.

McDaniel and Einstein include such examples for each of their four steps.

And Beyond

This article gets my attention for another reason as well. The authors write:

There are many potentially effective ways to actualize the key components of the KBCP framework, and we offer the following as one possible example of a training program.

Frequent readers recognize my mantra here: “don’t just do this thing; instead, think this way.”

In other words, McDaniel and Einstein don’t offer readers a to-do list — a set of instructions to follow. Instead, they provide ideas for teachers to consider, and then to adapt to our own specific teaching context.

KBCP will look different in a 2nd grade classroom than a high-school classroom; different in a gym class than a tuba lesson; different in a Brazilian cultural context than a Finnish one.

Research can offer us broad guidance on the directions to go; it can’t tell us exactly what to do with our own students.

The KBCP framework creates another intriguing possibility.

I recently saw an article saying — basically — that “teaching study skills doesn’t work.”

Its provocative abstract begins:

This paper argues that the widespread approach to enhancing student learning through separate study skills courses is ineffective, and that the term ‘study skills’ itself has misleading implications, which are counterproductive to learning.

The main argument is that learning how to study effectively at university cannot be separated from subject content and the process of learning.

Having seen McDaniel and Einstein’s article, I wonder: perhaps these courses fail not because they can’t work, but because they’re currently being taught incorrectly.

Perhaps if study skills classes followed this KBCP framework, they would in fact accomplish their mission.

M&E acknowledge that their framework hasn’t yet been tested as a coherent whole. To me, at least, it sounds more promising than other approaches I’ve heard.

“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply? [Reposted]
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed only the black-and-white diagram to the left; for the other half, the video showed the colors and dots.

Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students who watched either video learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: if you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

How to Foster New Friendships in School? Seating Plans! (We’ve Got Research…)
Andrew Watson

In schools, we want students to learn many topics: math, and history, and reading, and health, and robotics…

And, especially at the beginning of the year, we’d like them to make friends along the way.

Can we help?

One research team tried a reasonable approach. They wondered if students might form new friendships when they sit next to classmates they don’t yet know well.

Here’s the story:

The Plan

Julia Rohrer and colleagues worked with 182 teachers in 40 schools in Hungary. Their study included 3rd through 8th graders — almost 3000 of them!

In these schools, students sat at “freestanding forward-facing 2-person desks.” (It sounds to me like Little House on the Prairie, but in rural Hungary.) Researchers assigned students to these paired desks randomly.

And, they tracked the friendships that formed.

So: what happened? Did students befriend their deskmates?

The Prediction & the Speculation

Unsurprisingly, we tend — on average — to form friendships with people who are like us. In schools, that means:

boys typically befriend boys, while girls befriend girls;

academic achievers connect with other achievers;

members of racial and ethnic groups often form friendships within those groups. (In this study, researchers kept track of Roma and non-Roma Hungarian identities.)

Researchers predicted that this pattern (called “homophily”) would continue.

And they speculated that the new seating plans might shake things up a bit. That is: perhaps more friendships would form outside of those usual patterns.

The Results

So, what happened with these new seating plans?

First: Randomly seating students next to each other did modestly increase the likelihood of mutual friendships forming: from 15% to 22%.

Second: These new friendships did mostly fit the expected patterns. As homophily suggests, friendships largely formed within gender, achievement, and ethnic groups.

Third: Random seating DID foster new friendships across those divides as well — although to a smaller degree. That is: some girls did form mutual friendships with boys, and so forth.

In brief: researchers wondered if random seating patterns might expand friendship circles — and they did!

The Big Picture

We should, of course, remember that this study is just one study. We’ll need more research to be increasingly certain of these conclusions.

And, honestly, this seating plan didn’t make a huge difference.

At the same time: teachers know that every little bit counts. If we can help students form new friendships — and help them form friendships that might not otherwise have started — that’s a powerful way to start a new school year.

You will, of course, adapt this idea to your own teaching context. As you contemplate your routine at the beginning of a new year, this strategy might be a useful way to open new friendship vistas.
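(If you’d like to experiment with random desk pairs, here is a minimal sketch of one way to generate them. The roster is invented, and this is just an illustration, not the researchers’ actual procedure.)

```python
# Minimal sketch of random desk-pair assignment, inspired by (but not
# taken from) the study's method. The roster below is invented.
import random

roster = ["Ava", "Ben", "Chloe", "Dev", "Elif", "Finn", "Grace", "Hugo"]

random.shuffle(roster)  # put students in random order

# Pair off consecutive students into 2-person desks.
# (With an odd class size, the last student is left over and would
# need a trio or a solo desk.)
desks = [(roster[i], roster[i + 1]) for i in range(0, len(roster) - 1, 2)]

for number, (left, right) in enumerate(desks, start=1):
    print(f"Desk {number}: {left} & {right}")
```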

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored? [Repost]
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words: the LOWER the rating, the less anxiety the quizzes caused.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

Parachutes Don’t Help (Important Asterisk) [Repost]
Andrew Watson

A surprising research finding to start your week: parachutes don’t reduce injury or death.

How do we know?

Researchers asked participants to jump from planes (or helicopters), and then measured their injuries once they got to the ground. (To be thorough, they checked a week later as well.)

Those who wore parachutes and those who did not suffered — on average — the same level of injury.

Being thorough researchers, Robert Yeh and his team report all sorts of variables: the participants’ average acrophobia, their family history of using parachutes, and so forth.

They also kept track of other variables. The average height from which participants jumped: 0.6 meters. (That’s a smidge under 2 feet.) The average velocity of the plane (or helicopter): 0.0 kilometers/hour.

Yes: participants jumped from stationary planes. On the ground. Parked.

Researchers include a helpful photo to illustrate their study:

Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground.

Why Teachers Care

As far as I know, teachers don’t jump out of planes more than other professions. (If you’re jumping from a plane that is more than 0.6 meters off the ground, please do wear a parachute.)

We do, however, rely on research more than many.

Yeh’s study highlights an essential point: before we accept researchers’ advice, we need to know exactly what they did in their research.

Too often, we just look at headlines and apply what we learn. We should — lest we jump without parachutes — keep reading.

Does EXERCISE help students learn?

It probably depends on when they do the exercise. (If the exercise happens during the lesson, it might disrupt learning, not enhance it.)

Does METACOGNITION help students learn?

It probably depends on exactly which metacognitive activity students undertake.

Do PARACHUTES protect us when we jump from planes?

It probably depends on how high the plane is and how fast it’s going when we jump.

In brief: yes, we should listen respectfully to researchers’ classroom guidance. AND, we should ask precise questions about that research before we use it in our classrooms.