classroom advice – Page 8 – Education & Teacher Conferences
Overcoming Potential Perils of Online Learning [Repost]
Andrew Watson

In June of 2019, I wrote about Dr. Rachael Blasiman’s research into the effect of typical distractions on online learning.

Given the current health climate, I thought her work might be especially helpful right now.

The key take-aways here:

First: (unsurprisingly) distractions interfere with online learning, and

Second: (crucially) we can do something about that.

In brief, we should start our online classes by teaching students how to learn online…

Here’s the post from June.


Online learning offers many tempting — almost irresistible — possibilities. Almost anyone can study almost anything from almost anywhere.

What’s not to love?

A tough-minded response to that optimistic question might be:

“Yes, anyone can study anything, but will they learn it?”

More precisely: “will they learn it roughly as well as they do in person?”

If the answer to that question is “no,” then it doesn’t really matter that they undertook all that study.

Rachael Blasiman and her team wanted to know if common at-home distractions interfere with online learning.

So: can I learn online while…

…watching a nature documentary?

…texting a friend?

…folding laundry?

…playing a video game?

…watching The Princess Bride?

Helpful Study, Helpful Answers

To answer this important and practical question, Blasiman’s team first had students watch an online lecture undistracted. They took a test on that lecture, to see how much they typically learn online with undivided attention.

Team Blasiman then had students watch two more online lectures, each one with a distractor present.

Some students had a casual conversation while watching. Others played a simple video game. And, yes, others watched a fencing scene from The Princess Bride.

Did these distractions influence their ability to learn?

On average, these distractions lowered test scores by 25 percentage points.

That is: undistracted students averaged an 87% on post-video quizzes. Distracted students averaged a 62%.

Conversation and The Princess Bride were most distracting (they lowered scores by ~30%). The nature video was least distracting — but still lowered scores by 15%.

In case you’re wondering: men and women were equally muddled by these distractions.

Teaching Implications

In this case, knowledge may well help us win the battle.

Blasiman & Co. sensibly recommend that teachers share this study with their students, to emphasize the importance of working in a distraction-free environment.

And, they encourage students to make concrete plans to create — and to work in — those environments.

(This post, on “implementation intentions,” offers highly effective ways to encourage students to do so.)

I also think it’s helpful to think about this study in reverse. The BAD news is that distractions clearly hinder learning.

The GOOD news: in a distraction-free environment, students can indeed start to learn a good deal of information.

(Researchers didn’t measure how much they remembered a week or a month later, so we don’t know for sure. But: we’ve got confidence they had some initial success in encoding information.)

In other words: online classes might not be a panacea. But, under the right conditions, they might indeed benefit students who would not otherwise have an opportunity to learn.


I’ve just learned that both of Dr. Blasiman’s co-authors on this study were undergraduates at the time they did the work. That’s quite unusual in research world, and very admirable! [6-11-19]

Does Teaching HANDWRITING Help Students READ?
Andrew Watson

I recently saw a newspaper headline suggesting that teaching students HANDWRITING ultimately improves their READING ability.

As an English teacher, I was intrigued by that claim.

As a skeptic, I was … well … skeptical.

In this case, we have two good reasons to be skeptical. First, we should always be skeptical. Second, claims of transfer rarely hold up.

What is “transfer”?

Well, if you teach me calculus, then it’s likely I’ll get better at calculus. If you teach me to play the violin, it’s likely I’ll get better at playing the violin. But: if you teach me to play the violin, it’s NOT likely that this skill will transfer to another skill — like calculus. (And, no: music training in youth doesn’t reliably improve math ability later in life.)

In fact, most claims of transfer — “teaching you X makes you better at distantly-related-thing A” — end up being untrue.

So, is it true — as this newspaper headline implied — that handwriting skills transfer to reading skills?

The Research

This newspaper article pointed to research by Dr. Anabela Malpique, working in Western Australia.

Her research team worked with 154 six- and seven-year-olds around Perth. They measured all sorts of variables, including…

…the students’ handwriting automaticity (how well can they write individual letters),

…their reading skills (how accurately they read individual words),

…the amount of time the teachers reported spending in reading/writing instruction.

And, they measured handwriting automaticity and reading skills at the beginning and end of the year. For that reason, they could look for relationships among their variables over time. (As you can see, Malpique’s research focuses on many topics — not just the writing/reading question that I’m discussing in this post.)

Tentative Conclusions

To their surprise, Malpique’s team found that more fluent letter formation at the beginning of the year predicted more fluent word reading at the end of the year. In their words, this finding

suggest[s] that being able to write letters quickly and effortlessly in kindergarten facilitates pre-reading and decoding skills one year later.

In other words: this research allows the possibility that teaching writing does ultimately help students read single words.

However — and this is a big however — the researchers’ methodology does NOT allow for causal conclusions. They see a mathematical “relationship” between two things, but don’t say that the writing ability led to later reading ability.

They warn:

Experimental research is needed to confirm these findings[,] and systematically evaluate potential explanatory mechanism[s] of writing-to-reading effects over time in the early years.

They specifically note that they did NOT measure reading comprehension; they measured single word reading.

To put this in other words: we would like to know if

a) teaching letter writing leads to

b) improved letter writing fluency, which leads to

c) improved single word reading, which leads to

d) improved reading comprehension.

These findings make the b) to c) connection more plausible, but they certainly do not “prove” that a) leads to d).

Classroom Implications

This research doesn’t claim we should make big changes right away.

I do think it leads to this conclusion:

Some schools are replacing books with computers and tablets. I can imagine (although I haven’t heard this) that advocates might make this claim:

“In the future, no one will need to write by hand. Everything will be keyboarding, and so we need to get children typing as soon as possible. Let’s replace handwriting instruction with keyboarding instruction, to prepare our kids for the future!”

If we hear that argument, we can say:

“I have LOTS of objections to that logical chain. In particular, we have tentative reasons to believe that handwriting instruction improves reading. If that’s true — and we don’t yet know — we should be VERY wary of doing anything that slows our students’ ability to read. We might not be handwriting so much in the future, but we’ll be reading forever.”

In sum: I don’t think that newspaper article captured essential nuances. However, this research raises the intriguing possibility that transfer just might take place from writing instruction to single-word reading. We need more research to know with greater certainty.

But, given the importance of reading for school and life, we should be excited to find anything that can help students do better.

The Big Six: A Grand Summary
Andrew Watson

Much of the time, this blog digs into a specific example of a specific teaching practice.

Within the last two weeks, I’ve written about spacing and interleaving in math instruction, a “big challenging book” strategy for struggling readers, and the potential benefits of cold calling.

At times, however, it’s helpful to zoom the camera back and look at THE BIG PICTURE.

What does cognitive science tell us about learning?

Today’s Grand Summary

Regular readers know that The Learning Scientists do a GREAT job explaining…well…the science of learning.

In particular, they focus on “six strategies of effective learning”:

Spacing

Interleaving

Retrieval Practice

Concrete Examples

Elaboration

Dual Coding

In a recent post, Dr. Megan Sumeracki does a typically helpful job giving a thoughtful overview of those strategies. Rather than summarize her summary, I’m encouraging you to give her post a quick read. It will help put the pieces together for you.

Wise Caveats

Sumeracki introduces her summary with this helpful note:

Before digging into the specifics of each strategy, it is important to note that they are very flexible. This is a good thing, in that it means they can be used in a lot of different situations.

However, this also means that there really isn’t a specific prescription we can provide that will “always work.”

Instead, understanding the strategies and how they work can help instructors and students. [Emphasis added.]

In other words — as you often read on this blog — “don’t just do this thing; instead, think this way.”

Cognitive science really cannot provide a script for teachers to read verbatim. Instead, it offers principles that we must adapt to our own specific classrooms and students.

So, if you increase spacing and retrieval practice, your students will — almost certainly — remember more over the long term. But: exactly how to do that will differ from classroom to classroom, grade to grade, culture to culture.

In other words: teachers should draw on scientific understanding of minds and brains to shape our work. But: teaching itself isn’t a science. It’s a craft, a passion, a profession.

Cold Calling and Bad Pizza
Andrew Watson

When I was in grad school, a well-known professor announced that — given everything we know about the effects of stress — it is professional malpractice to “cold call” on students. (To “cold call” means to call on a student who hasn’t raised her hand.)

Imagine the cascade of bad results.

When cold-called, the student feels stress. Cortisol levels go up. Excess cortisol interferes with learning. In fact, long-term excess cortisol damages the hippocampus. (You can check out this video here.)

My professor’s claim struck me as shocking, because Doug Lemov argues so strongly for cold calling in his much admired Teach Like a Champion:

“If I was working with a group of teachers and had to help them make the greatest possible improvements in the rigor, ratio, and level of expectations in their classroom with one technique, the technique I’d choose might well be cold call.”

That is: if we want students themselves to be doing cognitive work — a.k.a. “active learning” — Lemov thinks cold calling is the way to go. It serves four key functions:

First, it lets the teacher check students’ understanding,

Second, it creates a culture of “engaged accountability,”

Third, it helps the teacher speed up or slow down the pace, and

Fourth, it supplements other teaching strategies, like “turn and talk.”

Little wonder Lemov champions it so heartily.

Breaking the Tie?

We’ve got an expert in the neurobiology of stress saying cold calling is professional malpractice. We’ve got an expert in classroom teaching saying that cold calling is professional best practice.

How do we decide?

On this blog, we try always to find relevant research. In this case, the best study I can find was undertaken by Dallimore, Hertenstein, and Platt.

Team Dallimore — aware of both sides of this debate — looked at 16 sections of a college accounting course, including well over 600 students.

They kept track of the professors’ discussion techniques: in particular, did they cold call or not?

And, they followed a number of variables: in particular, how much did students voluntarily participate? And, how comfortable were the students in class discussion? (In other words: what happened to those cortisol levels my professor worried about?)

If the answers to those questions show a clear pattern, that might help us decide to follow my prof’s guidance, or Lemov’s.

The Envelope Please

In brief: cold calling produced good thinking results, and lowered (apparent) stress levels.

That is: in classes with infrequent cold calling, students’ voluntary participation remained the same throughout the term. In classes with high cold calling, their voluntary participation rose from 68% to 86%.

Dallimore’s team saw the same results with the number of questions students volunteered to answer. That number remained flat in the low cold calling classes, and rose in the high cold calling classes.

And, how about stress?

When students reported their comfort level with class discussion, that level remained constant in low cold calling sections. Comfort levels rose in high cold calling sections.

So: when teachers cold called, their students voluntarily participated more, and they felt more comfortable in class.

Always with the Limitations

Dallimore’s study — combined with Lemov’s insight, guidance, and wisdom — suggests that cold calling really can benefit students.

However, any good teaching technique can be used badly. If it’s possible to make a bad pizza, it’s possible to make a bad version of any great thing.

So, if we’ve got students who have experienced ongoing trauma, we should make reasonable accommodations. If a student has an IEP that warns against cold calling, we should — of course! — heed that warning.

Also, I should acknowledge the limitations of this research. The study I’ve described was published in 2012, and it’s the most recent one I have located. Simply put: we don’t have much research on the topic.

And: research done with accounting students — most of whom are college sophomores — might not apply to your students.

Of course, Lemov works mostly with K-12 students, especially those who attend schools that have relatively high poverty rates. In other words: Dallimore’s research + Lemov’s research shows a wide range of effectiveness for this technique.

In sum: I’m sure teachers can use cold calling techniques badly — resulting in raised stress and reduced learning. But, done well, this technique offers real benefits.

If we create a respectful, supportive, and challenging classroom climate — including cold call — students can learn splendidly. This video shows the technique in action.

Are “Retrieval Practice” and “Spacing” Equally Important? [Updated]
Andrew Watson

If you follow research in the world of long-term memory, you know you’ve got SO MANY GOOD STRATEGIES.

Agarwal and Bain’s Powerful Teaching, for instance, offers a delicious menu: spacing, interleaving, retrieval practice, metacognition.

Inquiring minds want to know: how do we best choose among those options? Should we do them all? Should we rely mostly on one, and then add in dashes of the other three? What’s the ideal combination?

One Important Answer

Dr. Keith Lyle and his research team wanted to know: which strategy has greater long-term impact in teaching college math: retrieval practice or spacing?

That is: in the long term, do students benefit from more retrieval? From greater spacing? From both?

To answer this really important question, they carefully designed weekly quizzes in a college precalculus class. Some topics, at “baseline,” were tested with three questions at the end of the week. That’s a little retrieval practice, and a few days of spacing.

Some topics were tested with six quiz questions at the end of the week. That’s MORE retrieval practice, but the same baseline amount of spacing.

Some topics were tested with three quiz questions spread out over the semester. That’s baseline retrieval practice, but MUCH GREATER spacing.

And, some topics were tested with six quiz questions spread out over the semester. That’s extra retrieval AND extra spacing.

They then measured: how did these precalculus students do when tested on those topics on the final exam? And — hold on to your hats — how did they do when tested a month later, when they started taking the follow-up class on calculus?

Intriguing Answers…

Lyle and Co. found that — on the precalculus final exam…

…extra retrieval practice helped (about 4 percentage points), and

…extra spacing helped (about 4 percentage points), and

…combining extra retrieval with extra spacing helped more (about 8 percentage points).

So, in the relatively short term, both strategies enhance learning. And, they complement each other.

What about the relatively longer term? That is, what happened a month later, on the pre-test for the calculus class? In that case…

…extra retrieval practice didn’t matter,

…extra spacing helped (about 4 percentage points), and

…combining extra retrieval with extra spacing produced no extra benefit (still about 4 percentage points).**

For enduring learning, then, extra spacing helped, but extra retrieval practice didn’t.

…Important Considerations

First: as the researchers note, it’s important to stress that this research comes from the field of math instruction. Math — more than most disciplines — already has retrieval practice built into it.

That is: when I do math homework, every problem I solve requires me (to some degree) to recall the math task at hand. (And, probably, lots of other relevant math info as well.)

But, when I do my English homework, the paper I’m writing about Macbeth might not remind me about Grapes of Wrath. Or, when I do my History homework, the time I spend studying Aztec civilization doesn’t necessarily require me to recall facts or concepts from the Silk Road unit. (It might, but might not.)

So, this study shows that extra retrieval practice didn’t help over and above the considerable retrieval practice the math students were already doing.

Second: notice that the “spacing” in this case was a special kind of spacing. It was, in fact, spacing of retrieval practice. Of course, that counts as spacing.

But, we have lots of other ways to space as well. For instance, Dr. Rachael Blasiman tested spacing by taking time in lectures to revisit earlier concepts. That strategy did create spacing, but didn’t include retrieval practice.

So, this research doesn’t necessarily apply to other kinds of spacing. It might, but we don’t yet know.

Practical Classroom Applications

Lyle & Co.’s study gives us three helpful classroom reminders.

First: as long as we’ve done enough retrieval practice to establish ideas (as math homework does almost automatically), we can redouble our energies to focus on spacing.

Second: Lyle mentions in passing that students do (very slightly) worse on quizzes that include spacing — because spacing is harder. (Regular readers know, we call this “desirable difficulty.”)

This reminder gives us an extra reason to be sure that quizzes with spacing are low-stakes or no-stakes. We don’t want to penalize students for participating in learning strategies that benefit them.

Third: In my own view, we can ask/expect our students to join us in retrieval practice strategies. Once they reach a certain age or grade, they should be able to make flashcards, or use Quizlet, or test one another.

However, I think spacing requires a different perspective on the full scope of a course. That is: it requires a teacher’s perspective. We have the long view, and see how all the pieces best fit together.

For those reasons, I think we can (and should) ask students to do retrieval practice (in addition to the retrieval practice we create). But, we ourselves should take responsibility for spacing. We — much more than they — have the big picture in mind. We should take that task off their to-do list, and keep it squarely on ours.


** This post has been revised on 3/7/20. The initial version did not include the total improvement created by retrieval practice and spacing one month after the final exam.

Where Should Students Study?
Andrew Watson

We’ve got lots of advice for the students in our lives:

How to study: retrieval practice

When to study: spacing effect

Why study: so many answers

Where to study: …um, hold please, your call is very important to us…

As can happen, research provides counter-intuitive — and sometimes contradictory — answers to that last question.

I grew up hearing the confident proclamation that we should create a perfect study environment in one place, and always study there. (The word “library” was spoken in reverent tones.)

As I think about the research I’ve seen in the last ten years, my own recommendations to students have been evolving.

Classic Beginnings

In a deservedly famous study, Smith, Glenberg and Bjork (1978) tried to measure the effect of environment on memory.

They found that, in the short run, I associate the words that I learn in this room with the room itself. That is: if I learn words in room 27, I’ll do better on a test of those words in room 27 than in room 52.

One way to interpret those findings is that we should teach in the place where students will be tested.

If the final exam, inevitably, is in the gym, I should teach my students in the gym. And they should study in the gym. This approach ensures that they’ll associate their new knowledge with the place they have to demonstrate that knowledge.

In this theory, students should learn and study in the place they’ll ultimately be tested.

Priority Fix #1

This interpretation of Smith’s work makes sense if — and only if — the goal of learning is to do well on tests.

Of course, that’s not my goal. I don’t want my students to think carefully about literature for the test; I want them to think carefully about literature for life.

I want them to have excellent writing skills now, and whenever in the future they need to write effectively and clearly.

We might reasonably worry that a strong association between the room and the content would limit transfer. That is: if I connect the material I’ve learned so strongly with room 27, or the gym, I might struggle to remember or use it anywhere else.

Smith worried about that too. And, sure enough, when he tested that hypothesis, his research supported it.

In other words, he found that students who study material in different locations can use it more flexibly elsewhere. Students who study material in only one location can’t transfer their learning so easily. (By the way: Smith’s research has been replicated. You can read about this in Benedict Carey’s How We Learn. Check out chapter 3.)

This finding leads to a wholly different piece of advice. Don’t do what my teachers told me to do when I was a student. Instead, study material in as many different places as reasonably possible. That breadth of study will spread learning associations as widely as possible, and benefit transfer.

That’s what I’ve been telling students for the last several years.

Voila. Generations of teaching advice overturned by research!

Priority Fix #2

Frequent readers have heard me say: “Researchers work by isolating variables. Schools work by combining variables.”

The longer I do this work, the more I think that this “where to study” advice makes sense only if I focus exclusively on that one variable.

If I start adding in other variables, well, maybe not so much.

True enough, research shows that I’ll remember a topic better if I study it in different places … as long as all other variables are held constant. But, in life, other variables aren’t constant.

Specifically, some study locations are noisier than others. Starbucks is louder than the library: it just is. And, some locations are visually busier than others.

And, as you would expect, noise — such as music — distracts from learning. So, too, do visually busy environments.

So, a more honest set of guidelines for students goes like this:

You should review material in different places. But, you want each of those places to be quiet. And, you don’t want them to have much by way of visual distraction.

You know what that sounds like to me? The library.

I suppose it’s possible for students to come up with several different study locations that are equally quiet and visually bland. Speaking as a high school teacher, I think it’s unlikely they’ll actually do that.

So, unless they’ve got the bandwidth to manage all those demands even before they sit down to study, I think the traditional advice (“library!”) is as good as anything.

Final Thoughts

People occasionally ask me where I am in the “traditional vs. progressive” education debate.

The honest answer is: I’m indifferent to it. I (try to) focus on practical interpretations of pertinent psychology and neuroscience research.

If that research leads to a seemingly innovative suggestion (“study in many locations!”), that’s fine. If it leads to a traditional position (“library”), that’s equally fine.

I think that, for the most part, having teams in education (prog vs. trad) doesn’t help. If we measure results as best we can, and think humbly and open-mindedly about the teaching implications, we’ll serve our students best.

“How We Learn”: Wise Teaching Guidance from a Really Brainy Guy
Andrew Watson

Imagine that you ask a neuro-expert: “What’s the most important brain information for teachers to know?”

The answer you get will depend on the expertise of the person you ask.

If you ask Stanislas Dehaene, well, you’ll get LOTS of answers — because he has so many areas of brain expertise.

He is, for example, a professor of experimental cognitive psychology at the Collège de France; and Director of the NeuroSpin Center, where they’re building the largest MRI gizmo in the world. (Yup, you read that right. IN THE WORLD.)

He has in fact written several books on neuroscience: neuroscience and reading, neuroscience and math, even neuroscience and human consciousness.

He’s also President of a newly established council to ensure that teacher education in all of France has scientific backing: the Scientific Council for Education. (If the United States had such a committee, we could expunge Learning Styles myths from teacher training overnight.)

If that’s not enough, Dehaene is interested in artificial intelligence. And statistics. And evolution.

So, when he writes a book called How We Learn: Why Brains Learn Better than Any Machine…for Now, you know you’re going to get all sorts of wise advice.

Practical Teaching Advice

Dehaene wants teachers to think about “four pillars” central to the learning process.

Pillar 1: Attention

Pillar 2: Active engagement

Pillar 3: Error feedback

Pillar 4: Consolidation

As you can see, this blueprint offers practical and flexible guidance for our work. If we know how to help students pay attention (#1), how to help them engage substantively with the ideas under discussion (#2), how to offer the right kind of feedback at the right time (#3), and how to shape practice that fosters consolidation (#4), we’ll have masterful classrooms indeed.

Learning, of course, begins with Attention: we can’t learn about things we don’t pay attention to. Following Michael Posner’s framework, Dehaene sees attention not as one cognitive process, but as a combination of three distinct cognitive processes.

Helpfully, he simplifies these processes into three intuitive steps. Students have to know:

when to pay attention

what to pay attention to, and

how to pay attention.

Once teachers start thinking about attention this way, we can see all sorts of new possibilities for our craft. Happily, he has suggestions.

Like other writers, Dehaene wants teachers to focus on active engagement (pillar #2). More than other writers, he emphasizes that “active” doesn’t necessarily mean moving. In other words, active engagement requires not physical engagement but cognitive engagement.

This misunderstanding has led to many needlessly chaotic classroom strategies, all in the name of “active learning.” So, Dehaene’s emphasis here is particularly helpful and important.

What’s the best way to create cognitive (not physical) engagement?

“There is no single miraculous method, but rather a whole range of approaches that force students to think for themselves, such as: practical activities, discussions in which everyone takes part, small group work, or teachers who interrupt their class to ask a difficult question.”

Error Feedback (pillar #3) and Consolidation (#4) both get equally measured and helpful chapters. As with the first two, Dehaene works to dispel myths that have muddled our approaches to teaching, and to offer practical suggestions to guide our classroom practice.

Underneath the “Four Pillars”

These four groups of suggestions all rest on a sophisticated understanding of what used to be called the “nature/nurture” debate.

Dehaene digs deeply into both sides of the question to help teachers understand both the brain’s adaptability (“nurture”) and the limits of that adaptability (“nature”).

To take but one example: research with babies makes it quite clear that brains are not “blank slates.” We come with pre-wired modules for processing language, numbers, faces, and all sorts of other things.

One example in particular surprised me: probability. Imagine that you put ten red marbles and ten green marbles in a bag. As you start drawing marbles back out of that bag, a 6-month-old will be surprised — and increasingly surprised — if you draw out green marble after green marble after green marble.

That is: the baby understands probability. They know it’s increasingly likely you’ll draw a red marble, and increasingly surprising that you don’t. Don’t believe me? Check out chapter 3: “Babies’ Invisible Knowledge.”

Of course, Dehaene has fascinating stories to tell about the brain’s plasticity as well. He describes several experiments — unknown to me — where traumatized rats were reconditioned to prefer the room where the traumatizing shock initially took place.

He also tells the amazing story of “neuronal recycling.” That is: the neural real-estate we train to read initially housed other (evolutionarily essential) cognitive functions.

Human Brains and Machine Learning

Dehaene opens his book by contemplating definitions of learning — and by contrasting humans and machines in their ability to do so.

By one set of measures, computers have us beat.

For instance, one computer was programmed with the rules of the game Go, and then trained to play against itself. In three hours, it became better at the game than the human Go champion. And, it got better from there.

However, Dehaene still thinks humans are the better learners. Unlike humans, machines can’t generalize their learning. In other words: that Go computer can’t play any other games. In fact, if you changed the size of the Go board even slightly, it would be utterly stumped.

And, unlike humans, it can’t explain its learning to anyone else.

And, humans need relatively little data to start learning. Machines do better than us when they can crank through millions of calculations. But, when they calculate as slowly as we do, they don’t learn nearly as much as we do.

As his subtitle reassures us, brains learn better than any machine. (And, based on my conversation with him, it’s clear that “…for now” means “for the long foreseeable future.”)

Final Thoughts

At this point, you see what I mean when I wrote that Dehaene has an impressive list of brain interests, and therefore offers an impressive catalog of brain guidance.

You might, however, wonder if this much technical information ends up being a little dry.

The answer is: absolutely not.

Dehaene’s fascination with all things brain is indeed palpable in this book. And, his library of amazing studies and compelling anecdotes keeps the book fresh and easy to read. I simply lost track of the number of times I wrote “WOW” in the margin.

This has been a great year for brain books. Whether you’re new to the field, or looking to deepen your understanding, I recommend How We Learn enthusiastically.


An Unexpected Strategy to Manage Student Stress
Andrew Watson

School includes lots of stress. And, sometimes that stress interferes with academic life.

It might make it harder for students to encode new information. It might make it harder for them to show what they know — on tests, for example.

So, how can we help students manage their stress?

We’ve got some research suggesting that mindfulness helps. Can we do anything else?

Rethinking Our First Instinct

Imagine that a student comes to me and says, “Whoa! I’m really stressed out about this test…”

My gut instinct might be to say something reassuring: “No worries — you totally got this. Just stay calm and I’m sure you’ll do fine.”

This instinct, however, has a built-in problem. An anxious student experiences well-known physiological symptoms: a racing heart, sweaty palms, dry mouth, etc.

My student might try to persuade himself that he’s calm. But, all that physiological evidence reminds him — second by second — that he really isn’t calm.

Researcher Alison Wood Brooks wondered: could she encourage students to reinterpret those same physiological signs within a positive emotional framework?

Rather than encouraging a student to “be calm,” Brooks thought she might encourage him to “get excited.” After all, the bodily signs of excitement are a lot like those of stress. And, whereas stress feels mostly negative, excitement is (obviously) positive.

Testing (and Retesting) the Hypothesis

Brooks tested out this hypothesis in an impressive variety of stressful situations.

She started by having participants sing in a karaoke contest. One group prepped by saying “I am anxious.” A second group said “I am excited.” A third didn’t say either of those things.

Sure enough, the “excited” group sang their karaoke song considerably more accurately (81%) than their “anxious” peers (53%).

She then tried the ultimate in stress-inducing situations: public speaking.

Half of the speakers prepped by declaring themselves “calm” (which was my go-to suggestion above). The other half declared themselves “excited.”

As Brooks expected, independent judges rated the “excited” speakers superior to the “calm” speakers in persuasiveness, competence, and confidence.

One more approach may be most interesting to classroom teachers: a math test.

When getting ready for a “very difficult” test including eight math questions, students were told either to “try to remain calm” or to “try to get excited.”

You know how this story ends.

The students instructed to “get excited” scored, on average, about half a point higher than their “calm” peers.

Every way that Brooks could think to measure the question, the advice to “get excited” proved more beneficial than the traditional advice to “remain calm.”

Not Persuaded Yet?

Perhaps this video, which neatly recaps Brooks’s study, will persuade you. Check out the handy graphic at 1:30.

https://www.youtube.com/watch?v=1rRgElTeIqE

Balancing Direct Instruction with Project-Based Pedagogies
Andrew Watson

A month ago, I wrote about a Tom Sherrington essay proposing a truce between partisans of direct instruction and those of project-based learning (and other “constructivist pedagogies”).

In brief, Sherrington argues that both pedagogical approaches have their appropriate time in the learning process.

EARLY in schema formation, direct instruction helps promote learning for novices.

LATER in schema formation, project-based pedagogies can apply, enrich, and connect concepts for experts.

Today’s Update

At the time I wrote about Sherrington’s essay, it was available in a book on Education Myths, edited by Craig Barton.

I do recommend that book: several of its essays offer important insights. (See this post on Clare Sealy’s distinction between autobiographical and semantic memory.)

If you’d like to read Sherrington’s essay right away, I have good news: he has published it on his website.

Happily, his contribution to the debate is now more broadly available.

A Final Note

Like other thinkers in this field, Sherrington proposes the novice/expert divide as the most important framework for deciding when to adopt each pedagogical model.

In my own thinking, I’m increasingly interested in understanding and defining the transition points from one to the other.

That is: how can we tell when our novices have become experts?

What are the signs and symptoms of expertise? How can we describe those signs and symptoms so that 3rd grade teachers and 7th grade teachers can make sense of them?

Or, science teachers and history teachers?

Or, soccer coaches as well as dance instructors?

In other words: I agree with Sherrington’s framework, but I think it’s incomplete without clearer guidance about the novice/expert continuum.

Concrete + Abstract = Math Learning
Andrew Watson

Early math instruction includes daunting complexities.

We need our students to understand several sophisticated concepts. And, we need them to learn a symbolic language with which to represent those concepts.

Take, for example, the concept of equivalence. As adults, you and I can readily solve this problem: 3 + 4 = 4 + __

Early math learners, however, can easily stumble. Often, they take the equals sign to mean “add up all the numbers,” and believe the correct answer to that question is “11.”
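
For readers who like seeing the arithmetic spelled out, here is a minimal sketch (my own illustration, not from the study — the function names are invented) contrasting the “add up all the numbers” misreading with the correct equivalence reading of 3 + 4 = 4 + __:

```python
def add_all_reading(left, right_known):
    """A learner who reads "=" as "add up all the numbers"
    simply sums every number in sight."""
    return sum(left) + sum(right_known)

def equivalence_reading(left, right_known):
    """The correct reading: the blank must hold whatever
    value makes the two sides of the equation balance."""
    return sum(left) - sum(right_known)

# For the problem 3 + 4 = 4 + __ :
print(add_all_reading([3, 4], [4]))      # 11 (the misconception)
print(equivalence_reading([3, 4], [4]))  # 3 (the balanced answer)
```

The point of the sketch is simply that the two readings produce different answers, so the learner’s mistake is a coherent rule — just the wrong one.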

How can we help them through this stage of understanding?

Strategy #1: Switch from Abstract to Concrete

The first answer to the question seems quite straightforward. If the abstract, symbolic language of math (“3+4=___”) confuses students, let’s switch to a more concrete language.

For instance: “If my frog puppet has three oranges, and your monkey puppet has four oranges, how many oranges do they have together?”

It just seems logical: the switch from abstract to concrete ought to help.

Alas, those concrete examples have a hidden downside.

As Dan Willingham argues in Why Don’t Students Like School?, humans naturally focus on surface features of learning.

When children see monkeys and frogs and oranges, they associate the lesson with those specific entities, not with the underlying mathematical properties we want them to learn.

In edu-lingo, concrete examples can inhibit transfer. Students struggle to transfer a lesson about oranges and puppets to anything else.

Strategy #2: “Fade” from Concrete to Abstract

Taking their cue from Jerome Bruner, psychology researchers wondered if they could start with concrete examples and then, over time, switch to more abstract examples.

For instance, students might start learning about mathematical equivalence by using a balance. When they put an equal number of tokens on both sides, the balance is level.

In the second step, they do practice problems with pictures of a balance and tokens.

And, in the final step, they see abstract representations: 2 + 5 = 5 + __.

They describe this technique as “concreteness fading.”

And, sure enough, it worked. In this case, “worked” meant that students who learned equivalence through a concreteness fading method transferred their knowledge to different (and more difficult) problems.

They did so better than students who learned in a purely abstract way. And, better than students who learned in a purely concrete way. (And even, as a control condition, better than students who started with an abstract representation, and then switched to concrete.)

By the way: these researchers tested their hypothesis both with students who had a relatively low level of knowledge in this area, and those who had a high level of knowledge. They got (basically) the same results both times.

An Essential Detail

When we teachers try to incorporate psychology research into our teaching, we can sometimes find that it conflicts with actual experience.

In this case, we might find that our young math learners just “get it” faster when we use frog puppets. Given that experience, we might hesitate to fade over to abstract teaching.

This research shows an intriguing pattern.

Sure enough, students who began with concrete examples made fewer mistakes on early practice problems. And, that finding was true for both the “concrete only” group and the “concreteness fading” groups.

In other words, the “abstract only” group did worse on the early practice problems than did those groups.

But…and this is a CRUCIAL “but”…the “concrete only” group didn’t do very well on the transfer test. Their raw scores were the lowest of the bunch.

However, the “concreteness fading” group did well on the early problems AND on the transfer test.

It seems that, as the researchers feared, too much concrete instruction reduced transfer.

In sum: “concreteness fading” gives young math learners both a helpfully clear introduction to math concepts and the abstract understanding that allows transfer.


Fyfe, E. R., McNeil, N. M., & Borjas, S. (2015). Benefits of “concreteness fading” for children’s mathematics understanding. Learning and Instruction, 35, 104-120.