Can Quiet Cognitive Breaks Help You Learn?
Andrew Watson

We write a lot on the blog about “desirable difficulties” (for example, here and here). Extra cognitive work during early learning makes memories more robust.


Retrieval practice takes more brain power than simple review — that is, it’s harder. But, it helps students remember much more.

Wouldn’t it be great if some easy things helped too?

How about: doing nothing at all?

Cognitive Breaks: The Theory

When a memory begins to form, several thousand neurons start connecting to one another. The synapses linking them get stronger.

Everything we do to help strengthen those synapses, by definition, helps us remember.

We know that sleep really helps in this process. In fact, researchers can see various brain regions working together during sleep. It seems that they’re “rehearsing” those memories.

If sleep allows the brain to rehearse, then perhaps a short cognitive break would produce the same result.

Cognitive Breaks: The Research

Michaela Dewar and colleagues have been looking into this question.

They had study participants listen to two stories. After one story, participants had to do a distracting mental task. (They compared pictures for subtle differences.)

After the other, they “rest[ed] quietly with their eyes closed in the darkened testing room for ten minutes.”

Sure enough, a week later, the quiet rest led to better memory. As a rough calculation, participants remembered about 10% more than without the quiet rest.

10% more learning with essentially 0% extra cognitive effort: that’s an impressive accomplishment!

Classroom Questions

A finding like this raises LOTS of practical questions.

Dewar’s study didn’t focus on K-12 learners. (In fact, in this study, the average age was over 70.) Do these findings apply to our students?

Does this technique work for information other than stories? For instance: mathematical procedures? Dance steps? Vocabulary definitions?

Does this finding explain the benefits of mindfulness? That is: perhaps students can get these memory benefits without specific mindfulness techniques. (To be clear: some mindfulness researchers claim benefits above and beyond memory formation.)

Can this finding work as a classroom technique? Can we really stop in the middle of class, turn out the lights, tell students to “rest quietly for 10 minutes,” and have them remember more?

Would they instead remember more if we tried a fun fill-in-the-blank review exercise?

I’ll be looking into this research pool, and getting back to you with the answers I find.

Cognitive Breaks: The Neuroscience

If you’d like to understand the brain details of this research even further, check out the video at this website. (Scroll down just a bit.) [Edit 11/4/19: This link no longer works; alas, I can’t find the video.]

The researchers explain a lot of science very quickly, so you’ll want to get settled before you watch. But: it covers this exact question with precision and clarity.

(By the way: you’ll hear the researchers talk about “consolidation.” That’s the process of a memory getting stronger.)

If you do watch the video, you might consider resting quietly after you do. No need to strain yourself: just let your mind wander…

hat tip: Michael Wirtz

T/F: Timed Tests Cause Math Anxiety?
Andrew Watson

Questions about math and anxiety have been on the uptick recently.

Over at Filling the Pail, Greg Ashman offers his typically direct analysis. You might disagree with his opinion, but he’s always worth a mental debate.

By the way, a casual aside in his post deserves attention of its own. Here’s how Ashman frames his tests: “I’m just checking in to see how well I’ve taught you.”

That simple sentence accomplishes many useful goals — it’s one I might use myself. It’s hard to imagine an easier way to reduce test stress…

How to Stop Cheating: An Awkward Debate
Andrew Watson

We would, of course, LOVE to prevent cheating.

It does moral damage to the cheater. It undermines classroom trust. And: it makes it hard for us to know how much our students are actually learning.

So: what techniques might help us do so?

How To Prevent Cheating: “Moral Reminders”

For some time now, Dan Ariely has made this his field. (Check out his book: The (Honest) Truth about Dishonesty: How We Lie to Everyone — Especially Ourselves.)

Over the years, he developed a clever research paradigm to see how much people cheat. With that in place, he tested various strategies to prevent cheating.

(He can also promote cheating, but that’s not exactly what we’re looking for.)

One strategy that has gotten a lot of attention over the years: moral reminders.

Ariely asked some students to write down ten books they had read in high school. He asked the others to write down the Ten Commandments.

That is: he made them think about foundational moral standards in our culture.

Sure enough, once reminded about moral standards, students cheated less. (The Cohen’s d was 0.48, which is an impressive effect for such an easy intervention.)
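(If you're curious how an effect size like that is calculated, here's a minimal sketch in Python. The numbers below are invented for illustration, not Ariely's actual data: Cohen's d is simply the difference between the two group means divided by their pooled standard deviation.)

import math

def cohens_d(group_a, group_b):
    # Standardized mean difference: (mean_a - mean_b) / pooled standard deviation.
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical cheating scores (answers claimed correct), NOT the study's data:
books_group = [6, 7, 5, 8, 6, 7]         # wrote down ten books first
commandments_group = [5, 7, 6, 7, 5, 6]  # wrote down the Ten Commandments first
print(round(cohens_d(books_group, commandments_group), 2))  # prints 0.51

A d of about 0.5, like Ariely's 0.48, simply means the two groups' averages differ by roughly half a standard deviation.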

Then Again, Maybe Not

In a study published just a month ago, Bruno Verschuere (and many others) retested Ariely’s hypothesis. Whereas the original study included 209 students, this meta-analysis included almost 4700. That is … [checks math] … more than 20 times as many students.

Studying much more data, they found that “moral reminders” made no difference.

(In fact, they found that students who recalled the Ten Commandments were just a smidge likelier to cheat; but, the difference was tiny — not even approaching statistical significance.)

As we’ve seen in other cases of the “replication crisis,” seemingly settled results are back in question.

What’s a Teacher to Do?

Of course, Ariely had other suggestions as well. Signing pledges not to cheat reduces cheating. And teachers who supervise students closely reduce their opportunities to cheat.

As far as I know, these strategies have not been retested (although the second one seems too obvious to need much retesting).

For the time being, sadly, we should rely less on indirect moral reminders, and more on direct pledges — and direct supervision.

Using and Misusing Averages: The Benefits of Music?
Andrew Watson

The “10 Minute Rule” tells us that people can’t pay attention to something for longer than ten minutes.

As teachers, therefore, we shouldn’t do any one thing for longer than ten minutes. We need to mix it up a bit.

There’s an obvious problem here. The “rule” assumes that all people think alike — that one number is correct for all students in all situations.

That’s a bizarre assumption. It’s also wildly untrue.

(In fact, the “rule” itself has a weird history.)

The Bigger Picture: When teachers convert averages into absolutes — like, say, the 10 minute rule — we’re likely to miss out on the distinct needs of our particular students.

Today’s Example

Should students listen to music when they study or read?

If we go by averages, the answer is: no! We’ve got data to prove it. We’ve even got meta-analyses.

And yet, as Daniel Willingham argues, we should be aware of the variety in the data:

While the mean of the grand distribution may show a small hit to comprehension when background music plays, it's NOT the case that every child reads a little worse with background music on.

He’s got a specific example in mind:

Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying.

In other words: psychology research can be immensely helpful. It can produce useful — even inspiring — guidance.

At the same time: when we work with our own students, we should always keep their individual circumstances in mind.

If this student right here needs music to stay focused and relaxed, then data on “the average student” just isn’t the right guide.
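To make that point concrete, here's a minimal sketch with invented numbers (not data from any of the music studies). The average reading score drops slightly when background music plays, yet several individual students actually do better with it.

# Invented reading-comprehension changes (score with music minus score in silence)
# for ten hypothetical students. Negative = worse with music, positive = better.
changes = [-4, -3, -3, -2, -1, -1, 0, 1, 2, 3]

average_change = sum(changes) / len(changes)
students_better_with_music = sum(1 for c in changes if c > 0)

print(f"Average change with music: {average_change:.1f} points")  # -0.8: a small hit on average
print(f"Students who read better with music: {students_better_with_music} of {len(changes)}")  # 3 of 10

The "average student" in this toy data set reads a bit worse with music on; but three real students in that room would be poorly served by a blanket no-music rule.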

 

Live Theater Boosts Student Knowledge and Tolerance
Andrew Watson

Question: What’s the most potentially misleading kind of research?

Answer: Research that supports a position you REALLY want to believe.

For this reason, I try to be ferociously skeptical of research that sounds really wonderful to me.


In this case: I’ve been a theater guy my whole life. I acted in plays throughout high school and college. My first teaching job was as a theater director. As I write this post, I’m about to go to a play.

When I see research showing that attending live theater is good for students, I already believe it’s true. I’m completely certain.

For that very reason, I try as hard as I can to find flaws in the study’s method.

Here’s what I found…

Live Theater: Methods

Researcher Jay Greene and his team chose some high school classes at random to attend live plays, including Twelfth Night and Peter and the Starcatcher. They compared those classes to control group classes, and measured several variables:

Tolerance: how did students respond to statements like “people who disagree with my point of view bother me,” or “I think people can have different opinions about the same thing.”

“Social perspective taking”: how did they respond to questions like “How often do you try to figure out what motivates others to behave as they do?”

Content Knowledge: how well did they learn the play’s plot and vocabulary.

As best I can tell, the researchers made a good-faith effort to make comparisons as fair as possible.

In one case, for example, they sent two classes on the same bus to a college campus. Half the students got off the bus to see a live play, and the others went into the same building to see a movie version of that play.

It’s hard to imagine a fairer control group when measuring the effect of live theater.

Live Theater: the Results

Students filled out their questionnaires several weeks after they did (or didn’t) see the plays.

When they crunched the data, Greene’s team found impressive differences.

On all of these scales, students who saw live theater scored higher than those who didn’t. And, watching a movie version of the play that others saw didn’t have that effect. In fact, it didn’t have any effect.

To put that in other words:

Students who saw live theater were likelier to be open to other points of view.

They were likelier to think about another person’s perspective.

They were likelier to understand the events and the language of the play.

The stats methodology gets into the weeds here — they report their findings based on standard deviations and z scores — but the trend is clear: live theater matters. A lot.

Conclusions

I’m trying to be grimly skeptical here. But I have to say, I just might be convinced.

Given Greene's conspicuous fairness, his obvious attempts to be as reasonable as possible, and his honesty about the potential flaws in his method, it seems just possible that he's on to something here.

One important point: this is the first study that looks directly at this question. We can never reach firm conclusions based on only one study.

But: as a place to start, this research seems quite persuasive.

Not just we theater people: all teachers might come to believe that attending live theater helps students learn…and be good people.

 

Teenagers, Hormones, and Other Stubborn Myths
Andrew Watson


There’s a short video about adolescence making the rounds on social media.

The video offers a quick explanation for highly-emotional teenage behavior. And it has a suggestion or two for parents.

The suggestions themselves make good sense:

Reassure your child that s/he’s normal.

Listen. (Ahem: turn off your cellphone first.)

Take courage: adolescence is a phase, and doesn’t last forever. (And, keep in mind: good things are happening in the brain as it matures.)

However, its “quick explanation for highly-emotional behavior” misses the mark.

This video returns to that old nemesis: teenage hormones.

The Fact and Fiction of Teenage Hormones

True enough, physical maturation does trigger a new hormone profile at puberty. And, those hormones do affect bodies and behaviors. So, this explanation isn’t entirely incorrect.

However, it’s substantially misleading.

In her book The Teenage Brain, Frances Jensen summarizes the “misconceptions and myths about the teenage brain and teenage behavior that are now so ingrained they are accepted as societal beliefs.”

The first misconception/myth on her list? “Teens are impulsive and emotional because of surging hormones” (p. 4).

Instead, we should focus on changes in neural development, especially myelination.

Here’s the short version: brains communicate (in part) with electrical signals. Many of those signals are carried by “uninsulated” wires.

As we age, the brain takes care to insulate more wires. That is: it covers them with myelin.

That process results in lots of good stuff. But, it takes time, and produces some real bumps along the way.

For instance: when the parts of the brain that generate emotional behavior (say, the amygdala and the nucleus accumbens) are more myelinated than the parts that control it (say, the prefrontal cortex), that imbalance allows for bad decisions and emotional over-reactions.

When trying to understand adolescent behavior, we should focus less on teenage hormones and more on the normal process of neuro-biological development.

Some Handy Sources

If you’re really interested in this topic, you should look at Jensen’s book. Also:

The Behavioral Neuroscience of Adolescence by Linda Spear

Age of Opportunity by Laurence Steinberg

Untangled by Lisa Damour

One more book I’d like to recommend: Inventing Ourselves: The Secret Life of the Teenage Brain.

Its author, Sarah-Jayne Blakemore, has done lots of the research behind the “imbalance hypothesis.” And, the book just won the Royal Society Book Prize.

For all these reasons, I assume it’s great. However, I haven’t read it yet, so I can’t be certain. I’ll update this post once I’ve got a confident view, one way or the other.

 

Andrew Watson

I met yesterday with several thoughtful teachers who had resonant questions about education research.


How do we balance factual learning and deep thinking?

What’s “the right amount of stress” during a test?

How can we promote collaboration while honoring individual differences?

And:

What’s the optimal class length?

This question comes up often. Should we have lots of short classes, so every subject meets every day? Should we have a few longer classes, so that we can dig deeply into a particular topic without interruption?

Debates sometimes fall along disciplinary lines. Foreign language and math teachers often want frequent class meetings; English and History teachers typically like bigger chunks of time for discussions.

Science teachers just gotta have 80 minutes to run a lab well.

But: what does research show?

Class Length: What Research Tells Us

As far as I know, we just don’t have a clear answer to that question.

Over at the Education Endowment Foundation, for example, they've investigated the benefits of block scheduling: that is, a few long periods rather than several short ones.

The finding: we can’t really say. Or, to quote EEF: “There is no consistent pattern in the evidence.”

More precisely:

The evidence suggests that how teachers use the time they are allocated is more important than the length of lesson or the schedule of lessons, and hence that the introduction of block scheduling is unlikely to raise attainment by itself.

By implication, a change away from block scheduling shouldn’t raise attainment either.

The point is not how long we teach but how well we teach with the time we’ve got.

For this reason, I often counsel schools and teachers: before you change your schedule, study human attention systems.

Once teachers know how attention works — and, it's A LOT more complicated than we might have thought — we'll be much better at helping students learn. (If you have the chance to attend a Learning and the Brain session about attention: RUN, don't walk.)

Class Length: What Research Can’t Tell Us

Research doesn’t answer this question, I think, because it can’t. There’s no one correct answer.

If you teach 2nd graders or 7th graders or 11th graders, you’ll probably find that different lengths of time work better.

If you teach in cultures that inculcate patience and concentration, longer classes will work better than in cultures with a more get-up-and-go kind of pace.

The number of students in the class might matter.

The experience of the teacher almost certainly matters.

When your school starts investigating schedules, therefore, I suggest you start with these essentials:

First: study human attention.

Second: don’t design “the optimal schedule.” Design the optimal schedule for your school and your students. It might not work at anyone else’s school, but it doesn’t need to.

A schedule that works for you and your students is the closest to optimal that you can get.

Is It Time to Re-Re-Think Mindset Research?
Andrew Watson

Mindset doubts have been haunting education for a while now.


Most dramatically, a recent meta-analysis including more than 300 studies makes it clear that colorful growth-mindset posters won’t cure all our problems. (BTW: this meta-analysis included data from almost 370,000 participants. Wow.)

Combined with general concerns about the replication crisis in psychology, and some actual non-replications, this analysis has put Mindset Theory under a cloud.

Mindset Doubts in Context

Of course, we should always doubt research findings. Science, after all, is a way of practicing effective skepticism.

At the same time, doubts don’t require wholesale rejection.

While it’s certainly true that “colorful growth-mindset posters won’t cure all our problems,” I don’t think anyone has seriously claimed that they would. (Well: maybe people who sell colorful growth-mindset posters.)

Instead, the theory makes this claim: we can help students think one way (growth mindset) more often than another way (fixed mindset). When they do…

…they have more helpful goals in school.

…they have a healthier perspective on the difficulties that regularly accompany learning.

…and, they respond more effectively to academic struggle.

This process doesn’t require a revolution. It asks for a general change in emphasis. For some students, this new emphasis increases motivation and learning.

Research Continues

While that big meta-analysis got lots of headlines, other useful studies have recently come out. For example:

This meta-analysis found that a well-known mindset technique largely works. When students study how brains change as they learn (“neuroplasticity”), they develop growth mindsets. And, they learn more stuff.

This recent study shows that even a “one-shot” mindset intervention has lasting effects. The researchers tested this idea over two years with four different high-school cohorts. They’ve got lots of data.

This study suggests that encouraging people to adopt a growth mindset likewise encourages them to become more “intellectually humble.” Lord knows we can all use some more intellectual humility these days.

The point is not that we should reject all mindset doubts.

The point is that one meta-analysis should not end all discussion of a theory that’s been researched for 40+ years.

We should not, of course, ask mindset to solve all our problems. Nor should we ask retrieval practice to solve all problems. Or short bursts of in-class exercise.

No one change fixes everything.

Instead, we should see Mindset Theory as one useful tool that can help many of our students.

Andrew Watson

We know that our students spend too much time sitting down. They’re antsy and unhappy, and — increasingly — overweight. Wouldn’t it be great if we could add even quick exercise breaks into the class day?


Of course, we have lots of reasons to be skeptical about this possibility.

Even if we get them to move more in class, they might just be more sedentary later in the day.

If they burn more calories at school, they might eat more later on.

And: let’s be practical. If we get our students up and moving around, it might take FOREVER to get them settled back down again.

Which is to say: if they move more, they might learn less.

Quick Exercise Breaks: The Research

A research team has been exploring each of these questions, and they’ve got LOTS of good news.

In brief, almost all of these fears are groundless.

We were right to be skeptical, right to ask all those questions. But the answers turn out to be: “not to worry!”

For example: students who get extra exercise in class don’t spend more time on the couch later on.

They don’t eat more either.

They plain old feel better.

And — here’s some great news: they get back to work in about 30 seconds. (They learn the same amount as their sedentary peers, by the way.)

The Bad News?

Honestly, there's just not much bad news here. The worst the researchers could report is that they didn't quite meet their goals.

The researchers wanted teachers to do ten quick exercise breaks, but the teachers averaged only five.

Given all the other good news, I’m thinking we can live with this.

By the way: we might have hoped that the exercise would help students learn — not just fail to impede learning.

Research into that question is complex. Here’s a link to a recent article on the subject.

In the meanwhile: here’s a fun video on the Michigan research project.

https://www.youtube.com/watch?v=sq5xVgClIsw

Does Hands-On Learning Benefit Science Students?
Andrew Watson

Phrases like “inquiry learning” or “project-based learning” inspire both enthusiasm and skepticism.


In part, the difference of opinion results from a very basic problem: it’s hard to define either term precisely. What, exactly, are the essential elements of inquiry learning?

If we can’t even answer that question, it will be jolly hard for researchers to know if the method “really works.”

Questions without Answers; Hands-On Learning

A study published earlier this year focuses on two key elements of inquiry learning.

First: teachers should let students investigate a scientific phenomenon without telling them what they’ll find. It’s called inquiry learning because teachers withhold the correct answers.

Second: teachers should encourage hands-on learning. As much as possible, students should do the work themselves, not watch the teacher do it.

If you approach education with a constructivist lens, you’re likely to favor both approaches. Students who make sense of ideas on their own — with their own thoughts and their own hands, without too much teacher guidance — are likeliest to think deeply about concepts.

If instead you start with cognitive load theory, you're likely to worry about these practices. Students have relatively little working memory with which to process new ideas. The absence of teacher guidance and the need to manipulate physical objects might well overwhelm precious cognitive resources.

What They Did; What They Found

Researchers led by Zhang taught 4th and 5th graders about converting potential energy to kinetic energy. They used balls rolling down ramps of different heights to illustrate these concepts.

In one case, a teacher told the students what to expect: the higher the ramp, the farther the ball will roll. The students then watched the teacher do the experiment. (That is: “direct instruction.”)

In another, the teacher told students what to expect, but let them roll balls down the ramps.

In the third case, the teacher didn’t tell students what to expect, and let them do the experiment. (That is: “inquiry learning.”)

So: which combination of inquiry techniques yielded the most learning?

Direct instruction did. By a fair margin. (Cohen's d was 0.59: not huge, but certainly respectable.)

In fact, in this paradigm, “inquiry learning” was the least effective at helping students take these concepts on board.

(To be complete: direct instruction helped students a) remember what they learned and b) reason with that new knowledge. On a third measure, applying this new knowledge to real-world situations, both approaches worked equally well.)

At least in this one research paradigm, working memory limitations made constructivist pedagogy too difficult.

On The Other Hand…

When I first planned this post, I was excited to contrast Zhang’s study with a dramatic report from Washington State.

According to this report — here’s a one-page summary — 9th- and 10th-grade students who followed a constructivist inquiry curriculum (including hands-on learning) learned four extra months of science over two years.

That’s a simply staggering result.

I was hoping to argue that we should expect contradictory studies, and learn from the tensions between them.

In particular, the difference between a 1-shot study and a 2-year-long study should really get our attention.

Alas, I can’t make that argument here.

Compared to What?

In the ramp-and-ball study, Zhang’s three student groups learned under three equally plausible conditions. That is: she compared something to something else.

The Washington study, however, compares something to nothing.

That is: teachers at some schools got a shiny new curriculum and lots of dedicated professional development. Teachers at comparison schools got bupkis.

So, it’s entirely possible that the inquiry curriculum caused the extra learning.

It’s also possible that simply doing something new and exciting enlivened the teachers at the inquiry schools.

They might have been equally enlivened by some other kind of curriculum. Who knows: they might have found a well-designed direct-instruction curriculum inspiring.

Unless your control group is doing something, you can’t conclude that your intervention created the change. “Business as usual” — that’s what the researchers really called the control group! — doesn’t count as “doing something.”

An Invitation

Do you have a well-designed inquiry learning study that you love? Please send it to me: [email protected]. I’d love to write about it here…