Does even a short bout of exercise immediately after learning help form long-term memories?
A recent article, published in Cognitive Research: Principles and Implications, suggests an intriguing, even surprising, answer to this question.
From a different perspective, this article also offers useful insights into the way that psychologists think and work.
Specifically, it helps answer a second question: what should researchers do when their data are inconsistent?
Steven Most and colleagues wondered if even 5 minutes of exercise immediately after learning would increase the exerciser’s memory of that information.
To test this question, Most had students study pairs of names and faces, and then do five minutes of exercise. (They stepped on and off a low platform.) He then tested their memory of those name/face pairs the following day, and compared their performance to two control groups.
Compared to one control group, which did not exercise at all, these steppers remembered more name/face pairs.
Similarly, compared to another control group, which exercised before learning the name/face pairs, these post-study steppers remembered more pairs.
But here’s the surprise. On average, the exercising men in the study remembered slightly fewer pairs than the non-exercising men. But the exercising women remembered more than twice as many pairs as their non-exercising female peers.
This article opened with a question: does a short bout of exercise immediately after learning help form long-term memories?
The answer: it certainly seems to, but only for women.
Psychologists at Work
Although a lot of work goes into this kind of study, psychologists are rarely satisfied to examine a question just once. When they get these results—especially such interesting results—they’re inclined to repeat their study with slight variations.
They are, in effect, trying to prove themselves wrong. Or, at least, trying to discover the limits outside of which their findings aren’t true.
So, Most et al. repeated their study. This time, instead of testing the students the following day, they tested them later the same day.
The results? They arrived at the same major findings. Although the women’s post-exercise increase wasn’t as dramatic (they remembered almost twice as many name/face pairs, not more than twice as many), post-study exercisers still remembered more pairs than pre-study exercisers and than non-exercisers.
Up to this point, Most’s team had gotten the same dramatic answer twice. What does a good psychologist do?
Most repeated the study again—this time using name/shape pairs instead of name/face pairs.
The results? Nada.
This time, none of the groups showed significant differences at all. No differences between the pre- and post-study exercisers. No differences between the exercisers and non-exercisers. No meaningful gender differences. Bupkis.
So, you know what happens next: they ran the study a fourth time. This version was practically identical to the first; they simply made a slight change to the non-exercise task. (Crucially, Most’s team went back to name/face pairs.)
Drum roll please…
Basically, a nothingburger.
As was true in study #3, but contrary to studies #1 and #2, study #4 showed no statistically significant differences. As the authors write:
“Examining the data only from the women, those in the exercise group exhibited somewhat better memory than those in the non-exercise group, but this [difference] fell short of significance.”
In the world of psychology, if a result falls short of statistical significance, you can’t make strong claims about your findings.
Psychologists at Work, Part II
Imagine that you’re a professional psychologist. You’ve spent months—probably years—running these studies. Some of your results—studies #1 and #2—are strong and compelling. Others—#3 and #4—don’t get you very far.
What do you do with this muddle?
As we asked at the top of this article: what should researchers do when their data are inconsistent?
The answer is: You publish it. You publish it all.
You say: look, we ran our studies and came up with a confusing and interesting collection of results. Here you go, world, see what you make of them.
You do not hide it. You do not, for example, publish studies #1 and #2 and pretend that #3 and #4 didn’t happen. You publish it all.
In fact, Most and colleagues went further. They created a handy graph (on page 11) making this inconsistency extremely clear. It’s easy to see that, for men, the short bout of exercise didn’t make much of a difference in any of the studies. For women, on the other hand, the exercise made a big difference in the first study, a modest difference in the second, and practically none in the third and fourth.
Before I started attending Learning and the Brain conferences, I had been an English and Theater teacher for years. My undergraduate degree is in Medieval History and Literature; I have an MA (and half of a PhD) in English. I am, in other words, historically a Humanities kind of guy.
But I have to say, this article exemplifies some of the many reasons that I have grown to admire a scientist’s approach to teaching and learning.
Most and his colleagues, Briana Kennedy and Edgar Petras, not only tried to prove themselves wrong, they went out of their way to show the results when they partially succeeded in doing so.
Yes, there’s a lot of talk about a “replication crisis” in psychology. Yes, nobody quite knows what a p-value really means, or why .05 is the chosen threshold.
But at the end of the day, researchers like Most, Kennedy, and Petras are doing hard, fascinating, and helpful work—and they’re being remarkably straightforward with others about the complexity of their findings.
We should all admire this article. And me: I’m going to work out…