Exercise and Learning
Andrew Watson


Does even a short bout of exercise immediately after learning help form long-term memories?

A recent article, published in Cognitive Research: Principles and Implications, suggests intriguing—even surprising—answers to this question.

From a different perspective, this article also offers useful insights into the way that psychologists think and work.

Specifically, it helps answer a second question: what should researchers do when their data are inconsistent?

The Study

Steven Most and colleagues wondered if even 5 minutes of exercise immediately after learning would increase the exerciser’s memory of that information.

To test this question, Most had students study pairs of names and faces, and then do five minutes of exercise. (They stepped on and off a low platform.) He then tested their memory of those name/face pairs the following day, and compared their performance to two control groups.

Compared to one control group, which did not exercise, these steppers remembered more name/face pairs.

Similarly, compared to another control group, which exercised before learning the name/face pairs, these steppers remembered more pairs.

But here’s the surprise. On average, the exercising men in the study remembered slightly fewer pairs than the non-exercising men. But the exercising women remembered more than twice as many pairs as their non-exercising female peers.

This article opened with a question: does a short bout of exercise immediately after learning help form long-term memories?

The answer: it certainly seems to, but only for women.

Psychologists at Work

Although a lot of work goes into this kind of study, psychologists are rarely satisfied to examine a question just once. When they get these results—especially such interesting results—they’re inclined to repeat their study with slight variations.

They are, in effect, trying to prove themselves wrong. Or, at least, trying to discover the limits outside of which their findings aren’t true.

So, Most et al. repeated their study. This time, instead of testing the students the following day, they tested them later the same day.

The results? They arrived at the same major findings. Although the women’s post-exercise increase wasn’t as dramatic (they remembered almost twice as many name/face pairs, rather than more than twice as many), post-study exercisers still remembered more pairs than pre-study exercisers and non-exercisers.

Brace Yourself

Up to this point, Most’s team had gotten the same dramatic answer twice. What does a good psychologist do?

Most repeated the study again—this time using name/shape pairs instead of name/face pairs.

The results? Nada.

This time, none of the groups showed significant differences at all. No differences between the pre- and post-study exercisers. No differences between the exercisers and non-exercisers. No meaningful gender differences. Bupkis.

So, you know what happens next: they ran the study a fourth time. This version was practically identical to the first; they simply made a slight change to the non-exercise task. (Crucially, Most’s team went back to name/face pairs.)

The results?

Drum roll please…

Basically, a nothingburger.

As was true in study #3 — but contrary to studies #1 and #2 — study #4 showed no statistically significant differences. As the authors write:

“Examining the data only from the women, those in the exercise group exhibited somewhat better memory than those in the non-exercise group, but this [difference] fell short of significance.”

In the world of psychology, if a result falls short of statistical significance, you can’t make strong claims about your findings.

Psychologists at Work, Part II

Imagine that you’re a professional psychologist. You’ve spent months—probably years—running these studies. Some of your results—studies #1 and #2—are strong and compelling. Others—#3 and #4—don’t get you very far.

What do you do with this muddle?

As we asked at the top of this article: what should researchers do when their data are inconsistent?

The answer is: You publish it. You publish it all.

You say: look, we ran our studies and came up with a confusing and interesting collection of results. Here you go, world, see what you make of them.

You do not hide it. You do not, for example, publish studies #1 and #2 and pretend that #3 and #4 didn’t happen. You publish it all.

In fact, Most and colleagues went further. They created a handy graph (on page 11) making this inconsistency extremely clear. It’s easy to see that, for men, the short bout of exercise didn’t make much of a difference in any of the studies. For women, on the other hand, the exercise made a big difference in the first study, a modest difference in the second, and practically none in the 3rd and 4th.

Fig. 4 Means and 95% confidence intervals for each experiment indicating how many more paired associations were correctly recalled among female and male participants when the post-learning activity was exercise, relative to the non-exercise post-learning activity. For experiment 3, error bars reflect a repeated measures design, whereas those for the other experiments reflect independent measures designs. A meta-analysis across these experiments indicated that, among the female participants and with 95% confidence, 5 minutes of post-learning exercise increased memory for paired association by 0.40 to 4.63 items. Image from Most, S. B., Kennedy, B. L., & Petras, E. A. (2017). Evidence for improved memory from 5 minutes of immediate, post-encoding exercise among women. Cognitive Research: Principles and Implications, 2(1), 33.

Hats Off

Before I started attending Learning and the Brain conferences, I had been an English and Theater teacher for years. My undergraduate degree is in Medieval History and Literature; I have an MA (and half of a PhD) in English. I am, in other words, historically a Humanities kind of guy.

But I have to say, this article exemplifies some of the many reasons that I have grown to admire a scientist’s approach to teaching and learning.

Most and his colleagues, Briana Kennedy and Edgar Petras, not only tried to prove themselves wrong; they went out of their way to show the results when they partially succeeded in doing so.

Yes, there’s a lot of talk about a “replication crisis” in psychology. Yes, nobody knows what a p-value really means, or why .05 is the chosen threshold.

But at the end of the day, researchers like Most, Kennedy, and Petras are doing hard, fascinating, and helpful work—and they’re being remarkably straightforward with others about the complexity of their findings.

We should all admire this article. And me: I’m going to work out…

Maturation of the Hippocampus
Andrew Watson


Why do adolescents learn and remember specific information more easily than younger children?

We have, of course, many answers to this question.

For instance: working memory increases during childhood, and so adolescents have–on average–greater working memory capacity than younger students.

Also, prior knowledge usually makes acquisition of new knowledge easier. And so, adolescents–who have more prior factual knowledge than children–can more easily take in new information.

Today’s Headline

New research from the Max Planck Institute for Human Development offers yet another reason: hippocampal development.

The hippocampus, tucked beneath the cerebral cortex behind each of your temples, helps process and form new long-term memories. It turns out that the hippocampus develops for much longer than we had previously known. Far from being fully developed in childhood, it continues maturing at least until the teen years.

The specific teaching implications of this research are still years away. For the present, this article at Neuroscience News gives a helpful overview of what we know now, and how this new research fits into our current understanding.

The Neural Effects of Media Multitasking
Andrew Watson


If you’re attending Learning and the Brain’s “Merging Minds and Technology” Conference in November, you’re probably interested in Mona Moisala’s research. After all, Moisala wants to know if media multitasking influences distractibility among 13- to 24-year-olds.

That is: does switching from Instagram on an iPad to Angry Birds on an iPhone to email on a laptop make it harder for students to pay attention in class later on? (Moisala has your attention now, right?)

And, just to make her research even more intriguing, she investigates the relationship between time spent playing video games and working memory capacity.

Here’s what she found:

First: the more that students reported media multitasking, the more they struggled with attention tasks in the lab.

Second: the more that students reported playing daily computer games, the greater the working memory capacity they demonstrated.

Third: more daily computer game play also correlated with improved reaction times, and with higher ability to switch from visual to auditory attention.

The Question You Know Is Coming…

Moisala finds a relationship between these uses of technology and various cognitive functions. However, which direction does causality flow?

Does media multitasking cause students to struggle with attention? Or, are those who already struggle with attention drawn to media multitasking?

Moisala’s research doesn’t yet answer that question–although she’s applying for funding to study longitudinal data. (Data showing changes over time ought to reveal causality.)

Some Tentative Answers 

Although this research doesn’t answer causality questions, I have some suspicions.

First: I think it’s unlikely that daily video game play increases working memory capacity. Instead, I suspect that people who have a high working memory capacity enjoy the complexity of video-game play more than those who don’t.

Why do I think this? Well: for the most part, we haven’t had much luck increasing working memory capacity outside of psychology labs. So, it would be big and surprising news if playing everyday video games grew working memory.

Second: I suspect that playing video games does improve reaction time and attention switching. Those cognitive capacities are trainable, and video games ought to help train them.

Third: I suspect–although this is purely conjecture–that media multitasking and attentional difficulties feed each other. That is: people with short attention spans are prone to media multitasking; and media-multitasking trains people to reorient their attention more frequently.

Here’s an even better answer: if you come to the November conference, you’re likely to meet people who have researched these very questions.

I hope to see you there…

Neuroscience and Neuromyths
Andrew Watson


Does neuroscience education help reduce a teacher’s belief in neuromyths?

According to this recent study: not as much as we would like.

In some cases, neuroscience education does help teachers.

For instance, 59% of the general public falsely believe that listening to classical music increases reasoning ability. That number is 55% for teachers, but drops to 43% for teachers who have had neuroscience training.

Similarly, teachers with knowledge of neuroscience are less likely to embrace a “left-brained vs. right-brained” understanding of learning than teachers without. (See video here.)

However, neuromyths about learning styles and about dyslexia persist–even among teachers with neuroscience education.

Among the general population, 93% of people incorrectly believe that “individuals learn better when they receive information in their preferred learning style.” That number falls to 76% among teachers–but is almost identical (78%) for teachers trained in neuroscience.

And: teachers who have studied neuroscience believe that writing letters backwards is a sign of dyslexia at almost the same rate as those who haven’t.

The Big Question

Studies like these lead me to this question: why are some neuromyths so sticky? Why do so many of us teachers believe in, say, learning styles theory despite all the scientific evidence to the contrary?

Why does this belief persist even among those–like those of us who attend Learning and the Brain conferences–who have placed science at the center of our professional development?

I welcome all thoughts on this question…

Parents, High School Start Times, and Sleepy Teens
Andrew Watson


Research findings that support later high-school start times have been more and more common in recent years. (See also here.) And teachers I know are increasingly vocal about letting teens sleep later.

And yet, when I talk with high school leaders, they ruefully cite sports schedules to explain the impossibility of making serious changes.

(I’ve also read that bus schedules get in the way.)

Here’s another–quite surprising–reason that this change might be hard to accomplish: parental uncertainty. According to this recent study, published in the Journal of Clinical Sleep Medicine, half of parents whose teens start school before 8:30 don’t support a later start time.

The study concludes that we need to do a better job educating parents about the biological changes in adolescent sleep patterns.

The more that parents understand how melatonin onset–and, hence, sleepiness–changes with adolescence, the more they might understand that their awake-at-midnight teens aren’t simply being willful. They are instead responding to powerful chemical signals.

Given all we know about adolescent sleep, and the effect of sleep on learning, teachers and parents should be champions of reasonable high school start times.

More Thoughts on Gender Differences
Andrew Watson


Regular readers of this blog know that I’m a skeptic about gender differences in learning. Although they certainly do exist–I think particularly about differences in 3D mental rotation–I often think they’re overstated or overemphasized.

At the same time, my emphasis on this point might obscure the fact that at the population level, gender differences in learning do sometimes exist. Two articles are, I think, particularly helpful in understanding these ideas.

First, this weighty research review considers the number of women in STEM fields and reaches three broad conclusions:

  1. “Males are more variable [than females] on most measures of quantitative and visuospatial ability, which necessarily results in more males at both high- and low-ability extremes; the reasons why males are often more variable remain elusive.”
  2. “Females tend to excel in verbal abilities, with large differences between females and males found when assessments include writing samples.”
  3. “We conclude that early experience, biological factors, educational policy, and cultural context affect the number of women and men who pursue advanced study in science and math and that these effects add and interact in complex ways. There are no single or simple answers to the complex questions about sex differences in science and mathematics.”

The article stands out to me not only for its thoroughness, but for its all-star list of authors. Janet Shibley Hyde, for example, is well known for her skepticism about gender differences; in fact, she authored a widely-cited article called The Gender Similarities Hypothesis. If a known skeptic is on board with these conclusions, then I’m comfortable being there too.

(Another author, Diane Halpern, by the way, is a former president of the American Psychological Association.)

Second, Hyde has published an exploration of the first conclusion above: that men show greater variability in quantitative and visuospatial abilities. This hypothesis suggests that–although large populations of men and women will have the same average math scores–we would expect to see more men who are very good at math (say, the top 5%) and also more men who are very bad at math (say, the bottom 5%).

Hyde’s article shows the complexity of this hypothesis. In particular, given that these variations differ from country to country, and can change over time, we have to recognize the social and historical context of any data set.

Decisions, Decisions: Helping Students with Complex Reasoning
Lindsay Clements


Most of us have heard the adage about the two ways that someone can get into a swimming pool: jump right in, or enter slowly to acclimate to the temperature a few inches at a time.

Most of us have probably also witnessed (or experienced) the varied ways that someone might approach an assignment: one could start and finish it right away; work on it in small chunks over an extended period of time; or wait until the last moment to start, likely rushing to finish.

And for those that are keeping an eye on back-to-school sales events, there are of course different ways to shop: one could impulse purchase an item, or do some research beforehand to get the best possible deal.

The common thread in all of those scenarios is that different methods, strategies, and thought processes can be employed to solve problems or complete tasks. And each has its own time and place. So how do we decide exactly which ones to use in a given situation?

Algorithms and heuristics

The science behind problem solving and decision-making comprises a robust portion of cognitive research and involves the study of both conscious and unconscious thought.

Overall, there are two primary ways that a problem can be tackled: with algorithms or with heuristics. [1] An algorithmic approach refers to a series of steps that are more or less guaranteed to yield the solution. While this approach is most easily thought of in the context of mathematics (e.g., following a mathematical formula), an algorithmic approach also refers to such procedures as following a recipe or backtracking your steps to find a lost object.

Heuristics, on the other hand, are associative strategies that don’t necessarily lead to a solution, but are generally pretty successful in getting you there. These include conscious strategies (such as solving a maze by making sure your path stays in the general direction of the end point) and unconscious strategies (such as emotional instincts). Because heuristics are more subjective and less systematic than an algorithmic approach, they tend to be more prone to error.
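The contrast above can be sketched in code. Here is a minimal, hypothetical example (the maze, both solvers, and all names are invented for illustration): breadth-first search is algorithmic, since it is guaranteed to find a path whenever one exists, while the greedy “always step toward the exit” rule is heuristic: usually successful, but capable of painting itself into a corner.

```python
from collections import deque

# A tiny grid maze: 0 = open, 1 = wall. Start top-left, goal bottom-right.
MAZE = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def neighbors(cell):
    """Yield open cells adjacent to the given (row, col) cell."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] == 0:
            yield (nr, nc)

def solve_algorithmically(start, goal):
    """Breadth-first search: systematic, guaranteed to find a path if one exists."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path exists

def solve_heuristically(start, goal, max_steps=20):
    """Greedy rule of thumb: always step toward the goal. Fast, but fallible."""
    path = [start]
    while path[-1] != goal and len(path) < max_steps:
        options = [n for n in neighbors(path[-1]) if n not in path]
        if not options:
            return None  # the heuristic dead-ended
        # Pick the neighbor closest to the goal (Manhattan distance).
        path.append(min(options, key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])))
    return path if path[-1] == goal else None

print(solve_algorithmically((0, 0), (3, 3)))  # always succeeds on this maze
print(solve_heuristically((0, 0), (3, 3)))    # happens to succeed here, but not guaranteed
```

On this particular maze both solvers find the exit; change a wall or two and the greedy rule can trap itself while the breadth-first search still succeeds, which is exactly the algorithm/heuristic trade-off described above.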

In the classroom, solving problems with an algorithmic approach is fairly straightforward: students can learn the needed procedural steps for a task and identify any places where they might have gone wrong, such as a miscalculation or a typo.

Heuristics are more complicated, however, and much of the research on problem solving aims to understand how children and adults solve problems in complex, confusing, or murky situations. One question of particular interest involves transfer: how do children apply, or transfer, their knowledge and skills from one problem-solving scenario to another?

Six of one, half-dozen of the other

Research suggests that students tend to have trouble transferring knowledge between problems that share only the same deep structure. For example, two puzzles that can be solved with the same logic, but that have different numbers, settings, or characters, are tricky.

In contrast, problems that share both their deep structure and shallow structure can be solved with relative ease.

A seminal study that illustrates the challenges of transfer asked students to solve the Radiation Dilemma: a medical puzzle of how to destroy a tumor with laser beams. [2] Some of the students were first told to read The General: a puzzle (and its solution) based on the common military strategy of surrounding an enemy and attacking from all sides. The solution to the Radiation Dilemma was analogous to the solution for The General: radiation beams should target the tumor from all sides until destroyed.

The researchers found that the students who first read the solution to The General successfully solved the Radiation Dilemma more often than those who did not.

However, students who received a hint that the solution to The General problem would help them solve the Radiation Problem were actually more successful in solving it than those who read both problems but received no hint.

This finding suggests that analogies can certainly be a helpful guide when children (or adults) are trying to make sense of a problem or find similarities between different contexts. But they can also be confusing. Presumably, people become distracted by or hyper-focused on shallow structural features (e.g., reading the Radiation Dilemma and trying to remember what medical strategy was used on a TV drama) and thus overlook the deep-structure similarities that are present.

So, when we ask students to make connections between two problems, scenarios, or stories that have surface-level differences, a little hint may just go a long way.

The less the merrier?

In addition to better understanding how to make decisions or think about problems, researchers also aim to understand how much we should think about them. And, contrary to popular thought, it appears that reasoned and evaluative thinking may not always be best.

In fact, there is evidence for the deliberation-without-attention effect: some problem-solving situations seem to benefit more from unconscious cognitive processing. To investigate this, scholars at the University of Amsterdam set out to determine whether better decisions result from unconscious or conscious thought. [3]

In their experiment:

  • participants (college students) read information about four hypothetical cars
  • the descriptions of the cars were either simple (four features of the car were listed) or complex (12 features were listed)
  • some of the features were positive and some were negative; the “best” car had the highest ratio of positive-to-negative features
  • four minutes passed between participants reading about the cars and being asked to choose the best one
  • some participants spent those four minutes thinking about the cars, while the others were given a puzzle to solve in order to distract them from such thinking
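The “best car” criterion in the experiment can be made concrete with a small, entirely hypothetical sketch (the car names and feature ratings below are invented for illustration, not taken from the paper): each car is rated on a set of features, and the best choice is simply the one with the highest proportion of positive features.

```python
# Hypothetical feature ratings: True = positive feature, False = negative.
# The "best" car is the one with the highest ratio of positive features.
cars = {
    "Car A": [True, True, True, False],    # 3 of 4 features positive
    "Car B": [True, False, False, False],  # 1 of 4
    "Car C": [True, True, False, False],   # 2 of 4
    "Car D": [False, True, False, False],  # 1 of 4
}

def positive_ratio(features):
    """Fraction of a car's listed features that are positive."""
    return sum(features) / len(features)

best = max(cars, key=lambda name: positive_ratio(cars[name]))
print(best)  # Car A
```

With only four features per car the comparison is easy to do consciously; the study’s “complex” condition tripled the feature list, which is where deliberate evaluation started to falter.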

When asked to choose the “best” car, two groups stood out:

  • Group A: participants who (1) read the simple car descriptions and (2) consciously thought about the cars were more likely to identify the best car than those who read the simple descriptions and then worked on the puzzle
  • Group B: participants who (1) read the most complex car descriptions and (2) were then distracted by the puzzle were more likely to identify the best car than those who read the complex descriptions and consciously thought about the car options

The participants in Group B actually had a higher overall success rate than those in Group A.

Thus, it appeared that conscious thinkers made the best choices with simple conditions, but did not perform as well with complex circumstances. In contrast, the unconscious thinkers performed best with complex circumstances, but performed more poorly with simple ones.

Buyer’s Remorse

Of course, the cars that the participants evaluated were fictional. The researchers therefore wanted to see if their results would hold up in similar real-world circumstances. They traveled to two stores: IKEA (a complex store, because it sells furniture) and a department store (a simple store, because it sells a wide range of smaller items, such as kitchen accessories).

As shoppers were leaving the store with their purchases, the researchers asked them:

  • What did you buy?
  • How expensive was it?
  • Did you know about the product before you purchased it?
  • How much did you think about the product between seeing it and buying it?

The researchers then divided the shoppers into two groups: (1) conscious and (2) unconscious thinkers, based on the amount of time they reportedly spent thinking about their purchased items.

After a few weeks, the researchers called the shoppers at home and asked them about their satisfaction with their purchases. In a similar vein to the first experiment, here the conscious thinkers reported more satisfaction for simple products (department store) and the unconscious thinkers reported more satisfaction for complex products (IKEA).

Thus, these experiments indicate that conscious thinking is linked to higher satisfaction with decisions when conditions are simple (less to evaluate), whereas unconscious thinking leads to higher satisfaction when conditions are complex (many factors to evaluate).

Why don’t you sleep on it

While these studies are only a snapshot of the problem-solving and decision-making research field, they offer some valuable thoughts for how we can support students in the classroom.

First, we know that students need to understand problems in order to solve them. It is likely a good habit to continually remind ourselves that our students do not all make sense of the same problems in the same way or at the same rate. Thus, as we saw in The General, when we offer students problem guides, strategies, or templates, a little nudge as to how to use them can be enormously beneficial.

Second, we often push our students to think deeply and critically about problems and context. And that push is probably right: now, more than ever, thoughtful, evidence-based, and logical reasoning is critical for tackling both local and global issues.

But there is also much to be said about instinct, conscience, and whatever it is that goes on in our subconscious. So if we see our students dwelling on a problem, or sweating a decision, the best way that we can help them delve into a solution may just be to first have them step away for a little while.

References:

[1] Novick, L., & Bassok, M. (2006). Problem solving. In K. Holyoak & R. Morrison (Eds.), The Cambridge Handbook of Thinking and Reasoning (pp. 321-349). London: Cambridge University Press.

[2] Gick, M., & Holyoak, K. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306-355.

[3] Dijksterhuis, A., Bos, M., Nordgren, L., & van Baaren, R. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005-1007.

The Effect of Alcohol on Learning…
Andrew Watson


…might not be what you’d expect.

My prediction would have been that if I have a glass of wine before I learn some new vocabulary words, I won’t learn those words as well as I would have fully sober.

That prediction, it turns out, is correct. New learning that takes place post-alcohol just doesn’t consolidate very well. It seems that alcohol inhibits long-term potentiation.

I also would have predicted that if I have a glass of wine just after I learn some new vocabulary words, that wine would muddle my memory of those new words as well.

That prediction, however, is just wrong. My post-study wine–surprise!–improves my recall of those words the next morning.

In fact, a recent study shows that this effect holds true not only in the psychology lab, but also at home. When participants (not just college students, by the way) went home after they learned new words and raised a pint or two, they remembered more of those words than their fully-sober counterparts.

Even more remarkable, they did better than their alcohol-free peers not because they forgot less, but because they remembered even more. That is, their recall score in the evening was in the mid-30% range; the next morning, it was in the low-40% range.

Theories, theories

The standard hypothesis to explain such a result goes like this: when we drink alcohol, the brain forms fewer new memories. The hippocampus takes advantage of this pause to consolidate previous memories.

In other words: since the brain has some alcohol-induced down time, it uses that time to firm up what it already knows.

The authors of this study suggest an alternate explanation: sleep. As they explain, alcohol increases the proportion of slow-wave sleep compared to rapid-eye-movement sleep. Because slow-wave sleep is good for the formation of factual memories, this SWS increase benefits factual learning.

(An implication of this hypothesis is that alcohol might be bad for other kinds of memory formation–such as procedural memory–which require more rapid-eye-movement sleep. That is: alcohol might help you learn more facts, but fewer skills.)

Some Caveats, and an Invitation

Needless to say, I’m not encouraging you to drink heavily to promote learning.

And, I wouldn’t share these results with my 2nd graders.

However, after a long evening of study, I just might feel a bit less guilty about relaxing with a cozy Cabernet.

And, when you come to this fall’s Learning and the Brain conference, you should definitely join us at the wine and cheese reception.

Criticizing Critical Thinking
Andrew Watson


Over at Newsweek, Alexander Nazaryan wants to vex you. Here’s a sample:

Only someone who has uncritically mastered the intricacies of Shakespeare’s verse, the social subtexts of Elizabethan society and the historical background of Hamlet is going to have any original or even interesting thoughts about the play. Everything else is just uninformed opinion lacking intellectual valence.

If you’d like a more nuanced version of this argument, check out Daniel Willingham’s Why Don’t Students Like School?

In particular, you might read…

Chapter 2: “Factual knowledge must precede skill”

Chapter 4: “We understand things in the context of what we already know, and most of what we know is concrete”

Chapter 5: “It is virtually impossible to become proficient at a mental task without extended practice”

and Chapter 6: “Cognition early in training is different from cognition late in training”

From another vantage point: my own book Learning Begins discusses the dangers of working memory overload lurking in efforts to teach critical thinking.

Whether you prefer Nazaryan’s emphatic declamations, or Willingham’s and my more research-focused commentary, take some time to think critically about all the cognitive legwork that must precede real critical thought.