
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Research Morsel: Gender Differences in Math (Again)
Andrew Watson

The journal Intelligence recently published an interesting study [1] analyzing gender differences in cognitive abilities in the US and India.

The question hovering in the background is—as it so often is—“are there innate gender differences in cognitive abilities?”

That is: we have lots of data showing gender differences in various measures of academic success; are those differences innate, socially created, or some combination of the two?

To answer this question, you might look at the very best performers in—say—math. If there are substantially more boys in the top 5% of math scorers, and if that substantial difference persists over time, then you might think that–all other things being equal–boys are innately better at math.

This study, however, shows that the difference has shrunk in the last twenty years—in both the US and India. If gender differences in math are innate, then these results are a shocker.

Of course, other readers might see this study as reinforcing a theory of innate gender differences:

  1. Although there is a smaller difference in math success between genders, that difference does persist. That is: there are still more boys than girls at the very highest end of math performance.
  2. The gender difference at the high end of verbal performance has not changed. Girls still score higher than boys do on such tests.

In my view, this study tends to confirm the hypothesis that social forces exaggerate—and perhaps create—gender differences in academic performance.

  1. I suspect that gender differences in verbal performance haven’t changed because we haven’t focused on them with the same energy and creativity that we’ve used to promote girls’ self-perception in math.
  2. While math gender differences persist in the US and India, they vary quite widely by country [2]: an odd finding indeed if boys are naturally mathier than girls.

Reasons to be cautious when interpreting this article—and this blog post:

  1. This research looks at gender differences in one very specific way: math and verbal performance at the very high end (“the extreme right tail” of the bell curve). There are MANY other ways to consider these complex questions, and we shouldn’t let any one way determine our answer.
  2. We have only recently begun to understand that gender isn’t always binary. I don’t think many researchers in this field have found ways to analyze math performance of transgender students.
  3. The article is still behind a paywall, so I haven’t seen the numbers. You might want to look at the underlying data to see if you find it persuasive.
  4. I, of course, have my own biases:
    1. I think that gender differences in academic performance are much more likely to be socially created than innate [3, 4]. And so, it’s not surprising that I interpret this article as I have. (It’s also not surprising that I’ve decided to write about it for the blog.)
    2. More broadly, I think the “innate differences” hypothesis just isn’t helpful to teachers. My job is to help this student learn academic material—these facts, these procedures, these moral habits, these life lessons. If I clutter my brain with the belief that “girls can’t do math,” I do my students a deep disservice because I make it harder for them to learn. That is: my potentially false belief turns into a self-fulfilling prophecy. All of Carol Dweck’s research [5, 6] and all of Claude Steele’s research [7, 8] highlight this point.

If you’re especially interested in this topic, Lisa Damour—Director of Laurel School’s invaluable Center for Research on Girls—has produced many evidence-based summaries that can be helpful to your thinking.

  1. Makel, M. C., Wai, J., Peairs, K., & Putallaz, M. (2016). Sex differences in the right tail of cognitive abilities: An update and cross-cultural extension. Intelligence, 59, 8-15.
  2. Else-Quest, N. M., Hyde, J. S., & Linn, M. C. (2010). Cross-national patterns of gender differences in mathematics: a meta-analysis. Psychological Bulletin, 136(1), 103.
  3. Hyde, J. S. (2005). The gender similarities hypothesis. American Psychologist, 60(6), 581.
  4. Eliot, L. (2009). Pink brain, blue brain: How small differences grow into troublesome gaps – and what we can do about it. Boston: Houghton Mifflin Harcourt.
  5. Dweck, C. (2008). Mindset: The new psychology of success. New York: Ballantine Books.
  6. Rattan, A., Good, C., & Dweck, C. S. (2012). “It’s ok—Not everyone can be good at math”: Instructors with an entity theory comfort (and demotivate) students. Journal of Experimental Social Psychology, 48(3), 731-737.
  7. Steele, C. (2010). Whistling Vivaldi: How stereotypes affect us and what we can do. New York: W. W. Norton & Company.
  8. Murphy, M. C., Steele, C. M., & Gross, J. J. (2007). Signaling threat: How situational cues affect women in math, science, and engineering settings. Psychological Science, 18(10), 879-885.

 

 

Andrew Watson

This video, from TED-Ed, helpfully outlines many of the reasons it can be difficult to confirm research in scientific fields like neuroscience and psychology.

In brief: each research article you read takes a step in a helpful direction. (Even occasional missteps can be useful, because they provide fresh perspectives.) However: researchers are always on a journey, and almost never at a destination.

Those of us who love hearing from scientists at Learning and the Brain conferences should remember: their research is always part of a large, complex, and fascinating discussion. The last word on any subject has yet to be written…

(BTW: Don’t worry about the video’s hyperbolic title.)

LaTB Stories #1: Alex W.
Andrew Watson

My name is Alex Wonnell, aka Wonz.  I work in a middle school in Burlington, VT.  

Dr. Kou Murayama, who researches motivation and learning, presented some of the most interesting and relevant research I saw at the November 2016 conference.  

As educators, we are constantly trying to motivate students. Do rewards work?  When should I provide this carrot?  What’s best for long-term learning? Murayama’s research provides much-needed context and science in this domain.  

Here is a list of Dr. Murayama’s general findings:

  • Intrinsic Motivation (IM) leads to more long-term consolidation of learning.
  • Extrinsic Motivation (EM) leads to more short-term learning.
  • IQ is strongly related to baseline math achievement. However, growth in math achievement is unrelated to IQ.
  • Unlike IQ, IM predicts long-term learning.
  • We can increase IM by promoting a sense of competence, relatedness to teachers and peers, and choice.
  • IM may enhance people’s resilience to failure feedback.
  • Performance-based incentives do not always work.
  • Extrinsic rewards may not enhance learning for interesting work; there is an undermining effect.
  • Extrinsic rewards could facilitate performance with “boring” work.

What to do with Murayama’s findings?  

I found the findings on increasing intrinsic motivation particularly valuable. I spend most of my time in school with a high-needs, highly unmotivated student who has suffered developmental trauma. Most of the work he does relies on an extrinsic reward, like throwing a ball around. To him, all work is “boring” unless it’s a game. So, Murayama’s conclusions partly validate these methods in this context.

I balance these extrinsic rewards with several of Murayama’s intrinsic reward techniques.

I provide constant positive feedback to create feelings of competence; I encourage classmate communication to promote relatedness; and I ALWAYS give options. “You can’t make me” is a very common response I get; offering choice is a way to make him feel more autonomous while providing a chance at increasing intrinsic motivation. (While this method is not completely self-directed, it is less forced.) Part of the art of teaching is the delivery, and the creativity in designing those choices.

In a way, I look at the work I do as extrinsically motivating his intrinsic motivation.  Dr. Murayama’s research has given me greater insight into this paradox.

In sum, Murayama provides a beginning framework to understand motivation in education.  The classroom is a complex environment – one very different from a laboratory – but his research can help steer us in the right direction. No wonder that he won the 2016 “Transforming Education Through Neuroscience” Award.

[Editor’s Note: Have you got a Learning and the Brain story you’d like to share? Email me at [email protected]]

 

Murayama, K., Elliot, A. J., & Yamagata, S. (2011). Separation of performance-approach and performance-avoidance achievement goals: A broader analysis. Journal of Educational Psychology, 103(1), 238. (Article)

Murayama, K., & Kuhbandner, C. (2011). Money enhances memory consolidation–But only for boring material. Cognition, 119(1), 120-124. (Article)

Murayama, K., Matsumoto, M., Izuma, K., Sugiura, A., Ryan, R. M., Deci, E. L., & Matsumoto, K. (2013). How self-determined choice facilitates performance: A key role of the ventromedial prefrontal cortex. Cerebral Cortex, 1241-1251. (Article)

Murayama, K., Pekrun, R., Lichtenfeld, S., & Vom Hofe, R. (2013). Predicting long‐term growth in students’ mathematics achievement: The unique contributions of motivation and cognitive strategies. Child Development, 84(4), 1475-1490. (Article)

 

Andrew Watson

Ever wonder what it’s like to plan a Big Room presentation for Learning and the Brain?

In this blog post, Glenn Whitman and Ian Kelleher describe the thought process behind their adventurous presentation at this fall’s conference.

Enjoy!

Andrew Watson

Now that you’ve been to LaTB, we’d love to hear your story.

What did you learn? What did you try? How did it go?

If you’d like to share your experience, please send me an email with:

  • Who you are and what you do.
  • The research and the researcher that inspired you (and, at which conference you heard this idea).
  • What you did with this inspiration.
  • The results you saw.

Please be sure to include a specific source (a book or article) for the ideas that you tried. And, keep in mind that you’re writing for a blog audience—short and punchy entries are especially welcome.

We won’t be able to publish every entry, but…we hope to hear from you!

[email protected]


Research Morsel: Digital Media vs. Flourishing
Andrew Watson

The Findings: Researchers at Brown University [1] have found that increased time spent on “digital media” reduces the likelihood of “flourishing.” For example, students who spent 2-4 hours on digital media were 23% less likely to complete homework than those who spent 0-2 hours.

Those who spent more than 6 hours (!) on digital media were 63% less likely (!!) to finish homework than their 0-2 hour peers.

Screen time impedes other kinds of growth. Extra minutes on digital media reduce the likelihood that students will complete tasks that they have started, or remain calm under pressure.

Surprise #1: This result holds true regardless of age, gender, and socio-economic status. Stereotypes might suggest, for instance, that girls can handle digital distraction better than boys can, but…at least in this study…not so much.

Nagging Questions:

At least so far, these researchers haven’t reported the effects of meaningful subcategories. Are all kinds of digital media equally bad?

After all, other studies have shown cognitive benefits for some video games: Portal 2 [2], for example, or Starcraft [3]. One HUGE study (27,000 French middle schoolers) found that video games had basically no effect on academic performance [4].

I also wonder: as teachers increasingly assign homework that might be done on a tablet, what effect does academic digital media time have on these findings? Do our efforts to join our students’ digital lives in fact impede their learning?

Is the problem here simply distraction from schoolwork? For example: if a student spends more than 6 hours a day (!) building snow forts, what effect does that have on the likelihood she will finish her homework?

The research hasn’t been published yet, so we’re still relying on the authors’ own summaries.

 

  1. American Academy of Pediatrics. (2016, October 21). More time on digital devices means kids less likely to finish homework: Study finds dose-dependent relationship between time spent watching TV, playing video games or using a smartphone and tablet, and the chances a child will regularly finish homework. ScienceDaily. Retrieved November 12, 2016 from sciencedaily.com/releases/2016/10/161021122236.htm
  2. Shute, V. J., Ventura, M., & Ke, F. (2015). The power of play: The effects of Portal 2 and Lumosity on cognitive and noncognitive skills. Computers & Education, 80, 58-67. [article]
  3. Glass, B. D., Maddox, W. T., & Love, B. C. (2013). Real-time strategy game training: Emergence of a cognitive flexibility trait. PLoS One, 8(8), e70350. [article]
  4. Lieury, A., Lorant, S., Trosseille, B., Champault, F., & Vourc’h, R. (2014). Video games vs. reading and school/cognitive performances: a study on 27000 middle school teenagers. Educational Psychology, 1-36.

On Average, the Average is Off
Andrew Watson

Here’s a potential headline:

BOOK ON STATISTICS MAKES GRIPPING READING

Or, another:

COMMONLY USED SCHOOL METRICS MOSTLY USELESS

Or, one more:

LIFE STORY OF FUNNY MAN EXEMPLIFIES MORAL IMPERATIVE

These headlines, perhaps, leave you deeply skeptical. And yet, Todd Rose’s The End of Average fulfills them all. It may be the only book about a basic mathematical procedure that you start recommending to your colleagues.

BOOK ON STATISTICS MAKES GRIPPING READING

As a culture, we’re obsessed with averages: from IQ and GPA, to ERA and on-base percentage, to the Dow Jones and monthly unemployment.

Given the ubiquity of these calculations, it’s amazing to learn that an identifiable individual first decided to use scientific averaging procedures to draw conclusions about human social institutions. (It’s even more amazing to learn that his name was Adolphe Quetelet. This man should have invented potato chips.)

In the 200 years since Quetelet, some have seen the average as the ideal, and vilified variance from the average as a problem that schools and factories must solve.

Others—including Francis Galton, relative of Charles Darwin—have been champions of those who exceeded the average, exalting the eminent above the mediocre (and certainly above “the imbecile”).

In other words, there is a social history to our obsession with averages. It’s not a timeless norm of human societies, but a recent quirk in our social world view.

Improbably, Rose recounts this intellectual and social history with admirable clarity and welcome humor. I rarely lost my place in his argument, and regularly appreciated his wry observation and turn of phrase. When reading about the history of math, every Dante deserves so engaging a Beatrice.

COMMONLY USED SCHOOL METRICS MOSTLY USELESS

But here’s the catch in Rose’s engaging and witty story: when we use averages to describe people, the underlying mathematical assumptions go badly awry.

I’ll leave the details to Rose (who, by the way, does an impressive job making the “ergodic switch” clear to non-math readers).

The simple version is this: the rules governing those mathematical procedures assume that human beings are like certain gas molecules, identical and changeless. Of course, you don’t need much experience as a teacher to know that our students are not immutable clones.

The horrifying implication: our obsession with IQ, GPA, and countless other measurements built on averaging rests, ultimately, on a mathematical error. There’s a bug deep in the code we’ve been programming with all along.

Here’s an analogy—adapted from Rose’s introduction. The last time you rented a car, you probably spent a few minutes adjusting all sorts of settings. You moved the seat up and back, tilted the steering wheel, rejiggered the mirrors—even before you got to the radio and the AC.

Of course, car makers would be much happier if they could dispense with all these adjustments; that is, if they could build a car for the average driver. But they (or, as Rose explains, the Air Force) have found that no such driver exists. Even if you know that a driver is 5’ 10”, you still can’t make good predictions about the right height for the steering wheel, or the proper tilt for the headrest…much less the best temperature for the car.

And yet, IQ tests assume, in effect, that all students can comfortably drive the same car. If their driving is faulty, the problem resides in the driver, not in the car itself.
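
Rose’s point about the “average driver” is easy to check for yourself. Here is a minimal sketch (my own illustration, not Rose’s data or code): simulate a few thousand drivers on ten independent body dimensions and count how many land close to the average on every one of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_drivers, n_dims = 4000, 10

# Hypothetical, standardized body measurements (height, arm length, leg length, ...):
# each column has mean near 0 and standard deviation near 1.
drivers = rng.normal(size=(n_drivers, n_dims))

# Call a driver "average" on a dimension if she falls within 0.3 standard
# deviations of that dimension's mean.
close = np.abs(drivers - drivers.mean(axis=0)) < 0.3

# How many drivers are average on EVERY dimension?
n_average = close.all(axis=1).sum()
print(f"{n_average} of {n_drivers} drivers are 'average' on all {n_dims} dimensions")
# With ten independent dimensions, roughly 0.24 ** 10 of drivers qualify: essentially none.
```

Every single dimension has plenty of near-average drivers; the fiction is the one driver who is average on all of them at once.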

Other books in this field offer specific teaching strategies. Instead, Rose offers readers a new way to think about information we already have. The uses of these new thought processes will be different for each of us.

If, for example, your school uses IQ scores or GPA as a prerequisite for advanced tracks or classes, you’ll know how to think about these criteria in the future.

If, on the other hand, you’re designing a new class, Rose’s frameworks will doubtless inspire you to contemplate course requirements anew. His final three chapters, in fact, offer models for rethinking old systems to allow for complex individuality.

Alas, Rose’s examples don’t come from K-12 schools; we will have to do that work ourselves. At the same time, we can be more effective in rethinking approaches to teaching given Rose’s wisdom and guidance.

LIFE STORY OF FUNNY MAN EXEMPLIFIES MORAL IMPERATIVE

More than most books on science, Rose presents his own life story as a central example of his hypothesis.

In many ways, his biography resembles a cautionary tale about bad choices and misspent opportunities. After a series of failures in high school, he ended up on welfare—with a wife and two children to support.

And yet, Dr. Todd Rose is now the Director of the Mind, Brain, Education program at Harvard University’s School of Education—and the author of a book published by Harper Collins. He has, in brief, made it.

His remarkable story points to two key moral arguments.

First: as a society, our schools cheat many who don’t fit within “averagarian” norms. Clearly Rose has what it takes to succeed—the man is, after all, a Harvard professor. And yet, our education system didn’t facilitate his success; it routinely impeded that success.

We simply can’t feel good about social systems that block capable people.

Second: as a society, we cheat ourselves by limiting the successes of promising students. Think of all the other Todd Roses out there who were not able to overcome the hurdles our system placed before them. Think what they might have invented and accomplished and discovered—for us.

In other words: Rose’s desire to see past faulty “averagarian” thinking is not some dewy-eyed project to make do-gooders sleep cozily. Instead, it is an utterly rational appeal to our sense of justice and of logic. If we can take off our social blinders, we will benefit not only those who need non-average systems to thrive, but also ourselves, our students, our families, and our world.

 

Todd Rose (2016). The End of Average: How We Succeed in a World that Values Sameness. New York: HarperOne.

 

Full Disclosure: I took one course under Dr. Rose in the MBE program at Harvard’s School of Education.

Andrew Watson

In April of 2014, Pam Mueller and Dan Oppenheimer struck psychology gold with their cleverly titled article, “The Pen is Mightier than the Keyboard: Advantages of Longhand over Laptop Note Taking” [1].

No psychology article that I know of has gotten so much play: in newspaper articles, in teacherly blogs, in faculty room debates.

Heck, it shows up regularly on my Facebook feed, as my exasperated college professor friends vow to ban laptops from their classrooms. That prohibition will benefit students! Science says so!

Among the article’s many strengths: it confirms what we knew all along. The way we did things back in the day—that way was better. (If you’re so inclined, you might now add nostalgic words about high cotton paper positively drinking the ink from a fountain pen…)

More or Less Fidelity

Mueller and Oppenheimer picked a research question with two impressive qualities: teachers agree that it’s a really important inquiry, and it’s relatively easy to investigate.

So, the research team had two groups of students watch a lecture. One group took handwritten notes; the second group took laptop notes. On a later test, which group remembered more?

Being careful researchers, Mueller and Oppenheimer went beyond “laptop notes” and “handwritten notes” to investigate two other potentially important variables.

First: the number of words that students wrote. Did the students who wrote fewer words score higher on the ultimate test? Or, the students who wrote more words?

Let’s imagine the professor says this:

“Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty, and dedicated to the proposition that all men are created equal.”

A student could write lots of words:

  • Four score and seven years ago
  • Fathers brought forth new nation
  • Conceived in liberty
  • Dedicated to prop: all men equal

Or, relatively few words:

  • Four score and seven
  • New nation
  • Liberty
  • Equality

The second variable: fidelity to the lecturer’s words.

A student could copy down those words verbatim:

  • Four score and seven years ago
  • Fathers brought forth new nation
  • Conceived in liberty
  • Dedicated to prop: all men equal

Or, a student could deliberately put those ideas into her own words

  • 87 years ago
  • Revolutionary war created US
  • Two goals; free people, equal people
  • Might freedom conflict with equality?
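
To make these two variables concrete, here is a minimal sketch of one crude way to score a set of notes (my own illustration; Mueller and Oppenheimer used a more sophisticated text-overlap analysis): count the words, and measure what fraction of the notes’ three-word phrases appear verbatim in the lecture.

```python
def trigrams(text):
    """All consecutive three-word phrases in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def note_stats(notes, lecture):
    """Return (word count, crude verbatim-overlap score between 0 and 1)."""
    note_phrases = trigrams(notes)
    overlap = note_phrases & trigrams(lecture)
    score = len(overlap) / len(note_phrases) if note_phrases else 0.0
    return len(notes.split()), round(score, 2)

lecture = ("four score and seven years ago our fathers brought forth on this "
           "continent a new nation conceived in liberty and dedicated to the "
           "proposition that all men are created equal")
verbatim_notes = "four score and seven years ago fathers brought forth new nation"
reworded_notes = "87 years ago revolutionary war created US two goals free people equal people"

print(note_stats(verbatim_notes, lecture))  # copied phrases -> high overlap score
print(note_stats(reworded_notes, lecture))  # reworded -> overlap score near zero
```

On measures like these, laptop notes tend to land high on both numbers: more words, and more of them verbatim. Handwritten notes land lower on both.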

Looking at all these variables—laptops vs. notebooks, number of words, and fidelity of notes—Mueller and Oppenheimer reached three conclusions.

a) The Big Reveal: hand-writers remembered more than laptop note-takers. When it comes to classroom note-taking, in the authors’ words, “the pen is mightier than the keyboard.”

b) The number of words does matter. Students who wrote MORE words remembered more information than those who wrote FEWER words.

c) The fidelity of notes does matter. Students who REWORDED their notes remembered more information than those who took down the speaker’s words VERBATIM.

These conclusions align with our preconceptions. After all, a) OF COURSE handwritten notes are better. And, b+c) students who write more words, and write more of their own words, have devoted more mental energy to processing the ideas in the lecture. As we all know, more mental processing = more learning.

Laptops with Limits

These conclusions, however, create a bit of a puzzle. Handwriting takes more time and physical coordination than does typing, so laptop note-takers can write more words than hand-writers. If more words = more learning, why do the wordy laptop note-takers fall short of the relatively taciturn hand-writers on the final test?

Here we arrive at Mueller and Oppenheimer’s key finding: laptop note takers write more words, but they use this excess word capacity to write more VERBATIM words. Because hand-writers simply can’t write down everything the lecturer says, they have to REWORD the ideas in the lecture. This rewording leads to more cognitive effort, and that cognitive effort leads to more learning.

In other words, technology steers note-takers in meaningful directions. Those who use paper-and-pencil technologies write slowly, and therefore must reword their notes. Those who use laptop technologies write quickly, and therefore take down the speaker’s words verbatim. This second choice might seem wiser, but in fact reduces processing and thus undermines long-term learning.

Replacing Evil with Virtue

Being careful researchers, Mueller and Oppenheimer didn’t stop here. Instead, they asked a crucial question: can laptop note-takers learn to replace verbatim notes with reworded notes? Could they, in other words, use their capacity to write more words for good, rather than for evil?

To answer this question, they repeated their study, and they gave laptop note-takers stern instructions: “People who take class notes on laptops … tend to transcribe what they’re hearing without thinking about it much. Please try not to do this as you take notes today. Take notes in your own words and don’t just write down word-for-word what the speaker is saying.”

The result? Nothing changed. Defying these admirably clear instructions, laptop note-takers took verbatim notes, and remembered less than the hand-writers, who used their own words.

So, there you have it. Laptop note-takers can’t be retrained to reword their notes. Because hand-writers do reword their notes, the pen is mightier than the laptop…

Case Closed.

Case Reopened?

Let’s try an analogy here. When I tell my students how to subordinate a quotation in a participial phrase, they often try and fail. When they try and fail, I conclude that they can’t do it, and so I stop asking them to subordinate quotations in participial phrases. In brief, I give up. Isn’t that what you do?

Well, of course not. We’re teachers. When we show our students how to do something, they ALWAYS fail the first time. And, most likely, several more times. For this reason, we naturally build in time for students to practice. Learning any meaningful skill requires structured repetition. Obviously.

And yet, Mueller and Oppenheimer insist just the opposite. You can hear them cry: “Those laptop note-takers really should have used their own words BECAUSE WE EXPLICITLY TOLD THEM TO.”

Once. You told them to, once.

Did they get to practice? No. Did you tell them why? Not really. And: you’re surprised they didn’t change a behavior they’ve been practicing since they first started taking notes on laptops? Really?

A New Hypothesis

Let’s combine our experience as teachers with Mueller and Oppenheimer’s research.

Teacherly wisdom shows that we can train students to learn new skills: how to multiply fractions, how to use the subjunctive, how to throw a knuckleball. It seems highly likely that we can train laptop note takers to reword their notes. This training might take some time. The students’ progress probably won’t be constant. But, they can learn to do it.

After all: hand-writers have learned to reword their notes, so it’s hard to understand why laptop note-takers can’t.

If students succeed in this project, then they will end up with an awesome classroom superpower: the ability to write more words AND reworded words. With this superpower, they should remember even more than the hand-writers, who write fewer words, albeit reworded ones. This likelihood, in fact, flows directly from Mueller and Oppenheimer’s research.

Under the right circumstances, the laptop just might defeat the pen.

Mind you: the study to test this hypothesis has not—to my knowledge—been done. But the hypothesis is, I think, the best interpretation of Mueller and Oppenheimer’s research.

Some Final Thoughts

  1. I should admit my own biases here. I take laptop notes. In fact, I’m a touch typist. I’m even a touch typist on the Dvořak keyboard. Like Liam Neeson, I’ve put a lot of hours into learning a particular set of skills. I’d be sad to learn those skills were weakening, not strengthening, my learning.
  2. Wise teachers often object that laptops introduce many other sources of potential distraction: Insta-snap-face-chat-gram, or email, or—heaven help us—Netflix. This objection is obviously true; in fact, Faria Sana has done impressive research into the power of these distractions [2]. However, this objection doesn’t focus on Mueller and Oppenheimer’s underlying claim: the very technology that we use to take notes shapes their helpfulness. If laptop notes can truly boost learning more than hand-written notes, then we should help our students get those benefits without losing them to YouTube distractions.
  3. Even if Mueller and Oppenheimer’s study were done perfectly, teachers should still be cautious about adopting its conclusions. As you have read many times in this blog, we should look at bodies of research, not only at individual studies.
  4. The collaboration between psychology and education should be a conversation, not a lecture. When psychologists say “do this,” teachers should a) look hard at the research that led to that guidance, and b) use our own experience to ask hard questions. In other words: we should not take verbatim notes when psychologists speak—we should reword and reconsider as we go.
  5. We should ask those hard questions even when—perhaps especially when—psychology research seems to confirm beliefs that we have held all along. If we’ve always known that handwritten notes are best, then we should be thoughtfully skeptical of research that tells us what we want to hear. Me included.

Reference & Further Reading

  1. Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159-1168. [Paper]
  2. Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, 24-31. [Paper]

Andrew Watson

Is mindful meditation good for learning?

If you work in or near a school—or if you often read this blog [1]—you have surely heard about meditation’s potential benefits for just about everything: executive function, stress reduction, strategic backgammon decision making. (I think I made that last one up.)

So what do you make of an article with this title: “Increased False-Memory Susceptibility After Mindfulness Meditation”?

If you’re like me, such an article might give you pause. If meditation promotes “false-memory susceptibility” of any kind, it must be bad for learning. No? Time to call a halt to all those meditation programs. Am I right?

List vs. Gist

Here’s a fun game you might try at your next dinner party.

I’m going to give you a list of words, and your job is to remember them [2]. Ready? Here we go:

Table, sit, legs, seat, soft, desk, arm, sofa, wood, cushion, rest, stool 

A few minutes from now, when I ask you to write all those words down again, you’re likely to remember several of them. You’re also likely to include a word that wasn’t actually on the list: chair.

After all, while the word “chair” doesn’t appear in that list, it is implied by or associated with all the other words. Tables and desks and sofas and stools often accompany chairs; people sit on chairs; chairs have legs and arms and cushions.

In other words, when you remember that list of words, you remember not only the specific items on it, but also its gist. The gist includes the idea of “chair,” even though the list itself did not.

The Beginning of the End?

Brent M. Wilson and his colleagues wondered if meditation would increase the formation of gist memories. Their thought process went like this:

Because meditation promotes judgment-free observation of the world, people who have recently meditated might be less likely to distinguish between (that is, form judgments about the source of) internally and externally generated words. If this hypothesis is correct, meditators are less likely to see differences between (external) list memories and (internal) gist memories. They are therefore likelier to include gist words when they join us for our dinner party game.

To test this idea, Brent Wilson invited 140 college undergraduates to dinner. (Ok, no. The students did this exercise in a psychology lab. You have to admit, however, that my version sounds more fun.)

For fifteen minutes, half of the participants were invited to “focus attention on their breathing without judgment”: that is, they were guided through meditation. The other half spent fifteen minutes in a mind-wandering exercise: a common control task in studies of mindful meditation [3].

Sure enough, when Wilson tested the post-meditation students, they were likelier to include gist words than students in the control group. Seemingly, meditation promotes the formation of false memories.

To make doubly sure, Wilson tried another research paradigm as well. Students saw 100 words on a computer screen; each word was half of a common pair (shoe/foot, for example, or hot/cold). They were then shown another 100 words—half of which were on the first list, and half of which were pairs of words from that list. Students who meditated were likelier than those in the control group to “remember” a new word as if it were an old one.

So, there you have it: meditating increases false-memory susceptibility. By definition, anything that promotes false memories harms learning. No doubt, Wilson’s study is the beginning of the end of school-based meditation.

Let Me Count the Ways

And yet, perhaps you do have some doubts. So do I. And here’s why…

First, it’s important to emphasize that Wilson and his crew never draw the conclusion that I have implied. As teachers, we might read the title of the article and plausibly extrapolate that meditation must be a terrible idea. But the study’s authors never say so.

And, even if they did, we must keep in mind that this study is…one study. The effects of mindfulness have been researched in hundreds of studies. Given that volume, we should expect some studies to show negative results, and others to show neither benefit nor harm.

In short, we should be interested in bodies of research as well as individual studies.

Second, when we read the specifics of this individual study, we can see how small the effects really are. In that dinner party game, for example, 26% of the control group thought that they “remembered” gist words, whereas in the meditation group, 34% did. This increase is statistically significant, but hardly alarming.

(For you statistics junkies, the Cohen’s d values are 0.38 and 0.28 in the two studies I described. Again: not nothing, but not much of something.)
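
For readers who want the formula behind those numbers: Cohen’s d is the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch, using made-up numbers rather than Wilson’s data:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference: (mean1 - mean2) / pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: a 5-point gap between group means, on a scale whose
# standard deviation is 15, works out to d of about 0.33.
print(round(cohens_d(105, 15, 50, 100, 15, 50), 2))  # 0.33
```

By Cohen’s own rough benchmarks, d near 0.2 counts as small and 0.5 as medium; 0.28 and 0.38 sit between the two.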

Third: say it with me now—context always matters.

In some classes, a gist memory might be a bad thing. For example, a colleague of mine has her students learn a song to help them memorize all English prepositions. In this case, she doesn’t want her students to add to that list by forming a gist memory. Instead, she wants them to remember all the words in the song, and only the words in the song.

Specifically: “although” might feel like a preposition, and a student’s gist memory might try to incorporate it into that list. But “although” isn’t a preposition; it’s a conjunction. For this reason, Wilson’s research suggests that my colleague might not have her students meditate just before they learn the song. In this case, gist memory detracts from learning.

In other classes, however, gist memory might be my goal. When I teach Macbeth, for example, I want my students to recognize how Shakespeare constantly pits forces of order against forces of chaos. Every page of text includes multiple instances.

For instance: Lady Macbeth is extravagantly polite to King Duncan when he arrives in her castle. And yet, her display of social order masks her determination to commit regicide—the ultimate form of social disorder.

While I certainly want my students to remember specifics from the text, I also want them to feel the bigger picture, to identify both trees and forest. In other words, the event that Wilson calls “false memory” a teacher might call “learning.” Wilson’s research, thus, suggests that I might want my students to meditate before Macbeth class.

Context always matters.

Or, to paraphrase my wise blogging colleague Rina Deshpande, “our role as educators is not to dismiss or adopt a practice right away, but to consume with care” [4].

Balancing Curiosity with Skepticism

I’ve explored this study in some detail because it points to some helpfully contradictory lessons:

A. Although mindful meditation has gotten a lot of recent buzz, teachers should pause before we make it a part of our practice. All classroom techniques have both benefits and perils, and we should seek out information on both. In this case, for example, meditation might lead to a particular sort of false memory.

B. Terminology from psychology and neuroscience—terminology such as “false memories”—might be unhelpful, even misleading. In some cases—lists of prepositions—we don’t want students to create gist memories; in other cases—themes of literary works—we do. But alarming phrases like “false memories” shouldn’t distract us from thinking through those possibilities.

In other words: “false memory” sounds like a bad result, but once we realize that “gist memories” are a potentially useful kind of “false memory,” the phrase isn’t so scary any more.

C. For this reason, we must always look at the specific actions performed by specific study participants. If an article’s title claims that “high ambient temperature reduces learning,” you might find that interesting; your classroom often seems unreasonably warm, and your students unreasonably sluggish. However, if you read the study’s particulars, you might find that mice learn a water maze faster in cold water than in warm water. Because your students aren’t mice, aren’t learning mazes, and—I’m assuming—aren’t up to their necks in water, this study may not really apply to you. Perhaps you’ll find more relevant research elsewhere [5]…

Once More, With Feeling

So, to return to my initial question: Is mindful meditation good for learning?

My answer is: that’s too big a question to answer sensibly. Reading studies (like Wilson’s), we can balance specific potential perils of meditation against the specific potential benefits that Rina has wisely summarized.

References & Further Reading

  1. Deshpande, R. What we’re getting right—and wrong—about mindfulness research. [Blog]
  2. Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803-814. [Paper]
  3. Wilson, B. M., Mickes, L., Stolarz-Fantino, S., Evrard, M., & Fantino, E. (2015). Increased false-memory susceptibility after mindfulness meditation. Psychological Science, 26(10), 1567-1573. [Paper]
  4. Deshpande, R. What we’re getting right—and wrong—about mindfulness research. [Blog]
  5. I made this study up too. Just for fun, here’s an article on the complex relationship between room temperature and working memory: Sellaro, R., Hommel, B., Manai, M., & Colzato, L. S. (2015). Preferred, but not objective temperature predicts working memory depletion. Psychological Research, 79(2), 282-288. [Paper]

Andrew Watson

On some days, I just want my students to pay attention. Really, is this too much to ask?

“Attention” can be difficult indeed, and for multiple reasons: in the field of psychology, “attention” includes several subcategories.

First of all: are my students awake enough to pay attention? Or, perhaps Red Bull overload has made them too awake?

In either case, I can help students pay attention by moderating their levels of alertness. (“Alertness” is one subcategory of “attention.”)

If they are already appropriately alert, then perhaps the world around them has too many distractions: the smell of formaldehyde in the Biology classroom, or the sound of the squeaky door outside my English classroom, or the symphonic melodies of text message DINGS resounding down the corridors. If my students orient to these distractions, they can’t orient to me. (“Orienting” is another subcategory of “attention.”)

In these cases, I can make attention likelier by reducing these disorienting stimuli: I’ll lemon pledge the lab, oil those hinges, and persuade teens that texts aren’t that important. (How hard can this be?)

But perhaps I need to look in the mirror. Perhaps the problem is that my own teaching isn’t quite zippy enough. All those other stimuli might be disorienting because, frankly, my own work doesn’t pull students in. What, then, can I do?

Beyond the Basics

This question is, I suppose, among the most basic a teacher can ask: how do I make my teaching interesting enough for my students to notice? In a perfect world, intrinsic motivation would keep them panting for more knowledge, but few schools in that perfect world are hiring.

Here’s one excellent source of ideas: in Teach Like a Champion [1], Doug Lemov offers dozens of practical strategies to improve a teacher’s craft. From lesson plans to behavioral expectations, Lemov has advice drawn from years of observing highly effective teachers.

Throughout TLAC, Lemov offers strategies to make classroom content the center of attention. For example, he argues (persuasively) for the advantages of cold calling—as long as the technique is used correctly—and offers multiple ways to extend the wait time between questions and answers.

These techniques, and others he outlines, go well beyond the basics in helping teachers make classroom content our students’ focus.

WAAAAY Beyond the Basics

Lemov’s answers, although interesting and helpful, don’t draw on research from neuroscience and psychology—certainly not with the emphasis that Learning and the Brain readers expect. What brain research, then, can most helpfully answer this question?

For me, one research finding stands out for its surprising usefulness. Karl Szpunar’s lab looked at the effect of quizzes on attention, and his results certainly upended my predictions.

Here’s the setup [2]:

Szpunar had two groups of students watch an online lecture on statistics. (As many graduate students know, it can be a real challenge to make Stats an interesting topic, so this video serves as a useful test case of our teaching problem.) For both groups, the lecture video was divided into four segments. One group took a brief break between those segments, while the second group took a short-answer quiz on the factual information they had just learned.

While students were watching these videos, Szpunar’s team interrupted them occasionally to ask if they were paying attention, or if their minds were wandering. In other words, were they orienting to the lecture, or were they disoriented?

And, after the video was over, they gathered two more kinds of data. First, they had students take a test on the lecture, to see who had absorbed more of the material. And second, they asked students to rate the experience in a number of ways. (We’ll get back to these ratings in just a moment…)

Good, Better, Worst

When looking over the data, Szpunar’s team first wanted to know what effect the short quizzes had on mind wandering; that is, did the students who took those quizzes focus on the lecture more or less than the students who simply took a break between the video segments?

Answer: the quizzes cut the mind-wandering in half. Students who took a break between video segments said they were mind-wandering 39% of the time, whereas students who took quizzes said so 19% of the time.

It’s not surprising, perhaps, that if I know I’m about to be quizzed on something, I’m much likelier to attend to it.

Did those quizzes affect how much students ultimately learned? Szpunar’s data show that quizzed students did much (MUCH) better on the final test as well. Group 1 students—who took the break between video segments—scored, on average, 59% on that test; Group 2 students—who took short quizzes—scored an 84%.

Because they were taking quizzes, they were focusing more on the lecture; because they were focusing more, they got a B on the final test, not an F.

So, the good news is that, at least in certain contexts, short quizzes make it likelier that my students will focus on the content that we’re covering (and, perhaps, less likely that they’ll focus on that text message). And the better news is, they learn more when they do so.

But the classroom implication of this research could be alarming indeed: do we really want to be adding more testing pressure to the classroom? Do we—in this age obsessed with high stakes testing—want to have still more tests?

Anxiety is Underrated

I promised we’d get back to ratings. You remember that Szpunar had students rate their experience: in particular, he had them rate their anxiety levels. And the students in the Quiz Group were less anxious…not more anxious, LESS anxious…than those in the Break Group. (Their average anxiety rating was a 2, compared to a 3.75 for those who didn’t take quizzes.)

We associate tests and stress so readily that these results seem baffling. How can it be that quizzes reduced stress? Two answers stand out.

First: the quizzes helped the students monitor their own progress. Every fifteen minutes or so, they got feedback about their own understanding. If they knew the answer to a question, then they could be confident that they were in fact learning the material. And, if they didn’t know the answer, they could look to pick up that information in later segments of the lecture. Quizzes provided feedback that boosted confidence.

Second: the quizzes themselves were low stakes and formative. The quizzes weren’t graded, or fussed over, or factored into class averages. The students simply answered a few questions, and then kept going with the lecture. The tone surrounding the quizzes shaped the students’ experience of them.

Szpunar’s research, although surprising, aligns with other studies on the effect of frequent assessment. When Frank Leeming used daily tests instead of one term-end exam in his college Psychology class, his students were skeptical [3]. (You’d be skeptical too if you had to prepare for a test each class.) By the end of the term, however, he found that they learned more than students had in previous years, that they preferred daily tests to final exams, and that they recommended he continue the test-a-day plan in future years.

Lab to Classroom

In my experience, frequent low-stakes quizzing creates a virtuous cycle. The feedback that quizzes provide—here’s what I do know, here’s what I don’t—gives students confidence in the work that they’re doing effectively; it also helps them focus specifically on the problems they identify. Their confidence and focus, in turn, motivate them to work more effectively. And they see the results of this redoubled effort when their quiz grades improve.

In other words, frequent, low-stakes quizzes help create the intrinsic motivation we typically expect to find only in that perfect world school—the one that isn’t hiring.

Two final caveats.

First, the tone of these formative quizzes really does matter. We can tell our students why we’re using them, and even get feedback from them on their usefulness. If they feel burdensome and alarming, then they might cause more harm than good.

Second, note that both of the studies quoted here focus on college students: students who have seen enough academic success to get into college, and whose self-regulatory skills have allowed them to do so. As is always true with this kind of research, teachers must translate the ideas into our own contexts: the school where we teach, the students and the material we teach, and our own personalities.

But for now: put away the Red Bull, shelve the Lemon Pledge, and start thinking of fun quizzes to elevate your students’ attention.

References & Further Reading

  1. Lemov, D. (2015). Teach Like a Champion 2.0. San Francisco: Jossey-Bass. [Book]
  2. Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), 6313-6317. [Paper]
  3. Leeming, F. C. (2002). The exam-a-day procedure improves performance in psychology classes. Teaching of Psychology, 29(3), 210-212. [Paper]