Andrew Watson

Here’s a hypothetical situation:

Let’s say that psychology researchers clearly demonstrate that retrieval practice helps students form long-term memories better than rereading the textbook does.

However, despite this clear evidence, these researchers nonetheless emphatically recommend that students avoid retrieval practice and instead reread the textbook. These researchers have two justifications for their perverse recommendation:

First: students aren’t currently doing retrieval practice, and

Second: they can’t possibly learn how to do so.

Because we are teachers, we are likely to respond this way: “Wait a minute! Students learn how to do new things all the time. If retrieval practice is better, we should teach them how to do it, and then they’ll learn more. This solution is perfectly obvious.”

Of course it is. It’s PERFECTLY OBVIOUS.

Believe It Or Not…

This hypothetical situation is, in fact, all too real.

In 2014, Pam Mueller and Dan Oppenheimer did a blockbuster study comparing the learning advantages of handwritten notes to laptop notes.

Their data clearly suggest that laptop notes ought to be superior to handwritten notes as long as students learn to take notes the correct way.

(The correct way is: students should reword the professor’s lecture, rather than simply copy the words down verbatim.)

However–amazingly–the study concludes:

First: students aren’t currently rewording their professor’s lecture, and

Second: they can’t possibly learn how to do so.

Because of these two beliefs, Mueller and Oppenheimer argue that–in the witty title of their article–“The Pen Is Mightier Than the Keyboard.”

But, as we’ve seen in the hypothetical above, this conclusion is PERFECTLY OBVIOUSLY incorrect.

Students can learn how to do new things. They do so all the time. Learning to do new things is the point of school.

If students can learn to reword the professor’s lecture when taking notes on a laptop, then Mueller and Oppenheimer’s own data suggest that they’ll learn more. And yes, I do mean “learn more than people who take handwritten notes.”

(Why? Because laptop note-takers can write more words than handwriters, and in M&O’s research, more words lead to more learning.)

And yet, despite the self-evident logic of this argument, the belief that handwritten notes are superior to laptop notes has won the day.

That argument is commonplace in the field of psychology. (Here’s a recent example.)

Even the New York Times has embraced it.

The Fine Print

I do need to be clear about the limits of my argument:

First: I do NOT argue that a study has been done supporting my specific hypothesis. That is: as far as I know, no one has trained students to take reworded laptop notes, and found a learning benefit over reworded handwritten notes. That conclusion is the logical hypothesis based on Mueller and Oppenheimer’s research, but we have no explicit research support yet.

Second: I do NOT discount the importance of internet distractions. Of course students using laptops might be easily distracted by Twinsta-face-gram-book. (Like everyone else, I cite Faria Sana’s research to emphasize this point.)

However, that’s not the argument that Mueller and Oppenheimer are making. Their research isn’t about internet distractions; it’s about the importance of reworded notes vs. verbatim notes.

Third: I often hear the argument that the physical act of writing helps encode learning more richly than the physical act of typing. When I ask for research supporting that contention, people send me articles about 1st and 2nd graders learning to write.

It is, I suppose, possible that this research about 1st graders applies to college students taking notes. But, that’s a very substantial extrapolation–much grander than my own modest extrapolation of Mueller and Oppenheimer’s research.

And, again, it’s NOT the argument that M&O are making.

Before I believe that the kinesthetics of handwriting make an essential difference to learning, I want to see a study showing that the physical act of writing helps high school and college students who take handwritten notes learn more. Absent that research, this argument is even more hypothetical than my own.

Hopeful Conclusion

The field of Mind, Brain, & Education promises that the whole will be greater than the sum of the parts.

That is: if psychologists and neuroscientists and teachers work together, we can all help each other understand how to do our work better.

Frequently, advice from the world of psychology gives teachers wise guidance. (For example–retrieval practice.)

In this case, we teachers can give psychology wise guidance. The founding assumption of the Mueller and Oppenheimer study–that students can’t learn to do new things–simply isn’t true. No one knows that better than teachers do.

If we can keep this essential truth at the front of psychology and neuroscience research, we can benefit the work that they do, and improve the advice that they give.

Good News about Concept Mapping
Andrew Watson

This meta-analysis, which looks at studies including almost 12,000 students, concludes that creating concept maps does indeed promote learning.

Specifically, it’s better than simply looking at concept maps, or listening to lectures, or participating in discussions, or even writing summaries.

The article summarizes several hypotheses to explain the benefits of concept mapping: it reduces working memory load by using both visual and verbal channels, it requires greater cognitive elaboration, and so forth.

So, let’s hear it: how do you get your students to map concepts? What successes have you had? Let me know in the comments…

(h/t IQ’s Corner)

The Benefits of Forgetting
Andrew Watson

As teachers, we earnestly want our students to REMEMBER what they learned; their habit of FORGETTING leaves us glum and frustrated.

(In truth, our own forgetting often leaves us glum and frustrated. If you could tell me where I put my to-do list, I’d be grateful.)

In this article at Neuron, authors Blake Richards and Paul Frankland argue that our teacherly priorities don’t quite align with our neurobiology.

In their account, we remember information not simply to have that information, but in order to make good decisions.

In some cases, of course, having more information benefits our decisions, and so our brains are designed to recall that information.

In other cases, however, some kinds of information might well interfere with good decision making.

Specifically, if we forget correctly, we are a) less likely to make decisions based on outdated information, and b) better able to form useful generalizations.

In other words: forgetting is a feature, not a bug.

 

Neuroplasticity in Rural India
Andrew Watson

You hear so much about “neuroplasticity” at Learning and the Brain conferences that you already know its meaning: brains have the ability to change.

In fact, you hear about neuroplasticity so often that you might start to lose interest. You say to yourself: “Brains can change: blah, blah, blah. Tell me something I don’t already know.”

And then you read this study about adult women in rural India. They had never learned to read; heck, they had never even been to school.

And, sure enough, when they were taught to read, their brains started changing. After only six months, their brains looked measurably different–all because they had started to read.

On the one hand, this result is perfectly straightforward: if their brains hadn’t changed, how would they have learned anything? And yet, unlike most “doing X causes your brain to change!” stories, this one struck me as quite poignant.

Consider this your feel-good-about-neuroscience story of the day.

Correlation Isn’t Causation, Is It?
Andrew Watson

The ever provocative Freddie deBoer explores the relationship between correlation and causation.

You know, of course, that the one does not imply the other.

DeBoer, however, wants to push your certainty on this point.

Are there circumstances under which proving causation would be immoral, and therefore correlation is a useful placeholder? (Do you really want to run the double-blind study about smoking cigarettes?)

Are there circumstances under which the causal chain is wildly complicated, and so correlation is an important clue?

In other words: while correlation doesn’t prove causation, common sense tells us that it’s an interesting starting point. And: we often face circumstances where causal proof is hard to come by, and so correlation gets our attention as a useful indicator of potential causation.

As long as we’re careful about these subtleties, we can allow ourselves to notice correlation, and speculate (humbly) about causation.

Here’s how deBoer concludes his article:

What we need, I think, is to contribute to a communal understanding of research methods and statistics, including healthy skepticism.  […] Reasonable skepticism, not unthinking rejection; a critical utilization, not a thoughtless embrace.

That’s a hard logical place to find; here’s hoping we can find it together.

____________________

Update: I wrote the piece above on 11/8. Today (11/16), Greg Ashman posted a thoughtful piece making very similar arguments. I wonder what coincidence implies about causation…

Andrew Watson

In the excitement of this weekend’s Learning and the Brain conference, I overlooked my own one-year anniversary as editor of this blog.

I’ve enjoyed the chance to think aloud with you about teaching, psychology, neuroscience, research–and all the odd topics that come across my desk. (Squirrels, anyone?)

I’ve particularly enjoyed the chance to interact with so many of you. Please keep your emails, questions, suggestions, criticisms, and experiences coming: [email protected].

Here’s to another year!

Cheers,

Andrew

Finding Meaning in Visuals
Andrew Watson

When you open your eyes, where do they focus?

Presumably, your eyes automatically turn to the part of the visual field that stands out the most: the bright red door, the tower jutting up from the cliff, the sharp angle against all the curves.

However, it’s also possible that your eyes naturally turn to the part of the visual field that means the most: the subtle-but-essential clue, the small-but-important face, the mundane-but-crucial key that unlocks the mystery.

In the first hypothesis, our visual systems immediately process information without taking meaning into account; in the second, those systems take enough time to include meaning in their guidance.

John Henderson’s team developed quite an intricate system for testing these possibilities. (You can read the full description here.)

The short version is: they used images where the part that stood out was distinct from the part that meant the most. And, they used an eye-tracking gizmo to see where people looked first.

The answer: eyes focus first on meaning.

Even at the most basic level of processing, our brains prioritize meaningful information over flashy information.

What Henderson’s Research Means for Teachers

This study reminds me of Daniel Willingham’s response to learning styles theory.

In Why Don’t Students Like School?, Willingham argues that–for example–visual processing differences don’t make much difference for most students because–most of the time–we don’t want our students to think about what something looks like; we want them to think about what it means.

Henderson’s study suggests that, even at the moment of initial processing, our eyes prioritize meaning.

For this reason, it can be true that some people remember visuals better than others, but they still aren’t “visual learners.” All of us are “visual learners” because all of our eyes focus on meaning more than on purely visual salience.

The Dangers of Weird Neuroscience
Andrew Watson

How do psychologists know what they know about human mental processes?

Quite often, they run studies to see how people behave: what do they remember? where do they look? what do they choose? how do they describe their thoughts?

If they run those studies just right, psychologists can test a very small number of people, and reach conclusions about a very large number of people.

Perhaps they can reach conclusions about all 7,400,000,000 of us.

Unless…

What if that small group of people being studied isn’t even remotely a representative sample of the world’s population? What if almost all of them are psychology majors at American colleges and universities?

What if they are–almost exclusively–from countries that are Western, Educated, Industrialized, Rich, and Democratic?

(Notice that, cleverly, those adjectives acronym up to the word WEIRD.)

Here’s an example of the problem. Last year, I spoke about Mindset at the African Leadership Academy in South Africa: a school that draws students from all across the African continent.

And yet, I know of no research at all that studies Mindset in an African cultural context. I could share with them research from the US, and from Hong Kong, and from France, and from Taiwan. But Africa? Nothing.

How valid are Mindset conclusions for their students? We don’t really know–at least, “know” in the way that psychologists want to know things–until we do research in Africa.

(By the way: if you know of some Mindset research done in Africa, please send it my way…)

Beyond Psychology

This article over at The Atlantic does a good job of describing this problem in neuroscience.

Because the sample of the population included in neuroscience studies is so skewed, the conclusions we reach about…say…typical brain development schedules are simply wrong.

Better said: those conclusions are correct about the subset of the population being studied, but not necessarily correct for everyone else.

And, of course, most people are “everyone else.”

What Does This Problem Mean for Teachers?

Here’s my advice to teachers:

When a researcher gives you advice, find out about the participants included in their study. If those participants resemble your students, that’s good. But if not, you needn’t be too quick to adopt this researcher’s advice.

For example: if a study of college students shows that a particular kind of challenging feedback promotes a growth mindset, that information is very helpful for people who teach college.

But, if you teach 3rd grade, you might need to translate that challenging feedback to fit your students’ development. In fact, you might need to set it aside altogether.

Because participants in these studies are often so WEIRD, we should beware extrapolating results to the rest of the world’s students, including our own.

Does project-based learning work?
Andrew Watson

The answer to the titular question depends on a) your definition of “project-based learning,” and b) your methodology for measuring success.

In a just-published, comprehensive literature review, MDRC takes 84 pages to say: “we can’t really answer the question, because we don’t have consistent definitions or consistent methodologies.”

For example:

Without a common set of PBL design principles, it is difficult to use the existing body of research to draw conclusions about PBL’s effectiveness. (p. 53)

or

More rigorous evidence is needed to confirm whether PBL is a better approach to prepare students for college and career than traditional teacher-directed methods. (p. 55)

That’s a frustrating answer.

If you love and believe in PBL–and, more than most pedagogical theories, PBL really has true believers–you’d rather have a ringing endorsement.

If you’re a skeptic–check out Kirschner’s emphatic rejection here–you’d like this idea put to bed once and for all.

In this review, however, the authors make clear that until we agree what PBL really is (and, what it isn’t), we can’t coherently measure its effectiveness.

What Should Teachers Do?

In the absence of a clear research answer to this question, I have two suggestions.

First: teacher experience matters. If you and your colleagues have experience teaching both PBL and direct-instruction curricula, and you’ve had good success with one or the other, then draw on that experience. As long as you’re being honest with yourselves, and keeping good records, then your experience is–for now–at least as good as any other information we’ve got.

Second: rely on useful principles from cognitive science. Does PBL help your students pay attention? If yes, that’s good. Does PBL decrease their motivation? If yes, that’s bad.

Quite often, for instance, I find that PBL curricula overwhelm students’ working memory limits. If so, then it doesn’t matter that the curriculum ought to work, or was designed by experts, because it’s overwhelming working memory.

In other words: if the curriculum sounds upliftingly progressive, but it violates basic principles of cognition, then put the rubric down and step away from the authentic question.

Every curriculum must fit with the way that students’ brains work–including a PBL curriculum.

 

 

(In case you’re wondering, “MDRC” stands for “Manpower Demonstration Research Corporation.” It was created by the Ford Foundation; its lumpy name was simplified to MDRC in 2003. You can read its history here.)

Cell Phones and Boundaries
Andrew Watson

Regular readers of this blog—and, people who have even a glimpse of common sense—already know that mobile devices distract college students during lectures.

(If you’d like a review of research on this topic, you can check out The Distracted Mind by Gazzaley and Rosen.)

You can picture college students now: squinting at small screens, thumbing away at tiny keyboards, chuckling at oddly inappropriate moments of the lecture.

How can there possibly be any reason to research this question further?

One Reason to Research this Question Further

When scientists discover any kind of principle, they quickly start investigating the specific conditions under which it applies.

For example: we know that retrieval practice is—generally speaking— a great way to review. But, does it work equally well for 8-year-olds, 18-year-olds, and 80-year-olds?

We know that a growth mindset—generally speaking—enhances motivation. But, does it work for athletic as well as academic endeavors?

We know that—generally speaking—stress is bad for learning. But: how much stress is bad? Is there a low level of stress that might be good? Or, are there some tasks that benefit from high levels of stress during learning?

Researchers call these boundary conditions: a finding applies under these particular circumstances, but not those particular circumstances.

And so, we might want to investigate use of mobile devices during lectures even further to discover their boundary conditions. Are there ages at which cell-phone use matters less? Are there class lengths where it matters more? Are there personality types who learn more while surfing away?

A Surprising Answer

A group of researchers in South Africa wanted to find boundary conditions for the harm done by mobile devices in college lectures. In particular, they wanted to know: do cell phones lower grades equally in all disciplines?

Perhaps history students are more distractible than classics students. Or, perhaps physics concepts can be obscured more readily than biology concepts.

By surveying students and by doing a meta-analysis of other studies, le Roux & Parry found that mobile phones did less harm in Engineering classes than in Arts and Social Sciences classes.

So: cell phones distract students during lectures, but they don’t distract students equally during lectures on different topics.

Teaching Implications

I, for one, wouldn’t encourage my Engineering students to break out the iPads during class. Those devices might not be as distracting as in other classes, but they’re still distracting.

(And: they’re probably distracting to other students: see Faria Sana’s research.)

Here’s what I would do: follow le Roux’s example and look for boundary conditions.

If a speaker says “working memory limits preclude students from remembering more than 2 instructions,” ask if that rule applies to your 11th graders. Ask if it applies to written instructions as well as verbal instructions. Ask if it applies to instructions given in a foreign language class. Ask if it applies to instructions that students must follow over the next 30 minutes.

Look for boundary conditions.

(By the way, the answers to those questions are:

  1. Because WM capacity increases with age, most 11th graders can recall more than 2 instructions.
  2. Written instructions don’t take up much working memory capacity at all.
  3. Because foreign language instruction is VERY WM taxing, students might struggle to remember even a small number of instructions.
  4. The longer students have to remember instructions, the harder that effort becomes. That’s why you make shopping lists: it’s hard to remember what you want at the store when it’s 30 minutes away.)

In Sum…

Cognitive science offers teachers general principles—and those principles can be mightily helpful. (For instance: retrieval practice DOES work well for 8-, 18-, and 80-year-olds.)

But, most of those principles do have important boundaries. Your students, your class size, your discipline, your age group, your personality—all these variables just might be outside those boundaries.

And so: be curious about the general principles. And, be equally curious about their boundaries.