Andrew Watson – Page 59 – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Welcome to “the Messiness”
Andrew Watson

In a recent interview on this blog, Dr. Pooja K. Agarwal spoke about the benefits of retrieval practice: a study technique that–in her words–focuses on pulling information OUT of students’ brains rather than putting it back IN.

For example: if I begin today’s class by having my students write down three things they remember from yesterday’s lesson on the Han dynasty, that’s retrieval practice. After all, they’re going back into their memories and drawing OUT facts and ideas we discussed.

If, however, I begin by briefly summarizing yesterday’s class, well, then I’m trying to put information back IN. That’s not retrieval practice.

Dr. Agarwal summarizes the benefits of retrieval practice thus: “it works for all students in all subjects, all the time.”

Sounds tempting, no?

Pushing Boundaries

In one part of our conversation, Dr. Agarwal notes that she likes doing research in actual classrooms with actual students–rather than in psychology labs in highly controlled conditions–because “I really like the messiness of doing scientific research in classrooms. The fire alarms, and school assemblies, and kids who are out sick, I really enjoy it because it pushes boundaries.”

In the spirit of messiness, here’s a recent post from the Learning Scientists about using retrieval practice in elementary school to learn vocabulary.

The good news about this study:

First: it took place in a real school with real students, not in a psychology lab. That means its results are likelier to be meaningful to teachers.

Second: the participants were 9-year-olds, not college students. So, we can be more confident that retrieval practice works with…say…4th graders.

Third: the study took place in the Netherlands, so we’ve got reason to believe that the benefits go beyond a North American cultural context.

So far, so good.

Let the Messiness Begin

At the same time, this particular study revealed a few muddles as well.

Muddle #1: the size of the benefit was relatively small. Retrieval practice produced more learning than simple restudy, and more than “elaborative restudy,” but, statistically speaking, that difference was harder to detect than it typically is in a psychology lab.

Muddle #2: Dr. Agarwal’s research shows that fill-in-the-blank retrieval practice and multiple-choice retrieval practice are equally effective. This study, however, contradicts that finding; multiple-choice retrieval didn’t produce more learning than pure restudy.

Muddle #3: believe it or not, muddle #3 contradicts muddle #2. Because of the study design, the authors acknowledge that their own findings about multiple-choice tests aren’t fully persuasive. For example: because the average score on the multiple-choice tests was above 90%, there wasn’t enough variation among the students’ scores to calculate meaningful effects–a classic ceiling effect.
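To see why those sky-high scores muddy the statistics, consider a toy example. (The numbers below are invented for illustration; they are not data from the study.) When most students score near the top of the scale, the spread between them collapses, and any real difference between teaching conditions becomes much harder to detect.

```python
import statistics

# Hypothetical test scores, purely illustrative.
# Mid-range scores leave plenty of room for students to differ...
mid_range = [62, 70, 75, 81, 88, 55, 73, 79]

# ...but near-ceiling scores bunch up against the 100% maximum.
near_ceiling = [95, 100, 92, 98, 100, 96, 94, 100]

print(statistics.stdev(mid_range))     # large spread
print(statistics.stdev(near_ceiling))  # very little spread

# With so little spread, a genuine difference between two study
# conditions is easily swamped by measurement noise: the effect
# may exist, but the test has no room left to reveal it.
```

Researchers call this a ceiling effect: the measurement instrument runs out of room at the top, so it can no longer distinguish students who learned a lot from students who learned a great deal.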

What should teachers do with all this contradictory information?

My advice: Embrace the muddle.

Teachers should expect that different studies produce muddled–and occasionally contradictory–results.

No one study tells us everything we need to know about retrieval practice. Instead, we’re looking for patterns of findings.

If we do ten studies, and eight of them show that retrieval practice helps learning, that’s impressive. We don’t need to be thrown off by one study that shows no effect–or, as in this case, a relatively smaller effect than in a psych lab.

The Quiet Finding

Although the authors don’t dwell on this point, one finding jumped out at me.

In one of the restudy conditions, students were asked to “elaborate” on the meaning of the word. For example, as they tried to remember “compost pile,” they were asked to circle the words relating to a compost pile on this list: manure, plastic, delicious, orange-peels, mailbox, dead leaves.

My teacherly instincts tell me that this restudy condition ought to help students. After all, to circle the correct words, they have to think a bit harder about the meaning of the phrase “compost pile.” That additional thought strikes me as a desirable difficulty, and ought to produce more learning.

But–at least in this one study–it didn’t. Students who “elaboratively restudied” scored between the “pure restudy” group and the “retrieval practice” group–and their scores weren’t significantly different from either.

The Take-Aways…

I myself reach three conclusions based on this research:

A) Yup: retrieval practice still works, even with 4th graders, even with vocabulary learning, even in the Netherlands.

B) My instincts about elaborative restudy might be off. I should keep my eyes peeled for further research.

C) The muddle isn’t disheartening, it’s enjoyable. Jump in–the water’s warm!

 

The Benefits of Prediction; the Dangers of Vocabulary
Andrew Watson

What’s the best way to study complex material?

Working with Charles Atwood at the University of Utah, Brock Casselman tried an idea:

He had students in a general chemistry class do weekly online problems and practice tests; after completing that work, the students received detailed feedback.

In addition to this online practice, half of the students also predicted their scores before they took the tests; they then made study plans after they received the feedback.

Did this additional work help?

Indeed it did. On average, it raised grades on the final exam by 4%.

Even more impressively, those in the bottom quartile of the class raised their exam grade by 10%.

Especially for those who struggled with the material, making predictions and updating study plans boosted learning.

Reasons to Celebrate; Reasons to Pause

Of course, this research is quite helpful in giving us specific teaching advice. The more we encourage our students to stop and predict their success, and the more we prompt them to make thoughtful study plans, the more they’re likely to learn.

So far, so good.

However, I do see two reasons to add a note of caution.

First, this study was done in a difficult college class; according to this interview, only 2/3 of the students who take the class ever pass it.

A study technique that helps in such a difficult class might be beneficial to students in less rigorous classes…but, we can’t be sure based on this research.

Second, I do worry about the broad vocabulary used to describe this study technique: “metacognition.”

No doubt you’ve heard of metacognition: it means “thinking about thinking.” When I stop and ask myself, “now, why did I get that problem wrong? What patterns do I notice with other mistakes I made?” I’ve engaged in metacognition.

Here’s the potential danger. While it is true that Casselman’s particular set of metacognitive strategies helped these students, that doesn’t mean that ALL metacognitive strategies will help ALL students.

For instance, you might read that “using context clues” is a metacognitive strategy. It certainly is. And, of course, using context clues might well lead students to important discoveries.

However: that’s not the metacognitive strategy that was used in this case. So, this study doesn’t show that using context clues would help students in this chemistry class.

Or that it would help your students.

Boundaries Matter

In a recent post, I encouraged teachers to look for boundary conditions. In other words: we’re interested in researchers’ general findings, but we want to be sure that they apply specifically to our students.

To do so, check out the “participants” section of the research you’re reading. If the students who participated in the research resemble your students, then you’re good to go. If not, use your own best judgment about the applicability of that research.

Equally important: be sure that the specific techniques described as “metacognition” are in fact the ones that you’re using. If not, you should look for more research to be sure you’re on the right track.

After all, my predictions about the benefits of metacognition might be correct–but if my results show that a particular metacognitive strategy didn’t work, I need to develop a new study plan.

Andrew Watson

Here’s a hypothetical situation:

Let’s say that psychology researchers clearly demonstrate that retrieval practice helps students form long-term memories better than rereading the textbook does.

However, despite this clear evidence, these researchers nonetheless emphatically recommend that students avoid retrieval practice and instead reread the textbook. These researchers have two justifications for their perverse recommendation:

First: students aren’t currently doing retrieval practice, and

Second: they can’t possibly learn how to do so.

Because we are teachers, we are likely to respond this way: “Wait a minute! Students learn how to do new things all the time. If retrieval practice is better, we should teach them how to do it, and then they’ll learn more. This solution is perfectly obvious.”

Of course it is. It’s PERFECTLY OBVIOUS.

Believe It Or Not…

This hypothetical situation is, in fact, all too real.

In 2014, Pam Mueller and Dan Oppenheimer did a blockbuster study comparing the learning advantages of handwritten notes to laptop notes.

Their data clearly suggest that laptop notes ought to be superior to handwritten notes as long as students learn to take notes the correct way.

(The correct way is: students should reword the professor’s lecture, rather than simply copy the words down verbatim.)

However–amazingly–the study concludes:

First: students aren’t currently rewording their professor’s lecture, and

Second: they can’t possibly learn how to do so.

Because of these two beliefs, Mueller and Oppenheimer argue that–in the witty title of their article–“The Pen is Mightier than the Laptop.”

But, as we’ve seen in the hypothetical above, this conclusion is PERFECTLY OBVIOUSLY incorrect.

Students can learn how to do new things. They do so all the time. Learning to do new things is the point of school.

If students can learn to reword the professor’s lecture when taking notes on a laptop, then Mueller and Oppenheimer’s own data suggest that they’ll learn more. And yes, I do mean “learn more than people who take handwritten notes.”

(Why? Because laptop note-takers can write more words than handwriters, and in M&O’s research, more words lead to more learning.)

And yet, despite the self-evident logic of this argument, the belief that handwritten notes are superior to laptop notes has won the day.

That argument is commonplace in the field of psychology. (Here’s a recent example.)

Even the New York Times has embraced it.

The Fine Print

I do need to be clear about the limits of my argument:

First: I do NOT argue that a study has been done supporting my specific hypothesis. That is: as far as I know, no one has trained students to take reworded laptop notes, and found a learning benefit over reworded handwritten notes. That conclusion is the logical hypothesis based on Mueller and Oppenheimer’s research, but we have no explicit research support yet.

Second: I do NOT discount the importance of internet distractions. Of course students using laptops might be easily distracted by Twinsta-face-gram-book. (Like everyone else, I cite Faria Sana’s research to emphasize this point.)

However, that’s not the argument that Mueller and Oppenheimer are making. Their research isn’t about internet distractions; it’s about the importance of reworded notes vs. verbatim notes.

Third: I often hear the argument that the physical act of writing helps encode learning more richly than the physical act of typing. When I ask for research supporting that contention, people send me articles about 1st and 2nd graders learning to write.

It is, I suppose, possible that this research about 1st graders applies to college students taking notes. But, that’s a very substantial extrapolation–much grander than my own modest extrapolation of Mueller and Oppenheimer’s research.

And, again, it’s NOT the argument that M&O are making.

To believe that the kinesthetics of handwriting make an essential difference to learning, I want to find a study showing that the physical act of writing helps high school/college students who are taking handwritten notes learn more. Absent that research, this argument is even more hypothetical than my own.

Hopeful Conclusion

The field of Mind, Brain, & Education promises that the whole will be greater than the sum of the parts.

That is: if psychologists and neuroscientists and teachers work together, we can all help each other understand how to do our work better.

Frequently, advice from the world of psychology gives teachers wise guidance. (For example–retrieval practice.)

In this case, we teachers can give psychology wise guidance. The founding assumption of the Mueller and Oppenheimer study–that students can’t learn to do new things–simply isn’t true. No one knows that better than teachers do.

If we can keep this essential truth at the front of psychology and neuroscience research, we can benefit the work that they do, and improve the advice that they give.

Good News about Concept Mapping
Andrew Watson

This meta-analysis, which looks at studies including almost 12,000 students, concludes that creating concept maps does indeed promote learning.

Specifically, it’s better than simply looking at concept maps, or listening to lectures, or participating in discussions, or even writing summaries.

The article summarizes several hypotheses to explain the benefits of concept mapping: it reduces working memory load by using both visual and verbal channels, it requires greater cognitive elaboration, and so forth.

So, let’s hear it: how do you get your students to map concepts? What successes have you had? Let me know in the comments…

(h/t IQ’s Corner)

The Benefits of Forgetting
Andrew Watson

As teachers, we earnestly want our students to REMEMBER what they learned; their habit of FORGETTING leaves us glum and frustrated.

(In truth, our own forgetting often leaves us glum and frustrated. If you could tell me where I put my to-do list, I’d be grateful.)

In this article at Neuron, authors Blake Richards and Paul Frankland argue that our teacherly priorities don’t quite align with our neurobiology.

In their account, we remember information not simply to have that information, but in order to make good decisions.

In some cases, of course, having more information benefits our decisions, and so our brains are designed to recall that information.

In other cases, however, some kinds of information might well interfere with good decision making.

Specifically, if we forget the right things, we are a) less likely to make decisions based on outdated information, and b) better able to form useful generalizations.

In other words: forgetting is a feature, not a bug.

 

Neuroplasticity in Rural India
Andrew Watson

You hear so much about “neuroplasticity” at Learning and the Brain conferences that you already know its meaning: brains have the ability to change.

In fact, you hear about neuroplasticity so often that you might start to lose interest. You say to yourself: “Brains can change: blah, blah, blah. Tell me something I don’t already know.”

And then you read this study about adult women in rural India. They had never learned to read; heck, they had never even been to school.

And, sure enough, when they were taught to read, their brains started changing. After only six months, their brains looked measurably different–all because they had started to read.

On the one hand, this result is perfectly straightforward: if their brains hadn’t changed, how would they have learned anything? And yet, unlike most “doing X causes your brain to change!” stories, this one struck me as quite poignant.

Consider this your feel-good-about-neuroscience story of the day.

Correlation Isn’t Causation, Is It?
Andrew Watson

The ever-provocative Freddie deBoer explores the relationship between correlation and causation.

You know, of course, that the one does not imply the other.

DeBoer, however, wants to push your certainty on this point.

Are there circumstances under which proving causation would be immoral, and therefore correlation is a useful placeholder? (Do you really want to run the double-blind study about smoking cigarettes?)

Are there circumstances under which the causal chain is wildly complicated, and so correlation is an important clue?

In other words: while correlation doesn’t prove causation, common sense tells us that it’s an interesting starting point. And: we often face circumstances where causal proof is hard to come by, and so correlation gets our attention as a useful indicator of potential causation.

As long as we’re careful about these subtleties, we can allow ourselves to notice correlation, and speculate (humbly) about causation.
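That careful middle ground can be made concrete with a toy simulation. (All numbers here are invented; this is the classic textbook illustration, not deBoer’s example.) Two variables that never influence each other can still correlate strongly if a third, hidden variable drives them both–say, ice-cream sales and swimming accidents, both pushed up by hot weather.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# A hidden common cause: daily temperature (degrees F).
temperature = [random.uniform(50, 95) for _ in range(200)]

# Ice-cream sales and swimming accidents each track temperature,
# plus their own independent noise. Neither causes the other.
ice_cream = [t * 2.0 + random.gauss(0, 10) for t in temperature]
accidents = [t * 0.5 + random.gauss(0, 5) for t in temperature]

r = pearson(ice_cream, accidents)
print(round(r, 2))  # strongly positive, despite zero causal link
```

The correlation here is a genuine clue–it points at something real (the weather)–but it would be a mistake to conclude that ice cream causes drowning. Noticing the correlation, then speculating humbly about the causal structure behind it, is exactly the posture described above.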

Here’s how deBoer concludes his article:

What we need, I think, is to contribute to a communal understanding of research methods and statistics, including healthy skepticism.  […] Reasonable skepticism, not unthinking rejection; a critical utilization, not a thoughtless embrace.

That’s a hard logical place to find; here’s hoping we can find it together.

____________________

Update: I wrote the piece above on 11/8. Today (11/16), Greg Ashman posted a thoughtful piece making very similar arguments. I wonder what coincidence implies about causation…

Andrew Watson

In the excitement of this weekend’s Learning and the Brain conference, I overlooked my own one-year anniversary as editor of this blog.

I’ve enjoyed the chance to think aloud with you about teaching, psychology, neuroscience, research–and all the odd topics that come across my desk. (Squirrels, anyone?)

I’ve particularly enjoyed the chance to interact with so many of you. Please keep your emails, questions, suggestions, criticisms, and experiences coming: [email protected].

Here’s to another year!

Cheers,

Andrew

Finding Meaning in Visuals
Andrew Watson

When you open your eyes, where do they focus?

Presumably, your eyes automatically turn to the part of the visual field that stands out the most: the bright red door, the tower jutting up from the cliff, the sharp angle against all the curves.

However, it’s also possible that your eyes naturally turn to the part of the visual field that means the most: the subtle-but-essential clue, the small-but-important face, the mundane-but-crucial key that unlocks the mystery.

In the first hypothesis, our visual systems immediately process information without taking meaning into account; in the second, those systems take enough time to include meaning in their guidance.

John Henderson’s team developed quite an intricate system for testing these possibilities. (You can read the full description here.)

The short version is: they used images where the part that stood out was distinct from the part that meant the most. And, they used an eye-tracking gizmo to see where people looked first.

The answer: eyes focus first on meaning.

Even at the most basic level of processing, our brains prioritize meaningful information over flashy information.

What Henderson’s Research Means for Teachers

This study reminds me of Daniel Willingham’s response to learning styles theory.

In Why Don’t Students Like School?, Willingham argues that–for example–visual processing differences don’t matter much for most students because–most of the time–we don’t want our students to think about what something looks like; we want them to think about what it means.

Henderson’s study suggests that, even at the moment of initial processing, our eyes prioritize meaning.

For this reason, it can be true that some people remember visuals better than others, but they still aren’t “visual learners.” All of us are “visual learners” because all of our eyes focus on meaning more than on purely visual salience.

The Dangers of Weird Neuroscience
Andrew Watson

How do psychologists know what they know about human mental processes?

Quite often, they run studies to see how people behave: what do they remember? where do they look? what do they choose? how do they describe their thoughts?

If they run those studies just right, psychologists can test a very small number of people, and reach conclusions about a very large number of people.

Perhaps they can reach conclusions about all 7,400,000,000 of us.

Unless…

What if that small group of people being studied isn’t even remotely a representative sample of the world’s population? What if almost all of them are psychology majors at American colleges and universities?

What if they are–almost exclusively–from countries that are Western, Educated, Industrialized, Rich, and Democratic?

(Notice that, cleverly, those adjectives acronym up to the word WEIRD.)

Here’s an example of the problem. Last year, I spoke about Mindset at the African Leadership Academy in South Africa: a school that draws students from all across the African continent.

And yet, I know of no research at all that studies Mindset in an African cultural context. I could share with them research from the US, and from Hong Kong, and from France, and from Taiwan. But Africa? Nothing.

How valid are Mindset conclusions for their students? We don’t really know–at least, “know” in the way that psychologists want to know things–until we do research in Africa.

(By the way: if you know of some Mindset research done in Africa, please send it my way…)

Beyond Psychology

This article over at The Atlantic does a good job of describing this problem in neuroscience.

Because the sample of the population included in neuroscience studies is so skewed, the conclusions we reach about…say…typical brain development schedules are simply wrong.

Better said: those conclusions are correct about the subset of the population being studied, but not necessarily correct for everyone else.

And, of course, most people are “everyone else.”

What Does This Problem Mean for Teachers?

Here’s my advice to teachers:

When a researcher gives you advice, find out about the participants included in their study. If those participants resemble your students, that’s good. But if not, you needn’t be too quick to adopt this researcher’s advice.

For example: if a study of college students shows that a particular kind of challenging feedback promotes a growth mindset, that information is very helpful for people who teach college.

But, if you teach 3rd grade, you might need to translate that challenging feedback to fit your students’ development. In fact, you might need to set it aside altogether.

Because participants in these studies are often so WEIRD, we should beware extrapolating results to the rest of the world’s students, including our own.