
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Andrew Watson

My friend Cindy Nebel has a thoughtful post about a recent article at TES.

Here’s the backstory: a world-famous geneticist has dismissed research into Mindset as “bullshit” and “gimmicks.”

Now, reasonable people have their doubts about Mindset Theory. We’ve written about such doubts before.

But, as Nebel emphasizes in her post, wholesale rejection of the theory simply doesn’t make sense. For instance:

Disciplines Matter…

Geneticists know a lot about genetics. And, genes matter for teaching and learning.

(How much do they matter? A BIG controversy…)

But: most geneticists remember that psychology research is complicated. Knowledge and skill in one field don’t automatically translate to knowledge and skill in another.

In other words: psychologists will — most likely — have better insights into the strengths and weaknesses of psychology debates than will rocket scientists, or nuclear submariners, or even geneticists.

This point, of course, extends to other kinds of cross-disciplinary critiques. Here’s Nebel on the intersection of neuroscience and education:

A common misconception that we hear is that education and neuroscience are related disciplines and that those who study the brain must know how we learn.

While one can inform the other, I promise that training in neuroscience does NOT include an understanding of how those brain processes translate into classroom practices.

We often talk about a very necessary dialogue between educators and researchers, because very few individuals have extensive experience in both domains.

For all these reasons, neuroscientists (and psychologists) can provide teachers with useful perspectives. But, only teachers can decide what makes the most sense in the classroom.

…but Cost Doesn’t Matter

One of the stranger parts of the TES interview: Plomin’s insistence that only expensive changes benefit education.

“To think there is some simple cheap little thing that is going to make everybody fine, it is crazy,” he said in exclusive remarks published today.

“Good interventions are the most expensive and intensive.”

Pish posh.

If you’ve spent any time at a Learning and the Brain conference, you know that teachers can make all sorts of highly effective changes to their teaching at no cost whatsoever.

Using retrieval practice instead of simple review: no cost.

Managing students’ working memory load by…say…spreading instructions out over time: no cost.

Moderating students’ alertness levels by having them move: no cost.

Anyone who says we can’t improve teaching and learning without spending lots of money simply doesn’t understand teaching, learning, or the promise of educational psychology.

Good News! Contradictory Research on Desirable Difficulties…
Andrew Watson

As we regularly emphasize here on the blog, attempts to recall information benefit learning.

That is: students might study by reviewing material. Or, they might study with practice tests. (Or flashcards. Perhaps Quizlet.)

Researchers call this technique “retrieval practice,” and we’ve got piles of research showing its effectiveness.

Learning Untested Material?

How far do the benefits of this technique go?

For instance, let’s imagine my students read a passage on famous author friendships during the Harlem Renaissance. Then they take a test on its key names, dates, and concepts.

We know that retrieval practice helps with facts (names and dates) and concepts.

But: does retrieval practice help with the names, dates, and concepts that didn’t appear on the practice test?

For instance, my practice test on Harlem Renaissance literature might include this question: “Zora Neale Hurston befriended which famous Harlem poet?”

That practice question will (probably) help my students do well on this test question: “Langston Hughes often corresponded with which well-known Harlem Renaissance novelist?”

After all, the friendship between Hurston and Hughes was retrieved on the practice test, and therefore specifically recalled.

But: will that question help students remember that…say…Carl Van Vechten took famous photos of poet and novelist Countee Cullen?

After all, that relationship was in the unit, but NOT specifically tested.

So, what are the limits of retrieval practice benefits?

Everything You Wanted to Know about Acid Reflux

Kevin Eva and Co. have explored this question, and found encouraging results.

In his study, Eva asked pharmacology students to study a PowerPoint deck about acid reflux and peptic ulcers: just the sort of information pharmacists need to know. In fact, this PPT deck would be taught later in the course – so students were getting a useful head start.

Half of them spent 30 minutes reviewing the deck. The other half spent 20 minutes reviewing, and 10 minutes taking a practice test.

Who remembered the information better 2 weeks later?

Sure enough: the students who took the practice test. And, crucially, they remembered more information on which they had been tested AND other information from the PPT that hadn’t been specifically tested.

That is, they would be likelier to remember information about Zora Neale Hurston and Langston Hughes (the tested info) AND about Van Vechten and Cullen (the UNtested info).

However, Eva’s tested students did NOT remember more general pharmacology info than their untested peers. In other words: retrieval practice helped with locally related information, but not with the entire discipline.

But Wait! There’s More! (Or, Less…)

About two months ago, I posted on the same topic – looking at a study by Cindy Nebel (née Wooldridge) and Co.

You may recall that they reached the opposite finding. That is, in their research paradigm, retrieval practice helped students remember the information they retrieved, and only the information they retrieved.

Whereas retrieval practice helped students on later tests if the questions were basically the same, it didn’t have that effect if the questions were merely “topically related.”

For instance, a biology quiz question about “the fossil record” didn’t help students learn about “genetic differences,” even though both questions focus on the topic of “evolution.”

What Went Wrong?

If two psychology studies looked at (basically) the same question and got (basically) opposite answers, what went wrong?

Here’s a potentially surprising answer: nothing.

In science research, we often find contradictory results when we first start looking at questions.

We make progress in this field NOT by doing one study and concluding we know the answer, but by doing multiple (slightly different) studies and seeing what patterns emerge.

Only after we’ve got many data points can we draw strong conclusions.

In other words: the fact that we’ve got conflicting evidence isn’t bad news. It shows that the system is working as it should.

OK, but What Should Teachers Do?

Until we get those many data points, how should teachers use retrieval practice most effectively?

As is so often the case, we have to adapt research to our own teaching context.

If we want to ensure that our students learn a particular fact or concept or process, we should be sure to include it directly in our retrieval practice. In this case, we do have lots (and LOTS) of data points showing that this approach works. We can use this technique with great confidence.

Depending on how adventurous we feel, we might also use retrieval practice to enhance learning of topically related material. We’re on thinner ice here, so we shouldn’t do it with core content.

But, our own experiments might lead us to useful conclusions. Perhaps we’ll find that…

Older students can use retrieval practice this way better than younger students, or

The technique works for factual learning better than procedural learning, or

Math yes, history no.

In brief: we can both follow retrieval practice research and contribute to it.

Is Your Classroom Worth More Than $10,000?
Andrew Watson

Here’s a remarkable story about potentially falsified research data.

The short version: researchers James Heathers and Nick Brown thought that Nicolas Guéguen’s research findings were both too sexy and too tidy.

Too sexy: Guéguen’s findings regularly made great headlines. For instance, his research claims that men volunteer to help women who wear their hair loose far more often than women who wear it in a ponytail or a bun.

Talk about your gender-roles clickbait!

Too tidy: As Heathers and Brown considered Guéguen’s math, they realized that his numbers were…too round. When Guéguen calculated averages, he had to divide by 30 — because his study groups had 30 people in them.

But, his results often ended in improbably round figures: 1.60, 1.80, 2.80.

Mathematically speaking, that’s possible. But, when you’re dividing by 30, it’s wildly unlikely.
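To make the arithmetic concrete, here’s a back-of-the-envelope sketch (my own illustration, not Heathers and Brown’s actual analysis). An average over 30 whole-number responses can only take values of total/30, and that fraction comes out to a tidy two-decimal figure, like 1.60 or 2.80, only when the total is divisible by 3: roughly a third of the possible totals.

```python
from fractions import Fraction

# A mean over 30 participants can only take values total / 30 for some integer total.
# This sketch checks which totals (here, 30 through 90) yield a "tidy" mean that
# terminates within two decimal places, like the 1.60, 1.80, 2.80 reported above.
def looks_tidy(total, group_size=30):
    mean = Fraction(total, group_size)
    return (mean * 100).denominator == 1   # True when the mean terminates within 2 decimals

tidy_totals = [t for t in range(30, 91) if looks_tidy(t)]
print(tidy_totals)                  # only multiples of 3: 30, 33, 36, ..., 90
print(len(tidy_totals) / 61)        # about one-third of the possible totals
```

So any single tidy average is unremarkable; the pattern that raised eyebrows was seeing them again and again.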

Heathers and Brown have now spent quite a long time — years, in fact — trying to get Guéguen to share his data and/or answer their questions.

As far as I know, they haven’t gotten answers yet. (Here’s a somewhat more recent article.)

What Teachers Should Do

First, keep in mind that scientists are people too. While most strive honorably to follow procedures and discover meaningful findings, a few will always cheat the system and abuse our trust.

That’s true in any profession. It’s true in psychology and neuroscience research. (It might even be true of teachers.)

Second, let this knowledge energize your skepticism.

Here’s my suggestion: never make changes to your classroom work without asking hard questions.

Think about it this way: if someone wanted to sell you something for $10,000, you’d need lots of confirmation. You wouldn’t take the seller’s word that the product was worth $10k. You’d investigate on your own.

For instance, ask…

Have other researchers gotten similar findings?

How aggressively have they tried to disprove them?

Is the suggestion strongly confirming a bias that everyone already holds?

So: your classroom is worth more than $10,000. You wouldn’t buy an expensive product without asking hard questions. Don’t adopt research suggestions without asking hard questions.

In the meanwhile, I’m going to put my hair up in a bun.

Healthy Snacks After Exercise? Depends on the Timing…
Andrew Watson

If you saw Roy Baumeister at the 2015 Learning and the Brain conference in Boston, you remember his presentation on self-control.

Of course, teachers care A LOT about self-control.

We need our students to control their behavior. (“Do not use the Bunsen burner to light your backpack on fire,” my 6th grade science teacher said to me. Often.)

And, we need them to control their cognitive processes. (“When balancing chemical equations, start by identifying the elements.”)

Baumeister found, among other fascinating things, that both kinds of self-control “drain the same reservoir.”

That is: if I use up some self-control resisting the temptation to climb the jungle gym, I have less self-control left over to process the steps involved in subtracting two-digit numbers. (Baumeister’s book Willpower, written with John Tierney, explains his research in helpful detail.)

Replication Controversy

As the field of psychology wrestles with the “replication crisis,” Baumeister’s conclusions have come under question.

Some researchers haven’t gotten the same results when they run self-control experiments. Some question the research field in general. (For instance: terms like “self-control” are notoriously hard to define.)

This question matters to us. If Baumeister’s theories don’t hold water, then it’s unlikely the self-control solutions he proposes will help very much.

So, to take only the most recent example, John Medina’s Attack of the Teenage Brain devotes several chapters to helping adolescents develop executive functions — such as self-control.

If the research that Medina cites can’t be trusted…we might be back to square one.

Latest News

I’ve just found some pertinent research in an unlikely field: exercise and nutrition.

Researcher Christopher Gustafson and Co. asked visitors at a local gym to wear an accelerometer, purportedly so they could “keep track of relevant exercise data.” As a reward for participating, they were given a free snack after their workout.

In fact, the “accelerometer data” story masked the real interest of the study: participants’ snack choice.

All participants chose between a brownie and an apple. Some got the choice before they exercised; some after. Did the timing of the choice matter?

If Baumeister’s theory holds up, we would expect a difference between these two groups. Here’s why…

Self-Control, Snacks, and Exercise

Because apples are a healthier snack than brownies, we know we ought to choose them. But, for most of us, brownies taste a lot better. And so, we must use self-control to make that choice.

Likewise, we know that exercise is good for us. But, we rarely want to do it — and so that choice also takes self-control.

If I make the snack choice before exercise, my self-control reservoir remains relatively full. As a result, I’m likelier to make the “right” choice.

But, if I select my snack as I towel off after exercise, I’ve probably drained that reservoir considerably. So, I’ve got less willpower left. And I’m likelier to give in to chocolatey temptation.

Is that what Gustafson found? Indeed he did.

In fact, when the choice came after exercise, 17% fewer people chose the apple, and 6% more chose the brownie. (The rest turned down a snack altogether.)

In other words: this study supports Baumeister, and gives us increased confidence in the research suggestions that flow from it.

What are some of those suggestions? You can start with an intriguing one here.

Welcome to San Francisco
Andrew Watson

If you’re a regular blog reader, you just might be a frequent Learning and the Brain conference attendee. (I got my start in 2007, and haven’t stopped since.)

We’re gathering — starting tomorrow! — at the Fairmont Hotel in San Francisco to discuss Educating with Empathy: cultivating kindness, compassion, empathy, and good behavior.

Many of the speakers have featured recently on the blog. John Medina’s most recent book has shown up at least twice.

Rebecca Gotleib — this blog’s book reviewer — will be presenting on the power of teens’ social-emotional skills.

And I’ll be offering a pre-conference session: “Understanding Adolescence, Teaching Adolescents.” We’ll be talking about teenage cognition, emotion — and even technology use.

I’ve enjoyed getting to know many of you over my blogging years. I hope you’ll stop by and introduce yourselves! I’m easy to find: I look a lot like the guy on the right…

There’s No Polite Way to Say “I Told You So”
Andrew Watson

Back in 2014, Pam Mueller and Dan Oppenheimer made headlines with their wittily titled study “The Pen Is Mightier Than The Keyboard.”

In that study, they found that students learn more from taking handwritten notes during a lecture than from laptop notes. Their conclusions spawned a thousand gloating posts. And (I don’t doubt) a multitude of well-intentioned anti-laptop policies.

Since I first read the study, I’ve been shouting that its conclusions simply don’t hold up.

Why?

Because M&O’s conclusions hold water only if you believe students can’t learn new things.

(That’s a very strange belief for teachers to have.)

If you believe that students can learn new things, then you believe that they can learn to take laptop notes correctly.

(“Correctly” = “rewriting the lecture’s main points in your own words; don’t just transcribe verbatim”)

If they do that, then this famous study actually suggests laptop notes will enhance learning, not detract from it.

You can find a summary of my argument — and its limitations — here.

Today’s News

Scholars have recently published an attempted replication of Mueller & Oppenheimer’s study.

The results? Not much.

In the quiet language of research, they conclude:

“Based on the present outcomes and other available evidence, concluding which method [handwriting or laptops] is superior for improving the functions of note-taking seems premature.”

Not so much with the mighty pen.

By the way: a study from 2018 also concluded that — except in special circumstances — it just didn’t make much difference which method students use.

Why I Care

Perhaps surprisingly, I’m not an ardent advocate of laptop notes. Or, for that matter, of handwritten notes.

I advocate for teachers making classroom decisions informed by good research.

In this case, the Mueller and Oppenheimer study contains a perfectly obvious flaw. I have yet to meet anyone who doesn’t think a) that students can learn good note-taking skills, and b) that if they do, the study’s conclusions make no sense.

And yet, very few people have time to dig into research methodology. As a result, this one study has confirmed many teachers in their belief that technology harms learning during note-taking.

That statement might be true. It might be false. But this one study doesn’t give us good data to answer the question.

As a result, teachers might be taking laptops away from students who would learn more if they got to use them.

In brief: bad research harms learning.

I hope that this most recent study encourages teachers to rethink our classroom practices.

Can Creativity Be Taught? What’s the Formula?
Andrew Watson

My edutwitter feed has been buzzing with a lively debate about this question: can we teach people to be creative?

This round started with a post by David Didau, summarizing a debate between himself and Paul Carney.

Didau believes (oversimplifying here) that creativity is an emergent phenomenon, resulting from a knowledge-rich curriculum.

When people know lots o’ stuff, they are increasingly able to come up with new and useful combinations of that stuff. And, that’s how we typically define “creativity”: new & useful.

On the contrary, Carney believes that creativity can — in fact, must — be taught directly. For instance, he believes that helping students visualize complex patterns can help them see information in new ways.

That is, one teachable skill leads to greater creativity in general.

What’s the Secret Formula?

Tom Sherrington weighs in on this debate, and (creatively) adds his own twist.

Although he doesn’t think creativity can be taught, he does think it can be fostered. In fact, he’s got a formula for fostering it. Here goes:

c = f(K, P, D)

Unsurprisingly, the K in Sherrington’s formula is “knowledge.” I’ll let you read his article to explore the other two key variables.

As an added bonus, you’ll get to see a Francis-Bacon-inspired portrait of Sherrington’s son, painted by Sherrington’s daughter. I don’t doubt you’ll be impressed by the creativity on display…

 

Why Do Teachers Resist Research? And, Why Should We?
Andrew Watson

Let’s imagine that you show me research suggesting that students remember the words they draw better than the words they write down.

After some thought…perhaps some experimentation on my own…I decide not to follow this research advice.

Why did I “resist” these research findings? What prompted me to do so?

Education researcher Tim Cain divides possible answers into four categories. In everyday speech, the terms he uses are near-synonyms. But, he gives each one a distinct meaning to distinguish among the possibilities.

And, as you’ll see, three of the four choices sound really bad.

3 Bad Choices

Denial: denialists basically pretend that there is reasonable disagreement on a topic where none really exists. Example: companies that say smoking isn’t bad for your lungs, or historians who pretend the Holocaust didn’t happen.

For the most part, in Cain’s analysis, deniers strive to “protect self-esteem and status.”

Opposition: whereas denialists typically have power and want to protect it, oppositionists don’t have much power, and reject scientific findings that might continue their subjugation.

For instance, I might reject the drawing strategy because I don’t think it works (see below). But, I might also reject it because – as a teacher with relatively little cultural power – I don’t want to be bossed around by scientific researchers, who have more cultural standing than I do.

Rejection: Rejection gets a little complicated. In this model, I accept research findings only if they BOTH help students learn AND make me look good. But, if they don’t hit both targets, I’m not interested.

So, for example, if drawing does help students remember, but doesn’t win me esteem in the faculty room, then I’m just not interested.

As you can see, these first three choices don’t seem very appealing. I’m oversimplifying a bit – but not a lot – to say that teachers who resist research for these reasons are being jerks.

Frankly, I’m feeling a bit stressed right now. Does Cain acknowledge that teachers have any good reasons to resist research findings?

One More?

Indeed, Cain does give us one more choice.

Dissent: if teachers think critically about research, we might see gaps, flaws, or logical leaps. Rather than being driven by the sinister motives outlined above, we might honestly – even insightfully – disagree with the arguments put before us.

Being a researcher, Cain wanted to know: which is it? Why do teachers ultimately decide not to follow researchers’ advice?

Are we protecting the power we have (“denial”)? Fighting to prevent others from getting even more power over us (“opposition”)? Focusing on prestige more than usefulness (“rejection”)?

Or, are we enhancing educational debate by thinking critically (“dissent”)?

The Big Reveal

I’ll cut to the chase: for the most part, Cain finds that we’re in the critical thinking business.

To arrive at this conclusion, Cain worked with several teachers at two schools in northern England. He gave them some research articles, and asked them to try out the researchers’ findings. He then met with them to talk over their work, and interviewed them about their conclusions.

Here’s what he found:

First: teachers ultimately agreed with and accepted significant chunks of the researchers’ conclusions and advice. They didn’t simply reject everything they read and undertook.

Second: at the same time, teachers didn’t see researchers’ conclusions as more important than their own. As Cain puts it:

Essentially, almost all the teachers saw the authority of the published research reports as provisional. They did not see the research as having greater authority than their own experience or other forms of information.

Third: when teachers did resist researchers’ conclusions, they did so for entirely plausible reasons.

They (plausibly) thought some of the studies contained contradictions. They (plausibly) saw some findings as out of date. And, they (plausibly) raised objections to research methodology.

They also – and I think this is very good news – emphasized the narrow particularity of research findings. As one teacher said:

If you researched in different schools, it would be different. If you had an inner-city school, a wealthy middle-class school, a private school, every one would be totally, totally different.

And another:

Does anything work for every single person? No, I don’t think there’s anything that will work exactly the same. It’s finding what’s right for your group: the age, the personalities.

(Regular readers of the blog know that I bang on about this point all the time, so I’m DELIGHTED to see it articulated so clearly here.)

Closing Thoughts

Cain (rightly) emphasizes that his study is early and exploratory. He worked with volunteers: that is, people who are likely to be interested in research in the first place. (If they weren’t interested, they wouldn’t have volunteered.)

And, like any study, this one has lots of limitations. For instance: these teachers worked in “Gifted and Talented” programs. Findings in other settings might be different.

But, at least initially, Cain’s findings suggest that teachers can be great partners for researchers. We’re not resisting for the sake of resisting.

Instead, we’re thinking critically about the limits of research, and the goodness of fit for our particular classrooms.

Which is exactly what we should do.

Fool Me Twice, Shame on Me
Andrew Watson

We often post about the unreliability of “brain training.”

Heck, even though I live in Boston and am a Patriots fan, I made fun of Tom Brady’s website claiming to “increase brain speed” and other such nonsense. (I don’t even know what “increase brain speed” might mean.)

So, you’d think I’d be especially wary of these claims. But, even I can fall into such traps — at least temporarily. Last week, it happened TWICE.

Fool Me Once

Many researchers have claimed to be able to increase working memory capacity.

(It would be great if we could do so, because working memory is so important for all classroom learning.)

Alas, very consistently, we find that such programs don’t really work. (For instance, here and here.)

And so, I was very excited to see a new approach to the problem.

We have long known that the cerebellum helps control motor function. More recently, scientists have discovered that it also supports working memory performance.

Perhaps we could strengthen cerebellar function, and in that way enhance WM. Worth a try, no?

Although this explanation makes good sense, and the accompanying graphs looked impressive, I was drawn up short by a serious problem: the researchers didn’t measure working memory.

You read that right. Instead of a WM test, they gave participants a short-term memory test.

So, this research shows that cerebellar training might increase STM. But, it shows nothing about WM.

Brain training hopes dashed…

Fool Me Twice

Unlike WM training, we have had some luck with attention training.

For instance, Green and Bavelier have shown that playing certain computer games can increase various kinds of visual attention.

A recent study claimed that a specially designed iPad game could enhance sustained visual attention. I was gearing up to review the research so I could write about it here, when…

I learned that the test to measure students’ attention was very similar to the game itself. (H/t: Michael Kane)

In other words: participants might have gotten better because they (basically) practiced the test, not because their sustained attention improved.

To measure such progress, researchers would need a test that wasn’t similar to the game participants played.

Brain training hopes re-dashed…

The Big Take Away for Teachers

I’m basically an optimistic person, and I really don’t like being a grinch.

But, sometimes my job requires me to be grinchy.

At this point, I’ve been inspired by “brain training” claims so many times, only to be disappointed by an analysis of the research underlying those claims.

So, from now on, I’m just going to assume that new claims are highly likely to be false.

If brain training claims are subsequently replicated by many research teams; if the methodologies are scrutinized and approved by several scholars in the field; well, if that happens, I’ll relent.

For now, I don’t want to be fooled again.

The Joys (and Stresses) of Teacher/Neuroscientist Collaboration
Andrew Watson

In an ideal world, teachers and researchers collaborate to bring out the best in each other.

So, I might invite Pooja Agarwal to study retrieval practice in my 10th grade English classroom.

My students and I benefit because we learn more about this great study technique.

Dr. Agarwal’s research benefits because she sees how the theory of the technique functions in the real messy world of schools.

What’s not to like?

Theory, Meet Reality

Of course, our world rarely lives up to that ideal. Teacher/researcher collaboration creates all sorts of challenges.

We speak very different languages.

We operate within very different time frames.

At times, we highlight very different values.

All these differences can make communication, progress, and success difficult to achieve.

Today’s Example

Over at the Blog on Learning Development, Meeri Kim has recently written about a collaboration between neuroscientists and Head Start teachers. More precisely, she interviewed two of the scientists in the program.

The result: a refreshingly frank description of the benefits and stresses of this collaboration.

For instance: the curriculum that the scientists created improved social skills and selective attention, while reducing problem behaviors. What teacher wouldn’t like those results?

As researcher Lauren Vega O’Neil noted:

A lot of the activities were packaged as fun games. The teachers loved having these ready-made activities that would help them long-term in the classroom.

And yet, this collaboration included confusions and stresses as well.

I worked mostly with teachers in classrooms during the study, and many of them jumped on board right away. But there was some pushback, particularly since some teachers saw this as yet another curriculum that they were being asked to implement. […] So they just saw our training program as something else that was being asked of them.

Suggestions?

Researcher Eric Pakulak has some surprisingly direct advice for colleagues who want to do classroom research:

Unfortunately, it seems to be all too common that researchers come in and don’t listen as much as they should to educators, thinking that it should be all about neuroscience, and only using education to implement what they know, as opposed to something more bi-directional.

Instead, we need to work together and really understand the ways that the experience of teachers and administrators can inform our work.

I agree with this advice wholeheartedly.

And, I likewise think that teachers can do more to understand the pressures on researchers.

For instance: research works by isolating variables.

Classroom researchers might have very particular scheduling needs. They can be certain that retrieval practice produced a benefit only if nothing else in the class differed. So, they might have to insist that we schedule quizzes at a very specific point in the class — even if that schedule is highly inconvenient for us.

The more that teachers understand these research requirements, the more effectively we can create classroom research paradigms that both help our individual students learn and help researchers discover enduring truths about learning.