Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Two Helpful Strategies to Lessen Exam Stresses
Andrew Watson

Exam stress bothers many of our students. Sadly, it hinders students from lower socio-economic status (SES) families even more.

As a result, these students struggle — especially in STEM classes. And this struggle makes it harder for them to enter these important (and lucrative!) fields.

Can we break this cycle somehow?

Reducing Exam Stress: Two Approaches

Christopher Rozek tried a combination of strategies to help lower-SES science students manage exam stress.

This research stands out for a number of reasons: in particular, it included a large sample (almost 1200 students). And, it took place in a school, not a psychology lab. That is, his results apply to the “real world,” not just a hermetically sealed research space.

Rozek worked with students taking a 9th grade biology class. Before they took the two exams in the course, Rozek had students write for ten minutes.

One group spent their ten minutes writing about their current thoughts and feelings. This approach lets students “dump” their anxiety, and has been effective in earlier studies. (By the way: this earlier research is controversial. I’ve written about that controversy here.)

Another group read a brief article showing that the right amount of stress can enhance performance. This reading, and the writing they did about it, helped students “reappraise” the stress they felt.

A third group did shortened versions of both “dumping” and “reappraising” exercises.

And the control group read and wrote about the importance of ignoring and suppressing negative/stressful emotions.

So, did the “dump” strategy or the “reappraise” strategy help?

Dramatic Results

Indeed, they both did.

For example, Rozek and Co. measured the effect these strategies (alone or together) had on the exam-score gap between high- and low-SES students.

The result? They cut the gap by 29%.

Rozek also tracked course failure. Among low-SES students, these strategies cut the failure rate by 50%.

(In the control group, 36% of the low-SES students failed the class; in the other three groups, that rate fell to 18%. Of course, 18% is high — but it’s dramatically lower than 36%.)
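
To spell out that arithmetic: the drop from 36% to 18% is 18 percentage points in absolute terms, and 18 ÷ 36 = 0.5. That relative reduction is where the “50%” figure comes from.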

In his final measure, Rozek found that — after these interventions — low-SES students evaluated their stress much more like high-SES students did. The gap between these ratings fell…by 81%.

All this progress from a ten-minute writing exercise.

Classroom Guidance to Reduce Exam Stress

If you’ve got students who are likely to feel higher levels of anxiety before a test, you might adapt either (or both) of these strategies for your students.

The best way to make these strategies work will vary depending on your students’ age and academic experience.

You might start by reviewing Rozek’s research — click the link above, and look for the “Procedure” section on page 5. From there, use your teacherly wisdom to make those procedures fit your students, your classroom, and you.

Strategies that Backfire: Monitoring Screen Time
Andrew Watson

Teachers and parents, reasonably enough, worry about the time that children spend looking at screens. Given the phones, tablets, phablets, laptops, and televisions that surround them, it seems normal to worry about the long-term effects of screens, screens, screens.


Monitoring screen time seems the obvious parenting strategy, and the obvious teacher recommendation.

Not So Fast…

Recent research out of Canada throws doubt on this seemingly sensible approach.

Researchers surveyed parents of young children (ages 1.5-5), asking about their technology habits and parenting approaches.

Sure enough, they found that monitoring screen time correlates with an increase in the child’s technology use.

That is: when parents reward children with extra screen time, those children use more screens. Ditto parents who punish with reduced screen time. Ditto parents who simply keep track of their child’s screen time.

YIKES.

What’s a Parent to Do?

As is so often true, our behavior points the way. Parents who use screens less often in front of their children model the behavior they want to see. Result: less screen time.

This finding holds true especially for screens at mealtimes.

The best advice we’ve got so far: if you don’t want your children to obsess over their tablets, avoid monitoring screen time.

Several Caveats

First, given the survey methodology, the study can find correlation, but can’t conclude causation.

Second, the nitty-gritty gets complicated. The research team kept track of multiple variables: mothers’ behavior vs. fathers’ behavior; screen time on weekdays vs. screen time on weekends. To understand the specific connections, click the link above.

Third, this study focused on short-term correlations with very young children. We simply don’t know about older children. Who knows: teens forbidden from playing Minecraft more than 3 hours a day might just play less Minecraft.

Finally, I think research about bright screens before sleep is well-established enough to be worth a reminder here. Blue light from computer screens can muddle melatonin onset, and thereby interfere with sleep. In this case in particular, we should model healthy screen behavior.

Does Drawing a Simple Picture Benefit Memory?
Andrew Watson

If a picture is worth 1000 words, how many words is drawing a picture worth?


More specifically, Jeffrey Wammes & Co. have been exploring this question: is it true that drawing benefits memory? If I draw a picture of a word, will I remember it better than if I simply write that word down several times?

To explore this question, Wammes and his team have run a series of studies over the last several years. Basically, they’re trying to disprove their own hypothesis. If they can’t disprove it…well, it’s increasingly likely to be true.

The basic studies took a fairly simple form. Students saw a word and then spent 40 seconds drawing a picture of it. Or, they saw a word and spent 40 seconds writing it down several times.

Which words did they remember better? Yup: the words that they had drawn.

This effect held up not only in a psychology lab, but also in a college lecture hall.

Drawing Benefits Memory: More Advanced Studies

This hypothesis makes a kind of rough-and-ready sense, for a number of reasons.

For instance, it just seems plausible that drawing benefits memory because visuals aid memory. Or, because drawing requires a greater degree of cognitive processing than simply writing.

So: perhaps drawing is but one example of these other effects.

Wammes and Co. wanted to see if that’s true. (Remember: they’re trying to disprove their hypothesis.)

So, they repeated the study several more times. In some cases, students drew pictures for some words and looked at pictures of other words.

Or, in another study, they drew pictures of some words and wrote down key features of other words. (Writing down key features requires higher levels of processing.)

In every case, they found that drawing produced greater benefits than the alternative strategy. Students remembered more of the words they had drawn than of the words they had processed in all those other ways.

Classroom Implications

What should classroom teachers do with this information?

In the first place, keep in mind that we’re still in the early days of testing this technique. Much of this research has focused on nouns that are relatively easy to draw: say, “apple.”

At the same time, Wammes ran one study where students either drew or copied verbatim definitions of words. For instance, “stratoscopes” are “airborne telescopes that are mounted on high altitude balloons.” Once again, drawing led to better memory than simple copying.

Wammes’s team is currently exploring drawings of more abstract words: I hope to see those results published soon.

With these caveats in mind, I think we can plausibly use this approach in our classrooms. If you think a word, definition, concept, or process can be drawn, give your students a chance to “review by drawing.”

Or, if you’ve built in a moment for retrieval practice, encourage students to include a drawing as part of their retrieval.

You might conclude that a particular topic doesn’t lend itself to drawing. As an English teacher, I’m not immediately sure how to draw “ode” or “concatenation” or “litotes.”

But, if a word or concept seems drawable to you, you might give students a chance to try out this mnemonic aid.

A Final Note

I emailed Dr. Wammes with a few questions about his research. In his reply, he included this quite wonderful sentence:

“There certainly will be situations where it [drawing] doesn’t work, I just unfortunately haven’t found them yet.”

Too often, teachers can take research findings as absolute injunctions. When we learn about the 10-minute rule, we think: “okay, I have to change it up every ten minutes!”

But, that’s just not true.

Psychology findings will benefit some of our classroom situations, some of our students, some of our lesson plans, some of our schools.

But, almost no research finding always applies. We have to translate and adapt and tinker.

The field of Mind, Brain, Education is a partnership: teachers learn from researchers, and researchers learn from teachers.

So, when you try this technique in your classroom, keep track of your results. If you pass them on to me, I’ll let the researchers know.

 

 

Spiders in Budapest: Deeper Understanding of the Brain
Andrew Watson

“Why can I forget what the capital of Hungary is, but not that I’m afraid of spiders?”

Michael S. C. Thomas kicks off his website “How The Brain Works” with this intriguing question.

Dr. Thomas is a good person to ask. In the first place, he directs the Centre for Educational Neuroscience. He knows from brains.

In the second, he’s got a lively writing voice. Better than most, he can explain important brain concepts without being pedantic, and without relying on Latinate jargon.

The website covers several helpful topics: the importance of sleep, the structure of synapses, the reasons brains have two hemispheres. (And: why being “left-brained” really isn’t a thing.)

I recommend this website as a lively introduction to (or review of) important neuroscience information.

And: if you want to know the answer to that spider/Hungary question, click here.

Dodging “Dodgy” Research: Strategies to Get Past Bunk
Andrew Watson

If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.

After all, we don’t want just any research. We want the good stuff.

But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?

Over at TES, Bridget Clay and David Weston have four suggestions.

Seek out review articles.

Don’t be impressed by lists.

Look for disagreement.

Don’t be impressed by one shiny new study.

Their post is clear and thoughtful; I encourage you to read it all.

Second Look

I want to go back to their third suggestion: “look for disagreement.” This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.

Here’s what I mean.

When I first started out in the brain-research world, I wanted to hear the enduring truths that researchers had discovered about learning.

I would then (nobly, heroically) enact those truths in my classroom.

As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)

Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.

Right?

Research Reality

Well, maybe. But, maybe not.

Researchers simply don’t discover “the truth about learning.” Instead, they try to disprove a particular claim in a particular way. If they can’t disprove it, then that claim seems slightly more plausible.

But, someone else might disprove it in some other way. Or, under some other conditions.

Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.

When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.

So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?

I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.

Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.

How do I do that?

Skeptical Research, with Boundaries

Believe it or not, start by going to Google.

Use words like “controversy” or “debate” or “untrue.”

So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)

You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)

When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.

So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?

Perhaps such light hampered learning of math facts. What about critical thinking?

Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.

Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.

Back to Beginnings

Clay & Weston say: seek out disagreement.

I say: AMEN!

Science works by asking incremental questions and coming to halting, often-contradictory findings.

Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.

 

Research Summary: The Best and Worst Highlighting Strategies
Andrew Watson

Does highlighting help students learn?

As is so often the case, the answer is: it depends.


The right kind of highlighting can help. But, the wrong kind doesn’t help. (And, might hurt.)

And, most students do the wrong kind.

Today’s Research Summary

Over at Three Star Learning Experiences, Tim Surma & Co. offer a helpful overview of highlighting research.

The headline: highlighting helps students if they highlight the right amount of the right information.

Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.

Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.

Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.

Sometimes students think that highlighting itself is studying. Instead, the review of highlighted material produces the benefits. (Along with the decision-making beforehand.)

Right information.

Unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.

Suggestions and Solutions

Surma & Co. include several suggestions to help students highlight more effectively.

For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.

(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)

And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.

Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.

You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.

For other suggestions about highlighting, especially as a form of retrieval practice, click here.

 

Let’s Have More Fun with the Correlation/Causation Muddle
Andrew Watson

We’ve explored the relationship of correlation and causation before on the blog.

In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.

DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.

On some occasions, we really ought to pay attention to correlation.

More Fun

I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.

If you’d like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.

Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
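
If you want a feel for how easily that kind of number arises, here is a minimal sketch (my own illustration, not from the Spurious Correlations site, using invented figures) showing that two series which merely drift in the same direction will produce a near-perfect Pearson correlation:

```python
# A minimal sketch with made-up numbers: both "series" simply drift downward
# over ten years, so they correlate almost perfectly despite having nothing
# to do with each other.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no libraries required."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical stand-ins for "divorce rate" and "margarine consumption."
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.1, 4.1]
margarine_lbs = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]

print(round(pearson_r(divorce_rate, margarine_lbs), 3))  # roughly 0.99
```

The shared downward trend does all the work; no causal story is required. That is the whole joke behind the website.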

“Wait Just a Minute!”: The Benefits of Procrastination?
Andrew Watson

“A year from now, you’ll wish you had started today.”


This quotation, attributed to Karen Lamb, warns us about the dangers of procrastination. Presumably our students would propose a slightly modified version:

“The night before the test, I’ll wish I had started studying today.”

Does procrastination ever help? Is there such a thing as “beneficial procrastination”?

Types of Procrastination

I myself was intrigued when recently asked this question.

(True story: I was the President-To-Be of the Procrastinators’ Society in my high school. I would surely have been elected, but we never scheduled the meeting.)

Sure enough, researchers have theorized that we procrastinate for different reasons and in different ways.

Many of us, of course, procrastinate because we can’t get ourselves organized to face the task ahead.

(Mark Twain assures us he never put off until tomorrow that which he could do the day after tomorrow.)

Danya Corkin and colleagues wondered about a different, more deliberate kind of procrastination: something they call “active delay.”

Active delay includes four salient features:

First, students intentionally decide to postpone their work. It’s not a haphazard, subconscious process.

Second, they like working under pressure.

Third — unlike most procrastinators — they get the work done on time.

Fourth, they feel good about the whole process.

What did Corkin & Co. find when they looked for these distinct groups?

The Benefits of “Active Delay”

As is often the case, they found a mixed bag of results.

To their surprise, procrastinators and active delayers adopted learning strategies (rehearsal, elaboration, planning, monitoring) roughly equally.

Unsurprisingly, procrastinators generally followed unproductive motivational pathways. (If you follow Carol Dweck’s work, you know about the dangers of “performance goals” and “avoidance goals.”)

And, the big headline: procrastination led to lower grades. Active delay led to higher grades.

Classroom Implications

This research gives teachers a few points to consider.

First: both kinds of procrastination might look alike to us. However, they might lead to quite different results.

Even if students procrastinate from our perspective, we can distinguish between two categories of procrastination. And, we should worry less about “active delay” than good, old-fashioned putting stuff off because I can’t deal with it.

Second: even though “active delay” leads to more learning than “procrastination,” both probably produce less learning than well-scheduled learning.

As we know from many researchers, spreading practice out over time (spacing) yields more learning than bunching it all together.

Active delay might not be as bad, but it’s still bad for learning.

Finally: if you’re an “active delayer,” you might forgive yourself. As long as you’re choosing delay as a strategy — especially because you work best under pressure — then this flavor of procrastination needn’t bring on a bout of guilt.

Me: I’m going to watch some football…

True/False: Grades Motivate Students to Study Better?
Andrew Watson

The following story is true. (The names have been left out because I’ve forgotten them.)


When I attended graduate school in education, I handed in my first essay with some trepidation, and lots of excitement.

Like my classmates, I had worked hard to wrestle with the topic: how best to critique a study’s methodology. Like my classmates, I wanted to know how I could do better.

When we got those essays back, our TAs had written a number at the end. There were, quite literally, no other marks on the paper — much less helpful comments. (I’m an English teacher, so when I say “literally” I mean “literally.”)

We then sat through a slide show in which the head TA explained the most common errors, and what percentage of us had made each one.

Here’s the kicker. The head TA then said:

“Your TAs are very busy, and we couldn’t possibly meet with all of you. So, to be fair, we won’t discuss these essays individually with any of you.”

So, in a SCHOOL OF EDUCATION, I got exactly NO individual feedback on my essay. I have little idea what I did right or wrong. And, I have no idea whatsoever how I could have done better.

How’s that for teaching excellence?

Grades and Motivation: Today’s Research

My point with this story is: for me, the experience of getting a grade without feedback was a) demotivating, b) infuriating, and c) useless.

If you’d like to rethink your school’s grading strategy, my own experience would point you in a particular direction.

However: you’re not reading this blog to get anecdotes. If you’re in Learning and the Brain world, you’re interested in science. What does research tell us about grades and motivation?

A recent study on “The Impact of Grades on Student Motivation” has been getting some Twitter love.

The researchers surveyed students at a college that has grades only, a different college that offers narrative feedback only, and two colleges that use both. They also interviewed students at one of the “hybrid” colleges.

What did they find?

They didn’t pull any punches:

“Grades did not enhance academic motivation.”

“Grades promoted anxiety, a sense of hopelessness, social comparison, as well as a fear of failure.”

“In contrast, narrative evaluations supported basic psychological needs and enhanced motivation.”

Briefly: grades demotivate, while narrative feedback helpfully focuses students on useful strategies for improvement.

Certainly these conclusions align with my own grad-school experience.

Not So Fast

Despite these emphatic conclusions, and despite the Twitter love, teachers who want to do away with grades should not, in my view, rely too heavily on this study.

Here’s why:

First: unless you teach in a college or university, research with these students might not apply to your students. Motivation for 2nd and 3rd graders might work quite differently than motivation for 23-year-olds.

Second: most college and university students, unlike most K-12 students, have some choices about the schools they attend and the classes they take.

In other words: students with higher degrees of academic motivation might be choosing colleges and courses with narrative feedback instead of grades.

It’s not clear if their level of motivation results from or causes their choice of college. Or, perhaps, both.

(To be clear, the researchers acknowledge this concern.)

Third: in my experience, most K-12 teachers combine letter or number grades with specific feedback. Unlike my TAs, who gave me a number without guidance, teachers often provide both a number and specific guidance.

Fourth: the study includes a number of troubling quirks.

The interview portion of the study includes thirteen students. It is, ahem, unusual to draw strong conclusions from interviews with 13 people.

The interviewer was a student who already knew some of the interviewees. Their prior relationship might well influence their answers to the interview questions.

More than any study I’ve read, this one includes an overtly political and economic perspective. Research like this typically eschews a strong political stance, and its presence here is at odds with research norms. (To be clear: researchers have political opinions. It’s just very strange to see them in print.)

Given these concerns — big and small — we should look elsewhere for research on grades and motivation to guide our schools and our own practice.

Earlier Thoughts

We have, of course, often written about grades and motivation here on the blog. For example:

In this article, Doug Lemov argues that — although imperfect — grades are the best way to ensure that scarce resources aren’t given entirely to well-connected people.

In this article, we look at the Mastery Transcript movement: a strategy to provide lots of meaningful feedback without the tyranny of grades and transcripts.

Your thoughts on grades and grading are welcome: please share your experience in the comments.

 

 

Physics and Engineering: My New Year’s Resolution
Andrew Watson

 

Over on Twitter, @DylanWilliam wrote:

“[P]hysics tells you about the properties of materials but it’s the engineer who designs the bridge. Similarly, psychology tells us about how our brains work, but it’s teachers who craft instruction.”

In other words, teachers should learn a great deal about psychology from psychologists.

(And should learn some things about neuroscience from neuroscientists.)

But the study of psychology doesn’t — and can’t — tell us exactly how to teach. We have to combine the underlying psychological principles (that’s “physics” in Wiliam’s analogy) with the day-to-day gritty demands of the environment (“engineering”).

And so, my clarifying New Year’s resolution:

Study physics to be a better engineer.

I hope you’ll join me this year, and share your wisdom!