L&B Blog – Education & Teacher Conferences
Does Drawing a Simple Picture Benefit Memory?
Andrew Watson

If a picture is worth 1000 words, how many words is drawing a picture worth?


More specifically, Jeffrey Wammes & Co. have been exploring this question: is it true that drawing benefits memory? If I draw a picture of a word, will I remember it better than if I simply wrote that word down several times?

To explore this question, Wammes and his team have run a series of studies over the last several years. Basically, they’re trying to disprove their own hypothesis. If they can’t disprove it…well, it’s increasingly likely to be true.

The basic studies took a fairly simple form. Students saw a word and then spent 40 seconds drawing a picture of it. Or, they saw a word and spent 40 seconds writing it down several times.

Which words did they remember better? Yup: the words that they had drawn.

This effect held up not only in a psychology lab, but also in a college lecture hall.

Drawing Benefits Memory: More Advanced Studies

This hypothesis makes a kind of rough-and-ready sense, for a number of reasons.

For instance, it just seems plausible that drawing benefits memory because visuals aid memory. Or, because drawing requires a greater degree of cognitive processing than simply writing.

So: perhaps drawing is but one example of these other effects.

Wammes and Co. wanted to see if that’s true. (Remember: they’re trying to disprove their hypothesis.)

So, they repeated the study several more times. In some cases, students drew pictures for some words and looked at pictures of other words.

Or, in another study, they drew pictures of some words and wrote down key features of other words. (Writing down key features requires higher levels of processing.)

In every case, they found that drawing produced greater benefits than each alternative strategy. Students remembered more of the words they had drawn than the words they had processed in all those other ways.

Classroom Implications

What should classroom teachers do with this information?

In the first place, keep in mind that we’re still in early days of testing this technique. Much of this research has focused on nouns that are relatively easy to draw: say, “apple.”

At the same time, Wammes ran one study where students either drew or copied verbatim definitions of words. For instance, “stratoscopes” are “airborne telescopes that are mounted on high altitude balloons.” Once again, drawing led to better memory than simple copying.

Wammes’s team is currently exploring drawings of more abstract words: I hope to see those results published soon.

With these caveats in mind, I think we can plausibly use this approach in our classrooms. If you think a word, definition, concept, or process can be drawn, give your students a chance to “review by drawing.”

Or, if you’ve built in a moment for retrieval practice, encourage students to include a drawing as part of their retrieval.

You might conclude that a particular topic doesn’t lend itself to drawing. As an English teacher, I’m not immediately sure how to draw “ode” or “concatenation” or “litotes.”

But, if a word or concept seems drawable to you, you might give students a chance to try out this mnemonic aid.

A Final Note

I emailed Dr. Wammes with a few questions about his research. In his reply, he included this quite wonderful sentence:

“There certainly will be situations where it [drawing] doesn’t work, I just unfortunately haven’t found them yet.”

Too often, teachers can take research findings as absolute injunctions. When we learn about the ten-minute rule, we think: “okay, I have to change it up every ten minutes!”

But, that’s just not true.

Psychology findings will benefit some of our classroom situations, some of our students, some of our lesson plans, some of our schools.

But, almost no research finding always applies. We have to translate and adapt and tinker.

The field of Mind, Brain, Education is a partnership: teachers learn from researchers, and researchers learn from teachers.

So, when you try this technique in your classroom, keep track of your results. If you pass them on to me, I’ll let the researchers know.

 

 

Spiders in Budapest: Deeper Understanding of the Brain
Andrew Watson

“Why can I forget what the capital of Hungary is, but not that I’m afraid of spiders?”

Michael S. C. Thomas kicks off his website “How The Brain Works” with this intriguing question.

Dr. Thomas is a good person to ask. In the first place, he directs the Centre for Educational Neuroscience. He knows from brains.

In the second, he’s got a lively writing voice. Better than most, he can explain important brain concepts without being pedantic, and without relying on Latinate jargon.

The website covers several helpful topics: the importance of sleep, the structure of synapses, the reasons brains have two hemispheres. (And: why being “left-brained” really isn’t a thing.)

I recommend this website as a lively introduction to (or review of) important neuroscience information.

And: if you want to know the answer to that spider/Hungary question, click here.

Dodging “Dodgy” Research: Strategies to Get Past Bunk
Andrew Watson

If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.

After all, we don’t want just any research. We want the good stuff.

But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?

Over at TES, Bridget Clay and David Weston have four suggestions.

Seek out review articles.

Don’t be impressed by lists.

Look for disagreement.

Don’t be impressed by one shiny new study.

Their post is clear and thoughtful; I encourage you to read it all.

Second Look

I want to go back to their third suggestion: “look for disagreement.” This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.

Here’s what I mean.

When I first started in brain-research world, I wanted to hear the enduring truths that researchers discovered about learning.

I would then (nobly, heroically) enact those truths in my classroom.

As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)

Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.

Right?

Research Reality

Well, maybe. But, maybe not.

Researchers simply don’t discover “the truth about learning.” Instead, they try to disprove a particular claim in a particular way. If they can’t disprove it, then that claim seems slightly more plausible.

But, someone else might disprove it in some other way. Or, under some other conditions.

Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.

When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.

So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?

I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.

Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.

How do I do that?

Skeptical Research, with Boundaries

Believe it or not, start by going to Google.

Use words like “controversy” or “debate” or “untrue.”

So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)

You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)

When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.

So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?

Perhaps such light hampered learning of math facts. What about critical thinking?

Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.

Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.

Back to Beginnings

Clay & Weston say: seek out disagreement.

I say: AMEN!

Science works by asking incremental questions and coming to halting, often-contradictory findings.

Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.

 

Research Summary: The Best and Worst Highlighting Strategies
Andrew Watson

Does highlighting help students learn?

As is so often the case, the answer is: it depends.


The right kind of highlighting can help. But, the wrong kind doesn’t help. (And, might hurt.)

And, most students do the wrong kind.

Today’s Research Summary

Over at Three Star Learning Experiences, Tim Surma & Co. offer a helpful overview of highlighting research.

The headline: highlighting helps students if they highlight the right amount of the right information.

Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.

Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.

Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.

Sometimes students think that highlighting itself is studying. Instead, the review of highlighted material produces the benefits. (Along with the decision-making beforehand.)

Right information:

Unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.

Suggestions and Solutions

Surma & Co. include several suggestions to help students highlight more effectively.

For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.

(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)

And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.

Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.

You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.

For other suggestions about highlighting, especially as a form of retrieval practice, click here.

 

Let’s Have More Fun with the Correlation/Causation Muddle
Andrew Watson

We’ve explored the relationship of correlation and causation before on the blog.

In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.

DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.

On some occasions, we really ought to pay attention to correlation.

More Fun

I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.

If you’d like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.

Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
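To make the underlying arithmetic concrete: a correlation coefficient near 1.0 says only that two series move together, never why. Here’s a minimal sketch in Python — the numbers are invented for illustration (not Vigen’s actual data); they merely share a downward drift:

```python
# Two invented time series over the same ten years.
# Neither causes the other; both simply trend downward.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]   # hypothetical
margarine_lbs = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]  # hypothetical

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(divorce_rate, margarine_lbs), 2))  # → 0.99
```

Any two series that drift in the same direction over time will correlate strongly — which is exactly why time-series correlations make such entertaining bunk.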

“Wait Just a Minute!”: The Benefits of Procrastination?
Andrew Watson

“A year from now, you’ll wish you had started today.”


This quotation, attributed to Karen Lamb, warns us about the dangers of procrastination. Presumably our students would propose a slightly modified version:

“The night before the test, I’ll wish I had started studying today.”

Does procrastination ever help? Is there such a thing as “beneficial procrastination”?

Types of Procrastination

I myself was intrigued when recently asked this question.

(True story: I was the President-To-Be of the Procrastinators’ Society in my high school. I would surely have been elected, but we never scheduled the meeting.)

Sure enough, researchers have theorized that we procrastinate for different reasons and in different ways.

Many of us, of course, procrastinate because we can’t get ourselves organized to face the task ahead.

(Mark Twain assures us he never put off until tomorrow that which he could do the day after tomorrow.)

Danya Corkin and colleagues wondered about a different, deliberate kind of procrastination: something they call “active delay.”

Active delay includes four salient features:

First, students intentionally decide to postpone their work. It’s not a haphazard, subconscious process.

Second, they like working under pressure.

Third — unlike most procrastinators — they get the work done on time.

Fourth, they feel good about the whole process.

What did Corkin & Co. find when they looked for these distinct groups?

The Benefits of “Active Delay”

As is often the case, they found a mixed bag of results.

To their surprise, procrastinators and active delayers adopted learning strategies (rehearsal, elaboration, planning, monitoring) roughly equally.

Unsurprisingly, procrastinators generally followed unproductive motivational pathways. (If you follow Carol Dweck’s work, you know about the dangers of “performance goals” and “avoidance goals.”)

And, the big headline: procrastination led to lower grades. Active delay led to higher grades.

Classroom Implications

This research gives teachers a few points to consider.

First: both kinds of procrastination might look alike to us. However, they might lead to quite different results.

Even if students procrastinate from our perspective, we can distinguish between two categories of procrastination. And, we should worry less about “active delay” than good, old-fashioned putting stuff off because I can’t deal with it.

Second: even though “active delay” leads to more learning than “procrastination,” both probably produce less learning than well-scheduled learning.

As we know from many researchers, spreading practice out over time (the spacing effect) yields more learning than bunching it all together.

Active delay might not be as bad, but it’s still bad for learning.

Finally: if you’re an “active delayer,” you might forgive yourself. As long as you’re choosing delay as a strategy — especially because you work best under pressure — then this flavor of procrastination needn’t bring on a bout of guilt.

Me: I’m going to watch some football…

True/False: Grades Motivate Students to Study Better?
Andrew Watson

The following story is true. (The names have been left out because I’ve forgotten them.)


When I attended graduate school in education, I handed in my first essay with some trepidation, and lots of excitement.

Like my classmates, I had worked hard to wrestle with the topic: how best to critique a study’s methodology. Like my classmates, I wanted to know how I could do better.

When we got those essays back, our TAs had written a number at the end. There were, quite literally, no other marks on the paper — much less helpful comments. (I’m an English teacher, so when I say “literally” I mean “literally.”)

We then sat through a slide show in which the head TA explained the most common errors, and what percentage of us had made each one.

Here’s the kicker. The head TA then said:

“Your TAs are very busy, and we couldn’t possibly meet with all of you. So, to be fair, we won’t discuss these essays individually with any of you.”

So, in a SCHOOL OF EDUCATION, I got exactly NO individual feedback on my essay. I have little idea what I did right or wrong. And, I have no idea whatsoever how I could have done better.

How’s that for teaching excellence?

Grades and Motivation: Today’s Research

My point with this story is: for me, the experience of getting a grade without feedback was a) demotivating, b) infuriating, and c) useless.

If you’d like to rethink your school’s grading strategy, my own experience would point you in a particular direction.

However: you’re not reading this blog to get anecdotes. If you’re in Learning and the Brain world, you’re interested in science. What does research tell us about grades and motivation?

A recent study on “The Impact of Grades on Student Motivation” has been getting some Twitter love.

The researchers surveyed students at a college that has grades only, a different college that offers narrative feedback only, and two colleges that use both. They also interviewed students at one of the “hybrid” colleges.

What did they find?

They didn’t pull any punches:

“Grades did not enhance academic motivation.”

“Grades promoted anxiety, a sense of hopelessness, social comparison, as well as a fear of failure.”

“In contrast, narrative evaluations supported basic psychological needs and enhanced motivation.”

Briefly: grades demotivate, while narrative feedback helpfully focuses students on useful strategies for improvement.

Certainly these conclusions align with my own grad-school experience.

Not So Fast

Despite these emphatic conclusions, and despite the Twitter love, teachers who want to do away with grades should not, in my view, rely too heavily on this study.

Here’s why:

First: unless you teach in a college or university, research with these students might not apply to your students. Motivation for 2nd and 3rd graders might work quite differently than motivation for 23-year-olds.

Second: most college and university students, unlike most K-12 students, have some choices about the schools they attend and the classes they take.

In other words: students with higher degrees of academic motivation might be choosing colleges and courses with narrative feedback instead of grades.

It’s not clear if their level of motivation results from or causes their choice of college. Or, perhaps, both.

(To be clear, the researchers acknowledge this concern.)

Third: in my experience, most K-12 teachers combine letter or number grades with specific feedback. Unlike my TAs, who gave me a number without guidance, teachers often provide both a number and specific guidance.

Fourth: the study includes a number of troubling quirks.

The interview portion of the study includes thirteen students. It is, ahem, unusual to draw strong conclusions from interviews with 13 people.

The interviewer was a student who already knew some of the interviewees. Their prior relationship might well influence their answers to the interview questions.

More than any study I’ve read, this one includes an overtly political and economic perspective. Research like this typically eschews a strong political stance, and its presence here is at odds with research norms. (To be clear: researchers have political opinions. It’s just very strange to see them in print.)

Given these concerns — big and small — we should look elsewhere for research on grades and motivation to guide our schools and our own practice.

Earlier Thoughts

We have, of course, often written about grades and motivation here on the blog. For example:

In this article, Doug Lemov argues that — although imperfect — grades are the best way to ensure that scarce resources aren’t given entirely to well-connected people.

In this article, we look at the Mastery Transcript movement: a strategy to provide lots of meaningful feedback without the tyranny of grades and transcripts.

Your thoughts on grades and grading are welcome: please share your experience in the comments.

 

 

Physics and Engineering: My New Year’s Resolution
Andrew Watson

 

Over on Twitter, Dylan Wiliam wrote:

“[P]hysics tells you about the properties of materials but it’s the engineer who designs the bridge. Similarly, psychology tells us about how our brains work, but it’s teachers who craft instruction.”

In other words, teachers should learn a great deal about psychology from psychologists.

(And should learn some things about neuroscience from neuroscientists.)

But the study of psychology doesn’t — and can’t — tell us exactly how to teach. We have to combine the underlying psychological principles (that’s “physics” in Wiliam’s analogy) with the day-to-day gritty demands of the environment (“engineering”).

And so, my clarifying New Year’s resolution:

Study physics to be a better engineer.

I hope you’ll join me this year, and share your wisdom!

New Research: Personal Best Goals (Might) Boost Learning
Andrew Watson

Some research-based suggestions for teaching require a lot of complex changes. (If you want to develop an interleaved syllabus, you’re going to need some time.)


Others couldn’t be simpler to adopt.

Here’s a suggestion from researchers Down Under: encourage your students to adopt “personal best goals.”

The Research

In a straightforward study, Andrew Martin and Australian colleagues asked 10- to 12-year-olds to solve a set of math problems. After each student worked for one minute, she learned how well she had done on that group of problems.

Students then worked that same set of problems again. Martin measured their improvement from the first to the second attempt.

Here’s the key point: after half of the students heard their score, they got these additional instructions:

“That is your Personal Best score. Now we’re going to do these question again, and I would like you to set a goal where you aim to do better on these questions than you did before.”

The other half of the students simply heard their score and were told to try the problems again.

Sure enough, this simple “personal best” prompt led to greater improvement than in the control group.

To be clear: the difference was statistically significant, but relatively small. The Cohen’s d was 0.08 — lower than what typically gets my attention.

However, as the researchers point out, perhaps the structure of the study kept that value low. Given the process — students worked the same problem sets twice — the obvious thing for students to do is strive to improve performance on the second iteration.

In other words: some students might have been striving for “personal bests” even when they weren’t explicitly instructed to do so.

In my own view, a small Cohen’s d matters a lot if the research advice is difficult to accomplish. So, if interleaving leads to only a small bump in learning, it might not be worth it. As noted above, interleaving takes a lot of planning time.

In this case, the additional instruction to “strive for your personal best” has essentially no cost at all.
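For readers curious about the statistic itself: Cohen’s d divides the difference between two group means by their pooled standard deviation, so d = 0.08 means the groups differed by less than a tenth of a standard deviation. Here’s a minimal sketch in Python, with invented improvement scores (not Martin’s actual data):

```python
def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    # Sample variances (n - 1 denominator).
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

# Hypothetical improvement scores for illustration only:
personal_best = [3, 5, 4, 6, 5, 4, 5, 6, 4, 5]
control       = [4, 5, 3, 6, 4, 4, 5, 5, 4, 5]

print(round(cohens_d(personal_best, control), 2))  # → 0.22
```

By common convention, d around 0.2 counts as “small,” 0.5 as “medium,” and 0.8 as “large” — which puts Martin’s 0.08 well below even the “small” threshold.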

Classroom Implications

Martin’s study is the first I know of that directly studies this technique.

(Earlier work, well summarized by Martin, looks at self-reports by students who set personal best goals. That research is encouraging — but self-reports aren’t as persuasive as Martin’s design.)

For that reason, we should be careful and use our best judgement as we try out this idea.

For example:

I suspect this technique works when used occasionally, not constantly.

In this study, the technique was used for the very short term: the personal best goals applied to the very next minute.

One intriguing suggestion that Martin makes: teachers could encourage personal best goals for the process, not the result. That is: the goal could be “ask for help before giving up” rather than “score higher than last time.”

One final point stands out in this research. If you’re up to date on your Mindset research, you know the crucial difference between “performance goals” and “learning goals.”

Students with “performance goals” strive, among other things, to beat their peers. Of course, “personal best goals” focus not on beating peers but on beating oneself. They are, in other words, “learning goals.”

And, we’ve got LOTS of research showing that learning goals result in lots more learning.

Bit by Bit, Putting It Together
Andrew Watson

Over at Teacherhead, Tom Sherrington has posted a form that teachers can use for lesson plans.

He has put together different versions: one filled-in with explanations, another left blank for teachers to use, yet another for adapting and editing.

The Bigger Picture

In the world of Learning and the Brain, researchers explore precise, narrow questions about learning. The result: lots of precise, narrow answers.

For instance: Technique X helped this group of bilingual 5th graders in Texas learn more about their state constitution.

How might Technique X help you? With your students? And your curriculum?

And, crucially, how does Technique X fit together with Technique Y, Technique 7, and Technique Gamma — which you also heard about at the conference?

As you’ve heard me say: only the teacher can figure out the best way to put the research pieces together. Once you’ve gathered all the essential information, you’re in the best position to conjure the optimal mix for your specific circumstances.

All Together Now

And, that’s why I like Sherrington’s lesson planning form so much.

You’ve seen research into the importance of “activating prior knowledge.” You’ve also seen research into the importance of “retrieval practice.” You know about “prior misconceptions.” And so forth…

But, how do those distinct pieces all fit together?

This lesson planning form provides one thoughtful answer.

To be clear: this answer doesn’t have to be your answer. For this reason (I assume), Sherrington included a form that you can edit and make your own.

The key message as you start gearing up for January: research does indeed offer exciting examples and helpful new ways to think about teaching and learning.

Teachers should draw on that research. And: we’ll each put the pieces together in our own ways.