Dodging “Dodgy” Research: Strategies to Get Past Bunk
Andrew Watson

If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.

After all, we don’t want just any research. We want the good stuff.

But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?

Over at TES, Bridget Clay and David Weston have four suggestions.

Seek out review articles.

Don’t be impressed by lists.

Look for disagreement.

Don’t be impressed by one shiny new study.

Their post is clear and thoughtful; I encourage you to read it all.

Second Look

I want to go back to their third suggestion: "look for disagreement." This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.

Here’s what I mean.

When I first started in brain-research world, I wanted to hear the enduring truths that researchers discovered about learning.

I would then (nobly, heroically) enact those truths in my classroom.

As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)

Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.

Right?

Research Reality

Well, maybe. But, maybe not.

Researchers simply don't discover "the truth about learning." Instead, they try to disprove a particular claim in a particular way. If they can't disprove it, then that claim seems slightly more plausible.

But, someone else might disprove it in some other way. Or, under some other conditions.

Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.

When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.

So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?

I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.

Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.

How do I do that?

Skeptical Research, with Boundaries

Believe it or not, start by going to Google.

Use words like “controversy” or “debate” or “untrue.”

So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)

You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)

When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.

So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?

Perhaps such light hampered learning of math facts. What about critical thinking?

Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.

Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.

Back to Beginnings

Clay & Weston say: seek out disagreement.

I say: AMEN!

Science works by asking incremental questions and coming to halting, often-contradictory findings.

Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.

 

Research Summary: The Best and Worst Highlighting Strategies
Andrew Watson

Does highlighting help students learn?

As is so often the case, the answer is: it depends.


The right kind of highlighting can help. But, the wrong kind doesn’t help. (And, might hurt.)

And, most students do the wrong kind.

Today’s Research Summary

Over at Three Star Learning Experiences, Tim Surma & Co. offer a helpful overview of highlighting research.

The headlines: highlighting helps students if they highlight the right amount of the right information.

Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.

Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.

Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.

Sometimes students think that highlighting itself is studying. In fact, reviewing the highlighted material produces the benefits (along with the decision-making beforehand).

Right information.

Unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.

Suggestions and Solutions

Surma & Co. include several suggestions to help students highlight more effectively.

For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.

(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)

And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.

Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.

You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.

For other suggestions about highlighting, especially as a form of retrieval practice, click here.

 

Let’s Have More Fun with the Correlation/Causation Muddle
Andrew Watson

We’ve explored the relationship of correlation and causation before on the blog.

In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.

DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.

On some occasions, we really ought to pay attention to correlation.

More Fun

I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.

If you'd like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.

Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
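If you want to see what such a number amounts to, here's a minimal sketch of a Pearson correlation in Python. The two series below are invented for illustration (they only mimic the shape of the website's chart); the point is that two causally unrelated trends can correlate almost perfectly.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented numbers, roughly the shape of the website's chart -- not official data.
divorce_rate_maine = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.1, 4.1]
margarine_per_capita = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]

print(round(pearson_r(divorce_rate_maine, margarine_per_capita), 3))  # about 0.99
```

Run it and you get an r above 0.99, which tells you only that the two lines move together; it says nothing about why.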

“Wait Just a Minute!”: The Benefits of Procrastination?
Andrew Watson

“A year from now, you’ll wish you had started today.”


This quotation, attributed to Karen Lamb, warns us about the dangers of procrastination. Presumably our students would propose a slightly modified version:

“The night before the test, I’ll wish I had started studying today.”

Does procrastination ever help? Is there such a thing as “beneficial procrastination”?

Types of Procrastination

I myself was intrigued when recently asked this question.

(True story: I was the President-To-Be of the Procrastinators’ Society in my high school. I would surely have been elected, but we never scheduled the meeting.)

Sure enough, researchers have theorized that we procrastinate for different reasons and in different ways.

Many of us, of course, procrastinate because we can’t get ourselves organized to face the task ahead.

(Mark Twain assures us he never put off until tomorrow that which he could do the day after tomorrow.)

Danya Corkin and colleagues wondered about a different, deliberate kind of procrastination: something they call "active delay."

Active delay includes four salient features:

First, students intentionally decide to postpone their work. It’s not a haphazard, subconscious process.

Second, they like working under pressure.

Third — unlike most procrastinators — they get the work done on time.

Fourth, they feel good about the whole process.

What did Corkin & Co. find when they looked for these distinct groups?

The Benefits of “Active Delay”

As is often the case, they found a mixed bag of results.

To their surprise, procrastinators and active delayers adopted learning strategies (rehearsal, elaboration, planning, monitoring) roughly equally.

Unsurprisingly, procrastinators generally followed unproductive motivational pathways. (If you follow Carol Dweck’s work, you know about the dangers of “performance goals” and “avoidance goals.”)

And, the big headline: procrastination led to lower grades. Active delay led to higher grades.

Classroom Implications

This research gives teachers a few points to consider.

First: both kinds of procrastination might look alike to us. However, they might lead to quite different results.

Even if students procrastinate from our perspective, we can distinguish between two categories of procrastination. And, we should worry less about “active delay” than good, old-fashioned putting stuff off because I can’t deal with it.

Second: even though “active delay” leads to more learning than “procrastination,” both probably produce less learning than well-scheduled learning.

As we know from many researchers, spreading practice out over time (the spacing effect) yields more learning than bunching it all together.

Active delay might not be as bad, but it’s still bad for learning.

Finally: if you’re an “active delayer,” you might forgive yourself. As long as you’re choosing delay as a strategy — especially because you work best under pressure — then this flavor of procrastination needn’t bring on a bout of guilt.

Me: I’m going to watch some football…

True/False: Grades Motivate Students to Study Better?
Andrew Watson

The following story is true. (The names have been left out because I’ve forgotten them.)


When I attended graduate school in education, I handed in my first essay with some trepidation, and lots of excitement.

Like my classmates, I had worked hard to wrestle with the topic: how best to critique a study’s methodology. Like my classmates, I wanted to know how I could do better.

When we got those essays back, our TAs had written a number at the end. There were, quite literally, no other marks on the paper — much less helpful comments. (I’m an English teacher, so when I say “literally” I mean “literally.”)

We then sat through a slide show in which the head TA explained the most common errors, and what percentage of us had made each one.

Here’s the kicker. The head TA then said:

“Your TAs are very busy, and we couldn’t possibly meet with all of you. So, to be fair, we won’t discuss these essays individually with any of you.”

So, in a SCHOOL OF EDUCATION, I got exactly NO individual feedback on my essay. I have little idea what I did right or wrong. And, I have no idea whatsoever how I could have done better.

How’s that for teaching excellence?

Grades and Motivation: Today’s Research

My point with this story is: for me, the experience of getting a grade without feedback was a) demotivating, b) infuriating, and c) useless.

If you’d like to rethink your school’s grading strategy, my own experience would point you in a particular direction.

However: you’re not reading this blog to get anecdotes. If you’re in Learning and the Brain world, you’re interested in science. What does research tell us about grades and motivation?

A recent study on “The Impact of Grades on Student Motivation” has been getting some Twitter love.

The researchers surveyed students at a college that has grades only, a different college that offers narrative feedback only, and two colleges that use both. They also interviewed students at one of the “hybrid” colleges.

What did they find?

They didn’t pull any punches:

“Grades did not enhance academic motivation.”

“Grades promoted anxiety, a sense of hopelessness, social comparison, as well as a fear of failure.”

“In contrast, narrative evaluations supported basic psychological needs and enhanced motivation.”

Briefly: grades demotivate, while narrative feedback helpfully focuses students on useful strategies for improvement.

Certainly these conclusions align with my own grad-school experience.

Not So Fast

Despite these emphatic conclusions, and despite the Twitter love, teachers who want to do away with grades should not, in my view, rely too heavily on this study.

Here’s why:

First: unless you teach in a college or university, research with these students might not apply to your students. Motivation for 2nd and 3rd graders might work quite differently than motivation for 23-year-olds.

Second: most college and university students, unlike most K-12 students, have some choices about the schools they attend and the classes they take.

In other words: students with higher degrees of academic motivation might be choosing colleges and courses with narrative feedback instead of grades.

It’s not clear if their level of motivation results from or causes their choice of college. Or, perhaps, both.

(To be clear, the researchers acknowledge this concern.)

Third: in my experience, most K-12 teachers combine letter or number grades with specific feedback. Unlike my TAs, who gave me a number without guidance, teachers often provide both a number and specific guidance.

Fourth: the study includes a number of troubling quirks.

The interview portion of the study includes thirteen students. It is, ahem, unusual to draw strong conclusions from interviews with 13 people.

The interviewer was a student who already knew some of the interviewees. Their prior relationship might well influence their answers to the interview questions.

More than any study I’ve read, this one includes an overtly political and economic perspective. Research like this typically eschews a strong political stance, and its presence here is at odds with research norms. (To be clear: researchers have political opinions. It’s just very strange to see them in print.)

Given these concerns — big and small — we should look elsewhere for research on grades and motivation to guide our schools and our own practice.

Earlier Thoughts

We have, of course, often written about grades and motivation here on the blog. For example:

In this article, Doug Lemov argues that — although imperfect — grades are the best way to ensure that scarce resources aren't given entirely to well-connected people.

In this article, we look at the Mastery Transcript movement: a strategy to provide lots of meaningful feedback without the tyranny of grades and transcripts.

Your thoughts on grades and grading are welcome: please share your experience in the comments.

 

 

Physics and Engineering: My New Year’s Resolution
Andrew Watson

 

Over on Twitter, @dylanwiliam wrote:

“[P]hysics tells you about the properties of materials but it’s the engineer who designs the bridge. Similarly, psychology tells us about how our brains work, but it’s teachers who craft instruction.”

In other words, teachers should learn a great deal about psychology from psychologists.

(And should learn some things about neuroscience from neuroscientists.)

But the study of psychology doesn't — and can't — tell us exactly how to teach. We have to combine the underlying psychological principles (that's "physics" in Wiliam's analogy) with the day-to-day gritty demands of the environment ("engineering").

And so, my clarifying New Year’s resolution:

Study physics to be a better engineer.

I hope you’ll join me this year, and share your wisdom!

New Research: Personal Best Goals (Might) Boost Learning
Andrew Watson

Some research-based suggestions for teaching require a lot of complex changes. (If you want to develop an interleaved syllabus, you’re going to need some time.)


Others couldn’t be simpler to adopt.

Here’s a suggestion from researchers Down Under: encourage your students to adopt “personal best goals.”

The Research

In a straightforward study, Andrew Martin and Australian colleagues asked 10- to 12-year-olds to solve a set of math problems. After each student worked for one minute, she learned how well she had done on that group of problems.

Students then worked that same set of problems again. Martin measured their improvement from the first to the second attempt.

Here’s the key point: after half of the students heard their score, they got these additional instructions:

“That is your Personal Best score. Now we’re going to do these questions again, and I would like you to set a goal where you aim to do better on these questions than you did before.”

The other half of the students simply heard their score and were told to try the problems again.

Sure enough, this simple “personal best” prompt led to greater improvement than in the control group.

To be clear: the difference was statistically significant, but relatively small. The Cohen's d was 0.08, lower than the level that typically gets my attention.

However, as the researchers point out, perhaps the structure of the study kept that value low. Given the process — students worked the same problem sets twice — the obvious thing for students to do is strive to improve performance on the second iteration.

In other words: some students might have been striving for “personal bests” even when they weren’t explicitly instructed to do so.

In my own view, a small Cohen's d weighs heavily against advice that is difficult to implement. So, if interleaving leads to only a small bump in learning, it might not be worth it; as noted above, interleaving takes a lot of planning time.

In this case, the additional instruction to “strive for your personal best” has essentially no cost at all.
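Since effect sizes come up often on this blog, here's a minimal sketch of how a Cohen's d is computed: the difference between two group means, divided by their pooled standard deviation. The scores below are invented for illustration (they are not Martin's data); they simply show how a small d arises when two groups' means barely differ relative to the spread within each group.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = sqrt(((n_a - 1) * stdev(group_a) ** 2 +
                      (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Invented improvement scores (second attempt minus first) -- NOT the study's data.
personal_best_group = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
control_group       = [3, 5, 4, 5, 2, 5, 4, 3, 4, 4]

print(round(cohens_d(personal_best_group, control_group), 2))  # a small effect
```

By convention, a d around 0.2 counts as small, 0.5 as medium, and 0.8 as large, so a d of 0.08 is indeed tiny.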

Classroom Implications

Martin’s study is the first I know of that directly studies this technique.

(Earlier work, well summarized by Martin, looks at self-reports by students who set personal best goals. That research is encouraging — but self-reports aren’t as persuasive as Martin’s design.)

For that reason, we should be careful and use our best judgement as we try out this idea.

For example:

I suspect this technique works when used occasionally, not constantly.

In this study, the technique was used for the very short term: the personal best goals applied to the very next minute.

One intriguing suggestion that Martin makes: teachers could encourage personal best goals for the process not the result. That is: the goal could be “ask for help before giving up” rather than “score higher than last time.”

One final point stands out in this research. If you’re up to date on your Mindset research, you know the crucial difference between “performance goals” and “learning goals.”

Students with “performance goals” strive, among other things, to beat their peers. Of course, “personal best goals” focus not on beating peers but on beating oneself. They are, in other words, “learning goals.”

And, we’ve got LOTS of research showing that learning goals result in lots more learning.

Bit by Bit, Putting It Together
Andrew Watson

Over at Teacherhead, Tom Sherrington has posted a form that teachers can use for lesson plans.

He has put together different versions: one filled-in with explanations, another left blank for teachers to use, yet another for adapting and editing.

The Bigger Picture

In the world of Learning and the Brain, researchers explore precise, narrow questions about learning. The result: lots of precise, narrow answers.

For instance: Technique X helped this group of bilingual 5th graders in Texas learn more about their state constitution.

How might Technique X help you? With your students? And your curriculum?

And, crucially, how does Technique X fit together with Technique Y, Technique 7, and Technique Gamma — which you also heard about at the conference?

As you’ve heard me say: only the teacher can figure out the best way to put the research pieces together. Once you’ve gathered all the essential information, you’re in the best position to conjure the optimal mix for your specific circumstances.

All Together Now

And, that’s why I like Sherrington’s lesson planning form so much.

You’ve seen research into the importance of “activating prior knowledge.” You’ve also seen research into the importance of “retrieval practice.” You know about “prior misconceptions.” And so forth…

But, how do those distinct pieces all fit together?

This lesson planning form provides one thoughtful answer.

To be clear: this answer doesn’t have to be your answer. For this reason (I assume), Sherrington included a form that you can edit and make your own.

The key message as you start gearing up for January: research does indeed offer exciting examples and helpful new ways to think about teaching and learning.

Teachers should draw on that research. And: we’ll each put the pieces together in our own ways.

New Year, New Habits: More Learning!
Andrew Watson

When the school year starts back up in January, teachers would LOVE to use this fresh start for good.


In particular, our students might have developed some counter-productive habits during the first half of the year. Wouldn’t it be great if we could help them develop new learning habits?

Maybe homework would be a good place to start. Better homework habits should indeed lead to more learning.

The Problem: Old Habits

When I sit down to do my homework, the same problems always crop up.

My cell phone buzzes with texts.

I’m really tired. SO tired.

The abominable noise from my brother’s room (heavy metal horror) drives me crazy.

I try to solve all these problems when they appear, but they get me so distracted and addled that I just can’t recover quickly. Result: I’m just not very efficient.

Wouldn’t it be great if I could develop new habits to solve these problems? What would these new learning habits be?

New Learning Habits: “Implementation Intentions”

We actually have a highly effective habit strategy to deal with this problem. Sadly, the solution has a lumpish name: “implementation intentions.”

Here’s what that means.

Step 1: I make a list of the problems that most often vex me. (In fact, I’ve already made that list — see above.)

Important note about step 1: everyone’s list will be different. The problems that interfere with my homework might not bother other people. (Apparently, some folks like my brother’s dreadful music.)

Step 2: decide, IN ADVANCE, how I will solve each problem.

For example, when my cell phone buzzes, I won’t look at the message. Instead, I will turn the phone to airplane mode.

When I feel tired, I’ll do 20 jumping jacks. If that doesn’t work, I’ll take a quick shower. That always wakes me right up.

When my brother cranks his stereo, I’ll move to my backup study location in the basement.

Just as everyone faces different problems, everyone will come up with different solutions.

Step 3: let the environment do the work.

Here’s the genius of “implementation intentions”: the environment does the work for us.

Now, when my phone buzzes, I already know what to do. I’ve already made the decision. I don’t have to make a new decision. I simply execute the plan.

Phone buzzes, I switch it to airplane mode. Done.
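If it helps to see the structure made explicit: an implementation intention is essentially a pre-decided cue-and-response pair, which you can sketch as a simple lookup table. The cues and responses below are the examples from this post; this is just an illustration of the idea, not a suggestion that anyone should manage homework with code.

```python
# Each implementation intention is a pre-decided "if cue, then response" pair.
# The cues and responses are the examples from this post.
if_then_plans = {
    "phone buzzes": "switch the phone to airplane mode without reading the message",
    "feeling tired": "do 20 jumping jacks; if that fails, take a quick shower",
    "brother cranks his stereo": "move to the backup study spot in the basement",
}

def respond_to(cue):
    """Step 3: the environment supplies the cue; the pre-made plan supplies the decision."""
    return if_then_plans.get(cue, "no plan yet -- decide one in advance (Step 2)")

print(respond_to("phone buzzes"))
```

The whole point of Step 2 is that the lookup is built before the cue ever arrives, so no fresh decision is needed in the moment.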

New Learning Habits: the Research

Now, I have to be honest with you. When I first read about this strategy, I was REALLY SKEPTICAL.

I mean, it’s so simple. How can this possibly work?

The theory — “the environment does the work, activating a decision chain that’s already been planned” — sort of makes sense, but: really?

In fact, we do have lots of good research showing that this strategy works.

For instance, Angela Duckworth (yes, that Angela Duckworth) found that students who went through this process completed 60% more practice problems for the PSAT than those who simply wrote about their goals for the test.

You read that right: 60% more practice problems.

How’s that for new learning habits?

Classroom Applications

What does this technique look like in your classroom?

Of course: everyone reading this blog teaches different content to different students at different schools. And, we are all different people.

So, your precise way of helping your students will differ from my way.

I’m including a link to Ollie Lovell’s post on this topic. To be clear, I’m not suggesting that you follow his example precisely. After all, you and Ollie are two different people.

However, I am suggesting that his example helpfully illustrates the concept. And, it will give you ideas on how best to apply it in your world.

landb

We don’t focus a lot on seasonal cues here at the blog, but…

Given that many of us are celebrating holidays about now, perhaps you’d like a present.

(Trust me: it’s information every teacher wants…)