There’s No Polite Way to Say “I Told You So”
Andrew Watson

Back in 2014, Pam Mueller and Dan Oppenheimer made headlines with their wittily titled study “The Pen Is Mightier Than The Keyboard.”

In that study, they found that students learn more from taking handwritten notes during a lecture than from laptop notes. Their conclusions spawned a thousand gloating posts. And (I don’t doubt) a multitude of well-intentioned anti-laptop policies.

Since I first read the study, I’ve been shouting that its conclusions simply don’t hold up.

Why?

Because M&O’s conclusions hold water only if you believe students can’t learn new things.

(That’s a very strange belief for teachers to have.)

If you believe that students can learn new things, then you believe that they can learn to take laptop notes correctly.

(“Correctly” = “rewriting the lecture’s main points in your own words; don’t just transcribe verbatim”)

If they do that, then this famous study actually suggests laptop notes will enhance learning, not detract from it.

You can find a summary of my argument — and its limitations — here.

Today’s News

Scholars have recently published an attempted replication of Mueller and Oppenheimer’s study.

The results? Not much.

In the quiet language of research, they conclude:

“Based on the present outcomes and other available evidence, concluding which method [handwriting or laptops] is superior for improving the functions of note-taking seems premature.”

Not so much with the mighty pen.

By the way: a study from 2018 also concluded that — except in special circumstances — it just didn’t make much difference which method students used.

Why I Care

Perhaps surprisingly, I’m not an ardent advocate of laptop notes. Or, for that matter, of handwritten notes.

I advocate for teachers making classroom decisions informed by good research.

In this case, the Mueller and Oppenheimer study contains a perfectly obvious flaw. I have yet to meet anyone who doesn’t think a) that students can learn good note-taking skills, and b) that if they do, the study’s conclusions make no sense.

And yet, very few people have time to dig into research methodology. As a result, this one study has confirmed many teachers in their belief that technology harms learning during note-taking.

That statement might be true. It might be false. But this one study doesn’t give us good data to answer the question.

As a result, teachers might be taking laptops away from students who would learn more if they got to use them.

In brief: bad research harms learning.

I hope that this most recent study encourages teachers to rethink our classroom practices.

The Joys (and Stresses) of Teacher/Neuroscientist Collaboration
Andrew Watson

In an ideal world, teachers and researchers collaborate to bring out the best in each other.

So, I might invite Pooja Agarwal to study retrieval practice in my 10th grade English classroom.

My students and I benefit because we learn more about this great study technique.

Dr. Agarwal’s research benefits because she sees how the theory of the technique functions in the real, messy world of schools.

What’s not to like?

Theory, Meet Reality

Of course, our world rarely lives up to that ideal. Teacher/researcher collaboration creates all sorts of challenges.

We speak very different languages.

We operate within very different time frames.

At times, we highlight very different values.

All these differences can make communication, progress, and success difficult to achieve.

Today’s Example

Over at the Blog on Learning and Development, Meeri Kim has recently written about a collaboration between neuroscientists and Head Start teachers. More precisely, she interviewed two of the scientists in the program.

The result: a refreshingly frank description of the benefits and stresses of this collaboration.

For instance: the curriculum that the scientists created improved social skills and selective attention, while reducing problem behaviors. What teacher wouldn’t like those results?

As researcher Lauren Vega O’Neil noted:

A lot of the activities were packaged as fun games. The teachers loved having these ready-made activities that would help them long-term in the classroom.

And yet, this collaboration included confusions and stresses as well.

I worked mostly with teachers in classrooms during the study, and many of them jumped on board right away. But there was some pushback, particularly since some teachers saw this as yet another curriculum that they were being asked to implement. […] So they just saw our training program as something else that was being asked of them.

Suggestions?

Researcher Eric Pakulak has some surprisingly direct advice for colleagues who want to do classroom research:

Unfortunately, it seems to be all too common that researchers come in and don’t listen as much as they should to educators, thinking that it should be all about neuroscience, and only using education to implement what they know, as opposed to something more bi-directional.

Instead, we need to work together and really understand the ways that the experience of teachers and administrators can inform our work.

I agree with this advice wholeheartedly.

And, I likewise think that teachers can do more to understand the pressures on researchers.

For instance: research works by isolating variables.

Classroom researchers might have very particular scheduling needs. They can be certain that retrieval practice produces a benefit only if nothing else in the class is different. So, they might have to insist we schedule quizzes at a very specific point in the class — even if that schedule is highly inconvenient for us.

The more that teachers understand these research requirements, the more effectively we can create classroom research paradigms that both help our individual students learn and help researchers discover enduring truths about learning.

Let’s Have More Fun with the Correlation/Causation Muddle
Andrew Watson

We’ve explored the relationship of correlation and causation before on the blog.

In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.

DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.

On some occasions, we really ought to pay attention to correlation.

More Fun

I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.

If you’d like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.

Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
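If you want to see how easy it is to manufacture a headline-grabbing correlation, a few lines of Python make the point. (A minimal sketch: the two series below are invented to mimic the Maine/margarine example — they merely share a downward drift over ten years.)

```python
import numpy as np

# Two invented series that both drift downward over a decade --
# illustrative numbers, not the website's actual data.
margarine_lbs = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])
divorce_rate  = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1])

# Pearson's r: any two series that share a trend will correlate,
# whether or not one has anything to do with the other.
r = np.corrcoef(margarine_lbs, divorce_rate)[0, 1]
print(f"r = {r:.3f}")  # prints r = 0.993
```

A shared trend is all it takes: the correlation comes out nearly perfect, and it means nothing causal whatsoever.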

Avoiding Extremes: Common Sense in the Middle
Andrew Watson

Teachers feel passionate about our work. As a result, we can advocate exuberantly — occasionally too exuberantly? — for a particular position.

Advocates for (or against) Social-Emotional Learning can make zealous claims for their beliefs. Same for PBL, or direct instruction. Or for flipped classrooms, or traditional ones.

Of course, given the variety of teachers, students, schools, curricula — and the variety of societies in which they all operate — we perhaps should hesitate to make absolute claims.

Today’s Shining Example

I recently rediscovered a marvelous example of comfort with the ambiguous middle ground.

In this EdSurge post, Art Markman explains how mindfulness can help. And: how it might not help.

He explains the benefits of a growth mindset. And: its potential detriments.

When asked “if schools teach the way students learn,” he doesn’t scream “OF COURSE!” Nor does he bellow “NEVER!”

Instead, he offers this answer: “Sometimes, but often not.”

In other words: we’re not all spectacular successes or hideous failures. Contrary to much of the rhetoric you hear, we live somewhere in between.

I hope you enjoy reading this interview. And, that Markman’s sensible example offers guidance on moderation and nuance.

I myself look forward to reading more of his work.

How to Stop Cheating: An Awkward Debate
Andrew Watson

We would, of course, LOVE to prevent cheating.

It does moral damage to the cheater. It undermines classroom trust. And: it makes it hard for us to know how much our students are actually learning.

So: what techniques might help us do so?

How To Prevent Cheating: “Moral Reminders”

For some time now, Dan Ariely has made this his field. (Check out his book: The (Honest) Truth About Dishonesty: How We Lie to Everyone — Especially Ourselves.)

Over the years, he developed a clever research paradigm to see how much people cheat. With that in place, he tested various strategies to prevent cheating.

(He can also promote cheating, but that’s not exactly what we’re looking for.)

One strategy that has gotten a lot of attention over the years: moral reminders.

Ariely asked some students to write down ten books they had read in high school. He asked the others to write down the Ten Commandments.

That is: he made them think about foundational moral standards in our culture.

Sure enough, once reminded about moral standards, students cheated less. (The Cohen’s d was 0.48, which is an impressive effect for such an easy intervention.)
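(For readers who wonder what that number means: Cohen’s d expresses the gap between two group averages in units of their pooled standard deviation. Here’s a minimal sketch of the calculation, using invented group statistics — not Ariely’s actual data — chosen to land at d = 0.48.)

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two independent groups."""
    # Pool the two standard deviations, weighting each group's
    # variance by its degrees of freedom.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical numbers: a control group over-claims 3.0 answers on
# average (SD 2.5); a "moral reminder" group over-claims 1.8 (SD 2.5).
print(round(cohens_d(3.0, 2.5, 100, 1.8, 2.5, 100), 2))  # 0.48
```

By the usual conventions, d around 0.2 counts as small, 0.5 as medium, and 0.8 as large — so 0.48 from a one-minute writing prompt really would be impressive.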

Then Again, Maybe Not

In a study published just a month ago, Bruno Verschuere (and many others) retested Ariely’s hypothesis. Whereas the original study included 209 students, this meta-analysis included almost 4,700. That is … [checks math] … more than 20 times as many students.

Studying much more data, they found that “moral reminders” made no difference.

(In fact, they found that students who recalled the Ten Commandments were just a smidge likelier to cheat; but, the difference was tiny — not even approaching statistical significance.)

As we’ve seen in other cases of the “replication crisis,” seemingly settled results are back in question.

What’s a Teacher to Do?

Of course, Ariely had other suggestions as well. Signing pledges not to cheat reduces cheating. And, of course, teachers who supervise students closely reduce their opportunities to cheat.

As far as I know, these strategies have not been retested (although the second one seems too obvious to need much retesting).

For the time being, sadly, we should rely less on indirect moral reminders, and more on direct pledges — and direct supervision.

Using and Misusing Averages: The Benefits of Music?
Andrew Watson

The “10 Minute Rule” tells us that people can’t pay attention to something for longer than ten minutes.

As teachers, therefore, we shouldn’t do any one thing for longer than ten minutes. We need to mix it up a bit.

There’s an obvious problem here. The “rule” assumes that all people think alike — that one number is correct for all students in all situations.

That’s a bizarre assumption. It’s also wildly untrue.

(In fact, the “rule” itself has a weird history.)

The Bigger Picture: When teachers convert averages into absolutes — like, say, the 10 Minute Rule — we’re likely to miss out on the distinct needs of our particular students.

Today’s Example

Should students listen to music when they study or read?

If we go by averages, the answer is: no! We’ve got data to prove it. We’ve even got meta-analyses.

And yet, as Daniel Willingham argues, we should be aware of the variety in the data:

While the mean of the grand distribution may show a small hit to comprehension when background music plays, it’s NOT the case that every child reads a little worse with background music on.

He’s got a specific example in mind:

Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying.

In other words: psychology research can be immensely helpful. It can produce useful — even inspiring — guidance.

At the same time: when we work with our own students, we should always keep their individual circumstances in mind.

If this student right here needs music to stay focused and relaxed, then data on “the average student” just isn’t the right guide.
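To make Willingham’s point concrete, here’s a toy simulation (invented parameters, not real reading data): even when the average effect of background music on comprehension is slightly negative, a large minority of simulated students come out ahead with the music on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose each student's personal "music effect" on comprehension is
# drawn from a distribution centered just below zero (made-up numbers).
effects = rng.normal(loc=-0.1, scale=0.5, size=1000)

print(f"average effect: {effects.mean():+.2f}")        # a small overall hit
print(f"students helped: {(effects > 0).mean():.0%}")  # roughly 40%
```

The average tells us what to expect of a class; it tells us much less about the student sitting in front of us.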

 

Do We Actually Know What We Think We Know?
Andrew Watson

Teachers worry a lot about stress. For that reason, this blog regularly scans research updates for useful news about stress and stress reduction techniques.

(In fact, I posted about potential benefits of stress just two weeks ago.)

One of our favorite studies offers a surprisingly simple approach to stress reduction: writing about it.

We like this study so much, we’ve posted about it twice: here and here.

So here’s an important question: do we actually know that the strategy works?

Put in a different way: how do psychologists and neuroscientists know what they claim to know?

Replication, Replication Crisis

To answer this question, we can describe research methodology. The ritual of science publication, in fact, requires almost gruesomely soporific descriptions of EXACTLY what the scientists did.

(What computer model did they use? What software version did they use? What size was the font? How far away from the computer did the participants sit? You get the idea…)

Here’s another answer: to be sure we know what we know, we double check. When one group of researchers arrives at a conclusion — especially an exciting or novel conclusion — other researchers see if they get the same result.

To be fancy, we can say they try to “replicate” the original result. As we get more replications, we feel more confident that we know.

Here’s the problem: psychology is facing a replication crisis.

A Case in Point: Writing About Stress

In recent years, more and more psychology studies just don’t replicate. We thought we knew, but now we’re not so sure.

For example: that study we love? The one about “reducing stress by writing about it”? A new replication project tried to replicate it, and found no effect.

In this replication, the benefits of writing about stress were…nada.

In fact, this group of researchers tried to replicate all 21 experimental social-science studies published in Nature and Science between 2010 and 2015. Depending on how you count, 13 of them replicated. That means that 8 of them did not.

YIKES.

What’s a Teacher to Do?

This news might seem like a disaster. If so much research doesn’t replicate, we might be tempted to stop relying on science in our teaching.

Although understandable, this dispirited conclusion goes too far. Instead, the replication crisis should remind us of two essential points:

First: teachers should adopt teaching ideas that have lots of research support.

If you see one study suggesting that (say) chewing gum helps students convert fractions to decimals, DO NOT adopt that technique. At a minimum, you should look to see if other researchers have replicated that finding.

In fact, you’d like several researchers to have explored this idea — preferably in different grades and schools, with different research paradigms.

(You should also weigh the other pluses and minuses. All that gum chewing might be really distracting.)

Second: we should be ready to change our minds. If a wave of research points one direction, we might adopt a particular strategy.

However, if better research over longer periods of time discounts those findings, then we have to be willing to try something else.

The best-known example of this problem: “learning styles.” Early research suggested that they might exist, but the vast weight of evidence shows clearly that they don’t.

In brief: we should adopt evidence-based theories only if we’re willing to let go of them as further evidence dictates.

Final Thoughts

But what about that stress-reduction technique? Do we have to give it up, now that it didn’t replicate?

That’s a surprisingly complicated question.

True enough: this attempt at replication failed.

However, in the original study, researchers Ramirez and Beilock tried their technique twice — just to be sure they had it right.

In fact, they tried it twice in the psychology lab and twice more in a local high school.

And, a similar research paradigm arrived at similar results.

We can say that the study has an uneven replication record. That’s not the same thing as “didn’t replicate.”

For these reasons, I think we should put the “writing about stress” strategy in the “we just don’t know” category. It might work. It might not.

It might depend on circumstances that our research paradigms haven’t yet revealed.

When psychology research offers you contradictory advice, think about your own classroom circumstances and make the best decision you can.

That advice is complicated, but it’s certainly not a crisis.

 

Understanding Scanning Technology: When and Where in the Brain
Andrew Watson

The good folks over at TED-Ed have produced another helpful brain video — this one exploring different brain-scanning techniques.

https://www.youtube.com/watch?v=B10pc0Kizsc

This video does a particularly good job exploring both the strengths and the weaknesses of each technology.

Location, Location…oh, and Timing

In particular, EEG is very good at measuring timing precisely. Sadly, it can’t pinpoint location very accurately.

On the other hand, fMRI can zoom in on location within a few millimeters. However, its timing measurements are only rough-n-ready: within a few seconds or so.

Surprisingly, the video doesn’t discuss magnetoencephalography (MEG) — which measures the magnetic fields produced by the brain’s electrical activity, much as EEG measures the electrical signals themselves.

For fun: this video shows the MEG image when the brain reads the single word “dog.”

Can You Rely on Meta-analysis? Can You Doubt It?
Andrew Watson

Over at his blog Filling the Pail, Greg Ashman likes challenging popular ideas. In a recent post, he takes issue with meta-analysis as a way of analyzing educational research.

In the first place, Ashman argues — in effect — “garbage in, garbage out.” Combining badly-designed studies with well-designed studies still gives some weight to the badly-designed ones.
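To see the arithmetic behind that worry, here’s a minimal sketch of standard fixed-effect (inverse-variance) pooling, with invented effect sizes: one large, badly-designed study drags the pooled estimate far away from what the careful studies agree on.

```python
# Fixed-effect meta-analysis: the pooled effect is the
# inverse-variance weighted mean of the study effect sizes.
def pooled_effect(effects, variances):
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three careful studies find a small effect (d about 0.1)...
good = [(0.10, 0.02), (0.12, 0.02), (0.08, 0.02)]
# ...while one big, poorly-controlled study reports d = 0.9.
bad = [(0.90, 0.01)]

effects, variances = zip(*(good + bad))
print(round(pooled_effect(effects, variances), 2))  # 0.42
```

Because the weights depend only on precision, not on study quality, the big sloppy study dominates the pooled result — garbage in, garbage out.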

Of course, Ashman has some thoughtful suggestions as well.

Why Does It Matter?

Why should we care about such an obscure and complicated statistical technique?

Meta-analysis matters because we pay so much attention to it.

For instance: just a month ago, a pair of meta-analyses about Mindset Theory set off another round of anxiety. Edu-twitter lit right up with thoughtful scholars wondering if we should stop focusing so much on the right kind of praise.

Or: I frequently rebut claims about working memory training by citing this well-known meta-analysis by Melby-Lervag and Hulme.

If we’re going to rely so much on this technique, we should be clear-minded about its strengths and its weaknesses.

Vital Resources in Psychology: the Best Research for Teachers
Andrew Watson

Over the last ten years, I’ve found many articles and studies that I return to frequently. Some summarize lots of research suggestions. Others explore particular questions with verve and clarity.

I hope you enjoy these as much as I do.

Vital Resources in Psychology: Big Lists

Our students often confuse PERFORMANCE (a high score on a test) with LEARNING (enduring knowledge and skill). Nick Soderstrom sorts through all kinds of evidence to help teachers distinguish between the two. Helpfully, he includes evidence for both physical and cognitive learning.

Learning versus Performance: An Integrative Review, by Nick Soderstrom and Robert Bjork

This comprehensive (!) article examines research behind ten well-known teaching practices: from underlining to retrieval practice. In each case, it looks at the quality of evidence. It then helps you choose those that fit your subject and your students best. (Danger: several sacred oxen gored here.)

Improving Students’ Learning with Effective Learning Techniques, by John Dunlosky (and many others)

Deans for Impact have boiled their suggestions down to a list of six. You’ve got everything here from motivation to transfer. It also offers a solid list of sources when you want to check out the primary research.

The Science of Learning, by Deans for Impact

Vital Resources: Enlightening Studies

Regular readers of this blog know that “retrieval practice” helps students learn MUCH more effectively than simple review does. In brief: don’t have students reread a chapter. Have them quiz each other on the chapter. This kind of active recall fosters new learning. In this splendid study, a researcher, a teacher, and a principal move this finding out of the psychology lab and into the classroom.

The Value of Applied Research: Retrieval Practice Improves Classroom Learning and Recommendations from a Teacher, a Principal, and a Scientist, by Agarwal, Bain, and Chamberlain

In this marvelous study, researchers wonder if testing students on material before they’ve even seen it might help them ultimately learn it better. Here’s the fun part: when their first study suggests the answer is “yes,” they then repeat the study four more times in an attempt to prove themselves wrong.  Only when they can’t come up with any other explanations for their findings do they finally persuade themselves.

The Pretesting Effect: Do Unsuccessful Retrieval Attempts Enhance Learning?, by Richland, Kornell, and Kao