Andrew Watson

In a blog post, David Didau raises concerns about “the problem with teachers’ judgment.”

Here goes:

If a brain expert offers me a teaching suggestion, I might respond: “Well, I know my students, and that technique just wouldn’t work with them.”

Alas, this rebuttal simply removes me from the realm of scientific discussion.

Scientific research functions only when a claim can be disproven. Yet the claim “I know my students better than you do” can’t be disproven.

Safe in this “I know my students” fortress, I can resist all outside guidance.

As Didau writes:

If, in the face of contradictory evidence, we [teachers] make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please.

It’s important to note: Didau isn’t arguing with a straw man. He’s responding to a tweet in which a former teacher proudly announces: “I taught 20 years without evidence or research…I chose to listen to my students.”

(Didau’s original post is a few years old; he recently linked to it to rebut this teacher’s bluff boast.)

Beware Teachers’ Judgment, Part 2

In their excellent book Understanding How We Learn, the Learning Scientists Yana Weinstein and Megan Sumeracki make a related pair of arguments.

They perceive in teachers “a huge distrust of any information that comes ‘from above’” … and “a preference for relying on [teachers’] intuitions” (p. 22).

And yet, as they note,

There are two major problems that arise from a reliance on intuition.

The first is that our intuitions can lead us to pick the wrong learning strategies.

Second, once we land on a learning strategy, we tend to seek out “evidence” that favors the strategy we have picked. (p. 23)

Weinstein and Sumeracki cite lots of data supporting these concerns.

For instance, college students believe that rereading a textbook leads to more learning than does retrieval practice — even when their own experience shows the opposite.

The Problems with the Problem

I myself certainly agree that teachers should listen to guidance from psychology and neuroscience. Heck: I’ve spent more than 10 years making such research a part of my own teaching, and helping others do so too.

And yet, I worry that this perspective overstates its case.

Why? Because as I see it, we absolutely must rely on teachers’ judgment — and even intuition. Quite literally, we have no other choice. (I’m an English teacher. When I write “literally,” I mean literally.)

At a minimum, I see three ways that teachers’ judgments must be a cornerstone in teacher-researcher conversations.

Judgment #1: Context Always Matters

Researchers arrive at specific findings. And yet, the context in which we teach a) always matters, and b) almost never matches the context in which the research was done.

And therefore, we must rely on teachers’ judgments to translate the specific finding to our specific context.

For example: the estimable Nate Kornell has shown that the spacing effect applies to study with flashcards. In his research, students learned more by studying one pile of 20 flashcards than four piles of 5 flashcards. The bigger pile spaced out practice of specific flashcards, and thus yielded more learning.

So, clearly, we should always tell our students to study with decks of 20 flashcards.

No, we should not.

Kornell’s study showed that college students reviewing pairs of words learned more from 20-flashcard piles than 5-flashcard piles. But, I don’t teach college students. And: my students simply NEVER learn word pairs.

So: I think Kornell’s research gives us useful general guidance. Relatively large flashcard decks will probably result in more learning than relatively small ones. But, “relatively large” and “relatively small” will vary.

Doubtless, 2nd graders will want smaller decks than 9th graders.

Complex definitions will benefit from smaller decks than simple ones.

Flashcards with important historical dates can be studied in larger piles than flashcards with lengthy descriptions.

In every case, we have to rely on … yes … teachers’ judgments to translate a broad research principle to the specific classroom context.

Judgment #2: Combining Variables

Research works by isolating variables. Classrooms work by combining variables.

Who can best combine findings from various fields? Teachers.

So: we know from psychology research that interleaving improves learning.

We also know from psychology research that working memory overload impedes learning.

Let’s put those findings together and ask: at what point does too much interleaving lead to working memory overload?

It will be simply impossible for researchers to explore all possible combinations of interleaving within all levels of working memory challenge.

The best we can do: tell teachers about the benefits of interleaving, warn them about the dangers of WM overload – and let them use their judgment to find the right combination.

Judgment #3: Resolving Disputes

Some research findings point consistently in one direction. But, many research fields leave plenty of room for doubt, confusion, and contradiction.

For example: the field of retrieval practice is (seemingly) rock solid. We’ve got all sorts of research showing its effectiveness. I tell teachers and students about its benefits all the time.

And yet, we still don’t understand its boundary conditions well.

As I wrote last week, we do know that RP improves memory of specifically tested facts and processes. But we don’t know if it improves memory of facts and processes adjacent to the ones that got tested.

This study says it does. This one says it doesn’t.

So: what should the teacher do right now, before we get a consistent research answer? We should hear about the current research, and then use our best judgment.

One Final Point

People who don’t want to rely on teacherly judgment might respond thus: “Well, teachers have to be willing to listen to research, and to make changes to their practice based upon it.”

For example, that teacher who boasted about ignoring research is no model for our work.

I heartily – EMPHATICALLY – agree with that point of view.

At the same time, I ask this question: “Why would teachers listen to research-based guidance if those offering it routinely belittle our judgment in the first place?”

If we start by telling teachers that their judgment is not to be trusted, we can’t be surprised that they respond with “a huge distrust of any information that comes ‘from above’.”

So, here’s my suggestion: the field of Mind, Brain, and Education should emphasize equal partnership.

Teachers: listen respectfully to relevant psychology and neuroscience research. Be willing to make changes to your practice based upon it.

Psychology and neuroscience researchers: listen respectfully to teachers’ experience. Be up front about the limits of your knowledge and its applicability.

Made wiser by these many points of view, we can all trust each other to do our best within our fields of expertise.

There’s No Polite Way to Say “I Told You So”
Andrew Watson

Back in 2014, Pam Mueller and Dan Oppenheimer made headlines with their wittily titled study “The Pen Is Mightier Than The Keyboard.”

In that study, they found that students learn more from taking handwritten notes during a lecture than from laptop notes. Their conclusions spawned a thousand gloating posts. And (I don’t doubt) a multitude of well-intentioned anti-laptop policies.

Since I first read the study, I’ve been shouting that its conclusions simply don’t hold up.

Why?

Because M&O’s conclusions hold water only if you believe students can’t learn new things.

(That’s a very strange belief for teachers to have.)

If you believe that students can learn new things, then you believe that they can learn to take laptop notes correctly.

(“Correctly” = rewriting the lecture’s main points in your own words, rather than transcribing verbatim.)

If they do that, then this famous study actually suggests laptop notes will enhance learning, not detract from it.

You can find a summary of my argument — and its limitations — here.

Today’s News

Scholars have recently published an attempted replication of Mueller & Oppenheimer’s study.

The results? Not much.

In the quiet language of research, they conclude:

“Based on the present outcomes and other available evidence, concluding which method [handwriting or laptops] is superior for improving the functions of note-taking seems premature.”

Not so much with the mighty pen.

By the way: a study from 2018 also concluded that — except in special circumstances — it just didn’t make much difference which method students used.

Why I Care

Perhaps surprisingly, I’m not an ardent advocate of laptop notes. Or, for that matter, of handwritten notes.

I advocate for teachers making classroom decisions informed by good research.

In this case, the Mueller and Oppenheimer study contains a perfectly obvious flaw. I have yet to meet anyone who doesn’t think a) that students can learn good note-taking skills, and b) that if they do, the study’s conclusions make no sense.

And yet, very few people have time to dig into research methodology. As a result, this one study has confirmed many teachers in their belief that technology harms learning during note-taking.

That statement might be true. It might be false. But this one study doesn’t give us good data to answer the question.

As a result, teachers might be taking laptops away from students who would learn more if they got to use them.

In brief: bad research harms learning.

I hope that this most recent study encourages teachers to rethink our classroom practices.

The Joys (and Stresses) of Teacher/Neuroscientist Collaboration
Andrew Watson

In an ideal world, teachers and researchers collaborate to bring out the best in each other.

So, I might invite Pooja Agarwal to study retrieval practice in my 10th grade English classroom.

My students and I benefit because we learn more about this great study technique.

Dr. Agarwal’s research benefits because she sees how the theory behind the technique functions in the real, messy world of schools.

What’s not to like?

Theory, Meet Reality

Of course, our world rarely lives up to that ideal. Teacher/researcher collaboration creates all sorts of challenges.

We speak very different languages.

We operate within very different time frames.

At times, we highlight very different values.

All these differences can make communication, progress, and success difficult to achieve.

Today’s Example

Over at the Blog on Learning and Development, Meeri Kim has recently written about a collaboration between neuroscientists and Head Start teachers. More precisely, she interviewed two of the scientists in the program.

The result: a refreshingly frank description of the benefits and stresses of this collaboration.

For instance: the curriculum that the scientists created improved social skills and selective attention, while reducing problem behaviors. What teacher wouldn’t like those results?

As researcher Lauren Vega O’Neil noted:

A lot of the activities were packaged as fun games. The teachers loved having these ready-made activities that would help them long-term in the classroom.

And yet, this collaboration included confusions and stresses as well.

I worked mostly with teachers in classrooms during the study, and many of them jumped on board right away. But there was some pushback, particularly since some teachers saw this as yet another curriculum that they were being asked to implement. […] So they just saw our training program as something else that was being asked of them.

Suggestions?

Researcher Eric Pakulak has some surprisingly direct advice for colleagues who want to do classroom research:

Unfortunately, it seems to be all too common that researchers come in and don’t listen as much as they should to educators, thinking that it should be all about neuroscience, and only using education to implement what they know, as opposed to something more bi-directional.

Instead, we need to work together and really understand the ways that the experience of teachers and administrators can inform our work.

I agree with this advice wholeheartedly.

And, I likewise think that teachers can do more to understand the pressures on researchers.

For instance: research works by isolating variables.

Classroom researchers might have very particular scheduling needs. They can be certain that retrieval practice produces a benefit only if nothing else in the class is different. So, they might have to insist we schedule quizzes at a very specific point in the class — even if that schedule is highly inconvenient for us.

The more that teachers understand these research requirements, the more effectively we can create classroom research paradigms that both help our individual students learn and help researchers discover enduring truths about learning.

Let’s Have More Fun with the Correlation/Causation Muddle
Andrew Watson

We’ve explored the relationship of correlation and causation before on the blog.

In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.

DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.

On some occasions, we really ought to pay attention to correlation.

More Fun

I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.

If you’d like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.

Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
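
(If you want to see how easily this happens, here’s a minimal Python sketch — with made-up numbers, not the site’s actual data. Any two series that happen to drift in the same direction over time will produce a startlingly high Pearson correlation, with no causal link whatsoever.)

```python
# Hypothetical yearly series (invented for illustration; not the site's data).
# Both simply trend downward over a decade — that shared trend alone
# is enough to push the correlation coefficient toward 1.
from scipy.stats import pearsonr

divorce_rate  = [5.0, 4.8, 4.6, 4.5, 4.3, 4.2, 4.2, 4.1, 4.1, 4.0]  # per 1,000 people
margarine_lbs = [8.2, 7.5, 6.9, 6.1, 5.6, 5.0, 4.6, 4.3, 4.0, 3.7]  # per capita

r, p = pearsonr(divorce_rate, margarine_lbs)
print(f"r = {r:.2f}")  # close to 1.0 — correlation, not causation
```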

Avoiding Extremes: Common Sense in the Middle
Andrew Watson

Teachers feel passionate about our work. As a result, we can advocate exuberantly — occasionally too exuberantly? — for a particular position.

Advocates for (or against) Social-Emotional Learning can make zealous claims for their beliefs. Same for PBL, or direct instruction. Or for flipped classrooms, or traditional ones.

Of course, given the variety of teachers, students, schools, curricula — and the variety of societies in which they all operate — we perhaps should hesitate to make absolute claims.

Today’s Shining Example

I recently rediscovered a marvelous example of comfort with the ambiguous middle ground.

In this EdSurge post, Art Markman explains how mindfulness can help. And: how it might not help.

He explains the benefits of a growth mindset. And: its potential detriments.

When asked “if schools teach the way students learn,” he doesn’t scream “OF COURSE!” Nor does he bellow “NEVER!”

Instead, he offers this answer: “Sometimes, but often not.”

In other words: we’re not all spectacular successes or hideous failures. Contrary to much of the rhetoric you hear, we live somewhere in between.

I hope you enjoy reading this interview. And, that Markman’s sensible example offers guidance on moderation and nuance.

I myself look forward to reading more of his work.

How to Stop Cheating: An Awkward Debate
Andrew Watson

We would, of course, LOVE to prevent cheating.

It does moral damage to the cheater. It undermines classroom trust. And: it makes it hard for us to know how much our students are actually learning.

So: what techniques might help us do so?

How To Prevent Cheating: “Moral Reminders”

For some time now, Dan Ariely has made this his field. (Check out his book: The (Honest) Truth about Dishonesty: How We Lie to Everyone — Especially Ourselves.)

Over the years, he developed a clever research paradigm to see how much people cheat. With that in place, he tested various strategies to prevent cheating.

(He can also promote cheating, but that’s not exactly what we’re looking for.)

One strategy that has gotten a lot of attention over the years: moral reminders.

Ariely asked some students to write down ten books they had read in high school. He asked the others to write down the Ten Commandments.

That is: he made them think about foundational moral standards in our culture.

Sure enough, once reminded about moral standards, students cheated less. (The Cohen’s d was 0.48, which is an impressive effect for such an easy intervention.)
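
(For readers who want the formula behind that number: Cohen’s d is simply the difference between the two group means, divided by their pooled standard deviation. Here’s a quick Python sketch — the group summaries are hypothetical, chosen only to show the arithmetic, not Ariely’s actual data.)

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Effect size: difference in means over the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical summaries: a "book list" group vs. a "Ten Commandments" group.
d = cohens_d(3.1, 1.5, 100, 2.4, 1.4, 100)
print(round(d, 2))  # ~0.48 — a medium-sized effect
```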

Then Again, Maybe Not

In a study published just a month ago, Bruno Verschuere (and many others) retested Ariely’s hypothesis. Whereas the original study included 209 students, this meta-analysis included almost 4700. That is … [checks math] … more than 20 times as many students.

Studying much more data, they found that “moral reminders” made no difference.

(In fact, they found that students who recalled the Ten Commandments were just a smidge likelier to cheat; but the difference was tiny — not even approaching statistical significance.)

As we’ve seen in other cases of the “replication crisis,” seemingly settled results are back in question.

What’s a Teacher to Do?

Of course, Ariely had other suggestions as well. Signing pledges not to cheat reduces cheating. And, of course, teachers who supervise students closely reduce their opportunities to cheat.

As far as I know, these strategies have not been retested (although the second one seems too obvious to need much retesting).

For the time being, sadly, we should rely less on indirect moral reminders, and more on direct pledges — and direct supervision.

Using and Misusing Averages: The Benefits of Music?
Andrew Watson

The “10-Minute Rule” tells us that people can’t pay attention to something for longer than ten minutes.

As teachers, therefore, we shouldn’t do any one thing for longer than ten minutes. We need to mix it up a bit.

There’s an obvious problem here. The “rule” assumes that all people think alike — that one number is correct for all students in all situations.

That’s a bizarre assumption. It’s also wildly untrue.

(In fact, the “rule” itself has a weird history.)

The Bigger Picture: When teachers convert averages into absolutes — like, say, the 10-minute rule — we’re likely to miss out on the distinct needs of our particular students.

Today’s Example

Should students listen to music when they study or read?

If we go by averages, the answer is: no! We’ve got data to prove it. We’ve even got meta-analyses.

And yet, as Daniel Willingham argues, we should be aware of the variety in the data:

While [the] mean of the grand distribution may show a small hit to comprehension when background music plays, it’s NOT the case that every child reads a little worse with background music on.

He’s got a specific example in mind:

Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying.

In other words: psychology research can be immensely helpful. It can produce useful — even inspiring — guidance.

At the same time: when we work with our own students, we should always keep their individual circumstances in mind.

If this student right here needs music to stay focused and relaxed, then data on “the average student” just isn’t the right guide.

Do We Actually Know What We Think We Know?
Andrew Watson

Teachers worry a lot about stress. For that reason, this blog regularly scans research updates for useful news about stress and stress reduction techniques.

(In fact, I posted about potential benefits of stress just two weeks ago.)

One of our favorite studies offers a surprisingly simple approach to stress reduction: writing about it.

We like this study so much, we’ve posted about it twice: here and here.

So here’s an important question: do we actually know that the strategy works?

Put in a different way: how do psychologists and neuroscientists know what they claim to know?

Replication, Replication Crisis

To answer this question, we can describe research methodology. The ritual of science publication, in fact, requires almost gruesomely soporific descriptions of EXACTLY what the scientists did.

(What computer model did they use? What software version did they use? What size was the font? How far away from the computer did the participants sit? You get the idea…)

Here’s another answer: to be sure we know what we know, we double check. When one group of researchers arrives at a conclusion — especially an exciting or novel conclusion — other researchers see if they get the same result.

To be fancy, we can say they try to “replicate” the original result. As we get more replications, we feel more confident that we know.

Here’s the problem: psychology is facing a replication crisis.

A Case in Point: Writing About Stress

In recent years, more and more psychology studies just don’t replicate. We thought we knew, but now we’re not so sure.

For example: that study we love? The one about “reducing stress by writing about it”? A new replication project tried to replicate it, and got null results.

In this replication, the benefits of writing about stress were…nada.

In fact, this group of researchers tried to replicate all 21 studies published in Nature and Science between 2010 and 2015. Depending on how you count, 13 of them replicated. That means that 8 of them did not.

YIKES.

What’s a Teacher to Do?

This news might seem like a disaster. If so much research doesn’t replicate, we might be tempted to stop relying on science in our teaching.

Although understandable, this dispirited conclusion goes too far. Instead, the replication crisis should remind us of two essential points:

First: teachers should adopt teaching ideas that have lots of research support.

If you see one study suggesting that (say) chewing gum helps students convert fractions to decimals, DO NOT adopt that technique. At a minimum, you should look to see if other researchers have replicated that finding.

In fact, you’d like several researchers to have explored this idea — preferably in different grades and schools, with different research paradigms.

(You should also weigh the other pluses and minuses. All that gum chewing might be really distracting.)

Second: we should be ready to change our minds. If a wave of research points one direction, we might adopt a particular strategy.

However, if better research over longer periods of time discounts those findings, then we have to be willing to try something else.

The best-known example of this problem: “learning styles.” Early research suggested that they might exist, but the vast weight of evidence shows clearly that they don’t.

In brief: we should adopt evidence-based theories only if we’re willing to let go of them as further evidence dictates.

Final Thoughts

But what about that stress-reduction technique? Do we have to give it up, now that it didn’t replicate?

That’s a surprisingly complicated question.

True enough: this attempt at replication failed.

However, in the original study, researchers Ramirez and Beilock tried their technique more than once — just to be sure they had it right.

In fact, they tried it twice in the psychology lab and twice more in a local high school.

And, a similar research paradigm arrived at similar results.

We can say that the study has an uneven replication record. That’s not the same thing as “didn’t replicate.”

For these reasons, I think we should put the “writing about stress” strategy in the “we just don’t know” category. It might work. It might not.

It might depend on circumstances that our research paradigms haven’t yet revealed.

When psychology research offers you contradictory advice, think about your own classroom circumstances and make the best decision you can.

That advice is complicated, but it’s certainly not a crisis.

Understanding Scanning Technology: When and Where in the Brain
Andrew Watson

The good folks over at TED-Ed have produced another helpful brain video — this one exploring different brain-scanning techniques.

https://www.youtube.com/watch?v=B10pc0Kizsc

This video does a particularly good job exploring both the strengths and the weaknesses of each technology.

Location, Location…oh, and Timing

In particular, EEG is very good at measuring timing precisely. Sadly, it can’t pinpoint location very accurately.

On the other hand, fMRI can zoom in on location within a few millimeters. However, its timing measurements are only rough-and-ready: within a few seconds or so.

Surprisingly, the video doesn’t discuss magnetoencephalography (MEG) — which does with magnetic fields what EEG does with electrical signals.

For fun: this video shows the MEG image when the brain reads the single word “dog.”

Can You Rely on Meta-analysis? Can You Doubt It?
Andrew Watson

Over at his blog Filling the Pail, Greg Ashman likes challenging popular ideas. In a recent post, he takes issue with meta-analysis as a way of analyzing educational research.

In the first place, Ashman argues — in effect — “garbage in, garbage out.” Combining badly designed studies with well-designed studies still gives some weight to the badly designed ones.
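
(To make that concrete, here’s a simplified fixed-effect pooling sketch in Python, with invented effect sizes. Real meta-analyses are more sophisticated, but the weighting logic is the point: a badly designed, biased study still contributes to the pooled estimate in proportion to its precision, not its quality.)

```python
# Simplified fixed-effect meta-analysis: each study's effect size is
# weighted by the inverse of its variance. All numbers are invented.
def pooled_effect(effects, variances):
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

good_studies = ([0.10, 0.15, 0.12], [0.01, 0.02, 0.01])
with_garbage = ([0.10, 0.15, 0.12, 0.90], [0.01, 0.02, 0.01, 0.04])  # adds one biased study

print(round(pooled_effect(*good_studies), 2))  # 0.12
print(round(pooled_effect(*with_garbage), 2))  # 0.19 — pulled up by the biased study
```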

Of course, Ashman has some thoughtful suggestions as well.

Why Does It Matter?

Why should we care about such an obscure and complicated statistical technique?

Meta-analysis matters because we pay so much attention to it.

For instance: just a month ago, a pair of meta-analyses about Mindset Theory set off another round of anxiety. Edu-twitter lit right up with thoughtful scholars wondering if we should stop focusing so much on the right kind of praise.

Or: I frequently rebut claims about working memory training by citing this well-known meta-analysis by Melby-Lervåg and Hulme.

If we’re going to rely so much on this technique, we should be clear-minded about its strengths and its weaknesses.