Here’s a remarkable story about potentially falsified research data.
The short version: researchers James Heathers and Nick Brown thought that Nicolas Guéguen’s research findings were both too sexy and too tidy.
Too sexy: Guéguen’s findings regularly made great headlines. For instance, his research purportedly shows that men voluntarily assist women who wear their hair loose much more often than those who wear it in a ponytail or a bun.
Talk about your gender-roles clickbait!
Too tidy: As Heathers and Brown considered Guéguen’s math, they realized that his numbers were…too round. When Guéguen calculated averages, he had to divide by 30 — because his study groups had 30 people in them.
But, his results often ended in improbably round figures: 1.60, 1.80, 2.80.
Mathematically speaking, that’s possible. But, when you’re dividing by 30, it’s wildly unlikely.
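If you like, you can check that intuition with a quick simulation. Here is a minimal sketch in Python; it assumes a hypothetical 0-to-5 rating scale and independent responses (not Guéguen’s actual scales or data), and estimates how often the mean of 30 whole-number scores lands on a round value like 1.60 or 2.80.

```python
import random

def round_mean_rate(group_size=30, trials=100_000):
    """Estimate how often the mean of `group_size` whole-number
    scores is 'round' (a multiple of 0.1, like 1.60 or 2.80)."""
    hits = 0
    for _ in range(trials):
        # Hypothetical 0-5 rating scale; real scales may differ.
        total = sum(random.randint(0, 5) for _ in range(group_size))
        # The mean total/group_size is a multiple of 0.1 exactly when
        # 10 * total is divisible by the group size (for groups of 30,
        # that means total must be divisible by 3).
        if (10 * total) % group_size == 0:
            hits += 1
    return hits / trials

p = round_mean_rate()
print(f"One round mean: about {p:.2f}")          # roughly 0.33
print(f"Ten round means in a row: {p**10:.6f}")  # roughly 0.00002
```

Any single round mean is unremarkable. A long run of them, though, is exactly the kind of pattern that strains belief.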
Heathers and Brown have now spent quite a long time — years, in fact — trying to get Guéguen to share his data and/or answer their questions.
As far as I know, they haven’t gotten answers yet. (Here’s a somewhat more recent article.)
What Teachers Should Do
First, keep in mind that scientists are people too. While most strive honorably to follow procedures and discover meaningful findings, a few will always cheat the system and abuse our trust.
That’s true in any profession. It’s true in psychology and neuroscience research. (It might even be true of teachers.)
Second, let this knowledge energize your skepticism.
Here’s my suggestion: never make changes to your classroom work without asking hard questions.
Think about it this way: if someone wanted to sell you something for $10,000, you’d need lots of confirmation. You wouldn’t take the seller’s word that the product was worth $10k. You’d investigate on your own.
For instance, ask…
Have other researchers gotten similar findings?
How aggressively have they tried to disprove them?
Is the suggestion strongly confirming a bias that everyone already holds?
So: your classroom is worth more than $10,000. You wouldn’t buy an expensive product without asking hard questions. Don’t adopt research suggestions without asking hard questions.
In the meantime, I’m going to put my hair up in a bun.
After some thought…perhaps some experimentation on my own…I decided not to follow this research advice.
Why did I “resist” these research findings? What prompted me to do so?
Education researcher Tim Cain divides possible answers into four categories. The verbs he uses to describe them are near-synonyms. But, he gives each one a distinct meaning to distinguish among the possibilities.
And, as you’ll see, three of the four choices sound really bad.
3 Bad Choices
Denial: denialists basically pretend that there is reasonable disagreement on a topic where none really exists. Example: companies that say smoking isn’t bad for your lungs, or historians who pretend the Holocaust didn’t happen.
For the most part, in Cain’s analysis, deniers strive to “protect self-esteem and status.”
Opposition: whereas denialists typically have power and want to protect it, oppositionists don’t have much power, and reject scientific findings whose adoption might continue their subjugation.
For instance, I might have rejected the drawing strategy because I didn’t think it worked (see below). But, I might also reject it because, as a teacher with relatively little cultural power, I don’t want to be bossed around by scientific researchers, who have more cultural standing than I do.
Rejection: rejection gets a little complicated. In this model, I accept research findings only if they BOTH help students learn AND make me look good. But, if they don’t hit both targets, I’m not interested.
So, for example, if drawing does help students remember, but doesn’t win me esteem in the faculty room, then I’m just not interested.
As you can see, these first three choices don’t seem very appealing. I’m oversimplifying a bit – but not a lot – to say that teachers who resist research for these reasons are being jerks.
Frankly, I’m feeling a bit stressed right now. Does Cain acknowledge that teachers have any good reasons to resist research findings?
One More?
Indeed, Cain does give us one more choice.
Dissent: if teachers think critically about research, we might see gaps, flaws, or logical leaps. Rather than being driven by the sinister motives outlined above, we might honestly – even insightfully – disagree with the arguments put before us.
Being a researcher, Cain wanted to know: which is it? Why do teachers ultimately decide not to follow researchers’ advice?
Are we protecting the power we have (“denial”)? Fighting to prevent others from getting even more power over us (“opposition”)? Focusing on prestige more than usefulness (“rejection”)?
Or, are we enhancing educational debate by thinking critically (“dissent”)?
The Big Reveal
I’ll cut to the chase: for the most part, Cain finds that we’re in the critical thinking business.
To arrive at this conclusion, Cain worked with several teachers at two schools in northern England. He gave them some research articles, and asked them to try out the researchers’ findings. He then met with them to talk over their work, and interviewed them about their conclusions.
Here’s what he found:
First: teachers ultimately agreed with and accepted significant chunks of the researchers’ conclusions and advice. They didn’t simply reject everything they read and undertook.
Second: at the same time, teachers didn’t see researchers’ conclusions as more important than their own. As Cain puts it:
Essentially, almost all the teachers saw the authority of the published research reports as provisional. They did not see the research as having greater authority than their own experience or other forms of information.
Third: when teachers did resist researchers’ conclusions, they did so for entirely plausible reasons.
They (plausibly) thought some of the studies contained contradictions. They (plausibly) saw some findings as out of date. And, they (plausibly) raised objections to research methodology.
They also – and I think this is very good news – emphasized the narrow particularity of research findings. As one teacher said:
If you researched in different schools, it would be different. If you had an inner-city school, a wealthy middle-class school, a private school, every one would be totally, totally different.
And another:
Does anything work for every single person? No, I don’t think there’s anything that will work exactly the same. It’s finding what’s right for your group: the age, the personalities.
(Regular readers of the blog know that I bang on about this point all the time, so I’m DELIGHTED to see it articulated so clearly here.)
Closing Thoughts
Cain (rightly) emphasizes that his study is early and exploratory. He worked with volunteers: that is, people who are likely to be interested in research in the first place. (If they weren’t interested, they wouldn’t have volunteered.)
And, like any study, this one has lots of limitations. For instance: these teachers worked in “Gifted and Talented” programs. Findings in other settings might be different.
But, at least initially, Cain’s findings show that teachers can be great partners for researchers. We’re not resisting for the sake of resisting.
Instead, we’re thinking critically about the limits of research, and the goodness of fit for our particular classrooms.
If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.
After all, we don’t want just any research. We want the good stuff.
But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?
Over at TES, Bridget Clay and David Weston have four suggestions.
Seek out review articles.
Don’t be impressed by lists.
Look for disagreement.
Don’t be impressed by one shiny new study.
Their post is clear and thoughtful; I encourage you to read it all.
Second Look
I want to go back to their third suggestion: “look for disagreement.” This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.
Here’s what I mean.
When I first started out in the brain-research world, I wanted to hear the enduring truths that researchers had discovered about learning.
I would then (nobly, heroically) enact those truths in my classroom.
As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)
Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.
Right?
Research Reality
Well, maybe. But, maybe not.
Researchers simply don’t discover “the truth about learning.” Instead, they try to disprove a particular claim in a particular way. If they can’t disprove it, then that claim seems slightly more plausible.
But, someone else might disprove it in some other way. Or, under some other conditions.
Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.
When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.
So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?
I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.
Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.
How do I do that?
Skeptical Research, with Boundaries
Believe it or not, start by going to Google.
Use words like “controversy” or “debate” or “untrue.”
So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)
You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)
When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.
So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?
Perhaps such light hampers the learning of math facts. What about critical thinking?
Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.
Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.
Back to Beginnings
Clay & Weston say: seek out disagreement.
I say: AMEN!
Science works by asking incremental questions and coming to halting, often-contradictory findings.
Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.
Brunskill does offer a strong conclusion in this debate. But just as important is the way he frames the discussion.
Following Rapoport’s Rules to Promote Civil Discourse (which I hadn’t heard of before), Brunskill sets himself several tasks.
First, he summarizes the opposite belief as accurately and fairly as he can. (The goal, according to Daniel Dennett, is that the other person say “Thanks, I wish I’d thought of putting it that way.”)
Second, he notes his points of agreement with that position, and (third) what he has learned while thinking about it.
Only then, fourth, does he get to express his disagreement, and advocate for a distinct point of view.
(By the way: you haven’t accidentally skipped a paragraph. I’ve deliberately not said what his conclusion is, because I want to focus on his methodology.)
The Takeaway
You might agree with Brunskill’s conclusion. Or, you might emphatically disagree with it.
If the latter, great news! You have an opportunity to follow his example.
How might you summarize his position as fairly as possible?
What do you agree with?
What did you learn?
Once you’ve answered those questions, then your rebuttal will be more persuasive, and more beneficial to the debate. I suspect it will also be more beneficial to you.
Teachers worry a lot about stress. For that reason, this blog regularly scans research updates for useful news about stress and stress reduction techniques.
One of our favorite studies offers a surprisingly simple approach to stress reduction: writing about it.
We like this study so much, we’ve posted about it twice: here and here.
So here’s an important question: do we actually know that the strategy works?
Put in a different way: how do psychologists and neuroscientists know what they claim to know?
Replication, Replication Crisis
To answer this question, we can describe research methodology. The ritual of science publication, in fact, requires almost gruesomely soporific descriptions of EXACTLY what the scientists did.
(What computer model did they use? What software version did they use? What size was the font? How far away from the computer did the participants sit? You get the idea…)
Here’s another answer: to be sure we know what we know, we double check. When one group of researchers arrives at a conclusion — especially an exciting or novel conclusion — other researchers see if they get the same result.
To be fancy, we can say they try to “replicate” the original result. As we get more replications, we feel more confident that we know.
Here’s the problem: psychology is facing a replication crisis.
A Case in Point: Writing About Stress
In recent years, more and more psychology studies just don’t replicate. We thought we knew, but now we’re not so sure.
For example: that study we love? The one about “reducing stress by writing about it?” A recent replication project tried to replicate it, and found no effect.
In this replication, the benefits of writing about stress were…nada.
In fact, this group of researchers tried to replicate all 21 social-science studies published in Nature and Science between 2010 and 2015. Depending on how you count, 13 of them replicated. That means that eight of them did not replicate.
YIKES.
What’s a Teacher to Do?
This news might seem like a disaster. If so much research doesn’t replicate, we might be tempted to stop relying on science in our teaching.
Although understandable, this dispirited conclusion goes too far. Instead, the replication crisis should remind us of two essential points:
First: teachers should adopt teaching ideas that have lots of research support.
If you see one study suggesting that (say) chewing gum helps students convert fractions to decimals, DO NOT adopt that technique. At a minimum, you should look to see if other researchers have replicated that finding.
In fact, you’d like several researchers to have explored this idea — preferably in different grades and schools, with different research paradigms.
(You should also weigh the other pluses and minuses. All that gum chewing might be really distracting.)
Second: we should be ready to change our minds. If a wave of research points one direction, we might adopt a particular strategy.
However, if better research over longer periods of time discounts those findings, then we have to be willing to try something else.
The best-known example of this problem: “learning styles.” Early research suggested that they might exist, but the vast weight of evidence shows clearly that they don’t.
In brief: we should adopt evidence-based theories only if we’re willing to let go of them as further evidence dictates.
Final Thoughts
But what about that stress-reduction technique? Do we have to give it up, now that it didn’t replicate?
That’s a surprisingly complicated question.
True enough: this attempt at replication failed.
However, in the original study, researchers Ramirez and Beilock tried their technique more than once — just to be sure they had it right.
In fact, they tried it twice in the psychology lab and twice more in a local high school.
So: a well-known education blog recently cited a study to support its teaching advice. Presumably, the study compares “the generation effect” to “rote memorization.” Presumably it concludes the former helps more than the latter.
Nope. Not a bit of it.
This study, instead, looks at different kinds of mistakes that students might make when they guess. It concludes that nearby guesses help, but far away guesses don’t.
Here’s the researchers’ summary: “errors benefit memory to the extent that they overlap semantically with targets.”
The study just isn’t about rote memorization. (Just to be sure, I checked with the study’s lead author. She confirms my understanding of the research.)
Lesson Learned
If you can’t even trust [name-of-well-known-education-blog], what can you do? How can you trust any news from the interwebs?
Here’s my advice:
First: anyone who quotes research should link to it. If you can’t find the link quite easily, don’t make any changes to your teaching.
Second: click the link. (Don’t yet make any changes to your teaching.)
Third: feel a little nervous. You’ve got several daunting pages of big words and bizarro graphs.
Fourth: Think about giving in and just making the changes that the article suggests. Don’t give in to that impulse. No changes yet.
Fifth: read the first paragraph of the research. It’s called the “abstract,” and it should summarize what the researchers did and what they concluded.
If the abstract DOESN’T include the point you read on the website, then you’re done. The research doesn’t focus on the argument that the blogger is making, and so shouldn’t have been cited in the first place.
Lesson learned. Nothing to see here. Move on.
In this case: when you read the abstract about the generation effect, you’d see that it never mentions rote memorization.
Of course, if the abstract DOES include that point — now you can start taking the blog seriously. You might not decide to change your teaching, but at least you’ve got an argument worth considering.
Flipping the Script
I had a similar experience (in reverse) about a month ago. This blog’s software notified me that another blog had linked to my article on omega-3 fish oil.
That post isn’t obviously the sexiest one on the website, so I was a bit surprised to see it getting internet love.
A brief investigation showed that the link came from a website in a foreign language — one that encouraged pregnant women to buy omega-3 fish oil supplements.
Hmmm.
My article summarized research showing that such supplements don’t help with working memory or in-school behavior. Not so much about benefits for pregnant women.
It seems clear that this other blog assumed its readers a) wouldn’t click on the link, and b) if they did, they wouldn’t be able to read English well.
The link was there to fool their readers, not help them.
The Headline
When we see a blog link to research, we can feel reassured. At the other end of the link, doubtless, lies research supporting the author’s point. Or so we assume. As this story shows, sometimes the link is just for show.
This kind of story crops up often. But, given the popularity of “brain training games,” it merits repetition: “brain training” doesn’t work.
Of course, school works. We can train our brains to know new things: the history of the Silk Road, pre-calculus, geology, good pottery-wheel technique. We can’t, however, train up working memory or general intelligence in some artificial way.
“We hypothesized that if you get really, really good at one [working memory] test by training for a very long time, maybe then you’ll get improvement on tests that are quite similar. Unfortunately, we found no evidence to support that claim,” says Bobby Stojanoski, a research scientist in the Owen Lab at Western’s world-renowned Brain and Mind Institute and lead author of the paper. “Despite hours of brain training on that one game, participants were no better at the second game than people who tested on the second game, but hadn’t trained on the first one.”
To be clear: I hope that some day we figure out a brain training technique that works.
If we could increase our students’ working memory capacity, that would — I think — revolutionize human cognition. But, we just don’t know how to do so yet.
Here’s a link to the underlying paper. And here’s a link to more thoughts on brain-training flim flam.
I spoke at this month’s Signs Summit in Chicago about problems in science communication. Here is the second half of what I said.
(You can find the first half, which focuses on “too much skepticism” here.)
We live in an age that has been called the “decade of the brain.” Of course, we’ve been saying that for about 25 years now, so we need a new phrase.
In this era of the brain (corpus callosum!), we can make almost any argument sound more persuasive (occipital lobe!) by throwing in fancy-but-irrelevant neuroscience terminology (ventral tegmental area!).
For instance, Deena Skolnick Weisberg’s team found that people can generally distinguish between coherent and circular science explanations. However: when Weisberg added irrelevant neuroscience terminology to those explanations, readers judged the circular explanations to be much more persuasive.
Weisberg’s team memorably describes this result as “the seductive allure of neuroscience.”
The resulting problem is not exactly that people dress up good teaching advice with needless neuroscience language. (Although that happens.)
The problem is not exactly that people dress up bad teaching advice with needless neuroscience language. (Although that happens.)
The problem is that many teachers respond so positively when we do.
For example: teachers can get absorbed in the differences between alpha, beta, and gamma waves — although this distinction has no practical teaching implications that I know of.
In other words: although some teachers respond to outside experts with excessive skepticism, others respond to fancy brain terminology with inadequate skepticism.
Multi-tasking and Beyond
For example: I once heard an MBE speaker explain that limits in the “bandwidth of the corpus callosum” make multi-tasking impossible.
(The corpus callosum connects the two hemispheres of our brains.)
This information leads teachers to an obvious question: what can we do to increase the capacity of a student’s corpus callosum, and thereby facilitate multi-tasking?
One answer to that question is: nothing.
Another answer is: its capacity increases as our students mature, so we don’t need to worry about doing so artificially.
Another really important answer is: that factual statement is simply false.
(When I asked for the research supporting this claim, the speaker declined to provide it to me. “You’d need a PhD in neuroscience to understand it” was the extraordinary excuse.)
Although we do have neural processing limits that preclude multi-tasking, the corpus callosum itself is one of the largest and densest structures in the brain. The idea that its “limits” hamper multi-tasking is silly.
This false information, at best, distracts teachers from the key point: brains don’t multi-task. We need to design lesson plans and assessments that don’t ask our students to do so.
At worst, this falsehood leads ultimately to disillusionment. When teachers discover that this expert was dressing up good advice with needless terminology, we’re less likely to believe experts who genuinely know what they’re talking about.
Reaching for a Solution
Because neuroscience terminology can seduce teachers, it can seduce those of us who explain brain research to teachers. How can we resist its allure?
The obvious answer inverts the Nike slogan: “Just Don’t Do It.” If the medial temporal lobe isn’t practically relevant to our explanation, we can just not mention it.
At the same time, I think we need a strategy to lead teachers away from this seductive allure. Here’s my mantra: “Don’t just do this thing. Think this way.”
In other words: psychology and neuroscience communicators shouldn’t simply give teachers a list of instructions to follow. (Use visuals! Normalize struggle! Interleave topics!)
Instead, we should encourage teachers to think about learning with the helpful categories that cognitive science supplies.
Imagine, for example, that a teacher is creating a lesson plan. She recalls that, last year, this lesson went quite badly. What should she do?
I’m saying: that’s the wrong question. She shouldn’t just do this thing. She should think this way.
So, last year — did class go badly because of an attention problem?
The teacher thinks for a while. She concludes: Nope.
Was it a motivation problem?
Nope.
Was it a working memory problem?
Oh, wow: YES. Absolutely.
Okay, once you know it’s a WM problem, you’ve got a long list of potential solutions — which one sounds most likely?
This approach doesn’t explicitly reject neuroscience terminology. It simply redirects communication away from that set of questions. After all, teachers rarely say “well, that was an insula problem.”
The insula is cool and fascinating. But, we don’t know what to do about an insula problem. We have LOTS of ideas about solving working memory problems.
Two Sentences
In sum, researchers can overcome too much skepticism only by knowing a teacher’s world and work with the clarity that comes from LOTS of daily experience.
And, we can redirect too little skepticism by moving beyond specific teaching suggestions to fostering thinking habits that best shape teaching and learning.
I was invited to speak at the March 4 Science SIGNS Summit on Saturday. The question was: what challenges bedevil the field of Science Communication? And, what can we do to fix them?
Here is, more or less, the first half of what I said:
I began my professional life as a high school English teacher, and loved that work for several centuries.
For teachers, the challenge of science communication is NOT that many (many) teachers are deeply skeptical of outside experts telling us how to be better teachers.
The problem is that many (many) teachers are RIGHTLY deeply skeptical of outside experts telling us how to be better teachers.
I don’t know how it works in other professions. But, oddly, people who have never taught anyone anything feel comfortable telling teachers how to do our jobs better. At any party you attend, someone will blithely tell you to do this one simple thing, and your job will get ever so much easier.
Do people tell surgeons which scalpel to use? Do people tell fire-fighters to be more like fire-fighters in Finland, where they’re so much better at fighting fires?
Just Do It
Here’s a true story. (I’m blurring the details, for obvious reasons.)
I once attended a conference (not, by the way, a Learning and the Brain conference) where the keynote speaker presented data on a specific teaching approach. The first question, quite marvelously, went like this:
Now that you’ve shared your ideas and your data with us, what plans do you have to hear back from teachers who try this idea? How can we keep this dialogue going, so you can learn from teachers as they learn from you?
The speaker thought for a moment and said: “We have no such plans. We’ve done the research. We know the right answer. Teachers should just do it. Do it.”
For this speaker, “interdisciplinary” apparently means “I tell you what to do, and you do it.”
“Too Much Skepticism”
For those of us in the field of Science Communication, teachers may seem unreasonably resistant to — unreasonably skeptical of — outside experts. We exhibit too much skepticism.
But I suspect most people don’t much like outside experts.
If we, as science communicators, want people to hear and believe what we say, we should practice becoming inside experts.
For example: imagine that I change my focus, and strive to improve education through the legislature. I want to persuade congress to revoke old laws and pass new ones.
My temptation, of course, would be to talk with representatives and senators about brain science. Here’s the psychology. Here’s the neuroscience. Legislators should pass laws that align with that research.
However, I suspect that such guidance would be much more effective if I knew in a gritty, day-to-day way what legislators really do.
To find this out, I should — perhaps — volunteer at a representative’s campaign, or intern at my senator’s office. I should attend committee meetings and community meetings. I might even (perish the thought) try my hand at political fundraising.
A New Language, or Two
On the one hand, none of these activities (the volunteering, the interning, the fundraising) has much to do with brain science.
On the other hand, all of them will help me learn to speak politics. And, if I can speak politics, I’ll be vastly more effective framing science research to persuade legislators to act.
I won’t be an outside expert. I will sound increasingly like an inside expert.
By implication, I’m encouraging two kinds of cross-training.
Psychology and neuroscience researchers can make their guidance more meaningful and useful by spending more time — real time — working with the classroom teachers they seek to guide.
And: classroom teachers can collaborate with researchers as equals if and only if we devote more time — real time — to mastering the complex disciplines that we hope will benefit our students.
This is a big ask. I’m not talking about days or weeks, but months and years.
I wish we had an easier way to accomplish this mission. And yet, for science communication to succeed — for brain researchers and teachers to work effectively together — we really do need this kind of sustained, gritty, determined exploration.
I’m thinking our students are worth it.
In Part II, I’ll consider the dangers of too LITTLE skepticism.
Teachers who want to shape our practice with research find ourselves taking on extra responsibilities.
In particular, we should probably hone our skills at investigating the research we use.
Are we sure — or, sure enough — that the research is well done? How do we know whom to trust?
Over at The Effortful Educator, Blake Harvard has 3 very practical suggestions for considering “research based” advice offered during PD sessions.
Note, in particular, his emphasis on adapting research to your own situation. As I said in a post just 2 days ago: don’t just do this thing. Instead, PD might help you think this way.
Happy Timing…
Just after I wrote the short post above, I found a useful addition over at 3-Star Learning Experiences.
In this post, Mirjam Neelen & Paul A. Kirschner offer a usefully complex mental model for integrating research, technology, society and teaching.
And, they’ve got specific strategies for evaluating the evidence that might influence your teaching practice.
When you first try them out, these strategies will take some time, and some real mental effort. But: once you get into these habits, you’ll find yourself seeing past weak claims with increasing frequency.
Consider, for example, a study claiming that sleeplessness interferes with men’s working memory, but not women’s.
And yet, when you read the methodology section of the study, the flaw quickly becomes clear. Researchers made claims about working memory, but they tested short-term memory.
As you read more and more studies with Neelen and Kirschner’s guidance in mind, you’ll spot this kind of discrepancy with increasing ease.