If you keep up on brain news, you've probably had this experience many times:
You read a newspaper headline or a blog link, and news of a potential discovery gives you an adrenaline rush:
Eating garlic may prevent age-related memory loss!
Radioactive tracer shows effectiveness of cancer treatment!!
Ketamine may relieve depression!!!
Filled with enthusiasm, you read the accompanying article, only to discover that the research was done on mice. The resulting adrenaline crash might make you crave chocolate-chip cookies (according to research done in mice).
Today’s News
Of course, mouse research is super important to establish basic biological processes. But, it doesn’t give teachers useful guidance. Ever.
(Now might be a good time to repeat one of my few absolute rules:
NEVER, NEVER, NEVER,
Change your teaching practice
Based on research
Into non-human animals.)
To highlight the foolishness of the headline-hiding-the-mouse gambit, researcher James Heathers has created a vital new Twitter account: @justsaysinmice.
That’s it. When you follow his account, you’ll get occasional updates with links to articles drawing breathless conclusions about research. Heathers wants you to know that the research shows results in mice.
As of this writing, Heathers’s account has 29 tweets, and north of 45,000 followers.
(By the way, we’ve written about Heathers’s skepticism before. He is developing a specialty in debunking inaccurate science communication.)
So, to tune up your skepticism skills, I encourage you to make @justsaysinmice a part of your Twitter world.
But: most geneticists remember that psychology research is complicated. Knowledge and skill in one field don’t automatically translate to knowledge and skill in another.
In other words: psychologists will — most likely — have better insights into the strengths and weaknesses of psychology debates than will rocket scientists, or nuclear submariners, or even geneticists.
This point, of course, extends to other kinds of cross-disciplinary critiques. Here’s Nebel on the intersection of neuroscience and education:
A common misconception that we hear is that education and neuroscience are related disciplines and that those who study the brain must know how we learn.
While one can inform the other, I promise that training in neuroscience does NOT include an understanding of how those brain processes translate into classroom practices.
We often talk about a very necessary dialogue between educators and researchers, because very few individuals have extensive experience in both domains.
For all these reasons, neuroscientists (and psychologists) can provide teachers with useful perspectives. But, only teachers can decide what makes the most sense in the classroom.
…but Cost Doesn’t Matter
One of the stranger parts of the TES interview: Plomin’s insistence that only expensive changes benefit education.
“To think there is some simple cheap little thing that is going to make everybody fine, it is crazy,” he said in exclusive remarks published today.
“Good interventions are the most expensive and intensive.”
Pish posh.
If you’ve spent any time at a Learning and the Brain conference, you know that teachers can make all sorts of highly effective changes to their teaching at no cost whatsoever.
Using retrieval practice instead of simple review: no cost.
Managing students’ working memory load by…say…spreading instructions out over time: no cost.
Moderating students’ alertness levels by having them move: no cost.
Anyone who says we can’t improve teaching and learning without spending lots of money simply doesn’t understand teaching, learning, or the promise of educational psychology.
Here’s a remarkable story about potentially falsified research data.
The short version: researchers James Heathers and Nick Brown thought that Nicolas Guéguen’s research findings were both too sexy and too tidy.
Too sexy: Guéguen’s findings regularly made great headlines. For instance, his research shows that men voluntarily assist women who wear their hair loose much more than those who wear it in a ponytail or a bun.
Talk about your gender-roles clickbait!
Too tidy: As Heathers and Brown considered Guéguen’s math, they realized that his numbers were…too round. When Guéguen calculated averages, he had to divide by 30 — because his study groups had 30 people in them.
But, his results often ended in improbably round figures: 1.60, 1.80, 2.80.
Mathematically speaking, that’s possible. But, when you’re dividing by 30, it’s wildly unlikely.
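Heathers and Brown's "too tidy" intuition can be made concrete with a little arithmetic. The sketch below is my own hypothetical illustration (it is not their actual GRIM or SPRITE code): with groups of 30, a mean's second decimal can be zero only when the underlying sum of scores is divisible by 3, so only about a third of possible sums produce a "round" mean, and a long run of them is wildly improbable.

```python
# A numerical sketch of the "too tidy" intuition (my illustration,
# NOT Heathers & Brown's actual GRIM/SPRITE code).
# With 30 participants, a reported mean is an integer sum of scores
# divided by 30. That mean ends in a "round" second decimal of zero
# (1.60, 1.80, 2.80) exactly when 10 * sum / 30 = sum / 3 is a whole
# number, i.e. when the sum is divisible by 3.

N = 30  # participants per group

def looks_round(total: int) -> bool:
    """True if total / 30 has at most one decimal digit (e.g. 1.60)."""
    return total % 3 == 0

# Suppose scores run 1-5, so group sums run from 30 to 150.
possible_sums = range(N * 1, N * 5 + 1)
fraction = sum(looks_round(s) for s in possible_sums) / len(possible_sums)
print(f"fraction of 'round' means: {fraction:.3f}")  # roughly one third

# The chance that ten independent results ALL come out round:
print(f"probability of ten round means in a row: {fraction ** 10:.1e}")
```

Run the sketch and the ten-in-a-row probability comes out around two in a hundred thousand, which is roughly the intuition behind their critique: any single round mean is unremarkable, but a whole career of them demands explanation.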
Heathers and Brown have now spent quite a long time — years, in fact — trying to get Guéguen to share his data and/or answer their questions.
As far as I know, they haven’t gotten answers yet. (Here’s a somewhat more recent article.)
What Teachers Should Do
First, keep in mind that scientists are people too. While most strive honorably to follow procedures and discover meaningful findings, a few will always cheat the system and abuse our trust.
That’s true in any profession. It’s true in psychology and neuroscience research. (It might even be true of teachers.)
Second, let this knowledge energize your skepticism.
Here’s my suggestion: never make changes to your classroom work without asking hard questions.
Think about it this way: if someone wanted to sell you something for $10,000, you'd need lots of confirmation. You wouldn't take the seller's word that the product was worth $10k. You'd investigate on your own.
For instance, ask…
Have other researchers gotten similar findings?
How aggressively have they tried to disprove them?
Is the suggestion strongly confirming a bias that everyone already holds?
So: your classroom is worth more than $10,000. You wouldn’t buy an expensive product without asking hard questions. Don’t adopt research suggestions without asking hard questions.
In the meanwhile, I’m going to put my hair up in a bun.
After some thought…perhaps some experimentation on my own…I decide not to follow this research advice.
Why did I “resist” these research findings? What prompted me to do so?
Education researcher Tim Cain divides possible answers into four categories. The words he uses to label them are near-synonyms, but he gives each one a distinct meaning to distinguish among the possibilities.
And, as you’ll see, three of the four choices sound really bad.
3 Bad Choices
Denial: denialists basically pretend that there is reasonable disagreement on a topic where none really exists. Examples: companies that say smoking isn't bad for your lungs, or historians who pretend the Holocaust didn't happen.
For the most part, in Cain’s analysis, deniers strive to “protect self-esteem and status.”
Opposition: whereas denialists typically have power and want to protect it, oppositionists don't have much power, and reject scientific findings that might perpetuate their subjugation.
For instance, I might have rejected the drawing strategy because I didn’t think it worked (see below). But, I might reject it because – as a teacher with relatively little cultural power – I don’t want to be bossed around by scientific researchers – who have more cultural standing than I do.
Rejection: rejection gets a little complicated. In this model, I accept research findings only if they BOTH help students learn AND make me look good. But, if they don't hit both targets, I'm not interested.
So, for example, if drawing does help students remember, but doesn’t win me esteem in the faculty room, then I’m just not interested.
As you can see, these first three choices don’t seem very appealing. I’m oversimplifying a bit – but not a lot – to say that teachers who resist research for these reasons are being jerks.
Frankly, I’m feeling a bit stressed right now. Does Cain acknowledge that teachers have any good reasons to resist research findings?
One More?
Indeed, Cain does give us one more choice.
Dissent: if teachers think critically about research, we might see gaps, flaws, or logical leaps. Rather than being driven by the sinister motives outlined above, we might honestly – even insightfully – disagree with the arguments put before us.
Being a researcher, Cain wanted to know: which is it? Why do teachers ultimately decide not to follow researchers’ advice?
Are we protecting the power we have (“denial”)? Fighting to prevent others from getting even more power over us (“opposition”)? Focusing on prestige more than usefulness (“rejection”)?
Or, are we enhancing educational debate by thinking critically (“dissent”)?
The Big Reveal
I’ll cut to the chase: for the most part, Cain finds that we’re in the critical thinking business.
To arrive at this conclusion, Cain worked with several teachers at two schools in northern England. He gave them some research articles, and asked them to try out the researchers’ findings. He then met with them to talk over their work, and interviewed them about their conclusions.
Here’s what he found:
First: teachers ultimately agreed with and accepted significant chunks of the researchers' conclusions and advice. They didn't simply reject everything they read and undertook.
Second: at the same time, teachers didn’t see researchers’ conclusions as more important than their own. As Cain puts it:
Essentially, almost all the teachers saw the authority of the published research reports as provisional. They did not see the research as having greater authority than their own experience or other forms of information.
Third: when teachers did resist researchers' conclusions, they did so for entirely plausible reasons.
They (plausibly) thought some of the studies contained contradictions. They (plausibly) saw some findings as out of date. And, they (plausibly) raised objections to research methodology.
They also – and I think this is very good news – emphasized the narrow particularity of research findings. As one teacher said:
If you researched in different schools, it would be different. If you had an inner-city school, a wealthy middle-class school, a private school, every one would be totally, totally different.
And another:
Does anything work for every single person? No, I don’t think there’s anything that will work exactly the same. It’s finding what’s right for your group: the age, the personalities.
(Regular readers of the blog know that I bang on about this point all the time, so I’m DELIGHTED to see it articulated so clearly here.)
Closing Thoughts
Cain (rightly) emphasizes that his study is early and exploratory. He worked with volunteers: that is, people who are likely to be interested in research in the first place. (If they weren’t interested, they wouldn’t have volunteered.)
And, like any study, this one has lots of limitations. For instance: these teachers worked in “Gifted and Talented” programs. Findings in other settings might be different.
But, at least initially, Cain's findings show that teachers can be great partners for researchers. We're not resisting for the sake of resisting.
Instead, we’re thinking critically about the limits of research, and the goodness of fit for our particular classrooms.
If we’re going to rely on research to improve teaching — that’s why you’re here, yes? — we need to hone our skepticism skills.
After all, we don’t want just any research. We want the good stuff.
But, we face a serious problem. If we’re not psychology or neuroscience researchers, how can we tell what’s good?
Over at TES, Bridget Clay and David Weston have four suggestions.
Seek out review articles.
Don’t be impressed by lists.
Look for disagreement.
Don’t be impressed by one shiny new study.
Their post is clear and thoughtful; I encourage you to read it all.
Second Look
I want to go back to their third suggestion: "look for disagreement." This one habit, I believe, can make us all substantially wiser readers of classroom-relevant research.
Here’s what I mean.
When I first started in brain-research world, I wanted to hear the enduring truths that researchers discovered about learning.
I would then (nobly, heroically) enact those truths in my classroom.
As an entirely hypothetical example: imagine I heard a presentation about research showing that fluorescent lights inhibit learning. (To be clear: I have no idea if this is true, or even if anyone claims that it’s true. I just made this up as an example.)
Given that research finding, I would boldly refuse to turn on the fluorescent lights in my classroom, and set up several lamps and candles. Learning would flourish.
Right?
Research Reality
Well, maybe. But, maybe not.
Researchers simply don’t discover “the truth about learning.” Instead, they try to disprove a particular claim in a particular way. If they can’t disprove it, then that claim seems slightly more plausible.
But, someone else might disprove it in some other way. Or, under some other conditions.
Such an incremental, lumpy process isn’t surprising or strange. The system should work this way.
When Clay and Weston warn us against being impressed by one new study, they’re making exactly this point. If one research team comes to a conclusion once, that’s interesting … but we shouldn’t make any changes to our classrooms just yet.
So, back to my example. I’ve heard that presentation about fluorescent lights. What should I do next?
I should — for the time being — assume that the claim (“fluorescent lights inhibit learning”) is UNTRUE, and go look for counter-examples.
Or, perhaps, I should assume the claim is CONTROVERSIAL, and seek out evidence on both sides.
How do I do that?
Skeptical Research, with Boundaries
Believe it or not, start by going to Google.
Use words like “controversy” or “debate” or “untrue.”
So, I’d google “fluorescent lights and learning controversy.” The results will give you some ideas to play with. (In fact, I just tried that search. LOTS of interesting sources.)
You might go to Google Scholar, which provides links to scholarly articles. Try “fluorescent light learning.” (Again, lots of sources — in this case including information about ADHD.)
When you review several of these articles, you’ll start noticing interesting specifics. Researchers call them “boundary conditions.” A research claim might prove true for one subset of learners — that is, within these boundaries — but not another.
So: perhaps 3rd graders do badly with fluorescent lights. What about 10th graders?
Perhaps such light hampered learning of math facts. What about critical thinking?
Perhaps the researchers studied turtles learning mazes. Almost certainly, you aren’t teaching turtles. Until we test the claim with humans, we shouldn’t worry too much about turtle learning.
Perhaps — in fact, quite often — culture matters. Research findings about adolescence will differ in the US and Japan because cultural norms shape behavior quite differently.
Back to Beginnings
Clay & Weston say: seek out disagreement.
I say: AMEN!
Science works by asking incremental questions and coming to halting, often-contradictory findings.
Look for the contradictions. Use your teacherly wisdom to sort through them. You’ll know what to do next.
Brunskill does offer a strong conclusion in this debate. But just as important: the way he frames the discussion.
Following Rapoport’s Rules to Promote Civil Discourse (which I hadn’t heard of before), Brunskill sets himself several tasks.
First, he summarizes the opposite belief as accurately and fairly as he can. (The goal, according to Daniel Dennett, is that the other person say “Thanks, I wish I’d thought of putting it that way.”)
Second, he notes his points of agreement with that position, and (third) what he has learned while thinking about it.
Only then, fourth, does he get to express his disagreement, and advocate for a distinct point of view.
(By the way: you haven’t accidentally skipped a paragraph. I’ve deliberately not said what his conclusion is, because I want to focus on his methodology.)
The Takeaway
You might agree with Brunskill’s conclusion. Or, you might emphatically disagree with it.
If the latter, great news! You have an opportunity to follow his example.
How might you summarize his position as fairly as possible?
What do you agree with?
What did you learn?
Once you’ve answered those questions, then your rebuttal will be more persuasive, and more beneficial to the debate. I suspect it will also be more beneficial to you.
Teachers worry a lot about stress. For that reason, this blog regularly scans research updates for useful news about stress and stress reduction techniques.
One of our favorite studies offers a surprisingly simple approach to stress reduction: writing about it.
We like this study so much, we’ve posted about it twice: here and here.
So here’s an important question: do we actually know that the strategy works?
Put in a different way: how do psychologists and neuroscientists know what they claim to know?
Replication, Replication Crisis
To answer this question, we can describe research methodology. The ritual of science publication, in fact, requires almost gruesomely soporific descriptions of EXACTLY what the scientists did.
(What computer model did they use? What software version did they use? What size was the font? How far away from the computer did the participants sit? You get the idea…)
Here’s another answer: to be sure we know what we know, we double check. When one group of researchers arrives at a conclusion — especially an exciting or novel conclusion — other researchers see if they get the same result.
To be fancy, we can say they try to “replicate” the original result. As we get more replications, we feel more confident that we know.
Here’s the problem: psychology is facing a replication crisis.
A Case in Point: Writing About Stress
In recent years, more and more psychology studies just don’t replicate. We thought we knew, but now we’re not so sure.
For example: that study we love? The one about “reducing stress by writing about it?” A new large-scale replication project tried to replicate it, and found no effect.
In this replication, the benefits of writing about stress were…nada.
In fact, this group of researchers tried to replicate all 21 studies published in Nature and Science between 2010 and 2015. Depending on how you count, 13 of them replicated. That means that eight of them did not replicate.
YIKES.
What’s a Teacher to Do?
This news might seem like a disaster. If so much research doesn’t replicate, we might be tempted to stop relying on science in our teaching.
Although understandable, this dispirited conclusion goes too far. Instead, the replication crisis should remind us of two essential points:
First: teachers should adopt teaching ideas that have lots of research support.
If you see one study suggesting that (say) chewing gum helps students convert fractions to decimals, DO NOT adopt that technique. At a minimum, you should look to see if other researchers have replicated that finding.
In fact, you’d like several researchers to have explored this idea — preferably in different grades and schools, with different research paradigms.
(You should also weigh the other pluses and minuses. All that gum chewing might be really distracting.)
Second: we should be ready to change our minds. If a wave of research points one direction, we might adopt a particular strategy.
However, if better research over longer periods of time discounts those findings, then we have to be willing to try something else.
The best-known example of this problem: “learning styles.” Early research suggested that they might exist, but the vast weight of evidence shows clearly that they don’t.
In brief: we should adopt evidence-based theories only if we’re willing to let go of them as further evidence dictates.
Final Thoughts
But what about that stress-reduction technique? Do we have to give it up, now that it didn’t replicate?
That’s a surprisingly complicated question.
True enough: this attempt at replication failed.
However, in the original study, researchers Ramirez and Beilock tried their technique twice — just to be sure they had it right.
In fact, they tried it twice in the psychology lab and twice more in a local high school.
Presumably, the study compares “the generation effect” to “rote memorization.” Presumably it concludes the former helps more than the latter.
Nope. Not a bit of it.
This study, instead, looks at different kinds of mistakes that students might make when they guess. It concludes that nearby guesses help, but far away guesses don’t.
Here’s the researchers’ summary: “errors benefit memory to the extent that they overlap semantically with targets.”
The study just isn’t about rote memorization. (Just to be sure, I checked with the study’s lead author. She confirms my understanding of the research.)
Lesson Learned
If you can’t even trust [name-of-well-known-education-blog], what can you do? How can you trust any news from the interwebs?
Here’s my advice:
First: anyone who quotes research should link to it. If you can’t find the link quite easily, don’t make any changes to your teaching.
Second: click the link. (Don’t yet make any changes to your teaching.)
Third: feel a little nervous. You’ve got several daunting pages of big words and bizarro graphs.
Fourth: Think about giving in and just making the changes that the article suggests. Don’t give in to that impulse. No changes yet.
Fifth: read the first paragraph of the research. It’s called the “abstract,” and it should summarize what the researchers did and what they concluded.
If the abstract DOESN’T include the point you read on the website, then you’re done. The research doesn’t focus on the argument that the blogger is making, and so shouldn’t have been cited in the first place.
Lesson learned. Nothing to see here. Move on.
In this case: when you read the abstract about the generation effect, you’d see that it never mentions rote memorization.
Of course, if the abstract DOES include that point — now you can start taking the blog seriously. You might not decide to change your teaching, but at least you’ve got an argument worth considering.
Flipping the Script
I had a similar experience (in reverse) about a month ago. This blog’s software notified me that another blog had linked to my article on omega-3 fish oil.
That post isn’t obviously the sexiest one on the website, so I was a bit surprised to see it getting internet love.
A brief investigation showed that the link came from a website in a foreign language — one that encouraged pregnant women to buy omega-3 fish oil supplements.
Hmmm.
My article summarized research showing that such supplements don’t help with working memory or in-school behavior. Not so much about benefits for pregnant women.
It seems clear that this other blog assumed its readers a) wouldn’t click on the link, and b) if they did, they wouldn’t be able to read English well.
The link was there to fool their readers, not help them.
The Headline
When we see a blog link to research, we can feel reassured. At the other end of the link, doubtless, lies research supporting the author’s point.
This kind of story crops up often. But, given the popularity of “brain training games,” it merits repetition: “brain training” doesn’t work.
Of course, school works. We can train our brains to know new things: the history of the Silk Road, pre-calculus, geology, good pottery-wheel technique. We can’t, however, train up working memory or general intelligence in some artificial way.
“We hypothesized that if you get really, really good at one [working memory] test by training for a very long time, maybe then you’ll get improvement on tests that are quite similar. Unfortunately, we found no evidence to support that claim,” says Bobby Stojanoski, a research scientist in the Owen Lab at Western’s world-renowned Brain and Mind Institute and lead author of the paper. “Despite hours of brain training on that one game, participants were no better at the second game than people who tested on the second game, but hadn’t trained on the first one.”
To be clear: I hope that some day we figure out a brain training technique that works.
If we could increase our students’ working memory capacity, that would — I think — revolutionize human cognition. But, we just don’t know how to do so yet.
Here’s a link to the underlying paper. And here’s a link to more thoughts on brain-training flim flam.
I spoke at this month’s Signs Summit in Chicago about problems in science communication. Here is the second half of what I said.
(You can find the first half, which focuses on “too much skepticism” here.)
We live in an age that has been called the “decade of the brain.” Of course, we’ve been saying that for about 25 years now, so we need a new phrase.
In this era of the brain (corpus callosum!), we can make almost any argument sound more persuasive (occipital lobe!) by throwing in fancy-but-irrelevant neuroscience terminology (ventral tegmental area!).
For instance, Deena Skolnick Weisberg’s team found that people can generally distinguish between coherent and circular science explanations. However: when Weisberg added irrelevant neuroscience terminology to those explanations, readers judged the circular explanations to be much more persuasive.
Weisberg’s team memorably describes this result as “the seductive allure of neuroscience.”
The resulting problem is not exactly that people dress up good teaching advice with needless neuroscience language. (Although that happens.)
The problem is not exactly that people dress up bad teaching advice with needless neuroscience language. (Although that happens.)
The problem is that many teachers respond so positively when we do.
For example: teachers can get absorbed in the differences between alpha, beta, and gamma waves — although this distinction has no practical teaching implications that I know of.
In other words: although some teachers respond to outside experts with excessive skepticism, others respond to fancy brain terminology with inadequate skepticism.
Multi-tasking and Beyond
For example: I once heard an MBE speaker explain that limits in the “bandwidth of the corpus callosum” make multi-tasking impossible.
(The corpus callosum connects the two hemispheres of our brains.)
This information leads teachers to an obvious question: what can we do to increase the capacity of a student’s corpus callosum, and thereby facilitate multi-tasking?
One answer to that question is: nothing.
Another answer is: its capacity increases as our students mature, so we don’t need to worry about doing so artificially.
Another really important answer is: that factual statement is simply false.
(When I asked for the research supporting this claim, the speaker declined to provide it to me. “You’d need a PhD in neuroscience to understand it” was the extraordinary excuse.)
Although we do have neural processing limits that preclude multi-tasking, the corpus callosum itself is one of the largest and densest structures in the brain. The idea that its “limits” hamper multi-tasking is silly.
This false information, at best, distracts teachers from the key point: brains don’t multi-task. We need to design lesson plans and assessments that don’t ask our students to do so.
At worst, this falsehood leads ultimately to disillusionment. When teachers discover that this expert was dressing up good advice with needless terminology, we’re less likely to believe experts who genuinely know what they’re talking about.
Reaching for a Solution
Because neuroscience terminology can seduce teachers, it can seduce those of us who explain brain research to teachers. How can we resist its allure?
The obvious answer inverts the Nike slogan: “Just Don’t Do It.” If the medial temporal lobe isn’t practically relevant to our explanation, we can just not mention it.
At the same time, I think we need a strategy to lead teachers away from this seductive allure. Here’s my mantra: “Don’t just do this thing. Think this way.”
In other words: psychology and neuroscience communicators shouldn’t simply give teachers a list of instructions to follow. (Use visuals! Normalize struggle! Interleave topics!)
Instead, we should encourage teachers to think about learning with the helpful categories that cognitive science supplies.
Imagine, for example, that a teacher is creating a lesson plan. She recalls that, last year, this lesson went quite badly. What should she do?
I’m saying: that’s the wrong question. She shouldn’t just do this thing. She should think this way.
So, last year — did class go badly because of an attention problem?
The teacher thinks for a while. She concludes: Nope.
Was it a motivation problem?
Nope.
Was it a working memory problem?
Oh, wow: YES. Absolutely.
Okay, once you know it’s a WM problem, you’ve got a long list of potential solutions — which one sounds most likely?
This approach doesn’t explicitly reject neuroscience terminology. It simply redirects communication away from that set of questions. After all, teachers rarely say “well, that was an insula problem.”
The insula is cool and fascinating. But, we don’t know what to do about an insula problem. We have LOTS of ideas about solving working memory problems.
Two Sentences
In sum, researchers can overcome too much skepticism only by knowing a teacher’s world and work with the clarity that comes from LOTS of daily experience.
And, we can redirect too little skepticism by moving beyond specific teaching suggestions to fostering thinking habits that best shape teaching and learning.