Dr. Neil Lewis thinks a lot about science communication: in fact, his appointment at Cornell is in both the Psychology AND the Communications departments.
He and Dr. Jonathan Wai recently posted an article focusing on a troubling communication paradox:
Researchers are encouraged to “give science away”; however, because of the “replication crisis,” it’s hard to know what science is worth being given.
Here at Learning and the Brain, we think about that question frequently — so I was delighted that Dr. Lewis agreed to chat with me about his article.
In this conversation, we talk about…
… how teachers can ask psychologists good questions
… the dangers of “eminence”
… what we should think about growth mindset research
… the research “hype cycle.”
I hope you enjoy this conversation as much as I did.
Thank you, Dr. Lewis, for sharing your ideas with our readers.
In your recent article, you and Dr. Wai write about tensions between two imperatives in the field of psychology.
First, psychologists are being asked to “give research away.” And second, our field worries about the “replication crisis.”
Both of those phrases mean more or less what they say. Could you define them a little more precisely, and talk about the tensions that these imperatives are creating?
There has been a long-standing call in psychology—going back, really, to the 1960s, when George Miller first issued it—to “give psychology away.”
As scholars, we spend our time doing all this research: we should try to communicate it to the world so that people can use it and improve lives.
Professional psychology societies and organizations really encourage researchers to “get their work out there.”
But at the same time, over the past decade or so, there has been a movement to reflect on what we really know in psychology.
A “replication crisis” has occurred—not only in psychology, it’s been happening in many areas.
We are having a hard time replicating many research findings. And that [failure] is making us, the scientists, wrestle with: what do we know? How do we know it? How robust are some of our findings?
And so there’s a tension here. We’re supposed to be “giving our findings away,” but at the same time we’re not sure which ones are robust enough to be worth giving away.
That does sound like a problem. In that tension, do you see any special concerns about the field of education?
One of the things I’ve been thinking about for education researchers is: how do we know what we know? We have to look very closely at the details of the paper to figure those things out.
Which students are being studied in the papers you’re reading?
What kinds of schools?
What kind of teachers?
At least in the US, there’s so much segregation in our school systems that schools look very different.
If studies are run—let’s say—with kids in the Ithaca school district where I live in upstate New York: those kids, those parents, those schools are very different from studies run—let’s say—in the Detroit public school district, which is the district I thought a lot about during my graduate training when I lived in Michigan.
There are big differences between these districts. We have to figure out: are the schools where we’re trying to intervene similar to the schools where the studies were run? Or are they different?
I have a question about that process.
Here’s a problem: to know what questions teachers ought to be asking, we need expert knowledge. Because we’re teachers, not psychologists, it’s hard to know the right questions.
So: what’s the best question that a nonspecialist teacher can ask of a researcher, in order to get an answer that we can genuinely understand?
I think there are some basic things that teachers can ask of researchers.
Teachers can ask what kinds of schools these studies were run in. Are they urban schools, rural schools?
What percentage of the students are on free lunch? (That’s an indicator of the poverty level of the school. Research findings are often influenced by background characteristics of the students.)
What do we know about the kinds of students that were involved in studies?
What do we know about the teachers?
Those are basic things that the researchers should be able to tell you. And then you can figure out whether those are similar to:
the students that you’re working with,
the kinds of schools that you have,
the kind of leadership in your school district, and the like.
Those basic characteristics about how the study was done will help you figure out whether or not you can use it.
I spend a lot of time talking with teachers about this concern. Most psychology research is done with college undergraduates. That research is obviously important. But if you’re teaching reading to third graders, maybe that research translates to your context and maybe it doesn’t.
One of the more intriguing points you made in the article has to do with the idea of eminence.
In the world of education, we’re often drawn to Big Names. You argue that the things scholars do to achieve eminence don’t necessarily help them produce high quality research.
As teachers, how do we sort through this paradox? How can we be wise when we think about that?
We brought up eminence to reinforce what I just noted. Look at the details of the study and don’t rely on the “cue” of eminence as your signal that research must be good.
Researchers are judged by many metrics. Once you put those metrics in place, people do what they can to… I hesitate to use the word “game,” but to optimize their standing in those metrics.
Which is a lot like “gaming,” isn’t it?
Yes. In the research world, there are a few metrics that don’t necessarily help [produce meaningful results]. One of them, for instance, is that researchers are incentivized to publish as much as we can.
Unfortunately, publishing fast is the way to rise up the ranks. But sometimes figuring out these differences that I have been talking about, like differences between contexts and samples, takes time. It slows you down from churning out papers; and unfortunately, researchers often aren’t incentivized to take that slower, more careful approach.
And so there’s that tension again too. I don’t want to leave the impression that we just shouldn’t trust eminent people. That’s not the point I want to make.
The point is: eminence in and of itself is not a useful signal of quality. You have to look very closely at the details of the studies in front of you. Then compare those details to your own situation and judge the work on that. Judge the work, don’t judge based on how famous the person is.
It occurs to me as you’re explaining this, there’s a real problem with the emphasis on rapid publication. One of the consistent findings in education research is that short-term performance isn’t a good indicator of long-term learning.
But if scholars are incentivized to publish quickly, they’re incentivized to study the short term, which doesn’t tell us much about what we really want to know: learning that lasts.
Absolutely right. As I’ve written in other articles, we don’t have enough longitudinal studies for the very reasons we’re talking about: longitudinal studies take forever—and, again, the incentive is to publish fast, publish often.
The outcomes that are often measured in psychology studies are these shorter term things. You have the student do something, and you measure at the end of the session. Maybe you look again at the end of the semester.
But [we should] look next year, two years, three years, because we know some of these effects take time to accumulate.
Some older studies have looked at long-term outcomes. I’ve seen a few fascinating studies showing, initially, no significant findings. But if you look far enough down the road, you start to see meaningful effects. It just takes time for the benefits to accumulate.
In education, we shouldn’t assume that research results “generalize.” [Editor: That is, we shouldn’t assume that research with 1st graders applies to 10th graders; or that short term findings will also be true in the long term.]
Now, until I see more evidence, I assume findings are context-specific. [Editor: That is, research with 1st graders applies to 1st graders—but not much beyond that age/grade. Research from the United States applies to the US cultural context, but not—perhaps—to Korea.]
For instance: “growth mindset.” In recent studies, authors have been looking at how much the effect varies by context and by population. Those details matter in thinking about mindset studies.
Yes, I think mindset is a really interesting case study for the topic we’re talking about. My impression is that teachers got super excited about growth mindset. We went to a highly simplistic “poster-on-the-wall” version of the theory.
And in the last 18 months or so, there has been a real backlash. Now we hear: “growth mindset means nothing whatsoever! Why are you wasting your time?”
We need to find our way to a nuanced middle ground. No, growth mindset is not a panacea. But nothing is a panacea. At the same time, in a specific set of circumstances, mindset can help certain students in specific ways.
That balanced conclusion can be a hard place to get the conversation to go.
Yes, issues like that motivated us to write our paper.
If we [researchers] are able to communicate those nuances clearly, then I think we avoid these misunderstandings. It’s not that mindset is useless; instead, mindset will have a small effect under certain conditions. We should just say that.
We have a problem with the “hype cycle.”
If something is over-hyped one day, then you’re really setting people’s expectations unreasonably high. Later, when the research doesn’t meet those expectations, teachers are disappointed.
And so researchers should set expectations appropriately. Mindset is not a panacea. We shouldn’t expect enormous impacts. And that’s fine. Let’s just say that.
I think this “hype cycle” is part of the challenge that we’re facing.
For instance, with learning styles, teachers thought that it had a lot of scientific backing. We embraced it because it was “research based.”
Now the message is: “no, research got that wrong; learning styles aren’t a thing. But here’s another research-based thing instead.”
And teachers are saying: “wait, if I shouldn’t have followed research about learning styles, why should I believe new research about new teaching suggestions?”
That’s a tricky problem.
One way to think about science is: science is a way of reducing uncertainty.
We had this idea about learning styles. We gathered some initial evidence about it. It seemed like a good idea for a while.
But as we continued studying it, we realized, well, maybe there is not as much good evidence as we thought.
And that’s part of the scientific process. I think it’s important to explain that.
But: that shift without an explanation naturally leads teachers to be suspicious.
Teachers think: “Why are you just telling me to make this change? You have to explain what is going on and why I should make it.”
This explanation does take more time. But that’s what is necessary to get people to update their understanding of the world.
Something that we all have to keep in mind: just as every year teachers are learning new ways to teach the new generations of students, scientists are doing the same thing too. We’re constantly trying to update our knowledge.
So there will be changes in the recommendations over time. If there weren’t changes, none of us would be doing our best. So we’re learning and improving constantly.
But we have to have that conversation. How are we updating our knowledge? And what are ways that we can implement that new knowledge into curriculum?
And, the conversation has to go both ways. Researchers communicate things to teachers, but teachers also need to be telling things to researchers. So we can keep that real classroom context in mind as we’re developing research advice.
In your article, you and Dr. Wai remind researchers that they’re not communicating with one undifferentiated public. They are talking with many distinct, smaller audiences—audiences which have different interests and needs.
Are there difficulties that make it especially hard to communicate with teachers about psychology research? Is there some way that we’re an extra challenging audience? Or maybe, an especially easy audience?
I think what’s hard for presenters is not knowing details about the audience, where they’re coming from. That section of the paper is really about getting to know your audience, and tailoring your message from there.
If I’m going to go explain psychology findings to a group of STEM teachers, that talk might be different than if the audience is a broader cross-section of teachers.
In the university setting, it’s easier to figure out those distinctions because you know which department invited you to speak.
In broader K-12 settings you don’t always know. A school district invites you. You can do some Googling to try to figure something out about the district. But you don’t know who’s going to be in the room, or what is happening [in that district]. So you might end up giving too broad a talk, one that is less informative than if you had gotten more information beforehand.
Are there questions I haven’t asked that I ought to have asked?
The key point for me is: when we communicate about science in the world, we really have to look at key research details and have serious conversations about them. Nuances matter, and we just can’t gloss over them.
Dr. Lewis, I very much appreciate your taking the time to talk with me today.