When Do We Trust the Experts? When They Don’t Trust...

Back in 2010, three scholars published a widely discussed paper on “Power Poses.” The headlines: when people adopt a strong stance (say, fists on hips, like Superman), they…

…take more risks in gambling tasks,

…change various hormone levels, and

…answer questions more confidently in interviews.

In other words, simply changing the way we stand can affect meaningful variables in our biology and our performance on life tasks.

A TED Talk on the subject has gotten more than 61 million views. (Yes: 61,000,000!)

Of course, any claim this provocative may generate controversy. Sure enough, skeptics weighed in with counter-claims.

Then, in 2016, something quite shocking happened: one of the original researchers publicly withdrew her support for the claim.

Researcher Dana Carney wrote, with bracing forthrightness, “I do not believe that power pose effects are real.” (As you can see in this link, Carney herself put those words in bold type.)

She went on to list her concerns about the initial study (small sample size, “flimsy” data, and so forth), to include her skepticism on her CV, and to discourage others from studying the topic. *

Wow!

What Next?

In theory, science is gradually “self-correcting.” That is: if one group of researchers arrives at an incorrect conclusion, other researchers will – over time – sleuth out their mistakes. (Max Planck wryly observed that the process might take a long time indeed. In his grim formula, opponents don’t change their minds; they die out.)

Looking at Carney’s example, researcher Julia Rohrer wondered if we could speed that process up. What would happen, she wondered, if we gave researchers a chance to change their minds? What if we invited them to do what Carney did?

She and her colleagues spread the word that they hoped researchers might publicly self-correct. As she puts it:

“The idea behind the initiative was to help normalize and destigmatize individual self-correction while, hopefully, also rewarding authors for exposing themselves in this way with a publication.”

The result? Several did.

And the stories these thirteen researchers have to tell are fascinating.

In the first place, these self-corrections came from a remarkably broad range of fields in psychology. Some researchers studied extraversion; others, chess perception. One looked at the effect that German names have on professional careers; another considered the credibility of Swedish plaintiffs.

One – I’m not inventing this topic – considered the relationship between testosterone and wearing make-up.

Stories to Tell

These researchers, in fact, went into great detail — often painful detail — during their self-corrections.

They worried about small sample sizes, overlooked confounds, and mistakes in methodology. They noted that some replications hadn’t succeeded. Several acknowledged different versions of “p-hacking”: massaging the data and analyses until they produce statistically significant p values.

A few, in fact, were remarkably self-critical.

Tal Yarkoni wrote these amazing words:

I now think most of the conclusions drawn in this article were absurd on their face. … Beyond these methodological problems, I also now think the kinds of theoretical explanations I proposed in the article were ludicrous in their simplicity and naivete—so the results would have told us essentially nothing even if they were statistically sound.

OUCH.

With equally scathing self-criticism, Simine Vazire wrote:

I cherry-picked which results to report. This is basically p-hacking, but because most of my results were not statistically significant, I did not quite successfully p-hack by the strict definition. Still, I cherry-picked the results that made the contrast between self-accuracy and peer accuracy the most striking and that fit with the story about evaluativeness and observability. That story was created post hoc and chosen after I had seen the pattern of results.

Others, however, critiqued their own methodology but held out hope that their conclusions might be correct: “These claims may be true, but not because of our experiment.”

What Should Teachers Do?

These self-corrections might tempt us, or our colleagues, to cynicism. “See? Science isn’t objective! Researchers are just makin’ stuff up…”

I would understand that reaction, but I think it misses the point.

In truth, all ways of knowing include weaknesses and flaws.

Science, unlike many ways of knowing, acknowledges that awkward truth. In fact, science builds strategies into its methodology to address that problem.

For this reason, research studies include so many (gruesomely tedious) details.

For this reason, psychology journals require peer review.

Indeed, for this reason, researchers try to replicate important findings.

Obviously, these strategies for self-correction don’t always work. Obviously, researchers do fool themselves…and us.

However, every time we read stories like these, they remind us that — as a profession — scientists take correction (and self-correction) unusually seriously.

In fact, I think the teaching profession might have something to learn from these brave examples.

How often do schools — how often do teachers — admit that a success we once claimed might not hold up under scrutiny?

As far as I know, we have few Yarkonis and Vazires in our ranks. (I certainly have never made this kind of public statement.)

In brief: this kind of self-correction makes me trust both the profession of psychology and these individual researchers even more. If you’re conspicuously willing to fess up when you’re wrong, you deserve a much stronger presumption of trustworthiness when you ultimately say you’re right.


* By the way: one of Carney’s co-authors continues to defend power poses emphatically. You can read Amy Cuddy’s response at the end of this article.

 


2 Responses to When Do We Trust the Experts? When They Don’t Trust...

  1. Curtis Kelly says:

    Another great article from Andrew. And again, timely. My team and I just put together a MindBrainEd Think Tank issue on consuming research. https://www.dropbox.com/s/o7wfh7fe4gouq09/3_MindBrained_Think_Tank_V7i3_research_Mar_2021.pdf?dl=0 If I have a chance, I might refer to Andrew’s article in a follow-up.

  2. Thanks, great insight and perspective… good for us all, not just scientists, PhD candidates, researchers, professors, and teachers AND consultants.
