Weather Forecasting and Cognitive Science

I live in Boston, and we just had an ENORMOUS snow storm. TWELVE INCHES of snow fell in just a few hours. It was, as we say, “a monstah.”

Oh, wait a minute, that didn’t happen.

A winter scene: cars covered in a foot of snow, and two pedestrians walking away from the camera, shoulders hunched against the cold and snow

The FORECAST said we’d get a monstah. In reality, by the end of the day, exactly 0.0 inches of snow had accumulated on my sidewalk. It was as close to “nothing” as was the Patriots’ chance of winning the Super Bowl this year.

You can imagine the public response:

Hah! All the “experts” with all their science-y equipment and equations and models and colorful images … they all got it wrong. AGAIN!

That’s it: I’m done with all this weather forecasting nonsense. I’ll rely on my trick knee to tell me when the weather is a-changing.

While that response is predictable, I also think it’s unfair. In fact, believe it or not, it reminded me of the work we do at Learning and the Brain.

In most ways, weather forecasting has almost nothing to do with cognitive science. But the few similarities might help explain what psychology and neuroscience research can (and can’t) do for teachers.

I want to focus on three illustrative similarities.

Spot the Butterfly

First, both meteorologists and cognitive scientists focus on fantastically complex systems.

In the world of weather:

As the butterfly effect reminds us, small changes over here (a butterfly flapping its wings in my backyard) could cause enormous changes over there (a typhoon in Eastern Samar).

In the world of education:

Whether we’re looking at neurons or IEPs or local school budgets or working memory systems or mandated annual testing, we’ve got an almost infinite number of interconnected variables.

Research might tell us to “do this thing!”, but the effect of that recommendation will necessarily depend on all those other variables.

We should not be shocked, therefore, that a one-step intervention (e.g., growth mindset training) doesn’t have exactly the effect we want it to. That one intervention interacts with all those other complex systems.

The research-based suggestion isn’t necessarily wrong, but it also can’t briskly overcome all the other forces that influence learning.

Possibilities and Probabilities

Second: like weather forecasts, research-based suggestions focus on probabilities.

That is: the weather channel didn’t say “Boston is going to get 12 inches of snow!”

If you looked past the simplified headline, it said:

“We’ve seen conditions more-or-less like this 100 times before.

2 of those times, we got less than 2 inches

8 times, we got 2-6 inches

25 times, 6-10 inches

45 times, 10-14 inches

15 times, 14-18 inches

5 times, more than 18 inches.

Make plans accordingly.”

They don’t know for sure; they’re making predictions based on previous cases — and those previous cases provide a range of possibilities.
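To make that arithmetic explicit, here’s a minimal sketch (purely illustrative, using the hypothetical 100 prior storms quoted above) of how those counts become the probabilities hiding behind the headline:

```python
# A minimal sketch of "forecast as probabilities," using the hypothetical
# 100 prior storms from the post (counts are illustrative, not real data).
prior_storms = {
    "under 2 in": 2,
    "2-6 in": 8,
    "6-10 in": 25,
    "10-14 in": 45,
    "14-18 in": 15,
    "over 18 in": 5,
}

total = sum(prior_storms.values())  # 100 comparable past setups

for outcome, count in prior_storms.items():
    probability = count / total
    print(f"{outcome}: {probability:.0%} chance")

# Even the most likely bin (10-14 inches) happens less than half the time,
# so "basically nothing" is unlikely but entirely consistent with the forecast.
```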

Notice, by the way, that the forecasters weren’t exactly wrong. New York and Philly got pounded; they got the “monstah” we were expecting.

But — because a butterfly somewhere flapped its wings — the storm went slightly south and left us alone.

So, too, with psychology and neuroscience research aimed at the classroom.

Researchers can say: “this strategy helped students score 5% higher on the end-of-year exam … ON AVERAGE.”

That means the strategy (probably) helped more students than it hurt. But the effects were different student-by-student.

Who knows: the strategy could have made learning harder for some students.

We’re looking at probabilities, not panaceas.
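If it helps to see that distinction in numbers, here’s a toy sketch (invented numbers, not data from any real study) of how a strategy can raise scores on average while still leaving some individual students worse off:

```python
# Toy illustration: an intervention that helps "on average" can still
# hurt some individual students. All numbers below are invented.
import random

random.seed(42)

# Hypothetical per-student score changes: mean +5 points, with a lot
# of student-to-student variation around that average.
changes = [random.gauss(5, 8) for _ in range(30)]

average_change = sum(changes) / len(changes)
students_hurt = sum(1 for c in changes if c < 0)

print(f"Average change: {average_change:+.1f} points")
print(f"Students who scored lower than before: {students_hurt} of {len(changes)}")
```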

The Bigger the Claim…

Third: expert forecasters get their predictions right more often than they get them wrong. And — this is crucial — the “wrong” results come more often for big, outlier events.

Sunny days in June? Glum rain in November?

Relatively easy to predict.

A once-in-a-generation hurricane? A monstah snow storm?

MUCH harder to predict. We just have less data about unusual events because…they’re unusual!

So too in the world of research-based teaching advice.

I honestly think that researchers get their advice “right” much of the time — at least within the narrow confines of the context they describe.

That is: a large collection of well-designed studies probably merits careful consideration.

At the same time, if researchers loudly announce a big, outlier conclusion, we should be ready for that claim to collapse upon further investigation.

Imagine that researchers claim…

… dancing a hornpipe helps students learn fractions, or

… standing in a “power pose” does something worthwhile/important, or

… teachers don’t need to know anything about a topic to teach it well.

In each of these cases, the extremity of the claim should prepare us for doubts.

The same logic runs in reverse: let’s say “research shows” that a particular teaching strategy has a HUGE effect on learning.

It’s possible, but honestly kinda rare.

For instance, as I wrote recently, I found a meta-analysis claiming that the “jigsaw” method has a Cohen’s d value of 1.20. As stats people know, that’s simply ENORMOUS.

It’s possible…but I wasn’t at all surprised to find very little support for that claim. I honestly can’t think of any teaching intervention that makes that much of a difference on its own.
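For anyone who wants to see what a number that large would actually claim, here’s a minimal sketch with invented scores (78 vs. 66 and a pooled standard deviation of 10, chosen only so the arithmetic comes out to d = 1.20):

```python
# A rough sketch of what a Cohen's d of 1.20 would mean, assuming two
# equally sized groups with equal spread. Numbers are illustrative only.
from statistics import NormalDist

def cohens_d(mean_treatment, mean_control, pooled_sd):
    """Standardized mean difference: how many SDs apart the group means are."""
    return (mean_treatment - mean_control) / pooled_sd

# Hypothetical exam scores: jigsaw classes vs. comparison classes.
d = cohens_d(mean_treatment=78, mean_control=66, pooled_sd=10)
print(f"Cohen's d = {d:.2f}")  # 1.20

# With d = 1.20, the *average* treated student would land around the
# 88th percentile of the untreated group -- an enormous claimed effect.
print(f"Average treated student percentile: {NormalDist().cdf(d):.0%}")
```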

TL;DR

Like weather forecasting, psychology and neuroscience research…

… looks at enormously complicated systems,

… offers conclusions best understood as probabilities, and

… is likeliest to be right when it makes modest claims.

In brief: this field can be fantastically useful to classroom teachers, as long as we understand its challenges and limitations.

Our teacherly “trick knee” might be right from time to time. But wisely considered research will probably be better.

Category: L&B Blog
