Making “Learning Objectives” Explicit: A Skeptic Converted? [Reposted]
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on student reports of their learning. Sadly, in the past, such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five passages.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” (asking students questions about an upcoming reading passage, even though they don’t know the answers yet) yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

Students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats minded, Cohen’s d was 0.84! That’s HUGE!)
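(For readers who want to unpack that number: Cohen's d expresses the gap between two group means in units of their pooled standard deviation. Here's a minimal sketch; the study's actual standard deviations aren't quoted in this post, so the worked figure below is purely illustrative.)

```latex
% Cohen's d: the standardized difference between two group means
d = \frac{M_1 - M_2}{s_{\text{pooled}}}

% Illustrative back-of-envelope using the percentages above:
% (67 - 53) / s_pooled = 0.84 would imply s_pooled of roughly 16.7 points.
```

In other words, the pretest group outscored the statements group by more than four-fifths of a standard deviation; in education research, that counts as a very large effect.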

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.

“Once Upon a Time”: Do Stories Help Learning?
Andrew Watson

When Daniel Willingham wrote Why Don’t Students Like School?, he accomplished a mini-miracle: he made abstract psychology research…

…easy to understand, and

… obviously helpful to classroom teachers.

Its invaluable pages include emphatically practical teaching advice: “memory is the residue of thought”; novices and experts think differently. (Little wonder a new edition was just published.)

In his third chapter, Willingham included one important strategy for helping students understand and remember: use stories.

We understand and remember stories for many reasons:

They follow a familiar cause/effect structure.

They focus on people and conflicts.

We (most of us) grew up hearing stories.

Stories evoke emotions.

Expository writing — essays, textbooks — has its own advantages, but those probably can’t compete with the advantages of narrative.

Today’s News

Willingham first published Why Don’t Students Like School? in 2009. What have we learned about narratives vs. exposition since then?

After all, research conclusions change over time. Does this advice still hold?

Earlier this year, Raymond Mar and colleagues published a meta-analysis of research on this topic. They wanted to know:

Does narrative presentation of information improve memory, compared to expository texts?

Does it improve comprehension?

Are there boundary conditions?

They identified 37 studies (with 78 data sets and more than 33,000 participants!) that matched their criteria.

So, what did they find?

The Envelope Please…

Sure enough, narratives help students understand. And they help students remember. And — this news is surprising to me — those benefits don’t have quirky exceptions. (Most research findings do: e.g., “This technique works in these circumstances but not those.”)

For the stats minded, they calculated a Hedges’s g of 0.55. In my experience, that’s a surprisingly big effect for measurements across such a big field. (Hedges’s g is a version of Cohen’s d — it’s more appropriate for meta-analyses.)
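(If you'd like the algebra: Hedges's g multiplies Cohen's d by a small-sample correction factor. This is the standard textbook formula, not a quotation from Mar's paper.)

```latex
% Hedges's g: Cohen's d with a correction for small-sample bias
g \approx d \left( 1 - \frac{3}{4(n_1 + n_2) - 9} \right)

% With large samples, the correction factor approaches 1,
% so g and d become nearly identical.
```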

One of my favorite examples of this strategy wasn’t (I believe) included in this study. McNamara and Scott asked students to remember a list of words. They coached one group to turn those words into a story.

Given the words “foot, cow, shirt, hut,” students created sentences like “my foot was stepped on by a cow that wore a shirt and lived in a hut.”

McNamara and Scott coached the other students to remember their words by “thinking out loud.”

Sure enough, the group that composed stories remembered a lot more words.

Getting the Story Just Right

Although Mar’s meta-analysis did not find boundary conditions, I do want to add a note of caution. This advice — like so much of cognitive science — can be easily misconstrued.

The idea makes sense, but its classroom application can be tricky.

Imagine that, as a science teacher, I want my students to understand Boyle’s Law. I’m tempted to tell my students that P1V1=P2V2, to define the various terms, and to run some sample problems.

If I hear the advice that stories improve comprehension, I’m probably tempted to tell some interesting stories about Boyle’s life. For instance, he made a wish list of 24 inventions (a machine that could fly, drugs that could “exalt the imagination”). Way back in the 1600s! So cool!!

That story is interesting and memorable, but it doesn’t have anything to do with the pressure or volume of gas.

I shouldn’t, in other words, tell stories about the general subject; we’ve got lots of research about the dangers of “seductive details.”

I should instead tell stories tailored specifically to the meaning and importance of the topic.

Why does the relationship between volume and pressure matter?

Who else was trying to find out?

What changed as a result of Boyle’s discovery?

If I can fashion those questions into a story, now I’ve got all the characteristics of a memorable narrative: cause and effect, personal conflict, familiarity, and emotion.

And, all those benefits attach to the content I want my students to learn.

In Brief

Do narratives help students understand and remember?

Yes, the right stories do.

Conflicting Advice: What to Do When Cognitive Science Strategies Clash?
Andrew Watson

Teachers like research-informed guidance because it offers a measure of certainty.

“Why do you run your classes that way?”

“Because RESEARCH SAYS SO!”

Alas, we occasionally find that research encourages AND DISCOURAGES the same strategy simultaneously.

What to do when expert advice differs?

In fact, I got this question on Thursday during a Learning and the Brain Summer Institute. Here’s the setup.

“Difficult” Can Be Good

Regular readers know that desirable difficulties help students learn. As explained by Bob Bjork and Elizabeth Ligon Bjork — and researched by countless scholars — some degree of cognitive challenge enhances long-term memory formation.

In brief: “easy learning doesn’t stick.”

And so: why do spacing and interleaving help students learn? Because they ramp up desirable difficulty.

Why is retrieval practice better than simple review? Because (among other reasons) review isn’t difficult enough. Retrieval practice, done correctly, adds just the right amount of challenge.

And so, if you attend Learning and the Brain conferences (like this one on “Teaching Thinking Brains”), or if you read any of the great books about long-term memory formation, you’ll hear a lot about desirable difficulty.

Memory at Work

Cognitive scientists who don’t focus on long-term memory might instead focus on a distinct mental capacity: working memory. 

Working memory allows us to gather information — facts, procedures, etc. — into a mental holding space, and then to reorganize and combine them into new patterns and ideas.

In other words: it’s absolutely vital for thinking and learning. If students are learning academic information, they are using their working memory.

Alas, all this good news comes with some bad news: we don’t have much working memory. And, our students probably have less than we do. (For evidence, try this mental exercise: alphabetize the five weekdays. No problem alphabetizing 5 words? Now try alphabetizing the twelve months of the year. OUCH.)

For this reason, effective teachers pay scrupulous attention to working memory load. Every time we go beyond working memory constraints, we make learning MUCH HARDER.

In fact, I think working memory is so important that I wrote a lengthy series of blog posts on the topic. I’m kind of obsessed. (Heck: I even wrote a book on the topic, called Learning Begins.)

Trouble in CogSci Paradise

Because both topics — desirable difficulties and working memory — provide teachers with important and powerful insights, I devoted much of last week’s workshop to them. Almost every day, in fact, we talked about both.

On Thursday, one participant asked this wise and provocative question:

Wait a minute. You’ve told us that desirable difficulties help learning. And you’ve told us that working memory overload hinders learning.

But: isn’t desirable difficulty a potential working memory overload? Don’t those two pieces of advice conflict with each other? Won’t “spacing” and “interleaving” vex working memory?

Yes, reader, they certainly might.

So, what’s a research-focused teacher to do? Team Desirable Difficulty tells us to space and interleave practice. Team Working Memory tells us to beware overload. How can we make sense of this conflicting advice?

This (entirely reasonable) question has two answers: one specific, one general.

A Specific Answer

When we consider the tension between “working memory” and “desirable difficulty,” we can focus for a moment on the adjective “desirable.”

In almost every case, working memory overload is UNdesirable.

So, if our teaching strategy — spacing, interleaving, retrieval practice, metacognition — results in overload, we shouldn’t do it: it’s not desirably difficult. We should, instead, back off on the difficulty until students can manage that cognitive load.

How do we get that balance just right?

We use our teacherly experience and insight. If I create a homework assignment with lots of interleaved practice AND ALL MY STUDENTS DO TERRIBLY, then interleaving wasn’t desirably difficult. (Or, perhaps, I taught the concepts ineffectively.)

In this case, I know the next night’s assignment should be working-memory-friendlier.

No research can tell us exactly what the best balance will be. Our expertise as teachers will guide us.

The General Answer

Researchers and teachers have different goals, and follow different practices. In brief: researchers isolate variables; teachers combine variables.

We think about stress and about working memory and about alertness and about technology and about spacing and…

That list goes on almost infinitely.

For that reason, I chant my mantra: when adopting cognitive science approaches to teaching, “don’t just do this thing; instead, think this way.”

That is: don’t just DO “spacing and interleaving” because research tells us they’re good ideas. Instead, we have to THINK about the ideas that guide spacing and interleaving, and be sure they make sense at this particular moment.

Should we have students meditate at the beginning of each class? It depends on our students, our school, our schedule, our culture, our curriculum, our goals, and … too many other variables to list here.

Should we ban laptops from classrooms? Ditto.

Should high schools start later? Ditto.

Should 3rd graders learn by doing projects? Ditto.

Should students read on exercycles? Ditto.

One isolated piece of research advice can’t effectively guide teaching and school-keeping decisions. We have to combine the variables, and think about them in our specific context.

Simply put: we can’t just “do what the research says.” It’s not possible; different research pools almost certainly conflict.

Instead, we’re doing something more challenging, more interesting, and more fun.

Let the adventure begin!

Does Online Learning Work? Framing the Debate to Come…
Andrew Watson

I first published this blog post back in January. I’ve been seeing more and more discussion of this question on social media, so I thought it might be helpful to offer this perspective once again.


With news that several very effective vaccines will be increasingly available over the upcoming months, we teachers can now start thinking about “a return to normal”: that is, in-person teaching as we (mostly) worked before February of 2020.

One question will inevitably be debated: did online learning work?

I suspect that the “debate” will go something like this. One voice will stake out an emphatic opinion: ONLINE CLASSES WERE AN UNEXPECTED TRIUMPH! Some data will be offered up, perhaps accompanied by a few stories.

An equally emphatic voice will respond: ONLINE CLASSES FAILED STUDENTS, TEACHERS, AND PARENTS! More data. More stories.

This heated exchange will reverberate, perhaps improved by all of Twitter’s nuance and common sense.

A Better Way?

Rather than launch and participate in a BATTLE OF EXTREMES, I hope we can look for a more level-headed approach. As is so often the case when research meets teaching, the key question concerns boundary conditions.

Whenever we look for a research finding (e.g.: drawing helps students learn!), we should ask: under what precise conditions is this true?

Does drawing help older students and younger ones? In math and in phonics? Autistic students, dyslexic students, aphantasic students, and neurotypical students?

We’re always looking for boundaries, because every research finding has boundaries. As Dylan Wiliam (who will be speaking at our February Conference) famously says: “When it comes to educational interventions, everything works somewhere. Nothing works everywhere.”

If we ask about boundary conditions for the strengths and weaknesses of online learning, we can have a much more productive discussion.

Places to Start

Age: I suspect we’ll find that — on average — older students did better with online classes than younger ones. My friends who teach college/high school don’t love online teaching, but they don’t seem quite as overwhelmed/defeated as those who teach younger grades.

Additional Technology: Is it better to have a simple Zoom-like platform with occasional breakout sessions? Does it help to use additional, elaborate programs to supplement online learning?

Discipline: Perhaps online teaching worked better with one kind of class (science?) than another (physical education?).

Personality: Although most high school students I know emphatically prefer in-person classes, I do know two who greatly prefer the online version. Both really struggle negotiating adolescent social networks; they’ve been frankly grateful to escape from those pressures and frustrations.

Teachers’ personalities could matter as well. Some of us comfortably roll with the punches. Some of us feel set in our ways.

Administration: Did some school leaders find more effective ways to manage transitions and support teachers and students? The question “does online learning work” might get different answers depending on the managerial skill supervising the whole process. (In my work, I find teachers appreciated decisiveness and clear communication above all else. Even when they didn’t like the decision itself, they liked knowing that a decision had been made.)

SES: No doubt the socio-economic status (SES) of school districts made a big difference. It’s hard to run online classes in schools and communities that don’t have money for technology, or infrastructure to support its use.

Pedagogy: Do some styles of teaching work better online? Or — a slightly different version of this question — do teachers and schools with experience “flipping the classroom” have greater success with an online model?

Teacher Experience: Perhaps well-seasoned teachers had more experience to draw on as they weathered the muddle? Or, perhaps younger teachers — comfortable with tech, not yet set in their ways — could handle all the transitions more freely?

Country/Culture: Do some countries or cultures manage this kind of unexpected social transition more effectively than others?

Two Final Points

First: We should, I think, expect complex and layered answers to our perfectly appropriate question.

In other words: online learning (a la Covid) probably worked well for these students studying this topic in this country using this technology. It was probably so-so for other students in other circumstances. No doubt it was quite terrible for still other students and disciplines and pedagogies.

Second: I myself have long been skeptical of the idea that “online learning is the future of education (and everything else)!”

And yet, I don’t think we can fairly judge the validity of that claim based on this last year’s experience.

After all: most teachers and schools and students didn’t get well-designed and deliberately-chosen online education. They got what-can-we-throw-together-with-grit-and-hope online education.

Of course that didn’t work as well as our old ways (for most students). Nothing worked well: restaurants struggled to adjust. The travel industry struggled. Retail struggled.

Yes: I think that — for almost everybody learning almost everything — in-person learning is likely to be more effective. But I myself won’t judge the whole question based on this year’s schooling.

We all benefit from forgiveness for our lapses and muddles during Covid times.

Let’s learn what we reasonably can about online education, and use that experience to improve in-person and remote learning in the future.

Putting It All Together: “4C/ID”
Andrew Watson

We’ve got good news and bad news.

Good news: we’ve got SO MUCH research about learning that can guide and inform our teaching!

Bad news: we’ve got SO MUCH research about learning that…well, it can honestly overwhelm us.

I mean: should we focus on retrieval practice or stress or working memory limitations or handshakes at the door? How do we put all these research findings together?

Many scholars have created thoughtful systems to assemble those pieces into a conceptual whole. (For example: here, and here, and here, and here.)

Recently, I’ve come across a system called 4C/ID — a catchy acronym for “four component instructional design.” (It might also be R2D2’s distant cousin.)

First proposed by van Merriënboer, and more recently detailed by van Merriënboer and Kirschner, 4C/ID initially strikes me as compelling for two reasons.

Reason #1: Meta-analysis

Here at Learning and the Brain, we always look at research to inform our decisions. Often we look at one study — or a handful of studies — for interesting findings and patterns.

Scholars often take another approach, called “meta-analysis.” When undertaking a meta-analysis, researchers gather ALL the studies that fit certain criteria, and aggregate their findings. For this reason, some folks think of meta-analytic conclusions as very meaningful. *

A recent meta-analysis looked at studies of 4C/ID, and found … well … found that it REALLY HELPS. In stats language, it found a Cohen’s d of 0.79.

For any one intervention, that’s a remarkably high number. For a curriculum and instruction planning system, that’s HUGE. I can’t think of any other instructional design program with such a substantial effect.

In fact, it was this meta-analysis, and that Cohen’s d, that prompted me to investigate 4C/ID.

Reason #2: Experience

Any substantial instructional planning concept resists easy summary. So, I’m still making my way through the descriptions and diagrams and examples.

As I do so, I think: well, it all just makes a lot of sense.

[Figure: A graphical view on the four components: (a) learning tasks, (b) supportive information, (c) procedural information, and (d) part-task practice. Author: Jeroen J. G. van Merriënboer]

As you can see from this graphic, the details get complex quickly. But (I think) the headlines are:

… ensure students know relevant procedures fluently before beginning instruction

… organize problems from simple to complex, aiming to finish with “real-life” tasks

… create varied practice

… insist on repetition

And many others. (Some summaries encapsulate 4C/ID in 10 steps.)

None of that guidance sounds shocking or novel. But, if van Merriënboer and Kirschner have put it together into a coherent program — one that works across grades and disciplines and even cultures — that could be a mighty enhancement to our practice.

In fact, as I review the curriculum planning I’m doing for the upcoming school year, I think: “I’m trying to do something like this, but without an explicit structure to guide me.”

In brief: I’m intrigued.

The Water’s Warm

Have you had experience with 4C/ID? Has it proved effective, easy to implement, and clear? The opposite?

I hope you’ll let me know in the comments.


* Others, however, remain deeply skeptical of meta-analysis. The short version of the argument: “garbage in, garbage out.” In this well-known post, for instance, Robert Slavin has his say about meta-analysis.

How to Capture Students’ Attention for Online Readings (tl;dr)
Andrew Watson

When do students focus while reading online?

When do they lose focus and let their minds wander?

Does the length of the passage being read influence the answer to these questions?

Several researchers, including Dr. Noah Forrin, have been exploring this topic, and have some interesting — and helpful! — answers.

Here’s the story:

The Research Setup

Happily, this question can be explored with well-established methods.

Forrin and his colleagues had 80 college students read articles from Wikipedia: topics included “the galaxy,” “Pompeii,” and “Sartre.” The passages were at a 9th grade reading level, and ran about 500 words.

Students read half of these passages in one-sentence chunks (averaging about 12 words). The other half they read in two-to-six sentence chunks (averaging 30 words).

As students read, Forrin interrupted them to ask if they were thinking about the reading, or thinking about any topic other than the Wikipedia passage.

And — here’s a key point — Forrin’s team asked if the students were mind-wandering intentionally or unintentionally. (Yes: in this field, “mind wander” is a verb.)

Why ask that odd question?

If students mind-wander intentionally, they and their teachers can (presumably) have some control over that problem.

However, if mind wandering is unintentional, then we all might struggle to fix this problem.

As the researchers say:

“intentional processes are presumably more likely to be changed by instructions and conscious strategies than are unintentional processes.”

So, what did Team Forrin find?

The Results

Sure enough, the chunk length mattered.

More precisely, it mattered for unintentional mind wandering (but not intentional). When reading the one-sentence chunks, students unintentionally mind-wandered 19% of the time; when reading the longer chunks, they did so 24% of the time.

Forrin’s team speculates that longer chunks act as a signal that students might find the passage uninteresting. In their grim summary, they write that

students’ increase in mind-wandering while reading educational texts may (1) emerge rapidly, (2) persist over time, (3) harm comprehension, and (4) be related to a decrease in interest.

Ugh.

Next Steps

So, what should we DO with this glum news?

First, as is always the case, I think teachers should use our experience to apply research wisely to our circumstances. For instance, if you don’t have your students do online readings, don’t worry about Forrin’s findings!

If, however, your students spend LOTS of time reading online, then his conclusions merit your attention.

Second, I think these findings add to an increasingly clear research conclusion: online reading doesn’t promote learning as much as old-fashioned ink-on-paper does.

To my mind Dr. Lalo Salmeron’s meta-analysis remains the most useful exploration of this question. He goes through important findings (no, the age of the reader doesn’t matter; no, we aren’t getting better at this skill) and interesting exceptions (prose fiction).

Third, Forrin himself offers a practical suggestion. If we MUST assign online readings, and we CAN break them down into smaller paragraphs, then maybe we should. His research suggests that doing so reduces the amount of unintentional mind-wandering.

Potential result: students concentrate better and learn more.

If he’s right, then Forrin’s research will have been well worth reading — long paragraphs and all.

When Do We Trust the Experts? When They Don’t Trust Themselves!
Andrew Watson

Back in 2010, three scholars published a widely-discussed paper on “Power Poses.” The headlines: when people adopt a strong stance (say, fists on hips, like Superman), they…

…take more risks in gambling tasks,

…change various hormone levels, and

…answer questions more confidently in interviews.

In other words, simply changing the way we stand can affect meaningful variables in our biology, and our performance on life tasks.

A TED Talk on the subject has gotten more than 61 million views. (Yes: 61,000,000!)

Of course, any claim this provocative may generate controversy. Sure enough, skeptics weighed in with counter-claims.

Then, in 2016, something quite shocking happened: one of the original researchers publicly withdrew her support for the claim.

Researcher Dana Carney wrote, with bracing forthrightness, “I do not believe that power pose effects are real.” (As you can see in this link, Carney herself put those words in bold type.)

She went on to list her concerns about the initial study (small sample size, “flimsy” data, and so forth), to include her skepticism on her CV, and to discourage others from studying the topic. *

Wow!

What Next?

In theory, science is gradually “self-correcting.” That is: if one group of researchers arrives at an incorrect conclusion, other researchers will – over time – sleuth out their mistakes. (Max Planck wryly observed that the process might take a long time indeed. In his grim formula, opponents don’t change their minds; they die out.)

Looking at Carney’s example, researcher Julia Rohrer wondered if we could speed that process up. What would happen, she wondered, if we gave researchers a chance to change their minds? What if we invited them to do what Carney did?

She and her colleagues spread the word that they hoped researchers might publicly self-correct. As she puts it:

“The idea behind the initiative was to help normalize and destigmatize individual self-correction while, hopefully, also rewarding authors for exposing themselves in this way with a publication.”

The result? Several did.

And, the stories these thirteen researchers have to tell are fascinating.

In the first place, these self-corrections came from a remarkably broad range of fields in psychology. Some researchers studied extraversion; others, chess perception. One looked at the effect that German names have on professional careers; another considered the credibility of Swedish plaintiffs.

One – I’m not inventing this topic – considered the relationship between testosterone and wearing make-up.

Stories to Tell

These researchers, in fact, went into great detail — often painful detail — during their self-corrections.

They worried about small sample sizes, overlooked confounds, and mistakes in methodology. They noted that some replications hadn’t succeeded. Several acknowledged different versions of “p-hacking”: massaging the analysis until it produces statistically significant p values, which then rarely hold up under scrutiny.

A few, in fact, were remarkably self-critical.

Tal Yarkoni wrote these amazing words:

I now think most of the conclusions drawn in this article were absurd on their face. … Beyond these methodological problems, I also now think the kinds of theoretical explanations I proposed in the article were ludicrous in their simplicity and naivete—so the results would have told us essentially nothing even if they were statistically sound.

OUCH.

With equally scathing self-criticism, Simine Vazire wrote:

I cherry-picked which results to report. This is basically p-hacking, but because most of my results were not statistically significant, I did not quite successfully p-hack by the strict definition. Still, I cherry-picked the results that made the contrast between self-accuracy and peer accuracy the most striking and that fit with the story about evaluativeness and observability. That story was created post hoc and chosen after I had seen the pattern of results.

Others, however, critiqued their own methodology but held out hope that their conclusions might be correct: “These claims may be true, but not because of our experiment.”

What Should Teachers Do?

These self-corrections might tempt us, or our colleagues, to cynicism. “See? Science isn’t objective! Researchers are just makin’ stuff up…”

I would understand that reaction, but I think it misses the point.

In truth, all ways of knowing include weaknesses and flaws.

Science, unlike many ways of knowing, acknowledges that awkward truth. In fact, science tries to build into its methodology strategies to address that problem.

For this reason, research studies include so many (gruesomely tedious) details.

For this reason, psychology journals require peer review.

Indeed, for this reason, researchers try to replicate important findings.

Obviously, these strategies at self-correction don’t always work. Obviously, researchers do fool themselves…and us.

However, every time we read stories like these, they remind us that — as a profession — scientists take correction (and self-correction) unusually seriously.

In fact, I think the teaching profession might have something to learn from these brave examples.

How often do schools — how often do teachers — admit that a success we once claimed might not hold up under scrutiny?

As far as I know, we have few Yarkonis and Vazires in our ranks. (I certainly have never made this kind of public statement.)

In brief: this kind of self-correction makes me trust both the profession of psychology and these individual researchers even more. If you’re conspicuously willing to fess up when you’re wrong, you deserve a much stronger presumption of trustworthiness when you ultimately say you’re right.


* By the way: one of Carney’s co-authors continues to defend power poses emphatically. You can read Amy Cuddy’s response at the end of this article.

 

Beyond Slogans and Posters: The Science of Student Motivation
Andrew Watson

In many cases, cognitive science offers clear teaching advice.

You’re curious about working memory? We’ve got LOTS of strategies.

Wondering about the limits of attention? Good news!

Alas, in other cases, research doesn’t give us such clarity. If, for instance, I want to ramp up my students’ motivation, what should I do?

Should I put up posters with uplifting quotations?

Should I encourage grit?

Or, should I promote a “growth mindset”?

If you’d like answers to these specific questions — and the broader questions that prompt them — I have a place to start: meet Peps Mccrea.

In his new book Motivated Teaching, Mccrea sorts through dozens/hundreds of studies to create a clear, readable, research-informed, and practical guide to the science of student motivation.

Here’s the story…

Evolution, and Beyond

Mccrea, sensibly enough, starts with an evolutionary perspective.

Humans face a cognitive problem: the environment offers us so many stimuli that we can struggle to know where to focus our attention. (Teachers REALLY care where students focus their attention.)

Motivation helps solve this problem. If I’m motivated to do Y, I will attend to Y; if I attend to Y, I just might learn it. As Mccrea pithily writes, “Motivation is a system for allocating attention.” *

How then do teachers amp up motivation?

For an initial answer to this question, Mccrea turns to behavioral economics. In his formulation, students feel motivated to learn when

… they see the VALUE in what they’re learning,

… the ODDS ARE GOOD that they can learn it, and

… the COSTS of learning are low.

If we manipulate these variables just right — increasing the odds of learning, reducing the costs — those teacherly efforts create student motivation.
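(One compact way to express that cost-benefit logic, borrowed from the expectancy-value-cost literature; the notation below is mine, not a formula from Mccrea's book.)

```latex
% Expectancy-value-cost sketch (my notation, not Mccrea's):
% motivation rises with the perceived odds of success (expectancy)
% and the perceived value of the learning, and falls with its costs.
\text{Motivation} \propto (\text{Expectancy} \times \text{Value}) - \text{Cost}
```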

For instance, I’ve spent years emphasizing the importance of classroom routines. From my perspective, they reduce working memory load — usually a good thing.

From Mccrea’s perspective — thinking about that cost/benefit formula above — routines reduce the costs of learning. Once students have classroom systems and mental systems in place, they can easily use them to learn more complex material.

As Mccrea says: we should make “the process of learning easy, whilst keeping the content of learning challenging.” (You see what I mean about his catchy summaries?)

Learning Is Individual AND Social

Of course, learning takes place in a social context, and Mccrea studies that research pool as well.

For instance, he highlights the importance of school and classroom norms. If students see that, around here, we all act a particular way, they’re likelier to join in the normal behavior they see.

For this reason, Mccrea advocates taking the time we need to articulate and re-establish our norms. Early work now will pay off later in the year.

By the way, Mccrea’s chapter here reminded me of a powerful story. At the high school where I work, a new freshman once used a mild slur to insult a classmate. Before the teacher could do anything, one of the student leaders looked up and said five simple words: “We don’t do that here.”

Imagine the power of that sentence. If the new student wanted to be a part of “we” and “here,” he needed to change his behavior pronto.

Norm established.

The Big Picture

In Mccrea’s system — to answer the questions that opened this post — motivation does not result from uplifting posters. It produces grit, but does not result from it. (Mccrea does not specifically mention growth mindset.)

In fact, he specifically discounts “fun” as a good way to motivate students. Fun is an extrinsic driver: one that we should use sparingly, and as infrequently as possible.

Instead, he argues that if teachers focus on five key drivers of motivation, their cumulative results will foster motivation; and thereby attention; and thereby learning.

When you start reading Mccrea’s book, be aware that he’s explicitly aiming for “ultra-concise.” He has, in fact, boiled an early draft of 200,000 words to this slim volume of 10,000 words. (You read those numbers right — two hundred thousand words boiled down to ten thousand. **)

To achieve that goal, he gives few detailed examples, and saves research for links that you can follow. To imagine Mccrea’s suggestions at work in your context, you really should take time with the exercises he outlines on his page 112.

In other words: because he condenses research so effectively — like a bouillon cube — we readers need to soak it in our own context to let it expand and work its flavorful magic. You won’t get a detailed motivation checklist; you’ll get something much better — a way to think about motivation in many classroom contexts.

As someone who has written a book about research on motivation, I can tell you: Motivated Teaching is an excellent, readable, and practical book. It’s so short, you can easily read it twice.

In fact, after you’ve finished your first reading, you’ll be highly motivated to do so.


* Mccrea is GREAT at this sort of succinct formulation. In this review, I’m working really hard to limit the number of quotations from the book. I suspect I could compose a review almost entirely of his wise sentences.

** At the same time I read Mccrea’s book, I listened to Ollie Lovell’s podcast interview with him. This astonishing fact comes from that interview. By the way, if you DO like podcasts and you DON’T yet follow Lovell, now is an excellent time to start. He’s a one-man Learning and the Brain podcast in Australia.

Does Chewing Gum Improve Memory and Learning?
Andrew Watson

I recently read a striking Twitter claim from a well-known teacher: chewing gum helps memory and concentration.

In fact, according to the teacher, research supports this claim: the tweet cites this study as one of many to make this gum-chewing suggestion credible.

I’m always on the lookout for practical strategies to boost memory and attention. If chewing gum gets the job done, well, that’s exciting news. (I can already hear the catchy new jingle: “Double your learning, double your fun, with Doublemint, Doublemint, Doublemint gum!”)

At the same time, I confess, the idea has a Mozart-effect whiff of implausibility.

I can imagine that, perhaps, chewing gum raises alertness levels (for some people); this increased alertness might result in greater learning. But I suspect that effect would wear off fairly quickly.

Of course, if good research consistently supports the claim, then I’ll admit my instincts mislead me. However, I’d like to take a look at that research first…

Early Steps

We start with good news. The well-known teacher said that research supports the claim, and then cited research.

I’m amazed how often that second step doesn’t happen.

Folks regularly claim that “research shows” that a teaching technique provides specific benefits, but won’t identify any specific research. “Oh, you know, all the research shows that…” (Pro tip: in psychology, it is NEVER true that “all the research” shows anything. If someone says that to you, you can politely and confidently decline their advice.)

This teacher, however, gives us the crucial details. We can look for ourselves.

When we do, we get a bit more good news. This research study does indeed conclude that chewing gum helps with memory and attention. So far, so good.

At the same time, we can register some important concerns.

First: the study includes sixteen participants. Now, researchers have good reasons to run small studies; they let scholars know if they should run larger studies testing the same idea. However, teachers should never change our classroom based on such a small sample. We want MUCH more evidence. (How much more? Keep reading…)

Second: the study is published in The International Journal of Scientific Research and Engineering Development. I don’t know anything about it (although its website says that it does use peer review). However, I’m inclined to rely on memory research in journals that focus on memory, rather than on engineering.

Third: the researchers’ technique for measuring attention is rather hunchy. They videotaped participants and looked for behavior that suggested inattention. As I’ve written before, that strategy doesn’t sound highly scientific.

And so: we can conclude that — yes —  this research supports the claim that chewing gum improves memory and attention. But given its size, provenance, and methodology, we probably want more evidence before we start making big changes to our teaching.

The Adventure Continues

To see how others responded to this study, I plugged it into my two favorite AI platforms: scite.ai and connectedpapers.com. Alas, neither search produced any results. I’m guessing (but I don’t know) that the journal doesn’t meet the standards that these websites use.

Next, I searched for papers about chewing gum and learning.

The most cited paper, according to Google Scholar, comes from 2002. In it, Wilkinson and others conclude that chewing gum does indeed help memory (but not attention).

However, according to this paper by Tucha,

the chewing of gum did not improve participants’ memory functions. Furthermore, chewing may differentially affect specific aspects of attention. While sustained attention was improved by the chewing of gum, alertness and flexibility were adversely affected by chewing. In conclusion, claims that the chewing of gum improves cognition should be viewed with caution.

And this 2009 study by Smith concludes,

The results of this study showed that chewing gum increases alertness. In contrast, no significant effects of chewing gum were observed in the memory tasks. Intellectual performance was improved in the gum condition. Overall, the results suggest further research on the alerting effects of chewing gum and possible improved test performance in these situations.

In other words: three studies show a cluttered hodgepodge of results.

If we look at research findings about, say, retrieval practice, we find that – over and over – it helps! In this case, however, no consistent message comes through.

I’ve even looked for a meta-analysis about chewing gum and memory. (I wonder if I’m the only person in history to google “chewing gum meta-analysis.”)

This overview, noting that we can find clear evidence of both benefits and detriments, concludes that “the robustness of reported effects of gum chewing on cognition has to be questioned.”

To Sum Up

First: We have a surprising (to me) amount of research into the cognitive effects of chewing gum. However, that research doesn’t provide a clear picture of its benefits or detriments.

We might have school or classroom policies about gum, but we shouldn’t claim that research has given us clear guidance one way or another.

Does chewing gum improve memory? We just don’t know.

Second: People often tell us: “you should change your teaching or your school policies: research says so!”

When that happens, start by looking at the research they cite. If it doesn’t inspire confidence, keep looking…

Jerome Kagan: A Teacher’s Appreciation
Guest Blogger

A guest post by Rob McEntarffer

 

I didn’t get to learn about Jerome Kagan (1929-2021) during my teacher’s college training. I regret that.

While I was a teacher, my contact with Kagan’s research was limited to teaching about temperament research during the developmental psychology unit of the high school psychology class I taught for 13 years.

Students learned about how Kagan measured infant temperament, and how those reactions predicted temperament later in life (Kagan et al., 1978). This research often helped my students think about how their thinking and behavior might be influenced by earlier factors in their lives, which opened a door for some of them in how they thought about themselves.

Kagan’s research helped us start great, research-informed discussions.

As a public-school administrator (assessment/evaluation specialist), I now realize that I could have learned much more from Kagan’s research.

I often focus exclusively on specific aspects of teaching and learning (like cognitive load, working memory, and retrieval practice) and ignore other important elements. As Chew and Cerbin (2021), among many others, highlight, our models of teaching and learning need to include much more: student fear/mistrust, student mindset, and other self-perception and emotional factors that directly influence what students learn.

Kagan (2006) said:

“Although humans inherit a biological bias that permits them to feel anger, jealousy, selfishness and envy, … they inherit an even stronger biological bias for kindness, compassion, cooperation, love and nurture – especially toward those in need. This inbuilt ethical sense is a biological feature of our species.”

As I help teachers figure out how to create assignments that allow students to express what they are thinking, Kagan might remind me to think about how ethics, and an “inbuilt” ethical sense, could be usefully included in classroom discussions and assignments.

I experienced this sense often as a teacher: in my psychology classroom, our discussions about research often moved into discussions about ethics and feelings of compassion. We talked about what should be, not just what is.

As an administrator, Kagan can remind me to include these ideas in my current work. In the end, teaching and learning are also about ethics and care, not just about what environments create the most likely context for elaborative encoding.

I’m grateful for Jerome Kagan’s thoughtful, caring research, and thinking about this work will change how I work with teachers.

 

References:

Kagan, J., Lapidus, D., & Moore, M. (1978). Infant Antecedents of Cognitive Functioning: A Longitudinal Study. Child Development, 49(4), 1005-1023. doi:10.2307/1128740

Chew, S. L., & Cerbin, W. J. (2021). The cognitive challenges of effective teaching. The Journal of Economic Education, 52(1), 17-40. DOI: 10.1080/00220485.2020.1845266

Kagan, J. (2006). On the case for kindness. In A. Harrington & A. Zajonc (Eds.), The Dalai Lama at MIT. Cambridge, MA: Harvard University Press.


Dr. McEntarffer is an Assessment and Evaluation Specialist with the Lincoln Public School System in Lincoln, Nebraska.

You can read more at his blog, Not For Points.