Should teachers let students *figure out* mathematical ideas and processes *on their own*?

Or, should we *walk students through* those ideas/processes *step by step*?

This debate rages hotly, from eX-Twitter to California teaching standards.

As best I understand them, the arguments go like this:

If students figure out ideas and processes for themselves, they *think hard* about those mathematical ideas. (“Thinking hard” = **more learning**.)

And, they feel emotionally invested in their discoveries. (“Emotional investment” = **more learning**.)

Or,

If students attempt to figure out math ideas for themselves, they **first** have to contemplate *what they already know*. **Second,** they contemplate *where they’re going*. And **third,** they have to (basically) guess until they figure out how to get from start to finish.

Holding all those pieces — starting place, finish line, all the potential avenues in between — almost certainly overwhelms working memory. (“Overwhelmed working memory” = **less learning**.)

Therefore, teachers should walk students directly through the mathematical ideas/process with step-by-step “worked” examples. This process reduces cognitive load and builds schema. (“Reduced cognitive load” + “building schema” = **more learning**.)

Depending on your philosophical starting place, both arguments might sound plausible. Can we use research to answer the question?

## Enter the Meta

One problem with “using research to answer the question”: individual studies have yielded different answers.

While it’s not true that “you can find research that says anything,” it IS true — in this specific case — that some studies point one way and some point another.

When research produces this kind of muddle, we can turn to a mathematical technique called “meta-analysis.” Folks wise in the ways of math take MANY different studies and analyze all their results together.

If scholars do this process well, then we get an idea not of what ONE study says, but of what LOTS AND LOTS of well-designed studies say (on average).
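To make that averaging concrete, here is a minimal Python sketch of the core idea behind a fixed-effect meta-analysis, using made-up study numbers: each study’s effect size is weighted by the inverse of its variance, so that more precise studies count for more in the pooled result. (Real meta-analyses add refinements such as random-effects models and bias corrections.)

```python
# Hypothetical per-study effect sizes (Hedges' g) and their variances.
# Note that individual studies point in different directions.
effects = [0.62, 0.15, 0.55, -0.10, 0.48]
variances = [0.04, 0.09, 0.05, 0.12, 0.03]

# Inverse-variance weights: tighter (smaller-variance) studies weigh more.
weights = [1 / v for v in variances]

# Pooled effect: the weighted average across all studies.
pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
print(round(pooled, 3))
```

Notice that even with one negative study in the mix, the pooled estimate lands on a single “on average” answer.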

This process might also help us with a follow-up question: *how much do specific circumstances matter?*

For instance: do worked examples help younger students more than older? Do they help with — say — math but not English? And so forth.

Today’s news:

This recent meta-analysis looks at the benefits of “worked examples,” especially in math instruction.

It also asks about specific circumstances:

Do students benefit from generating “self-explanations” in addition to seeing worked examples?

Do they learn more when the worked examples include BOTH correct AND incorrect examples?

So: what did the meta-analysis find?

## Yes, No, No

The meta-analysis arrives at conclusions that — I suspect — surprise almost everyone. (If memory serves, I first read about it from a blogger who champions “worked examples,” and was baffled by some of this meta-analysis’s findings.)

In the **first** place, the meta-analysis found that *students benefit from worked examples*.

If you do speak stats, you’ll want to know that the g-value was 0.48: basically 1/2 of a standard deviation.

If you don’t speak stats, you’ll want to know that the findings were “moderate”: not a home run, but at least a solid single. (Perhaps another runner advanced to third as well.)
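One common way to translate that g of 0.48 into plainer terms (assuming roughly normal score distributions) is to ask what percentile the average worked-examples student would reach in the comparison group. A quick Python sketch:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

g = 0.48  # pooled effect size reported by the meta-analysis
percentile = 100 * normal_cdf(g)
print(round(percentile))  # roughly the 68th percentile
```

In other words: a student who would have scored at the 50th percentile without worked examples moves to roughly the 68th percentile with them.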

While that statement requires LOTS of caveats (not all studies pointed the same direction), it’s a useful headline.

In the dry language of research, the authors write:

“The worked examples effect yields a medium effect on mathematics outcomes whether used for practice or initial skill acquisition. Correct examples are particularly beneficial for learning overall.”

So, what’s the surprise? Where are those “no’s” that I promised?

Well, in the **second** place, *adding self-explanation to worked examples didn’t help* (on average). In fact, doing so reduced learning.

For lots of reasons, you might have expected the opposite. (Certainly I did.)

But, once researchers did all their averaging, they found that “pairing examples with self-explanation prompts may not be a fruitful design modification.”

They hypothesize that — more often than not — students’ self-explanations just weren’t very good, and might have included prior misconceptions.

## The Third Place?

In the **third** place came — to me, at least — the biggest surprise: contrasting *correct* worked examples with *incorrect* worked examples **didn’t benefit students**.

That is: they learned information better when they saw the right method, but didn’t explore wrong ones.

I would have confidently predicted the opposite. (This finding, in fact, is the one that shocked the blogger who introduced me to the study.)

Given these findings and calculations, I think we can come to three useful conclusions: in most cases, math students will learn new ideas…

… when introduced via worked examples,

… without being asked to generate their own explanations first,

… without being shown incorrect examples alongside correct ones.

## Always with the Caveats

So far, this blog post has moved from plausible reasons why worked examples help students learn (theory) to a meta-analysis showing that they mostly do help (research).

That journey always benefits from a recognition of the argument’s limitations.

**First**, most of the 43 studies included in the meta-analysis focused on middle- and high-school math: algebra and geometry.

For that reason, I don’t know that we can automatically extrapolate its findings to other — especially younger — grades; or to other, less abstract, topics.

**Second**, the findings about self-explanations point to an obvious potential exception.

The researchers speculate that self-explanation doesn’t help because students’ prior knowledge is incorrect and misleading. So: students’ self-explanations activate schema that complicate — rather than simplify — their learning.

For example: they write about one (non-math) study where students were prompted to generate explanations about the *causes of earthquakes*.

Because the students’ prior knowledge was relatively low, they generated low-quality self-explanations. And, they learned less.

This logic suggests an obvious exception to the rule. If you believe your students have *relatively high and accurate prior knowledge*, then letting them generate self-explanations might in fact help.

In my own work as an English teacher, I think of **participles and gerunds**.

As a grammar teacher, I devote LOTS of time to a discussion of participles; roughly speaking, a participle is “a verb used as an adjective.”

During these weeks, students will occasionally point out a gerund (roughly speaking, a “verb used as a noun”) and ask if it’s a participle. I say: “No, it’s something else, and we’ll get there later.”

When “later” finally comes, I put up sentences that include participles, and others that include similar gerunds.

I ask them to consider the differences *on their own and in small groups*; that is, I let them do some “self-explanation.”

Then I explain the concept precisely, including an English-class version of “worked examples.”

Because their prior knowledge is quite high — they already know participles well, and have already been wondering about those “something else” words that *look like* participles — they tend to have high quality explanations.

In my experience, students take gerunds on board relatively easily.

That is: when prior knowledge is high, self-explanation might (!) enhance the benefits of worked examples.

## TL;DR

A recent meta-analysis suggests that worked examples help students learn algebra and geometry (and perhaps other math topics as well).

It also finds that self-explanations probably *don’t* help, and that incorrect examples *don’t* help either.

More broadly, it suggests that meta-analysis can offer helpful and nuanced guidance when we face contradictory research about complex teaching questions.

Barbieri, C. A., Miller-Cotto, D., Clerjuste, S. N., & Chawla, K. (2023). A meta-analysis of the worked examples effect on mathematics performance. *Educational Psychology Review*, *35*(1), 11.