Using “Worked Examples” in Mathematics Instruction: a New Meta-Analysis
Andrew Watson

Should teachers let students figure out mathematical ideas and processes on their own?

Or, should we walk students through those ideas/processes step by step?

This debate rages hotly, from eX-Twitter to California teaching standards.

As best I understand them, the arguments go like this:

If students figure out ideas and processes for themselves, they think hard about those mathematical ideas. (“Thinking hard” = more learning.)

And, they feel emotionally invested in their discoveries. (“Emotional investment” = more learning.)

Or,

If students attempt to figure out math ideas for themselves, they first have to contemplate what they already know. Second, they contemplate where they’re going. And third, they have to (basically) guess until they figure out how to get from start to finish.

Holding all those pieces — starting place, finish line, all the potential avenues in between — almost certainly overwhelms working memory. (“Overwhelmed working memory” = less learning.)

Therefore, teachers should walk students directly through the mathematical ideas/process with step-by-step “worked” examples. This process reduces cognitive load and builds schema. (“Reduced cognitive load” + “building schema” = more learning.)

Depending on your philosophical starting place, both arguments might sound plausible. Can we use research to answer the question?

Enter the Meta

One problem with “using research to answer the question”: individual studies have yielded different answers.

While it’s not true that “you can find research that says anything,” it IS true — in this specific case — that some studies point one way and some point another.

When research produces this kind of muddle, we can turn to a mathematical technique called “meta-analysis.” Folks wise in the ways of math take MANY different studies and analyze all their results together.

If scholars do this process well, then we get an idea not of what ONE study says, but of what LOTS AND LOTS of well-designed studies say (on average).
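The core of that averaging step can be sketched in a few lines. This is a minimal illustration of inverse-variance weighting (the simplest, fixed-effect approach), not the model this particular meta-analysis used — published meta-analyses typically fit more sophisticated random-effects models — and the numbers below are invented:

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate: each study's
    effect size is weighted by its precision (1 / sampling variance),
    so bigger, more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies pointing in somewhat different directions:
effects = [0.6, 0.3, 0.5]        # effect sizes (g) from each study
variances = [0.04, 0.02, 0.05]   # each study's sampling variance
print(round(pooled_effect(effects, variances), 2))  # 0.42
```

The point is simply that studies with contradictory individual results can still yield a clear pooled answer once each one is weighted by how much evidence it actually carries.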

This process might also help us with some follow-up questions: how much do specific circumstances matter?

For instance: do worked examples help younger students more than older? Do they help with — say — math but not English? And so forth.

Today’s news:

This recent meta-analysis looks at the benefits of “worked examples,” especially in math instruction.

It also asks about specific circumstances:

Do students benefit from generating “self-explanations” in addition to seeing worked examples?

Do they learn more when the worked examples include BOTH correct AND incorrect examples?

So: what did the meta-analysis find?

Yes, No, No

The meta-analysis arrives at conclusions that — I suspect — surprise almost everyone. (If memory serves, I first read about it from a blogger who champions “worked examples,” and was baffled by some of this meta-analysis’s findings.)

In the first place, the meta-analysis found that students benefit from worked examples.

If you do speak stats, you’ll want to know that the g-value was 0.48: basically 1/2 of a standard deviation.

If you don’t speak stats, you’ll want to know that the findings were “moderate”: not a home run, but at least a solid single. (Perhaps another runner advanced to third as well.)
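For the curious, a standardized effect size like g is just the difference between two groups’ mean scores divided by a pooled standard deviation, with a small-sample correction applied. Here is a minimal sketch with invented numbers (not data from the meta-analysis):

```python
from math import sqrt

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) for one study:
    Cohen's d = (treatment mean - control mean) / pooled SD,
    then a small-sample bias correction is applied."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' correction
    return d * correction

# Hypothetical study: a worked-examples group averages 78 (SD 10),
# a control group averages 73 (SD 10), 30 students per group.
g = hedges_g(78, 73, 10, 10, 30, 30)
print(round(g, 2))  # 0.49: roughly half a standard deviation
```

So a g of 0.48 means the average student in the worked-examples condition scored about half a standard deviation above the average control student.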

While that statement requires LOTS of caveats (not all studies pointed the same direction), it’s a useful headline.

In the dry language of research, the authors write:

“The worked examples effect yields a medium effect on mathematics outcomes whether used for practice or initial skill acquisition. Correct examples are particularly beneficial for learning overall.”

So, what’s the surprise? Where are those “no’s” that I promised?

Well, in the second place, adding self-explanation to worked examples didn’t help (on average). In fact, doing so reduced learning.

For lots of reasons, you might have expected the opposite. (Certainly I did.)

But, once researchers did all their averaging, they found that “pairing examples with self-explanation prompts may not be a fruitful design modification.”

They hypothesize that — more often than not — students’ self-explanations just weren’t very good, and might have included prior misconceptions.

The Third Place?

In the third place came — to me, at least — the biggest surprise: contrasting correct worked examples with incorrect worked examples didn’t benefit students.

That is: they learned information better when they saw the right method, but didn’t explore wrong ones.

I would have confidently predicted the opposite. (This finding, in fact, is the one that shocked the blogger who introduced me to the study.)

Given these findings and calculations, I think we can come to three useful conclusions: in most cases, math students will learn new ideas…

… when introduced via worked examples,

… without being asked to generate their own explanations first,

… without being shown incorrect examples alongside correct ones.

Always with the Caveats

So far, this blog post has moved from plausible reasons why worked examples help students learn (theory) to a meta-analysis showing that they mostly do help (research).

That journey always benefits from a recognition of the argument’s limitations.

First, most of the 43 studies included in the meta-analysis focused on middle- and high-school math: algebra and geometry.

For that reason, I don’t know that we can automatically extrapolate its findings to other — especially younger — grades; or to other, less abstract, topics.

Second, the findings about self-explanations include an obvious potential solution.

The researchers speculate that self-explanation doesn’t help because students’ prior knowledge is incorrect and misleading. So: students’ self-explanations activate schema that complicate — rather than simplify — their learning.

For example: they write about one (non-math) study where students were prompted to generate explanations about the causes of earthquakes.

Because the students’ prior knowledge was relatively low, they generated low-quality self-explanations. And, they learned less.

This logic suggests an obvious exception to the rule. If you believe your students have relatively high and accurate prior knowledge, then letting them generate self-explanations might in fact be beneficial.

In my own work as an English teacher, I think of participles and gerunds.

As a grammar teacher, I devote LOTS of time to a discussion of participles; roughly speaking, a participle is “a verb used as an adjective.”

During these weeks, students will occasionally point out a gerund (roughly speaking, a “verb used as a noun”) and ask if it’s a participle. I say: “No, it’s something else, and we’ll get there later.”

When “later” finally comes, I put up sentences that include participles, and others that include similar gerunds.

I ask them to consider the differences on their own and in small groups; that is, I let them do some “self-explanation.”

Then I explain the concept precisely, including an English-class version of “worked examples.”

Because their prior knowledge is quite high — they already know participles well, and have already been wondering about those “something else” words that look like participles — they tend to have high quality explanations.

In my experience, students take gerunds on board relatively easily.

That is: when prior knowledge is high, self-explanation might (!) enhance the benefits of worked examples.

TL;DR

A recent meta-analysis suggests that worked examples help students learn algebra and geometry (and perhaps other math topics as well).

It also finds that self-explanations probably don’t help, and that incorrect examples don’t help either.

More broadly, it suggests that meta-analysis can offer helpful and nuanced guidance when we face contradictory research about complex teaching questions.


Barbieri, C. A., Miller-Cotto, D., Clerjuste, S. N., & Chawla, K. (2023). A meta-analysis of the worked examples effect on mathematics performance. Educational Psychology Review, 35(1), 11.

Building Thinking Classrooms in Mathematics, Grades K-12 by Peter Liljedahl
Erik Jahner, PhD

Initially, I looked at this title and thought, “not another best-practices book”; bookstores already have too many poor books on how to teach content effectively. However, I begrudgingly opened Building Thinking Classrooms in Mathematics, Grades K-12: 14 Teaching Practices for Enhancing Learning and found an unexpected reward. As a learning scientist, I was pleasantly surprised by Peter Liljedahl’s approach to education. There is no ivory-tower mentality here: no belief that teachers need to align with abstract theory or laboratory findings not grounded in practice. Liljedahl really sees teachers! The contents of this book come from countless observations and trials in real classrooms and the best practices that emerged from them. He begins with a basic question: what factors encourage “thinking” in the mathematics classroom? Then, based on extensive research, he unpacks 14 concrete and often deceptively simple recommendations that emerged.

Yes, a best-practices book can be a page-turner. I am not a math teacher, and I would say I did not have a pleasant experience with math in my youth. However, what I read here really resonated with me, and the recommendations for enhancing learning are not isolated to mathematics. The reader’s attention is drawn to practices that, at times, seem minor but can have big impacts on learning. Consistent with the author’s notion of encouraging thinking, the material is presented in a way that provokes curiosity. Amazingly simple questions spark interest, such as: where should students practice math? On whiteboards on the wall, whiteboards on the table, posterboards, or in notebooks? He takes us through each investigation, anticipating our thinking and ending each section with frequently asked questions that reveal he has had plenty of field experience with teachers and skeptics.

Each chapter engages the teacher’s likely goals and compares them to student goals. Throughout the book, I found myself in the narrative, which gave me insight into my own learning and teaching. Take group work, which is central to every active classroom: when we are instructors, we plan groupings carefully; but when we are students, we often interpret the instructor’s efforts differently, and we have our own social goals. Liljedahl brings these into alignment, so both student and teacher work toward deeper thinking. As the author points out, students and teachers love to think, and to think deeply, when the conditions facilitate rather than interfere or distract.

While each chapter ends with a summary of the main points in the form of macro- and micro-moves that we can take as educators, the meat of each chapter offers valuable context and backs up the claims in ways that allow us to spread the knowledge captured in these pages among our peers. I tried to critique every suggestion, but the author was particularly good at anticipating this doubt; points not addressed in the main narrative were given direct attention in the frequently-asked-questions section at the end of each chapter, a part I particularly enjoyed.

But best practices mean little in standardized systems that constrain our ability to create: “There is no more time. There is no room to add more.” Stop fretting: these pages are loaded with evidence refuting the idea that teachers are too constrained to enhance learning in these ways. The author breaks down curricular time into minute-by-minute activities, demonstrating that these practices enable efficient use of classroom time. Other concerns about meeting curricular demands are also addressed. Not all activities are curricular, and that’s okay; they often prepare the learner to do curricular activities effectively. Constrained by finances? Alternatives abound, supported by previous implementation and testing. If you have reasons not to enhance student learning as suggested, be prepared to have those concerns alleviated.

So the book is useful for teachers, but what about the researcher who yearns for an academic discussion? If this is you, you also have something great to learn in these pages. This book is an illustrative guide to one excellent way of doing learning-science research. The researcher will learn from Liljedahl’s communication and experience with teachers, but will also be tickled by the attention to detail and nuance that makes any scientific endeavor enjoyable. Science is about seeing, noticing, and letting the data teach us. That is what you will find here, making it an excellent, lighthearted college text for preparing teachers or researchers.

Often an education book offers much for the reader as both a teacher and a learner. This book is no exception. Take some of these practices to your own learning opportunities, places of work, research labs, and faculty meetings. Enjoy thinking deeply with Liljedahl.

Liljedahl, P. (2020). Building thinking classrooms in mathematics, grades K-12: 14 teaching practices for enhancing learning. Corwin Press.