Graphic Disorganizers; or, When Should Teachers Decorate Handouts?
Andrew Watson

Teachers regularly face competing goals. For instance:

On the one hand — obviously — we want our students to learn.

And, on the other hand, we equally obviously want them to feel safe, comfortable, at home.

To accomplish that second goal, we might decorate our classrooms. The more adorable cat photos, inspirational posters, and familiar artwork, the homier the classroom will feel.

But here’s the problem: what if all that decoration (in pursuit of goal #2) interferes with goal #1?

What if decorations inhibit learning?

The Story So Far

I’ve written about this topic a fair amount, and the story so far gives us reason to concentrate on that question.

So: do decorations get in the way of learning? According to this study: yes.

Is this a problem for all age groups? Research done by this team suggests: yes.

When I showed teachers all this research, they often raised a perfectly plausible doubt:

Don’t students get used to the decorations? According to this recent study: nope.

Given these studies (and many others), I think we’ve got a compelling narrative encouraging our profession to rethink decoration. While I don’t think that classrooms should be sterile fields … I do worry we’ve gone substantially too far down the “let’s decorate!” road.

“I’ve Still Got Questions”

Even with this research pool, I think teachers can reasonably ask for more information. Specifically: “what counts as a decoration?”

I mean: is an anchor chart decoration?

How about a graphic organizer?

A striking picture added to a handout? (If they’re answering questions about weather, why would it be bad to have a picture of a thunderstorm on the handout?)

An anchor chart might be “decorative.” But, if students use it to get their math work done, doesn’t it count as something other than a “decoration”?

In other words: if I take down an anchor chart, won’t my students learn less?

Because practically everything in the world can be made prettier, we’ve got an almost infinite number of things that might be decorated. (I’ve done some work at a primary school that has arrows embedded in the floor: arrows pointing to, say, Beijing or Cairo or Los Angeles. Does that count as “decoration”?)

For this reason, research to explore this question gets super detailed. But if we find enough detailed examples that more-or-less resemble our own classroom specifics, we can start to credit a “research-informed” answer.

Graphic Disorganizer?

A friend recently pointed me to a study about reading bar graphs.

This research team wanted to know if “decorated” bar graphs make learning harder for students in kindergarten, and in 1st and 2nd grade.

So, if a bar graph shows the number of gloves in the lost and found box each week, should the bar representing that number…

Be decorated with little glove icons?

Or, should it be filled in with stripes?

How about dots?

This study in fact incorporates four separate experiments; the researchers keep repeating their basic paradigm and modifying a variable or two. For this reason, they can measure quite precisely the problems and the factors that cause them.

And — as you remember — they’re working with students in three different grades. So: they’ve got LOTS of data to report…

The Headlines, Please…

Rather than over-decorate this blog post with a granular description, I’ll hit a few telling highlights.

First: iconic decorations inhibit learning.

That is: little gloves on the bar graph made it harder for students to learn to read those graphs correctly.

Honestly, this result doesn’t surprise me. Gloves are concrete and familiar, whereas bar graphs represent more abstract concepts. No wonder the little tykes get confused.

Second: stripes and dots also inhibit learning.

Once again, the students tend to count the objects contained within the bar — even little dots! — instead of observing the height of the bar.

This finding did surprise me a bit more. I wasn’t surprised that young learners focus on concrete objects (gloves, trees), but am intrigued to discover they also want to count abstract objects (lines, dots) within the bar.

Third: age matters.

That is: 1st graders did better than kindergarteners. And, 2nd graders did better than 1st graders.

On the one hand, this result makes good sense. As we get older, we get better at understanding more abstract concepts, and at controlling attention.

On the other hand, this finding points to an unfortunate irony. Our profession tends to emphasize decoration in classrooms for younger students.

In other words: we decorate most where decoration might do the most harm! (As a high-school teacher, I never got any instructions about decoration, and was never evaluated on it.)

In Brief

We teachers certainly might be tempted to make our environments as welcoming — even festive! — as possible.

And yet, we’ve got a larger (and larger) pool of research pointing out the distraction in all that decoration.

This concern goes beyond — say — adorable dolphin photos on the wall, or uplifting quotations on waterfall posters.

In this one study, something as seemingly harmless as dots in a bar graph can interfere with our students’ learning.

When it comes to decorating — even worksheets and handouts — we should keep the focus on the learning.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of repeated exposure to the visual environment on young children’s attention. Cognitive Science, 46(2), e13093.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105(2), 351.

The Jigsaw Advantage: Should Students Puzzle It Out?
Andrew Watson

The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.

I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
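
A quick aside for formula fans: Cohen’s d simply measures the gap between two group averages in units of their pooled standard deviation. This is the standard textbook definition, not anything specific to this meta-analysis:

```latex
% Cohen's d: the standardized difference between two group means
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}
```

In other words, a d of 1.20 claims that the average jigsaw student outscored the average control student by well over a full standard deviation.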

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (TL;DR: lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter, start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.

Overwhelmed Teachers: The Working-Memory Story (Part II) [Updated with Link]
Andrew Watson

Last week, I offered an unusual take on working memory in the classroom.

Typically, I (and other writers) focus on the dangers of students’ working memory overload. Of course, we SHOULD focus on that problem — when students’ working memory is overloaded, they stop learning (temporarily).

But last week, I focused on the dangers of a teacher’s working memory overload.

If I’m experiencing cognitive muddle, I won’t be able to explain concepts clearly, or answer questions coherently, or remember important school announcements. (Or, remember to buy the dog food on my drive home.)

So, I suggested teachers start by ANTICIPATING potential causes of working memory overload. (Say: teaching a complicated concept, or unusual stresses at home.)

We should also be able to IDENTIFY working memory overload when it happens. (When my own working memory gets overloaded, I lose track of sentences and start garbling words.)

Next up:

Step #3: SOLVING — or mitigating, or reducing — working memory problems.

As always, the specific strategies that benefit me might not work for you. As my mantra goes: “don’t just do this thing; instead, think this way.”

The Power of Routines

By definition, routines live in long-term memory. Therefore, I don’t need to process them in working memory.

For that reason, classroom routines reduce my working memory load. (Important additional benefit: they also reduce working memory load for my students.)

So: I (almost) always begin class with a “do now” exercise. When students enter the classroom, they see that I’ve written questions on the board. They sit down and start writing their answers in their notebooks.

Once that routine has formed, I can use my working memory to process the answers that they’re writing, not to think about what I should be doing at this moment.

After we discuss their answers to my “do now” questions, I (almost) always review the previous night’s homework. I then remind them of their homework for the upcoming class. (This habit means that I don’t have to scramble and shout the assignment at them as they’re going out the door.)

Turn and talk? We have a routine.

Cold call? We have a routine.

Write your answers on the board? See previous answer.

By the way, Peps Mccrea wisely notes that creating routines takes time. That is: we initially spend class time on routine building, and therefore have less time for — say — learning.

But: once those routines are in place, we GAIN lots more time than we spent. And, because my working memory load has been reduced, I’ve got more working memory headroom to teach effectively.

Offload the Job

Of course, lots of the teaching work we do requires nimble and effective response to moment-by-moment events — responses that can’t be made into a routine.

In these cases, recruiting working memory allies can be an enormous boon.

During the 2021-22 school year, I had the great good fortune of sharing a class with another teacher.

When I found myself getting verbally tangled — a clear sign of working memory overload — I would often hand off:

“Oh, wow, I can feel a mental traffic jam coming on. Mr. Kim, can you take over? What was I saying? Can you clarify the muddle I just made?”

He would then un-knot the explanatory thread I had tangled, and I’d have time to regain my mental bearings.

This strategy also helped out during hybrid teaching.

With most of my students seated in the classroom before me, I could quite easily forget all about the one or two “participating” from the iPad.

A wise colleague suggested creating a “buddy” system. The remote students picked an in-class buddy — and the buddy would check in to be sure they understood the discussion, heard their classmates’ comments, and had a chance to ask questions.

Because the buddy had that responsibility, I didn’t have to worry about it so much. Voila: working memory load reduced.

Offload, Part II

As I noted last week, working memory selects, holds, reorganizes, and combines bits of information.

So, the less information I have to “select and hold,” the lower the working memory load.

One easy way to offload the “select/hold” responsibilities: WRITE STUFF DOWN.

A few examples:

Following Ollie Lovell’s advice, I’ve started crafting “bullet-proof definitions” of important concepts. Because such a definition requires precision and nuance, it’s easy to get the words or the phrasing wrong.

For those reasons, I write down my bullet-proof definitions. I don’t have to use working memory to recall the nuances; I’ve got them on the page right in front of me.

Another strategy:

I write down the start/end times for each of my lesson-plan segments.

That is: my lesson plan might note that we’ll have a discussion about comic and tragic symbols in Act 3 Scene 4 of Macbeth — the famous “banquet scene.”

My notes will include the important line-numbers and passages to highlight.

And, I’ll also write down the times: the discussion begins at 10:12, and goes to 10:32.

This level of detail might sound silly. However, if I DON’T write those times, my working memory will be extra cluttered.

That is: part of my working memory will be processing our discussion (“Notice that Benjamin’s point contradicts Ana’s earlier argument. Can we resolve that disagreement?”).

But at least some of my working memory will be trying to calculate how much more time to spend (“If I let this part of the discussion go on too long, then we won’t have time for Act 4 Scene 1. When should I stop?”)

That extra working-memory drag will slow down my processing ability for the scene discussion.

These simple steps to offload working memory demands help me focus on the teaching part of my job.

Your Turn

The strategies I’ve outlined above have helped me reduce the working-memory demands of my own teaching. In theory, anyway, they should help me teach more effectively. (You’ll have to ask my students how effective they’ve really been…)

Of course, these specific strategies might not help you.

The goal, therefore, is NOT that you do what I do. Instead, I hope you’ll think the way I thought: how to anticipate, identify, and reduce working-memory problems.

The more time you devote to these steps, the lower your working memory demands will be. The result: your students too will appreciate the clarity and focus of your classroom.


Update: 2/4/24

It seems I’m not the only one focusing on working memory overload for teachers.

Here’s a recent blog post from Doug Lemov — with videos!

The Cold-Calling Debate: Potential Perils, Potential Successes
Andrew Watson

Some education debates focus on BIG questions:

high structure vs. low structure pedagogy?

technology: good or bad?

how much should teachers focus on emotions?

Other debates focus on narrower topics. For instance: cold calling. (“Cold calling” means “calling on students who haven’t raised their hands.”)

Proponents generally see several benefits:

Cold calling helps broaden check-for-understanding strategies. That is: it lets teachers know that MANY students understand, not just those who raise their hands.

It increases accountability.

It adds classroom variety.

And so forth.

Opponents likewise raise several concerns. Primarily:

Cold-calling could stress students out — even the ones not being cold called. That is: even the possibility that I might be called on could addle me.

Also, cold calling signals a particular power dynamic — one that runs contrary to many school philosophies.

Because both sides focus on different measures of success or peril, this debate can be difficult to resolve.

The Story So Far

Back in 2020, a friend asked about the cold calling debate. I looked for research, and — honestly — didn’t find much. The result of that search was this blog post.

In brief, the only study I found (focusing on college sophomores) found more benefits and fewer perils.

Students who had been cold-called a) asked more questions later on, and b) felt less stress.

But, one study is just one study. And, if you don’t teach college sophomores, you might not want to rely on research with that age group.

Today’s News

Research might offer teachers useful guidance, but we shouldn’t accept all research without asking a few questions.

One way to ensure we’re getting GOOD research-based advice is to look for wide ranges of evidence: evidence from…

… primary school AND high school

… science class AND history class

… small AND large schools

… Stockholm AND Johannesburg

And so forth.

Similarly, teachers should feel especially confident when researchers use different methodologies to explore their questions.

For this reason, I was especially pleased to find a cold-calling study published just last year.

This study doesn’t go in for random assignment or control groups (staples of other research paradigms). Instead, it uses a technique called “multimodal interaction analysis.”

I haven’t run into this technique before, so I’m honestly a newbie here. But the headline is: researchers used videotapes to study 86 cold-calling interactions.

In their analysis, they break the interaction down into a second-by-second record — noting the spoken words, the hand gestures, the length of pauses, the direction of the teacher’s gaze. (In some ways, it reminds me of Nuthall’s The Hidden Lives of Learners.)

Heck, they even keep track of the teacher’s use of modal verbs. (No, I’m not entirely sure what modal verbs are in German.)

By tracking the interactions with such extraordinary precision, they’re able to look for nuances and patterns that go beyond simply: “the teacher did or didn’t cold call.”

Conclusions?

Perhaps unsurprisingly, the study’s broad conclusion sounds like this: details matter.

The researchers offer a detailed analysis of one cold call, showing how the teacher’s build up to the moment created just the right support, and just the right tone, for the student to succeed.

They likewise detailed another cold call where the teacher’s body language and borderline insulting framing (“do you dare to answer?”) seem to have alarmed a shy student, who retreated into monosyllables.

By implication, this research suggests that both opponents and proponents are missing a key point.

We needn’t ask: “is cold calling good or bad?”

Instead, we should ask: “what precise actions — what words, what gestures, what habits — set the student up for a positive interaction? Which precise actions do the opposite?”

Once we get good answers, we can focus and practice! Let’s do more of the good stuff, and less of the harmful stuff.

TL;DR

“Is cold calling good or bad?” is probably the wrong question.

Recent research focusing on nuances of technique suggests that teachers can reduce the perils of cold calling to foster participation and enhance learning.


Morek, M., Heller, V., & Kinalzik, N. (2022). Engaging ‘silent’ students in classroom discussions: A micro-analytic view on teachers’ embodied enactments of cold-calling practices. Language and Education, 1-19.

Getting the Details Just Right: Highlighting
Andrew Watson

Because the school year starts right now, I’m using this month’s blog posts to give direct classroom guidance.

Last week, I wrote about a meta-analysis showing that — yup — retrieval practice is awesome.

Teachers should be aware of a few details (e.g.: “brain dumps” are among the least effective kinds of retrieval practice).

But for the most part, asking students to retrieve stuff (facts, processes, etc.) helps them remember that stuff better — and to transfer their understanding to new situations.

This week, let’s talk about another strategy that teachers and students might use: highlighting.

We know that retrieval practice is awesome. Is highlighting equally awesome? More or less so? When and how should students highlight?

Start Here

For several years, the go-to answer to this question has come from this research summary, by John Dunlosky, Dan Willingham, and others.

Their rather bleak conclusion:

we rate highlighting and underlining as having low utility. In most situations that have been examined and with most participants, highlighting does little to boost performance.

It may help when students have the knowledge needed to highlight more effectively, or when texts are difficult, but it may actually hurt performance on higher level tasks that require inference making. (emphasis added)

They reached this conclusion 10 years ago. Do we know anything more today?

Who Times Two

Last year, Ponce, Mayer & Méndez published a meta-analysis looking at the potential benefits of highlighting.

They found two key variables not included in the earlier research summary.

First: the students’ age/grade.

Second: the person doing the highlighting.

That is: they found that …

If the INSTRUCTOR does the highlighting, doing so benefits college students AND K-12 students, but

If the STUDENT does the highlighting, doing so benefits college students but NOT K-12 students.

These findings make rough-n-ready sense.

We teachers know what the important ideas are. For that reason, our highlighting helps students (on average) focus on those important ideas — so they learn and understand more.

Students — especially younger students — probably don’t know what the important ideas are. For that reason, their own highlighting might not accentuate important ideas (on average), and so they don’t benefit from highlighting.

When I ask a student why he highlighted a passage, I sometimes get a version of this answer: “Honestly, I realized I hadn’t highlighted anything in a few pages, so I thought I really needed to find something that sounded important.”

Little wonder, then, that my 10th graders don’t benefit from highlighting.

Classroom Specifics

Of course, this meta-analysis also arrived at other useful conclusions.

This first one came to me as something of a shock: although highlighting does benefit some students, reviewing the highlights doesn’t.

The researchers write:

“on average, reviewing highlighted text previously highlighted by learners did not improve learning significantly more than students who only read or studied the text.”

I infer from this finding that highlighting helps (if at all) because it prompts students to FOCUS ON and THINK ABOUT information the first time they read it.

It does not, however, help students when they return to the highlighted passage later.

That’s useful to know!

Another conclusion is less of a surprise: training helps.

That is: we can help students (yes, even K-12 students) highlight more effectively.

According to the meta-analysis, we can…

… show students examples of good and bad highlighting,

… help them distinguish between main ideas and secondary ones, and

… emphasize that too much highlighting reduces the benefit.

For example:

I myself don’t ask my English students to highlight much. But, I do ask them to note very specific parts of the text.

When we read Macbeth, I ask them to circle/highlight every time they see the words “do,” “done,” or “deed.” (Believe it or not, those words show an important pattern in the play.)

When we read Their Eyes Were Watching God, they highlight various symbols: hair, gates/fences, mules, trees.

I hope that these very modest highlights help students spot patterns they otherwise would have missed — without distracting them too much from other important parts of the story.

In other words: used judiciously and narrowly, highlighting can provide some benefit.

TL;DR

This recent meta-analysis gives us helpful specifics on how best to use highlighting.

Ideally, we teachers do the highlighting ourselves, especially in K-12 classrooms; we teach students how to highlight (not too much!); we don’t encourage them to review their highlights.

In fact, as we saw in last week’s post, retrieval practice should replace “review the highlights” as a way to review and study.


Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Ponce, H. R., Mayer, R. E., & Méndez, E. E. (2022). Effects of learner-generated highlighting and instructor-provided highlighting on learning from text: A meta-analysis. Educational Psychology Review, 34(2), 989-1024.


Getting the Details Just Right: Retrieval Practice
Andrew Watson

As we gear up for the start of a new school year, we’re probably hearing two words over and over: retrieval practice.

That is: students have two basic options when they go back over the facts, concepts, and procedures they’ve learned.

Option 1: they could review it; that is, reread a passage, or rewatch a video, or review their notes.

Option 2: they could retrieve it; that is, ask themselves what they remember about a passage, a video, or a page of notes.

Well, the research verdict is clear: lots of research shows that OPTION 2 is the winner. The more that students practice by retrieving, the better they remember and apply their learning in the long term.

This clear verdict, however, raises lots of questions.

How, exactly, should we use retrieval practice in classrooms?

Does it work in all disciplines and all grades?

Is its effectiveness different for boys and girls?

Does retrieval practice help students remember material that they didn’t practice?

Do multiple choice questions count as retrieval practice?

And so forth.

Given that we have, literally, HUNDREDS of studies looking at these questions, we teachers would like someone to sort through all these sub-questions and give us clear answers.

Happily, a research team recently produced just such a meta-analysis. They looked at 222 studies including more than 48,000 students, and asked nineteen specific questions.

These numbers are enormous.

Studies often get published with a few dozen participants – which is to say, a lot less than 48,000.

Researchers often ask 2 or 3 questions – or even 1. I don’t recall ever seeing a study or meta-analysis considering nineteen questions.

As a result, we’ve got a lot to learn from this meta-analysis, and can feel more confident than usual about its conclusions.

The Big Picture

For obvious reasons, I won’t discuss all nineteen questions in detail. Instead, I’ll touch on the big-picture conclusions, highlight some important questions about practical classroom implementation, and point out a few surprises.

The high-level findings of this meta-analysis couldn’t be more reassuring.

YES: retrieval practice enhances long-term memory.

YES: in fact, it enhances memory of facts and concepts, and improves subsequent problem solving. (WOW.)

YES: it benefits students from kindergarten to college, and helps in all 18 (!!) disciplines that the researchers considered.

NO: the student’s gender doesn’t matter. (I was honestly a little surprised they studied this question, but since they’ve got an answer I’m reporting it here.)

I should note that these statistical results mostly fall in the “medium effect size” range: a Hedges’ g of something like 0.50. Because I’m commenting on so many findings, I won’t comment on statistical values unless they’re especially high or low.
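
For readers who want the formula: Hedges’ g is just Cohen’s d multiplied by a correction factor that trims the slight upward bias d shows in small samples. Here’s the standard approximation (again, textbook material rather than anything specific to this meta-analysis):

```latex
% Hedges' g: Cohen's d shrunk by a small-sample correction factor
g \approx d \left( 1 - \frac{3}{4(n_1 + n_2) - 9} \right)
```

With samples of any reasonable size, that correction is tiny. So you can read a g of 0.50 just as you’d read a d of 0.50: the average retrieval-practice student outscored the average control student by about half a standard deviation.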

So the easy headline here is: retrieval practice rocks.

Making Retrieval Practice Work in the Classroom

Once teachers know that we should use retrieval practice, we’ve got some practical questions about putting it into practice.

Here again, this meta-analysis offers lots of helpful guidance.

Does it help for students to answer similar questions over multiple days?

Yes. (Honestly, not really surprising – but good to know.)

More specifically: “There is a positive relationship between the number of [retrieval practice] repetitions and the [ultimate learning outcome], indicating that the more occasions on which class content is quizzed, the larger the learning gains.”

Don’t just use retrieval practice; REPEAT retrieval practice.

Is feedback necessary?

Feedback significantly increases the benefit of retrieval practice – but the technique provides benefits even without feedback.

Does the mode matter?

Pen and paper, clicker quizzes, online platforms: all work equally well.

Me: I write “do now” questions on the board and my students write down their answers. If you want to use Quizlet or mini-whiteboards, those strategies will work just as well.

Does retrieval practice help students learn untested material?

This question takes a bit of explaining.

Imagine I design a retrieval exercise about Their Eyes Were Watching God. If I ask my students to recall the name of Janie’s first husband (Logan Killicks), that question will help them remember his name later on.

But: will it help them remember the name of her second husband? Or, her third (sort-of) husband?

The answer is: direct retrieval practice questions help more, but this sort of indirect prompt has a small effect.

In brief, if I want my students to remember the names Jody Starks and Vergible Woods, I should ask them direct questions about those husbands.

Shiver Me Timbers

So far, these answers reassure me, but they don’t surprise me.

However, the meta-analysis did include a few unexpected findings.

Does the retrieval question format matter? That is: is “matching” better than “short answer” or “free recall” or “multiple choice”?

To my surprise, “matching” and “fill-in-the-blank” produce the greatest benefits, and “free recall” the least.

This finding suggests that the popular “brain dump” approach (“write down everything you remember about our class discussion yesterday!”) produces the fewest benefits.

I suspect that “brain dumps” don’t work as well because, contrary to the advice above, they don’t directly target the information we want students to remember.

Which is more effective: a high-stakes or a low-stakes format?

To my astonishment, both worked (roughly) equally well.

So, according to this meta-analysis, you can grade or not grade retrieval practice exercises. (I will come back to this point below.)

Should students collaborate or work independently on retrieval practice answers?

The studies included in the meta-analysis suggest no significant difference between these approaches. However, the researchers note that they don’t have all that many studies on the topic, so they’re not confident about this answer. (For a number of reasons, I would have predicted that individual work helps more.)

Beyond the Research

I want to conclude by offering an opinion that springs not from research but from experience.

For historical reasons, “retrieval practice” had a different name. Believe it or not, it was initially called “the testing effect.” (In fact, the authors of this meta-analysis use this term.)

While I understand why researchers use it, I think we can agree that “the testing effect” is a TERRIBLE name.

No student anywhere wants to volunteer for more testing. No teacher anywhere either.

And – crucially – the benefits have nothing to do with “testing.” We don’t need to grade them. Students don’t need to study. The retrieving itself IS the studying.

For that reason, I think teachers and schools should focus as much as possible on the “retrieval” part, and as little as possible on the “testing.”

No, HONESTLY, students don’t need to be tested/graded for this effect to work.

TL;DR

Retrieval practice — in almost any form — helps almost everybody learn, remember, and use almost anything.

As long as we don’t call it “testing,” schools should employ retrieval strategically and frequently.


Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399.

Using “Worked Examples” in Mathematics Instruction: a New Meta-Analysis
Andrew Watson

Should teachers let students figure out mathematical ideas and processes on their own?

Or, should we walk students through those ideas/processes step by step?

This debate rages hotly, from eX-Twitter to California teaching standards.

As best I understand them, the arguments go like this:

If students figure out ideas and processes for themselves, they think hard about those mathematical ideas. (“Thinking hard” = more learning.)

And, they feel emotionally invested in their discoveries. (“Emotional investment” = more learning.)

Or,

If students attempt to figure out math ideas for themselves, they first have to contemplate what they already know. Second, they contemplate where they’re going. And third, they have to (basically) guess until they figure out how to get from start to finish.

Holding all those pieces — starting place, finish line, all the potential avenues in between — almost certainly overwhelms working memory. (“Overwhelmed working memory” = less learning.)

Therefore, teachers should walk students directly through the mathematical ideas/process with step-by-step “worked” examples. This process reduces cognitive load and builds schema. (“Reduced cognitive load” + “building schema” = more learning.)

Depending on your philosophical starting place, both arguments might sound plausible. Can we use research to answer the question?

Enter the Meta

One problem with “using research to answer the question”: individual studies have yielded different answers.

While it’s not true that “you can find research that says anything,” it IS true — in this specific case — that some studies point one way and some point another.

When research produces this kind of muddle, we can turn to a mathematical technique called “meta-analysis.” Folks wise in the ways of math take MANY different studies and analyze all their results together.

If scholars do this process well, then we get an idea not what ONE study says, but what LOTS AND LOTS of well-designed studies say (on average).
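
For the mathematically curious, the simplest version of that averaging is a precision-weighted mean: each study’s effect size counts more when the study measures it more precisely (that is, when its variance is smaller). Here’s the textbook fixed-effect estimator as an illustrative sketch; real meta-analyses often use fancier random-effects models, but the core logic (more precise studies get more say) stays the same:

```latex
% Fixed-effect meta-analytic estimate: a precision-weighted
% average of the individual study effect sizes d_i
\hat{\theta} = \frac{\sum_i w_i \, d_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{\operatorname{Var}(d_i)}
```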

This process might also help us with some follow up questions: how much do specific circumstances matter?

For instance: do worked examples help younger students more than older? Do they help with — say — math but not English? And so forth.

Today’s news:

This recent meta-analysis looks at the benefits of “worked examples,” especially in math instruction.

It also asks about specific circumstances:

Do students benefit from generating “self-explanations” in addition to seeing worked examples?

Do they learn more when the worked examples include BOTH correct AND incorrect examples?

So: what did the meta-analysis find?

Yes, No, No

The meta-analysis arrives at conclusions that — I suspect — surprise almost everyone. (If memory serves, I first read about it from a blogger who champions “worked examples,” and was baffled by some of this meta-analysis’s findings.)

In the first place, the meta-analysis found that students benefit from worked examples.

If you do speak stats, you’ll want to know that the g-value was 0.48: basically 1/2 of a standard deviation.

If you don’t speak stats, you’ll want to know that the findings were “moderate”: not a home run, but at least a solid single. (Perhaps another runner advanced to third as well.)

While that statement requires LOTS of caveats (not all studies pointed the same direction), it’s a useful headline.

In the dry language of research, the authors write:

“The worked examples effect yields a medium effect on mathematics outcomes whether used for practice or initial skill acquisition. Correct examples are particularly beneficial for learning overall.”

So, what’s the surprise? Where are those “no’s” that I promised?

Well, in the second place, adding self-explanation to worked examples didn’t help (on average). In fact, doing so reduced learning.

For lots of reasons, you might have expected the opposite. (Certainly I did.)

But, once researchers did all their averaging, they found that “pairing examples with self-explanation prompts may not be a fruitful design modification.”

They hypothesize that — more often than not — students’ self explanations just weren’t very good, and might have included prior misconceptions.

The Third Place?

In the third place came — to me, at least — the biggest surprise: contrasting correct worked examples with incorrect worked examples didn’t benefit students.

That is: they learned information better when they saw the right method, but didn’t explore wrong ones.

I would have confidently predicted the opposite. (This finding, in fact, is the one that shocked the blogger who introduced me to the study.)

Given these findings and calculations, I think we can come to three useful conclusions: in most cases, math students will learn new ideas…

… when introduced via worked examples,

… without being asked to generate their own explanations first,

… without being shown incorrect examples alongside correct ones.

Always with the Caveats

So far, this blog post has moved from plausible reasons why worked examples help students learn (theory) to a meta-analysis showing that they mostly do help (research).

That journey always benefits from a recognition of the argument’s limitations.

First, most of the 43 studies included in the meta-analysis focused on middle- and high-school math: algebra and geometry.

For that reason, I don’t know that we can automatically extrapolate its findings to other — especially younger — grades; or to other, less abstract, topics.

Second, the findings about self-explanations include an obvious potential solution.

The researchers speculate that self-explanation doesn’t help because students’ prior knowledge is incorrect and misleading. So: students’ self-explanations activate schema that complicate — rather than simplify — their learning.

For example: they write about one (non-math) study where students were prompted to generate explanations about the causes of earthquakes.

Because the students’ prior knowledge was relatively low, they generated low-quality self-explanations. And, they learned less.

This logic suggests an obvious exception to the rule. If you believe your students have relatively high and accurate prior knowledge, then letting them generate self-explanations might in fact benefit students.

In my own work as an English teacher, I think of participles and gerunds.

As a grammar teacher, I devote LOTS of time to a discussion of participles; roughly speaking, a participle is “a verb used as an adjective.”

During these weeks, students will occasionally point out a gerund (roughly speaking, a “verb used as a noun”) and ask if it’s a participle. I say: “No, it’s something else, and we’ll get there later.”

When “later” finally comes, I put up sentences that include participles, and others that include similar gerunds.

I ask them to consider the differences on their own and in small groups; that is, I let them do some “self-explanation.”

Then I explain the concept precisely, including an English-class version of “worked examples.”

Because their prior knowledge is quite high — they already know participles well, and have already been wondering about those “something else” words that look like participles — they tend to have high quality explanations.

In my experience, students take gerunds on board relatively easily.

That is: when prior knowledge is high, self-explanation might (!) boost the benefit of worked examples.

TL;DR

A recent meta-analysis suggests that worked examples help students learn algebra and geometry (and perhaps other math topics as well).

It also finds that self-explanations probably don’t help, and that incorrect examples don’t help either.

More broadly, it suggests that meta-analysis can offer helpful and nuanced guidance when we face contradictory research about complex teaching questions.


Barbieri, C. A., Miller-Cotto, D., Clerjuste, S. N., & Chawla, K. (2023). A meta-analysis of the worked examples effect on mathematics performance. Educational Psychology Review, 35(1), 11.

“Teaching” Helps Students Learn: New Research
Andrew Watson

Not even two months ago, I admitted my skepticism about a popular teaching technique.

While I accept that “students teaching students” SOUNDS like a great idea, I nonetheless worry about the practical application of this idea:

Understanding a new idea requires lots of mental resources. Explaining a new idea requires even more. All those cognitive demands might overwhelm a student’s working memory.

Even if students have the mental resources to accomplish these tasks, how can we be sure that their peers are — in fact — LEARNING the new ideas they’re being taught? For instance: what if the student-teachers misunderstood the material they’re meant to teach?

Peers can intimidate. If teachers have “first day of school” anxiety dreams, imagine how students feel when they must take on the teacher’s job. (And: they don’t have our training and experience.)

So: while I think it’s possible that students benefit from teaching their peers, making this pedagogy successful will take LOTS of preparation, skill, and humility.

Today’s Update: Does the Audience Matter?

Happily, Prof. Dan Willingham recently highlighted a new study exploring this pedagogical question. Specifically, researchers wanted to know if it matters whom the students are teaching.

College students in China watched a two-minute video on synapses, specifically:

how signals are transmitted across neurons in the human nervous system and the role of action potentials, calcium ions, synaptic vesicles, neurotransmitters, sodium ions, and receptors.

After a few extra minutes of preparation, they then “taught” a lesson on this topic.

One third of the participants explained chemical synapses to 7 peers;

one third explained to 1 peer;

and the final third explained to a video camera.

Students in all three groups were instructed that the peers would have to take a test based on these explanations.

So, what effect did the audience have on the student doing the explaining?

Results and Conclusions

The researchers had hypothesized that the presence of peers would ramp up stress and reduce the benefits of this teaching methodology.

For that reason, they suspected that students would do better if they taught their lesson to the video camera instead of to live human beings.

Sure enough, students who taught to the camera did better on basically every measurement.

They offered more thorough explanations (Cohen’s d values here ranged from 0.95 – 1.23: unusually high numbers).

They remembered the information better an hour later.

They transferred their understanding to new questions more effectively.

They felt less stress, and lower cognitive load.

As the authors write: “minimizing the social presence of the audience [by having students teach to a camera] during teaching resulted in maximizing learning outcomes.”

Classroom Implications

At first look, this study seems to suggest that — sure enough! — students DO learn more when they teach.

Alas, I don’t think we can draw that conclusion.

First: this study didn’t measure that question. That is: it didn’t include a control condition where students used some other method to study information about synapses.

This study DOES suggest that teaching to a camera helps more than teaching to peers. But it DOESN’T suggest that teaching (to a camera, or to peers) helps more than something else.

Second: I’m not sure that the verb “teach” makes sense in this context.

The students explained synapses to a camera, and they believed that another student would watch the video and take a test on it.

I suppose we can call that “teaching.” But that’s a very niche-y version of it.

And, in my experience, it’s not AT ALL what teachers think of when they hear about this methodology. More often, students break up into groups to study small parts of a process, and then circulate and “teach” the other groups what they learned.

Third: how would this “teach the camera” plan work in the classroom?

The “explain to a camera” approach might work better than an “explain to peers” version. But I imagine at least two practical problems.

#1: logistically, how does it work? Do I have 25 students explaining to 25 separate cameras simultaneously? Do I have a separate place with cameras where students go to record?

#2: In this study, researchers told participants that other students would watch their videos and be tested on their understanding.

Presumably this statement made the teacher-students quite conscientious about their explanations. For that reason (probably), they thought harder and therefore remembered more.

That is: the camera method helped students learn largely because participants believed that others relied on their teaching.

If, however, I use this strategy in my class, that causal chain (conscientiousness –> thinking –> remembering) could easily break down.

Either I DO use those videos to help other students learn — in which case I have to review and critique them scrupulously;

Or I DON’T use those videos — in which case my students know they don’t really have to be so conscientious. (Reduced conscientiousness –> reduced thinking –> reduced memory.)

These practical questions might sound mundane, even grouchy. But I’m not trying to be grouchy — I’m trying to help my students learn material!

TL;DR

A recent study suggests that college students benefit more from “teaching” if they teach to a camera than if they teach peers.

Although I’m inclined to believe these results — they certainly make a lot of sense — I still worry that a “students-teaching-students” pedagogy sounds better in theory than it might work in practice.


Wang, F., Cheng, M., & Mayer, R. E. (2023). Improving learning-by-teaching without audience interaction as a generative learning activity by minimizing the social presence of the audience. Journal of Educational Psychology.

Should Teachers Explain or Demonstrate?
Andrew Watson

If I were a chess teacher, I would want my newbies to understand …

… how a bishop moves,

… how castling works,

… what checkmate means.

To help them understand, I could…

show them (“see how this piece moves; now see how that piece moves”)

tell them (“checkmate is defined as…”).

Both strategies sound plausible. Both probably help, at least a little bit.

Is one better than the other?

Today’s Research

I recently came across a fascinating study that explores this question.

In this research, two strangers met over an online puzzle — sort of a maze with prizes at the end of various paths.

Sometimes, one stranger could EXPLAIN to the other the best strategy to get the most points. (“Get the pink triangles, then the hollow squares, then the green circles.”)

Other times, one stranger could SHOW the other the winning path. (“Watch me go this way, now this way, now this way.”)

Which method worked better, show or tell?

PLOT TWIST.

In this case, the answer depended on the complexity of the puzzle.

For simple puzzles, both methods worked equally well.

For complex puzzles, telling helped more than showing.

I would have been surprised if there were a straightforward answer to the question; I am, therefore, more inclined to believe this “it depends” answer.

Take Two

This result — explaining complexity > showing complexity — prompted the researchers to test a second hypothesis.

In this case, the research details get very tricky, so I won’t go into them. But the basic idea was:

Perhaps both words and actions can explain concrete things, but

Perhaps words do better than actions at explaining abstract things.

Sure enough, the second experiment supported that hypothesis.

As the researchers say in their first paragraph:

Our findings suggest that language communicates complex concepts by directly transmitting abstract rules. In contrast, demonstrations transmit examples, requiring the learner to infer the rules.

In brief, the more abstract and complex the concept, the more important the words.

Teaching Implications?

Before we rush to reform our teaching, we should notice several key points about this study:

It involved adults working with other adults, and strangers working with strangers.

The participants were not — as far as I know — teachers. That is: they have neither expertise nor training in helping others understand.

The task involved (sort of) solving mazes. I’m an English teacher; my teaching — and perhaps your teaching — doesn’t focus on maze-solving kinds of mental activity.

In other words, because this research differs A LOT from typical classroom work, its findings might not apply precisely to classroom work.

Teaching Implications!!

That said, this study reminds me of an important lesson:

Practice. My. Words.

That is: when I’m explaining a concept to my students for the first time, I should script and rehearse my explanation carefully.

Now, because I’ve been teaching for a few centuries, I’m occasionally tempted to wing it.

Yes, “indirect object” is a tricky concept … but I understand it well, and I’ve explained it frequently over the years, and I’m sure I’ll do just fine…

No, wait, stop it. This research reminds me: words really matter for helping students understand abstractions.

I need to get those words just right, and doing so will take time, thought, and concentration. (Ollie Lovell emphasizes a similar idea when he writes about the importance of “bullet-proof definitions”; for instance, in this book.)

A second point jumps out at me as well.

This study contrasts showing and telling. Of course, most of the time we combine showing and telling.

As I’ve written before, Oliver Caviglioli’s Dual Coding offers a comprehensive, research-informed exploration of this complex blend.

When I think about dual coding, I typically focus on the “showing/drawing” half of the “dual.” This study, however, reminds me that the “telling” part is equally important — and, in the case of highly abstract concepts, might even be more important.

 

In brief, in my chess classroom:

I can simply show my students how bishops move: that’s easy.

But “checkmate” is complex. I should both show and tell — and get the telling just right.


Sumers, T. R., Ho, M. K., Hawkins, R. D., & Griffiths, T. L. (2023). Show or tell? Exploring when (and why) teaching with language outperforms demonstration. Cognition, 232, 105326.

Book Review: Teaching Secondary Science, by Adam Boxer
Andrew Watson

Let’s start by making this simple:

First: You should absolutely buy Adam Boxer’s Teaching Secondary Science: A Complete Guide. Sooner is better than later.

Second: You will probably not READ Boxer’s book so much as you will STUDY it. Have a pen handy; some sticky notes; your favorite memory app. Whatever system you use to keep track of big ideas and vital details — have it ready to work.

Now that I’ve been bossy, let me explain why.

Two Big Surprises

Surprise #1:

Book cover for Adam Boxer’s Teaching Secondary Science: A Complete Guide.

I myself don’t teach high-school science. (I taught 10th- and 12th-grade English, and worked at a summer camp for 8- to 14-year-olds.)

So, the title (Teaching Secondary Science) might suggest that the book isn’t for me.

Well, Boxer’s book (and the precision of his thinking) will absolutely make me a better English teacher; I suspect his approach will benefit almost any teacher.

Here’s why…

Surprise #2:

Longtime readers know my mantra: “don’t just do this thing; instead, think this way.”

That is: cognitive science research cannot provide us with a script (“do this thing”). Instead, that research CAN give us ways to think about memory and attention and motivation and stress. When we “think this way” about those topics, we’ll have better ideas about our teaching.

Well, Boxer’s book comes as close as any to effectively defying this mantra.

His book includes a GREAT MANY “do this thing” kind of instructions.

Phrase your question this way, not that way.

Present topics in this order, not that order.

Calculate cognitive load with this formula, not that formula.

You might think, given my mantra, I’d resist the specificity of his advice.

And yet, over and over, I found myself agreeing with his logic, and believing that I’ll do better classroom work if I understand and follow several of his scripts.

To my astonishment, I’m highly tempted to “do things Boxer’s way.” Why? Because he’s already done so much thinking for me.

Case in Point

I recently discussed Boxer’s book with a group of friends. All of us had highlighted this specific advice:

When introducing a concept, start with examples, not definitions.

Why?

Because definitions are necessarily abstract, and abstraction increases working memory load.

Examples, in contrast, live comfortably in the familiar, concrete world. This very familiarity and concreteness reduce WM load, and thereby make learning easier.

When my friends and I tried to apply this advice to our own teaching world, we immediately saw its usefulness.

The Spanish teacher said: don’t start with the abstract definition of the subjunctive; start with familiar examples in English.

The PD provider said: don’t start with abstract definitions of “declarative” and “procedural” memory; start with concrete classroom examples.

And so forth.

Two points merit notice here.

First: although Boxer writes about science instruction, his guidance applies widely across disciplines and age groups.

Second: although Boxer’s advice stems from (abstract) cognitive psychology, he frames it in (concrete) teaching suggestions.

That is: over and over, Boxer’s book practices what it preaches. His book does what he tells us teachers should do.

You perhaps have heard a conference speaker give passionate teaching advice (“never talk for more than ten minutes!”), only to defy this advice in his hour-long talk. Boxer carefully avoids such hypocrisy.

The Big One

A few of my opinions in this interdisciplinary field approach heresy. Here’s one:

In my view, cognitive load theory helps experts talk with other experts about working memory load in the classroom.

Paradoxically, however, cognitive load theory almost certainly overwhelms the working memory of non-experts. It is, after all, complicated and jargony. (Quick: define “element interactivity” and “germane load.”)

For that reason, cognitive load theory probably isn’t useful as a framework for discussing working memory load with teachers. (Several people whom I admire are howling as they read these paragraphs.)

Boxer does not articulate this heretical claim directly. However, he enacts its conclusion quite directly.

That is: he translates the abstractions of cognitive load theory into a concrete formula — a proportionality formula using words anyone can understand.

Rather than reproduce Boxer’s exact mathematical version here, I’ll summarize it this way (with a rough symbolic sketch after the summary):

Task complexity and abstraction increase working memory load.

The student’s background knowledge and the teacher’s support reduce working memory load.

Therefore, to optimize working memory load, we should look out for those four variables and manage them appropriately. (He’s got CHAPTERS on each of those topics.)
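Here, in symbols, is one plausible sketch of that relationship. To be clear: the four variables are Boxer’s; the simple proportional form is my paraphrase, not necessarily his exact formula.

\[
\text{working memory load} \;\propto\; \frac{\text{task complexity} \times \text{abstraction}}{\text{background knowledge} \times \text{teacher support}}
\]

Read this way, the teaching move falls right out: when the numerator grows (a complex, abstract task), we enlarge the denominator (build background knowledge; add support).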

If you speak cognitive load theory, you see exactly how Boxer has translated its abstractions into this concrete formulation.

But — crucially — you don’t need to speak cognitive load theory to get its benefits.

Boxer, again, has taken his own advice. He has started with concrete examples rather than abstract definitions; he has thereby made it MUCH easier to learn from this book.

Always with the Limitations

Having raved for several hundred words, let me add a few quick notes of caution.

First: I don’t agree with absolutely everything Boxer writes. (I don’t agree with absolutely everything I write.) For instance: he emphatically champions mini-whiteboards; I don’t think they’ll work in my context.

Second: Boxer’s examples draw on science teaching, in high school, in England. All three of those specifics require some degree of translation as you apply his ideas to your work.

The English education system thrives on mysterious acronyms; you’ll just have to figure them out. When the SLT talks with the NQT about Supply, well, I can’t help you there.

Third: full disclosure — Boxer’s publisher is also my publisher, so I might have a conflict of interest in writing such an enthusiastic review. I certainly don’t think this connection has skewed my perspective, but you should have that information to make your own decisions.

These few points aside, I return to my initial hearty recommendation.

When you read and study Boxer’s Teaching Secondary Science, you’ll get specific and wise guidance for applying the abstractions of cognitive science to your classroom.

You’ll enjoy it, and your students will learn more.