Balancing Direct Instruction with Project-Based Pedagogies
Andrew Watson

A month ago, I wrote about a Tom Sherrington essay proposing a truce between partisans of direct instruction and those of project-based learning (and other “constructivist pedagogies”).

In brief, Sherrington argues that both pedagogical approaches have their appropriate time in the learning process.

EARLY in schema formation, direct instruction helps promote learning for novices.

LATER in schema formation, project-based pedagogies can apply, enrich, and connect concepts for experts.

Today’s Update

At the time I wrote about Sherrington’s essay, it was available in a book on Education Myths, edited by Craig Barton.

I do recommend that book–several of its essays offer important insights. (See this post on Clare Sealy’s distinction between autobiographical and semantic memory.)

If you’d like to read Sherrington’s essay right away, I have good news: he has published it on his website.

Happily, his contribution to the debate is now more broadly available.

A Final Note

Like other thinkers in this field, Sherrington proposes the novice/expert divide as the most important framework for understanding when to adapt pedagogical models.

In my own thinking, I’m increasingly interested in understanding and defining the transition points from one to the other.

That is: how can we tell when our novices have become experts?

What are the signs and symptoms of expertise? How can we describe those signs and symptoms so that 3rd grade teachers and 7th grade teachers can make sense of them?

Or, science teachers and history teachers?

Or, soccer coaches as well as dance instructors?

In other words: I agree with Sherrington’s framework, but I think it’s incomplete without clearer guidance about the novice/expert continuum.

Dangerous Fluency: Performance Isn’t Always Learning
Andrew Watson

How often do you have this experience?

Your students obviously understood yesterday’s topic. You know this because, say, their exit tickets revealed a high level of progress.

And yet, when you begin class today, they have seemingly forgotten everything you discussed, and everything they learned. Or, “learned.”

Teachers experience this frustration all the time: short-term performance doesn’t necessarily translate into long-term learning. (I’ve written before about Nick Soderstrom’s AWESOME review article considering this tension.)

A Telling Example

Last year, Glass and Kang published an important study about the effect of technology use during college lectures.

G&K let students use technology–laptops and cellphones–during half of the lectures in a psychology class, and forbade them during the other half.

In fact, they taught two identical sections of the same course, and enforced this ban in each class on alternating days. So: they could compare students to themselves in the ban- vs. no-ban classes.

The result headlines go like this:

This tech ban had NO EFFECT on immediate quizzes: students scored equally well on average in the ban- and the no-ban classes.

And yet, it had a SUBSTANTIAL effect on the term-end exam. Students averaged 7 points lower on material they had learned when tech was allowed than when it was forbidden.

And, crucially, students scored lower in no-ban classes even if they didn’t use technology themselves. Presumably, their classmates’ technology use distracted them.
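To make that design concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration–they are not Glass and Kang's data–but the structure shows the within-subject comparison: each student appears in both the tech-allowed and tech-banned conditions, with an immediate quiz score and a later exam score for each.

```python
# Minimal sketch with made-up numbers (NOT Glass and Kang's data).
# Each student is observed in both conditions of the same course.
students = [
    {"quiz_allowed": 82, "quiz_banned": 83, "exam_allowed": 71, "exam_banned": 78},
    {"quiz_allowed": 90, "quiz_banned": 89, "exam_allowed": 80, "exam_banned": 88},
    {"quiz_allowed": 75, "quiz_banned": 76, "exam_allowed": 64, "exam_banned": 70},
]

def mean(values):
    return sum(values) / len(values)

# Immediate performance: roughly equal in the two conditions.
print("Quiz average, tech allowed:", mean([s["quiz_allowed"] for s in students]))
print("Quiz average, tech banned: ", mean([s["quiz_banned"] for s in students]))

# Long-term retention: lower for material taught when tech was allowed.
print("Exam average, tech allowed:", mean([s["exam_allowed"] for s in students]))
print("Exam average, tech banned: ", mean([s["exam_banned"] for s in students]))
```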

This study suggests several conclusions. (I’ll mention a particularly counter-intuitive one at the end of this post.)

Most People Don’t Judge Their Own Learning Well

Because we’re teachers, we are–relatively speaking–experts on learning. Especially if you’re reading this blog (and attending Learning and the Brain conferences), you probably know a lot more about the complexities of learning than most people do.

And, you know more about learning than your students do.

That’s a (surprisingly) controversial statement. But, consider the students’ perspective in Glass and Kang’s psychology lecture.

They might reasonably say:

“Look: I scored equally well on the daily quizzes whether or not I was using technology. Clearly I understand material just fine when I’m texting my friends.

Have a little faith in me as a learner. I know when the professor is saying important things, and I focus then. And, I know when she’s gone off on a cute-but-unessential anecdote, and I use that time to check in with my people.”

Everything in that hypothetical statement is accurate, or at least plausible.

However, it lacks the long-term perspective: students’ performance on short-term quizzes does not predict their long-term learning and understanding.

Because we have G&K’s research, and see the longer perspective, we know that their texting had a subtle, harmful effect on retention. However well they did right away, students just didn’t learn that information deeply.

For this reason–among many others–I think teachers should be confident in claiming our expertise. When our students say “I know I learn better this way,” we can use our best judgment in evaluating that claim.

At times–especially if they have a particular diagnosis–they might well be right.

At other times–especially if they want to watch YouTube while doing homework, or claim that their learning style requires that they do X instead of Y–you can offer strong guidance based on cognitive science research.

Counter-Intuitive Conclusion

I promised above I’d offer a surprising interpretation of Glass and Kang’s study. Here goes:

Because students did worse in the no-ban classes whether or not they used technology, the obvious conclusion is that we should utterly ban technology from our classrooms.

However, that conclusion misses an essential part of G&K’s methodology. They didn’t exactly ban technology use. In fact, they required technology use.

You read that right.

Those “immediate quizzes” you’ve been reading about? Students had to take them on some kind of electronic device: a laptop or a cell phone.

So, the study does NOT compare performance in a ban vs. a no-ban condition. It DOES compare classes where technology was used only as required (to take quizzes) with classes where students could also use it however they liked.

In other words: the problem wasn’t USE of technology. It was MISUSE of technology.

Here again, I think this insight brings us back to teacher judgment.

Should you ban technology from your classroom?

If the topic you’re covering doesn’t benefit from technology, then you have plenty of reasons to do so.

But, if you’ve got some great way to enhance instruction with technology–and you can monitor their technology use as G&K did–then you might get the same benefits that Glass and Kang’s students did when they took those quizzes on laptops.

Research guidance can shape our thinking. And, we should always blend it with our own experience and classroom skill.

A Hidden Strength of “Concreteness Fading”
Andrew Watson

In my last post, I wrote about a teaching strategy called “concreteness fading.”

If we start with concrete examples, and gradually transition to abstract formulas, we help our students understand and transfer math knowledge.

I think this technique includes an under-appreciated strength: its name clearly describes what the method advocates.

Here’s my point:

The Problem with Catchy Brands

If I, as a researcher or teacher, come up with a cool new teaching idea, I want people to adopt it. Obviously.

And so I’m tempted to come up with an upbeat, catchy name for it. For the sake of example, let’s say I devise a method of “awesome teaching.”

This brand name has the advantage of zany enthusiasm.

But, it distracts from a conversation about the merits of the method. Now, anyone who articulates doubts about my method seems to argue against being awesome. Which is to say: they seem like a bad person.

This problem came home to me recently when I talked at a (great) school about working memory and its limitations. A teacher, reasonably enough, noted that my argument contradicted some segments of an instructional methodology championed in her district.

Here’s the catch: that method’s name included the word “authentic” in it.

As a result, I found myself–bizarrely–arguing against “authenticity.”

Of course, I’m NOT opposed to being authentic. I am, however, opposed to using teaching methods that overwhelm working memory capacity–even if those methods are branded “authentic.”

So, in that case, the word “authentic” simply made it harder to have a sensible, research-based conversation about the teaching methods involved. I felt I had to repeat, over and over, “I’m not against being authentic, I’m against this particular thing that calls itself ‘authentic.’ ”

To be clear: “authentic” isn’t the only problem phrase–not by a long shot.

For instance, there’s a splendid strategy for giving feedback: one that I regularly encourage.

Alas, it calls itself “wise feedback.” Now, anyone who doubts the method seems to oppose being wise while giving feedback. That’s an unhelpful burden for those of us who want to rely on research.

Back to “Concreteness Fading”

At this point you’ll understand why I like the phrase “concreteness fading.”

Unlike other branding phrases (“student-centered,” anyone?), it’s not trying to sway you with its upbeat perkiness. It’s not a brand.

Instead, the label “concreteness fading” describes–literally, if a bit awkwardly–the method itself.

Step 1: Start concrete.

Step 2: Shift from concrete to abstract. (And, because abstract things are less concrete than concrete things, let’s use the verb “fade” to describe that shift.)

Other methods share this strength.

“Retrieval practice” means “practice by retrieving, not by reviewing.” The name is a literal description.

Lots of people doubt the usefulness of “project-based learning,” especially for novice learners. But, the name itself has the benefit of direct clarity. Those who doubt PBL can argue against it without constantly saying “I’m not against X, I’m against this thing called ‘X’.”

 

Let me conclude with a plea to people who name teaching methods: the more direct and literal your brand, the more honestly teachers and researchers can discuss it.

And, if the method itself has merit, then that honesty will work in your favor.

 

Concrete + Abstract = Math Learning
Andrew Watson

Early math instruction includes daunting complexities.

We need our students to understand several sophisticated concepts. And, we need them to learn a symbolic language with which to represent those concepts.

Take, for example, the concept of equivalence. As adults, you and I can readily solve this problem: 3 + 4 = 4 + __

Early math learners, however, can easily stumble. Often, they take the equals sign to mean “add up all the numbers,” and believe the correct answer to that question is “11.”
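To make the two readings of the equals sign explicit, here is a minimal sketch (plain Python, using the problem above):

```python
# The problem: 3 + 4 = 4 + __

# Misreading: the equals sign means "add up all the numbers you see."
misread_answer = 3 + 4 + 4      # -> 11

# Correct reading: both sides must have the same value.
left_side = 3 + 4               # -> 7
correct_answer = left_side - 4  # -> 3

print(misread_answer, correct_answer)  # prints: 11 3
```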

How can we help them through this stage of understanding?

Strategy #1: Switch from Abstract to Concrete

The first answer to the question seems quite straightforward. If the abstract, symbolic language of math (“3+4=___”) confuses students, let’s switch to a more concrete language.

For instance: “If my frog puppet has three oranges, and your monkey puppet has four oranges, how many oranges do they have together?”

It just seems logical: the switch from abstract to concrete ought to help.

Alas, those concrete examples have a hidden downside.

As Dan Willingham argues in Why Don’t Students Like School?, humans naturally focus on surface features of learning.

When children see monkeys and frogs and oranges, they associate the lesson with those specific entities–not with the underlying mathematical properties we want them to learn.

In edu-lingo, concrete examples can inhibit transfer. Students struggle to transfer a lesson about oranges and puppets to anything else.

Strategy #2: “Fade” from Concrete to Abstract

Taking their cue from Jerome Bruner, psychology researchers wondered if they could start with concrete examples and then, over time, switch to more abstract examples.

For instance, students might start learning about mathematical equivalence by using a balance. When they put an equal number of tokens on both sides, the balance is level.

In the second step, they do practice problems with pictures of a balance and tokens.

And, in the final step, they see abstract representations: 2 + 5 = 5 + __.

They describe this technique as “concreteness fading.”

And, sure enough, it worked. In this case, “worked” meant that students who learned equivalence through a concreteness fading method transferred their knowledge to different–and more difficult–problems.

They did so better than students who learned in a purely abstract way. And, better than students who learned in a purely concrete way. (And even, as a control condition, better than students who started with an abstract representation, and then switched to concrete.)

By the way: these researchers tested their hypothesis both with students who had a relatively low level of knowledge in this area, and those who had a high level of knowledge. They got (basically) the same results both times.

An Essential Detail

When we teachers try to incorporate psychology research into our teaching, we can sometimes find that it conflicts with actual experience.

In this case, we might find that our young math learners just “get it” faster when we use frog puppets. Given that experience, we might hesitate to fade over to abstract teaching.

This research shows an intriguing pattern.

Sure enough, students who began with concrete examples made fewer mistakes on early practice problems. And, that finding was true for both the “concrete only” group and the “concreteness fading” group.

In other words, the “abstract only” group did worse on the early practice problems than did those groups.

But…and this is a CRUCIAL “but”…the “concrete only” group didn’t do very well on the transfer test. Their raw scores were the lowest of the bunch.

However, the “concreteness fading” group did well on the early problems AND on the transfer test.

It seems that, as the researchers feared, too much concrete instruction reduced transfer.
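To keep the four conditions and two outcome measures straight, here is a minimal qualitative summary sketched in Python; the labels simply restate the pattern described above, with no invented scores.

```python
# Qualitative summary of the pattern described above (no invented numbers).
conditions = {
    "concrete only":          {"early practice": "few mistakes",         "transfer test": "lowest scores"},
    "concreteness fading":    {"early practice": "few mistakes",         "transfer test": "best scores"},
    "abstract only":          {"early practice": "more mistakes",        "transfer test": "below fading"},
    "abstract-then-concrete": {"early practice": "(not detailed above)", "transfer test": "below fading"},
}

for name, outcomes in conditions.items():
    print(f"{name}: early practice -> {outcomes['early practice']}; transfer -> {outcomes['transfer test']}")
```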

 

In sum: “concreteness fading” gives young math learners both a helpfully clear introduction to math concepts and the abstract understanding that allows transfer.


Fyfe, E. R., McNeil, N. M., & Borjas, S. (2015). Benefits of “concreteness fading” for children’s mathematics understanding. Learning and Instruction, 35, 104–120.

When Good Classroom Assignments Go Bad
Andrew Watson

As an English teacher, I rather love this assignment for 9th graders reading Romeo and Juliet:

Choose a character from the play.

Write a short monologue–20 lines or so–exploring that character’s feelings about a particular moment, or another character.

Be sure to write in iambic pentameter.

This assignment lets my students explore a character’s point of view in thoughtful detail. It encourages empathy and imagination. And, it allows them to play with a poetic meter that’s been at the rhythmic heart of English literature since we had English literature.

So, again, as an English teacher I love it.

But as someone who knows from cognitive science, I fear it’s simply not going to work (for most 9th graders on the planet).

Good Intentions Meet Cognitive Limitations

Regular readers know that students use their working memory all the time to grok their classroom work.

Working memory is vital to all classroom learning. And, alas, we just don’t have very much of it.

And, this assignment (almost certainly) places far too great a demand on my students’ WM.

Students must use their WM to…

…choose among the characters of the play. (Yes: choices take up WM resources.)

…choose among the dramatic events their chosen character experiences.

…create a wisely empathetic response to a dramatic event. (Yes: creativity requires working memory.)

And, on top of that, to…

…express richly Shakespearean logic and emotion within a tightly structured, largely unpracticed poetic meter. (If you doubt that writing in iambic pentameter takes working memory, try rewriting this sentence in iambic pentameter. Your prefrontal cortex will be aching in no time.)

So much cognitive load will overwhelm all but the most inventive of students.

Solving the Problem

Given that this assignment could be so powerful, how might we adapt it to fit within working memory limitations?

Two strategies come quickly to mind.

First: redistribute the working memory demands. That is: don’t have them do all the WM work at the same time.

In this case, that suggestion can be easily implemented.

First night’s homework: choose the character, and describe or outline the dramatic moment.

Second night’s homework: write the monologue in modern English.

This approach spreads out the working memory demands over time. All the choosing, and some of the creativity, happens on the first night. The rest of creativity happens on night #2.

Second: reduce the working memory demands. Unless your students have practiced with iambic pentameter A LOT more than my students have, they’re likely to struggle to compose 20 fresh lines.

My own teacherly instincts would be to have them experiment with existing poetry. For instance, a fun sonnet might serve as a scaffold for early, tentative work.

In sonnet 130, Shakespeare famously laments the use of extravagant metaphors to hyper-praise women:

My mistress’ eyes are nothing like the sun.

Coral is far more red than her lips’ red.

And yet, by heav’n, I think my love as rare

As any she belied with false compare.

Can my students devise their own version of these sentiments? And, can they preserve the meter?

My boyfriend’s eyes are not as blue as sky.

For reals, his abs just aren’t what you’d call “shredded.”

And yet, by heav’n, I think my guy as hott

As any bae that Beyoncé has got.

Of course, scaffolding is called “scaffolding” because we can take it down. So, once students can manage iambic pentameter with this level of support, we can prompt them to devise more and more free-form iambic creations.

With enough practice, they might–some day–be able to compose 20 fresh lines of their own.

Can Multiple-Choice Tests Really Help Students?
Andrew Watson

Multiple-choice tests have a bad reputation. They’re easy to grade, but otherwise seem…well…hard to defend.

After all, the answer is RIGHT THERE. How could the student possibly get it wrong?

Given that undeniable objection, could multiple-choice tests possibly be good for learning?

The Benefits of Distraction

A multiple-choice test includes one correct answer, and other incorrect answers called “distractors.” Perhaps the effectiveness of a multiple-choice question depends on the plausibility of the distractors.

So, a multiple-choice question might go like this:

“Who was George Washington’s Vice President?”

a) John Adams

b) Mickey Mouse

c) Tom Brady

d) Harriet Tubman

In this case, none of the distractors could possibly be true. However, I could ask the same question a different way:

“Who was George Washington’s Vice President?”

a) John Adams

b) Thomas Jefferson

c) Alexander Hamilton

d) James Madison

In THIS case, each of the distractors could reasonably have held that role. In fact, all three worked closely with–and deeply admired–Washington. One of the three (Jefferson) went on to serve as vice president. (And another, Hamilton, was killed by a sitting VP.)

Why would the plausibility of the distractor matter?

We know from the study of retrieval practice that pulling information out of my brain benefits memory more than repeatedly putting information into it.

So, we might hypothesize this way:

If the distractors are implausible, a student doesn’t have to think much to figure out the correct answer. No retrieval required.

But, if the distractors are plausible, then the student has to think about each one to get the answer right. That’s lots of retrieval right there.

In other words: plausible distractors encourage retrieval practice, and thereby might enhance learning.

Better and Better

This line of reasoning leads to an even more delicious possibility.

To answer that question about Washington’s VP, the student had to think about four people: Adams, Jefferson, Hamilton, Madison.

Presumably she’ll learn the information about Adams–who was the correct answer to the question.

Will she also learn more about the other three choices? That is: will she be likelier to answer a question about Alexander Hamilton correctly? (“Who created the first US National Bank as Washington’s Secretary of the Treasury?”)

If the answer to that question is YES, then one multiple-choice question can help students consolidate learning about several different facts or concepts.

And, according to recent research, the answer is indeed YES.

The research paradigm used to explore this question requires lots of complex details, and goes beyond the scope of a blog post. If you’re interested, check out the link above.

Classroom Implications

If this research holds up, we might well have found a surprisingly powerful tool to help students acquire lots of factual knowledge.

A well-designed multiple-choice question–that is: one whose plausible distractors require lots of careful thought–helps students learn four distinct facts or concepts.

In other words:

“Multiple-choice questions…

a) are easy to grade

b) help students learn the correct answer

c) help students learn information about the incorrect answers

or

d) all of the above.”

Me: I’m thinking d) sounds increasingly likely…

Andrew Watson

Earlier this month, I wrote about the distinction between autobiographical memory and semantic memory.

Both kinds help us live meaningful lives.

But, schools focus on semantic memory: we want our students to know facts and skills over the long term.

We don’t really need them to remember the class or the exercise (or even the teacher) who taught them those facts and skills. That’s autobiographical memory.

That blog post was inspired by Clare Sealy’s recent essay ironically entitled “Memorable Experiences Are the Best Way to Help Children Remember Things.”

Happily, Sealy is the guest on a recent EdNext podcast: you can hear her in-depth explanation.

Equally happily, that podcast includes Sealy’s essay itself.

To understand Sealy’s argument, and its full implications, you can both have a look and have a listen.

Does Music Training Help Us Pay Attention?
Andrew Watson

Schools help students learn specific skills and facts: long division, and the preamble to the US Constitution, and glorious mysteries of the sonnet.

Wouldn’t it be great if schools could improve general cognitive capabilities?

For instance, it would be AWESOME if we could artificially increase working memory capacity. (Alas, we can’t. Really.)

It would be great if we could teach general critical thinking skills. (Alas: although we can teach those skills in discrete disciplinary topics, we probably can’t teach critical thinking generally.)

It would be super helpful if we could improve our students’ ability to pay attention…wait a minute: maybe we can.

We know that musicians must concentrate intensely to accomplish their marvelous work. To focus on the sheet music, ignore myriad distractions, accomplish nimble finger skills—all these require impressive degrees of attention.

Does all that attending help musicians both play music better and pay attention better? In other words: can they use those attention skills in other parts of their life?

Defining Attention

To answer that question, we have to start by defining the concept of “attention.”

Surprisingly, psychologists and neuroscientists don’t see attention as one unified thing. Instead, they see it as a behavior that takes place when three other things are happening.

First, they measure alertness. That’s a basic biological readiness: are the students awake enough? Or, so wildly overstimulated that they can’t focus? Those questions examine alertness. (Notice: they don’t directly examine attention—alertness is one small part of that bigger picture.)

Second, they measure orienting. When we ask about orienting, we consider the stimuli that the student is consciously perceiving.

So, for instance, at this moment I’m orienting to the letters on the screen as I type, to the mug of tea to my right, and to my cat Pippin who keeps nudging my arm. I’m not orienting to—say—the comfy chair in the corner, or the color of paint on the ceiling, or the gentle thump of the laundry machine downstairs.

I know all that stuff is there, but I’m not consciously processing it. (Well, I suppose, now that I’m writing about it, I must be processing it. But, I wasn’t orienting to it until I tried to identify stimuli that I wasn’t orienting to…)

Finally, to define the third part of attention, we consider executive attention. That segment takes much more time to describe and define, and overlaps a lot with working memory. It also includes our ability to ignore unimportant stimuli. We deliberately decide to focus on this topic here, not that one there.

So, when we ask the question “does music training improve attention,” we’re really asking three questions:

“Does music training improve alertness?”

“Does music training improve orienting?”

“Does music training improve executive attention?”

With these three questions in mind, we know what to do next.

Musician Inhibition

To test attention, researchers often use the Attention Network Test (ANT), which measures all three sub-segments of our attentional processes.

In this study, scholars in Chile worked with about 40 adults. Half were “professional pianists,” with an average of more than 12 years of music training. The other half had never taken music lessons, and couldn’t read sheet music.

Did the musicians outperform the non-musicians on the ANT?

No, no, and yes.

That is: musicians and non-musicians did equally well at the first two parts of attention: alertness and orienting.

But, musicians scored higher on the executive attention part of the test than the non-musicians did.

Basically, they ignored irrelevant stimuli better than their age-matched peers.
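To keep the three-part result straight, here is a minimal sketch in Python. The group scores are invented for illustration only; what matters is the pattern–no difference, no difference, musicians higher–described above.

```python
# Invented scores, for illustration only -- NOT the study's actual data.
results = {
    "alertness":           {"musicians": 0.94, "non_musicians": 0.94},
    "orienting":           {"musicians": 0.92, "non_musicians": 0.92},
    "executive attention": {"musicians": 0.90, "non_musicians": 0.82},
}

for component, scores in results.items():
    verdict = "musicians higher" if scores["musicians"] > scores["non_musicians"] else "no difference"
    print(f"{component}: {verdict}")
```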

What Does This Research Mean in the Classroom?

 You can probably anticipate all the reasons we shouldn’t over-react to this study.

It’s quite small: fewer than 40 people participated.

It doesn’t necessarily show cause and effect. It’s entirely possible that people who start with better executive attention are more likely to become professional musicians than people with lower executive attention.

The professional musicians had YEARS of musical experience: more than twelve, on average. So: even if music training does improve executive attention, it’s not a quick fix.

At the same time, this study does suggest something important: at least in this one case, we might be able to train a general cognitive capability.

That is: we can’t speed up our students’ working memory development. We can’t train a general critical thinking skill. We can’t improve processing speed.

But, maybe, we can find ways to strengthen executive attention.

Given how important attention is in the classroom, that’s potentially great news indeed.

Getting the Timing Right: Critical Thinking Online
Andrew Watson

If we want students to remember what we teach–and, what teacher doesn’t?–we’ve got a vital strategy: spread practice out over time.

We’ve got scads of research showing that the same number of practice problems results in a lot more learning if those problems are spread out over days and weeks, compared to being done all at once.

We call this the spacing effect, and it’s as solid a finding as we’ve got in the field of educational psychology.

As teachers interested in psychology research, we should always be asking: “yes, but does that work in my specific context?”

For instance: if research shows that college students learn stoichiometry better in a flipped-classroom model, that doesn’t necessarily mean that my 3rd graders will learn spelling better that way.

In the language of psychology research, we’re looking for “boundary conditions.” What are the limits of any particular technique?

The Spacing Effect Meets Critical Thinking

Researchers in Canada wanted to know: does the spacing effect apply to the teaching of critical thinking?

Of course, we want our students to be effective critical thinkers. But, there’s heated debate about the best way to teach this skill.

Lots of people doubt that critical thinking can be taught as a free-standing skill. Instead, they believe it should be nested in a specific curriculum.

That is: we can be critical thinkers about sonnets, or about football play-calling strategy, or about the design of bridges. But, we can’t learn to think critically in an abstract way.

The Canadian researchers start with that perspective, and so they teach critical thinking about a specific topic: the reliability of websites. And, they go further to ask: will the spacing effect help students be better critical thinkers?

In other words: if we spread out practice in critical thinking, will students ultimately practice their critical craft more effectively?

The Research; The Results

To answer this question, researchers used a 3-lesson curriculum exploring the credibility of websites. This curriculum asked 17 questions within 4 categories: the authority of the website’s authors, the quality of the content, the professionalism of the design, and so forth.

Half of the 4th-6th graders in this study learned this curriculum over 3 days. The other half learned it over 3 weeks.
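As a concrete sketch of the two schedules (hypothetical Python; the start date is arbitrary), the point is simply that the same three lessons get distributed differently over time:

```python
from datetime import date, timedelta

# The three lessons of the website-credibility unit.
lessons = ["credibility lesson 1", "credibility lesson 2", "credibility lesson 3"]
start = date(2020, 1, 6)  # arbitrary illustrative start date

# Massed condition: one lesson per day, three days in a row.
massed = [(start + timedelta(days=i), lesson) for i, lesson in enumerate(lessons)]

# Spaced condition: the same lessons, one per week for three weeks.
spaced = [(start + timedelta(weeks=i), lesson) for i, lesson in enumerate(lessons)]

for label, schedule in (("massed (3 days)", massed), ("spaced (3 weeks)", spaced)):
    print(label)
    for day, lesson in schedule:
        print("  ", day.isoformat(), lesson)
```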

Did this spacing matter? Were those who spread their practice out more proficient critical website thinkers than those who bunched their practice together?

In a word: yup.

When tested a month later, students who spread practice out were much likelier to use all four categories when analyzing websites’ reliability. And, they used more of the 17 questions to explore those four categories.

To Sum Up

This research leads us to two encouraging, and practical, conclusions.

First: we can help our students be better critical thinkers when they analyze websites. (Heaven knows that will be a useful skill throughout their lives.)

Second: we can improve their ability by relying on the spacing effect. As with so many kinds of learning, we get better at critical thinking when we practice over relatively long periods of time.

Can a Neuromyth Result in a Truce?
Andrew Watson

We teachers feel passionately about our work, and so–no surprise–our debates and disagreements get heated.

Few debates rage as fiercely as that between champions of direct instruction (with or without capital “D” and “I”), and champions of constructivism (in its many forms: project-based learning, student-centered learning, etc.).

In a recent essay, writer and blogger Tom Sherrington would like to soothe this ferocity by declaring the whole debate a myth.

As his title declares: it’s a myth that “teacher-led instruction and student-centred learning are opposites.” (Sherrington is British, so we can overlook the missing “e” from “centred.”)

In brief, he argues: no matter how passionately we disagree about pieces of this debate, almost everyone agrees on a sensible core of ideas. We’re arguing at the margins, but could just as easily refocus on our agreements at the center.

Passionate Debates

One well-known paper sports this dramatic title: “Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.”

Not much grey area there.

But, as Sherrington notes in his essay (I’ve tweaked the punctuation to make it blog-friendly):

[The authors] present their case most strongly for novice and intermediate learners but they appear to concede that for students approaching a more expert position, the different approaches are at least ‘equally effective.’

This means the debate is more about sequencing approaches appropriately in the learning journey.

Students will reach a point where these approaches represent a genuine choice.

And, critics of that paper also find a middle ground (again with the punctuation tweaking):

The more important questions to ask are: under what circumstances do these guided inquiry approaches work? What are the kinds of outcomes for which they are effective? What kinds of valued practices do they promote?

In other words: even the champions of the strongest claims concede that they see both approaches being appropriate at different times.

Specifically: novices need (relatively more) direct instruction. Experts benefit from (relatively more) open-ended, project-y methods.

Beyond Knowledge

Sherrington argues for a truce between direct instruction and PBL, first, because even strong advocates admit that the “other side’s” methods have a place under certain circumstances.

Teaching novices? Try direct instruction. Working with relative experts? Bring on the projects.

Second, he argues that schools exist both to help students acquire knowledge and to help them acquire social habits and practices we value.

As Sherrington writes: “there are many aspects of student activity and teacher-student engagement that are desirable simply because we value them as social constructs.”

So, for example: our society–heck, our very form of government–requires that people be able to work together effectively. For that reason, we benefit our students when we help them learn how to do so.

When we coach students through group work, we teach them skills that our society values–above and apart from the knowledge they gain while doing that work.

Of course, Sherrington’s essay includes many other thoughtful points beyond these two: it’s worth reading in full.

A Recent Example

Sherrington’s first argument struck me because I’ve been trying to make it for some time now.

Just ten days ago on this blog, I wrote about a huge study from South America purporting to show that collaborative, inquiry-based learning produced substantial advantages.

And yet, as I found when I read its methods, the study didn’t contrast student-centered teaching with teacher-centered teaching.

Instead, it contrasted good teaching (combining both explicit instruction and projects) with really bad teaching (“copy down the names of the 206 bones of the human body”). Unsurprisingly, bad teaching produces bad results.

In other words: I’d like to spread the word of Sherrington’s truce. I hope you’ll join me!


Sherrington’s essay appears in The researchED guide to education myths: An evidence-informed guide for teachers, published by John Catt.

I wrote about Clare Sealy’s essay in this collection last week as well, so you can tell I think it’s got lots of quality work.

I don’t agree with everything I read in this guide, but neither does its editor (Craig Barton) or the series editor (Tom Bennett). They want to foster the debate, and this volume does that admirably.