The Best Way to Teach: When Clarity Leads to Muddle
Andrew Watson

Most teachers want to be better teachers. You’re probably reading this blog for research-based guidance on doing so.


I recently read a study that offers emphatic — and paradoxical — guidance. Exploring this research — as well as its paradoxes — might be helpful as we think about being better teachers.

Here’s the story.

A research team, led by Louis Deslauriers, worked with students in an introductory physics class at Harvard. This class was taught by an experienced professor who mostly lectured; he also supplemented the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

Let’s call this approach “interactive lecture.”

In Deslauriers’s study, students also attended two additional classes. One was taught with Method A and the other with Method B.

In Method A, an experienced professor:

  • presented slides
  • gave explanations
  • solved sample problems
  • strove for fluency of presentation

What about Method B? Another experienced teacher:

  • used principles of deliberate practice
  • instructed students to solve sample problems together in small groups
  • circulated through the room to answer questions
  • ultimately provided a full and correct answer to the problems

The researchers strove, as much as possible, to make the class content identical; only the pedagogy differed.

What did the researchers learn about the relative benefits of Methods A and B?

Paradox #1: Effective and Unloved

First off, the students learned more from Method B.

That is: when they solved problems in small groups, wrestled with the content, and ultimately heard the right answer, students scored relatively higher on an end-of-class multiple choice test. When they experienced Method A (the prof explained all the info and solved all the problems), they scored relatively lower.

But — paradoxically — the students preferred Method A, and believed that they learned more from it. They even suggested that all their classes be taught according to Method A — the method that resulted in less learning.

The researchers offer several explanations for this paradox. The headlines sound like this:

  • When students hear straightforward explanations and see clear/successful demonstrations of solution strategies (Method A), the content seems easy and clear. Students think they understand.
  • But, when they have to do the cognitive heavy lifting (Method B), class feels more difficult. Result: students worry they didn’t understand.
  • Because the students are — relatively speaking — novices, they don’t know enough to know when they understand.

Team Deslauriers, sensibly enough, suggests that we can help students appreciate and accept the more challenging methods — like Method B — if we explain the reasoning behind them.

I, by the way, take this suggestion myself. For instance: I explain the benefits of retrieval practice to my students. They don’t always love RP exercises, because retrieval practice feels harder than simple review. But they understand the logic behind my approach.

Paradox #2: Clarity vs. Muddle

Up to this point, Deslauriers and Co. pursue a sensible path.

They know that MOST college profs use Method A (the bad one), so they want those profs to change. To encourage that change, they undertake a study showing a better option: Method B!

Given these research results, Deslauriers and Co. offer two clear and emphatic suggestions:

First: teachers should use Method B teaching strategies, not Method A strategies.

Second: to counteract students’ skepticism about Method B, we should explain the logic behind it.

What could be more helpful?

Alas, these clear suggestions can lead to another muddle. This muddle results from the freighted NAMES that this study gives to Methods A and B.

Method B — the good one — is called “active.”

Method A — the bad one — is called (inevitably) “passive.”

So, this study summarizes its findings by saying that “active” teaching is better than “passive” teaching.

These labels create real problems with the research conclusions.

Because these labels lack precision, I can apply them quite loosely to any teaching approach that I believe to be good or bad.

For instance: recall the experienced professor who regularly teaches this physics course. He mostly lectures; he also supplements the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

If I disapprove of that combination, I can call it “passive” — he mostly lectures!

If I approve, I can call it “active” — consider all those demonstrations, interactions, and conceptual questions!!

These labels, in other words, are both loaded and vague — a perilous combination.

The peril arises here: literally no one in the world of cognitive science champions Method A.

EVERYONE who draws on cognitive science research — from the most ardent “constructivist” to the most passionate advocate for direct instruction — believes that students should actively participate in learning by problem solving, discussion, creation, and so forth.

Those two camps have different names for this mental activity: “desirable difficulties,” “productive struggle.” They think quite differently about the best way to achieve all that active thinking. But they all agree that students must struggle with some degree of difficulty.

Slippery Logic

This naming muddle creates unfortunate logical slips.

The study certainly suggests that Method B benefits students more than Method A. But, it doesn’t suggest that Method B is better than other methods that might reasonably be called by the open-ended name “active.”

For instance: it doesn’t necessarily mean that “constructivism” is better than direct instruction. And yet — because of those highly flexible labels — the study can be misinterpreted to support that claim.

My concern isn’t hypothetical. Someone sent me this study precisely to support the argument that inquiry learning promotes more learning than direct instruction.

But: “Method B” isn’t inquiry learning. And direct instruction isn’t Method A.

The Big Picture

I said at the beginning of this post that teachers might draw on research to be better teachers.

I worry that readers will draw this inaccurate conclusion based on this study:

“Research proves that ‘active learning’ (like projects and inquiry) is better than ‘passive learning’ (like direct instruction).”

Instead, this study suggests that asking students to do additional, productive mental work results in more learning than reducing their mental work.

Champions of both projects/inquiry and direct instruction want students to do additional, productive mental work.

Those schools of thought have sharply different ideas of the best ways to accomplish that goal. But dismissing one of them as “passive” — and therefore obviously bad — obscures the important insights of that approach.


Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251-19257.

Constructivism, or Constructivism, Part II
Andrew Watson

Last week, I wrote about the often-testy debates that surround “constructivism.”

One possible cause for the heat behind these debates: we have at least two different meanings for the word “constructivism” in education.

Definition A emphasizes the physical activity and social interactivity of the classroom.

If students are up and moving about, if they’re working and talking and building ideas together, that’s “constructivism.”

Definition B emphasizes the mental activity within the student.

If the student is thinking and learning, then by definition they have “constructed” new meaning.

If students aren’t moving about and aren’t interacting with peers, but they ARE LEARNING — well, that’s constructivism. (Of course, if they learn while moving and interacting, then that’s also constructivism.)

Can we use these distinct definitions to develop research that sheds light on this debate?

Guides and Sages

A study published in 2019 — led by Dr. Elizabeth Fischer and Dr. Martin Hänze — explores this question:

Is the definition A constructivism (students moving and interacting) the best way to accomplish the definition B constructivism (students learning)?

To answer this question, they undertake a remarkably thorough and complex study with German college students.

As I will explore in detail below, I do think the study has important limitations — limitations, by the way, which the authors themselves highlight.

But I do think …

… that they’ve undertaken a good-faith effort to answer this question with a perfectly reasonable approach, and

… that they’ve taken great care with the nitty-gritty research specifics.

In other words: this study might not be perfect (no study is), but it’s not somehow a partisan hack job.

Some highlights: Fischer and Hänze surveyed over 1700 students in 80 different college courses — from physics to social sciences to language to music. So, they have a large sample size from an unusually diverse range of disciplines.


These surveys — both at the beginning and the end of the term — measured everything from the students’ initial interest in the topic, to their sense of how much they learned, to the amount of mental effort that the course required.

And the researchers themselves evaluated these 80 courses for two different approaches to teaching:

They measured “student-activating methods,” where all students were working “without direct involvement of the teacher”: reading, exercises, group work, games.

They also measured “teacher-guided methods,” which focused primarily on the teacher’s instruction: lecture, demonstration, videos.

Here’s the logical chain they explored:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Second: which classroom approach — “student activating methods” or “teacher-guided methods” — resulted in more mental effort?

In this way, Fischer and Hänze could distinguish between the two kinds of constructivism. Did “student-activating methods” (definition A) lead to more learning (definition B)?

Tentative Conclusions

Unsurprisingly, given the amount of data they gathered, these researchers reached a great many detailed conclusions.

For instance: teachers devoted A LOT more time to “teacher-guided methods” (64% of class time) than to “student-activating methods” (13.8%).

Let’s focus on the two questions listed above:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Unsurprisingly: yes. Students who felt that they thought harder also thought that they learned more. (Teachers everywhere are breathing a sigh of relief.)

Second: which classroom approach — “student activating methods” or “teacher-guided methods” resulted in more mental effort?

The results were clear: students reported thinking harder and learning more with more “teacher-guided methods,” and less with “student-activating methods.”

In other words: in these college classes, definition A constructivism (student activity/interaction) resulted in less definition B constructivism (student learning) than more “teacher-guided” methods.

Recommendations and Limitations

Given their findings, you might expect Fischer and Hänze to demand an end to “student-activating methods.”

THEY DON’T.

Do our findings indicate that university teachers should stop being the guide on the side and return to being the sage on the stage? We refrain from deducing this kind of prescription.

Instead, they want to slow the rush in the other direction. They don’t think their evidence strong enough to forbid guide-on-the-side teaching, but it certainly doesn’t recommend it.

Part of their hesitation stems from the limitations to their approach — limitations which they themselves highlight.

First: their data show correlation, not causation. (Always an important distinction.)

Second: they focus on student survey data. Such data are, by definition, subjective. For instance, students report that they learned more from “teacher-guided methods,” but did they? We don’t know.

By the way, researchers had good methodological reasons to rely on surveys. But that reliance does require cautious interpretation.

Third: recall that “teacher-guided methods” filled 64% of class time, far more than “student-activating methods” (13.8%).

The researchers reasonably wonder if “student-activating methods” are less effective because the professors haven’t practiced them very much, and aren’t very good at them yet.

That’s a reasonable question.

Long-time readers know my own hypothesis. I suspect that “teacher-guided methods” probably help students more early in their learning process, and “student-activating methods” help more later in that process. This study doesn’t measure that variable, and doesn’t even try to answer the question.

So: Constructivism, or Constructivism?

When these German researchers make a good-faith effort to answer that question in college courses, they arrive at two tentative findings:

Unsurprisingly, students learn more when they think more. Thinking causes “construction” of new knowledge.

Perhaps surprisingly, “student activating methods” do not help students think more than “teacher guided methods.” Instead, students think more when they sit and listen to a well-prepared talk than they do moving about and interacting with one another.

Although Fischer and Hänze are appropriately modest in their recommendations, their research does suggest that we should slow down and ask challenging questions before we insist on too much student movement and interaction.


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Constructivism, or Constructivism? Part I
Andrew Watson

If you want to launch a feisty debate in your next faculty meeting, stand up and ask “who here supports ‘constructivism’?”

This question all but guarantees heated discussion, because …

… MANY people identify strongly with one side or another (“I definitely DO!” “Well, I certainly DON’T!!”)

and

… people on one side often think ill of people on the other (“How can you possibly hold that immoral position?”).

We’re talking Hatfields and McCoys here.

Typically, this blog would look to research to resolve such a debate. Alas, the debate resists a research-based answer, because we can easily find research supporting both positions.


Whether you’re championing PBL or direct instruction or inquiry learning or Direct Instruction or guiding from the side or a knowledge-rich curriculum, you’ve got research to bolster your claim.

Is there any way out of this tangle? Will we always disagree?

A Fresh Place to Start?

In many fields, basic definitions can be helpfully straightforward.

If I’m a carpenter, I can tell you what a “nail” is. In fact, I can show you several. They’re right there.

If I’m a surgeon, I can tell you what a “pancreas” is. I can tap it with my scalpel. It’s right there.

When a mechanic repairs my car, she can show me the “rust” on my “carburetor.” See? Right there.

And so forth.

In psychology and education research, alas, basic definitions get really tricky.

How, exactly, do I define “attention”? Or, “engagement”? Or, heck, how do I define “learning”?

If I can’t define these core concepts, then I can’t develop a precise way to measure them. And research relies on precise measurements…

For that reason, we might start reexamining our Hatfield/McCoy feud about “constructivism” by looking at basic definitions.

What, exactly, does the word “constructivism” mean?

Take Two

This potentially pedantic question turns out to have helpful results.

We’ve got (at least) two fundamentally different ways of thinking about constructivism.

First: physical + social activity = mental change.

If students are up and DOING carefully designed activities — if they’re moving about and (preferably) interacting with other students — this combination of physical and social work leads to the construction of new mental models.

This kind of “active” classroom work — unlike the “passive” classrooms where students sit and listen to a teacher explain — results in learning.

In other words: we can tell by looking at the students in the classroom whether or not “constructivist teaching” is happening.

Second: mental change = mental change.

This second definition focuses on learning more than on teaching.

It says: OF COURSE students learn by constructing their own mental models. (Unlike The Matrix, reality does not allow us to download mental models into our students.)

If students have learned, they have constructed. ALL LEARNING is by definition constructivist.

So the core question is: what teaching methods produce that result?

If, say, a lecture results in learning, then a lecture is “constructivist.”

If a jigsaw activity results in learning, then it too is “constructivist.”

In other words: we can’t tell by looking at the students whether or not constructivist teaching is happening.

Instead, we can tell only if we determine how much they learned. If they learned, they constructed. Voila: constructivism.

Rethinking Strong Claims

This definitional muddle might explain some of the “are you a constructivist” debate.

According to the first definition, I’m a constructivist teacher if I teach in particular ways: my students are up and about, working and talking with each other and — as much as possible — finding their own way through a particular concept.

According to the second definition, I’m a constructivist teacher if my students learn.

In other words: EVERYONE wants to be a constructivist according to the second definition. The first definition, however, prompts the feud.

Champions of the first definition believe those methods result in the second result. That is: “physical + social” constructivism (“guide on the side”) should result in more learning.

Critics of the first definition believe that “sage on the stage” teaching results in more learning — more construction of mental models.

Once we separate these two ways of thinking about “constructivism,” we can ask:

“How might we use this distinction in research? Can we determine which approach — saging or guiding — results in more construction of new mental models?”

I’ve found a study that tries to do just that. I’ll summarize it in next week’s blog post…


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Translating Research to the Classroom: the Case of Discovery Learning
Andrew Watson

Here at Learning and the Brain, we want teachers and students to benefit from research. Obviously.

When psychologists discover important findings about the mind, when neuroscientists investigate the function of the brain, schools might well benefit.

Let’s start making connections!

At the same time, that hopeful vision requires care and caution. For instance, research (typically) operates in very specialized conditions: conditions that don’t really match most classrooms.

How can we accomplish our goal (applying research to the classroom) without making terrible mistakes (mis-applying research to the classroom)?

A Case in Point

Today’s post has been inspired by this study, by researchers Angela Brunstein, Shawn Betts, and John R. Anderson.

Its compelling title: “Practice Enables Successful Learning under Minimal Guidance.”

Now, few debates in education generate as much heat as this one.

Many teachers think that — because we’re the experts in the room, and because working memory is so small — teachers should explain ideas carefully and structure practice incrementally.

Let’s call this approach “high-structure pedagogy” (although it’s probably better known as “direct instruction”).

Other teachers think that — because learners must create knowledge in order to understand and remember it — teachers should stand back and leave room for adventure, error, discovery, and ultimate understanding.

Let’s call this approach “low-structure pedagogy” (although it has LOTS of other names: “constructivism,” “project/problem-based learning,” “minimal guidance learning,” and so forth).

How can we apply the Brunstein study to this debate? What do we DO with its conclusions in our classrooms?

If you’re on the low structure team, you may assume the study provides the final word in this debate. What could be clearer? “Practice enables successful learning under minimal guidance” — research says so!

If you’re on the high structure team, you may assume it is obviously flawed, and look to reject its foolish conclusions.

Let me offer some other suggestions…

Early Steps

In everyday speech, the word “bias” has a bad reputation. In the world of science, however, we use the word slightly differently.

We all have biases; that is, we all have perspectives and opinions and experiences. Our goal is not so much to get rid of biases, but to recognize them — and recognize the ways they might distort our perceptions.

So, a good early step in applying research to our work: fess up to our own prior beliefs.

Many (most?) teachers do have an opinion in this high-structure vs. low-structure debate. Many have emphatic opinions. We should acknowledge our opinions frankly. (I’ll tell you my own opinion at the end of this post.)

Having taken this first vital step, let it shape your approach to the research. Specifically, try — at least temporarily — to convince yourself to change your mind.

That is: if you believe in low-structure pedagogy, look hard for the flaws in this study that seems to champion low-structure pedagogy. (BTW: all studies have flaws.)

If your biases tend you to high-structure pedagogy, try to find this study’s strengths.

Swim against your own tide.

Why? Because you will read the study more carefully — and therefore will likely arrive at conclusions that benefit your students more.

Gathering Momentum

Now that you have a goal — “change my own mind” — look at the study to answer two questions:

First: who was in the study?

Second: what, exactly, did they do?

You should probably be more persuaded by studies where…

First: …the study’s participants resemble your students and your cultural context, and

Second: …the participants did something that sensibly resembles your own possible teaching practice.

So, in this case: the participants were undergraduates at Carnegie Mellon University.

If you teach undergraduates at a highly selective university — the Google tells me that CMU currently admits 14% of their applicants — then this study’s conclusions might help you.

However, if you teach 3rd graders, or if you teach at any school with open admission, those conclusions just might not offer useful guidance.

After all, high-powered college students might succeed at “minimal guidance” learning because they already know a lot, and because they’re really good at school. (How do we know? Because they got into CMU.)

What about our second question? What exactly did the participants do?

In this study, participants used a computer tutor to solve algebra-ish math problems. (The description here gets VERY technical; you can think of the problems as proto-Kendoku, with algebra.)

What about the guidance they got? How “minimal” was it?

Getting the Definition Just Right

At this point, Brunstein’s study reminds us of an essential point.

When teachers talk about educational practice, we use handy shorthand phrases to capture big ideas.

Metacognition. Mindfulness. Problem-based learning.

However, each of those words and phrases could be used to describe widely different practices.

Before we can know if this study about “minimal guidance” applies to our students, we have to know exactly what these researchers did that they’re calling minimal guidance.

Team Brunstein says exactly this. They see discovery learning and direct instruction not as two different things, but as ends of a continuum:

“No learning experience is pure: students given direct instruction often find themselves struggling to discover what the teacher means, and all discovery situations involve some minimal amount of guidance.”

In this case, “minimal guidance” involved varying degrees of verbal and written instructions.

This study concludes that under very specific circumstances, a particular blend of structure and discovery fosters learning.

So, yes, in some “minimal guidance” circumstances, students learned — and practice time helped.

However — and this is a big “however”:

In one part of the study, 50% of the students at the extreme “discovery” end of the spectrum quit the study. Another 25% of them went so slowly that they didn’t finish the assignment.

In other words: this study in no way suggests that all kinds of minimal guidance/discovery/PBL learning are always a good idea.

The “just right” blend helped: perhaps we can recreate that blend. But the wrong blend — “extreme discovery” — brought learning to a standstill.

Final Thoughts

First: when using research to shape classroom practice, it helps to look at specific studies.

AND it helps to look at groups of studies.

Long-time readers know that I really like both scite.ai and connectedpapers.com. If you go to those websites and put in the name of Brunstein’s study, you’ll see what MANY other scholars have found when they looked at the same specific question about minimal guidance. (Try it — you’ll like it!)

Second: I promised to tell you my own opinion about the low- vs. high-structure debate. My answer is: I think it’s the wrong question.

Because of working memory limitations, I do think that teachers should provide high structure during early stages of studying a topic.

And, for a variety of reasons, I think we should gradually transition to lower-structure pedagogies as students learn more and more.

That is:

We should use high-structure pedagogy with novices, who are early in schema formation.

And, we should use low-structure pedagogy with experts, who are later in the process of schema formation.

The question is not “which pedagogy to use?”

The better question is: “how can we identify stages along the process of students’ schema development, so we know when and how to transition our teaching?”

Research into that question is still very much in the early phases.


Brunstein, A., Betts, S., & Anderson, J. R. (2009). Practice enables successful learning under minimal guidance. Journal of Educational Psychology, 101(4), 790.

Does Higher Engagement Promote Learning?
Andrew Watson

Long-time readers know: I thoroughly enjoy research that challenges my beliefs.

After all, I (probably) have lots to learn when a study makes me think anew.

In this case — even better! — I’ve found a study that (I suspect) challenges almost everybody’s beliefs.

Here’s the story…

The “Active vs. Passive” Debate

Education scholars often fiercely advocate for “active learning.”

This phrase serves as a catchy shorthand for several educational beliefs and practices.

People who champion a “constructivist” approach to schools, or embrace project pedagogies, or advocate student “voice and choice” often describe their approach this way.

And, they often point out one crucial benefit to active learning: student “engagement.” Students who shape their own learning feel invested in and energized by their efforts.

Other scholars zealously dispute this account.

Whereas their approach has been dismissed as merely “passive learning,” they often prefer phrases such as “direct instruction” to explain their views.

In this view of learning, limitations on working memory prevent novices from tackling overly complex problems.

Students benefit from highly structured pedagogy, in which expert teachers help students build mental models (“schema”) and thereby achieve their own expertise.

For champions of direct instruction, “engagement” might look good (“the students are all so active!”), but doesn’t necessarily result in actual learning. (Why? Because students might well experience working memory overload….)

If you attended our conference in San Francisco at the beginning of February, you heard speakers embrace both sides of this debate.

This Does Not Compute

A study published in 2019 splendidly complicates this tidy summary.

A research team led by Dr. Louis Deslauriers ran a straightforward experiment.

Researchers worked with two groups of students enrolled in an introductory physics class at Harvard.

The first group studied topic A in an “active learning” paradigm, and topic B with a “passive lecture.”

The second group switched that order: topic A was “passive lecture,” and topic B was “active learning.”

The research team found a surprising set of results.

Students learned more from the “active learning” classes, but enjoyed (and thought they learned more from) the “passive lecture.”

Paradoxically, passive learning enhanced engagement but reduced understanding. Active learning enhanced learning but reduced engagement.

Almost everyone will find that combination of results surprising, even disappointing.

Puzzle #1 (with a potential explanation)

Members of Team Active Learning, I suspect, predicted that the students would learn more when their professors followed that approach. Voila: they did.

And (almost certainly) teachers on that team predicted that active learning would result in higher engagement. Yet — as measured in this study — it didn’t.

Students clearly preferred the “passive lecture.”

For instance, survey results show that students wanted other physics courses to be taught with passive lecture/direct instruction.

 

The researchers have a hypothesis explaining this puzzling result. They wonder if the additional cognitive challenge created by active learning resulted in “desirable difficulty.”

That is: the students had to think harder — a challenge they didn’t really enjoy.

And this extra thought resulted in more learning. (You can watch a short video here to learn more about this hypothesis.)

Puzzle #2 (with another potential explanation)

Members of Team Direct Instruction, no doubt, are delighted that students preferred the (misnamed) “passive lecture.” According to the survey results, students felt they learned more from it than from the “active learning.”

And yet, Direct Instruction advocates no doubt feel genuine puzzlement that their preferred approach resulted in less learning. How could that be?

 

I myself have a hypothesis explaining this puzzling finding.

Contrary to many stereotypes, direct instruction advocates do NOT champion uninterrupted lecture.

Instead, they suggest that teachers start with straightforward explanation of core concepts.

Once those have been presented clearly, then students should do substantial independent mental work with those ideas.

In other words, advocates of direct instruction heatedly reject the label “passive learning.” Students do plenty of active cognitive work after they get the benefit of initial priming from instructors.

And yet, in this study, students in the passive learning group had to, in the researchers’ words, “adjust to a complete elimination of any active engagement” — such as “demonstrations, … interactive quizzes, or conceptual questions.”

NO educational thinker feels surprise that students learn less in the total absence of active engagement.

That’s not “direct instruction.” That’s … well … that’s a very bad idea. (To be clear: a very bad idea that happens all too frequently.)

A (Potential) Resolution

Because the “passive learning” condition subjected the students to pure lecture, this study seems much less surprising (to me).

With “passive learning,”

Students learned LESS from uninterrupted lecture. (Why? They didn’t do any independent mental work with the material.)

Because the professor’s explanation made sense, on the other hand, they FELT they understood the material better.

With “active learning,”

Students learned MORE, because they interacted with the concepts and problems individually.

Alas, they FELT they understood less because they experienced the “difficult” half of “desirable difficulties.”

In other words: the study results seem confusing because the labels don’t mean what we thought they meant.

Until we know EXACTLY what happened in both “passive” and “active” learning, we can’t really judge how well those phrases align with our preconceptions — and with our own teaching practices.

One more thought

If a particular diet benefits, say, professional athletes, will it benefit me?

I’m going to be honest: I’m not a professional athlete.

A diet that benefits their level of physical fitness, metabolism, professional goals, etc., might not be healthy for me. (In his swimming prime, Michael Phelps ate 8000-10,000 calories a day. I suspect my doctor would discourage me from doing so.)

If Harvard even remotely lives up to its reputation, then students in Harvard physics classes understand an impressive amount of science. They have a great deal of motivation to learn more about science. They’ve been impressively successful in academic pursuits.

If a teaching method works with Harvard physics students, will it work with my 10th grade English students? Will it work with your 2nd graders? Maybe … but also, maybe not.

In general: I’m hesitant to apply research done at Harvard (or Stanford, or Oxford, or the US Naval Academy…) to most K-12 learning.

It’s entirely possible that the method “works” not because of the method, but because of the extraordinary background of the students who participate in it.

TL;DR

Before we embrace research on “active learning” or “direct instruction,” we should know…

… EXACTLY what those labels mean in the research, and

… the GOODNESS OF FIT between those research participants and our students.

Dan Willingham has wisely written: “one study is just one study, folks.”

The Benefits of Direct Instruction: Balancing Theory with Practice
Andrew Watson

When teachers hear that “research shows we should do X,” we have at least two broad questions:

First Question: what’s the research?

Second Question: what EXACTLY does X look like in the classroom?

People who have the expertise to answer the first question (researchers) might not have the K-12 classroom experience to answer the second question.

And, of course, people who can make it work in the classroom (teachers) might not know or understand the research.

Wouldn’t it be great if we could find one book that answers both sets of questions?

In fact, it would be especially great if that book focused on a controversial topic. In that case, we could see a complete argument – both the why and the how – before we make a judgment about the controversy.

Does that sound tempting? I have good news…

Embracing Controversy

A feisty battle has raged in edu-circles for many years now: “direct instruction” vs. “constructivist pedagogy.” *

In one corner, “constructivists” argue that problems or projects or independent inquiries help students discover and build enduring understanding. And, such exploration fosters authentic motivation as well.

In the other corner, “direct instruction” advocates argue that working memory limitations sharply constrain students’ cognitive workspace. For that reason, teachers must explicitly shape learning experiences with small steps and carefully-designed practice.

Both approaches can be – and frequently are – parodied, misunderstood, and badly practiced. So, a book explaining the WHY (research) and the HOW (classroom practice) would be greatly helpful.

Sage on the Page

Adam Boxer teaches chemistry at a school in London, and has been blogging about his work for some time now. (If you follow our twitter account, @LearningandtheB, you’ve seen links to his work before.)

In his book Explicit & Direct Instruction: An Evidence-Informed Guide for Teachers, Boxer gathers eleven essays that explain the research background and then get SUPER specific with classroom suggestions.

In the first chapter, Kris Boulton tells the history of “Project Follow Through,” a multi-decade program to discover the best way of teaching children.

Researchers tracked more than 200,000 children in 13 different programs over several years, and compared their learning across three dimensions: basic skills, cognitive skills, and affective skills.

Which approach proved most effective?

Direct Instruction, created by Siegfried Engelmann.** It was, in fact, the only program of the 13 that benefitted students in all three dimensions.

When advocates of Direct Instruction (and direct instruction) insist that research shows its effectiveness, they reasonably enough point to Project Follow Through. (Can others critique this study? Of course…)

Both Boulton and Greg Ashman (in the second chapter) then emphasize the alignment of direct instruction with psychology models: cognitive load theory, schema theory, and so forth.

In brief: we’ve got LOTS of research explaining why direct instruction should work, and showing that it does work.

Let’s Get Practical

After Boulton and Ashman explain the why, the next several chapters deliver on the classroom how.

For me, the book’s great success lies in the number, variety, and specificity of these chapters.

What does direct instruction look like for teaching math?

How about science?

How about writing?

What’s the best number of examples to use?

And so forth.

I especially enjoyed Sarah Cullen’s chapter on fading. Cullen begins with an important question/critique:

How, then, can a teaching method that so depends on instruction – on teachers leading learning and controlling the content to which pupils are exposed – foster autonomy?

Her answer focuses on having scaffolds and removing scaffolds – aka, “fading.”

In particular, Cullen wisely conceptualizes fading over many different time spans: fading across grades (which requires planning across years), fading within a term’s curriculum (requiring planning across months), and fading within a lesson (requiring skill, insight, and practice).

Like the book’s other chapters, Cullen’s offers many specific examples for each of her categories. In other words, she grounds theoretical understanding in highly specific classroom realities.

In Brief

If you already think direct instruction sounds right, you’ll be glad to have a how-to guide.

If you think it sounds suspect (or even oppressive), you’ll be glad to read a straightforward explanation of the research behind the approach. (You might not be persuaded, but you’ll understand both sides of the argument more clearly.)

And, if you want realistic classroom examples explained with loving detail, this book will launch 2022 just right.


* I’ve put those labels in quotation marks because both are familiar, but neither one really works.

** Direct Instruction (with capital letters) is the name of Engelmann’s specific program. On the other hand, direct instruction (without capital letters) is a broader approach to thinking about teaching and learning.

Pure Inquiry, Guided Inquiry, and PISA
Andrew Watson

Because scientists work by inquiring, it makes rough-n-ready sense that we should teach science through the process of inquiry. Indeed, “inquiry-based learning,” like “problem-based” and “project-based” learning, has emphasized students’ construction of their own understanding.

According to a well-known definition, this pedagogy focuses on students…

… asking questions,

… planning and conducting investigations,

… using appropriate tools and techniques to gather data,

… thinking critically and logically about relationships between evidence and explanations,

… constructing and analyzing alternative explanations,

And so forth.

Of course, we should also inquire: does inquiry-based learning in fact help students learn? This question leads to lots of controversy…

Many Methods

We can explore that question in several ways.

We might, for instance, have one group of students learn a topic through inquiry learning, and a control group learn it through direct instruction. When we test them later, we’ll get a good sense of who learned the material better.

That method — if we do everything right — gives us a clear answer.

But: it focuses on a small group of people learning only one thing. Who knows if that clear answer applies in other circumstances?

Or, we might look at large groups of people who studied many things. If we can find out what method their teachers used, and measure how well they learned, we’ve got another useful strategy for answering our question.

Of course, we’ll be less certain about the quality of the teaching than in the highly-controlled environment. Who knows if the inquiry-based teaching was, in fact, well done?

Following this second approach, researchers in the UK looked at PISA data (PISA = Programme for International Student Assessment), and aligned it with high-stakes testing scores in England: the GCSE (General Certificate of Secondary Education).

The PISA data help here because students rank how much time they spent in various inquiry-learning practices: “every lesson, most lessons, some lessons, never.” For instance, students rate how often they are “allowed to design their own experiments.”

So: by linking PISA data about teaching practices with GCSE scores, those researchers can draw some conclusions about the effectiveness of inquiry learning.

What did they find?

Negotiating Our Own Biases

Before I answer that question, let’s acknowledge a problem.

Many teachers already have opinions — strong opinions — about inquiry learning.

Those opinions bias our responses to new information.

If (for example), I don’t think inquiry learning works, and this research shows that it does, I’m inclined to dismiss the study.

“Well, look at all of these methodological problems!”

Yet (the contrary example) if I’m an inquiry-learning champion, research showing its effectiveness automatically seems wise and well-designed.

“Gosh: it’s hard to think of a fairer way to answer this question! Now we KNOW it works…”

So, here’s my suggestion: decide right now — before you know what the researchers concluded — whether or not you’re confident in this study design.

All study designs have some flaws. This one, for instance, relies on student self report. And, as noted above, it doesn’t have any way to control for the quality of inquiry learning practices.

You might reasonably conclude those flaws are too great. In that case, you don’t get to cite this study even if it ends up confirming your beliefs.

Or, you might reasonably conclude that — on average — errors in self-report and quality control will balance themselves out; and this research method gives a big-enough data pool to draw meaningful conclusions. In that case, you have to pay attention to the study even if it contradicts your beliefs.

So: push yourself to decide now…

The Envelope Please

Now that you’ve decided to give credence to this methodology (or not to do so), here’s what they found.

“In summary, neither high inquiry with low guidance, nor high guidance with low inquiry are related to improved science attainment.” (emphasis added)

That is: pure inquiry learning doesn’t result in more learning than plain-old explanation. And, as Professor Christian Bokhove notes, it doesn’t result in less learning either.

But:

“There is, however, some tentative evidence that moderate levels of inquiry delivered in conjunction with high guidance may have a small positive impact upon science achievement.” (emphasis added)

That is: lots of plain-old guidance PLUS a moderate amount of inquiry gives a little boost.

By the way: what do the researchers count as “guidance”? Here, they’re looking at PISA questions about teachers’ suggestions for improving performance, meeting learning goals, extra help, and so forth.

Teaching Implications

As I noted above, the “direct instruction vs. inquiry” debate generates A LOT of passion.

I think this study invites us to step back and get some perspective. It seems that — at least in this research paradigm — a healthy dose of teacher explanation and support helps students learn. And, some degree of independent inquiry enhances that result. (“Small positive impact.”)

But: inquiry learning neither yields a whole new level of scientific understanding, nor prevents students from learning much at all.

I suspect that, ultimately, we need to ask more granular questions.

Does inquiry give a greater lift in later grades than earlier ones? Perhaps it helps when scientific findings confirm our prior knowledge, but not when they contradict it? Does the teacher’s experience level matter?

Research into these questions might give us specific and practical classroom guidance.

Today’s Humble Pie: 206 Bones
Andrew Watson

Back in early November, I wrote about a study seeming to contrast direct instruction with more constructivist approaches.

I argued that those labels simply didn’t apply to the actual teaching methodologies measured in the research.

So, the “inquiry and problem-based pedagogy” [IPP] used all sorts of direct instruction. Here’s the authors’ summary of that method; I’ve put some words in bold:

“When done well, IPP includes elements of explicit instruction and scaffolding.

Teachers facilitate learning by guiding students through a series of steps and explicitly relating learning to students’ prior knowledge and experiences.

Teachers guide learners through complex tasks with explicit instructions that are relevant to the problems at hand.

They provide structure and scaffolding that help students not only carry out specific activities, but also comprehend why they are doing those activities and how they are related to the set of core concepts they are exploring.”

So, as you can see, “direct instruction” techniques are built into this method.

And, the method described as “traditional” seems to me an obvious straw man. Again, quoting from the research:

“Students copy facts about bone tissues and the names of 206 bones of the human skeleton that teachers have written on the blackboard into notebooks.”

I mean, seriously, who does that? Copies the names of 206 bones? Except for Charles Dickens’s Gradgrind — “facts, facts, facts!” — who would do such a thing?

Slice of Pie

I was discussing this study with a friend recently, and it turns out: her college professor would do such a thing. Right here in Massachusetts — home of the very first Learning and the Brain conference!  — her vertebrate anatomy professor put up slides of skeletons and labeled all the bones.

Slide after slide after slide. After slide. (Apparently he interspersed them with funny stories, just to keep the students awake. In my friend’s telling: his students remembered the stories, but not the anatomy.)

Except for the funny stories, Gradgrind would be proud indeed.

In any case, it’s clear that this “traditional” method is NOT a straw man, and at least one professor seems to think it a good idea.

So, to be clear: I do think asking students to memorize some core facts is not only plausible but beneficial. Without essential information in long-term memory, working memory will be overloaded by too much external information.

But: I can’t think of any research-based argument for an entire class — much less an entire course! — devoted to listing bones. That’s not direct instruction. That’s purgatory.

Two Core Points

Point from November’s post: as I wrote back in November, we can’t use this research to champion a pure constructivist approach to learning, because IPP includes lots o’ direct instruction.

Point from today’s post: “direct instruction” does not mean “presenting unbroken lists of facts, and then calling for them to be repeated.” Even if that really happens [shudder], that’s a profound misunderstanding of research and terminology.

“Direct instruction” does mean introducing enough factual or conceptual information to allow students to work thoughtfully — and increasingly independently — on a well-scaffolded series of problems.

Of course, this definition can be expanded and formalized. But: whatever you call “copy the names of 206 bones,” please don’t call it direct instruction.

Can a Neuromyth Result in a Truce?
Andrew Watson

We teachers feel passionately about our work, and so–no surprise–our debates and disagreements get heated.

Few debates rage as fiercely as that between champions of direct instruction (with or without capital “D” and “I”), and champions of constructivism (in its many forms: project-based learning, student-centered learning, etc.).

In a recent essay, writer and blogger Tom Sherrington would like to soothe this ferocity by declaring the whole debate a myth.

As his title declares: it’s a myth that “teacher-led instruction and student-centred learning are opposites.” (Sherrington is British, so we can overlook the missing “e” from “centred.”)

In brief, he argues: no matter how passionately we disagree about pieces of this debate, almost everyone agrees on a sensible core of ideas. We’re arguing at the margins, but could just as easily refocus on our agreements at the center.

Passionate Debates

One well-known meta-analysis sports this dramatic title: “Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.”

Not much grey area there.

But, as Sherrington notes in his essay (I’ve tweaked the punctuation to make it blog-friendly):

[The authors] present their case most strongly for novice and intermediate learners but they appear to concede that for students approaching a more expert position, the different approaches are at least ‘equally effective.’

This means the debate is more about sequencing approaches appropriately in the learning journey.

Students will reach a point where these approaches represent a genuine choice.

And, critics of that meta-analysis also find a middle ground (again with the punctuation tweaking):

The more important questions to ask are: under what circumstances do these guided inquiry approaches work? What are the kinds of outcomes for which they are effective? What kinds of valued practices do they promote?

In other words: even the champions of the strongest claims concede that they see both approaches being appropriate at different times.

Specifically: novices need (relatively more) direct instruction. Experts benefit from (relatively more) open-ended, project-y methods.

Beyond Knowledge

Sherrington argues for a truce between direct instruction and PBL, first, because even strong advocates admit that the “other side’s” methods have a place under certain circumstances.

Teaching novices? Try direct instruction. Working with relative experts? Bring on the projects.

Second, he argues that schools exist both to help students acquire knowledge and to help them acquire social habits and practices we value.

As Sherrington writes: “there are many aspects of student activity and teacher-student engagement that are desirable simply because we value them as social constructs.”

So, for example: our society–heck, our very form of government–requires that people be able to work together effectively. For that reason, we benefit our students when we help them learn how to do so.

When we coach students along with group work, that teaches them skills that our society values–above and apart from the knowledge they gain while doing that work.

Of course, Sherrington’s essay includes many other thoughtful points beyond these two: it’s worth reading in full.

A Recent Example

Sherrington’s first argument struck me because I’ve been trying to make it for some time now.

Just ten days ago on this blog, I wrote about a huge study from South America purporting to show that collaborative, inquiry-based learning produced substantial advantages.

And yet, as I found when I read its methods, the study didn’t contrast student-centered teaching with teacher-centered teaching.

Instead, it contrasted good teaching (combining both explicit instruction and projects) with really bad teaching (“copy down the names of the 206 bones of the human body”). Unsurprisingly, bad teaching produces bad results.

In other words: I’d like to spread the word of Sherrington’s truce. I hope you’ll join me!


Sherrington’s essay appears in The researchED guide to education myths: An evidence-informed guide for teachers, published by John Catt.

I wrote about Clare Sealy’s essay in this collection last week as well, so you can tell I think it’s got lots of quality work.

I don’t agree with everything I read in this guide, but neither does its editor (Craig Barton) or the series editor (Tom Bennett). They want to foster the debate, and this volume does that admirably.

Inquiry- and Problem-Based Pedagogy: Dramatic Results in South America (?)
Andrew Watson

A recent study, published by the Center for Effective Global Action, sees big benefits from teaching built around student collaboration, inquiry, and problem-solving.

Working in four countries (!), in ten different schools (!!), with over 17,000 students (!!!), researchers find that K-4 students made more progress in math and science when they explored questions, compared with students who listened to lectures.

They report these results in stats-y language that doesn’t translate well: after 7 months, students averaged 0.18 standard deviations higher in math, and 0.14 in science. After four years, those differences bloomed to 0.39 and 0.23.

That’s not as sexy-sounding as, say, “they scored X% higher on a standardized test.” But, however you look at it, those are eye-catching numbers.
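If you want a rough feel for what numbers like 0.18 or 0.39 standard deviations mean, here is a minimal back-of-the-envelope sketch in Python. It assumes roughly normal score distributions (my assumption, not something the study reports) and converts each effect size into the percentile of the control group that an average treated student would reach.

```python
from statistics import NormalDist

# Effect sizes quoted in the study summary above, in standard deviations.
effect_sizes = {
    "math, 7 months": 0.18,
    "science, 7 months": 0.14,
    "math, 4 years": 0.39,
    "science, 4 years": 0.23,
}

for label, d in effect_sizes.items():
    # Assuming roughly normal score distributions, an average student in the
    # treatment group would land at this percentile of the control group.
    percentile = NormalDist().cdf(d) * 100
    print(f"{label}: +{d} SD -> roughly the {percentile:.0f}th percentile of the control group")
```

On that rough reading, the four-year math gain of 0.39 SD moves an average student from the 50th to roughly the 65th percentile of the comparison group, which helps explain why those numbers catch the eye.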

Inquiry Learning vs. What?

Despite these dramatic numbers and claims, I don’t think the study supports the strong claims made by these researchers.

Here’s why.

First, the research purports to study the difference between “inquiry and problem based pedagogy” with “traditional instruction.”

If you look over the description of the classes, however, I think you’ll quickly see that it studies the difference between “good” teaching and “bad” teaching.

So, for instance, in a “traditional” unit on the skeletal system in Argentina:

[S]tudents copy facts about bone tissues and the names of 206 bones of the human skeleton that teachers have written on the blackboard into notebooks.

That’s not traditional. That’s absurd. They copy the names of two hundred and six bones? The mind boggles.

And, by the way, the “inquiry and problem based pedagogy” [IPP] is full of good, old-fashioned direct instruction:

When done well, IPP includes elements of explicit instruction and scaffolding.

Teachers facilitate learning by guiding students through a series of steps and explicitly relating learning to students’ prior knowledge and experiences.

Teachers guide learners through complex tasks with explicit instructions that are relevant to the problems at hand.

They provide structure and scaffolding that help students not only carry out specific activities, but also comprehend why they are doing those activities and how they are related to the set of core concepts they are exploring.

So, yes, these students are inquiring and problem solving. And, they’re getting lots of explicit teacherly guidance.

So, again, the labels used in this study don’t fully align with the concepts we typically use them to mean.

Compared to Whom?

A second question jumps out here as well.

The teachers who used IPP methods got impressive training and support. For instance:

They got 20 hours of professional training in these methods. (When was the last time your school provided twenty hours of training on one topic?)

They got lesson plans. They got teaching materials.

They got “continuous in-school teacher support.”

What did the teachers in the control-group schools get? The study doesn’t say.

That silence leads to the possibility that they got…nothin’.

Which is to say: the study compares teachers who got lots and lots (and lots) of support, with teachers who didn’t get any support.

So, the difference might have come from the specifics of the teaching method: in this case, “IPP.”

Or, it might have come from the energizing effects of working at a school getting so much researcher support and attention.

We simply don’t know. And, if I’m right that this was a “business as usual” control group, then the study design doesn’t let us know.

Strong Conclusions

Based on this study, I think we can conclude that…

4th graders should not have to copy 206 vocabulary words into their notebooks. (I’ll go out on a limb and say NO ONE should have to do that.)

Some level of explicit teacherly support and guidance is essential.

Once foundational knowledge has been established, an appropriate level of independent questing can solidify and extend knowledge.

Most of us, I suspect, would have agreed with all of those statements before we read the study.

I don’t think, however, we can conclude from this study that “Inquiry and Problem Based Pedagogy” (as we typically use those words in the US) is the best approach. Because: that’s not what this study tested and measured.