Andrew Watson – Page 42 – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M.Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of “Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher.”

How to (Un)Make System-Wide Changes Based on Research
Andrew Watson

If you’re reading this blog you already know: brain researchers can offer fascinating and practical advice to improve teaching and schooling.

There’s SO MUCH good advice, you might want to make lots of changes right away to get all those benefits.

Before you do that, let me offer two suggestions.

First: Take Incremental Steps

I worked with a school where the math department dramatically overhauled its entire program, with research guidance along the way.

The department adopted an inquiry-based pedagogical model. And, it developed a syllabus designed to space and interleave practice. And it championed a group-work approach. And, to help students manage math anxiety, it adopted a mindful meditation program. And it incorporated lots of exciting technology.

The results: not good. By every measure, the students didn’t learn as much math. (In some cases, the results were quite alarming.) And: stress levels among teachers and students went up considerably.

Here’s the vexing point: the teachers didn’t know what to do next, because they didn’t know which of the changes they made created the problem.

Should they go back to direct instruction? Should they return to the traditional syllabus? Give up on group work and stop mindfulness breaks? Return all the iPads?

Because the grand combination of changes had produced those bad results, teachers didn’t know which one piece (or combination of pieces) to undo.

Potential solution: make one change at a time. Get a good measure of its success or failure before you add the next.

Second: Define Success and Failure in Advance

When we put lots of effort into a new project, we’re naturally inclined to root for its success. In fact, we’re inclined to exaggerate its success.

For example: if I commit to a systematic program of retrieval practice with my students, I’m likely to see its benefits. And: to overlook its shortcomings.

To compensate for these natural blind spots, I think, we should define success and failure before we make the changes.

For instance, I might decide that all my retrieval practice work is succeeding if class averages on weekly vocabulary quizzes go up by 5%. It’s failing if that average falls by 5%. Any result in between is inconclusive noise.

(No doubt, what you measure and how you measure it will differ considerably from this hypothetical.)
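For the quantitatively inclined, that decision rule is simple enough to write down as code. Here is a minimal sketch, assuming the hypothetical ±5% thresholds and quiz averages from the example above (the function name and sample numbers are mine, not from any study):

```python
def evaluate_change(baseline_avg, new_avg, threshold=0.05):
    """Classify a result against success/failure thresholds
    that were chosen *before* the change was made."""
    change = (new_avg - baseline_avg) / baseline_avg
    if change >= threshold:
        return "success"
    if change <= -threshold:
        return "failure"
    return "inconclusive"

print(evaluate_change(80, 85))  # +6.25% -> success
print(evaluate_change(80, 78))  # -2.5%  -> inconclusive
```

The point isn’t the arithmetic; it’s that the thresholds are committed to in advance, so the verdict can’t drift to match our enthusiasm.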

At the end of the following term, I can check out my class averages, and have a plausible measurement of the results.

When I calculate those averages, I will hope that all my work paid off in more vocabulary learning. But I’ll be less tempted to overhype success — “look, they’re having so much more fun!” — if the data throw cold water on my enthusiasm.

And, that cold water will be all the more bracing if I myself decided on that measurement — well before the experiment even began.

To Sum Up

We should indeed make changes to schools based on psychology and neuroscience research.

To ensure those changes produce the most benefit, we should introduce them at a reasonable pace.

And: we should know before we start how to define success.

Working Memory Overload Throws Neurons Out of Synch
Andrew Watson

My English classroom often includes discussions like these:

When we read Zora Neale Hurston’s Their Eyes Were Watching God, I might ask my students “who is the antagonist?”

To answer this question, my students must recall several bits of factual information:

the definitions of “antagonist” and “protagonist”

the major characters of the novel

their most important actions and goals

Once they’ve recalled those facts, my students have to rearrange all that information into new conceptual patterns.

Which character’s actions and goals best align with the definition of “protagonist”? (In this case, that’s an easy question. Janie Crawford is far and away the likeliest nominee.)

Who’s the antagonist? That is, which character’s actions and goals thwart Janie’s?

That’s a much harder question, and students must wrestle with several possibilities as they develop a plausible argument.

Let’s Talk About the Mind

Where do my students hold and process all this information?

For a psychologist, that’s an easy question: working memory.

Working memory allows students to select, hold, reorganize, and combine information held in long-term memory: in this case, the novel’s events.

It also allows them to select, hold, reorganize, and combine information perceived from the environment: the question I just asked about antagonists.

Because we constantly ask our students to hold and combine bits of information, our students use working memory all the time.

When we ask students to calculate the volume of a solid, or to compare historical figures, or to explain a trophic cascade, or to predict what will happen when I roll a ball down a ramp, we’re asking them to use working memory.

By the way: this truth holds for skills and processes as well. Why is learning to drive a stick shift so hard? Because you must hold, combine, and co-ordinate several distinct physical processes.

And, here’s an essential point: we don’t have lots of working memory to use.

Let’s Talk About the Brain

We know a lot about the mental processes involved in working memory. (I might have written a book about them.)

But, the neuroscience of working memory has been harder to study.

In the world of psychology, we know that WM can be easily overwhelmed.

But, in the world of neuroscience, we don’t know exactly what happens at that moment.

In other words: what’s happening in the physical object of the brain that accounts for the mental difficulty?

What happens, for example, when I can’t shift gears properly on this stupid manual car?

Are neurons somehow disconnecting from one another? Are electrical signals going haywire? Perhaps neurotransmitters are watching kitten videos on YouTube?

Today’s News

We’re starting to get an answer to that question.

New research suggests that successful working memory functioning requires that distinct brain regions operate synchronously.

When they reach overload, those regions fall out of synch.

Once those regions no longer synchronize, then students might struggle to solve math problems, or sound out a word with new phonics rules, or conjugate a verb in a freshly learned tense.

Like much neuroscience research, this study is fantastically complicated. Luckily, it’s been described quite well by Jordana Cepelewicz over at Quanta Magazine. (No need to worry about the “seven plus or minus two” formula.)

The good news here is clear: we’re starting to get a clearer picture about the neuroscience of working memory overload. Because teachers should be obsessed with working memory overload, we might well be intrigued by this news.

We should keep in mind, by the way, that this research so far has been done with monkeys. Whenever considering new research, always keep this rule in mind:

Never, never, never change your teaching practice based on research into non-human animals.

At some point, we might get neuroscience research that helps teachers manage working memory load. Although that day isn’t today, we should be glad that research possibility is clearer now than before.

Is Failure Productive? (Hint: We Should Ask a Better Question)
Andrew Watson

Do students learn better after they experience failure? Two recent studies over at The Science of Learning help us answer that question.

In the first study, professors in a Canadian college wanted to help their Intro Bio students learn difficult concepts more effectively. (Difficult concepts include, for example, the “structural directionality of genetic material.”)

They had one Intro Biology section follow a “Productive Failure” model of pedagogy. It went like this.

First, students wrestled with conceptual problems on these difficult topics.

Second, they got in-class feedback on their solutions.

Third, they heard the professor explain how an expert would think through those topics.

Another Intro Bio section followed these same steps but in a different order:

First, they heard the professor explain how an expert would think.

Second, students wrestled with conceptual problems.

Third, they got in-class feedback on their solutions.

So, all students did the same steps. And, they all followed an “active pedagogy” model. But, one group struggled first, whereas the other group didn’t.

Who Learned More?

This answer proves to be unusually complicated to determine. The researchers had to juggle more variables than usual to come up with a valid answer. (If you want the details, click the link above.)

The headlines are:

On the next major test, students who experienced productive failure learned more.

On the final exam, however, only the “low performing” students did better after productive failure. For the middle- and upper-tier students, both strategies worked equally well.

Conclusion #1:

So, we can’t really conclude that productive failure helps students learn.

Instead, we’re on safer ground to say that – over the longer term – productive failure helps “low performing” students learn (compared to other kinds of active learning).

But Wait, There’s (Much) More

Two weeks after they published the study about Canadian college students in Biology classes, Science of Learning then published a study about German fifth graders learning fractions.

(As we discussed in this post, watching students learn fractions helps researchers measure conceptual updating.)

In particular, these researchers wanted to know if students learned better after struggling for a while. (Again, for details click the link.)

In this case, the answer was: nope.

So, we arrive at Conclusion #2:

Some college students, but not most, learned more from productive failure in a biology class – compared to those who learned via other active learning strategies.

However, fifth graders did not learn more about fractions – compared to those who learned via direct instruction.

Got that?

The Biggie: Conclusion #3

When teachers come to research-world, we can be tempted to look for grand, once-and-for-all findings.

A particular study shows that – say – students learn better when they use an iPad to study astronomical distances. Therefore, we should equip all our students with iPads.

But, that’s NOT what the study showed. Instead, it showed that a particular group of students studying a particular topic with a particular technology got some benefit – compared to a particular alternate approach.

So, Conclusion #3:

Teachers can often find helpful research on teaching strategies.

We should assume that results vary depending on lots of highly specific conditions. And therefore, we should seek out research that includes students (and classroom subjects) as much like our own as possible.

And so: if you teach biology to college students, you might give the first study a close look to see if its methods fit your students well. (Given that it worked particularly well with struggling students, that variable probably matters to you.)

If, however, you teach fractions to fifth graders, you should probably hold off on productive failure – unless you find several other studies that contradict this one.

In other words: teachers can learn the most from psychology and education research when we investigate narrow and specific questions.


A final thought. I’ve only recently come across the website that published these studies. Congratulations to them for emphasizing the complexity of these research questions by publishing these studies almost simultaneously.

I’m sure it’s tempting to make research look like the last word on a particular topic. Here, they’ve emphasized that boundary conditions matter. Bravo.

The Simplest Motivation Strategy that You’re (Probably) Not Using
Andrew Watson

In school as in life, sometimes we just need to get stuff done. And, truthfully, getting stuff done can be a real challenge.

For instance: I’m about to start writing a book. Based on previous book-writing experiences, I can predict the mundane problems that will get in my way.

My cats invariably need attention just as I’m starting to get in the zone.

The alerts from my email account lure me away from difficult writing passages.

I can never decide: stop for a snack now, or wait until lunch?

Luckily, we’ve got a remarkably simple strategy to get over these predictable hurdles.

Give Me Three Steps

Step 1: make a list of the potential problems. (I’ve already done that.)

Step 2: figure out the most plausible solutions.

So, for instance: instead of responding to my email alerts, I can simply close that browser. Problem solved.

Step 3: turn the first two steps into an “if-then” plan.

IF I get an email alert while working on my book, THEN I’ll close my email browser rather than look at the email.

Believe it or not, this simple process makes it much likelier that I will, in fact, ignore the email. (Or the cat, or my hunger.) And, because I’ve taken care of the most common obstacles, I’m much likelier to get my book written.

(Ask me six months from now how it’s going.)
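The three steps above amount to a simple mapping from obstacles to pre-committed responses. Here is a toy sketch in Python; the email obstacle comes from the running example, while the cat’s “solution” is my own hypothetical:

```python
def if_then(trigger, response):
    """Step 3: turn an obstacle and its solution into an explicit plan."""
    return f"IF {trigger}, THEN {response}."

# Steps 1 and 2: list the predictable obstacles and their most
# plausible solutions. (The cat response is invented for illustration.)
plans = {
    "I get an email alert while working on my book":
        "I'll close my email browser rather than look at the email",
    "my cat demands attention just as I'm getting in the zone":
        "I'll give her thirty seconds of petting, then keep typing",
}

for trigger, response in plans.items():
    print(if_then(trigger, response))
```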

Two More Steps?

This technique is even more effective when combined with another technique called “mental contrasting.”

In a recent article summarizing research in these fields, Marc Hauser describes mental contrasting this way:

In [mental contrasting], the individual first identifies and vividly describes a desired goal or wish. To be effective, this wish has to be feasible, but not easy.

Next, the individual identifies an obstacle that might get in the way of achieving this goal and vividly describes it [too].

Doing both together — vividly describing the goal AND vividly describing the obstacle — turns out to be much more helpful than doing just one or the other.

The Proof in the PSAT, and the Pudding

These techniques seem so simple that it’s hard to believe they work. In fact: why should we believe it?

Well, we’ve got some good research to persuade us. Hauser’s article, in fact, does a very helpful job summarizing both the theoretical background behind these strategies, and the studies that show their effectiveness.

For instance, Angela Duckworth (yes, that Angela Duckworth) worked with high-school students who wanted to prepare for the PSAT. Those who went through this process did 60% more practice problems than those who did a control task instead.

In fact, we’ve got good findings for non-academic tasks as well: limiting drinking, smoking, snacking, and so forth.

Practical Applications for Students

This technique, it seems to me, could be VERY easy for teachers to use. When we talk with our students about their homework habits, we can guide them through this process.

In fact, when I work with students in schools, I bring a specific form to guide them through the process.

(Here’s another approach from Ollie Lovell.)

Equally helpfully, we can use this technique to get our own work under control as well. We might not all have books to write, but we all have plenty of lesson-planning to do.

IF my phone rings while I’m preparing tomorrow’s class, THEN I’ll switch the phone to airplane mode without looking at the caller ID.

Problem solved!

The Debate Continues: Being Bilingual Doesn’t Improve Executive Function
Andrew Watson

Being bilingual, of course, offers lots of advantages. You can speak with more people, and — depending on potential cultural differences — gain valuable additional perspectives.

For a long time, researchers have wanted to know: does being bilingual offer additional cognitive benefits? Can it, in a meaningful way, help us think differently, and think better?

Some researchers — including my grad-school professor Gigi Luk — have argued that it increases cognitive control.

This possibility has been studied, for instance, with immigrants who learn a second language in Head Start.

However, this claim has long provoked controversy.

In this recent study, using highly sophisticated statistical procedures, researchers found that being bilingual did not improve executive function for Turkish immigrants to Germany.

What Should Teachers Do?

I have, over the years, seen studies on this topic go back and forth.

For instance, the Head Start research cited above seems quite persuasive. In that study, all students increased self-control during their Head Start year; after all, they were getting older.

Crucially, the students who also became bilingual showed greater increases in self-control. The likely explanation? The cognitive control required to be bilingual helps with other kinds of self-control as well.

And yet, as seen in the study of Turkish immigrants, that conclusion just might not be plausible.

So, my suggestions:

First: if your school currently makes strong claims about the executive-function benefits of bilingualism, you should seriously consider acknowledging the controversy in this field. That claim does have support. It also faces lots of strong counter-evidence.

Second: use this study as a reminder to seek out contradictory findings before you make changes to your classroom.

That is: if you hear persuasive research about topic X, be sure to look for anti-X research before you start X-ing.

Third: somewhat glumly, I wonder if this question ever can be answered finally and persuasively. The category “executive function” is alarmingly nebulous. And, the reasons that people become bilingual vary dramatically.

Given so many kinds of variety, I increasingly doubt we’ll be sure of an answer here.

With that point in mind, I think we should highlight this important point: learning a second language has value even if doing so doesn’t produce additional executive function benefits.

When we learn new languages, we create new opportunities to meet and connect with a world full of people. That benefit alone makes all that hard work worthwhile.

Factual Knowledge Must (Not?) Precede Higher Order Thinking
Andrew Watson

Last year, Tom Sherrington put out a recap of “evidence-informed ideas every teacher should know about.”

His statement about Bloom’s taxonomy goes like this:

“Never teach in a way that relegates knowing things to the bottom of the pile, placing creativity and ‘synthesis’ at the top, or get overly bogged down in ideas about ‘higher order thinking skills’ as if they are separate from knowing things.  They aren’t.

Re-think your sense of Bloom’s taxonomy to view knowledge that is the foundation of all else – and knowing things for the sake of it is good. Because there is always a sake and knowing things never stifles creativity; one fuels the other.”

In this summary, Sherrington makes a strong case for the primacy of factual knowledge. In this view, learners simply can’t undertake “higher order” thinking skills — like synthesis or creativity — without a strong foundation of factual knowledge.

Among teachers, this principle may be best known from Daniel Willingham’s Why Don’t Students Like School? He sums up the matter briskly as chapter 2’s core principle:

“Factual knowledge must precede skill.”

Of course, many scholars have rejected this perspective. Best known among them, Jo Boaler has insisted that math education currently relies excessively on memorization of disconnected facts.

Rather than drill times tables, she argues, teachers should prompt inquiry, exploration, and curiosity.

In other words: we can get to the top of the pyramid without worrying overly about the bottom layer.

Yes, but What Does Recent Research Show?

Researcher Pooja Agarwal specializes in cognitive science, with a focus on memory formation. In fact, she’s particularly keen on doing research in classrooms — not just psychology labs — to ensure that research findings generalize to real-world learning.

(We’ve interviewed Dr. Agarwal for the blog before. And, she’ll be offering a one-day Learning and the Brain seminar on powerful teaching in April.)

Agarwal recently explored the relationship between factual knowledge and skill. Her findings might surprise you. (They certainly surprised me.)

Contra Willingham, Agarwal found that …

“…building a foundation of factual knowledge via retrieval practice did not enhance students’ higher order learning.”

Instead, students did best when the form of the practice questions matched the form of the test questions (‘higher order’ here means ‘higher on Bloom’s taxonomy’):

“Fact quizzes enhanced final fact test performance and higher order quizzes enhanced final higher order test performance.”

That is: when students didn’t review a particular set of facts, they could still reason with them — as long as they had practiced doing that kind of reasoning.

Ultimately, Agarwal ends up advocating for “mixed” practice quizzes, which include both factual and ‘higher order’ questions. (Here’s a link to her latest blog post summarizing this research.)

Lots More to Learn

Willingham has not yet responded to Agarwal’s study. I don’t doubt that he will; keep an eye on his blog.

In fact: I haven’t seen any research response to this study. It will be a fascinating debate to watch.

I suspect one line of debate will go like this: Agarwal’s study creates a plausible way to measure the tension between “factual knowledge” and “higher-order thinking.” However, that difference as measured in this study might not be just what Sherrington and Willingham mean.

As you can infer, these differences get technical quickly. Rather than dig into them now, I think teachers should have two responses:

First: be very happy that thoughtful people will be rigorously testing this highly complicated question. We really do need to know the answer to this question…and we don’t yet.

In fact, we probably don’t even have the right vocabulary and the right categories to answer it yet.

Second: don’t yet make any big changes based on this research.

I hope that Agarwal’s study will launch a fresh round of investigation. We should wait to see where that leads us before we make big school plans.

The Better Choice: Open- or Closed-Book Quizzes
Andrew Watson

Psychology research offers lots of big ideas for improving student learning: self-determination theory, or the spacing effect, or cognitive load theory.

Once we make sense of that research, we teachers work to translate those big ideas into practical classroom strategies.

In some cases, we can simply do what the researcher did. In most cases, however, we have to adapt their test paradigm to our specific classroom world.

So, for example, Nate Kornell explored the spacing effect with flashcards. He found that 1 deck of 20 cards produced more learning than 4 decks of 5 cards. Why? A deck of 20 cards spaces practice out more than a deck of 5 cards.

That “big idea” gives teachers a direction to go.

But: we should not conclude that 20 is always the right number. Instead, we should adapt the concept to our circumstances. 20 flashcards might be WAY TOO MANY for 1st graders. Or, if the concepts on the cards are quite simple, 20 might be too few for college students studying vocabulary.
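The arithmetic behind Kornell’s finding is easy to see in a toy simulation. Here is a sketch assuming simple in-order cycling through the deck (my simplification, not Kornell’s exact procedure):

```python
def review_gaps(deck_size, passes=3):
    """For one particular card, compute the number of trials between
    successive reviews when a deck of `deck_size` cards is cycled
    `passes` times in order."""
    positions = [p * deck_size for p in range(passes)]
    return [later - earlier for earlier, later in zip(positions, positions[1:])]

print(review_gaps(20))  # [20, 20] -- each review widely spaced
print(review_gaps(5))   # [5, 5]   -- reviews bunched close together
```

One big deck buys more spacing per card than several small decks, for the same total number of practice trials.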

Translating Retrieval Practice

We know from many (many) studies that retrieval practice boosts learning.

In brief, as summarized by researcher Pooja Agarwal, we want students to pull ideas out of their brains, not put them back in.

So, students who study by rereading their notes don’t learn much; that’s putting ideas back in. Instead, they should quiz themselves on their notes; that’s pulling ideas out.

This big idea makes lots of sense. But, what exactly does that look like in our classrooms?

Over the years, teachers and researchers have developed lots of suggestions. (You can check out Dr. Agarwal’s site here for ideas.)

Thinking about retrieval practice, researchers in Germany asked a helpful question. In theory, closed-book quizzes ought to generate more learning than open-book quizzes.

After all: if my book is closed, I have to pull the information out of my brain. That’s retrieval practice.

If my book is open, I’m much likelier simply to look around until I find the right answer. That’s not retrieval practice.

These researchers wanted to know: does this sensible prediction come true?

The Results Please

Sure enough, closed-book quizzes do produce more learning. This research team retested students twice: one week after, and eight weeks after, they heard information in a lecture.

The students who took closed-book quizzes did substantially better than those who took open-book quizzes. (The Cohen’s d values were above 0.80.)
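For readers who want to interpret that statistic: Cohen’s d expresses the difference between two group means in units of pooled standard deviation, and 0.80 is conventionally labeled a “large” effect. A sketch with made-up scores (these numbers are illustrative only, not data from the study):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled
    (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

closed_book = [78, 85, 90, 74, 88]  # made-up quiz scores
open_book = [70, 72, 81, 65, 77]    # made-up quiz scores
print(round(cohens_d(closed_book, open_book), 2))  # -> 1.54, a "large" effect
```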

In brief: we now have one more research-supported strategy for creating retrieval practice.

As always, I think we should be careful to think about limits on such research.

In the first place, this study took place with college students. If you teach younger students, and your experience tells you that an open-book strategy will work better under particular circumstances, you might ask a trusted colleague for a second opinion. Research like this gives us excellent guidance, but it can’t answer all questions.

In the second place, other variables might come strongly into play. For instance: stress. If your school culture has always allowed open-book quizzes, your students might freak out at the prospect of a closed-book alternative. If so, the benefits of retrieval practice might be lost to anxiety overload.

In this case, you’ll need to take the time to explain your reasoning, and to ease your students into new learning habits.

In any case, we can be increasingly confident that many varieties of retrieval practice produce the desirable difficulties that help students learn. (For a fun exception to this rule, click here.)

 

The Limitations of Neuroscience in Guiding Teachers
Andrew Watson

At a professional development talk on long-term memory formation, a teacher politely scolded me: I should have spent more time discussing alpha waves and gamma waves.

After all, she said, that was the really important stuff when it came to brains and learning.

Of course, the differences between alpha and gamma waves can fascinate us. And, pictures of various graphs can look dramatic — especially if the graphic designer has made the colors particularly attractive.

And yet, this kind of neuroscience information offers almost no useful guidance to teachers.

Here’s why.

What Should Teachers Do?

Pretend for the moment that we can plausibly say “this brain region shows gamma waves when it is learning, and alpha waves when it isn’t.”

(By the way, we almost never can say that plausibly. But, we’re pretending here.)

What should teachers do with that information?

Presumably we should ask: how can we reduce alpha waves and enhance gamma waves?

The answer to that question will always include a particular teaching practice. We should use retrieval practice. Or, we should space out repetitions. Or, we should reduce working memory load.

In every case, we know about the effectiveness of those teaching techniques by studying psychology, not neuroscience.

We can, of course, see changes in brain activity when we use various classroom techniques.

But, we can determine their effectiveness only by measuring some behavioral outcome. Did the students do better on the test? Did they pay more attention to the stimulus? Or, did they demonstrate higher working memory scores? In every case, those are psychology questions.

Today’s News

I write about this topic every few months, because confusion between the two disciplines crops up fairly regularly.

For today, I want to highlight a blog post over at the Learning Scientists, where they’ve gathered several resources to explore this distinction.

Some of their resources explore the topic in a general way. The final link leads to a hot topic indeed: Daniel Willingham and Daniel Ansari challenge Jo Boaler and Tanya Lamar’s interpretation of neuroscientific data.

If you’ve been following debates about prior knowledge and math teaching, grab some popcorn and surf on over to that link.

The Best (Counter-intuitive) Sleep Advice You’ll Get This Year
Andrew Watson

Most of us — and most of our students — need more sleep.

One reason we’re short on z’s: even once we get in bed, the stresses of the day keep us anxiously awake.

We could get more sleep if we could de-stress, and fall asleep faster.

But: how would we do that?

So Crazy, It Just Might Work

We’ve written before about “dumping” as a strategy to reduce stress before exams.

Could a similar strategy work before we go to bed?

Michael Scullin and colleagues hypothesized that students might stress about upcoming tasks. If so, they might feel less stress if they could somehow get a handle on those tasks.

Perhaps, to get that handle, students could make a to-do list of upcoming responsibilities.

To test his hypothesis, Scullin worked with adults (18-30) right before bed. Half of them wrote specific lists of their accomplishments during the day. The other half wrote specific lists of impending to-dos.

So, What Happened?

Of course, it’s possible this technique might backfire. If I write down tomorrow’s responsibilities, then I might ramp up my stress level as I worry about getting them done.

In this case, however, that’s not what happened.

On average, students who wrote to-do lists fell asleep ten minutes faster than those who cataloged their accomplishments.

(These results conceptually mirror those pre-exam stress studies, which show that “dumping” before an exam increases exam performance.)

I particularly like Scullin’s technique, because it’s so gosh-darn practical. Simply put, students can do this. It took only five minutes. And, it helped!

Because this is the first study looking at this technique, we don’t know about boundary conditions. I myself assume that, at some age, children are too young to be kept awake by their mental list of tomorrow’s responsibilities. If that’s true, perhaps some alternate form of writing might help.

Until we know about those boundary conditions, we should use our best judgment in recommending this strategy to students and parents.


h/t to Christine Martin for pointing out this study to me.

Studying Wrong Answers Helps Learn the Right Ones
Andrew Watson

With teaching as with baking, sometimes you should follow steps in a very particular order. If you don’t do this, and then that, and then the other, you don’t get the best results.

Two researchers in Germany wanted to know if, and when, and how, students should study incorrect answers.

To explore this question, they worked with 5th graders learning about fractions. Specifically, they taught a lesson about comparing fractions with different denominators.

(When studying this topic, students can’t rely on their instincts about whole numbers. For that reason, it’s a good topic for understanding how students update conceptual models.)

They followed three different recipes.

One group of 5th graders saw only correct answers.

A second group saw both correct and incorrect answers.

A third group saw correct and incorrect answers, and was specifically instructed to compare the correct and incorrect ones.

Which recipe produced the best results?

The Judges Have Made Their Decision

As the researchers predicted, the third group learned the most. That is: they made the most progress in updating their conceptual models.

In fact: the group prompted to compare right and wrong answers learned more than the group that saw only the right answers. AND they learned more than the group that saw (but were not prompted to compare) right and wrong answers.

In other words: the recipe is very specific. For this technique to work, students should first get both kinds of information, and second be instructed to compare them.

Important Context

I’ve held off on mentioning an important part of this research: it comes in the context of problem-based learning.  Before these 5th graders got these three kinds of feedback, they first wrestled with some fraction problems on their own.

In fact, those problems had been specifically designed to go well beyond the students’ mathematical understanding.

The goal of this strategy: to make students curious about the real-world benefits of learning about fractions with different denominators in the first place.

If they want to know the answer, and can’t figure it out on their own, presumably they’ll be more curious about learning when they start seeing all those correct (and incorrect) answers.

As we’ve discussed before, debates about direct instruction and problem-based learning (or inquiry learning) often turn heated.

Advocates of both methods can point to successes in “their own” pedagogy, and failures in the “opposing” method.

My own inclination: teachers should focus on the relevant specifics.

In the link above, for example, one study shows that PBL helps 8th graders think about deep structures of ratio. And, another study shows that it doesn’t help 4th graders understand potential and kinetic energy.

These German researchers add another important twist: giving the right kind of instruction and feedback after the inquiry phase might also influence the lesson’s success.

Rather than conclude one method always works and the other never does, we should ask: which approach best helps my particular students learn this particular lesson? And: how can I execute that approach most effectively?

By keeping our focus narrow and specific, we can stay out of the heated debates that ask us to take sides.

And: we can help our students learn more.