
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

The Rise and Fall and Rise of Growth Mindset
Andrew Watson

Few theories in education have had a more dramatic story arc than Carol Dweck’s “Mindset.”

Based on research she started in the early 1970s, Dweck published her sumptuously titled book Mindset: The New Psychology of Success: How We Can Learn to Fulfill Our Potential in 2006. By the time I entered this field in 2008, it had gone beyond “must read” status to the “what do you mean you haven’t read it?” pantheon.

Like millions of other teachers, I read it. (In fact, I wrote my own book about Mindset: Learning Grows.)

Across the country, Growth Mindset posters went up on classroom walls. Grading standards changed to include the words “not yet.”

Like any big target, Dweck’s work attracted detractors. Doubts reached their pinnacle in 2018, when Sisk and Burgoyne published two meta-analyses. Their findings:

Growth mindset doesn’t really make much of a difference for student learning.

Programs designed to enhance growth mindset have little meaningful effect.

Other large-scale studies, including this one from Argentina, reported similar doubts.

Mindset’s potential, contrary to Dweck’s subtitle, remained unfulfilled.

Fresh Plot Twist?

Since the Sisk & Burgoyne meta-analyses, it has become fashionable to say “Successful mindset interventions have one variable in common: Carol Dweck did them.”

This critique is both untrue — lots of other researchers have found positive results — and unprofessional: it implies (without directly accusing) that Dweck either has been sloppy or has cooked her data.

And yet, anyone who reads Dweck’s research over the years would hesitate to throw such shade.

A freshly released heap o’ data, in fact, might restore some interest in Mindset.

Every three years, the Programme for International Student Assessment (PISA) tests 15-year-olds in science, math, and reading. They gather all sorts of data and publish those results.

Recently released data from the 2018 sitting connect a Growth Mindset with higher academic achievement. (You can read a summary article here, and see the full chapter here.)

As Sarah Sparks summarizes the data:

After controlling for students’ and schools’ socioeconomic differences, students with a strong growth mindset scored significantly higher on all subjects—31.5 points in reading, 27 points in science, and 23 points in math—compared with students who believed their intelligence was fixed.

Unsurprisingly, Sparks reports, teachers matter:

Students with supportive teachers—for example, those who show interest in every student learning and a willingness to provide extra help and explanation until a student understands—were 4 percentage points more likely to have a growth mindset than those without a supportive instructor.

In other words: when we look past the shade and the snark, we find that growth mindsets might help learning, and that teachers can help foster them.

Stop the Pendulum

Our profession, alas, tends to extremes. We might embrace Mindset Theory as our school’s shining mission; we might reject it as fashionable pseudo-science.

I hope this time we can aim for a modest middle ground. A few points to keep in mind:

First: the PISA data show correlation, not causation.

Second: they come from self-report.

Third: they show wide differences across country and culture. (For instance: this graph caught my eye.)

Rather than put all our energies into this one strategy (or, into denigrating this one strategy), I think we can adopt a sensible logical chain:

A: Motivated students learn more, but teachers can struggle to motivate students. (Let’s admit it: much of what we study in schools isn’t intrinsically motivating for most students.)

B: On average, a growth mindset offers many students a motivational boost.

C: On average, specific teaching practices make it somewhat likelier that students will adopt a growth mindset.

D: If we can easily adopt — and easily maintain — culturally-appropriate teaching practices that enhance a growth mindset, our efforts will help some students learn.

E: Therefore, let’s do so.

Do I think a one-shot mindset intervention will help? Probably not. (I don’t think a one-shot intervention of anything will help.)

Do I think that Mindset strategies — consistently and modestly applied — will help? I do.

Should those strategies be accompanied by many other research-supported approaches (retrieval practice, metacognition, cognitive-load monitoring, attention-fostering, stress-reduction)? Indeed they should.

A True Story

I did some consulting at a summer camp two years ago. When I went to the archery department, they asked if I wanted to try my hand with a bow.

NO, reader, I DID NOT.

As a camper at this very camp decades before, I had experienced repeated humiliation; I only rarely hit the target, and often missed comically/catastrophically. Honestly, it was dreadful — one of those experiences that, 40 years later, can STILL make me blush.

After a moment of terror, I said to myself:

“Okay, Andrew, you talk about Growth Mindset all the time. Give it a try. Your goal shouldn’t be to get a perfect score. Just try to learn a bit and improve. That’s what you tell your students. Practice what you preach.”

What happened next was an archery miracle.

It turns out that I am right-handed, but I sight with my left eye. I had been humiliated all those years ago because I was shooting with the wrong bow.

Once they got a lefty bow into my hand and taught me the stance and a few breathing tricks, I found that I’m a passable archer.

I’m no Robin Hood, but I felt like I hit the bullseye.

Proxy Battles: The Value of Handshakes at the Door
Andrew Watson

Should teachers welcome students to the classroom with elaborate individual handshakes?

Or — in these COVIDian days of ours — with elaborate dances? (If you’re on Twitter, you can check out @thedopeeducator’s post from March 17 of 2021 for an example.)

This question generates a surprising amount of heat. Around here that heat means: let’s look for research!

What Does “The Research” Say?

Truthfully, I can’t find much research on this question. Teachers have strong feelings on the subject, but the topic hasn’t gotten much scholarly attention.

The exception to this rule: Dr. Clayton Cook’s study on “Positive Greetings at the Door” from 2018.

As I described this study back in 2019, researchers trained teachers in a two-step process:

First: greet each student positively at the door: “Good morning, Dan — great hat!”

Second: offer “precorrective” reminders: “We’re starting with our flashcards, so be sure to take them out right away.”

The researchers trained five teachers (in sixth, seventh, and eighth grades) in these strategies.

Their results — compared to an “active” control group — were encouraging:

For the control group, time on task hovered in the mid-to-high 50-percent range, while disruptive behaviors took place about 15% of the time.

For the positive greeting group, researchers saw big changes.

Time on task rose from the high 50s to more than 80% of the time.

Disruptive behaviors fell from ~15% to less than 5% of the time.

All that from positive greetings.

A Clear Winner?

Handshake advocates might be tempted to read this study and declare victory. However, we have many good reasons to move more deliberately.

First: although handshakes are a kind of “positive greeting,” they’re not the only “positive greeting.” Researchers didn’t specify handshakes; they certainly didn’t require elaborate dances.

So, we can’t use this research to insist on either of those approaches. Teachers’ greetings should be specific and positive, but needn’t be handshake-y or dance-y.

Second: the “positive greetings” strategy requires an additional step — “precorrective guidance.” Once the handshake/greeting is complete, the teacher should offer specific directions about the next appropriate step…

… start the exercise on the board,

… take out your notebook and write the date,

… remember the definitions of yesterday’s key words.

Handshakes alone don’t match this research strategy. We need to do more to get these results.

Third: this research took place in a very specific context. Researchers asked principals to nominate classes that had seen higher-than-average levels of disruption.

That is: if your class is already well behaved, you might not see much of a change. (Of course, if your class is already well behaved, you don’t really need much of a change.)

And One More Thing (Well, TWO More Things)

I think Dr. Cook’s study is helpful, clear, and well done. However, as far as I know, it’s one of a kind. His research hasn’t been replicated (or, for that matter, contradicted). According to both Scite.ai and ConnectedPapers.com, this one study is everything we know from a research perspective.

In brief: the little research we have is encouraging. But: it doesn’t require elaborate choreography. It does require “precorrective guidance.” And, as Daniel Willingham says: “One study is just one study, folks.”

A final thought:

I suspect that “handshakes at the door” generate so much controversy because they’re a proxy for a wider battle of extremes.

Extreme #1: “If teachers put all our energy into forming relationships, students will inevitably learn more!”

Extreme #2: “That’s obviously dreadful nonsense.”

That is: “handshakes at the door” stand in for “relationships-first” teaching. Hence all the passion on Twitter.

This battle, I think, largely sacrifices sensible nuance to passionate belief.

On the one hand: of course, students (on average) learn more when they feel a sense of safety, respect, and connection. Some students (especially those who have experienced trauma) might struggle to learn without those things.

And, on the other hand: of course students can learn from teachers they don’t really like, and from teachers with whom they have no real connection. Lecture-based college courses depend on that model completely. So do military academies.

Handshakes at the door might help us connect with students if they feel comfortable and fun for us. But: plenty of individual teachers would feel awkward doing such a thing. Many school or cultural contexts would make such handshakes seem weird or silly or discomforting.

If such handshakes strengthen relationships, they might be a useful tool. If your relationships are already quite good, or if you’d feel foolish doing such a thing, or if your cultural context looks askance at such rituals, you can enhance relationships in other ways.

As is so often the case, we don’t need to get pulled onto a team — championing our side and decrying the other. We can, instead, check out available research, see how its conclusions apply to our context, and understand that optimal teaching practices might vary from place to place.

The 10-Minute Rule: Is The Lecture Dead?
Andrew Watson

The “10-minute rule” offers teachers practical guidance. It typically sounds something like this:

If students aren’t intrinsically interested in material, they can pay attention to it for no more than 10 minutes.

Ergo: teachers should do something different every ten minutes.

Ergo: the lecture is dead.

I first heard the “10-minute rule” at a conference in 2008, and I still run across it frequently when I work with teachers. They too, it seems, heard it at a conference.

Any rule that gets so much love at teaching conferences must be true, right?

Research-Aligned Teaching Advice

If you’re reading this blog, you want your teaching to have research behind it. So, what exactly is the research behind the “10-minute rule?”

Neil Bradbury is glad you asked. He looked into its history, and came up with some astonishing results: results that would be funny if they weren’t so alarming.

Let’s start with a Johnstone and Percival study from 1976, where two researchers visited 90 lecture classes (!). By comparing observations, they agreed that attention started to wane within the first five minutes (!!), with another decrease in the 10-18 minute range (!!!).

As Bradbury reports, however, this conclusion gets murky quickly:

First: they visited only 13% of those lectures together. In other words: 87% of their data come from one lone observer.

Second: they don’t report how they measured attention, or — for that matter — lapses in attention.

That student looking out the window: is she distracted by a bird, or concentrating on the professor’s complex argument?

That student looking keenly at the slides: is he engrossed in the topic, or trying to remember his lines for tonight’s rehearsal?

Johnstone and Percival have no way to know.

In other words: the “10-minute rule” rests on the hunchy sense of two visitors who were — as far as we can tell — simply relying on their guts. Whatever we call that, we don’t call it “research.”

And, whatever we do with their hunches, we shouldn’t change our teaching because of them.

Measuring Attention

This study highlights a complex problem. Attention, of course, takes place inside our heads. How can we measure it?

One solution: keep track of students’ note taking. Perhaps, students take more notes when they pay attention, and fewer notes when they don’t?

If that hypothesis is true, then students who write less are paying less attention. When we find a steep decline in note taking, we’ve found the moment when attention has dropped off. Sure enough (so the argument goes), a 10-minute increment turns out to be crucial.

Alas, as Bradbury points out, this approach also collapses.

First: students take notes relatively consistently throughout a lecture. Their note taking falls off in the final ten minutes, not after the first ten minutes.

Second: in fact, the quantity of note taking results from the professor’s lecture, not from the point in the lecture. When the speaker makes key points, students write more. When the professor is recapping, or simply winding down — as she might do at the end of a lecture — they take fewer notes.

As Bradbury pithily summarizes this approach:

Note taking is not a good proxy for attention whatsoever, and even if it were, it does not support a 10- to 15-minute limit on student engagement.

BOOM.

Let’s Get Physical

If note-taking doesn’t measure attention, perhaps we can use biological measures instead.

Research by Bligh used a pulsemeter to measure students’ alertness. This study found that their pulses dropped roughly 14% over the course of the class.

At last: research confirmation of the “10-minute rule”?

Alas, Bligh’s research found the same results during a discussion class as during a lecture.

We might think that a lower pulse suggests less attention. If it does, then neither class format sustains attention.

Classroom Implications

In brief, the “10-minute rule” isn’t a rule, and doesn’t last ten minutes.

More precisely: we have no research suggesting it’s a rule with a consistent time limit.

Given that truth, what should we teachers do?

First: focus on the obvious truth that people are different.

Older students can (probably) pay attention longer than younger ones.

Hungry students (probably) pay less attention than less-hungry ones. (Except right after lunch.)

Some cultures prioritize focused attention more than others.

Some lecturers know how to hold an audience better than others.

Your approach to teaching should vary based on your specific circumstances, not be dictated by an arbitrary rule (which sounds authoritative but has no research backing).

For instance: I’m currently teaching two sections of the same class — one in person and the other online. I teach them differently because attention can be more difficult online. (And because the online section meets earlier in the day — a real challenge for high school students.)

Second: study the science of attention.

Surprisingly, attention isn’t one thing.

Instead, attention is a behavioral combination of three distinct mental processes.

The more teachers understand that complex mix, the more successful we can be in creating the behavior by managing the mental processes.

I’ve written a book on this subject: Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher. (Ryan Reynolds will play me in the movie, I’m sure.)

Or, you can read LOTS of great articles: here’s one place to start.

Whichever approach you take, don’t let implausible absolute rules shape your thinking. Pay attention to your students, and to attention itself. Those two beacons will guide you on your classroom journey.


In the past, I’ve cited Wilson and Korn’s 2007 discussion of this topic. My thanks to Zach Groshell (Twitter handle: @MrZachG) for pointing to Bradbury’s wonderful article.

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored?
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes (both averages fall below the scale’s neutral midpoint of 5), and the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

What (and Why) Should Students Memorize? Confidence and Fluency for the Win
Andrew Watson

In our profession, memorization has gotten a bad name.

The word conjures up alarming images: Dickensian brutes wielding rulers, insisting on “facts, facts, facts!”

In a world where students “can look up anything on the interwebs,” why do we ask students to memorize at all?

One answer from cognitive science: we think better with information we know well.

Even when we can find information on the internet, we don’t use that information very effectively. (Why? Working memory limitations muddle our processing of external information.)

A second answer appears in intriguing recent research.

Reasonable Plans, Unstated Assumptions

As a classroom teacher, I might operate on this reasonable plan:

Step one: we discuss ideas and information in class.

Step two: students write down the important parts.

And, step three: when students need that information later, they look at their notes.

This plan — the core of most high school classes I know — relies on unstated assumptions:

Assumption one: students’ notes are largely correct.

Assumption two: if students write down information INcorrectly, they’ll recognize that mistake. After all, we discussed the correct information in class.

But what if that second assumption isn’t true?

What if students trust external information (their notes) more than internal information (their memories)?

Assumptions Thwarted

In 2019, Risko, Kelly, and Gaspar studied one version of this question.

They had students listen to word lists, and type them into a storable file. After distraction, students got to review their lists. They then were tested on those words.

On the final list, however, these scholars did a sneaky thing: they added a word to the stored list. Sure enough, 100% of their students wrote down the additional word, even though it hadn’t in fact been on the initial word list.

Students trusted their written document (external “memory”) more than their own actual memory. When tested even later, students still included the additional word, even though it wasn’t a part of their initial learning.

In other words: the “reasonable plan” that teachers often rely on includes an assumption that — at least in this research — isn’t true.

Ugh.

Classroom Implications

This research, I think, reminds us that the right kind of memorization has great value for students.

We want students to know certain bedrock facts and processes with absolute certainty. We want them, for instance, to define key terms and ideas fluently. Crucially, we want them to reject — with confidence born of certain knowledge — inaccurate claims.

For instance:

I just completed a unit on tragedy. My sophomores read August Wilson’s Fences and Shakespeare’s Macbeth.


On the very first day of the term, I went over a four-part definition of “tragedy.” (It’s a quirky definition, I admit, but it works really well.)


We reviewed that definition almost daily, increasingly relying on retrieval practice. For instance, I might give them this handout to fill in. Or we might do that work together on the board.

Over time, I started including inaccurate prompts in my questions: “So, tragedy ends in death or marriage, right?”

By this point, my students knew the definition so well that they confidently rejected my falsehoods: “No, you’re trying to trick us! Tragedy ends in death or banishment!”

For an even trickier approach, I encouraged students to correct one another’s (non-existent) mistakes:

Me: “T: what does comedy represent, and why?”

T: “The marriage (and implied birth) at the end of a comedy implies the continuity of society, and in that way contrasts tragedy’s death and banishment, which represent the end of society.”

Me: “M: what did T get wrong?”

M [confidently]: “Nothing. That was exactly right.”

Me [faking exasperation]: “S, help me out here. What did T and M miss?”

S [not fooled]: “Nothing. I agree with them both.”

Me: “Congrats to T for getting the answer just right. And congrats to M and S for not letting me fool you. It’s GREAT that you’re all so confident about this complex idea.”

Because these students knew this complex definition cold — because they had memorized it — they could stand firm when questioned skeptically. As a result, they did a great job when asked to apply that definition at the end of the term:

“How does Wilson’s Fences fit the definition of tragedy AND of comedy?”

To Sum Up

Despite all the bad press, the right kind of memorization can enhance learning.

When students know foundational information and processes by heart, they

BOTH process questions more fluently

AND resist misleading information from “external memory” sources.

Greater cognitive fluency + greater confidence in their knowledge = enduring learning.

Prior Knowledge: Building the Right Floor [Updated]
Andrew Watson

When I first published this post back in 2019, the underlying research wasn’t publicly available; now it is!


Take a gander at this passage from Michael Jones’s recent biography of The Black Prince:

“In the fourteenth century England used a silver standard of currency. The unit of account was the pound sterling (£) which was equal to one and a half marks of silver. The pound was divided into twenty shillings (s), each of twelve pence (d). There was also, from 1344, a gold coinage based on the noble, which was conventionally worth 6s 8d, but was rarely used. It would, however, be significant in the calculation of the ransom of King John II and also in the introduction of gold coinage into Gascony and then the principality of Aquitaine by the Black Prince.”

Many readers, I suspect, felt tempted to give up relatively quickly. (Don’t blame yourself if you did.) Unless you’re really up to speed on 14th-century English currency — both silver and gold! — the paragraph quickly becomes overwhelming.

The vocabulary in this passage probably doesn’t strain our cognition. Except for the phrase “marks of silver,” I know what all those words mean. (And, I can guess from context that a “mark” is some unit of measurement.)

However, the passage does place several mental demands on the reader.

First, it invites you to undertake several rapid mathematical calculations. (Quick: how many shillings in a mark?)
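(If you’re curious, here’s the arithmetic the passage expects you to do on the fly, using only the conversions it supplies: 1 pound = 20 shillings, and 1 pound = 1.5 marks. So 1 mark = 20 ÷ 1.5 ≈ 13⅓ shillings, or 13s 4d. As a check, the noble, at 6s 8d, comes out to exactly half a mark.)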

Second, it requires you to learn abbreviations as you go. To understand the fourth sentence, you need to remember the (wildly counter-intuitive) abbreviation of “pence” as “d” from the third sentence.

Third, it assumes you recall several events and places unfamiliar — I suspect — to most Americans. Who was King John II? Why was he ransomed… was he kidnapped? Where are Gascony and Aquitaine? They don’t sound very English — why did an English prince introduce coinage to them? Actually: why is a prince empowered to introduce new currency?

Essential Background Knowledge

I thought of this paragraph when I read a recent article by Robert Pondiscio. In it, Pondiscio summarizes a study trying to determine how much background knowledge is necessary for comprehension.

In this study, students who scored higher than a 59% on a background knowledge test understood a reading passage substantially better than those who scored below 59%.

As summarized by Pondiscio, the study’s authors see some clear teaching implications here.

First, we can meaningfully measure our students’ relevant background knowledge.

Second, students who fall short on that measure will benefit A LOT if we provide them with the essentials.

For instance, students who understood that “habitat,” “species,” and “ecosystems” were relevant vocabulary for the study of ecology understood the reading passage more deeply. (The study included 3500 students, so I believe they controlled for various confounds. I haven’t read the study itself–it’s behind a paywall.)

I think those conclusions point to another:

Third: models of teaching that focus on “pure discovery” will create substantial challenges for students who lack background knowledge. Students who don’t know the basics of a topic simply can’t understand the field of inquiry within which they’re meant to discover.

And, they won’t feel motivated by curiosity to find out. They’ll feel discouraged by their confusion. (Few readers, I suspect, were motivated by the paragraph above to learn more about medieval English currency.)

A Final Thought

This study finds that 59% was the essential tipping point. Students who scored lower than 59% on the prior knowledge test found themselves in a different cognitive category than those who scored above.

However, that percentage does not necessarily apply to all circumstances.

In other words: we shouldn’t give our students prior-knowledge tests, and focus only on those who score 58% and below.

Instead, we should plan our lessons and units knowing that some floor-level of knowledge will be crucial for learning most things.

In every case — as you hear me say so often — we’ll have to rely on the teacher’s judgment to discover that level.

Researchers can remind us that the floor exists. But they can’t identify it for every teacher in every classroom. Ultimately, with that research guidance in mind, we’ll find the right place for the floor. And, we’ll build it.

Assembling the Big Classroom Picture
Andrew Watson

The last 20 years have brought about powerful new ways to think about teaching and learning.

When teachers combine our experience, professional traditions, and instincts with the scientific insights of psychology and neuroscience research, we find new ways to understand our classroom work.

As I’ve written before, we currently suffer from an embarrassment of riches.

With SO MANY ideas that have strong and consistent research backing, how can we understand and prioritize all the suggestions coming our way? (One recent answer: Chew and Cerbin’s approach, focusing on the cognitive challenges students face.)

The Big 6

You’d like another way to think about all this research?

Over at The Learning Scientists, several wise scholars have been championing a blend of six research-based approaches to teaching: spacing, interleaving, retrieval practice, elaboration, concrete examples, and dual coding.

Frequent readers will recognize almost every strategy on that list. I write about at least one of those each month. (If you want more information on “retrieval practice,” for instance, click on the tag in the right column.)

Two of the Learning Scientists, Dr. Yana Weinstein and Dr. Megan Sumeracki, have written a great book: Understanding How We Learn: A Visual Guide. If you want to explore those big 6 in depth, you should put it on your “must read” pile right away.

If, however, you’d like a brisker introduction to these ideas, I’ve recently found a review article by Weinstein and Sumeracki (and Christopher Madan) that summarizes and illustrates all six.

Newcomers to this field can read its fifteen pages in an hour.

Veterans can instead scan it for a refresher, and for new insights and combinations.

For Example

Although I’ve written frequently about most of the Big Six, I don’t often discuss elaboration. More than the other concepts, it resists easy definition.

Luckily — more than most psychology terms (I’m looking at you, “theory of mind”) — the term means what it says. When we ask students to elaborate on new information or new skills, this deeper processing helps consolidate long-term memories.

For instance, in preparation for teaching Their Eyes Were Watching God, I took some time in Thursday’s class to define the terms “voice” and “agency” as English scholars use them. These concepts will be essential to understand Janie’s development over the course of the novel.

We then elaborated on those definitions by discussing two plays we had just finished reading: Fences and Macbeth.

When does Lady Macbeth have “voice”?

How does Troy claim “agency”?

What strategies does Rose use to claim voice and agency in her life?

How does the set of Fences represent limitations on agency?

And so forth.

By connecting these two new words to events they already knew and understood, students elaborated on their definitions. (And — as a bonus — got some retrieval practice about Fences and Macbeth.)

By the way, the Ensers’ handy book Generative Learning in Action offers LOTS of well-researched elaboration strategies.

In Sum

At Learning and the Brain, we bring teachers and researchers together for productive conversation.

Sometimes, we should focus on research details to ensure they’re persuasive enough for us to reshape our teaching practices.

At other times, we should refocus on the big picture — to understand how each of the techniques adds up to a coherent teaching practice.

Weinstein, Madan, and Sumeracki help us do just that.

Enjoy!

Retrieval Practice and Metacognition: What and How Do Students Think about This Powerful Learning Strategy?
Andrew Watson

Ask almost anyone in the Learning and the Brain world and they’ll tell you: retrieval practice benefits students.

More than almost any other technique we have, this one both has lots of research support and can easily be integrated into our classrooms. (For a handy review of its benefits, check out Agarwal and Bain’s great book Powerful Teaching.)

Of course, because retrieval practice offers so many benefits, we want our students to use it.

Do they?

Metacognition and Retrieval Practice

The short answers are: not enough, and not very well.

Michelle Rivers wanted to know why, and so explored research into students’ metacognitive beliefs about retrieval practice. That is:

What do they believe about retrieval practice?

How do they use it to monitor their learning?

How do they use its insights to control their learning activities?

The more we understand students’ metacognitive perspective on their learning, the more wisely we can guide them.

What did she find?

Beliefs about Retrieval Practice

Sadly, most students don’t understand RP’s benefits.

In 10 studies that asked “why do you quiz yourself,” only 26% of students said they did so in order to learn more.

Instead, most students (52%) do so “to figure out how well I’ve learned the information I’m studying.”

In other words: even the students who use RP most often do so for the wrong reasons.

Of course: they’re not harming themselves by using retrieval practice this way. But — and this is a big but — they’re not getting the benefits that RP can offer.

In fact, Rivers’s survey suggests one reason students might not use retrieval practice to help themselves learn. Studies suggest that when students try both methods, they don’t predict that they’ll remember more after retrieval practice. (Check out this study by Kornell and Son.)

I find this research pool baffling, even disheartening. Even when students experience greater success with RP than with simple rereading, they don’t internalize the broader lesson that active retrieval helps them learn.

Little wonder, then, that most students review material (43%) or copy their notes (11%) as a go-to strategy, rather than self-testing (8%).

Uses of Retrieval Practice

Given these flawed beliefs, how do students use RP?

Among Rivers’s findings: students try retrieval practice …

… when the questions are easy,

… right before a test,

… relatively late in the learning process, and

… relatively few times for any given pool of information.

Of course, retrieval practice benefits students when they do so…

… with questions that are challenging,

… well before a test (in fact, RP immediately before a test might be counterproductive),

… throughout the learning process, and

… several times for any given pool of information.

Simply put: even when students use this excellent study strategy, they do so in less-than-optimal ways.

Next Steps: Learning How to Learn

So far, this is quite the glum post. A potentially powerful learning strategy is largely going to waste.

What can we teachers do?

I’ve got two suggestions.

First, this recent post summarizes a promising approach from Mark McDaniel and Gilles Einstein. Their multi-step process not only works to persuade students of RP’s benefits; it encourages them to make specific retrieval practice plans and to follow through on them.

In other words: we shouldn’t just tell our students about its benefits. We shouldn’t just tell them to do it. We should go the next steps to create plans and structures.

Second, I’ve seen LOTS of online programs to help teachers and students with their retrieval practice.

For instance, Adam Boxer has created a program called Carousel. This program allows teachers to create retrieval questions, and to jumble them together in useful ways. It allows students to self-score their work (and teachers to correct any errors). It keeps track of right and wrong answers, so we can see how well our students are learning specific topics and questions.

I have used Carousel enough to find it intriguing; I haven’t used it enough to make strong claims about it. (Given responses to it on Twitter, however, it seems that teachers kind of love it.)

Whichever program you choose, I think students will learn how to learn more effectively if we build tools like these into our teaching practice.

In Sum

A: We know that retrieval practice can help students learn, but only if they use it correctly.

B: We know that, for the most part, they don’t.

A + B = We should focus more on helping students use this strategy wisely. And, we’ve got the tools to do so!


“Kids These Days!”: A (Partial) Defense of Ignorance and Distractibility
Andrew Watson

You’ve seen the videos.

An earnest reporter wielding a microphone accosts a college student and asks extremely basic questions:

“What are the three branches of government?”

“What is the capital of France?”

“Who wrote the Declaration of Independence?”

When students can’t answer these questions, the reporter eyes the camera wryly, as if to say, “What’s wrong with Kids These Days?”

One such video made the rounds recently. Middle schoolers (I think) didn’t know what D-Day was: they hypothesized he might be a rapper.

So, really: what is wrong with these kids? How can they POSSIBLY not know about D-Day?

Beyond History

Our lament frequently goes beyond students’ lack of historical knowledge.

We worry about “kids and their devices.”

They’re always looking at screens! (I’m here to tell you: back in the ’80s, we never looked at screens.)

They’re always texting! (We never texted.)

They’re so distractible! (Nope. Not us.)

If students know so little and concentrate so badly, we have to wonder what’s up with them.

Distasteful Mirrors

I understand the frustration. (I’ve taught a class of well-educated students who didn’t know the story of Noah’s Ark. That was a shocker.)

At the same time, I think that question distracts us from the essential underlying point.

If an entire room of students didn’t know what D-Day was, it’s clear that the school system didn’t teach them about D-Day; or — at least — that it didn’t teach them well enough for them to consolidate that knowledge.

If we can easily find college students who don’t know from geography and history, we can blame the college students. But I think we should first pause to consider the education system in which it’s possible to complete high school without enduring knowledge of such things.

It is, in my view, simply not fair or helpful to blame students for being in the system that adults created.

Those videos shouldn’t make us condemn the students; they should instead make us look in the mirror.

We might not like what we see. But: their shortcomings tell us more about our education system than about our students.

The Ingenious Tech Switcheroo

The same argument, in my view, applies to laments about technology. Notice the impressive blame-shifting here:

Step 1: technology companies invent a must-have gadget. They market it to — say — 10 year olds.

Step 2: 10-year-olds want the gadget, and pester their parents to buy it. (The tech company doesn’t need to get money from adults; they persuade children to get money from their parents and give it to the company. BRILLIANT!)

Step 3: the tech industry then highlights the narrative that the 10-year-olds are to blame for being distractible. The problem is not in the adults’ behavior; it’s in the children! “Why oh why can’t these kids focus!”

Here again, the students’ behavior gives us essential feedback about adults.

If we want today’s students to concentrate better, maybe we should create — and provide them with — fewer distractions. Perhaps we should model the behavior we want to see. (Quick: how many browser tabs do you have open?)

Caveats, Always Caveats

One: not everyone worries when students don’t know stuff.  (I do, but some don’t share that concern.) For adults who don’t emphasize factual knowledge, those videos seem trivial, not alarming.

Two: not all the data suggest that students are “more distractible.” Perhaps they simply have more distractions. (How many times did you check your cell phone in the last hour?)

Three: Of course, students bear responsibility for working effectively within the systems adults create. If twenty-four of my students learn something and one doesn’t, we can reasonably wonder what’s going on with that one. But, if none of my students know the importance of the Treaty of Versailles, I should think about the adult-driven systems, not “what’s up with kids these days.”

Four (this is a biggie): As I strive to refocus popular outrage away from the students and toward the system in which they learn, I might seem to be blaming teachers and school leaders. I very much don’t mean to do that.

In my experience, the great majority of both groups work extremely hard, and do so with the best intentions. (Few people say: “I went into teaching to become rich.”)

At the same time: our well-intentioned efforts simply aren’t producing the results we want. That feedback — evident in those videos — should prompt honest and searching self-reflection.

In Sum

I promised a (partial) defense of ignorance and distractibility. Here goes:

Of course we want our children to know important information and skills, and to be able to concentrate on them.

If most students don’t and can’t, the fault probably lies with the education system, not children who learn within it.

Children who don’t know what D-Day is don’t deserve to be ridiculed on Twitter. They do deserve a curriculum that fosters knowledge, skill, and concentration. They deserve pedagogy that helps them master all three.

At Learning and the Brain, we connect education with psychology and neuroscience in order to start conversations. Conversations that include those three perspectives can help create such a curriculum; can help foster such pedagogy.

We hope you’ll join us!

When Evidence Conflicts with Teachers’ Experience
Andrew Watson

Here’s an interesting question: do students — on average — benefit when they repeat a grade?

As you contemplate that question, you might notice the kind of evidence that you thought about.

Perhaps you thought: “I studied this question in graduate school. The research showed that answer is X.”

Perhaps you thought: “I knew a student who repeated a grade. Her experience showed that the answer is X.”

In other words: our teaching beliefs might rest on research, or on personal experience. Almost certainly, they draw on a complex blend of both research and experience.

So, here’s today’s question: what happens when I see research that directly contradicts my experience?

If I, for instance, think that cold calling is a bad idea, and research shows it’s a good idea, I might…

… change my beliefs and conclude it’s a good idea, or

… preserve my beliefs and insist it’s a bad idea. In this case, I might…

… generalize my doubts and conclude education research generally doesn’t have much merit. I might even…

… generalize those doubts even further and conclude that research in other fields (like medicine) can’t help me reach a wise decision.

If my very local doubts about cold-calling research spread beyond this narrow question, such a conflict could create ever-widening ripples of doubt.

Today’s Research

A research team in Germany, led by Eva Thomm, looked at this question, with a particular focus on teachers-in-training. These pre-service teachers, presumably, haven’t studied much research on learning, and so most of their beliefs come from personal experience.

What happens when research contradicts those beliefs?

Thomm ran an online study with 150+ teachers-in-training across Germany. (Some were undergraduates; others graduate students.)

Thomm’s team asked teachers to rate their beliefs on the effectiveness of having students repeat a year. The teachers then read research that contradicted (or, in half the cases, confirmed) those beliefs. What happened next?

Thomm’s results show an interesting mix of bad and good news:

Alas: teachers who read contradictory evidence tended to say that they doubted its accuracy.

Worse still: they started to rely less on scientific sources (research) and more on other sources (opinions of colleagues and students).

The Good News

First: teachers’ doubts did not generalize outside education. That is: however vexed they were to find research contradicting prior beliefs about repeating a year, they did not conclude that medical research couldn’t be trusted.

Second: teachers’ doubts did not generalize within education. That is: they might have doubted findings about repeating a year, but they didn’t necessarily reject research into cold calling.

Third: despite their expressed doubts, teachers did begin to change their minds. They simultaneously expressed skepticism about the research AND let it influence their thinking.

Simply put, this research could have discovered truly bleak belief trajectories. (“If you tell me that cold calling is bad, I’ll stop believing research about vitamin D!”) Thomm’s research did not see that pattern at work.

Caveats, Caveats

Dan Willingham says: “one study is just one study, folks.” Thomm’s research gives us interesting data, but it does not answer this question completely, once and for all. (No one study does. Research can’t do that.)

Two points jump out at me.

First, Thomm’s team worked with teachers in Germany. I don’t know if German society values research differently than other societies do. (Certainly US society has a conspicuously vexed relationship with research-based advice.) So, this research might not hold true in other countries or social belief systems.

Second, her participants initially “reported a positive view on the potency of research and indicated a higher appreciation of scientific than of non-scientific sources.” That is, she started with people who trusted in science and research. Among people who start more skeptical — perhaps in a society that’s more skeptical — these optimistic patterns might not repeat.

And a final note.

You might reasonably want to know: what’s the answer to the question? Does repeating a year help students?

The most honest answer is: I’m not an expert on that topic, and don’t really know.

The most comprehensive analysis I’ve seen, over at the Education Endowment Foundation, says: NO:

“Evidence suggests that, in the majority of cases, repeating a year is harmful to a student’s chances of academic success.” (And, they note, it costs A LOT.)

If you’ve got substantial contradictory evidence that can inform this question, I hope you’ll send it my way.