
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

The Limits of “Desirable Difficulties”: Catching Up with Sans Forgetica
Andrew Watson

We have lots of research suggesting that “desirable difficulties” enhance learning.

That is: we want our students to think just a little bit harder as they practice concepts they’re learning.

Why is retrieval practice so effective? Because it requires students to think harder than mere review.

Why do students learn more when they space practice out over time? Because they have to think back over a longer stretch — and that’s more difficult.

We’ve even had some evidence for a very strange idea: maybe the font matters. If students have to read material in a hard-to-read font, perhaps the additional effort and concentration involved will boost their learning.

As I wrote last year, a research team has developed a font designed for exactly that purpose: Sans Forgetica. (Clever name, no?) According to their claims, this font creates the optimal level of reading difficulty and thereby could enhance learning.

However — as noted back then — their results weren’t published in a peer-reviewed journal. (All efforts to communicate with them go to their university’s publicity team. That’s REALLY unusual.)

So: what happens when another group of researchers tests Sans Forgetica?

Testing Sans Forgetica

Testing this question is unusually straightforward.

Researchers first asked participants to read passages in Sans Forgetica and similar passages in Arial. Sure enough, they rated Sans Forgetica harder to read.

They then ran three more studies.

First, they tested participants’ memory of word pairs.

Second, they tested memory of factual information.

Third, they tested conceptual understanding.

In other words, they were SUPER thorough. This research team didn’t just measure one thing and claim they knew the answer. To ensure they had good support behind their claims, they tested the potential benefits of Sans Forgetica in many ways.

So, after all this thorough testing, what effect did Sans Forgetica have?

Nada. Bupkis. Nuthin.

For example: when they tested recall of factual information, participants remembered 74.73% of the facts they read in Sans Forgetica. They remembered 73.24% of the facts they read in Arial.

When they tested word pairs, Sans Forgetica actually produced worse results. Participants remembered 40.26% of the Sans Forgetica word pairs, and 50.51% of the Arial word pairs.

In brief, this hard-to-read font certainly doesn’t help, and it might hurt.

Practical Implications

First, don’t use Sans Forgetica. As the study’s authors write:

If students put their study materials into Sans Forgetica in the mistaken belief that the feeling of difficulty created is benefiting them, they might forgo other, effective study techniques.

Instead, we should encourage learners to rely on the robust, theoretically-grounded techniques […] that really do enhance learning.

Second, to repeat that final sentence: we have LOTS of study techniques that do work. Students should use retrieval practice. They should space practice out over time. They should manage working memory load. Obviously, they should minimize distractions — put the cell phone down!

We have good evidence that those techniques work.

Third, don’t change teaching practices based on unpublished research. Sans Forgetica has a great publicity arm — they were trumpeted on NPR! But publicity isn’t evidence.

Now more than ever, teachers should keep this rule in mind.

“Doing Science” or “Being a Scientist”: What Words Motivate Students?
Andrew Watson

Teachers often find that small changes in wording produce big benefits.

One recent example: a research team in New York explored the difference between “being a scientist” and “doing science.”

The first phrasing — “being a scientist” — might imply that scientist is a kind of fixed, exclusive identity. In the same way that dogs are dogs and can’t also be cats, so too young children might infer that people who are artists or athletes or authors can’t also be scientists.

The second phrasing — “doing science” — might clear away that rigidity. This classroom exercise is something we’re all doing. It doesn’t have immediate identity implications one way or another.

If this simple switch in phrasing helps motivate students, that would be the least expensive, least time-consuming intervention EVAH.

The Research

Three researchers prepared a science lesson about friction for pre-kindergarten students.

Half of the participating teachers (62) saw a training video that modeled specific language: “Today we are going to do science! The first part of doing science is observing with our senses.”

The other half (68) saw a similar video that didn’t include such modeling. (Researchers assumed that most teachers — without clear modeling — would use phrasing about ‘being a scientist’ rather than ‘doing science.’ Indeed, that’s what happened.)

Teachers then ran those friction lessons, where toy cars rolled down ramps with different surfaces: carpet, sandpaper, wrapping paper.

A few days later, these pre-K students had the chance to play a tablet-based video game that resembled their science experiment. The game was programmed in such a way that all students got the first round right (success!) and the second round wrong (struggle!).

So, how long did these children persist after struggle? And: did the “doing science” vs. “being a scientist” language matter?

The Results

Sure enough, students in the “do science” lessons persisted longer than those in the “be a scientist” lessons.

That is: when teachers spoke of science as an action we take, not an identity that we have (or don’t have), this subtle linguistic shift motivated students to keep going longer.

The effects, although statistically significant, were quite small.

Students in the “do science” lessons were 6% likelier to continue after they got one question wrong. And they were 4% likelier to keep going three problems later. (You read that right: six percent, and four percent.)

We might read these results and throw our hands up in exasperation. “Six percent! Who cares?”

My answer is: we ought to care. Here’s why.

Students experienced this linguistic change exactly once. It cost nothing to enact. It took no time whatsoever. Unlike so many educational interventions — pricey and time-consuming — this one leaves our scarcest resources intact.

Now: imagine the effect if students heard this language more than once. What if they heard it every time their teacher talked with them about science? (Or, art. Or, creativity. Or, math. Or, any of those things that feel like ‘identities’ rather than ‘activities.’)

We don’t (yet) have research to answer those questions. But it seems entirely plausible that this FREE intervention could have increasingly substantial impact over a student’s school career.

One Step More

In two ways, this research reminds me of Mindset Theory.

First: Dweck’s work has taken quite a drubbing in recent months. In some social media circles, it’s fashionable to look down on this research — especially because “the effects are so small.”

But, again: if one short mindset intervention (that is FREE and takes NO TIME) produces any effect — even a very small effect — that’s good news. Presumably we can repeat it often enough to make a greater difference over time.

I’m not arguing that promoting a growth mindset will change everything. I am arguing that even small boosts in motivation — especially motivation in schools — should be treasured, not mocked.

Second: this research rhymes with Mindset Theory. Although the researchers didn’t measure the students’ mindsets — and certainly didn’t measure any potential change in mindset — the underlying theory fits well with Dweck’s work.

That is: people who have a fixed mindset typically interpret success or failure to result from identity: I am (or am not) a “math person,” and that’s why I succeeded (or failed).

People with a growth mindset typically interpret success or failure to result from the quality of work that was done. If I work effectively, I get good results; if I don’t, I don’t.

So: this study considered students who heard that they should think about science as an identity (“being a scientist”) or as a kind of mental work (“doing science”). The results line up neatly with mindset predictions.

To Sum Up

First: small changes in language really can matter.

Second: encouraging students to “do this work” rather than “be this kind of person” can have motivational benefits.

Third: small changes in student motivation might not seem super impressive in the short term. But, if they add up over time, they might be well worth the very small investment needed to create them.

Unbearable Irony: When Dunning-Kruger Bites Back…
Andrew Watson

More than most psychology findings, the Dunning-Kruger effect gets a laugh every time.

Here goes:

Imagine that I give 100 people a grammar test. If my test is well-designed, it gives me insight into their actual knowledge of grammar.

I could divide them into 4 groups: those who know the least about grammar (the 25 who got the lowest scores), those who know the most (the 25 highest scores), and two groups of 25 in between.

I could also ask those same 100 people to predict how well they did on that test.

Here’s the question: what’s the relationship between actual grammar knowledge and confidence about grammar knowledge?

John Cleese — who is friends with David Dunning — sums up the findings this way:

In order to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place.

Which means — and this is terribly funny — that if you’re absolutely no good at something at all, then you lack exactly the skills that you need to know that you’re absolutely no good at it. [Link]

In other words:

The students who got the lowest 25 scores averaged 17% on that quiz. And, they predicted (on average) that they got a 60%.

Because they don’t know much grammar, they don’t know enough to recognize how little they know.

In Dunning’s research, people who don’t know much about a discipline consistently overestimate their skill, competence, and knowledge base.

Here’s a graph, adapted from figure 3 of Dunning and Kruger’s 1999 study, showing that relationship:

Adapted from figure 3 of Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Let the Ironies Begin

That graph might surprise you. In fact, you might be expecting a graph that looks like this:

Certainly that was the graph I was expecting to find when I looked at Kruger & Dunning’s 1999 study. After all, you can find that graph — or some variant — practically everywhere you look for information about Dunning-Kruger.

It seems that the best-known Dunning-Kruger graph wasn’t created by Dunning or Kruger.

If that’s true, that’s really weird. (I hope I’m wrong.)

But this story gets curiouser. Check out this version:

This one has thrown in the label “Mount Stupid.” (You’ll find that on several Dunning-Kruger graphs.) And, amazingly, it explicitly credits the 1999 study for this image.

That’s right. This website is calling other people stupid while providing an inaccurate source for its graph of stupidity. It is — on the one hand — mocking people for overestimating their knowledge, while — on the other hand — demonstrating the conspicuous limits of its own knowledge.

Let’s try one more:

I am, quite honestly, praying that this is a joke. (The version I found is behind a paywall, so I can’t be sure.)

If it’s not a joke, I have some suggestions. When you want to make fun of someone else for overestimating their knowledge,

First: remember that “no nothing” and “know nothing” don’t mean the same thing. Choose your spelling carefully. (“No nothing” is how an 8-year-old responds to this parental sentence: “Did you break the priceless vase and what are you holding behind your back?”)

Second: The Nobel Prize in Psychology didn’t write this study. Kruger and Dunning did.

Third: The Nobel Prize in Psychology doesn’t exist. There is no such thing.

Fourth: Dunning and Kruger won the Ig Nobel Prize in Psychology in 2000. The Ig Nobel Prize is, of course, a parody.

So, either this version is a coy collection of jokes, or someone who can’t spell the word “know” correctly is posting a graph about others’ incompetence.

At this point, I honestly don’t know which is true. I do know that the god of Irony is tired and wants a nap.

Closing Points

First: Karma dictates that in a post where I rib people for making obviously foolish mistakes, I will make an obviously foolish mistake. Please point it out to me. We’ll both get a laugh. You’ll get a free box of Triscuits.

Second: I haven’t provided sources for the graphs I’m decrying. My point is not to put down individuals, but to critique a culture-wide habit: passing along “knowledge” without making basic attempts to verify the source.

Third: I really want to know where this well-known graph comes from. If you know, please tell me! I’ve reached out to a few websites posting its early versions — I hope they’ll pass along the correct source.

Music and Memory: A Learning Strategy?
Andrew Watson

Ever since the “Mozart Effect” was debunked, teachers have wanted to understand the relationship between music and learning.

If simply listening to music doesn’t “make us smarter” in some abstract way, can we use music strategically to help us learn specific subjects or topics?

A group of researchers at Baylor University wondered if the key is sleep.

That is: if students learn a topic while listening to (quiet) music, and then listen to that same music while they sleep, will it cause the brain to replay the academic content associated with the music? And, will that replay help students learn?

Intriguing, no?

This technique — called “targeted memory reactivation” — has been studied before. But, most of that research uses odors to reactivate memories.

That is: students learn X with the scent of roses in the background. That night while they sleep, the scent of roses is piped into the room. When they’re tested the next day — voila! — they remember more X than the students who didn’t get the “targeted memory reactivation” at night.

Of course, using odors for such reactivation is interesting in sleep labs. But it might not be very practical for the rest of us. So, researchers wondered if music would also reactivate memories.

The Research

Chenlu Gao, Paul Fillmore, and Michael K. Scullin asked students to watch a 30-minute interactive video lecture on economics. During that lecture, classical music played quietly in the background. (The sound level was that of “soft background noise in a library.”)

So: students’ brains associated the music — Beethoven, Vivaldi, Chopin — with the topic — economics.

That night, while they slept, half of the students heard that same music played again. The other half heard white noise instead. The music/white noise started once students entered a particular phase of sleep, called “slow wave sleep.” (In this case, slow wave sleep began about 35 minutes after they fell asleep.)

Gao, Fillmore, and Scullin wanted to know:

Compared to students who heard white noise while sleeping, would the students who heard the music remember the lecture better?

Would they be able to apply its principles better?

Might there be a gender difference in those results?

So: what effect did Beethoven have?

The Results

Sure enough, targeted memory reactivation had interesting and measurable effects.

First: the next morning, students who heard music at night were likelier to “pass” a quiz (by scoring at least 70%) than those who didn’t.

Second: those differences came largely in two categories. The music helped women (but not men). And the music helped students answer application questions (but not factual questions).

Third: researchers measured students’ brain activity during sleep. In brief, students who heard music had different brain wave patterns than those who heard the white noise. And, those who did better on the quizzes had different patterns than those who didn’t.

These results get SUPER technical. But the headline is: we can quite plausibly connect mental behaviors (answers to quizzes) to neurobiological behaviors (“theta power”).

Fourth (This is really important): Researchers found NO DIFFERENCES when they tested the students nine months later. So, this targeted memory reactivation (with music) produced a short-term difference, but not a long-term one.

Implications for Teaching and Learning

This musical version of targeted memory reactivation feels temptingly practical. But: trying it out in real life requires some extrapolation and some technology.

I briefly corresponded with the lead researcher, Michael Scullin, about translating this technique from the sleep lab to everyday life. Here’s a quick overview of key points:

PROBLEM: In this study, students heard the music as they first learned the material. But, it’s REALLY unlikely that teachers/professors will play music while they teach. So, how can we use targeted memory reactivation in a more typical learning situation?

SOLUTION: The technique just might work if students play the right kind of music while they study, and then replay that music while they sleep. In this case, “the right kind of music” means instrumental, not distracting, relatively quiet.

However, this approach probably won’t work if students are otherwise distracted — by cellphones or video games, say — while they study.

PROBLEM: Presumably I can’t use the same piece of music to reactivate all memories of all the academic topics I want to learn. Does that mean I have to build some huge library of music cues: this Beethoven piece to recall the Battle of Bunker Hill, that Chopin piece to practice balancing chemical equations?

SOLUTION: Alas, it’s true: each piece of music would need to be uniquely paired with a particular topic. (This conclusion hasn’t been tested, but is likely true.)

So, the best compromise is probably this: choose the topics that are most difficult to understand or remember, and use the technique sparingly for that subset of academic information.

PROBLEM: Won’t playing music at night keep students awake, or wake them up?

SOLUTION: That’s an important technical question. Ideally, the music would play quietly. And, as we saw in the research described above, it would start only after slow wave sleep started.

So, whatever technology the students have, they should program it to start the music at very low levels — ideally starting about 30 minutes after they fall asleep.
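(For the technically inclined, here is a minimal sketch of what that setup might look like, written in Python. It assumes a laptop left running overnight and macOS’s built-in afplay command; the file name, delay, and volume below are placeholder choices of mine, not values prescribed by the study.)

```python
import subprocess
import time

# Placeholder values -- tune these to your own sleep schedule and room.
DELAY_MINUTES = 30              # rough stand-in for the onset of slow wave sleep
VOLUME = 0.1                    # afplay volume: 1.0 is normal, 0.1 is very quiet
MUSIC_FILE = "study_music.mp3"  # the same recording played while studying

# Wait until (approximately) slow wave sleep has begun...
time.sleep(DELAY_MINUTES * 60)

# ...then play the cue quietly. afplay ships with macOS; on other systems,
# substitute any command-line player that accepts a volume setting.
subprocess.run(["afplay", "-v", str(VOLUME), MUSIC_FILE])
```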

QUESTION: The technique helped in the short term, but not nine months later. Can we use targeted memory reactivation to consolidate learning over the long term?

ANSWER: We haven’t tested that yet. It seems plausible (even likely?) that repeating the music over time would help. That is: listening to that music once a fortnight for a few months might really firm up memories.

But, again, that approach hasn’t been tested. I (Andrew Watson, not Michael Scullin) am speculating that it might work. But we don’t know.

In Sum…

This research — contrary to lots of earlier work — suggests that we might be able to learn while we sleep.

But, this research is very much in its early days. Targeted memory reactivation clearly produces benefits in the sleep lab. Its application to everyday teaching and learning needs to be explored, practiced, and refined.


I wrote about another one of Scullin’s studies a year ago. If you’d like some advice on how to fall asleep faster, click here.

How Do Children Explain Academic Success? (And: How Do We Know?)
Andrew Watson

If teachers want our students to succeed academically, we should understand how they explain academic success.

Typically, adults work with a two-variable formula: “effort + skill = performance.”

That is: If I’m good at singing, and I practice singing, then my recital will go well. But: if I’m just not much of a singer, or if I waste my talents by not practicing, the recital will be a dud.

In the short term, I have control over effort, but not over skill. (Over the long term, I can increase my skill by practicing.)

For several decades now, psychologists have thought that this theory develops over time. Young children — in this view — start with an even simpler theory: “effort = performance.”

If I work hard, I’ll succeed. If I don’t, I won’t.

Research Says So

The idea that children gradually develop the two-variable formula rests on several research findings. The most important uses a well-established research paradigm.

Children hear a story, and see accompanying pictures, of two students learning a topic. One spends the whole story working (“consistent effort”). The other works part of the time, but goofs off at other times (“inconsistent effort”).

Here’s the kicker: both students get a perfect score on the test!

Researchers ask the children: which student is smarter (that is, more skilled)?

According to the “effort + skill” theory, if a student gets a high score despite a low effort, he must be especially smart/skilled.

According to the “effort = performance” theory, the student who works harder is demonstrating greater skill.

And, sure enough, in the 1970s and 80s, researchers found that younger children tend to say that the consistent-effort student is smarter. Older children, with their “more mature” theory, say that the inconsistent-effort student is smarter.

In brief: children’s explanations of success develop over time.

Wait: Does Research Really Say So?

Two researchers — Melis Muradoglu and Andrei Cimpian — thought that they spotted a problem with this chain of logic.

What if the research paradigm that produced this answer was itself flawed? If the scale I’m using to measure my ingredients isn’t accurate, my recipes won’t come out right. If the test I’m using to understand children’s thought patterns isn’t well designed, my conclusions won’t come out right either.

Specifically, Muradoglu and Cimpian worried about the question that researchers asked children before they asked who was smarter. That question was: “which student worked harder?”

This question, Muradoglu and Cimpian feared, draws attention to the importance of effort, and might prompt children to misunderstand the following question about “being smart.”

So, they reran the experiment. Half the time, they did it the original way. The other half, they did it with a different question: “did the students think the task was difficult?” **

Sure enough, as they predicted, the children’s answer depended on the version of the research paradigm.

In the new version, children consistently said that the student who worked less and got a perfect score was smarter than the student who worked consistently and got a perfect score.

That is: even children as young as 5 have the two-variable theory: “performance = effort + skill.”

Teaching Implications, Take 1

Regular readers know my mantra: “don’t just do this thing; instead, think this way.”

In this case, Muradoglu & Cimpian don’t suggest a particular teaching strategy.

Instead, their research lets us know that we can think with young learners about academic success just as we do with older learners. We can talk with them both about their effort and about their (current) skill level — and they’ll understand both those concepts.

Each of us who teaches younger students will use that insight differently. But, we might well teach differently than if we had thought our learners thought only about effort and not about skill.

Teaching Implications, Take 2

If these researchers are right, then teachers have been getting bad advice on this topic for a few decades at least.

That realization might bring us to a cynical conclusion: “if psychologists keep getting things wrong, why should we listen to them at all?”

I understand the frustration. But I do think we should look below the surface for the good news.

Here’s an unfortunate truth: we all get things wrong. We just do.

Psychology as a discipline has lots of flaws. But — like other sciences — it has the advantage of being gradually self-correcting over time.

At its best, psychology encourages researchers to test and retest. It encourages new methods. It offers clear guidelines to measure improvement.

Yes, indeed, people can and do game this system. But, when it’s working well, it can produce remarkably helpful results.

In this case: we have good reasons to believe that even our young students will respond to two different kinds of encouragement.

In the short term, they can improve their effort.

In the longer term, that improved effort will improve their skill level.

Both improvements can help them achieve ultimate success — whatever the goal.


** Muradoglu and Cimpian had a second concern about the initial study, which they also tested. It’s a little hard to explain, so I’m omitting it from this post. If you’re curious, you can read about it on page 4 of the study.

“Seductive Details”: When Do Cool Stories and Videos Interfere with Learning?
Andrew Watson

As an English teacher, I really do enjoy (almost) everything I teach. I love discussing Macbeth, and coaching strong writing, and parsing English grammar. (No, really!)

My students? Not so much with the enjoying.

They’re a good-natured lot, and so — for the most part — amiably play along with my enthusiasm. But, at times I feel I should enliven a topic with a story or a picture or a video.

If our Macbeth conversation slips into neutral, I might talk about an actor’s funny mistake during a recent production of The Crucible.

If they just can’t focus on the difference between a predicate nominative and a predicate adjective, I might start using my Godfather voice and ask them to show respect for the family. (They LOVE it when I do that…)

But here’s the question: do those funny/gross/intriguing side treks ultimately benefit or harm learning?

Seductive Details

In research world, we call these additions “seductive details.”

For instance, my lesson plan might focus on the geological forces that cause volcanoes to erupt. Past experience tells me that the lesson itself can be a bit of a slog, so I start it off with the story of Krakatoa’s eruption — which killed more than 35,000 people.

Maybe I break out some pictures of Vesuvius — an eruption that buried and preserved an entire Roman city.

In these cases, my stories enliven the general topic: “volcanoes are super deadly!” But they don’t add to the specific learning goal: geological forces that cause eruptions.

Such stories are “seductive,” but not — shall we say — “substantive” in their contribution to the lesson.

This topic gets lots of scholarly interest, and has led to many publications. Quite recently, Dr. NarayanKripa Sundararajan (Kripa Sundar) and Dr. Olusola Adesope published a meta-analysis that crunched the data of 68 different experiments.

What did they learn?

Many Questions, Helpful Answers

Given so many studies to examine, these researchers had lots of ways to parse the data:

Does it matter if the “seductive detail” is a photo or a video or an audio recording?

Does it matter how researchers measure ultimate learning?

Does it matter if the “seductive details” are at the beginning, the middle, or the end of the lesson?

Does it matter if the students had some prior knowledge of the material?

With so many variables (and lots more), Sundararajan and Adesope have LOTS of conclusions to report. Rather than list them all, I’ll highlight a few that struck me as most important.

First: seductive details matter. They do, in fact, interfere with learning. Depending on which variable they studied, the researchers found different effect sizes. But, quite consistently, additional “seductive” information ended up lowering final measurements of learning.

For video and audio. If the details were at the beginning, in the middle, or at the end. For novices and experienced students. Etc…

Second: the length of the lesson matters. Specifically, seductive details have a considerable effect in short lessons (less than 5 minutes), but no statistically significant effect on longer ones (more than 10 minutes).

Practically speaking, I think this means that typical classroom lessons (which very rarely last less than 5 minutes) won’t suffer terribly from the inclusion of seductive details.

But — and this is an important exception — our current climate of online teaching might well prompt us to create brief lessons. In such lessons, seductive details will be much more distracting.

Third: the meta-analysis suggests that seductive details create bigger problems for novices than for experts — or even for students who have some baseline knowledge of the topic. So, as Dr. Sundararajan wrote to me, “perhaps it’s not a good idea to use [seductive details] when introducing new content but perhaps not too bad in a review.”

What Should Teachers Do Now?

To start off, in the researchers’ words: “educators should minimize the use of seductive details in their instruction.”

That advice holds true especially if those details might…

… distract students from the essential ideas under discussion,

… remind students of prior misconceptions that you want to override,

… take up lots of space (say — a diagram on a page),

… take place in a relatively short (online?) lesson.

Another strategy: rework those seductive details so that they DO connect DIRECTLY to the learning goal. So, rather than simply focus the Vesuvius story on the destruction it wrought, talk about Roman conceptions of the causes of volcanoes. Those mythical explanations aren’t literally true, but they point the way toward — and might even align with — the scientific content you want to cover.

Yet another strategy: don’t sweat too much. Seductive details don’t permanently destroy all possibility of understanding. If used the wrong way, yes, they can get in the way. But — as all teachers know — sometimes students need a lively distraction to perk them up. As long as we use those details sparingly and thoughtfully, we don’t have to panic about occasional side-tracks.

By the way: I’m not the only one who thinks this. When I emailed Dr. Sundararajan with questions about the meta-analysis, part of her answer was:

“I think it is also important for teachers to remember that the effect is fairly small and not to feel more guilt than they need to – avoid seductive details when possible.

If your kids need a laugh, bring [seductive details] in and be aware that you might want to revisit that content with other learning strategies to help reinforce the learning (e.g. retrieval practice, or note-taking).

Along the same lines, keep in mind that kids are kids and sometimes you just have to let them process the distraction and restart.”

In other words: seductive details matter, but other things matter too. As long as we keep this research in mind as we make our teaching decisions, we’re welcome to talk about Super Deadly Volcanoes from time to time.


By the way: Dr. Sundararajan has expressed an interest in working with teachers on questions and materials. You can reach her at her website, and find her on twitter: @kripasundar.

Beyond Retrieval Practice: The Benefits of Student-Generated Questions
Andrew Watson

Retrieval Practice has gotten a lot of press in recent years — especially at our conference last fall on Deeper Learning.

The short version: students don’t benefit much from simple review — say, rereading a passage. But, they benefit a lot from actively trying to recall information — say, answering questions about that passage.

Dr. Pooja Agarwal puts it this way: Students should practice not by trying to put information into their brains, but by trying to take information out.

(She and Patrice Bain have written a great book on the topic: Powerful Teaching.)

We have LOTS of research showing that retrieval practice yields great benefits. Can other strategies match it?

Here’s an idea: maybe instead of having students answer questions (retrieval practice), we should have them create questions to be answered. Just perhaps, generating questions might boost learning more than simple review. Or — let’s get crazy: maybe generating questions boosts learning as much as retrieval practice? Even more?

Generating Research

Over the years, the “generation effect” has been studied occasionally — alas, not as much as retrieval practice. Often, research in this area includes a training session where students learn how to ask good questions. That step makes sense … but it might discourage teachers from adopting this strategy. Who has the time?

Researchers in Germany had three groups of college students read slides from a lecture about infant developmental psychology.

The first group practiced the information by rereading it. Specifically, they were instructed to memorize the content of those slides.

Group two practiced by answering questions on each slide. If they couldn’t remember an answer, they were allowed to go back and review the slide. In effect, this was “open-book retrieval practice.”

In group three,

“students were instructed to formulate one exam question in an open response format for the content of each slide [,] and also to provide an answer to that question.”

That is: they generated questions.

So, here’s the big question: when they took a surprise quiz, how did students in each group do?

Drum Roll Please…

First: Students who generated questions scored ~10% higher on that surprise quiz than those who tried to memorize information.

Second: Students who generated questions did as well as those who used retrieval practice.

Third: Questioners got these benefits even without explicit training in how to ask good questions.

Fourth: Question generators (and retrieval practicers) scored higher than mere reviewers on both factual questions and transfer questions.

Fifth: Researchers got these impressive results even though the surprise quiz took place one week later. (In research like this, those quizzes often happen right away. Of course, a week’s delay looks a lot more like genuine learning.)

We could hardly ask for better results than these. In this research paradigm, question generation worked as well as retrieval practice — which works better than almost anything else we know of to help students learn.

Explaining Amazing Results

Why would this be? Why does generating questions help students as much as answering them?

This study doesn’t answer that question directly, but it suggests a rough-n-ready answer.

Both common sense and lots o’ research tell us: students learn more when they think hard about something. (Obvi.)

If we increase the challenge of the thinking task, we prompt students to think harder and therefore to learn better.

Psychologists talk about “desirable difficulties”: a level of mental challenge that forces students to work their synapses but doesn’t overtax them.

In this case, we can reasonably hypothesize that students who must create a question on a topic have to think hard about it. To come up with a good question, they have to think at least as hard as students answering questions on that topic.

And, they have to think considerably harder than students who simply reread a passage.

Voila! Generating questions helps students learn.

A Few Caveats

As always, research provides teachers with helpful guidance. But: we need to adapt it to our own circumstances.

First: this study took place with college students. We should take care that our students can — in fact — come up with good questions.

For instance, I’m a high-school English teacher. I would use this technique with Their Eyes Were Watching God or Passing or Sula. But I don’t think I’d use it with The Scarlet Letter or Hamlet. My students struggle to understand the basics with those texts; I’m not sure they’d do a good job coming up with resonant exam questions.

More precisely: I’d structure those assignments quite differently. I suspect I could be open-ended with an assignment to create Passing questions, but would offer a lot more guidance for Scarlet Letter questions.

Second: yes, this study found that retrieval practice and question generation resulted in additional learning. And, we have a reasonable hypothesis about why that might be so.

But, we have MUCH more research about retrieval practice. Before we invest too heavily in question generation, we should keep our eyes peeled for more studies.

Third: In this paradigm, trying to memorize resulted in less learning. However, we shouldn’t conclude that students should never try to memorize. At times, “overlearning” is essential for reducing working memory load — which facilitates learning.

As long as we keep these caveats in mind, we can be excited about trying out a new review technique.

And: this can work in online settings as well!

An Exciting Event In Mindfulness Research [Repost]
Andrew Watson

I’ve been reviewing old posts, looking for information that might be particularly helpful in today’s strange times.

This post — from September — gives us greater confidence that mindfulness helps reduce stress.

It’s particularly persuasive research because it studies both mental behavior (psychology) and neural behavior (neuroscience) at the same time.

And, we could all use a little stress reduction today…


Let’s imagine a GREAT study on the benefits of mindfulness.

As school people, we’re happy that mindfulness might be helpful at home or at work, but we really want it to be helpful to students. So, we’d love for this study to take place at school.

We’d like the study to show that mindfulness changes mental processes. For instance, we’d love to know that it helps students feel less stress.

And, we’d like the research to look at brains as well as minds. That is: we’d like to have some fMRI data showing relevant changes in brain regions.

At the same time that students report they feel less stress (that’s the mind), we might see neural modulation typical of less stress (that’s the brain).

Finally, the study’s methodology would hold up to scrutiny. It would, for instance, include a plausible control group. (I’ve written about problems with control groups, including this study about mindfulness.)

Lo and Behold

Sure enough, this study exists!

Working with 6th graders at a school outside Boston, Clemens Bauer randomly assigned half to a mindfulness program and half to a coding training program.

Both groups devoted 45 minutes, four times a week, to this effort for 8 weeks. And, by the way, students in both groups enjoyed this time equally. (So: AT LAST we’ve got a plausible and active control group.)

Bauer’s team had students fill out a stress survey before and after this 8-week stretch. (Sample question: “In the last month, how often have you been upset because of something that happened unexpectedly?”)

And, he performed fMRI scans on them before and after as well.

When looking at those scans, Bauer’s team had a specific prediction. High stress responses typically include elevated amygdala activation. Often, we can manage that stress response by using the prefrontal cortex — the part of the brain right behind your forehead.

If mindfulness helps manage stress, we would expect to see…

…greater connectivity between the prefrontal cortex and the amygdala, and

…concomitantly reduced activity in the amygdala.

That is, we’d be able to see that mindfulness strengthened connections between self-control systems in the prefrontal cortex. In turn, this increase in self-control would help mitigate stress responses in the amygdala.

Of course, I’m offering a very simplified version of a fantastically complex neural story. Books have been written on these connections, and it’s not a blog-friendly kind of information.

Results, Please

If you’re a fan of mindfulness, you’re going to LOVE these results.

Students who practiced mindfulness reported less stress than those in the control group.

They showed higher levels of prefrontal cortex connectivity with the amygdala.

They showed lower levels of amygdala activity when they looked at angry faces.

So: both in their mental activity (reported stress level) and in the neural activity (in the amygdala, between the amygdala and the prefrontal cortex), eight weeks of mindfulness led to beneficial results for these students.

Technically speaking, that’s a home run.

What’s Next

First: to repeat, this study is powerful and persuasive. We can simply revel in its conclusions for a while.

Second: as teachers, we’re glad that student stress levels are lower. The next question is: do students learn more? We can assume they do, but we should measure as well. (To be clear: I think lower stress is an important goal on its own, whether or not it leads to more learning.)

Third: as the study’s authors acknowledge, the sample size here is relatively small. I hope they get funding to repeat it on a much larger scale.

As noted in this study, there’s a disappointing history in the world of mindfulness research. Small studies–often lacking random assignment or a control group–come to promising conclusions. But, the bigger the study–and the better the methodology–the smaller the results.

So: now that we’ve gotten strong effects with a randomized study and a plausible control group, I hope to see these same results at a much larger scale.

I might go sit quietly for a while, and try to clear my mind of extraneous thoughts.

Dr. Kurt Fischer: A Tribute
Andrew Watson

Professor Kurt Fischer changed my professional life. If you’re reading this blog, odds are good he helped change yours as well.

Throughout most of the 20th century, teachers, psychologists, and neuroscientists had little to say to one another.

Even psychology and neuroscience — two fields that might seem to have many interests in common — eyed each other suspiciously for decades. Certainly teachers weren’t a welcome part of any wary conversation that might take place.

As we all know, and Dr. Fischer helped us see, these fields have so much to learn from each other.

Today’s growing consensus that these disciplines — and several others — should be in constant conversation results in large measure from his insight, effort, generosity, and wisdom. So: he’s changed our lives, and greatly benefited our students.

Since I heard of his death, I’ve been thinking how Dr. Fischer’s great skill was to keep the bigger picture in mind. He did so in at least two essential ways.

Creating Interdisciplinary Institutions

Academic disciplines exist for good reasons. And yet — despite all the good that they do — they can create barriers and restrict conversations.

To foster inter-disciplinary and multi-disciplinary conversations, Dr. Fischer knew we needed institutional systems. In our field, he helped start all the essential ones.

He helped create the Mind, Brain, and Education strand at Harvard’s Graduate School of Education. It was, I believe, the first such program in the world.

He helped found the International Mind Brain Education Society (Imbes.org), which works to “to facilitate cross-cultural collaboration in biology, education and the cognitive and developmental sciences.”

He helped found the journal Mind, Brain, and Education, which publishes vital interdisciplinary research.

And, of course, he helped organize the very first Learning and the Brain conference — to ensure that these conversations took place not simply in academic institutions, but with classroom teachers as well.

In starting all these institutions and starting all these conversations, Dr. Fischer created a generation of leaders — those who now champion the work we do every day.

That’s the bigger picture he could see from the beginning.

Understanding Brains in Context

Dr. Fischer saw the bigger picture in his teaching life as well.

As part of his work at Harvard’s School of Education, he taught a course on “Cognitive Development, Education, & the Brain.”

Over those weeks, he returned frequently to an especially damaging fallacy, which he called “brain in a bucket.”

That is, he wanted his students not to think about individual brains operating in some disembodied ether. Instead, he wanted us to think constantly about context:

How does the brain interact with the body?

In what ways is it shaped by development?

How do family interactions shape self? Social interactions? Cultural interactions?

How should we think about hormones, and about ethics, and about evolution, and about genetics?

In other words: neuroscience teaches us a lot about brains. But we should always think about the bigger picture within which that brain functions, and about the forces that created it in the first place.

Never focus on “a brain in a bucket,” because that brain makes no sense without the context that surrounds and shapes it.

In Conclusion

So for me, that’s Dr. Fischer’s legacy. He helped create the context that shaped so many of our brains:

Graduate programs in Mind, Brain, Education,

Learning and the Brain conferences (55 and going strong),

Professional associations and journals,

The scholars and conversations that inspire teachers and improve teaching.

The world is better because he lived, and a poorer place now that he’s gone. Happily for us, he left great wisdom and greater understanding behind.

Pure Inquiry, Guided Inquiry, and PISA
Andrew Watson

Because scientists work by inquiring, it makes rough-n-ready sense that we should teach science through the process of inquiry. Indeed “inquiry-based learning,” like “problem-based” and “project-based” learning, has emphasized students’ construction of their own understanding.

According to a well-known definition, this pedagogy focuses on students…

… asking questions,

… planning and conducting investigations,

… using appropriate tools and techniques to gather data,

… thinking critically and logically about relationships between evidence and explanations,

… constructing and analyzing alternative explanations,

And so forth.

Of course, we should also inquire: does inquiry-based learning in fact help students learn? This question leads to lots of controversy…

Many Methods

We can explore that question in several ways.

We might, for instance, have one group of students learn a topic through inquiry learning, and a control group learn it through direct instruction. When we test them later, we’ll get a good sense of who learned the material better.

That method — if we do everything right — gives us a clear answer.

But: it focuses on a small group of people learning only one thing. Who knows if that clear answer applies in other circumstances?

Or, we might look at large groups of people who studied many things. If we can find out what method their teachers used, and measure how well they learned, we’ve got another useful strategy for answering our question.

Of course, we’ll be less certain about the quality of the teaching than in the highly-controlled environment. Who knows if the inquiry-based teaching was, in fact, well done?

Following this second approach, researchers in the UK looked at PISA data (PISA = Programme for International Student Assessment), and aligned it with high-stakes testing scores in England: the GCSE (General Certificate of Secondary Education).

The PISA data help here because students report how much time they spent in various inquiry-learning practices: “every lesson, most lessons, some lessons, never.” For instance, students rate how often they are “allowed to design their own experiments.”

So: by linking PISA data about teaching practices with GCSE scores, those researchers can draw some conclusions about the effectiveness of inquiry learning.
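(For readers who like to see the mechanics: the linkage amounts to a simple join between two datasets. Here is a minimal sketch in Python with pandas; the column names and rows are purely illustrative, not the researchers’ actual data or code.)

```python
import pandas as pd

# Illustrative stand-ins -- not the real PISA or GCSE fields.
pisa = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "design_own_experiments": ["never", "some lessons", "most lessons", "every lesson"],
})
gcse = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "science_score": [6.0, 7.0, 6.5, 5.5],
})

# Link each student's survey responses to their attainment...
linked = pisa.merge(gcse, on="student_id")

# ...then compare average attainment across self-reported inquiry frequency.
print(linked.groupby("design_own_experiments")["science_score"].mean())
```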

What did they find?

Negotiating Our Own Biases

Before I answer that question, let’s acknowledge a problem.

Many teachers already have opinions — strong opinions — about inquiry learning.

Those opinions bias our responses to new information.

If (for example), I don’t think inquiry learning works, and this research shows that it does, I’m inclined to dismiss the study.

“Well, look at all of these methodological problems!”

Yet (the contrary example) if I’m an inquiry-learning champion, research showing its effectiveness automatically seems wise and well-designed.

“Gosh: it’s hard to think of a fairer way to answer this question! Now we KNOW it works…”

So, here’s my suggestion: decide right now — before you know what the researchers concluded — whether or not you’re confident in this study design.

All study designs have some flaws. This one, for instance, relies on student self-report. And, as noted above, it doesn’t have any way to control for the quality of inquiry learning practices.

You might reasonably conclude those flaws are too great. In that case, you don’t get to cite this study even if it ends up confirming your beliefs.

Or, you might reasonably conclude that — on average — errors in self-report and quality control will balance themselves out; and this research method gives a big-enough data pool to draw meaningful conclusions. In that case, you have to pay attention to the study even if it contradicts your beliefs.

So: push yourself to decide now…

The Envelope Please

Now that you’ve decided to give credence to this methodology (or not to do so), here’s what they found.

“In summary, neither high inquiry with low guidance, nor high guidance with low inquiry are related to improved science attainment.” (emphasis added)

That is: pure inquiry learning doesn’t result in more learning than plain-old explanation. And (as Professor Christian Bokhove notes), it doesn’t result in less learning either.

But:

“There is, however, some tentative evidence that moderate levels of inquiry delivered in conjunction with high guidance may have a small positive impact upon science achievement.” (emphasis added)

That is: lots of plain-old guidance PLUS a moderate amount of inquiry gives a little boost.

By the way: what do the researchers count as “guidance”? Here, they’re looking at PISA questions about teachers’ suggestions for improving performance, meeting learning goals, extra help, and so forth.

Teaching Implications

As I noted above, the “direct instruction vs. inquiry” debate generates A LOT of passion.

I think this study invites us to step back and get some perspective. It seems that — at least in this research paradigm — a healthy dose of teacher explanation and support helps students learn. And, some degree of independent inquiry enhances that result. (“Small positive impact.”)

But: inquiry learning neither yields a whole new level of scientific understanding, nor prevents students from learning much at all.

I suspect that, ultimately, we need to ask more granular questions.

Does inquiry give a greater lift in later grades than earlier ones? Perhaps it helps when scientific findings confirm our prior knowledge, but not when they contradict it? Does the teacher’s experience level matter?

Research into these questions might give us specific and practical classroom guidance.