Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Retrieval Practice is GREAT. Can We Make It Better?
Andrew Watson

By now you know that retrieval practice has lots (and lots) (and LOTS) of research behind it. (If you’d like a handy comprehensive resource, check out this website. Or this book.)

The short version: don’t have students review by putting information back into their brains — say, by rereading a chapter. Instead, have them pull information out of their brains — say, by quizzing themselves on that chapter.

It’s REALLY effective.

When we know that a technique works in general, we start asking increasingly precise questions about it.

Does it work for children and adult learners? (Yes.)

Does it work for facts and concepts? (Yes.)

Does it work for physical skills? (Yes.)

Does it work when students do badly on their retrieval practice exercises? Um. This is awkward. Not so much.

That is: when students score below 50% on a retrieval practice exercise, retrieval practice is less helpful than simple review.

How do we fix this problem?

“Diminishing Cues” and Common Sense

Let’s say I want to explain Posner and Rothbart’s “Tripartite Theory of Attention.” In their research, attention results from three cognitive sub-processes: “alertness,” “orienting,” and “executive attention.”

Depending on the complexity of the information I provide, this explanation might get confusing. If a retrieval practice exercise simply asks students to name those three processes, they might not do very well.

Common sense suggests a simple strategy: diminishing cues.

The first time I do a retrieval practice exercise on this topic, I provide substantial cues:

“Fill in these blanks: Posner and Rothbart say that attention results from al______, or______, and ex______ at______.”

A few days later, I might ask:

“Fill in these blanks: Posner and Rothbart say that attention results from ______, _____, and _______  ______.”

A week later:

“What three sub-processes create attention, in Posner and Rothbart’s view?”

And finally:

“Describe how attention works.”

The first instance requires students to retrieve, but offers lots of support for that retrieval. Over time, they have to do more and more of the cognitive work. By the end, I’m asking a pure retrieval question.
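The schedule above is mechanical enough to sketch in code. Here is a minimal, hypothetical Python example — the `cue` helper and its parameters are my own illustration, not anything from the research — that renders a retrieval prompt with progressively fewer letters revealed:

```python
def cue(term: str, letters_shown: int, blank: str = "______") -> str:
    """Render one term as a fill-in-the-blank cue.

    letters_shown == 0 yields a bare blank; larger values reveal a
    prefix of the term, making retrieval easier.
    """
    if letters_shown >= len(term):
        return term
    return term[:letters_shown] + blank

def prompt(terms, letters_shown):
    """Build one retrieval-practice question at a given cue level."""
    blanks = ", ".join(cue(t, letters_shown) for t in terms)
    return f"Attention results from {blanks}."

terms = ["alertness", "orienting", "executive attention"]
# Successive sessions shrink the cues: generous prefixes first, then bare blanks.
print(prompt(terms, 2))  # Attention results from al______, or______, ex______.
print(prompt(terms, 0))  # Attention results from ______, ______, ______.
```

In a real course, each call would be separated by days, matching the spaced schedule described above.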

“Diminishing Cues” and Research

So, common sense tells us this strategy might work. In fact, I know teachers who have stumbled across this approach on their own.

Here at Learning and the Brain, we like common sense and we REALLY like research. Do we have research to support our instincts?

Yes.

In 2017, two researchers put together an impressive combination of studies.

They looked at different study strategies: review, retrieval practice, diminishing-cues retrieval practice.

They tested participants after different lengths of time: right away, 24 hours later, a week later.

They tested different amounts of studying: 3 sessions, 6 sessions…

You get the idea.

Because they ran SO MANY studies, they’ve got LOTS of data to report.

The short version: “diminishing cues retrieval practice” ALWAYS helped more than traditional review (rereading the chapter). And it OFTEN helped more than plain-old retrieval practice (self-quizzing on the chapter).

If you want the details, you can check out the study yourself; it’s not terribly jargony. The process is a bit complicated, but the key concepts are easy to grasp.

To Sum Up

Retrieval practice helps students learn.

If we want to ensure that it works optimally, we should use it multiple times — and successively remove more and more scaffolding from the retrieval practice questions we ask.

Common sense and research agree.

Executive Function Isn’t What You Think It Is (Maybe)
Andrew Watson

As a soccer coach, I want my students to get better at soccer.

As an English teacher, I want my students to get better at English.

And, as a hip-hop dance instructor, I want my students to get better at hip-hop dance.

To accomplish those goals, I usually teach them soccer, English, and hip-hop dance.

That is: I need to tailor my teaching SPECIFICALLY to the topic I want my students to learn. Sadly, for instance, when I teach English, I’m not helping students learn soccer (or math, or dance…).

Wouldn’t it be great if I could teach some GENERALLY useful skill that would boost their abilities in all those areas? This broad, overarching skill would make my students better soccer players, English essayists, and hip-hop dancers. That would be amazing!

Answer Number One

For a few decades now, we have mostly thought that the answer to that question is “no.”

Despite all the hype, for example, teaching young children to play the violin doesn’t make them better at math later on.

The exception to that general rule: EXECUTIVE FUNCTIONS.

When children get better at, say, inhibition, they improve across all their studies.

In soccer, they resist the temptation to run to the ball, and instead play their position.

In English, they break their bad habits — like using too many dashes — and choose good ones instead.

And in dance, they follow the tricky choreography that steers them away from the (super-tempting) downbeat.

So, executive functions — task switching, prioritizing, self-control, etc. — help students generally.

No wonder we spend so much time talking about them.

Answer Number Two

Professor Sabine Doebel wonders: what if that account of executive function is just wrong?

  • What if executive functions — like so many other things — depend on specific, local circumstances?
  • What if we don’t develop general abilities to inhibit actions, but instead learn specifically that we shouldn’t run to the soccer ball (or use dashes, or step on the downbeat)?
  • And what if getting better at one of those local skills doesn’t make me better at any of the others?

She explains this argument in a TEDx talk. Happily, this one includes an adorable video of children trying the famous “Marshmallow Test.” (It also has an even more adorable video of children trying the less-well-known “Card Sorting Task.”)

She has also recently published a think piece on this question in Perspectives on Psychological Science. This document, naturally, is more technical than a TEDx video. But it’s certainly readable by non-experts who don’t mind some obscure technical terminology.

Why Do We Care?

If the traditional account of executive function is accurate, then we can help students generally by training their EFs.

If Doebel’s account is more accurate, then — alas — we can’t.

Instead, we have to help students learn these specific skills in specific contexts.

Because Doebel is proposing a new way to think about executive functions, I don’t doubt there will be LOTS of institutional resistance to her ideas. At the same time, if she’s right, we should allow ourselves to be persuaded by strong research and well-analyzed data.

This question won’t be answered for a long time.

But, we can use our (general or specific) executive function skills, restrain our impatience, and keep an open mind.

What’s Better than Attention? Attention + LEARNING!
Andrew Watson

To learn in school, I need to pay attention.

More precisely, I need to pay attention to the subject I’m learning.

If I’m attending to …

…the sudden snowfall outside, or

…the spider on the ceiling, or

…the odd squeaking sound coming from the radiator,

then I’m not paying attention to…

…the Euler bridge problem, or

…the subjunctive mood, or

…the process for setting group norms.

We teachers wrestle with this problem every day. What can we do to help students pay attention so that they learn?

But Does It Work in Real-World Classrooms?

This urgent question has an obvious answer — and that obvious answer has obvious problems.

Obvious Answer: exercise. We’ve got lots of research showing that exercise enhances various neural processes essential to long-term memory formation.

And, we’ve got research — especially with younger children — that movement and exercise in class enhance attention.

Obvious Problems:

First, all that research doesn’t answer the essential question: “do movement and exercise help students learn?” We know they enhance attention. And we know that extra attention should boost learning. But: does it really work that way?

Second, most of that research on in-class exercise happens with younger students. What about older students? And, by “older,” I mean “older than 3rd grade.”

Wouldn’t it be great if someone looked at the effect of exercise on attention and learning in older students?

Good News, and More Good News

A research team in Canada has explored these questions. And, they did so with a helpfully clear and sensible research paradigm.

They invited college students (who are, indeed, older than 3rd graders) to watch a 50-minute lecture on psychology.

One group watched that lecture straight through, with no breaks.

A second group took 3 breaks, each one lasting five minutes. During those breaks, they played a fun video game (“Bejeweled”). That is: they DID take breaks, but they DIDN’T exercise during those breaks.

A third group also took 3 breaks, each one lasting five minutes. During those breaks, they did aerobic exercises: jumping jacks, high knees, etc. Like the second group, they DID take breaks. Unlike the second group, they DID exercise.

The results?

Lots o’ good news:

First: the exercise group was considerably more alert during the whole lecture than the other two groups. (That is: their heart rate was measurably higher.)

Second: the exercise group paid attention much better. They remained on-task about 75% of the time during the full lecture.

By way of contrast, the no-break group started at 60% on task, and fell to 40%. And the video-game group — who took a break but didn’t exercise — fell from 70% to 30%. YIKES.

Third: We care about alertness and attention only if they lead to more learning. Well: 48 hours later, the exercisers remembered more.

That is: they remembered 50% of the lecture, whereas the other two groups remembered 42%.  (50% doesn’t sound like a lot. But the point is: it’s considerably more than 42%.)

So, this study tells us that older students (like younger students) benefit from exercise during a lesson.

Specifically, they remain more alert, stay on task more, and learn more.

BOOM.

Final Thoughts

First: I think it’s helpful to see how each research study builds on previous ones. This study gives us important new information. But, it does so by drawing on and extending research done by earlier teams.

In educational psychology, no ONE study shows anything. Instead, each study builds incrementally on earlier ones — and creates a more interesting, more useful, more complex, even more contradictory picture.

Second: in this study, the students watched video lectures. Their experience wasn’t EXACTLY like online learning. But: it was an interesting relative of online learning.

Should we extrapolate from this study to encourage our online learners to move? That doesn’t sound crazy to me.

Third: one interesting wrinkle in this study. The students who took breaks — including those who exercised — took MORE TIME than those who didn’t. The “no break” group took 50 minutes; the “exercise break” group took 65.

So: they learned more — AND it took more time for them to do so. We have to be honest with ourselves about that finding.

My own view: I’d rather give up some class time for exercise if it means students attend and learn more. And, if that means I have to present less content, I’m okay with that exchange.

After all: it doesn’t matter if I teach material that students don’t learn. My job is to help them remember. Exercise breaks do just that.

What’s the Ideal Size for Online Discussion Groups?
Andrew Watson

We’re all learning lots about online teaching these days: new software (Zoom), new vocabulary (“asynchronous”), new fads (teaching in pajamas).

In many cases, we’re just going with our instincts here. Relying on our experience, we know to [insert technique here].

But because this is Learning and the Brain, we’d like some research to support whatever technique we inserted.

I’ve been reading about “online social presence” lately, and the research here offers lots of helpful insights.

Defining and Exploring “Social Presence”

Unlike many terms in the world of educational psychology (I’m looking at you, “theory of mind”), “online social presence” means what it sounds like.

When we’re together in a classroom, my students and I have a social presence. We’re aware of ourselves as a functioning group. We rely on lots of familiar cues — body language, facial expression, direction of gaze — to navigate those social relationships.

Of course, those familiar cues barely function online. What does “direction of gaze” mean when my laptop camera sees me looking at the lower left image in a Zoom video array?

Many teachers I talk with instinctively know to focus on building a greater sense of online classroom community. Breakout rooms and discussion boards, for instance, let students work with each other in smaller groups.

While it’s hard to participate effectively in a discussion with 30 people — heck, it’s hard to think clearly in an online discussion that large — the right-sized group might foster better conversations and closer connections.

But: what’s the “right-sized group”?

Instincts and Research

In informal discussions, I keep hearing “four or five.”

For no explicit reason, it just seems plausible that we can track an online conversation among the five of us. More than that will get hard to track. Fewer than that will get awkwardly quiet.

Unsurprisingly, researchers have been looking at this question.

One research team, for instance, measured their students’ evaluations of “social presence” in an online master’s class in — appropriately enough — “Assessment and Data Analysis.”

For half the term, these students participated in online discussion boards with all 16 members of the class.

For the second half, their discussion groups shrank to 4 or 5.

What did the researchers learn?

Initial Findings, and Beyond

Sure enough, the smaller groups made a big difference.

According to the students’ own ratings, they felt that the small groups enhanced social presence. And, intriguingly, they felt a greater sense of commitment to this smaller group. (Large groups often create a sense of “social loafing,” where participants feel that others will do the heavy lifting.)

In the students’ own words:

“I felt as though I became very familiar with another student’s ideas and thoughts when I was in a small group of four.”

“This format allows us to connect more to previous conversations instead of having to rehash material that was discussed in earlier conversations.”

In other words, we’ve got some research that supports our teacherly instincts: 4 or 5 students works well to promote online social presence.

Always with the Caveats

At the same time, I think we should keep an open mind on this topic.

First: we don’t have lots of research here. I’ve found a few studies, and they all point in roughly the same direction. But we don’t have nearly enough research to have strong opinions, or to be granular in our recommendations.

That is: we don’t know if different age groups benefit from different numbers in small groups. We don’t know about cultural differences. We don’t know if physics discussions benefit from larger numbers than do … say … history discussions. (I don’t know why that would be true, but we don’t have research either way.)

Second: I think we should focus particularly on the students’ age. Most of the research I’ve seen focuses on college students.

This study I’ve briefly summarized looked at graduate students — who had, by the way, signed up for an online master’s program. In other words: they’re probably especially open to, and especially interested in, online discussions.

So, I wouldn’t be surprised if this research doesn’t apply directly to 2nd graders.

Because I’m a high school teacher, I don’t have a prediction about whether younger students would do better in smaller or larger groups. If you teach K-8, I hope you’ll let me know what your predictions would be.

In Sum

Teachers can foster social presence in online classrooms by having relatively small breakout groups and discussion boards.

Until we get more detailed research, we can follow our teacherly instincts to right-size those groups. The research we have suggests that 4 or 5 is the place to start.

“How to Study Less and Learn More”: Explaining Learning Strategies to our Students
Andrew Watson

Because cognitive science gives us such good guidance about learning, we want to share that information with our students.

“Study THIS WAY!” we cry. “Research says so!”

Alas, all too often, students don’t follow our advice.

A key part of the problem: the research that supports our advice is — ahem — really complicated and abstract. We might find it convincing, but our students’ eyes glaze over when we try to explain.

Because I frequently talk with students about brain research, I’m always on the lookout for research that…

… is methodologically sound,

… supports useful studying advice, and

… is easy to explain.

I’ve found such a study [updated link], and I think we can explain it to our students quite easily.

Two Are Better Than One

We all know the research showing that sleep helps consolidate long-term memory formation (fun studies here).

We all know the research showing that spreading practice out is better than doing it all at once (fascinating research here).

How about doing both? How about doing two study sessions, and sleeping in between them?

If we could convince our students to adopt those two strategies, that would be GREAT.

And, the research necessary to test that advice is — conceptually, at least — easy to do.

Students learned a topic: French-Swahili word pairs. (This research was done in France.)

Half of them did that at 9 am, and then tested themselves 12 hours later, at 9 pm. (Note: they did not sleep between these two sessions.)

How many times did these non-sleepers have to go through their flashcards to get all the answers right?

On average, they reviewed flashcards 5.8 times to get all those word pairs right. (For the sake of simplicity, let’s just call that 6.)

The other half learned the French-Swahili word pairs at 9 pm. They then got a good night’s sleep, and tested themselves 12 hours later, at 9 am.

How many times did the sleepers go through flashcards to get all the word pairs right? On average, they got them all right on the third attempt.

That’s right: instead of 6 review sessions, they needed 3.

Can We Do Better?

Okay, so far this study is easy to explain and shows real promise. Because they spread practice out AND slept, they cut study time IN HALF to get all the answers right.

But, so far this research measures learning 12 hours later. That’s not really learning. What happens if we test them later?

Specifically, what happens if we test them 6 months later?

Hold onto your hat.

When the researchers retested these students, the non-sleepers remembered 4 of those word pairs. The sleepers remembered 8 pairs.

So: HALF as much review resulted in TWICE as much learning 6 MONTHS later.

The Headline Please

When I talk with students about brain research, I start with this question: “Would you like to study less and learn more?”

I have yet to meet the student who doesn’t get behind that goal.

This easy-to-explain study shows students that half as much review leads to twice as much memory formation — if they both spread practice out over time and sleep between review sessions.

I think we have a winner.

What’s Better Than Caffeine (And Doesn’t Require Electrodes)?
Andrew Watson

Although we can’t improve our students’ working memory capacity, we can help them use the WM they’ve got more productively.

We have lots of teaching strategies to accomplish this goal. Well-designed visuals, for instance, divide WM demands between visual and auditory channels. In this way, they functionally reduce cognitive difficulties.

Our students could also do what our colleagues do: use caffeine to boost cognitive performance. When I have my morning tea, that jolt of caffeine doesn’t increase my working-memory capacity, but it helps me use it better. (In the short term, the cognitive result is the same.)

Is there anything else we can do that doesn’t involve drugs?

So Crazy That It Just Might Work

How about exercise?

If caffeine jolts me awake enough to help me use WM more effectively, couldn’t old-fashioned exercise have that same effect?

Researchers in Canada wanted to know just that. Is exercise as effective as caffeine in temporarily boosting WM performance?

To answer this question, they did all the things you’d want them to do. They had different groups of participants take WM tests before and after different combinations of caffeine and exercise.

They controlled for age. They controlled for the amount of caffeine that people usually drank. They controlled for the amount of exercise that people usually did. (If you want all the details, you can read ’em here.)

The result: sure enough, exercise temporarily boosts WM function as much as caffeine does.

And, it doesn’t lead to a post-caffeine crash the way caffeine use does. (Yes: the researchers did measure “caffeine withdrawal symptoms.”)

In this case, 20 minutes of moderately paced walking did the trick. In schools, I’m thinking recess, or PE, or even passing time between classes just might serve the same function.

If we want our students to think more clearly, let them move.

But Can’t We Zap the Brain with a Gizmo?

Given the importance of working memory for schools, you’d think someone would make a brain zap app.

Oh wait, they have. Lots of times.

My friend Scott MacClintic just sent me a link to this “bioelectric memory patch,” which claims what you expect it to claim. (They have in-house research to show that it works!)

Happily, the article Scott sent me includes many reasons to be skeptical of this gizmo. If you’d like another set of reasons, you can check out this article over at JSTOR daily.

The short version is: recent decades have seen LOTS of products claiming to enhance WM capacity. With alarming consistency, those products just don’t work. Lumosity’s wallet is $2,000,000 lighter after a fine for misleading claims. (You read that right: two million dollars.)

So, who knows, maybe at last this will be the brain gizmo that works. If I had two million dollars, I wouldn’t bet on it.

Until we get better research, we’ve got two proven strategies to help students use working memory well: skillful teaching, and exercise.

The Limits of “Desirable Difficulties”: Catching Up with Sans Forgetica
Andrew Watson

We have lots of research suggesting that “desirable difficulties” enhance learning.

That is: we want our students to think just a little bit harder as they practice concepts they’re learning.

Why is retrieval practice so effective? Because it requires students to think harder than mere review.

Why do students learn more when they space practice out over time? Because they have to think back over a longer stretch — and that’s more difficult.

We’ve even had some evidence for a very strange idea: maybe the font matters. If students have to read material in a hard-to-read font, perhaps the additional effort and concentration involved will boost their learning.

As I wrote last year, a research team has developed a font designed for exactly that reason: Sans Forgetica. (Clever name, no?) According to their claims, this font creates the optimal level of reading difficulty and thereby could enhance learning.

However — as noted back then — their results weren’t published in a peer-reviewed journal. (All efforts to communicate with them go to their university’s publicity team. That’s REALLY unusual.)

So: what happens when another group of researchers tests Sans Forgetica?

Testing Sans Forgetica

Testing this question is unusually straightforward.

Researchers first asked participants to read passages in Sans Forgetica and similar passages in Arial. Sure enough, they rated Sans Forgetica harder to read.

They then ran three more studies.

First, they tested participants’ memory of word pairs.

Second, they tested memory of factual information.

Third, they tested conceptual understanding.

In other words, they were SUPER thorough. This research team didn’t just measure one thing and claim they knew the answer. To ensure they had good support behind their claims, they tested the potential benefits of Sans Forgetica in many ways.

So, after all this thorough testing, what effect did Sans Forgetica have?

Nada. Bupkis. Nuthin.

For example: when they tested recall of factual information, participants remembered 74.73% of the facts they read in Sans Forgetica. They remembered 73.24% of the facts they read in Arial.

When they tested word pairs, Sans Forgetica resulted in lower results. Participants remembered 40.26% of the Sans Forgetica word pairs, and 50.51% of the Arial word pairs.

In brief, this hard-to-read font certainly doesn’t help, and it might hurt.

Practical Implications

First, don’t use Sans Forgetica. As the study’s authors write:

If students put their study materials into Sans Forgetica in the mistaken belief that the feeling of difficulty created is benefiting them, they might forgo other, effective study techniques.

Instead, we should encourage learners to rely on the robust, theoretically-grounded techniques […] that really do enhance learning.

Second, to repeat that final sentence: we have LOTS of study techniques that do work. Students should use retrieval practice. They should space practice out over time. They should manage working memory load. Obviously, they should minimize distractions — put the cell phone down!

We have good evidence that those techniques work.

Third, don’t change teaching practices based on unpublished research. Sans Forgetica has a great publicity arm — they were trumpeted on NPR! But publicity isn’t evidence.

Now more than ever, teachers should keep this rule in mind.

“Doing Science” or “Being a Scientist”: What Words Motivate Students?
Andrew Watson

Teachers often find that small changes in wording produce big benefits.

One recent example: a research team in New York explored the difference between “being a scientist” and “doing science.”

The first phrasing — “being a scientist” — might imply that scientist is a kind of fixed, exclusive identity. In the same way that dogs are dogs and can’t also be cats, so too young children might infer that people who are artists or athletes or authors can’t also be scientists.

The second phrasing — “doing science” — might clear away that rigidity. This classroom exercise is something we’re all doing. It doesn’t have immediate identity implications one way or another.

If this simple switch in phrasing helps motivate students, that would be the least expensive, least time-consuming intervention EVAH.

The Research

Three researchers prepared a science lesson about friction for pre-kindergarten students.

Half of the teachers (62) saw a training video that modeled specific language: “Today we are going to do science! The first part of doing science is observing with our senses.”

The other half (68) saw a similar video that didn’t include such modeling. (Researchers assumed that most teachers — without clear modeling — would use phrasing about ‘being a scientist’ rather than ‘doing science.’ Indeed, that’s what happened.)

Teachers then ran those friction lessons, where toy cars rolled down ramps with different surfaces: carpet, sandpaper, wrapping paper.

A few days later, these pre-K students had the chance to play a tablet-based video game that resembled their science experiment. The game was programmed in such a way that all students got the first round right (success!) and the second round wrong (struggle!).

So, how long did these children persist after struggle? And: did the “doing science” vs. “being a scientist” language matter?

The Results

Sure enough, students in the “do science” lessons persisted longer than those in the “be a scientist” lessons.

That is: when teachers spoke of science as an action we take, not an identity that we have (or don’t have), this subtle linguistic shift motivated students to keep going longer.

The effects, although statistically significant, were quite small.

Students in the “do science” lessons were 6% likelier to continue after they got one question wrong. And they were 4% likelier to keep going three problems later. (You read that right: six percent, and four percent.)

We might read these results and throw our hands up in exasperation. “Six percent! Who cares?”

My answer is: we ought to care. Here’s why.

Students experienced this linguistic change exactly once. It cost nothing to enact. It took no time whatsoever. Unlike so many educational interventions — pricey and time consuming — this one leaves our scarcest resources intact.

Now: imagine the effect if students heard this language more than once. What if they heard it every time their teacher talked with them about science? (Or, art. Or, creativity. Or, math. Or, any of those things that feel like ‘identities’ rather than ‘activities.’)

We don’t (yet) have research to answer those questions. But it seems entirely plausible that this FREE intervention could have increasingly substantial impact over a student’s school career.

One Step More

In two ways, this research reminds me of Mindset Theory.

First: Dweck’s work has taken quite a drubbing in recent months. In some social media circles, it’s fashionable to look down on this research — especially because “the effects are so small.”

But, again: if one short mindset intervention (that is FREE and takes NO TIME) produces any effect — even a very small effect — that’s good news. Presumably we can repeat it often enough to make a greater difference over time.

I’m not arguing that promoting a growth mindset will change everything. I am arguing that even small boosts in motivation — especially motivation in schools — should be treasured, not mocked.

Second: this research rhymes with Mindset Theory. Although the researchers didn’t measure the students’ mindsets — and certainly didn’t measure any potential change in mindset — the underlying theory fits well with Dweck’s work.

That is: people who have a fixed mindset typically interpret success or failure to result from identity: I am (or am not) a “math person,” and that’s why I succeeded (or failed).

People with a growth mindset typically interpret success or failure to result from the quality of work that was done. If I work effectively, I get good results; if I don’t, I don’t.

So: this study considered students who heard that they should think about science as an identity (“being a scientist”) or as a kind of mental work (“doing science”). The results line up neatly with mindset predictions.

To Sum Up

First: small changes in language really can matter.

Second: encouraging students to “do this work” rather than “be this kind of person” can have motivational benefits.

Third: small changes in student motivation might not seem super impressive in the short term. But, if they add up over time, they might be well worth the very small investment needed to create them.

Unbearable Irony: When Dunning-Kruger Bites Back…
Andrew Watson
Andrew Watson

More than most psychology findings, the Dunning-Kruger effect gets a laugh every time.

Here goes:

Imagine that I give 100 people a grammar test. If my test is well-designed, it gives me insight into their actual knowledge of grammar.

I could divide them into 4 groups: those who know the least about grammar (the 25 who got the lowest scores), those who know the most (the 25 highest scores), and two groups of 25 in between.

I could also ask those same 100 people to predict how well they did on that test.

Here’s the question: what’s the relationship between actual grammar knowledge and confidence about grammar knowledge?

John Cleese — who is friends with David Dunning — sums up the findings this way:

In order to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place.

Which means — and this is terribly funny — that if you’re absolutely no good at something at all, then you lack exactly the skills that you need to know that you’re absolutely no good at it. [Link]

In other words:

The students who got the lowest 25 scores averaged 17% on that quiz. And, they predicted (on average) that they got a 60%.

Because they don’t know much grammar, they don’t know enough to recognize how little they know.

In Dunning’s research, people who don’t know much about a discipline consistently overestimate their skill, competence, and knowledge base.

Here’s a graph, adapted from figure 3 of Dunning and Kruger’s 1999 study, showing that relationship:

Adapted from figure 3 of Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Let the Ironies Begin

That graph might surprise you. In fact, you might be expecting a graph that looks like this:

Certainly that was the graph I was expecting to find when I looked at Kruger & Dunning’s 1999 study. After all, you can find that graph — or some variant — practically everywhere you look for information about Dunning-Kruger.

It seems that the best-known Dunning-Kruger graph wasn’t created by Dunning or Kruger.

If that’s true, that’s really weird. (I hope I’m wrong.)

But this story gets curiouser. Check out this version:

This one has thrown in the label “Mount Stupid.” (You’ll find that on several Dunning-Kruger graphs.) And, amazingly, it explicitly credits the 1999 study for this image.

That’s right. This website is calling other people stupid while providing an inaccurate source for its graph of stupidity. It is — on the one hand — mocking people for overestimating their knowledge, while — on the other hand — demonstrating the conspicuous limits of its own knowledge.

Let’s try one more:

I am, quite honestly, praying that this is a joke. (The version I found is behind a paywall, so I can’t be sure.)

If it’s not a joke, I have some suggestions. When you want to make fun of someone else for overestimating their knowledge,

First: remember that “no nothing” and “know nothing” don’t mean the same thing. Choose your spelling carefully. (“No nothing” is how an 8-year-old responds to this parental question: “Did you break the priceless vase, and what are you holding behind your back?”)

Second: The Nobel Prize in Psychology didn’t write this study. Kruger and Dunning did.

Third: The Nobel Prize in Psychology doesn’t exist. There is no such thing.

Fourth: Dunning and Kruger won the Ig Nobel Prize in Psychology in 2000. The Ig Nobel Prize is, of course, a parody.

So, either this version is a coy collection of jokes, or someone who can’t spell the word “know” correctly is posting a graph about others’ incompetence.

At this point, I honestly don’t know which is true. I do know that the god of Irony is tired and wants a nap.

Closing Points

First: Karma dictates that in a post where I rib people for making obviously foolish mistakes, I will make an obviously foolish mistake. Please point it out to me. We’ll both get a laugh. You’ll get a free box of Triscuits.

Second: I haven’t provided sources for the graphs I’m decrying. My point is not to put down individuals, but to critique a culture-wide habit: passing along “knowledge” without making basic attempts to verify the source.

Third: I really want to know where this well-known graph comes from. If you know, please tell me! I’ve reached out to a few websites posting its early versions — I hope they’ll pass along the correct source.

Music and Memory: A Learning Strategy?
Andrew Watson
Andrew Watson

Ever since the “Mozart Effect” was debunked, teachers have wanted to understand the relationship between music and learning.

If simply listening to music doesn’t “make us smarter” in some abstract way, can we use music strategically to help us learn specific subjects or topics?

A group of researchers at Baylor University wondered if the key is sleep.

That is: if students learn a topic while listening to (quiet) music, and then listen to that same music while they sleep, will it cause the brain to replay the academic content associated with the music? And, will that replay help students learn?

Intriguing, no?

This technique — called “targeted memory reactivation” — has been studied before. But, most of that research uses odors to reactivate memories.

That is: students learn X with the scent of roses in the background. That night while they sleep, the scent of roses is piped into the room. When they’re tested the next day — voila! — they remember more X than the students who didn’t get the “targeted memory reactivation” at night.

Of course, using odors for such reactivation is interesting in sleep labs. But it might not be very practical for the rest of us. So, researchers wondered if music would also reactivate memories.

The Research

Chenlu Gao, Paul Fillmore, and Michael K. Scullin asked students to watch a 30-minute interactive video lecture on economics. During that lecture, classical music played quietly in the background. (The sound level was like “soft background noise in a library.”)

So: students’ brains associated the music — Beethoven, Vivaldi, Chopin — with the topic — economics.

That night, while they slept, half of the students heard that same music played again. The other half heard white noise instead. The music/white noise started once students entered a particular phase of sleep, called “slow wave sleep.” (In this case, slow wave sleep began about 35 minutes after they fell asleep.)

Gao, Fillmore, and Scullin wanted to know:

Compared to students who heard white noise while sleeping, would the students who heard the music remember the lecture better?

Would they be able to apply its principles better?

Might there be a gender difference in those results?

So: what effect did Beethoven have?

The Results

Sure enough, targeted memory reactivation had interesting and measurable effects.

First: the next morning, students who heard music at night were likelier to “pass” a quiz (by scoring 70% or higher) than those who didn’t.

Second: those differences came largely in two categories. The music helped women (but not men). And the music helped students answer application questions (but not factual questions).

Third: researchers measured students’ brain activity during sleep. In brief, students who heard music had different brain wave patterns than those who heard the white noise. And, those who did better on the quizzes had different patterns than those who didn’t.

These results get SUPER technical. But the headline is: we can quite plausibly connect mental behaviors (answers to quizzes) to neurobiological behaviors (“theta power”).

Fourth (This is really important): Researchers found NO DIFFERENCES when they tested the students nine months later. So, this targeted memory reactivation (with music) produced a short-term difference, but not a long-term one.

Implications for Teaching and Learning

This musical version of targeted memory reactivation feels temptingly practical. But: trying it out in real life requires some extrapolation and some technology.

I briefly corresponded with the lead researcher, Michael Scullin, about translating this technique from the sleep lab to everyday life. Here’s a quick overview of key points:

PROBLEM: In this study, students heard the music as they first learned the material. But, it’s REALLY unlikely that teachers/professors will play music while they teach. So, how can we use targeted memory reactivation in a more typical learning situation?

SOLUTION: The technique just might work if students play the right kind of music while they study, and then replay that music while they sleep. In this case, “the right kind of music” means instrumental, not distracting, and relatively quiet.

However, this approach probably won’t work if students are otherwise distracted — by cellphones or video games, say — while they study.

PROBLEM: Presumably I can’t use the same piece of music to reactivate all memories of all the academic topics I want to learn. Does that mean I have to build some huge library of music cues: this Beethoven piece to recall the Battle of Bunker Hill, that Chopin piece to practice balancing chemical equations?

SOLUTION: Alas, it’s true: each piece of music would be distinctively paired with a particular topic. (This conclusion hasn’t been tested, but is likely true.)

So, the best compromise is probably this: choose the topics that are most difficult to understand or remember, and use the technique sparingly for that subset of academic information.

PROBLEM: Won’t playing music at night keep students awake, or wake them up?

SOLUTION: That’s an important technical question. Ideally, the music would play quietly. And, as we saw in the research described above, it would start only after slow wave sleep started.

So, whatever technology the students have, they should program it to start the music at very low levels — ideally starting about 30 minutes after they fall asleep.

QUESTION: The technique helped in the short term, but not nine months later. Can we use targeted memory reactivation to consolidate learning over the long term?

ANSWER: We haven’t tested that yet. It seems plausible (even likely?) that repeating the music over time would help. That is: listening to that music once a fortnight for a few months might really firm up memories.

But, again, that approach hasn’t been tested. I (Andrew Watson, not Michael Scullin) am speculating that it might work. But we don’t know.

In Sum…

This research — contrary to lots of earlier work — suggests that we might be able to learn while we sleep.

But, the specifics are very much in the early days. Targeted memory reactivation clearly produces benefits in the sleep lab. Its application to everyday teaching and learning needs to be explored, practiced, and refined.


I wrote about another one of Scullin’s studies a year ago. If you’d like some advice on how to fall asleep faster, click here.