
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

How To Make Sure Homework Really Helps (a.k.a.: “Retrieval Practice Fails”)
Andrew Watson

Most research focuses narrowly on just a few questions. For instance:

“Does mindful meditation help 5th grade students reduce anxiety?”

“How many instructions overwhelm college students’ working memory?”

“Do quizzes improve attention when students learn from online videos?”

Very occasionally, however, just one study results in LOTS of teaching advice. For instance, this recent research looks at data from ELEVEN YEARS of classroom teaching.


Professor Arnold Glass (writing with Mengxue Kang) has been looking at the benefits of various teaching strategies since 2008.

For that reason, he can draw conclusions about those strategies. AND, he can draw conclusions about changes over time.

The result: LOTS of useful guidance.

Here’s the story…

The Research

Glass has been teaching college courses in Memory and Cognition for over a decade. Of course, he wants to practice what he preaches. For instance:

First, before his students study a new concept, Glass asks them to make plausible predictions about the topic they’re going to learn.

Of course, his students haven’t studied the topic yet, so they’re unlikely to get the answers right. But simply thinking about these questions helps them remember the correct answers that they do learn.

In research world, we often call this strategy “pretesting” or “prequestions.”

Second, after students learn the topics, he asks them to answer questions about them from memory.

That is: he doesn’t want them to look up the correct answers, but to try and remember the correct answers.

In research world, we call this technique “retrieval practice” or “the testing effect.”

Third, Glass spreads these questions out over time. His students don’t answer retrieval practice questions once; they do so several times.

In research world, we call this technique “spacing.”

Because Glass connects all those pretesting and retrieval practice questions to exam questions, he can see which strategies benefit learning.

And, because he’s been tracking data for years, he can see how those benefits change over time.

The Results: Good & Bad

Obviously, Glass’s approach generates LOTS of results. So, let’s keep things simple.

First Headline: these strategies work.

Pretesting and retrieval practice and spacing all help students learn.

These results don’t surprise us, but we’re happy to have confirmation.

Second Headline: but sometimes these strategies don’t work.

In other words: most of the time, students answer questions correctly on the final exam more often than they did during the pretesting and the retrieval practice.

But, occasionally, students do better on the pretest question (or the retrieval practice question) than on the final exam.

Technically speaking, that result is BIZARRE.

How can Glass explain this finding?

Tentative Explanations, Alarming Trends

Glass and Kang have a hypothesis to explain this “bizarre” finding. In fact, this study explores their hypothesis.

Glass’s students answer the “pretesting” questions for homework. What if, instead of speculating to answer those pretesting questions, the students look the answer up on the interwebs?

What if, instead of answering “retrieval practice” questions by trying to remember, the students look up the answers?

In these cases, the students would almost certainly get the answers right — so they would have high scores on these practice exercises.

But they wouldn’t learn the information well, so they would have low scores on the final exam.

So, pretesting and retrieval practice work if students actually do them.

But if the students look up the answer instead of predicting, they don’t get the benefits of prequestions.

If they look up the answer instead of trying to remember, they don’t get the benefit of retrieval practice.

And, here’s the “alarming trend”: the percentage of students who look up the answers has been rising dramatically.

How dramatically? In 2008, it was about 15%. In 2018, it was about 50%.

Promises Fulfilled

The title of the blog post promises to make homework helpful (and to point out when retrieval practice fails).

So, here goes.

Retrieval practice fails when students don’t try to retrieve.

Homework that includes retrieval practice won’t help if students look up the answers.

So, to make homework help (and to get the benefits of retrieval practice), we should do everything we reasonably can to prevent this shortcut.

Three strategies come quickly to mind.

First: don’t just use prequestions and retrieval practice. Instead, explain the logic and the research behind them. Students should know: they won’t get the benefits if they don’t do the thinking.

Second: as much as is reasonably possible, make homework low-stakes or no-stakes. Students have less incentive to cheat if doing so doesn’t get them any points. (And, they know that it harms their learning.)

Third: use class time for both strategies.

In other words: we teachers ultimately can’t force students to “make educated predictions” or “try to remember” when they’re at home. But we can monitor them in class to ensure they’re doing so.

These strategies, to be blunt, might not work well as homework — especially not at the beginning of the year. We should plan accordingly.

TL;DR

Prequestions and retrieval practice do help students learn, but only if students actually do the thinking these strategies require.

We teachers should be realistic about our students’ homework habits and incentives, and design assignments that nudge them in the right directions.

 

Glass, A. L., & Kang, M. (2022). Fewer students are benefiting from doing their homework: an eleven-year study. Educational Psychology, 42(2), 185-199.

The Best Book on Cognitive Load Theory: Ollie Lovell to the Rescue
Andrew Watson

Teaching ought to be easy.

After all, we have a functionally infinite amount of long-term memory. You don’t have to forget one thing to learn another thing — really.

So: I should be able to shovel information and skills into your infinite long-term memory. Voila! You’d know everything.

Alas, to get to your long-term memory, “information and skills” have to pass through your working memory. This very narrow bottleneck makes learning terribly difficult — as teachers and students well know.

If only someone would come up with a theory to explain this bottleneck. If only that theory would help teachers and students succeed despite its narrow confines.

Good News, with a Twist

Happily, that theory exists. It’s called “cognitive load theory,” and several scholars in Australia (led by John Sweller) have been developing it for a few decades now.

It explains the relationship between infinite long-term memory and limited working memory. It explores practical classroom strategies to solve the problems created by this relationship.

Heck, it even muses upon evolutionary explanations for some quirky exceptions to its rules.

In other words, it has almost everything a teacher could want.

Alas — [warning: controversial opinion] — it does include one glaring difficulty.

Cognitive load theory helps educational psychologists talk with other educational psychologists about these topics.

However, it relies on a long list of terms, each of which describes complex — sometimes counter-intuitive — concepts.

If you start reading articles based on cognitive load theory, you might well discover that …

… a particular teaching practice works this way because of the “split attention effect” (which doesn’t mean exactly what it sounds like),

… but it works that way because of the “expertise reversal effect,”

… and “element interactivity” might explain these contradictory results.

For this reason, paradoxically, teachers who try to understand and apply cognitive load theory often experience cognitive overload.

As a result, teachers would really benefit from a book that explains cognitive load theory so clearly as not to overwhelm our working memory.

Could such a book exist?

Ollie Lovell To The Rescue

Yes, reader, it exists. Oliver Lovell has written Sweller’s Cognitive Load Theory In Action (as part of Tom Sherrington’s “In Action” series).

Lovell’s book does exactly what teachers want it to do: explain cognitive load theory without overloading our cognitive faculties.

Lovell accomplishes this feat with three strategies.

First, he has an impressive ability to explain cognitive load theory concepts with bracing clarity.

For instance, let’s go back to that “expertise reversal effect.” Why might a teaching strategy benefit a novice but not an expert?

Lovell’s answer: redundancy. Redundant information taxes working memory. And, crucially:

“What is redundant for an expert is not redundant for the novice, and instructional recommendations are reversed accordingly.”

That’s the “expertise reversal effect.” Pithy, clear, sensible.

Because he writes and explains so clearly, Lovell helps teachers understand all that cognitive load theory terminology without feeling overwhelmed.

Second, Lovell gives examples.

SO MANY CLASSROOM EXAMPLES.

Whatever grade you teach, whatever topic you teach, you’ll find your discipline, your grade, and your interests represented. (I believe Lovell is a math teacher; as a high-school English teacher, I never felt slighted or ignored.)

Geography, piano, computer programming. It’s all there.

Knowing that clear explanations of worked examples can reduce working memory load, he provides plenty.

Practicing What He Preaches

Third, Lovell simplifies needless complexities.

Students of cognitive load theory will notice that he more-or-less skips over “germane” cognitive load: a category that has (ironically) created all sorts of “extraneous” working memory load for people trying to understand the theory.

He describes the difference between biologically primary and biologically secondary learning. And he explains the potential benefits this theory offers school folk.

However, Lovell doesn’t get bogged down in this niche-y (but fascinating) topic. He gives it just enough room, but not more.

Heck, he even keeps footnotes to a minimum, so as not to split the reader’s attention. Now that’s dedication to reducing working memory load!

Simply put: Lovell both explains and enacts strategies to manage working memory load just right.

In Brief

No doubt your pile of “must read” books is intimidatingly large.

If you want to know how to manage working memory load (and why doing so matters), Lovell’s Cognitive Load Theory in Action should be on top of that pile.


A final note:

I suspect Lovell’s explanations are so clear because he has lots of experience explaining.

Check out his wise, thoughtful, well-informed podcasts here.

The Bruce Willis Method: Catching Up Post-Covid [Reposted]
Andrew Watson

Because of Covid, our students have fallen behind. How can we help them “catch up”?

As I argued back in June, Bruce Willis might (or might not) have helpful answers to that question.


In the third Die Hard movie, Bruce Willis and his unexpected partner Samuel L. Jackson need to get to Wall Street in a hurry. They commandeer a cab.

An experienced cab driver, Jackson suggests taking 9th Avenue south, but Willis insists on going through Central Park.

It turns out: he doesn’t mean taking the road that runs through Central Park, but driving through the park itself — across crowded lawns, through busy playgrounds, past famous fountains, down winding bike-paths.

His desperate short-cut helps the team catch up.

In education these days, it seems that we need our very own Bruce Willis.

Because of Covid, our students are WAY BEHIND.

5th graders don’t know as much math as they used to. 2nd graders can’t read as well as they once could. 9th graders have lost even more social skills than 9th graders usually lose.

Because our students know less and can do less, we teachers want to help them CATCH UP.

And so we ask: what’s the educational analogue to driving through the park? How can we — like Bruce and Samuel — help our students learn faster?

Like lots of folks, I’ve been thinking about that question for a while now. I’ve got bad news, and worse news; and I’ve got good news.

The Bad News

The Bruce Willis Method does not exist in education.

We can’t “drive through the park.” We can’t, in other words, help students “learn the same amount, only faster.”

Here’s why I say so:

If we knew how to teach any faster, we would have been doing so already.

Seriously. Do you know any teacher who says, “I could have covered this curriculum in 10 weeks. But what the heck, I’m going to drag it out and take 12 or 13”?

I don’t. And I suspect you don’t either.

We have always been helping our students learn as best we could. If we knew better ways, we would have been using them.

Of course Willis can get through the park faster; it was a MOVIE!  Alas, we can’t follow his example.

I am, in fact, quite worried about all the talk of “catching up.” In my mind, it creates two clear dangers:

First Danger:

If we try to catch up, we’ll probably — in one way or another — try to speed up. We will, for instance, explore a topic in 2 weeks instead of 3 weeks. We will combine 3 units into 1.

However, the decision to speed up necessarily means that students spend less time thinking about a particular topic.

As Dan Willingham has taught us: “memory is the residue of thought.” If students spend less time thinking about a topic, they will learn less about it.

The result: they won’t catch up. Instead, they will be further behind.

In other words: such efforts to help students recover from Covid learning muddle will — paradoxically —  hinder their learning.

Second Danger:

If we believe that “catching up” is a realistic short-term possibility, we open ourselves up to inspiring-but-unfounded claims.

People who don’t work in schools will tell us that “you can’t solve problems with the same thinking that created those problems in the first place.”

Their claims might include words & phrases like “transformational” or “thinking outside the box” or “new paradigm” or “disrupt.”

These claims will almost certainly come with products to buy: new technology here, new textbooks there, new mantras yon.

They will sound uplifting and exciting and tempting and plausible.

But…

… any “research-based” claims will almost certainly extrapolate substantially beyond the research’s actual findings;

… these ideas won’t have been tested at scale in a realistic setting;

… such claims will defy core knowledge about cognitive architecture. (No, students can’t overcome working memory limitations simply because “they can look up everything on the internet.”)

In other words: because the goal (“catching up”) is so tempting, we might forget to be appropriately skeptical of inspiring claims (“your students can catch up if you only do THIS THING!”).

Now is the time to be more skeptical, not less skeptical, of dramatic claims.

The Good News

Despite all this gloomy news, I do think we have a very sensible and realistic option right in front of us.

I propose three steps for the beginning of the next school year.

Step 1: determine what our students already know.

In previous years, I could reasonably predict that my students knew this much grammar and this much about Shakespeare and this much about analyzing literature.

Well, they just don’t anymore. I need to start next year by finding out what they really do know. (Hint: it will almost certainly be less — maybe dramatically less — than in years past.)

Step 2: plan a realistic curriculum building from that foundation.

If we meet our students where they are, they are much likelier to learn the new ideas and procedures we teach them.

In fact, they’re also likelier to strengthen and consolidate the foundation on which they’re building.

Yes, I might feel like my students are “behind.” But they’re behind an abstract standard.

As long as they’re making good progress in learning new ideas, facts, and procedures, they’re doing exactly the right cognitive work. They won’t catch up this year.

But if they make steady progress for several years, they’ll be well back on track.

Step 3: draw on the lessons of cognitive science.

In the paragraphs above, I’ve been highly skeptical of uplifting, simplistic quick-fix claims. (“If we revolutionize education with X, our students will learn calculus in 6th grade!”)

At the same time, I do think that teachers can make steady and coherent improvements in our work. When we understand the mental processes that lead to long-term memory formation, we can teach more effectively.

We should study…

… working memory function: the core mental bottleneck that both allows and impedes learning;

… the importance of desirable difficulties — spacing, interleaving, retrieval practice — in forming long-term memories;

… the sub-components of attention that add up to concentration and understanding;

… a realistic framework for understanding student motivation.

And so forth.

Understanding these topics will not “revolutionize education overnight.”

However, teachers who design lessons and plan syllabi with these insights in mind can in fact help their students consolidate ideas more effectively.

In other words: don’t follow Bruce Willis through the park.

Instead, we should learn how learning takes place in the brain. When our teaching is guided by that knowledge, our students have the best long-term chance of getting back on track.

Do Classroom Decorations Distract Students? A Story in 4 Parts… [Reposted]
Andrew Watson

As we prepare for the upcoming school year, how should we think about decorating our classrooms?

Can research give us any pointers?

This story, initially posted in March of 2022, paints a helpfully rich research picture.


Teacher training programs often encourage us to brighten our classrooms with lively, colorful, personal, and uplifting stuff:

Inspirational posters.

Students’ art work.

Anchor charts.

Word walls.

You know the look.

We certainly hope that these decorations invite our students in and invigorate their learning. (We might even have heard that “enriched environments promote learning.”)

At the same time, we might worry that all those decorations could distract our students from important cognitive work.

So, which is it? Do decorations distract or inspire? Do they promote learning or inhibit learning? If only we had research on this question…

Part I: Early Research

But wait: we DO have research on this question.

Back in 2014, a team led by Dr. Anna Fisher asked if classroom decorations might be “Too Much of a Good Thing.”

They worked with Kindergarten students, and found that — sure enough — students who learned in highly-decorated rooms paid less attention and learned less than others in “sparsely” decorated classrooms.

Since then, other researchers have measured students’ performance on specific mental tasks in busy environments, or in plain environments.

The results: the same. A busy visual field reduced working memory and attention scores, compared to plain visual environments.

It seems that we have a “brain-based” answer to our question:

Classroom decorations can indeed be “too much of a good thing.”

Taken too far, they distract students from learning.

Part II: Important Doubts

But wait just one minute…

When I present this research in schools, I find that teachers have a very plausible question.

Sure: those decorations might distract students at first. But, surely the students get used to them.

Decorations might make learning a bit harder at first. But ultimately students WON’T be so distracted, and they WILL feel welcomed, delighted, and inspired.

In this theory, a small short-term problem might well turn into a substantial long-term benefit.

And I have to be honest: that’s a plausible hypothesis.

Given Fisher’s research (and that of other scholars), I think the burden of proof is on people who say that decorations are not distracting. But I don’t have specific research to contradict those objections.

Part III: The Researchers Return

So now maybe you’re thinking: “why don’t researchers study this specific question”?

I’ve got good news: they just did.

In a recently-published study, another research team (including Fisher, and led by Dr. Karrie Godwin, who helped in the 2014 study) wondered if students would get used to the highly decorated classrooms.

Research isn’t research if we don’t use fancy terminology, so they studied “habituation.” As in: did students habituate to the highly decorated classrooms?

In the first half of their study, researchers again worked with Kindergarteners. Students spent five classes studying science topics in plainly decorated classrooms. (The visual material focused only on the topic being presented.)

Then they spent ten classes studying science topics in highly decorated classrooms. (These decorations resembled typical classroom decorations: posters, charts, artwork, etc.)

Unsurprisingly (based on the 2014 study), students were more distractible in the decorated classroom.

But: did they get used to the decorations? Did they become less distractible over time? Did they habituate?

The answer: a little bit.

In other words: students were less distractible than they initially were in the decorated classroom. But they were still more distractible than in the sparsely decorated room.

Even after ten classes, students hadn’t fully habituated.

Part IV: Going Big

This 2-week study with kindergarteners, I think, gives us valuable information.

We might have hoped that students would get used to decorations, and so benefit from their welcoming uplift (but not be harmed by their cognitive cost). So far, this study deflates that hope.

However, we might still hold out a possibility:

If students partially habituate over two weeks, won’t they fully habituate eventually? Won’t the habituation trend continue?

Team Godwin wanted to answer that question too. They ran yet another study in primary school classrooms.

This study had somewhat different parameters (the research nitty-gritty gets quite detailed). But the headline is: this study lasted 15 weeks.

Depending on the school system you’re in, that’s between one-third and one-half of a school year.

How much did the students habituate to the visual distractions?

The answer: not at all.

The distraction rate was the same after fifteen weeks as it was at the beginning of the year.

To my mind, that’s an AMAZING research finding.

Putting It Together

At this point, I think we have a compelling research story.

Despite our training — and, perhaps, despite our love of decoration — we have a substantial body of research suggesting that over-decorated classrooms interfere with learning.

The precise definition of “over-decorated” might take some time to sort out. And, the practical problems of putting up/taking down relevant learning supports deserve thought and sympathetic exploration.

However: we shouldn’t simply hope away the concern that young students can be distracted by the environment.

And we shouldn’t trust that they’ll get used to the busy environment.

Instead, we should deliberately create environments that welcome students, inspire students, and help students concentrate and learn.


Fisher, A. V., Godwin, K. E., & Seltman, H. (2014). Visual environment, attention allocation, and learning in young children: When too much of a good thing may be bad. Psychological Science, 25(7), 1362-1370.

Godwin, K. E., Leroux, A. J., Seltman, H., Scupelli, P., & Fisher, A. V. (2022). Effect of repeated exposure to the visual environment on young children’s attention. Cognitive Science, 46(2), e13093.

Is “Cell Phone Addiction” Really a Thing? [Reposted]
Andrew Watson

A well-known Education Twitter personality claimed that “cell phones are as addictive as drugs.”

Are they? What should we do when someone makes that claim?

Reposted from November of 2021


I recently read a tweet asserting “the fact that cell phones are proven to be as addictive as drugs.”

Of course, people casually use the word “addictive” about all sorts of things: chocolate, massages, pumpkin-spice lattes. (No doubt somewhere Twitter is being described as “addictive.” My mother tells me that this blog is addictive.)

But all that casual language doesn’t add up to “proving the fact” that cell phones are “as addictive as drugs.” So I started wondering: has this “fact” been “proven”?

Good News, Bad News (Good News, Meh News)

Over the years I’ve adopted a simple strategy. When someone makes a factual claim about research, I ask for the research.

In this case, I simply asked the tweep for the research behind the claim.

Good news: He answered!

I’ve been amazed over the years how often people make “research-based” claims and then refuse to provide research to support them.

In this case, he did in fact point toward research on the topic. (I found one website claiming that 16% of adolescents, and 6.3% of the population, are addicted; alas, no sources cited. Happily, they do provide treatment…)

The tweep’s willingness to cite evidence enhances his credibility. Let’s check it out…

Bad news: hmm. “Evidence,” in this case, means “links to newspaper articles.” Generally speaking, USA Today and Vice.com aren’t the best places to find research. Research is published in journals. (Heck, I’ve even criticized the New York Times for its research credulity.)

So: the tweep’s credibility clicks down slightly.

Good news: All three of the links do, in fact, point to underlying research! I didn’t get a direct connection to the promised research, but I can keep digging to find it.

Credibility clicks back up.

Meh news: it turns out that all three articles point to the same underlying research. That is: I didn’t find three studies supporting the claim that “cell phones are proven to be as addictive as drugs”; I got one.

Now: one study isn’t nothing. But [checks notes] one isn’t three.

This Just In: Correlation Isn’t…

Given how much is riding on this one study, let’s check it out.

First off, we can see right there in the title that the study focuses on correlation. As you’ve no doubt heard dozens (thousands?) of times, “correlation isn’t causation.”

In this case, the authors asked 48 people questions about their cell-phone usage. Based on their answers, they categorized some of those people as “addicted.” And they then found brain differences between the “addicted” and “not addicted” people.

This quick summary leads to several concerns.

First: one study of 48 people doesn’t “prove a fact.” It might be an interesting data point, but that’s all.

Second: this study doesn’t claim to “prove a fact.” Using a questionnaire, it DEFINES some folks as addicted and others as not addicted.

Third: “brain differences” always seems like a big deal, but trust me — they might not be.

People who throw the javelin probably have a different “average muscular profile” than people who run marathons, because they’re using different muscles.

People who play the piano probably have different neural profiles than people who dance ballet, because they’re spending more time using THIS part of the brain than THAT part.

It seems likely people who score high on this “cell-phone addiction” questionnaire behave differently than those who don’t; so it’s not dramatically surprising that their brains are different.

Did phone use cause the brain differences, or did the brain differences cause the phone use? We don’t know. (Because, “correlation isn’t …”)

Important to Note

One interesting point does jump out. The brain differences found by this research team do — in some ways — align with plausible predictions about addiction.

Now, the researchers don’t make strong claims here: the word “drugs” appears only once in the body of the study.

This finding isn’t a big surprise to me. Very roughly, the  brain differences have to do with “our ability to control what we pay attention to.” It’s not hugely surprising that heavy cell-phone users have brain differences there (and that people addicted to drugs do too).

Don’t Stop Now

If the tweep’s study doesn’t support the claim that “cell phones are proven to be addictive,” does other research?

To answer that question, I did a simple google search (“cell phone addiction real”). The first scholarly article that pops up says…not so much.

Here’s their summary:

Although the majority of research in the field declares that smartphones are addictive or takes the existence of smartphone addiction as granted, we did not find sufficient support from the addiction perspective to confirm the existence of smartphone addiction at this time.

The behaviors observed in the research could be better labeled as problematic or maladaptive smartphone use and their consequences do not meet the severity levels of those caused by addiction.

In brief: “maladaptive,” yes; “addictive,” no.

As I continued clicking, I found other skeptical reviews (for instance, here), and also found some that embrace the category (with some open questions, here).

Oh, and, by the way: “cell phone addiction” isn’t included in the DSM-5.

In other words, I think we can reasonably describe the category of “cell phone addiction” as an active scholarly debate. To be clear, this conclusion means we can’t reasonably describe it as “a proven fact.”

Why I Care

I am, believe it or not, open to the idea that cell phones might be addictive. If they are — if at some point research “proves that fact” — then this label might help us treat a real problem effectively.

My objection springs from another source entirely.

I worry when debate about measurable claims sinks to applying insulting labels.

If I think that asking students to memorize is a bad idea, I could study research on the topic. Or, I could dismiss it as “drill and kill.” The insulting label replaces the argument.

If I think that teacher talk is bad, I could study research on the topic. Or, I could mock it as “sage on the stage.” The dismissive label replaces the argument.

If I think that cell-phone usage is bad for teens, I could study research on the topic. Or, I could call it “an addiction.” The alarming label replaces the argument.

If we’re going to rely on research to make decisions about teaching and education (which is, after all, the GOAL of our organization) we should never replace research with labels.

Instead, let’s try something else. Let’s replace labels with research…


Horvath, J., Mundinger, C., Schmitgen, M. M., Wolf, N. D., Sambataro, F., Hirjak, D., … & Wolf, R. C. (2020). Structural and functional correlates of smartphone addiction. Addictive Behaviors, 105, 106334.

Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252-259.

Billieux, J., Maurage, P., Lopez-Fernandez, O., Kuss, D. J., & Griffiths, M. D. (2015). Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research. Current Addiction Reports, 2(2), 156-162.

Gutiérrez, J., Rodríguez de Fonseca, F., & Rubio, G. (2016). Cell phone addiction: A review. Frontiers in Psychiatry, 7, 175.

The Best Teaching Advice We’ve Got
Andrew Watson

I’m on my annual vacation during this month, so I’ll be posting some articles that got attention during the last year.

This post, initially from December of 2021, looks at a proposed different way to “put all the research pieces together.”


You want to improve your teaching with psychology research?

We’ve got good news, and bad news.

And more good news.

Good News: we have lots and LOTS of research. We can talk about attention, or working memory, or the spacing effect, or motivation, or stress…the list is long. And super helpful.

So much practical advice!

Bad News: actually, the bad news is the same as the good news. We’ve got SO MUCH good research that it’s honestly hard to keep track of it all.

I mean, seriously. Should you start by looking at attention research? Or stress research?

Should we think about the motivational effects of student-teacher relationships, or the perils of working memory overload, or the benefits of desirable difficulty?

Which is most important?

Honestly, I think our next priority is not so much finding out new truths about learning, but organizing all the information we already have.

More Good News

If you agree that we really need someone to sort all these suggestions into a coherent system, you’ll be delighted to read this article by Stephen Chew (Twitter handle: @SChewPsych) and William Cerbin (@BillCerbin).

Other scholars — for instance, Barak Rosenshine — have put together a coherent system based on learning principles. Chew and Cerbin, instead, organize their system around cognitive challenges.

That is:

If students feel anxiety about a topic or discipline, that emotion will interfere with their learning.

If students have prior misconceptions, those misconceptions will distort their understanding.

If classroom work or assignments go beyond working memory limits, students won’t learn effectively (or, at all).

When planning a course or a lesson or an assignment, teachers can think their way through these specific challenges. By contemplating each one, we can design our work to best facilitate learning.

Getting the Emphasis Right

If you’re thinking “this is such excellent news! It just can’t get any better!” — well — I’ve got some news: it gets better.

Chew and Cerbin write:

There is no single best teaching strategy for all students, topics, and situations. The proposed framework is not prescriptive … and can guide adaptation of teaching practice.

In other words, they’re not saying: here’s a list of things to do.

Instead, they are saying: here are several topics/problems to consider.

Teaching advice should not include “best practices.” (That’s a business concept.) It should include “best questions to ponder as we make decisions.” Chew and Cerbin make this point repeatedly.

Frequent readers know that I’ve been banging on for years with this mantra: “Don’t just do this thing; instead, think this way.”

We should think about our students’ working memory limitations. The strategies we use might differ for 1st graders and 8th graders.

We should think about the importance of transfer. A Montessori school and a KIPP school will (almost certainly) use differing strategies to reach that goal.

We should think about our students’ prior knowledge. The best way to measure that knowledge might be different for students with diagnosed learning differences.

Yes: we should consider these nine topics. But the ways we address them must depend on our students, our schools, our curriculum, and ourselves.

For all these reasons, I recommend Chew and Cerbin’s article with great enthusiasm.

And, happily, you can meet Dr. Chew at our online conference in February! (In case you’re wondering: I was planning to write about this article before I knew he was joining the conference. A happy synchronicity.)

It’s All in the Timing: Improving Study Skills with Just-Right Reminders
Andrew Watson

Some research-based teaching advice requires complex rethinking of our work.

For instance:

We know that “desirable difficulties” like spacing and interleaving help students learn. At the same time, these strategies might require a fair amount of reorganization in our unit plans.

On the other hand, today’s suggestions could hardly be simpler. They go like this:

First: encourage students to use a straightforward self-talk strategy (see below).

Second: remind them frequently.

The likely result: they’ll learn more!

The First Step: A Self-Talk Strategy

Our students often know what they should do. But, they struggle to make themselves do it.

So many plausible excuses. So many reasons NOT to follow through!

Over two decades ago, researchers devised a remarkably simple way to redirect those excuses: “if-then” statements.

The strategy goes like this:

The student starts by setting a goal.

“I want to read 15 pages of the novel.”

She then lists the likely problems that could interfere with that goal.

“My dog could distract me.”

“I could get bored.”

“I could get a text.”

Next, she lists solutions to those likely problems — phrased as “if-then” statements.

“If my dog distracts me, then I’ll ask my brother to play with him.”

“If my phone buzzes, then I’ll turn it off and give it to my parents.”

We have good research suggesting that this “if-then” structure creates highly beneficial mental shortcuts.

At the moment of distraction, the student doesn’t have to decide what to do. She has already decided; she simply has to execute the plan she made for herself.

This strategy sounds too simple to work. But, we’ve got good research suggesting it does.

A Second Step: Well-timed Reminders

So far, this strategy could hardly be simpler. Students set goals, make “if-then” plans, and get to work.

A recent study asks a useful question: how often should teachers remind students about these “if-then” plans?

Here’s a useful analogy:

When you take a medication, you probably don’t take all of it all at once. Instead, you probably take one pill a day — or something like that. The whole course of medication is divided into doses.

Well, should teachers divide this strategy into doses? Does it matter how often we prescribe this study “medication”?

Researchers, led by Dr. Jasmin Breitwieser, explored these questions with medical students in Germany.

The details of the study get complicated quickly. But the headlines go like this:

Students in the control group set daily study goals, but did not make if-then statements.

Students in the study group also set daily goals, and “internalized” this statement:

If I am thinking about stopping [before I reach my goal], then I will tell myself that I will continue to answer questions until I have reached my intended workload!

Students in this second group got reminders on various schedules — as many as 3 days in a row, and as many as 3 days without a reminder.

So: what did the researchers find?

Results Please

First: Breitwieser’s team found that students who made the if-then commitments scored higher on the exam than those who didn’t. *

Second — and this is the big news: dosing mattered.

When students got several reminders in a row, they reached more of their goals.

When those same students got no reminders for 3 days in a row, they reached fewer of their goals.

In other words: if-then statements help students achieve — especially if they recommit to them frequently.

Teaching Implications

Teachers really struggle to help students find motivation in their school work.

This if-then strategy provides a remarkably simple way to help students achieve their own goals.

So, suggestion #1: teachers should take time to help students formulate these if-then statements.

Suggestion #2: one dose isn’t enough. We should have students return to these plans frequently.

The exact schedule will depend, I suspect, on your specific circumstances. It will be different for 1st graders, 5th graders, and 9th graders. It will be different depending on your school and your culture, and perhaps even your approach to teaching.

Based on other research pools, I suggest that the reminders be relatively frequent, but unpredictable. We don’t want our students going through this process by rote; they should engage with it as earnestly as possible each time.

If we get the dosing right, this simple strategy can help our students study more effectively — and therefore learn more.


* The difference was quite small: only 2 points. However, and this is a big however, the researchers worked with medical students preparing for a high-stakes exam. They are already highly successful and highly motivated students. I suspect we’ll see more dramatic effects for other groups of students.


Breitwieser, J., Neubauer, A. B., Schmiedek, F., & Brod, G. (2021). Self-regulation prompts promote the achievement of learning goals–but only briefly: Uncovering hidden dynamics in the effects of a psychological intervention. Learning and Instruction, 101560.

 

An Amazingly Simple Way to Help Struggling Students (with Potential Controversy)
Andrew Watson

Imagine that you work at a school where these students consistently struggle compared to those students.

As teachers and school leaders, you’d like to help these students do better than they currently do; maybe do as well as those students. (Lower down in the post, I’ll say more about the two groups of students.)

What can you do?

Values Affirmation

One simple strategy has gotten a fair amount of research in recent years.

The idea goes something like this. If I am, say, a 12-year-old student, I might not really see how my school life fits with the rest of my life. They seem like two different worlds.

If teachers could connect those “two different worlds” — even a little bit — students would feel more comfortable, less stressed, better invested. Heck, they might even learn more.

To accomplish this mission, researchers provide students a list of values: loyalty, faith, friendship, hard work, justice, happiness, family, and so forth.

They then have students undertake a brief writing assignment. The instructions go something like this:

Choose one or more of these values that are most important for you personally. Write about why they are important. You won’t be graded on spelling or grammar; focus on explaining your ideas, values, and beliefs clearly.

The control group gets the same instructions, except that students choose values “that are the least important for you personally, but might be important to someone else.”

This strategy gets the somewhat lumpy name “values affirmation” — because it gives students a chance to affirm the values they hold.

This approach has been used in the United States in several studies, and has had some success. (See, for instance, here).

Across the Atlantic

But: would this strategy work elsewhere? What about, say, England?

Back in 2019, a group of researchers tried this approach with different groups of underperforming students. (Again, more on this topic below.)

They had two groups of 11-14-year-old students (the underperformers, the typical performers) do three “values affirmation” writing sessions: one in September, one in January, and one in April.

Of course, one half of both groups did the “values affirmation” version of the writing exercises; the other half did the control writing assignment.

What results did they find?

As was true in the US, the values affirmation writings had no effect on the typical performers.

However, it had a dramatic effect on the underperformers. Their math grades were higher at the end of the year (compared to the group that did the control writing). And their stress levels were considerably lower.

Because of the statistical method that the researchers used, I can’t say that values affirmation translated a B average into an A average. I can say that it had an effect size of 0.35 standard deviations — which is certainly noteworthy, especially to people who read research studies in this field.
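(A quick aside for readers who want that number unpacked: a standardized effect size of this kind is, roughly, the difference between the two groups’ average outcomes divided by the spread of those outcomes. As a simplified sketch — the authors’ actual statistical model is more elaborate than this:

effect size ≈ (average outcome, affirmation group − average outcome, control group) ÷ pooled standard deviation

So “0.35 standard deviations” means the affirmation writers ended the year roughly a third of a standard deviation ahead of the control writers.)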

In brief: this strategy costs literally zero dollars. It takes one hour over the course of a school year. And it helps underperforming students.

SO MUCH TO LOVE.

The Story Behind the Story

Up to this point, I’ve described what the researchers did. But I haven’t explained the psychological theory behind their strategy — because the theory prompts some controversy.

Now that we’ve looked at the strategy, let’s get to that theory.

Back in 1995, Claude Steele proposed a theory that has come to be called “Stereotype Threat.” It proposes a complex and counter-intuitive hypothesis.

To describe the theory, let me make up a non-existent stereotype: “blue-eyed people are bad at grammar.” (For the record, I’m regularly complimented on my blue eyes, and I teach a lot of grammar.)

Why might a blue-eyed student struggle on a grammar test?

Steele’s research suggests a surprising internal process. When my blue eyes and I sit down to take the grammar test, I know the material well. However, I also know that stereotypes suggest I’ll do badly.

What happens? I do NOT (as many suspect) give in and let the stereotype become a self-fulfilling prophecy. Instead, I decide to fight back. And, in a terrible paradox, my determination to disprove the stereotype leads to all sorts of counter-productive academic behaviors.

For instance: I might spend lots of time working on a very easy problem to prove that I know this grammar. Alas, I take so long on the easy problems that I don’t have time for the harder ones.

In this unexpected way, Steele argues, stereotypes harm students’ learning.

Enter the Controversy

Over the years, many researchers explored Stereotype Threat. They found that stereotypes about almost anything (race, ethnicity, gender, sexuality, academic major) can affect performance on almost anything (math tests, sports performance, leadership aspirations).

Steele’s book Whistling Vivaldi is, in fact, an unusually easy-to-read book about a complex psychological phenomenon. Many Learning and the Brain speakers (Joshua Aronson, Sian Beilock) have studied and written about ST.

At the same time, other scholars have doubted this entire research field. They point to various statistical and procedural concerns to suggest that, well, there’s no real there there. (If you’re interested in the push back, you can read more here and here.)

Putting It All Together

In the English study I’ve been describing, the relevant stereotype is “people from relatively lower socio-economic status just aren’t as smart as others.” According to the study’s authors, this stereotype is the predominant academic stereotype in England, whereas US stereotypes focus more on race, ethnicity, and gender.

So, in their study, the authors explored the effect of Values Affirmation on students who did (and did not) receive free school lunches: a common proxy for socio-economic status.

Sure enough, Values Affirmation had no effect one way or the other on students who did not receive free lunches. Because they faced fewer stereotypes about their academic performance, they didn’t suffer the harm that ST might cause.

But, for the students who DID receive free lunches, that same writing exercise helped a lot. This strategy made them feel more like they belonged, so they presumably didn’t need to work as hard to disprove stereotypes.

For that reason, as described above, one hour’s worth of writing reduced stress and increased grades.

TL;DR

This research suggests that a Values Affirmation writing assignment (it’s free!) can help some underperforming students learn more and feel less stress.

And, it also strengthens the case that Stereotype Threat might — despite the concerns about methodology — really be a thing.

Even if that second statement turns out not to be true, the first one is worth highlighting.

Want a simple, low-cost way to help struggling students? We’ve got one…


Hadden, I. R., Easterbrook, M. J., Nieuwenhuis, M., Fox, K. J., & Dolan, P. (2020). Self‐affirmation reduces the socioeconomic attainment gap in schools in England. British Journal of Educational Psychology, 90(2), 517-536.

Sherman, D. K., Hartson, K. A., Binning, K. R., Purdie-Vaughns, V., Garcia, J., Taborsky-Barba, S., … & Cohen, G. L. (2013). Deflecting the trajectory and changing the narrative: how self-affirmation affects academic performance and motivation under identity threat. Journal of Personality and Social Psychology, 104(4), 591.

“It’s Good for the Brain!”: The Perils of Pollution, the Benefits of Blueberries
Andrew Watson

When I talk with teachers about psychology and neuroscience research, I frequently get a question in this shape:

“I’ve heard that X is really good for the brain. Is that really true?”

In this sentence, X might be blueberries. It might be water. It might be nature walks. Perhaps it’s a good night’s sleep, or green tea, or coffee, or merlot ice cream. (I think I made up that last one, but anything’s possible…)

So, should schools start serving blueberries, merlot ice cream, and green tea (and black coffee) to our students? Perhaps with a side of salmon — brain food for sure!

Works (Almost) Every Time

Here is a completely unsurprising research finding: the brain is a part of the body.

The brain is, in fact, physically attached to the body.

For this reason, everything that is good for the body is good for the brain. (Because, again, the brain is a part of the body.)

Is sleep good for the brain? Well, it’s good for the body, so: yes.

How about water? Yup.

Fruits/veggies? Sure.

Exercise? I’m in!

Simply put, when we take good care of our bodies, we simultaneously tend to our brains — as a physical, biological object.

Said the other way around: we don’t need to develop special “brain enhancing” diets or programs or regimens. Anything that promotes our students’ physical health will automatically help their brains.

I was, in fact, inspired to write this post by an article I saw today about pollution. The summary:

“Higher exposure to air pollution is associated with higher functional brain connectivity among several brain regions in preadolescents.”

This conclusion strikes me as entirely sensible. Pollution changes the body; unsurprisingly it changes the brain. (Say it with me: the brain is a part of the body.)

Checking the Details

This first answer to the question works most of the time.

If, however, we need a more specific answer, we can easily investigate.

I once heard that, because brains need appropriate levels of hydration, we should think of water as “brain food.” The speaker exhorted us with this cry: “A bottle of water on every desk!”

And yet, the speaker’s logic collapses immediately. Yes, too little water is bad for the brain (because it’s bad for the body). We do want students to be properly hydrated.

But this obvious truth does not remotely suggest that additional water above that level yields extra benefits.

Yes, we should let students drink if they’re thirsty. Yes, a hot day in an arid climate might prompt us to provide “a glass of water on every desk.”

But we don’t need to make a big deal about extra water as an avenue toward extra learning.

You won’t be surprised to know: when I googled “Water is brain food,” the top hits were NOT research studies. They were advertisements for companies selling water.

Magical Blueberries

For reasons I don’t fully understand, the “brain food” claim often settles on blueberries. They’ve got antioxidants, I’m told. They’re great.

I’ve done just a little research here, and so far I’m underwhelmed.

First: there honestly isn’t much research on this topic.

Second: the research often focuses on rats. (Long time readers know my mantra: “Never change your teaching based on research into non-human animals.”)

Third: the research on humans focuses on aging and dementia.

Now, I’m 56. I’m ALL IN FAVOR of dietary changes that reduce the likelihood of dementia.

But the idea that “because blueberries are brain food, students should nosh on them before a test” has absolutely no research backing (that I can find).

Students should eat blueberries because fruits and vegetables — in the right proportion — provide health benefits for the body. As far as I can tell, we don’t need to focus on targeted brain benefits.

TL;DR

Most everything that is good for the body is also good for the brain. So, don’t worry about special “brain benefit” claims.

If, instead, someone claims that X is good for learning, we teachers should indeed pay close attention — and especially pay attention to the details of the research.

Getting the Order Just Right: When to “Generate,” When to “Retrieve”?
Andrew Watson

When teachers get advice from psychology and neuroscience, we start by getting individual bits of guidance. For instance…

… mindful meditation reduces stress, or

… growth mindset strategies (done the right way) can produce modest benefits, or

… cell phones both distract students and reduce working memory.

Each single suggestion has its uses. We can weave them, one at a time, into our teaching practices.

After a while, we start asking broader questions: how can we best combine all those individual advice bits?

For instance: might the benefits of growth mindset strategies offset the detriments of cell phones?

Happily, in recent years, researchers have started to explore these combination questions.

Retrieval Practice, Generative Learning

Long time readers know about the benefits of retrieval practice. Rather than simply review material, students benefit when they actively try to recall it first.

So too, generative learning strategies have lots of good research behind them. When students have to select, organize, and integrate information on their own, this mental exercise leads to greater learning. (Check out a handy book review here.)

Now that we have those individual bits of guidance, can we put them together? What’s the best way to combine retrieval practice with generative learning?

A recent study explored exactly this question.

Researchers in Germany had college students study definitions of 8 terms in the field of “social attribution.”

So, for instance, they studied one-sentence definitions of “social norms” or “distinctiveness” or “self-serving bias.”

One group — the control group — simply studied these definitions twice.

A second group FIRST reviewed these words with retrieval practice, and THEN generated examples for these concepts (that’s generative learning).

A third group FIRST generated examples, and THEN used retrieval practice.

So, how well did these students remember the concepts — 5 minutes later, or one day later?

The Envelope Please

The researchers wanted to know: does the order (retrieval first? generation first?) matter?

The title of their study says it all: “Sequence Matters! Retrieval practice before generative learning is more effective than the reverse order.”

Both 5 minutes later and the next day, students who did retrieval practice first remembered more than those who came up with examples first (and, more than the control group).

For a variety of statistical reasons, I can’t describe how much better they did. That is: I can’t say “These student scored a B, and these score a B-.” But, they did “better enough” for statistical models to notice the difference.

And so, very tentatively, I think we teachers can plan lessons in this way: first instruct, then have students practice with retrieval, then have them practice with generation.

Wait, Why “Tentatively”?

If the research shows that “retrieval first” helps students more than “generation first,” why am I being tentative?

Here’s why:

We can’t yet say that “research shows” the benefits of a retrieval-first strategy.

Instead, we can say that this one study with these German college students who learned definitions of words suggests that conclusion.

But: we need many more studies of this question before we can spot a clear pattern.

And: we’d like some 1st grade students in Los Angeles, and some 8th grade students in Reykjavik, and some adult learners in Cairo before we start thinking of this conclusion as broadly applicable.

And: we’d like to see different kinds of retrieval practice, and different kinds of generative learning strategies, before we reach a firm conclusion.

After all, Garvin Brod has found that different generative learning strategies have different levels of effectiveness in various grades. (Check out this table from this study.)

To me, it seems entirely plausible that students need to retrieve ideas fluently before they can generate new ideas with them: hence, retrieval practice before generative learning.

But, “entirely plausible” isn’t a research-based justification. It’s a gut feeling. (In fact, for various reasons, the researchers had predicted the opposite finding.)

So, I think teachers should know about this study, and should include it in our thinking.

But, we shouldn’t think it’s an absolute conclusion. If our own students simply don’t learn well with this combination, we might think about switching up the order.

TL;DR

Students learn more from retrieval practice, and they learn more from generative learning strategies.

If we want to combine those individual strategies, we’ll (probably) help students more if we start with retrieval practice.

And: we should keep an eye out for future research that confirms — or complicates — this advice.


Roelle, J., Froese, L., Krebs, R., Obergassel, N., & Waldeyer, J. (2022). Sequence matters! Retrieval practice before generative learning is more effective than the reverse order. Learning and Instruction, 80, 101634.

Brod, G. (2020). Generative learning: Which strategies for what age? Educational Psychology Review, 1-24.