Why Don’t Students Like School? (2nd ed.) by Daniel T. Willingham
Rebecca Gotlieb

Why don’t students like school? Daniel T. Willingham, Professor of Psychology at the University of Virginia, addresses this and nine other significant questions about how the human mind works and the implications for teaching in his book aptly titled, “Why Don’t Students Like School?” The second edition of this book, with new information about technology now included, was recently released. Willingham’s overarching advice to teachers is to “know your students;” the book explains what about one’s students a teacher should strive to know and how to act on that knowledge.

The ten cognitive science principles for teachers that Willingham highlights are principles that he argues: (1) are true all the time and across contexts, (2) have robust supporting evidence (which Willingham organizes into comprehensive lists to help readers learn more), (3) can impact student performance, and (4) have actionable implications for teachers. By offering insights into students’ minds, Willingham aims to help teachers improve their practice not by prescribing how to teach, but by explaining what teachers might expect from their students based on the teaching decisions they make. Willingham excels at translating cognitive science for an educator audience; this evidence-based, comprehensive synthesis will be of great utility for many educators, and the thought-provoking questions he includes throughout make this book an excellent option for a teacher book club or discussion group.

Much of the reason that students don’t like school (aside from social challenges), Willingham argues, is that school rarely finds the sweet spot between content that is too hard and content that is too easy. While we are naturally driven to satiate our curiosity, we also find thinking difficult and will default to what we remember rather than puzzling through something new. To make students more inclined to learn, teachers can pique curiosity by explaining the question behind the content they wish students to learn, connect with students in other ways they find engaging (e.g., through comedy, stories, and demonstrating care), and avoid overloading the amount of information students must hold in mind at one time.

Many teachers are concerned that teaching students the kinds of facts they need to perform highly on standardized tests undermines efforts to help them develop deep thinking skills or to think like a “real scientist” or a “real historian.” Willingham argues that students are well-served to learn the background information that they do in school because they need these facts to become strong readers and critical thinkers who are able to connect disparate ideas, hold information in mind, and develop sound predictions. Additionally, Willingham explains that the more one knows, the more one is able to acquire additional knowledge.

Experts (people who can create new knowledge in their field after practicing in it for many years) think qualitatively differently than novices do. As such, we should strive for students to develop a deep understanding, but not to do exactly what experts do, since expert behaviors may not be fruitful without that deep understanding in place. To facilitate deep understanding and abstract thinking, educators can help students link new content to information they already know, provide diverse and familiar examples of a concept, and offer analogies. We remember what we think about; to help students remember content, educators should reflect on what their lessons make students think about. Persuading students of the value of knowing the content, using a story-like arc in lectures, and engaging students emotionally can all facilitate long-term memory. For memorization of basic information, Willingham lists several common mnemonics (e.g., acronyms) that can be helpful.

The role of intelligence in education is a perennial and thorny issue. Importantly, Willingham notes the inherent worth of all students regardless of intelligence or talents. He provides convincing evidence that intelligence can change with hard work and is more affected by our environment than by our genes. Focusing on the learning process rather than raw abilities, teaching that hard work pays off and that proficiency requires practice, and normalizing failure can lead to a boost in students’ academic performance. (Willingham notes, however, that the effects of a so-called “growth mindset” on academic performance are small and there is not sufficient evidence about how to teach this mindset successfully in school.) While there is true variability in students’ intellectual abilities, Willingham shows that there are not consistent differences across people in the way they learn (i.e., their “learning style”). Willingham argues that the content to be taught, more than students’ preferences about learning format, should drive the way one teaches a lesson.

Educators have heard too many promises about a tech-based education revolution. In spite of this, Willingham argues that technology has not fundamentally changed how our minds work and the effects that it does have on cognition are often unexpected. Willingham suggests that before adopting new technologies in schools, educators consider the evidence about the tool. Screen time can take students away from devoting their time to activities that might provide greater cognitive benefit and a reprieve from social pressures. For these reasons, it may be beneficial to limit technology use.

After devoting considerable attention to the minds of students, Willingham concludes by considering how teachers can support their own cognitive and professional growth. Teaching, like any cognitively demanding skill, must be practiced to lead to improvement. That practice should include measures such as isolating individual subskills to refine, receiving feedback from knowledgeable colleagues, trying new techniques, watching tapes of one’s own teaching, learning more about human development, and recognizing that the process of improving may be hard on one’s ego.

Why Don’t Students Like School? is great summer reading for teachers looking to improve their practice. For other works by Daniel Willingham, see The Reading Mind and Raising Kids who Read.

Willingham, D. T. (2021). Why Don’t Students Like School? (2nd ed.). Hoboken, NJ: Jossey-Bass.


Let’s Talk! How Teachers & Researchers Can Think and Work Together
Andrew Watson

Once you say it out loud, it’s so obvious:

Teachers benefit from learning about psychology and neuroscience.

AND, psychologists and neuroscientists (in certain fields) benefit from learning more about classroom teaching.

These beliefs inspire our conferences and seminars and summer institutes, and they motivate this blog.

However — and this is a big however — conversations among these disciplines can prove a real challenge.

Why? So many reasons…

… These conversations often start with the assumption that teachers should be junior partners in this collaborative work. (Hint: we’re equal partners.)

… Each of these disciplines — including ours — starts with its own assumptions, builds off its own traditions, and papers over its own shortcomings.

… We all use our own complex terminology and vexing acronyms. (Quick: does ToM result from activity in the vmPFC, and should we discuss it in our IEPs?)

Given all these muddles (and many more), it’s impressive these conversations happen at all.

Today’s Resource

Dr. Cindy Nebel invited me to discuss these questions for a podcast over at The Learning Scientists.

We explore all these problems, along with dual coding, working memory overload, the importance of boundary conditions, and the complexities of motivation research.

We agree about many topics, disagree about a few, and solve as many problems as possible. (As a bonus, the link has a discount code for my newest book, The Goldilocks Map: A Teacher’s Quest to Evaluate ‘Brain-Based’ Teaching Advice.)

I’ve known Dr. Nebel for several years now. She and the other Learning Scientists do great work in this translation field, and they DON’T start with the assumption that teachers are junior partners.

I hope you enjoy our conversation!

A Beacon in the Mindset Wilderness
Andrew Watson

For a few years now, I’ve been in the Mindset wilderness.

Three years ago, I spent lots of time tapping the brakes.

“Yes,” I’d say, “we do have plenty of good research behind this strategy. HOWEVER, let’s be realistic. A wall covered in upbeat slogans (“YET!”) just isn’t going to revolutionize education.”

I got a lot of side-eyes.

In 2018, several careful scholars published a blockbuster pair of meta-analyses, throwing doubt on the whole mindset enterprise. Their grim conclusions:

First: students’ mindset has little effect on their academic performance, and

Second: mindset intervention programs don’t provide much benefit.

Suddenly, I started sounding like a mindset enthusiast.

“Yes,” I’d say, “a focus on mindset won’t revolutionize education. HOWEVER: incremental increases in motivation can add up over time. We have SO FEW strategies to help with motivation, we shouldn’t ignore the ones that provide even modest benefits.”

I got even more side-eyes.

The Stickiest Wicket

In these conversations, one point has consistently created the greatest difficulties for my position.

Several mindset researchers have championed the efficacy of “one-shot interventions.”

That is: if students experience one carefully designed mindset-reshaping experience — a webinar, a presentation, an exercise of some kind — that “one shot” alone can help them transform a fixed mindset into a growth mindset.

I gotta say: I just don’t believe that.

My doubts stem not from research, but from experience. Having taught high-school students for thousands of years, I don’t think it ever happens that telling them something once meaningfully changes anything.

I don’t doubt the integrity of the researchers or the process they use. But their conclusion defies too much of my experience (and common sense) for me to take it on board.

Rarely do I use the “my experience trumps your research” veto; in this case, I’m really tempted.

What’s That? “A Beacon,” You Say?

A soon-to-be-published study — run by several of Team Mindset’s leading scholars — offers some support for this skepticism.

These scholars asked a perfectly sensible question: “can a one-shot mindset intervention help students whose teachers demonstrate a fixed mindset?”

That is: must the classroom context echo the explicit message of that one-shot intervention?

Or — in the words of the study — can the mindset “seed” grow in inhospitable “soil”? Are students (on average) independent agents who can overcome implicit classroom messages and act on their explicit mindset training?

To answer this question, the authors reviewed data from a very large study with more than 9000 high school students.

This study takes great procedural care to get the details right: students are randomly assigned to groups; teachers don’t know which student is in which group; teachers don’t know the hypothesis of the study — and so forth.

After a one-shot intervention at the beginning of 9th grade, researchers tracked students’ math grades at the end of the year.

The researchers also asked questions to learn about the teachers’ mindsets. They wanted to know: did the teachers’ mindset shape the students’ response to the intervention?

The results?

Context Always Matters

Initially, no.

Immediately after the one-shot intervention, students who saw the growth-mindset messages expressed higher degrees of growthiness. Those in the control condition did not. And the teachers’ mindsets didn’t influence those early results.

However — this is a big however — at the end of the year that final sentence wasn’t true.

Students who BOTH heard the growth-mindset messages AND had growth-mindset teachers saw higher math grades.

Students who heard the growth mindset message BUT had fixed-mindset teachers did not.

And, to repeat, those results came months after the intervention itself.

To me, these results make perfect sense. A one-shot message won’t help if the daily classroom routine constantly undermines it; that message might sink in if classroom routines reinforce it.

After all, as the authors wisely write, “no psychological phenomenon works the same way for all people in all contexts.” *

Next Question

This research suggests that teachers’ classroom work can sustain explicit mindset interventions.

Here’s my question: do students need that intervention in the first place? Is the teacher’s classroom practice enough?

I do share LOTS of research with my students: research into retrieval practice, and multitasking, and spacing. I DON’T even mention mindset research, or exhort them to embrace their inner growth mindset.

Instead, I simply enact the mindset strategies.

The classroom rewrite policy encourages and rewards multiple drafts.

I frequently comment on the benefits of cognitive struggle. (“Good news! If you got some questions wrong on that retrieval practice exercise, you’re likelier to learn the answers in the future. The right kind of practice will help you learn.”)

I regularly emphasize what I don’t know, and am excited when I learn something new. (I recently told my sophomores that I have NO IDEA how to interpret the symbolism of Tea Cake’s rabies in Their Eyes Were Watching God. One of my students promptly offered up an explanation; I’m genuinely enthusiastic to have his insight — and the class knows that!)

As I see it, growth mindset isn’t something to talk about. It’s something we demonstrate: quietly, un-fussily, daily.

I’m hoping that — someday — research will support this belief as well.


* Although most psychology studies can put off even the most determined reader, this one has been written (it seems) with a lay reader in mind. Although the technical sections are indeed quite technical, the early sections are easy to read: clear, logical, straightforward. If you’re interested in the topic, I recommend giving these early sections a read.

“Compared to What”: Is Retrieval Practice Really Better?
Andrew Watson

When teachers turn to brain research, we want to know: which way is better?

Are handwritten notes better than laptop notes?

Is cold-calling better than calling on students who raise their hands?

Is it better to spread practice out over time, or concentrate practice in intensive bursts?

For that reason, we’re excited to discover research that shows: plan A gets better results than plan B. Now we know what to do.

Right?

Better than What?

More often than not, research in this field compares two options: for instance, retrieval practice vs. rereading.

Often, research compares one option to nothing: starting class WITH learning objectives, or starting class WITHOUT learning objectives.

These studies can give us useful information. We might find that, say, brief exercise breaks help students concentrate during lectures.

However, they DON’T tell us what the best option is. Are exercise breaks more helpful than retrieval practice? How about video breaks? How about turn-n-talks?

When research compares two options, we get information only about the relative benefits of those two options.

For that reason, we’re really excited to find studies that compare more than two.

Enriching Encoding

A recent podcast* highlighted this point for me.

A 2018 study compared THREE different study strategies: rereading, enriched encoding, and retrieval practice.

Participants studied word pairs: say, “moon-galaxy.” Some of them studied by reviewing those pairs. Some studied with retrieval practice (“moon-__?__”).

Some studied with enriched encoding. This strategy urges students to connect new information to ideas already in long-term memory. In this case, they were asked, “What word do you associate with both ‘moon’ and ‘galaxy’?”

My answer to that question: “planet.” Whatever answer you came up with, you had to think about those two words and their associated ideas. You enriched your encoding.

Because this experiment looked at three different study strategies, it gives us richer insights into teaching and learning.

For instance, students who reviewed remembered 61% of the word pairs, whereas those who enriched their encoding remembered 75% (Cohen’s d = 0.72). Clearly, enriched encoding is better.

But wait, what about students who used retrieval practice?

Even Richer

Students in the retrieval practice group remembered 84% of their word pairs.

So, yes: “research shows” that enriched encoding is “better than review.” But it’s clearly not better than retrieval practice. **

In fact, this point may sound familiar if you read last week’s blog post about learning objectives. As that post summarized Dr. Faria Sana’s research:

Starting class with traditional learning objectives > starting class without traditional learning objectives

but

Starting class with learning objectives phrased as questions > starting class with learning objectives phrased as statements

In fact, Sana looked at a fourth choice:

Teachers immediately answer the questions posed in the learning objectives vs. teachers don’t immediately answer those questions.

It turns out: providing answers right away reduces students’ learning.

Because Sana studied so many different combinations, her research really gives us insight into our starting question: which way is better?

Friendly Reminders

No one study can answer all the questions we have. We ALWAYS put many studies together, looking for trends, patterns, exceptions, and gaps.

For instance, boundary conditions might limit the applicability of a study. Sana’s research took place in a college setting. Do her conclusions apply to 10th graders? 6th graders? 1st graders? We just don’t know (yet).

Or, if you teach in a school for children with a history of trauma, or in a school for students with learning differences, or in a culture with different expectations for teachers and students, those factors might shape the usefulness of this research.

By comparing multiple studies, and by looking for studies that compare more than two options, we can gradually uncover the most promising strategies to help our students learn.


* If you’re not following The Learning Scientists — their website, their blog, their podcast — I HIGHLY recommend them.

** To be clear: this study focuses on a further question: the participants’ “judgments of learning” as a result of those study practices. Those results are interesting and helpful, but not my primary interest here.

Making “Learning Objectives” Explicit: A Skeptic Converted?
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on student reports of their learning. Sadly, in the past, such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five paragraphs.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” (asking students questions about an upcoming reading passage, even though they don’t know the answers yet) yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

Students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats minded, Cohen’s d was 0.84! That’s HUGE!)
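For readers curious about that number: Cohen’s d is simply the difference between two group means divided by their pooled standard deviation, so d = 0.84 means the pretest group outscored the statement group by nearly a full standard deviation. A minimal Python sketch of the calculation, using made-up score lists (not data from Sana’s study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size: difference in means divided by the pooled standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation, weighting each sample variance by its degrees of freedom
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical test scores for two conditions (illustration only)
pretest_scores = [70, 65, 72, 68, 63, 71]
statement_scores = [55, 50, 58, 52, 47, 56]
print(cohens_d(pretest_scores, statement_scores))
```

By convention, d around 0.2 is considered small, 0.5 medium, and 0.8 large — which is why 0.84 counts as a big effect in education research.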

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.