The Better Choice: Open- or Closed-Book Quizzes
Andrew Watson

Psychology research offers lots of big ideas for improving student learning: self-determination theory, or the spacing effect, or cognitive load theory.

Once we make sense of that research, we teachers work to translate those big ideas into practical classroom strategies.

In some cases, we can simply do what the researcher did. In most cases, however, we have to adapt their test paradigm to our specific classroom world.

So, for example, Nate Kornell explored the spacing effect with flashcards. He found that 1 deck of 20 cards produced more learning than 4 decks of 5 cards. Why? A deck of 20 cards spaces practice out more than a deck of 5 cards.
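If it helps to see the arithmetic, here’s a minimal sketch (my own toy illustration, not Kornell’s actual materials) of why deck size changes spacing: a card in a 20-card deck comes back around only after 19 other cards, while a card in a 5-card deck returns after just 4.

```python
# A toy illustration (mine, not Kornell's materials) of how deck size changes
# spacing. Assume a student cycles through a deck in order, four times.

def review_trials(deck_size, passes, card_index=0):
    """Trial numbers on which one particular card comes up."""
    return [p * deck_size + card_index for p in range(passes)]

print("20-card deck:", review_trials(20, 4))  # [0, 20, 40, 60] -- long gaps
print(" 5-card deck:", review_trials(5, 4))   # [0, 5, 10, 15]  -- short gaps
```

Those longer gaps between repetitions are the spacing that boosts learning.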

That “big idea” gives teachers a direction to go.

But: we should not conclude that 20 is always the right number. Instead, we should adapt the concept to our circumstances. 20 flashcards might be WAY TOO MANY for 1st graders. Or, if the concepts on the cards are quite simple, 20 might be too few for college students studying vocabulary.

Translating Retrieval Practice

We know from many (many) studies that retrieval practice boosts learning.

In brief, as summarized by researcher Pooja Agarwal, we want students to pull ideas out of their brains, not put them back in.

So, students who study by rereading their notes don’t learn much; that’s putting ideas back in. Instead, they should quiz themselves on their notes; that’s pulling ideas out.

This big idea makes lots of sense. But, what exactly does that look like in our classrooms?

Over the years, teachers and researchers have developed lots of suggestions. (You can check out Dr. Agarwal’s site here for ideas.)

Thinking about retrieval practice, researchers in Germany asked a helpful question. In theory, closed-book quizzes ought to generate more learning than open-book quizzes.

After all: if my book is closed, I have to pull the information out of my brain. That’s retrieval practice.

If my book is open, I’m much likelier simply to look around until I find the right answer. That’s not retrieval practice.

These researchers wanted to know: does this sensible prediction come true?

The Results Please

Sure enough, closed-book quizzes do produce more learning. This research team retested students twice: one week after, and eight weeks after, they heard the information in a lecture.

Sure enough, the students who took closed-book quizzes did substantially better than those who took open-book quizzes. (The Cohen’s d values were above 0.80.)
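For readers who want a concrete sense of what “above 0.80” means: Cohen’s d is simply the difference between two group means divided by their pooled standard deviation. Here’s a minimal sketch with invented quiz scores (not the study’s data):

```python
# Cohen's d: the standardized difference between two group means.
# The scores below are made up for illustration; they are not the study's data.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

closed_book = [80, 85, 75, 90, 70, 80]  # hypothetical delayed-test scores
open_book   = [74, 79, 69, 84, 64, 74]
print(round(cohens_d(closed_book, open_book), 2))  # 0.85 -- a "large" effect
```

By convention, a d of 0.80 or higher counts as a large effect.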

In brief: we now have one more research-supported strategy for creating retrieval practice.

As always, I think we should be careful to think about limits on such research.

In the first place, this study took place with college students. If you teach younger students, and your experience tells you that an open-book strategy will work better under particular circumstances, you might ask a trusted colleague for a second opinion. Research like this gives us excellent guidance, but it can’t answer all questions.

In the second place, other variables might come strongly into play. For instance: stress. If your school culture has always allowed open-book quizzes, your students might freak out at the prospect of a closed-book alternative. If so, the benefits of retrieval practice might be lost to anxiety overload.

In this case, you’ll need to take the time to explain your reasoning, and to ease your students into new learning habits.

In any case, we can be increasingly confident that many varieties of retrieval practice produce the desirable difficulties that help students learn. (For a fun exception to this rule, click here.)

 

Studying Wrong Answers Helps Learn the Right Ones
Andrew Watson

With teaching as with baking, sometimes you should follow steps in a very particular order. If you don’t do this, and then that, and then the other, you don’t get the best results.

Two researchers in Germany wanted to know if, and when, and how, students should study incorrect answers.

To explore this question, they worked with 5th graders learning about fractions. Specifically, they taught a lesson about comparing fractions with different denominators.

(When studying this topic, students can’t rely on their instincts about whole numbers. For that reason, it’s a good subject for seeing how students update conceptual models.)

They followed three different recipes.

One group of 5th graders saw only correct answers.

A second group saw both correct and incorrect answers.

A third group saw correct and incorrect answers, and was specifically instructed to compare the correct and incorrect ones.

Which recipe produced the best results?

The Judges Have Made Their Decision

As the researchers predicted, the third group learned the most. That is: they made the most progress in updating their conceptual models.

In fact: the group prompted to compare right and wrong answers learned more than the group that saw only the right answers. AND they learned more than the group that saw (but were not prompted to compare) right and wrong answers.

In other words: the recipe is very specific. For this technique to work, students should first get both kinds of information, and second be instructed to compare them.

Important Context

I’ve held off on mentioning an important part of this research: it comes in the context of problem-based learning.  Before these 5th graders got these three kinds of feedback, they first wrestled with some fraction problems on their own.

In fact, those problems had been specifically designed to go well beyond the students’ mathematical understanding.

The goal of this strategy: to make students curious about the real-world benefits of learning about fractions with different denominators in the first place.

If they want to know the answer, and can’t figure it out on their own, presumably they’ll be more curious about learning when they start seeing all those correct (and incorrect) answers.

As we’ve discussed before, debates about direct instruction and problem-based learning (or inquiry learning) often turn heated.

Advocates of both methods can point to successes in “their own” pedagogy, and failures in the “opposing” method.

My own inclination: teachers should focus on the relevant specifics.

In the link above, for example, one study shows that PBL helps 8th graders think about deep structures of ratio. And, another study shows that it doesn’t help 4th graders understand potential and kinetic energy.

These German researchers add another important twist: giving the right kind of instruction and feedback after the inquiry phase might also influence the lesson’s success.

Rather than conclude one method always works and the other never does, we should ask: which approach best helps my particular students learn this particular lesson? And: how can I execute that approach most effectively?

By keeping our focus narrow and specific, we can stay out of the heated debates that ask us to take sides.

And: we can help our students learn more.

How Can We Encourage Girls to Pursue STEM Disciplines?
Andrew Watson

When we see alarming statistics about gender disparities in STEM disciplines, we quite naturally wonder how to fix this imbalance.

(This hope – by the way – isn’t simply a do-goody desire to sing “It’s a Small World After All.” If we believe that men and women can contribute equally to a scientific understanding of our world, then every girl discouraged is a contribution lost.

In other words: we ALL benefit if boys and girls contribute to science.)

So, how can we encourage girls to participate in science?

To answer this question, we might first answer a related question: what discourages girls in the first place?

If we can undo the discouragement, we are – indirectly but effectively – encouraging.

So, what discourages girls?

Is Science Education Itself the Problem?

Here’s a disturbing possibility.

When students learn about genetics, and specifically about the genetics of sex differences, they might infer that genders have a fixed, absolute quality. All boys (and no girls) are this way; all girls (and no boys) are that way.

It’s in the genes, see?

This set of beliefs, in turn, might reinforce a fixed mindset about gender and ability.

Through this causal chain, a particular science curriculum might itself discourage girls from pursuing science.

Yikes!

Researcher Brian Donovan and his team explored this question in a recent study. To do so, they asked students to read different lessons about genes and sexual dimorphism.

Some 8th – 10th graders learned about the genetics of human sexual difference.

Others learned about the genetics of plant sexual differences.

Others read a curriculum that explicitly contradicted the notion that genetic sex differences directly cause differences in intelligence and academic ability.

Did these curricular differences have an effect?

The Results Envelope Please

Unsurprisingly, students who learned that we can’t draw a straight line from genes to gender roles and abilities believed that lesson.

To make the same point in reverse: students who studied a seemingly “neutral” scientific curriculum – “we’re just talking about genes here” – drew unsupported conclusions about absolute differences between men and women.

Amazingly, this finding held true both for the students who studied the genetics of human sexual differences AND those who studied plant sexual differences.

WOW.

Perhaps surprisingly, students who learned that genetic sex differences don’t cause gendered ability differences also expressed a greater interest in science.

In particular, the girls who studied the “genetics only” lesson expressed meaningfully less interest in a science major than those who got the alternative lesson. (The two lessons neither encouraged nor discouraged the boys.)

But, Why?

Here’s the likely causal chain:

A science curriculum that focused “purely” on genetics seemed to suggest that men and women are utterly different beings.

Students who read this “pure” lesson inferred that some human abilities – like, say, scientific competence – might differ between genders.

This inference, in turn, made gender stereotypes (e.g., “men do better at science than women”) more plausible.

And so, the women who got that seemingly neutral science lesson, discouraged by the stereotype it reinforced, felt less inclined to pursue science.

By this roundabout route, a traditional science lesson might itself discourage students from learning science.

Alternative Explanations

Of course, the topic of gender differences – especially in the realms of math and science – can generate lots of energetic debate.

When I asked Donovan for alternative explanations for his findings, he was quick to emphasize that we need lots more research in this field. His is the first study done on this specific question. As always, teachers shouldn’t assume that any one study has found THE answer.

Some people do in fact argue that math and science ability (or interest) differs by gender because of genes. (Dr. Donovan explicitly rejects an explanation that moves directly from genes to gender differences.)

Here’s a recent book review by Lise Eliot, emphasizing that gender differences in brain regions

a) are often exaggerated and mis-reported, and

b) result from societies that emphasize gender differences.

For others – like Simon Baron-Cohen – that argument goes too far. Another recent study suggests that brains differ by gender in utero — that is, before socialization can have strong effects upon them.

Teaching Implications

Donovan’s research suggests that teachers can and should do more to be sure we’re not discouraging some students from particular academic interests and career paths.

For one set of practical suggestions, this interview with Sapna Cheryan outlines several ways we can promote “ambient belonging” in our classrooms.

Taking Notes with Graphic Organizers
Andrew Watson

We’ve blogged (quite energetically) about the difference between handwritten and laptop notes.

Of course, other note-taking differences merit investigation as well.

For example: if students take handwritten notes, is it better to give them:

a complete lecture outline,

a partial lecture outline,

a bare-bones lecture outline,

or

a complete graphic organizer,

a partial one, or

an empty one?

Over at the Learning Scientists, Carolina Kuepper-Tetzel explores this question, and adds some thoughts of her own.

One Man’s Experience

This article particularly caught my eye because it applies so directly to my own work.

When I talk with teachers, students, or parents about brains, I always provide them with option #5 above: an incomplete graphic organizer.

My goal: reduce working memory load. (I’m always focused on reducing extraneous working memory load.)

The informal feedback I get is strongly positive. Many teachers, in fact, tell me that they’ve started using the same form with their own students.

When you read Dr. Kuepper-Tetzel’s post, you’ll see how well (if at all) my practice accords with the research we have.

Do Collaborative Projects Reduce or Increase Working Memory Stress?
Andrew Watson

Should teachers ask students to work on projects in teams?

This question generates a great deal of heat.

Many education thinkers advocate for the benefits of teamwork. Others insist that learning happens one brain at a time, and so should not be cluttered with interference from other brains.

Working Memory: Blessings and Curses

Working memory allows humans to hold and reorganize facts and ideas in temporary mental storage.

When you do a word problem, you must decide which parts should be translated into an equation. (Those decisions take WM.) You have to recall the appropriate equation to use. (Ditto.) And, you must plug the correct data into the correct formula before you can arrive at an answer. (Re-ditto.)

Composing a new sentence in a foreign language? Lots of working memory demands.

Comparing Paul Laurence Dunbar’s poetry with that of Countee Cullen? Yup.

Learning how to tell time? Once again – lots of working memory involved.

In other words, WM allows students to do practically everything that we want them to do in school.

And yet, this working memory blessing co-exists with a powerful curse: we just don’t have very much of it.

You probably can alphabetize the five days of the work week. You probably can’t alphabetize the 12 months of the year. The first task lies within WM limits; alas, the second goes way beyond them.

Collaboration’s WM Dangers

In a recent article, Paul Kirschner and others consider the WM benefits and perils of group work.

(These scholars, especially John Sweller, have elaborated “cognitive load theory” to explain the relationship between long-term memory, WM, and the external world of perception and experience. See here for a review.)

One important peril: the working memory demands created by collaboration. When students work together, they have to negotiate roles. They must create joint mental models. They have to schedule and prioritize and debate.

All these “musts” take up precious working memory space. The result might be that students get better at negotiating, modeling, and prioritizing. But, the WM devoted to those tasks might make it harder for them to learn the content at the heart of the project.

Of course: you might reasonably want your students to focus on the social-emotional skills. But, if you wanted them to focus on Shakespeare or Boyle’s law, then the project might not produce the results you hoped for.

Collaboration’s WM Benefits

At the same time, Kirschner & Co. also see working memory upsides to collaboration.

A particular cognitive task might include quite stiff WM demands. If the group includes members with the right kinds of background knowledge, then the WM chores can be divided up and managed more effectively.

Student A carries this part of the WM load.

Student B carries that part.

Student C takes care of the tricky last bit.

In this way, the WM whole can be greater than the sum of the parts.

In other words: if teachers can organize group projects so that a) the WM difficulties of collaboration remain low, and b) the benefits of sharing WM burdens remain high, then such collaboration can truly help students learn.

Putting It Together

Kirschner’s article concludes with a list of key variables for teachers to track: task complexity, domain expertise, team size, and so forth.

Be aware that cognitive load theory gets a little jargony, and you’ll need some time to learn the lingo before the article makes sense.

However, if you can devote that time, I think you’ll benefit from its practical suggestions, and helpful frameworks for planning students’ collaborative learning.

There’s No Polite Way to Say “I Told You So”
Andrew Watson

Back in 2014, Pam Mueller and Dan Oppenheimer made headlines with their wittily titled study “The Pen Is Mightier Than The Keyboard.”

In that study, they found that students learn more from taking handwritten notes during a lecture than from laptop notes. Their conclusions spawned a thousand gloating posts. And (I don’t doubt) a multitude of well-intentioned anti-laptop policies.

Since I first read the study, I’ve been shouting that its conclusions simply don’t hold up.

Why?

Because M&O’s conclusions hold water only if you believe students can’t learn new things.

(That’s a very strange belief for teachers to have.)

If you believe that students can learn new things, then you believe that they can learn to take laptop notes correctly.

(“Correctly” = “rewriting the lecture’s main points in your own words; don’t just transcribe verbatim”)

If they do that, then this famous study actually suggests laptop notes will enhance learning, not detract from it.

You can find a summary of my argument — and its limitations — here.

Today’s News

Scholars have recently published an attempt at replication of Mueller & Oppenheimer’s study.

The results? Not much.

In the quiet language of research, they conclude:

“Based on the present outcomes and other available evidence, concluding which method [handwriting or laptops] is superior for improving the functions of note-taking seems premature.”

Not so much with the mighty pen.

By the way: a study from 2018 also concluded that — except in special circumstances — it just didn’t make much difference which method students use.

Why I Care

Perhaps surprisingly, I’m not an ardent advocate of laptop notes. Or, for that matter, of handwritten notes.

I advocate for teachers making classroom decisions informed by good research.

In this case, the Mueller and Oppenheimer study contains a perfectly obvious flaw. I have yet to meet anyone who doesn’t think a) that students can learn good note-taking skills, and b) that if they do, the study’s conclusions make no sense.

And yet, very few people have time to dig into research methodology. As a result, this one study has confirmed many teachers in their belief that technology harms learning during note-taking.

That statement might be true. It might be false. But this one study doesn’t give us good data to answer the question.

As a result, teachers might be taking laptops away from students who would learn more if they got to use them.

In brief: bad research harms learning.

I hope that this most recent study encourages teachers to rethink our classroom practices.

Two Helpful Strategies to Lessen Exam Stresses
Andrew Watson

Exam stress bothers many of our students. Sadly, it hinders students from lower socio-economic status (SES) families even more.

As a result, these students struggle — especially in STEM classes. And, this struggle makes it harder for them to enter these important (and lucrative!) fields.

Can we break this cycle somehow?

Reducing Exam Stress: Two Approaches

Christopher Rozek tried a combination of strategies to help lower-SES science students manage exam stress.

This research stands out for a number of reasons: in particular, it included a large sample (almost 1200 students). And, it took place in a school, not a psychology lab. That is, his results apply to the “real world,” not just a hermetically sealed research space.

Rozek worked with students taking a 9th grade biology class. Before they took the two exams in the course, Rozek had students write for ten minutes.

One group spent their ten minutes writing about their current thoughts and feelings. This approach lets students “dump” their anxiety, and has been effective in earlier studies. (By the way: this earlier research is controversial. I’ve written about that controversy here.)

Another group read a brief article showing that the right amount of stress can enhance performance. This reading, and the writing they did about it, helps students “reappraise” the stress they feel.

A third group did shortened versions of both “dumping” and “reappraising” exercises.

And the control group read and wrote about the importance of ignoring and suppressing negative/stressful emotions.

So, did the “dump” strategy or the “reappraise” strategy help?

Dramatic Results

Indeed, they both did.

For example, Rozek and Co. measured the effect these strategies (alone or together) had on the exam-score gap between high- and low-SES students.

The result? They cut the gap by 29%.

Rozek also tracked course failure. Among low-SES students, these strategies cut the failure rate by 50%.

(In the control group, 36% of the low SES students failed the class; in the other three groups, that rate fell to 18%. Of course, 18% is high — but it’s dramatically lower than 36%.)
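If you’re wondering where “cut the failure rate by 50%” comes from: the drop is measured relative to the control group’s rate. A quick sketch using just the two percentages above:

```python
# Percentage drop in a rate, relative to the control condition.
def relative_reduction(control_rate, treatment_rate):
    return (control_rate - treatment_rate) / control_rate * 100

print(relative_reduction(36, 18))  # 50.0 -- the failure rate was cut in half
```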

In his final measure, Rozek found that — after these interventions — low-SES students evaluated their stress much more like the high-SES students. The gap between these ratings fell…by 81%.

All this progress from a 10-minute writing exercise.

Classroom Guidance to Reduce Exam Stress

If you’ve got students who are likely to feel higher levels of anxiety before a test, you might adapt either (or both) of these strategies for your students.

The best way to make these strategies work will vary depending on your students’ age and academic experience.

You might start by reviewing Rozek’s research — click the link above, and look for the “Procedure” section on page 5. From there, use your teacherly wisdom to make those procedures fit your students, your classroom, and you.

Does Drawing a Simple Picture Benefit Memory?
Andrew Watson

If a picture is worth 1000 words, how many words is drawing a picture worth?


More specifically, Jeffrey Wammes & Co. have been exploring this question: is it true that drawing benefits memory? If I draw a picture of a word, will I remember it better than if I simply write that word down several times?

To explore this question, Wammes and his team have run a series of studies over the last several years. Basically, they’re trying to disprove their own hypothesis. If they can’t disprove it…well, it’s increasingly likely to be true.

The basic studies took a fairly simple form. Students saw a word and then spent 40 seconds drawing a picture of it. Or, they saw a word and spent 40 seconds writing it down several times.

Which words did they remember better? Yup: the words that they had drawn.

This effect held up not only in a psychology lab, but also in a college lecture hall.

Drawing Benefits Memory: More Advanced Studies

This hypothesis makes a kind of rough-and-ready sense, for a number of reasons.

For instance, it just seems plausible that drawing benefits memory because visuals aid memory. Or, because drawing requires a greater degree of cognitive processing than simply writing.

So: perhaps drawing is but one example of these other effects.

Wammes and Co. wanted to see if that’s true. (Remember: they’re trying to disprove their hypothesis.)

So, they repeated the study several more times. In some cases, students drew pictures for some words and looked at pictures of other words.

Or, in another study, they drew pictures of some words and wrote down key features of other words. (Writing down key features requires higher levels of processing.)

In every case, they found that drawing produced greater benefits than each alternative strategy. Students remembered more words that they had drawn than words they had processed in all those other ways.

Classroom Implications

What should classroom teachers do with this information?

In the first place, keep in mind that we’re still in early days of testing this technique. Much of this research has focused on nouns that are relatively easy to draw: say, “apple.”

At the same time, Wammes ran one study where students either drew or copied verbatim definitions of words. For instance, “stratoscopes” are “airborne telescopes that are mounted on high altitude balloons.” Once again, drawing led to better memory than simple copying.

Wammes’s team is currently exploring drawings of more abstract words: I hope to see those results published soon.

With these caveats in mind, I think we can plausibly use this approach in our classrooms. If you think a word, definition, concept, or process can plausibly be drawn, give your students a chance to “review by drawing.”

Or, if you’ve built in a moment for retrieval practice, encourage students to include a drawing as part of their retrieval.

You might conclude that a particular topic doesn’t lend itself to drawing. As an English teacher, I’m not immediately sure how to draw “ode” or “concatenation” or “litotes.”

But, if a word or concept seems drawable to you, you might give students a chance to try out this mnemonic aid.

A Final Note

I emailed Dr. Wammes with a few questions about his research. In his reply, he included this quite wonderful sentence:

“There certainly will be situations where it [drawing] doesn’t work, I just unfortunately haven’t found them yet.”

Too often, teachers can take research findings as absolute injunctions. When we learn about the 10-minute rule, we think: “okay, I have to change it up every ten minutes!”

But, that’s just not true.

Psychology findings will benefit some of our classroom situations, some of our students, some of our lesson plans, some of our schools.

But, almost no research finding always applies. We have to translate and adapt and tinker.

The field of Mind, Brain, Education is a partnership: teachers learn from researchers, and researchers learn from teachers.

So, when you try this technique in your classroom, keep track of your results. If you pass them on to me, I’ll let the researchers know.

 

 

Research Summary: The Best and Worst Highlighting Strategies
Andrew Watson

Does highlighting help students learn?

As is so often the case, the answer is: it depends.


The right kind of highlighting can help. But, the wrong kind doesn’t help. (And, might hurt.)

And, most students do the wrong kind.

Today’s Research Summary

Over at Three Star Learning Experiences, Tim Surma & Co. offer a helpful overview of highlighting research.

The headlines: highlighting helps students if they highlight the right amount of the right information.

Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.

Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.

Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.

Sometimes students think that highlighting itself is studying. Instead, the review of highlighted material produces the benefits. (Along with the decision-making beforehand.)

Right information:

Unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.

Suggestions and Solutions

Surma & Co. include several suggestions to help students highlight more effectively.

For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.

(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)

And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.

Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.

You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.

For other suggestions about highlighting, especially as a form of retrieval practice, click here.

 

New Research: Personal Best Goals (Might) Boost Learning
Andrew Watson

Some research-based suggestions for teaching require a lot of complex changes. (If you want to develop an interleaved syllabus, you’re going to need some time.)


Others couldn’t be simpler to adopt.

Here’s a suggestion from researchers Down Under: encourage your students to adopt “personal best goals.”

The Research

In a straightforward study, Andrew Martin and Australian colleagues asked 10- to 12-year-olds to solve a set of math problems. After each student worked for one minute, she learned how well she had done on that group of problems.

Students then worked that same set of problems again. Martin measured their improvement from the first to the second attempt.

Here’s the key point: after half of the students heard their score, they got these additional instructions:

“That is your Personal Best score. Now we’re going to do these question again, and I would like you to set a goal where you aim to do better on these questions than you did before.”

The other half of the students simply heard their score and were told to try the problems again.

Sure enough, this simple “personal best” prompt led to greater improvement than in the control group.

To be clear: the difference was statistically significant, but relatively small. The Cohen’s d was 0.08 — lower than typically gets my attention.

However, as the researchers point out, perhaps the structure of the study kept that value low. Given the process — students worked the same problem sets twice — the obvious thing for students to do is strive to improve performance on the second iteration.

In other words: some students might have been striving for “personal bests” even when they weren’t explicitly instructed to do so.

In my own view, a small Cohen’s d counts heavily against advice that is difficult to put into practice. So, if interleaving leads to only a small bump in learning, it might not be worth it. As noted above, interleaving takes a lot of planning time.

In this case, the additional instruction to “strive for your personal best” has essentially no cost at all.

Classroom Implications

Martin’s study is the first I know of that directly studies this technique.

(Earlier work, well summarized by Martin, looks at self-reports by students who set personal best goals. That research is encouraging — but self-reports aren’t as persuasive as Martin’s design.)

For that reason, we should be careful and use our best judgement as we try out this idea.

For example:

I suspect this technique works when used occasionally, not constantly.

In this study, the technique was used for the very short term: the personal best goals applied to the very next minute.

One intriguing suggestion that Martin makes: teachers could encourage personal best goals for the process, not the result. That is: the goal could be “ask for help before giving up” rather than “score higher than last time.”

One final point stands out in this research. If you’re up to date on your Mindset research, you know the crucial difference between “performance goals” and “learning goals.”

Students with “performance goals” strive, among other things, to beat their peers. Of course, “personal best goals” focus not on beating peers but on beating oneself. They are, in other words, “learning goals.”

And, we’ve got LOTS of research showing that learning goals result in lots more learning.