The Benefits of “Testing” Depend on the DEFINITION of “Testing.” And the TIMING. And…
Andrew Watson

Whenever I want to start a food fight at a faculty meeting (admit it: you know the feeling), I contemplate shouting: “What do we all think about testing?”

Almost certainly, several colleagues will decry the use of discriminatory high-stakes tests that stress and mis-categorize our students.

Others will angrily retort that research into retrieval practice shows that well-designed tests help students learn. Heck, “retrieval practice” yields great benefits because of “the testing effect.”

Still others will heatedly contrast formative assessment with summative assessment.

Like the Goddess of Discord with my Golden Apple, I will have inspired an enduring battle.

Of course, this battle results from confusion about definitions.

Scholars who champion retrieval practice or formative assessment might be said to “favor testing,” but they favor very specific kinds of testing. The testing they champion (probably) looks nothing like the high-stakes test that my first respondents so loathe.

In other words: my colleagues don’t necessarily disagree with each other. Because this one word has different meanings, they’re probably arguing about different topics without realizing it.

Confusion Gets Clearer

Even if I narrow my question to “pre-testing,” I’ve still created many opportunities for confusion.

I might, for instance, “pre-test” my students about the myth of the Golden Apple because I want to know what they already know.

My goal, in other words, isn’t to evaluate them. Instead, I want to align my lessons with their current knowledge. After all, I need one lesson plan for a class which has never heard of Zeus, Aphrodite, or Troy; and a completely different lesson plan for a class that read The Iliad last year.

I’m “pre-testing” as an early step in my own lesson planning.

On the other hand, we have a fascinating research pool suggesting that “pre-testing” itself might help students learn. That is: the act of taking that pretest can improve their ultimate understanding of the material later in the unit.

Amazingly, according to this research pool, these pre-tests benefit students even if they get all the answers wrong. (Of course they get the answers wrong. They haven’t yet done the unit with me.)

For instance, Dr. Lindsay Richland and Co. pre-tested students on a passage about color-blindness. They found that students who took a pretest (and got all the answers wrong) ultimately learned more than students who used that extra time to study the passage. (As I’ve written earlier, I love this study because Richland works SO HARD to disprove her own hypothesis.)

Getting the Specifics JUST RIGHT

So far, we’ve seen that the benefits of testing depend on the definition of testing. While we know that some tests stress and demotivate students, we’ve also got research suggesting that a very specific kind of pre-testing might help students learn.

Of course, we know that psychology research always includes boundary conditions: a teaching technique that works in one set of circumstances might not work in others.

So, for instance, a teaching technique that helps 3rd graders learn might not help college students. Or, a strategy to help students write well might not help them learn to pirouette in dance class.

We know that there will be boundaries for this (very specific kind of) pre-testing. What are they?

As is so often the case, this question has led to complexity and controversy. For instance: several scholars have found that pretesting helps students make new connections only if they already know a little bit about the material.

A study from 2014, however, suggests that pre-testing helps students even if they don’t know anything about the material.

For teachers, this distinction matters.

If students need at least a little prior knowledge for pretesting to be helpful, we will use this technique less often. If, however, they don’t need that prior knowledge, then our decision to limit its use robs them of a chance to learn.

To use this technique correctly, we really need to know the right answer.

Today’s Research: Activating Prior Knowledge

A recent study, led by Dr. Tina Seabrooke, tries to sort through this intricate and important question. In fact, because the details require such nuanced distinctions, she ended up running five different experiments. Only by considering all five together could she and her team reach a conclusion.

Of course, with five different studies underway, Seabrooke’s work has lots of nooks and subtleties to explore. Rather than wander through them all, let me cut to the chase:

TL;DR

Pre-testing will probably help students learn and understand a topic better if they already know something about it.

If students don’t know anything about the subject, pretests don’t help much.

More specifically: pretests might help them remember some words from the questions, and some words from the answers. But — crucially — pretests won’t help students make connections between the questions and the answers.

Of course, we really want students to make those connections. Any definition of “understanding” a topic will include the ability to answer meaningful questions about it.

You might think about pretesting this way:

Pretests help students activate useful prior knowledge. If they don’t have relevant prior knowledge, then pretests don’t have anything useful to draw upon.

Final Word

Research into “pretesting” is ongoing, and we’re still learning new and useful information.

I suggest that teachers use this technique from time to time as a way to activate prior knowledge. I wouldn’t require it as part of my daily routine.

And: I would keep my eyes peeled for the next relevant study. We’ve still got lots more to learn on this subject…

Can We Improve Our Students’ Executive Function? Will That Help Them Read Better?
Andrew Watson

Here’s a fun exercise. I’m going to give you a list of words. You try to sort them into two groups, based on the sound they begin with:

cup, bag, bread, can, box, cookie, cake, bucket, corn, beans, crate, banana

Presumably you came up with /k/ sounds and /b/ sounds:

cup, can, crate, cookie, cake, corn

beans, bread, banana, box, bag, bucket

Okay, now go back and RE-SORT those words into two groups, based on the category they belong to.

Presumably you came up with containers and foods:

cup, can, crate, box, bag, bucket

cookie, cake, corn, beans, banana, bread

If you succeeded, BRAVO! You demonstrated cognitive flexibility: an executive function that allows you to change your thought process mid-stream.

Believe it or not, we have to learn this particular skill.

In the video below, for instance, 3-year-olds sort cards according to their color (“red goes here, blue goes there”). They’re usually good at that. However, when the rules change to focus on shape (“trucks go here, flowers go there”), they struggle to follow the different instructions.

https://www.youtube.com/watch?v=tXZau5VIIvU

Why? Because they haven’t yet developed the executive function of cognitive flexibility.

New Research: Improving Reading

For a number of reasons, we might think that this general executive function (cognitive flexibility) supports a specific academic skill (reading).

If that’s true, then maybe we can help struggling readers by training their cognitive flexibility. (This possibility relies on several assumptions; the scholars who did this work have lots of research supporting each one.)

To test this possibility, Kelly Cartwright & Co. had teachers spend several weeks training a group of 2nd – 5th  graders in cognitive flexibility.

Basically, those students repeated that word-sorting/re-sorting exercise you did at the top of this post. And, they tried a more complicated fill-in-the-blank version of that task as well.

The results?

Compared with other struggling readers, these students got better at cognitive flexibility. And — here’s the big news — they got better at reading as well. (More specifically: they didn’t get better at individual word recognition, but they got better at reading comprehension and grade level reading.)

So, in this research, Cartwright’s team found that training a particular executive function helps struggling readers do better.

Technically speaking, that’s awesome.

As Always, the Caveats

First: as Dan Willingham says in his Twitter bio, “One study is just one study, folks.” Even if Cartwright and Co. did everything right, it’s possible their results are a fluke. We won’t know until many other scholars succeed in replicating and extending this finding.

Second: We shouldn’t extrapolate too far based on this study. We don’t know if training other executive functions would help struggling readers. We don’t know if training EF benefits typical readers, people first learning to read, or already-sophisticated readers.

Those questions are important — but not addressed directly by this research.

Third: Both reading instruction and executive function are hotly controversial topics. (Heck, I wrote a post about a month ago questioning the very idea of a “general” executive function.) I wouldn’t be surprised if this research (or my summary of it) prompted stern rebukes from scholars/practitioners with a different understanding of reading/EF processes.

I wouldn’t even be surprised if those stern rebukes were correct. If you’ve got an alternative perspective (and some research behind it), please let me know.

But, with those caveats in mind, this research strikes me as exciting and potentially powerful. Any strategy to help struggling readers should get our attention. One that a) costs essentially no money, b) doesn’t take very long, and c) is easy to implement might be a real boon to schools, students, and readers.

Watch this space…

The Best Length of Time for a Class [Repost]
Andrew Watson

Quite consistently, this post has been among the most searched for and most popular on the blog.

Teachers and administrators REALLY want to know: What is the optimal amount of time for our students to meet? What’s the very best schedule?

Here’s the best answer I have:


I met yesterday with several thoughtful teachers who had resonant questions about education research.


How do we balance factual learning and deep thinking?

What’s “the right amount of stress” during a test?

How can we promote collaboration while honoring individual differences?

And:

What’s the optimal class length?

This question comes up often. Should we have lots of short classes, so every subject meets every day? Should we have a few longer classes, so that we can dig deeply into a particular topic without interruption?

Debates sometimes fall along disciplinary lines. Foreign language and math teachers often want frequent class meetings; English and History teachers typically like bigger chunks of time for discussions.

Science teachers just gotta have 80 minutes to run a lab well.

But: what does research show?

Class Length: What Research Tells Us

As far as I know, we just don’t have a clear answer to that question.

Over at the Education Endowment Foundation, for example, they’ve investigated the benefits of block scheduling: that is, a few long periods rather than several short ones.

The finding: we can’t really say. Or, to quote EEF: “There is no consistent pattern in the evidence.”

More precisely:

The evidence suggests that how teachers use the time they are allocated is more important than the length of lesson or the schedule of lessons, and hence that the introduction of block scheduling is unlikely to raise attainment by itself.

By implication, a change away from block scheduling shouldn’t raise attainment either.

The point is not how long we teach but how well we teach with the time we’ve got.

For this reason, I often counsel schools and teachers: before you change your schedule, study human attention systems.

Once teachers know how attention works — and, it’s A LOT more complicated than we might have thought — we’ll be much better at helping students learn. (If you have the chance to attend a Learning and the Brain session about attention: RUN, don’t walk.)

Class Length: What Research Can’t Tell Us

Research doesn’t answer this question, I think, because it can’t. There’s no one correct answer.

If you teach 2nd graders or 7th graders or 11th graders, you’ll probably find that different lengths of time work better.

If you teach in cultures that inculcate patience and concentration, longer classes will work better than in cultures with a more get-up-and-go kind of pace.

The number of students in the class might matter.

The experience of the teacher almost certainly matters.

When your school starts investigating schedules, therefore, I suggest you start with these essentials:

First: study human attention.

Second: don’t design “the optimal schedule.” Design the optimal schedule for your school and your students. It might not work at anyone else’s school, but it doesn’t need to.

A schedule that works for you and your students is the closest to optimal that you can get.

Laptop Notes or Handwritten Notes? Even the New York Times Has It Wrong [Reposted]
Andrew Watson

You’ll often hear the claim: “research says students remember more when they take notes by hand than when they use laptops.”

The best-known research on the topic was done in 2014.

You’ll be surprised to discover that this conclusion in fact CONTRADICTS the researchers’ own findings. Here’s the story, which I wrote about back in 2018…


Here’s a hypothetical situation:

Let’s say that psychology researchers clearly demonstrate that retrieval practice helps students form long-term memories better than rereading the textbook does.

However, despite this clear evidence, these researchers emphatically tell students to avoid retrieval practice and instead reread the textbook. These researchers have two justifications for their perverse recommendation:

First: students aren’t currently doing retrieval practice, and

Second: they can’t possibly learn how to do so.

Because we are teachers, we are likely to respond this way: “Wait a minute! Students learn how to do new things all the time. If retrieval practice is better, we should teach them how to do it, and then they’ll learn more. This solution is perfectly obvious.”

Of course it is. It’s PERFECTLY OBVIOUS.

Believe It Or Not…

This hypothetical situation is, in fact, all too real.

In 2014, Pam Mueller and Dan Oppenheimer did a blockbuster study comparing the learning advantages of handwritten notes to laptop notes.

Their data clearly suggest that laptop notes ought to be superior to handwritten notes as long as students learn to take notes the correct way.

(The correct way is: students should reword the professor’s lecture, rather than simply copy the words down verbatim.)

However — amazingly — the study concludes:

First: students aren’t currently rewording their professor’s lecture, and

Second: they can’t possibly learn how to do so.

Because of these two beliefs, Mueller and Oppenheimer argue that — in their witty title — “The Pen is Mightier than the Laptop.”

But, as we’ve seen in the hypothetical above, this conclusion is PERFECTLY OBVIOUSLY incorrect.

Students can learn how to do new things. They do so all the time. Learning to do new things is the point of school.

If students can learn to reword the professor’s lecture when taking notes on a laptop, then Mueller and Oppenheimer’s own data suggest that they’ll learn more. And yes, I do mean “learn more than people who take handwritten notes.”

(Why? Because laptop note-takers can write more words than handwriters, and in M&O’s research, more words lead to more learning.)

And yet, despite the self-evident logic of this argument, the belief that handwritten notes are superior to laptop notes has won the day.

That argument is commonplace in the field of psychology. (Here’s a recent example.)

Even the New York Times has embraced it.

The Fine Print

I do need to be clear about the limits of my argument:

First: I do NOT argue that a study has been done supporting my specific hypothesis. That is: as far as I know, no one has trained students to take reworded laptop notes, and found a learning benefit over reworded handwritten notes. That conclusion is the logical hypothesis based on Mueller and Oppenheimer’s research, but we have no explicit research support yet.

Second: I do NOT discount the importance of internet distractions. Of course students using laptops might be easily distracted by Twinsta-face-gram-book. (Like everyone else, I cite Faria Sana’s research to emphasize this point.)

However, that’s not the argument that Mueller and Oppenheimer are making. Their research isn’t about internet distractions; it’s about the importance of reworded notes vs. verbatim notes.

Third: I often hear the argument that the physical act of writing helps encode learning more richly than the physical act of typing. When I ask for research supporting that contention, people send me articles about 1st and 2nd graders learning to write.

It is, I suppose, possible that this research about 1st graders applies to college students taking notes. But, that’s a very substantial extrapolation — much grander than my own modest extrapolation of Mueller and Oppenheimer’s research.

And, again, it’s NOT the argument that M&O are making.

To believe that the kinesthetics of handwriting make an essential difference to learning, I want to find a study showing that the physical act of writing helps high school/college students who are taking handwritten notes learn more. Absent that research, this argument is even more hypothetical than my own.

Hopeful Conclusion

The field of Mind, Brain, & Education promises that the whole will be greater than the sum of the parts.

That is: if psychologists and neuroscientists and teachers work together, we can all help each other understand how to do our work better.

Frequently, advice from the world of psychology gives teachers wise guidance. (For example: retrieval practice.)

In this case, we teachers can give psychology wise guidance. The founding assumption of the Mueller and Oppenheimer study — that students can’t learn to do new things — simply isn’t true. No one knows that better than teachers do.

If we can keep this essential truth at the front of psychology and neuroscience research, we can benefit the work that they do, and improve the advice that they give.

Growing Mindsets in Argentina? [Repost]
Andrew Watson

Since I first published this post a year ago, there’s been an important change to its argument: the study I’m writing about now HAS been published in a peer-reviewed journal.

As the Mindset skepticism movement gains further steam, I was struck by a comment on this study from the invaluable Dan Willingham. If I remember this correctly, he tweeted (roughly): “The mystery is that we haven’t been able to make this theory work in the classroom.”

Note the elegant middle ground this comment finds. Willingham acknowledges both the decades of scrupulous work that Dweck and her colleagues undertook and the fact that classroom interventions haven’t had the effect we’d like (for most students).

He doesn’t say (as others mean-spiritedly imply) that Dweck is a fraud. He doesn’t say (as others blithely imply) that we don’t need to worry about the rising number of classroom non-replications.

Instead, he says: “this intervention works under some circumstances, but not under others. We don’t yet know why. If we did, that would be SUPER helpful.”

I myself — as I argue below — think that we went too far thinking that upbeat posters on the wall would radically change students’ motivation, and now we’re going too far in arguing the whole Mindset thing is bunk.

OF COURSE one-time interventions don’t work. Are we truly surprised by this? PERHAPS creating a different school climate will work. Is that so preposterous an argument?

In any case: here’s what I wrote in July of 2019…


Mindset theory has faced increasing skepticism in recent years.

For four decades — literally! — Carol Dweck and other researchers ran thoughtful studies with thousands of students. Over and over, they found that students who think about their work in particular ways (shorthand, “growth mindset”) do better than those who don’t (“fixed mindset”).

Like other areas of psychology (think “power poses”), Mindset Theory has been caught up in the “replication crisis.”

In brief: if Mindset theory is true, then a mindset intervention should help no matter who does the intervening. It should work when Dweck’s team does it with her students, and when I do so with mine.

If it works only for Dweck, well, that doesn’t really help the rest of us.

And, several researchers have found that various strategies didn’t replicate.

A much publicized meta-analysis, published last summer, suggests that Mindset interventions had very small effects. (I myself think this meta-analysis has been over-interpreted; you can see my analysis here.)

Today’s News

Researcher and NYU professor Alejandro Ganimian has published research about a large-scale mindset intervention in Argentina.

Ganimian had 12th graders at 100 (!) schools read a passage arguing that “persisting through difficult challenges can develop the brain.”

The 12th graders then wrote “a letter to a classmate of their choice on the three main lessons from the reading and how they might help him/her.”

To keep the growth mindset message fresh, those letters were posted in the classroom.

He compared these students to 12th graders at 102 other schools that had not used this intervention.

The results? Nada. Nothin’. Bupkis.

Specifically:

This intervention had “no effect on students’ propensity to find challenging tasks less intimidating.”

It didn’t increase the likelihood that they would pay attention in class.

By some rough/indirect measures, it didn’t have an effect on the participants’ academic success.

As Ganimian sums up his results:

In nearly all outcomes, I can rule out even small effects. …

This study suggests that the benefits of growth mindset interventions may be more challenging to replicate and scale in developing countries than anticipated.

What Should Teachers Do?

First: two clarifying points. a) Ganimian’s research hasn’t been peer reviewed and published in a journal. It is currently a working paper, hosted on his website. [Ed. 8/2020: Ganimian’s research now has been published: see link at the top of this post.]

And b) I myself am not a neutral source in this debate. I’ve written a book about mindset research, and so I read Ganimian’s work through that lens.

Second: I think mindset strategies are likeliest to have an effect when used all together as a consistent, unified approach to student motivation.

That is: I’m not at all surprised that a “one-shot” intervention doesn’t have big results. (Some research has found success with “one-shot” interventions; I’ve always been skeptical.)

So, if you want to use mindset research in your classrooms, don’t do just one thing, once. A motivational poster really won’t accomplish much of anything.

Instead, understand the interconnecting strategies that promote a growth-mindset climate, and use them consistently and subtly. Heck, I can even recommend a book that will show you the way.

Third: Here’s what I wrote last October:

We should not, of course, ask mindset to solve all our problems. Nor should we ask retrieval practice to solve all problems. Or short bursts of in-class exercise.

No one change fixes everything.

Instead, we should see Mindset Theory as one useful tool that can help many of our students.

Obsessed with Working Memory [Reposted]
Andrew Watson

I’m on vacation for the month of August, and so we’ll be reposting some of our most-viewed articles.

We’re starting with our series on working memory: one of the most essential concepts from the field of cognitive science.


When I attended my first Learning and the Brain conference, I had never even heard of working memory.

Now, I obsess over working memory. And, I think all classroom teachers should join me.

Heck, I think everyone who cares about learning, curriculum, teacher training, and education should think about working memory. All. The. Time.

In this series of posts, I’ll start by defining working memory (WM) today. And in succeeding posts, I’ll talk about using that knowledge most helpfully.

Trust me: the more we think about WM, the more our students learn.

Working Memory: An Example

As an example of WM in action, I’m going to give you a list of 5 words. Please put those words in alphabetical order. IN YOUR HEAD. (That’s right: don’t write anything down…)

Okay, here’s the list:

Think of the five workdays of the week. (Hint: if you live in a Western society, the first one is ‘Monday.’)

Now, go ahead and put those five words into alphabetical order. Don’t peek. I’ll wait…

 

Probably you came up with this list:

Friday, Monday, Thursday, Tuesday, Wednesday

I do this exercise with teachers often. For most everyone, that’s fairly simple to do. I’m guessing you got it right quite easily.

Working Memory: A Definition

To succeed at that task, you undertook four mental processes.

First, you selected relevant information. Specifically, you selected the instructions that you read. And, you looked into your long-term memory to select the workdays of the week.

Next, you held that information. If you had let go of the instructions, or of the days of the week, you couldn’t have completed the task.

Third, you reorganized the days of the week according to the instructions. You started with a chronological list (Monday, Tuesday, Wednesday…), and converted it into an alphabetical list (Friday, Monday, Thursday…).

In many WM tasks (but not this one), you might not only reorganize, but also combine information. If, for instance, you added up 7+12+4+18+6 in your head, you selected, held, and combined those numbers into a new number.

So:

Working memory is a limited, short-term memory capacity that selects, holds, reorganizes, and combines information from multiple sources.

In a later post, I’ll talk about some finer points in the definition of WM. For the time being, focus on those four verbs: select, hold, reorganize, combine.

Working Memory: An Acronym

Because WM is so important, it would be great if there were a handy acronym. Happily, there is!

Select

Hold

REorganize

Kombine

What does that get you? SHREK! (I know: I misspelled ‘combine.’ But: I lived in Prague for a year, so you can forgive me for that useful alteration.)

Working Memory in the Classroom

Now, ask yourself: which of these classroom tasks requires working memory?

That is: in which of these cases do your students have to select, hold, reorganize, and/or combine information?

Solving a word problem.

Comparing W.E.B. du Bois and Booker T. Washington.

Transposing a song into a new key.

Applying a new phonics rule to various combinations of letters.

Choreographing a dance routine.

The correct answer is: ALL OF THEM.

In fact, practically everything we do in school classrooms requires working memory. Often, it requires A LOT of working memory.

To Sum Up

We use WM to select, hold, reorganize, and combine (SHREK) information.

Students use WM constantly in classrooms, for practically everything they do.

Simply put: no academic information gets into long-term memory except through working memory. It’s that important.

Up next: we’ll highlight key facts about WM. Then we’ll talk about using that knowledge in your teaching.


This series continues:

Part II: Three Core Ideas for Working Memory

Part III: Anticipating Overload

Part IV: Identifying Overload

Part V: Working Memory Solutions

Part VI: Working Memory Resources

Deliberate Practice Doesn’t Align with Schooling (Well: Not Precisely)
Andrew Watson

With his research into expertise – concert-level violinists, world-ranked chess players, elite runners – Anders Ericsson more-or-less created a new field of study.

How can we become amazingly awesome at challenging tasks? Ericsson has a system: deliberate practice.

As described in his book Peak (written with Robert Pool), deliberate practice has four key components:

Well defined, specific goals,

Focus,

Feedback (often from an expert, or an experienced teacher), and

Getting out of your comfort zone.

Gosh, that sounds a lot like school, doesn’t it? If we could structure our school thinking according to Ericsson’s research, perhaps we could help all our students become concert-level chemists, world-ranked fraction multipliers, and elite poetry analysts.

In fact, we already try to do so much of this, don’t we? We write goals on the board, encourage students to concentrate, give lots o’ feedback, and encourage students to try new things.

In other words: deliberate practice seems a perfect fit for schools. Obviously…

Or then again: maybe not.

The Popular Mistakes

Ericsson’s work has been most popularized by Malcolm Gladwell’s book Outliers. You might oversimplify that book with this sentence: “The Beatles succeeded so spectacularly because they practiced 10,000 hours in Hamburg.”

Peak briskly summarizes Gladwell’s inaccuracies:

First: 10,000 hours is a catchy round number, but lots of other numbers would have been just as accurate. 10,000 hours applies to one category of budding experts (musicians) at a particular stage (the age of 20) of learning one specific skill (the violin).

Second: even this much-touted number is correct only as an average. Half of the violinists whose data went into this number had practiced LESS than 10,000 hours.

Third: the Beatles weren’t practicing. They were performing. Ericsson’s research shows clearly: deliberate practice looks substantially different from ultimate successful performance.

These inaccuracies – important in themselves – also remind us: if we want to apply Ericsson’s research to our school work, we have to be more careful than Gladwell.

With that guidance in mind, let’s consider the fit between deliberate practice and education.

The GOALS Are Different

Research into deliberate practice focuses quite narrowly on specific kinds of learning.

Ericsson studied people wanting to be world champions in one (and only one) very specialized skill: chess, or hurdling, or concert piano playing.

He did NOT study what most teachers do: helping students be good enough at one skill to move on to the next.

For instance: I don’t want my students to win the “Angle-Side-Angle World Mathlympic Championship Gold Medal.” I want them to understand angle-side-angle well enough to move on to side-angle-side; and, ultimately, well enough to solve complex geometry proofs.

I don’t want them to win more National Man Booker Nobel Book Prizes than anyone else. I want them to write good enough Macbeth essays so they’ll write even better Kindred essays.

In fact, I don’t want them to focus single-mindedly on any one thing. I want them to make gradual progress in all sorts of disciplines and skills: pottery, cooperation, Spanish, history, citizenship, driver’s ed.

It’s possible that deliberate practice will improve all kinds of learning – including school learning. But: let’s not be like Gladwell and simply make that assumption.

Our UNDERSTANDING OF TEACHING Is Different

Ericsson puts it this way:

One of the things that differentiates violin training from training in other areas – soccer, for example, or algebra – is that the set of skills expected of a violinist is quite standardized, as are many of the instruction techniques.

Because most violin techniques are decades or even centuries old, the field has had the chance to zero in on the proper or “best” way to hold the violin, to move the hand during vibrato, to move the bow during spiccato, and so on.

The various techniques may not be easy to master, but a student can be shown exactly what to do and how to do it. (Peak, p. 91)

Does that sound like education to you? Heck, we can’t get the field to agree on teaching strategies for one of education’s most foundational skills: learning how to read. Almost everything in our world is up for contentious debate.

Note that Ericsson is explicit: instruction techniques for algebra do not fit the pattern he studies. We don’t have decades-old tried-and-true techniques for teaching algebra (or grammar, or bunting).

That’s why education is hanging out with psychology and neuroscience: to develop and understand new techniques.

The Role of FEEDBACK Is Different

Ericsson’s model follows a precise feedback pattern:

The student practices a discrete skill.

The teacher provides specific feedback.

The student tries again, and improves.

The student recognizes her immediate progress, and continues to grow.

In education, however, the cause/effect relationship between feedback and progress gets MUCH more complicated.

Specifically, we know that short-term performance does not reliably predict long-term learning. In a research review that I cite often, Nick Soderstrom makes this important claim:

“Improvements in [short-term] performance can fail to yield significant [long-term] learning—and, in fact, … certain manipulations can have opposite effects on learning and performance.” (Emphasis added)

In fact, we’ve got an entire field of memory research that focuses on “desirable difficulties.” The relevant headline: if students get everything right immediately, their work isn’t difficult enough. We need them to be struggling more to ensure long-term learning.

If Soderstrom and the “desirable difficulties” team are right – and I certainly think they are – then the feedback pattern essential to deliberate practice doesn’t align with the kind of teaching and learning that schools prioritize.

We Think Differently about FUN

Throughout Peak, Ericsson and Pool emphasize that deliberate practice requires determination and focus, and rarely results in fun.

Experts don’t become experts because they enjoy this work more. They keep going despite their lack of enjoyment.

For instance, he describes a study of participants taking a singing lesson. Those participants who were NOT professional singers felt relaxed, energized, and elated after the lesson; it allowed them to express themselves in a way they didn’t usually get to do.

However, the participants who WERE professional singers felt relaxed and energized, but NOT elated. They were working, not expressing themselves. In Ericsson’s words, “there was focus but no joy” (p. 151).

Schools, however, want at least a little fun – maybe even a little joy – during the day. We needn’t focus excessively on making everything delightful. But, more than a deliberate practice model would, we should keep in mind our students’ rightful need for connection and even elation.

In Conclusion

First: although I’m arguing that deliberate practice doesn’t necessarily promote the kind of learning that schools undertake, I do (of course!) admire this research pool, and Ericsson’s towering role in it.

Second: Education suffers from a strange problem right now: we’ve got too many varieties of plausible-sounding guidance.

The problem isn’t finding something to try. It’s deciding which of the dozens (hundreds?) of options to choose.

I certainly think that a deliberate practice model might be useful for teachers to know – especially teachers who focus on creating world-level experts.

But: I don’t think it should be the primary educational model for most of us.

We should think about managing working memory overload. And fostering attention. And creating the optimal level of desirable difficulty.

Let’s not be like Gladwell and simplistically apply Ericsson’s model to our work. Let’s find the parts that fit us perfectly, and use those to help students reach their Peak.


If you’re the sort of person who reads this blog, you’re also the sort of person likely to know that Anders Ericsson passed away at the beginning of July. In addition to being a world-renowned scientist, he was also famous for being an immensely kind person.

Certainly that was our experience here at Learning and the Brain. We have so many reasons to miss him.

Retrieval Practice is GREAT. Can We Make It Better?
Andrew Watson

By now you know that retrieval practice has lots (and lots) (and LOTS) of research behind it. (If you’d like a handy comprehensive resource, check out this website. Or this book.)

The short version: don’t have students review by putting information back into their brains — say, by rereading a chapter. Instead, have them pull information out of their brains — say, by quizzing themselves on that chapter.

It’s REALLY effective.

When we know that a technique works in general, we start asking increasingly precise questions about it.

Does it work for children and adult learners? (Yes.)

Does it work for facts and concepts? (Yes.)

Does it work for physical skills? (Yes.)

Does it work when students do badly on their retrieval practice exercises? Um. This is awkward. Not so much.

That is: when students score below 50% on a retrieval practice exercise, then retrieval practice is less helpful than simple review.

How do we fix this problem?

“Diminishing Cues” and Common Sense

Let’s say I want to explain Posner and Rothbart’s “Tripartite Theory of Attention.” In their research, attention results from three cognitive sub-processes: “alertness,” “orienting,” and “executive attention.”

Depending on the complexity of the information I provide, this explanation might get confusing. If a retrieval practice exercise simply asks students to name those three processes, they might not do very well.

Common sense suggests a simple strategy: diminishing cues.

The first time I do a retrieval practice exercise on this topic, I provide substantial cues:

“Fill in these blanks: Posner and Rothbart say that attention results from al______, or_____, and ex_______ at______.”

A few days later, I might ask:

“Fill in these blanks: Posner and Rothbart say that attention results from ______, _____, and _______  ______.”

A week later:

“What three sub-processes create attention, in Posner and Rothbart’s view?”

And finally:

“Describe how attention works.”

The first instance requires students to retrieve, but offers lots of support for that retrieval. Over time, they have to do more and more of the cognitive work. By the end, I’m asking a pure retrieval question.
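
If you build digital review materials (flashcard decks, quiz apps), the same idea is straightforward to automate. Here’s a minimal sketch in Python (purely illustrative, and not drawn from the research discussed below) of one way to generate prompts whose cues shrink with each review cycle:

```python
# Illustrative sketch: generate "diminishing cue" prompts for a target answer.
# The fraction of each word left visible shrinks on every review cycle,
# so early prompts offer heavy support and later prompts demand pure retrieval.

def cue(answer: str, support: float) -> str:
    """Return the answer with `support` (0.0-1.0) of each word's letters visible."""
    cued_words = []
    for word in answer.split():
        visible = round(len(word) * support)
        cued_words.append(word[:visible] + "_" * (len(word) - visible))
    return " ".join(cued_words)

# Target answer from the example above (Posner and Rothbart's three sub-processes).
answer = "alertness orienting executive attention"

# Four review cycles, from heavy scaffolding down to no cue at all.
for support in (0.5, 0.25, 0.1, 0.0):
    print("Attention results from: " + cue(answer, support))
```

The masking rule and review schedule here are assumptions made for illustration; the research described next compared diminishing-cue retrieval practice against plain retrieval practice and re-study, not any particular piece of software.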

“Diminishing Cues” and Research

So, common sense tells us this strategy might work. In fact, I know teachers who have stumbled across this approach on their own.

Here at Learning and the Brain, we like common sense and we REALLY like research. Do we have research to support our instincts?

Yes.

In 2017, two researchers put together an impressive combination of studies.

They looked at different study strategies: review, retrieval practice, diminishing-cues retrieval practice.

They tested participants after different lengths of time: right away, 24 hours later, a week later.

They tested different amounts of studying: 3 sessions, 6 sessions…

You get the idea.

Because they ran SO MANY studies, they’ve got LOTS of data to report.

The short version: “diminishing cues retrieval practice” ALWAYS helped more than traditional review (rereading the chapter). And it OFTEN helped more than plain-old retrieval practice (self-quizzing on the chapter).

If you want the details, you can check out the study yourself; it’s not terribly jargony. The process is a bit complicated, but the key concepts are easy to grasp.

To Sum Up

Retrieval practice helps students learn.

If we want to ensure that it works optimally, we should use it multiple times — and successively remove more and more scaffolding from the retrieval practice questions we ask.

Common sense and research agree.

What’s Better than Attention? Attention + LEARNING!
Andrew Watson

To learn in school, I need to pay attention.

More precisely, I need to pay attention to the subject I’m learning.

If I’m attending to …

…the sudden snowfall outside, or

…the spider on the ceiling, or

…the odd squeaking sound coming from the radiator,

then I’m not paying attention to…

…the Euler bridge problem, or

…the subjunctive mood, or

…the process for setting group norms.

We teachers wrestle with this problem every day. What can we do to help students pay attention so that they learn?

But Does It Work in Real-World Classrooms?

This urgent question has an obvious answer — and that obvious answer has obvious problems.

Obvious Answer: exercise. We’ve got lots of research showing that exercise enhances various neural processes essential to long-term memory formation.

And, we’ve got research — especially with younger children — that movement and exercise in class enhance attention.

Obvious Problems:

First, all that research doesn’t answer the essential question: “do movement and exercise help students learn?” We know they enhance attention. And we know that extra attention should boost learning. But: does it really work that way?

Second, most of that research on in-class exercise happens with younger students. What about older students? And, by “older,” I mean “older than 3rd grade.”

Wouldn’t it be great if someone looked at the effect of exercise on attention and learning in older students?

Good News, and More Good News

A research team in Canada has explored these questions. And, they did so with a helpfully clear and sensible research paradigm.

They invited college students (who are, indeed, older than 3rd graders) to watch a 50-minute lecture on psychology.

One group watched that lecture straight through, with no breaks.

A second group took 3 breaks, each one lasting five minutes. During those breaks, they played a fun video game (“Bejeweled”). That is: they DID take breaks, but they DIDN’T exercise during those breaks.

A third group also took 3 breaks, each one lasting five minutes. During those breaks, they did aerobic exercises: jumping jacks, high knees, etc. Like the second group, they DID take breaks. Unlike the second group, they DID exercise.

The results?

Lots o’ good news:

First: the exercise group were considerably more alert during the whole lecture than the other two groups. (That is: their heart rate was measurably higher.)

Second: the exercise group paid attention much better. They remained on-task about 75% of the time during the full lecture.

By way of contrast, the no-break group started at 60% on task, and fell to 40%. And the video-game group — who took a break but didn’t exercise — fell from 70% to 30%. YIKES.

Third: We care about alertness and attention only if they lead to more learning. Well: 48 hours later, the exercisers remembered more.

That is: they remembered 50% of the lecture, whereas the other two groups remembered 42%.  (50% doesn’t sound like a lot. But the point is: it’s considerably more than 42%.)

So, this study tells us that older students (like younger students) benefit from exercise during a lesson.

Specifically, they remain more alert, stay on task more, and learn more.

BOOM.

Final Thoughts

First: I think it’s helpful to see how each research study builds on previous ones. This study gives us important new information. But, it does so by drawing on and extending research done by earlier teams.

In educational psychology, no ONE study shows anything. Instead, each study builds incrementally on earlier ones — and creates a more interesting, more useful, more complex, even more contradictory picture.

Second: in this study, the students watched video lectures. Their experience wasn’t EXACTLY like online learning. But: it was an interesting relative of online learning.

Should we extrapolate from this study to encourage our online learners to move? That doesn’t sound crazy to me.

Third: one interesting wrinkle in this study. The students who took breaks — including those who exercised — took MORE TIME than those who didn’t. The “no break” group took 50 minutes; the “exercise break” group took 65.

So: they learned more — AND it took more time for them to do so. We have to be honest with ourselves about that finding.

My own view: I’d rather give up some class time for exercise if it means students attend and learn more. And, if that means I have to present less content, I’m okay with that exchange.

After all: it doesn’t matter if I teach material that students don’t learn. My job is to help them remember. Exercise breaks do just that.

What’s the Ideal Size for Online Discussion Groups?
Andrew Watson

We’re all learning lots about online teaching these days: new software (Zoom), new vocabulary (“asynchronous”), new fads (teaching in pajamas).

In many cases, we’re just going with our instincts here. Relying on our experience, we know to [insert technique here].

But because this is Learning and the Brain, we’d like some research to support whatever technique we inserted.

I’ve been reading about “online social presence” lately, and the research here offers lots of helpful insights.

Defining and Exploring “Social Presence”

Unlike many terms in the world of educational psychology (I’m looking at you, “theory of mind”), “online social presence” means what it sounds like.

When we’re together in a classroom, my students and I have a social presence. We’re aware of ourselves as a functioning group. We rely on lots of familiar cues — body language, facial expression, direction of gaze — to navigate those social relationships.

Of course, those familiar cues barely function online. What does “direction of gaze” mean when my laptop camera sees me looking at the lower left image in a Zoom video array?

Many teachers I talk with instinctively know to focus on building a greater sense of online classroom community. Breakout rooms and discussion boards, for instance, let students work with each other in smaller groups.

While it’s hard to participate effectively in a discussion with 30 people — heck, it’s hard to think clearly in an online discussion that large — the right-sized group might foster better conversations and closer connections.

But: what’s the “right-sized group”?

Instincts and Research

In informal discussions, I keep hearing “four or five.”

For no explicit reason, it just seems plausible that we can track an online conversation among the five of us. More than that gets hard to follow; fewer than that gets awkwardly quiet.

Unsurprisingly, researchers have been looking at this question.

One research team, for instance, measured their students’ evaluations of “social presence” in an online masters class in — appropriately enough — “Assessment and Data Analysis.”

For half the term, these students participated in online discussion boards with all 16 members of the class.

For the second half, their discussion groups shrank to 4 or 5.

What did the researchers learn?

Initial Findings, and Beyond

Sure enough, the smaller groups made a big difference.

According to the students’ own ratings, they felt that the small groups enhanced social presence. And, intriguingly, they felt a greater sense of commitment to this smaller group. (Large groups often create a sense of “social loafing,” where participants feel that others will do the heavy lifting.)

In the students’ own words:

“I felt as though I became very familiar with another student’s ideas and thoughts when I was in a small group of four.”

“This format allows us to connect more to previous conversations instead of having to rehash material that was discussed in earlier conversations.”

In other words, we’ve got some research that supports our teacherly instincts: 4 or 5 students works well to promote online social presence.

Always with the Caveats

At the same time, I think we should keep an open mind on this topic.

First: we don’t have lots of research here. I’ve found a few studies, and they all point in roughly the same direction. But we don’t have nearly enough research to have strong opinions, or to be granular in our recommendations.

That is: we don’t know if different age groups benefit from different numbers in small groups. We don’t know about cultural differences. We don’t know if physics discussions benefit from larger numbers than do … say … history discussions. (I don’t know why that would be true, but we don’t have research either way.)

Second: I think we should focus particularly on the students’ age. Most of the research I’ve seen focuses on college students.

This study I’ve briefly summarized looked at graduate students — who had, by the way, signed up for an online masters program. In other words: they’re probably especially open to, and especially interested in, online discussions.

So, I wouldn’t be surprised if this research doesn’t apply directly to 2nd graders.

Because I’m a high school teacher, I don’t have a prediction if younger students would do better in smaller or larger groups. If you teach K-8, I hope you’ll let me know what your predictions would be.

In Sum

Teachers can foster social presence in online classrooms by having relatively small breakout groups and discussion boards.

Until we get more detailed research, we can follow our teacherly instincts to right-size those groups. The research we have suggests that 4 or 5 is the place to start.