L&B Blog – Education & Teacher Conferences
Learning Goals Reconsidered (No, Not THOSE Learning Goals)
Andrew Watson

I’ve been discussing a topic with colleagues in recent months, and want to share my thinking with you.

The outline for this blog post is:

An observation/question that has been nagging at me, then

A theory about the answer to the question, then

Possible implications of that theory.

Here goes:

A Puzzling Problem

In recent years, as I listen to discussions on BIG EDUCATIONAL TOPICS, I frequently find myself surprised by this truth:

Education is full of folks who are

  1. Obviously smart,
  2. Obviously well informed,
  3. Obviously concerned about their students, our profession, and our society, and
  4. FEROCIOUSLY, ANGRILY at odds with one another.

Seriously, if folks could punch one another online, Twitter would have LOTS of broken noses.

This observation leads to a straightforward question: why? Why do these ferocious disagreements persist?


If many of us think ably and know a lot and care deeply, it’s surprising that we disagree…and keep disagreeing. The heated arguments just don’t change much.

In many scientific fields, heated arguments result – over time, at least – in the ultimate persuasiveness of one case or another.

Plate tectonics used to be a controversial topic, with lots of heated rhetoric. Today, those debates among smart, knowledgeable, and caring people have resulted in something like consensus: the theory is almost certainly correct.

Surgeons now wash their hands before surgery. That practice was initially scorned, but now the anti-handwashing argument seems impossible to understand. OF COURSE one side prevailed in the debate.

Why haven’t educational debates followed this pattern? Why haven’t our disagreements led ultimately to agreements?

One Possibility

I’ve been discussing a possible answer with several friends: here goes.

I wonder if our debates remain so heated because we don’t agree on the GOALS of education. In fact, we aren’t even in the habit of discussing those goals – or the habit of connecting them to teaching practices.

Off the top of my head, I can imagine all sorts of big-picture goals for the millions of dollars and millions of hours our society devotes to creating and maintaining its educational system.

If I were to ask 100 people this question, I can imagine a wide range of answers: “the goal of our educational system is to…

  1. Create a workforce for the future,” or
  2. Help students understand and enact the word of God,” or
  3. Know the best that has been thought or said,” or
  4. Ensure that individual students develop to their fullest potential,” or
  5. Promote justice, equity, and peace,” or
  6. Preserve our culture and way of life,” or
  7. Raise scores on key benchmark assessments,” or
  8. Prepare children and society for an unpredictable future,” or
  9. Develop students who see themselves as readers, historians, scientists, etc.,” or
  10. Ensure that students know the curriculum,” or
  11. Create an informed and civic-minded electorate,” or
  12. Foster a love of learning so that students become life-long learners,” or …

…and so forth.

Perhaps the reason our debates about teaching strategies go nowhere is that we’re in fact trying to go different places.

That is:

I might read about a teaching strategy and think “Pish posh. That’s obviously bonkers. It simply won’t accomplish the core aims of education.”

And yet, the person proposing the strategy might well have entirely different aims. And – sure enough – the teaching strategy being proposed might achieve their core aims, if not mine.

If, for example, I practice the Awesome Watson Learning Method, I might do so because it fosters a love of learning (goal 12) and ensures that students see themselves as writers and programmers and doctors (goal 9).

A critic might respond: “that pedagogy won’t accomplish our goal!” And that criticism might be sincere, because the pedagogy doesn’t (let’s say) help students “know the best that has been thought or said” (goal 3). Yet because I’m not striving for goal 3, I’m genuinely vexed and puzzled by my critic’s (obviously incorrect) critique.

Humbling Implications

If I’m right that our “debates” simply talk past one another because we don’t share — or discuss — educational goals, that realization suggests several next steps.

Step A:

The next time I hear someone espouse a teaching method that strikes me as foolish, I should switch from contempt to curiosity.

Rather than “pish posh,” I should say: “That’s intriguing — tell me more!”

If I ask the right questions in an open-minded, non-snarky way, I might discover an entirely unexpected goal at the end of the process. I might not agree about the importance of that goal, but I might …

…understand why the other person champions it, and

…recognize that the teaching strategy I once thought foolish might in fact accomplish it.

Sadly, this “switch from contempt to curiosity” is really difficult. I will face the constant temptation to ask leading questions and trap my interlocutors into admitting that my goal surpasses theirs in wisdom and beauty.

(The best book I’ve read that discusses this problem is David McRaney’s How Minds Change. It has really shaped my thinking on this challenge.)

Step B:

Since 2008, I’ve been thinking about using scientific research — especially in psychology and neuroscience — to improve my teaching.

Obviously, this approach focuses on numerical measurements: calculations, graphs, statistics.

In other words: I believe that my teaching strategies accomplish my goals because I’ve got numbers that say so.

However, several of the big-picture goals listed above simply can’t be measured.

How would I know if the Awesome Watson Teaching Method…

… helps students become life-long learners?

… ultimately fosters civic engagement?

… encourages students to live and act according to God’s word?

The end point for these goals (and others) lies decades away — and will be influenced by THOUSANDS of other forces.

This fact, however, does not necessarily invalidate the potential importance of those goals.

Teachers might not be able to show a study — with p-values in the appropriate range, and a Cohen’s d above 0.2 — concluding that their teaching method promotes justice and peace. But that impossibility does not mean that their goal has no merit.
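To make that statistical vocabulary concrete: Cohen’s d is just the difference between two group means, scaled by their pooled standard deviation. Here’s a minimal sketch with entirely invented scores (the function and data are illustrative, not from any real study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-test scores for an intervention group and a control group
intervention = [78, 82, 75, 88, 91, 73, 84, 79]
control = [74, 70, 77, 69, 80, 72, 75, 68]

print(round(cohens_d(intervention, control), 2))
```

A d of 0.2 is conventionally read as a “small” effect; the point above still stands, though: a goal like “promote justice and peace” never yields two tidy lists of numbers to feed into a function like this.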

In other words: I’m attracted to a science-y approach to thinking about teaching practice, and I like being able to cite all those numbers. (92% of in-classroom studies show that retrieval practice promotes long-term memory better than control conditions!)

But science-y approaches can’t routinely dictate answers to moral or ethical questions.

Another Possibility?

Of course, I have a MUCH simpler explanation for the fact that many people disagree with me — often angrily:

Those other people could be daft, ignorant, and/or immoral.

That explanation has several benefits.

  • It’s easy to summarize.
  • It converts me from a person into a hero/protagonist.
  • It frees me from the need to listen to their foolish, ill-informed, morally-tainted ideas.

At the same time, I find this simpler explanation unsatisfying — because I disagree with many people who don’t strike me as daft or wicked.

Perhaps there’s a third explanation?

TL;DR

I’m trying to focus less on why others are wrong. I’m trying to focus more on their implied goals for education — goals that have led them to teaching advice that puzzles or alarms me.

When I understand their goals, I might better understand — and learn from — their teaching suggestions.

Perhaps you’ll join me in this effort — and let me know what you learn.


 

In case the title of this post doesn’t make sense: researchers in the world of mindset encourage less focus on performance goals (test scores, etc.) and more focus on learning goals (“look! I made progress!”).

This blog post isn’t about mindset-y learning goals, but about society’s broader goals for education.

Incremental Steps with Growth Mindset
Andrew Watson

The field of education often races to extremes, and the field of Growth Mindset has been an especially good example of this trend.

Back in 2006, when Carol Dweck published her book, schools rushed to be as Growth Mindset-y as possible. Posters adorned walls; Janelle Monáe made a Sesame Street video reminding children about “the Power of ‘Yet’.”


All those enthusiasts felt quite a shock in 2018, when two mega-meta-analyses crunched all the numbers and found that

a) mindset doesn’t really have much of an effect on academic performance, and

b) all those mindset interventions don’t really do anything anyway.

In some academic quarters, loud praise gave way to ridicule and — in some cases — snark. (The loaded claim that “all the research showing a positive effect was done by Dweck herself” simply isn’t true.)

Since then, competing camps wave conflicting studies to support their pro/anti Growth Mindset position.

I’d like to advocate for an alternative approach. I believe Dr. Dan Willingham said something like this:

“Some studies suggest that a growth mindset helps some students; other studies suggest that creating the conditions to develop and enact that mindset is REALLY TRICKY.

We shouldn’t simply abandon this approach. We should focus our efforts on finding when it does and doesn’t help which students, and how to foster those conditions for them.”

In other words: a growth mindset won’t promptly and easily cure all motivation problems. But let’s try to find where and how and whom it benefits.

With that goal in mind, I want to explore a recent study. My goal is NOT to say “this team is right; that one is wrong.”

Instead, I want to show how this study gives us reasons to be hopeful and curious — but should still not return us to the days of A Poster in Every Classroom.

Best Case Scenario

In this study from 2022, roughly 80 children aged 7-10 had two fMRI scans separated by four weeks.

They also filled out a mindset survey, agreeing or disagreeing with statements like “I can get better at math if I work hard to solve problems,” or “people are born smart or not smart, and there’s not much they can do to change that.”

For half of those children – the control group – that was that.

The other children — during the intervening four weeks — went through a specially designed math tutoring program.

This tutoring program emphasized progress and understanding, not simply scores or comparisons to others. If you know the growth mindset literature, you’ll recognize the terminology here: the program emphasized “mastery/learning goals,” not “performance goals.”

So, what did the research team find after four weeks?

Several important findings jump out from all the data and charts:

First:

BOTH GROUPS saw an increase in their growth mindset “score.” For instance, they were likelier to disagree with the statement that “people can’t do much to change their math ability.”

However – and this is an important however – the GROUP IN THE TUTORING PROGRAM saw a bigger change. If you think having a growth mindset is good, you’d interpret these data to say that the tutoring group “made more progress.”

In other words: contrary to that big-news meta-analysis from 2018, this study found that — under these conditions — growth mindset training did help students develop more of a growth mindset.

Second:

We care about mindset because it should motivate students to learn more. To say the same thing in different words: if students who do have a growth mindset learn as much as those who don’t, why would we focus so much energy on developing that mindset?

The research team wanted to know if students who had more of a growth mindset before the tutoring program learned more math during the tutoring program.

The technical answer to this question is: “yup.”
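A finding like that boils down to a positive correlation: children’s pre-tutoring mindset scores lined up with their math gains. Here’s a minimal sketch of that computation, using a hand-rolled Pearson correlation and entirely made-up numbers (none of this data comes from the actual study):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: pre-tutoring mindset score vs. math gain after four weeks
mindset = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
math_gain = [4, 9, 3, 12, 7, 10]

print(round(pearson_r(mindset, math_gain), 2))
```

An r near +1 would mean “more growth mindset going in, more math learned” — though, as always, correlation alone can’t tell us which way the causal arrow points.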

Third:

When the research team compared changes in fMRI scans after four weeks, they found that the changes in growth mindset correlated with specific changes in neural regions and networks.

If you want to get your neuro-geek on: in the scans of children with higher mindset scores, they found

  • greater activation at the top of the front of the cingulate cortex (“dorsal ACC”)
  • greater activation in the top of the right striatum
  • greater activation in the right hippocampus

They also found that changes in the circuitry connecting these regions “emerged as the strongest predictor of growth mindset gains.”

Recapping the Best Case

Yes: we have had reasonable doubts about the importance of mindset. (Reasonable doubts = that 2018 meta-analysis, among other studies.)

But, this study arrives at three striking conclusions:

a) a well-designed math tutoring program can foster a growth mindset,

b) growth mindset before the tutoring program results in greater math learning during that program, and

c) we see brain regions both activating and connecting differently in conjunction with growth mindset self-report.

Even for skeptics, that’s an impressive combination!

Pushing Back

I can see at least two reasons this study isn’t a slam dunk for Team Mindset.

Reason A:

When researchers compare two groups — as they did in this case — we want those groups to be as much alike as possible.

While these groups did resemble each other in lots of important ways (average age, average mindset score, average IQ, etc.), they differed in a crucial one: one group got something; the other group got nothing.

That is: one group received four weeks of tutoring. The control group simply went about “business as usual.”

They did not — for instance — get a different kind of tutoring that did NOT focus on a growth mindset.

We can therefore wonder if these students developed a growth mindset not because they got a special kind of tutoring, but because they got any tutoring. Maybe the tutoring, not the mindset ingredients, made the difference.

Reason B:

If you read this blog often, you know that I’m very wary of people who make strong claims based on neuroscience research.

Lots of people make claims like these: “when teachers do X, students get a hit of oxytocin. So everyone has to X!”

Here’s my argument: until we test X with actual students doing something in actual classrooms, we don’t know whether or not extra oxytocin does anything beneficial in these circumstances. (Yes, someone is ACHING to call oxytocin the “love hormone.” No, it’s really not.)

So, can we think of other reasons these students’ brain structures and networks might have changed?

Here’s a possibility: perhaps their brains responded to extra math tutoring.

Because they had different experiences for four weeks, it’s not wholly shocking that their brains developed differently from those in the control group.

In other words: just because this study includes complicated brain terminology does NOT mean that we must be persuaded by its conclusions. LOTS of people make strong claims about neuroscience; not all of them hold up well.

(To be fair: an earlier study with college students found the dorsal ACC to be an important part of the growth mindset network. This replication clearly makes the current neuro-claim more plausible.)

A Final Verdict

Now that I’ve made arguments both championing and questioning this study, you might reasonably want a clear answer.

Rather than provide false certainty, I’ll go a different direction.

As Dan Willingham (I think) said: we’re trying to figure out where, when, and with whom mindset interventions work.

Based on this study, we can say: “it seems likelier that youngsters given a particular kind of tutoring develop more of a growth mindset; it also seems likely that this mindset helps them learn math.”

That’s not a universal claim; it’s quite a narrow one.

To develop a more complete and persuasive understanding, we will need all sorts of incremental steps:

One research group will work with 5th graders in science classes.

Another will focus on the neuroscience of mindset in high-stress situations.

A third will turn its attention to adults who return to school to pursue a second career.

And so on.

Piece by piece, study by study, we will gradually accumulate a clearer mental model. In a few decades, we will probably be saying: “we used to talk about mindset in this crude, outdated, and puzzling way. But now that we understand this mental phenomenon so much better, we know that…”

And the advice that follows will be granular, targeted, and perhaps surprising to us who got our start making mindset posters.


Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., & Macnamara, B. N. (2018). To what extent and under which circumstances are growth mind-sets important to academic achievement? Two meta-analyses. Psychological Science, 29(4), 549–571.

Chen, L., Chang, H., Rudoler, J., Arnardottir, E., Zhang, Y., de Los Angeles, C., & Menon, V. (2022). Cognitive training enhances growth mindset in children through plasticity of cortico-striatal circuits. npj Science of Learning, 7(1), 30.

Mangels, J. A., Butterfield, B., Lamb, J., Good, C., & Dweck, C. S. (2006). Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model. Social Cognitive and Affective Neuroscience, 1(2), 75–86.

Even More Questions (3rd of a Series)
Andrew Watson

This blog post continues a series about research into questions.

I started with questions that teachers should ask BEFORE students’ learning begins: “pre-questions,” measuring prior knowledge.

I then turned to questions that we ask DURING early learning: retrieval practice, checking for understanding.

Now — can you guess? — I’ll focus on questions that we ask LATER in learning, or “AFTER” learning.

To structure these posts, I’ve been focusing on three organizing questions:

When to ask this kind of question? (Before, during/early, during/later)

Who benefits most immediately from doing so?

What do we do with the answers?

Let’s dive in…

A Controversy Resolved?

At some point, almost all teaching units come to an end. When that happens, teachers want to know: “how much did my students learn?”

To find out, we typically ask students questions. We might call these questions “quizzes” or “tests” or “assessments” or “projects.”


Whatever we call such questions, students answer by writing or saying or doing something.

Who benefits from all these activities? Well, here we arrive at a controversy, because reasonable people disagree on this point.

OVER HERE, some folks argue that assessments basically benefit school systems — and harm others. After assessments, school systems can…

  • sort students into groups by grade, or
  • boast about their rising standardized test scores, or
  • evaluate teachers based on such numbers.

I don’t doubt that, in some cases, assessments serve these purposes and no others.

OVER THERE, more optimistically, others argue that assessments can benefit both teacher and student.

Students benefit because

  • They learn how much they did or didn’t learn: an essential step for metacognition; and
  • The act of answering these questions in fact helps students solidify their learning (that’s “retrieval practice,” or “the testing effect”).

Teachers benefit because

  • We learn how much our teaching strategies have helped students learn, and
  • In cumulative classes, we know what kinds of foundational knowledge our students have for the next unit. (If my students do well on the “comedy/tragedy” project, I can plan a more ambitious “bildungsroman” unit for their upcoming work.)

In other words: final assessments and grades can certainly be critiqued. At the same time, as long as they’re required, we should be aware of and focus on their potential benefits.

Digging Deeper

While I do think we have to understand the role of tests/exams/capstone projects at the “end” of learning, I do want to back up a step to think about an intermediate step.

To do so, I want to focus on generative questions — especially as described by Zoe and Mark Enser’s excellent book on the topic.*

As the Ensers describe, generative questions require students to select, organize, and integrate information — much of which is already stored in long-term memory.

So:

Retrieval practice: define “bildungsroman.”

Generative learning: can a tragedy be a bildungsroman?

The first question asks a student to retrieve info from long-term memory. The second requires students to recall information — and to do mental work with it: they organize and integrate the parts of those definitions.

For that reason, I think of retrieval practice as an early-in-the-learning-process question. Generative learning comes later in the process — that is, after students have relevant ideas in long-term memory to select, organize, and integrate.

The Ensers’ book explores research into, and practical uses of, several generative learning strategies: drawing, mind-mapping, summarizing, teaching, and so forth.

In my thinking, those distinct sub-categories are less important than the overall concept. If students select, organize, and integrate, they are by definition answering generative learning questions.

For instance: the question “can a tragedy be a bildungsroman” doesn’t obviously fit any of the generative learning categories. But because it DOES require students to select, organize, and integrate, I think it fits the definition.

(I should fess up: technically, retrieval practice is considered a generative learning strategy. For the reasons described above, I think it’s helpful to use RP early in learning, and generative learning later in learning. My heresy could be misguided.)

“Generative learning” is a BIG category; teachers can prompt students to think generatively in all sorts of ways. A recent review by Garvin Brod suggests that some strategies work better than others for different age groups: you can check out those guidelines here.

TL;DR

In most school systems, teachers must ask some kind of summary questions (tests, projects) at the end of a unit. Such questions — if well designed — can benefit both teachers and students.

After students have a bedrock of useful knowledge and before we get to those final test/project questions, teachers should invite students to engage in generative learning. By selecting, organizing, and reintegrating their well-established knowledge, students solidify that learning, and make it more flexible and useful.


Brod, G. (2021). Generative learning: Which strategies for what age? Educational Psychology Review, 33(4), 1295–1318.


* Grammar nerds: if you’re wondering why I wrote “Zoe and Mark Enser’s book” instead of “Zoe and Mark Ensers’ book” — well — I found that apostrophe question a stumper. I consulted Twitter and got emphatic and contradictory answers. I decided to go with the apostrophe form that makes each Enser an individual — because each one is. But I could be technically wrong about that form.

The Best Way to Teach: When Clarity Leads to Muddle
Andrew Watson

Most teachers want to be better teachers. You’re probably reading this blog for research-based guidance on doing so.


I recently read a study that offers emphatic — and paradoxical — guidance. Exploring this research — as well as its paradoxes — might be helpful as we think about being better teachers.

Here’s the story.

A research team, led by Louis Deslauriers, worked with students in an introductory physics class at Harvard. This class was taught by an experienced professor who mostly lectured; he also supplemented the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

Let’s call this approach “interactive lecture.”

In Deslauriers’s study, students also attended two additional classes. One was taught with Method A and the other with Method B.

In Method A, an experienced professor:

  • presented slides
  • gave explanations
  • solved sample problems
  • strove for fluency of presentation

What about Method B? Another experienced teacher:

  • used principles of deliberate practice
  • instructed students to solve sample problems together in small groups
  • circulated through the room to answer questions
  • ultimately provided a full and correct answer to the problems

The researchers strove, as much as possible, to make the class content identical; only the pedagogy differed.

What did the researchers learn about the relative benefits of Methods A and B?

Paradox #1: Effective and Unloved

First off, the students learned more from Method B.

That is: when they solved problems in small groups, wrestled with the content, and ultimately heard the right answer, students scored relatively higher on an end-of-class multiple choice test. When they experienced Method A (the prof explained all the info and solved all the problems), they scored relatively lower.

But — paradoxically — the students preferred Method A, and believed that they learned more from it. They even suggested that all their classes be taught according to Method A — the method that resulted in less learning.

The researchers offer several explanations for this paradox. The headlines sound like this:

  • When students hear straightforward explanations and see clear/successful demonstrations of solution strategies (Method A), the content seems easy and clear. Students think they understand.
  • But, when they have to do the cognitive heavy lifting (Method B), class feels more difficult. Result: students worry they didn’t understand.
  • Because the students are — relatively speaking — novices, they don’t know enough to know when they understand.

Team Deslauriers, sensibly enough, suggests that we can help students appreciate and accept the more challenging methods — like Method B — if we explain the reasoning behind them.

I, by the way, take this suggestion myself. For instance: I explain the benefits of retrieval practice to my students. They don’t always love RP exercises, because retrieval practice feels harder than simple review. But they understand the logic behind my approach.

Paradox #2: Clarity vs. Muddle

Up to this point, Deslauriers and Co. pursue a sensible path.

They know that MOST college profs use Method A (the bad one), so they want those profs to change. To encourage that change, they undertake a study showing a better option: Method B!

Given these research results, Deslauriers and Co. offer two clear and emphatic suggestions:

First: teachers should use Method B teaching strategies, not Method A strategies.

Second: to counteract students’ skepticism about Method B, we should explain the logic behind it.

What could be more helpful?

Alas, these clear suggestions can lead to another muddle. This muddle results from the freighted NAMES that this study gives to Methods A and B.

Method B — the good one — is called “active.”

Method A — the bad one — is called (inevitably) “passive.”

So, this study summarizes its findings by saying that “active” teaching is better than “passive” teaching.

These labels create real problems with the research conclusions.

Because these labels lack precision, I can apply them quite loosely to any teaching approach that I believe to be good or bad.

For instance: recall the experienced professor who regularly teaches this physics course. He mostly lectures; he also supplements the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”

If I disapprove of that combination, I can call it “passive” — he mostly lectures!

If I approve, I can call it “active” — consider all those demonstrations, interactions, and conceptual questions!

These labels, in other words, are both loaded and vague — a perilous combination.

The peril arises here: literally no one in the world of cognitive science champions Method A.

EVERYONE who draws on cognitive science research — from the most ardent “constructivist” to the most passionate advocate for direct instruction — believes that students should actively participate in learning by problem solving, discussion, creation, and so forth.

Advocates for those two groups have different names for this mental activity: “desirable difficulties,” “productive struggle.” They think quite differently about the best way to achieve all that active thinking. But they all agree that students must struggle with some degree of difficulty.

Slippery Logic

This naming muddle creates unfortunate logical slips.

The study certainly suggests that Method B benefits students more than Method A. But it doesn’t suggest that Method B is better than other methods that might reasonably be called by the open-ended name “active.”

For instance: it doesn’t necessarily mean that “constructivism” is better than direct instruction. And yet — because of those highly flexible labels — the study can be misinterpreted to support that claim.

My concern isn’t hypothetical. Someone sent me this study precisely to support the argument that inquiry learning promotes more learning than direct instruction.

But: “Method B” isn’t inquiry learning. And direct instruction isn’t Method A.

The Big Picture

I said at the beginning of this post that teachers might draw on research to be better teachers.

I worry that readers will draw this inaccurate conclusion based on this study:

“Research proves that ‘active learning’ (like projects and inquiry) is better than ‘passive learning’ (like direct instruction).”

Instead, this study suggests that asking students to do additional, productive mental work results in more learning than reducing their mental work.

Champions of both projects/inquiry and direct instruction want students to do additional, productive mental work.

Those schools of thought have sharply different ideas about the best ways to accomplish that goal. But dismissing one of them as “passive” — and therefore obviously bad — obscures the important insights of that approach.


Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257.

A Smartphone Has, and Is, a Mirror Function
Andrew Watson

A recently published study looks at the role that ADULTS play in their children’s phone use.

In this post, I want to outline the study’s findings, and then consider their broader implications for schools and teachers.

The Study

In this recent study, led by Jason Nagata, researchers studied survey data from over 10,000 (yes, ten THOUSAND) adolescents. They looked for connections between “media parenting practices” and adolescent behavior.

So: if the family uses cellphones at the dinner table, does that practice have an effect on — say — problems with schoolwork?

Because we’re asking so many questions from so many people (10,000!), the researchers find A LOT of associations. (By the way, because these are survey data, we might be skeptical about the answers we get.)

Rather than go through each finding, I’ll highlight just a few:

  • An increase in cellphone usage at meals correlates with an increase in “problematic” social media behavior…
  • …and “problematic” video game behavior.
  • Ditto: an increase in cellphone usage in the child’s bedroom.
  • When parents reward or punish by allowing/restricting media usage, this practice correlates with an increase in “problematic” video game usage.

This list could go on (and include dozens of caveats), but those headlines captured my attention.

In brief: if we’re worried about perils of technology usage, the cellphone might not be the best place to focus our efforts. Instead, the parents might be a good place to focus.

The Bigger Picture

Of course, this blog doesn’t focus on parenting. This study, instead, prompted me to consider other questions:

Because the “should we ban cellphones in schools” debate runs so hot, it resists a nuanced conversation.

A baby wearing a suit with a "binkie" in his mouth, holding and staring at a cellphone

I want to use this study to approach that debate from a slightly different angle.

As I read the case for banning cellphones, it typically includes the phrase “kids these days.” Sometimes directly, often indirectly, the argument implies that students themselves deserve the blame for their cellphone use.

I want to argue against that implication, for two reasons. (And then one more reason.)

First: for the most part, students do not buy themselves cellphones. Adults buy children cellphones. And we buy them the carrier plans and tchotchkes that go with them.

In other words, the public lament often sounds to me like this:

“How strange and terrible that children use the objects we bought them!”

If we don’t want our children to use an object, we don’t have to make it available.

(I understand that there’s a safety argument: “children need phones in case of emergency.” I also understand there are MANY ways to allow for emergency communication without smartphones. I’m looking at you, flip phone.)

The study described above helpfully highlights this point. If adults conjure up a problem, we should not turn around and criticize children for participating in it.

Resisting Temptation

The first reason not to blame children: adults probably bought them the smartphones.

The second reason not to blame children: everyone involved in the smartphone industry makes them as tempting as possible.

For instance: if I create video games for a living, I’m not bothered that children play my game during their classes — I’m DELIGHTED.

Presumably that kind of obsessive devotion means that the game will go, or has gone, viral.

Presumably I’ll get rich.

Heck, I might even reach the pinnacle of social success; I could become an influencer.

To expand on the point made above, the public lament often sounds to me like this:

“How strange and terrible that children use the objects that an entire industry of experts has made as tempting and enjoyable as possible!”

If we don’t want our children to use a super fun object, we don’t have to make it available.*

And One More

I wrote above that we’ve got at least two reasons to spend less time blaming children for their cellphone use. I’d like to add one further reason.

(I should say: I am, at this moment, skating out on VERY thin ice. I hope you won’t be too upset with me.)

I’ve spent the last 12 years of my life leading PD workshops for teachers: workshops about cognitive science research, and the uses of that research in day-to-day teaching.

I have A LOT of anecdotal data about teachers in groups.

My biggest concern during these sessions is NOT “resistant behavior.” I almost never have grumpy teachers tell me to get off their lawn. Teachers typically react with great enthusiasm when they get to talk about a cog-sci approach to teaching.

My biggest challenge is: cellphones.

Many teachers simply won’t put them down.

Many school administrators simply won’t put them down.

Honestly: a PD talk sometimes feels like that screen-obsessed scene from WALL-E.

Here’s my point: I find it arresting that teachers are so angry at students for doing precisely what we’re doing.

To expand (once again) on the point made above, the public lament often sounds to me like this:

“How strange and terrible that students use their phones in the same way that we teachers do!”

If we don’t want our students to use smartphones, we should model the restraint we expect.

More important: the DIFFICULTY we have modeling that restraint holds up an unwelcome mirror. Every time I send a text during a faculty meeting, I should acknowledge my participation in the very cultural habit that I decry.

Solution, Please?

You might reasonably ask me: “if everything you’ve written is true, what should my school do about smartphones?”

That’s an entirely reasonable question. I have a tentative answer, and a confident answer.

Tentatively:

I see almost no upside to students’ having and using phones during class; I see LOTS of downsides.

For that reason, I think that in most cases, schools should do everything they reasonably can to limit or prevent cellphones from being out during class.

I suspect — but don’t know — that most schools would benefit from a substantial ban on cellphone use during the day on campus. Students will talk with each other more if they’re texting each other less.

Confidently:

I think a school’s cellphone policy should include teachers’/adults’ cellphone use as well.

Of course: a school’s relationship with a teacher differs from its relationship with a student. But: teachers’ cellphone usage can absolutely undermine our authority to insist on students’ abstinence.

The phone is a mirror. It both communicates with others, and shows us more about ourselves. We ought to look into that discomforting mirror.


* Important note: I made this entire argument without using the word “addicted.” As I’ve written elsewhere, we cannot currently claim that “cellphones are addictive.” That’s a lively research-based debate, not a settled conclusion.


Nagata, J. M., Paul, A., Yen, F., Smith-Russack, Z., Shao, I. Y., Al-Shoaibi, A. A., … & Baker, F. C. (2024). Associations between media parenting practices and early adolescent screen use. Pediatric Research, 1-8.

Again with the Questions (Second of a Series)
Andrew Watson
Andrew Watson

Three weeks ago, I started a short series of blog posts about asking questions.

After all, we’ve got SO MUCH RESEARCH about questions, we need to keep track and make sense of it all.

To structure these posts, I’ve been focusing on these three topics:

When to ask a particular kind of question?

Who benefits most immediately from doing so?

What do we do with the answers?

So, for questions that we ask BEFORE learning starts (“before” is the “when”):

Teachers check our students’ prior knowledge for our own benefit; now we know how best to plan an upcoming unit.

and

We ask “prequestions” for students’ benefit; it turns out that — even though students don’t know the answers to prequestions — they benefit from trying to answer them.

So: here’s the next “when”…

DURING Class

We’ve explored the questions to ask BEFORE students start learning (prior knowledge, “prequestions”).

What about DURING the learning process?

Two students raising their hands and laughing as they look at each other

Here again, I think we’ve got two essential categories. Importantly, we should plan and think about these questions differently.

First: checking for understanding.

Of course, we want our students to understand the ideas and processes that we’re discussing in class today. Alas, “understanding” happens invisibly, inside our students’ minds.

The only way to ensure that they understand: ask them questions. Their answers will make that invisible understanding visible — or, perhaps, audible.

When checking for understanding, we should keep some key principles in mind:

We should check for understanding frequently throughout a lesson. The correct number of times will vary depending on context. As a high school teacher, I rarely go more than seven or eight minutes without some kind of check.

As Doug Lemov says: “reject self report.” Our students don’t understand today’s topic well enough to know whether or not they understand — so it NEVER helps to ask students “got it?”

Be sure that everyone answers the checking-for-understanding questions. Whether we use mini-whiteboards or cold calling or quizlet, we want as broad a sample as possible of our students before we move on to the next step of the topic.

We should ask tricky questions, but not trick questions. That is: the questions should be difficult enough to ensure that students genuinely understand the topic (that’s “tricky”), but we’re not trying to fool them (“trick”).

Of course, wise thinkers have LOTS more to say about checking for understanding, but these few principles give us a strong start.

Important Distinctions

So, “who benefits” from checking for understanding? And: “what do we do with the answers”?

Roughly speaking, the teacher benefits from checking for understanding. If I C4U and discover that my students DO understand, I know:

a) that my teaching method for those several minutes worked as I had hoped, and

b) that I can continue the lesson.

If my students DON’T understand, I know:

a) it didn’t, and

b) I shouldn’t.

In other words: checking for understanding provides useful feedback to the teacher.

What should I do with the answers to these questions?

The right and wrong answers I see/hear will guide me as I decide what to do next.

If, for instance, my students answer a question about direct objects incorrectly, I might not provide the correct answer right away. But I will draw on that feedback when I think about revising my lesson plan for the next 10 minutes.

During Class, Part 2

Of course, not all in-class questions focus on checking for understanding new material. I might — in fact, I should — devote some class time to reviewing and consolidating ideas that students have already learned.

To meet this goal, I will almost certainly rely on retrieval practice.

This blog has written EXTENSIVELY about retrieval practice, so I won’t do a deep dive here. You can check out previous posts, or savor this awesome website.

The headline is: students learn more not by reviewing material but by actively trying to retrieve it.

Rather than say: “Remember, the Ideal Gas Law says that PV = nRT.”

I should say

“Okay, please try to remember the Ideal Gas Law. Don’t shout out; I’ll ask you to write it on your mini-whiteboards in a moment.”

We’ve got heaps o’ research showing that the extra mental effort required by retrieval helps consolidate memories.

Notice: I’m NOT trying to see if students have an initial understanding. When I taught this concept last week, I checked for understanding. My students DID understand it.

Instead, I’m trying to consolidate the understanding they had back then.

Important Distinctions

Once again: “who benefits” from retrieval practice? And: “what do we do with the answers”?

Whereas I, the teacher, benefit from checking for understanding, MY STUDENTS benefit from retrieval practice. That mental effort helps them consolidate and transfer the ideas they retrieved.

(Of course, I do get useful feedback about the stickiness of their prior learning, but that’s not the primary goal of RP.)

What should I do with the answers? Especially wrong answers?

This question leads to a surprisingly intricate answer. The short version goes like this:

If I have time, it’s helpful to correct wrong answers to retrieval practice questions ASAP.

I should do so ESPECIALLY if the question touches on an important core idea or procedure.

But:

Students get the benefit of retrieval practice even if they get the answer wrong. As long as they come across the correct answer eventually, they’ll benefit.

This topic gets nuanced quickly, but the headline is: wrong answers aren’t tragedies in retrieval-practice world.

To Sum Up

We ask students questions BEFORE learning; we take stock of their prior knowledge, and seed future learning with prequestions.

DURING class, we frequently check for understanding to ensure that current learning is happening as we hoped.

And we ask retrieval practice questions about ideas and procedures learned before, in order to help them consolidate that learning.

If we understand the differences in purpose and response among these kinds of questions, we will use them more effectively.

The Great Exam Debate: A Conversation with a Colleague
Andrew Watson
Andrew Watson

A colleague recently reached out to ask me about research into the benefits and perils of final exams in high school. Her question inspired a fun exploration of research on the topic; I thought it would be helpful to share both our conclusions and our process with you.

A hand holding a pencil filling out answers on a standardized test

Before I dig into our discussion, it might be helpful to pause and ask yourself this question: “do I already have a firm opinion about final exams?”

That is: do you believe that exams provide much-needed accountability and a chance for meaningful accomplishment? Do you believe they subtract valuable instruction time and add needless academic stress? Your prior beliefs will shape the way you read the upcoming post, so you’ll probably learn more if you recognize your own preconceptions.

With this first step in place, let’s explore…

Not as Easy as it Looks

Our conversation started with a frank admission: it would be INCREDIBLY difficult to investigate this question directly.

To do so, we’d need to teach two identical courses – one of which does have a final exam, and the other of which does not.

This proposal, however, quickly becomes impossible.

Let’s say that one section of my English class has a cumulative final exam, and the other has a cumulative final project. The differences between an exam and a project require all sorts of other changes to the course…so the two experiences wouldn’t be similar enough to compare as apples and apples.

Almost any other attempt to answer questions about final exams directly leads to similar problems.

This realization might discourage those of us who regularly turn to research. At the same time, it forces us to rethink our question quite usefully.

Instead of asking:

“Are final exams good or bad?”

We can ask:

“When we think about a year-long learning experience, how can we conclude those months most helpfully? What set of cognitive experiences consolidates learning most effectively? And: how does the answer to that question depend on the specific context of my school?”

With that framework in mind, let’s get started…

Old Friends

Longtime readers know that I rely on several websites to launch my research journeys. In this case, my colleague and I started at elicit.org. I put in this question:

“Do final exams in high school benefit or harm learning?”

The results from this search highlight the complexity of the question.

This paper by Khanna et al argues that cumulative final exams benefit students more than non-cumulative exams; these benefits appear both in the short term – immediately after the exam – and up to 18 months later. (Technically speaking, 18 months is a LONG TIME.)

When I checked out that study on my two other go-to websites (connectedpapers.com, scite.ai), I found other papers that, roughly speaking, arrived at the same conclusion. Strikingly, those other papers suggested that cumulative exams especially benefit either struggling students, or students with less prior knowledge.

On the other hand, back at my elicit.org search, this study by Holme et al produces this bleak conclusion:

“High school exit exams have produced few expected benefits and been associated with costs for disadvantaged students.”

A quick search on connectedpapers.com finds that – sure enough – other researchers have reached roughly similar conclusions.

“Promising Principles”

As expected, our review of existing research shows the difficulty of answering this final-exam question directly.

So, let’s try a different strategy: returning to our reformulated question:

“When we think about a year-long learning experience, how can we conclude those months most helpfully? What set of cognitive experiences consolidates learning most effectively?”

Two Learning and the Brain stalwarts – David Daniel and John Almarode – often invite teachers to think about cognitive science not as rules to obey (“do this”), but as “promising principles” that guide our work (“think about this, then decide what to do”).

So: do we have any promising principles that might guide our question about final exams? Indeed we do!

This blog has written about spacing, interleaving, and retrieval practice so often that there’s no need to rehash those ideas in this post. And, it’s easy to see how to apply these promising principles to cumulative final exams. After all, such exams…

… almost REQUIRE spacing,

… almost REQUIRE interleaving,

… create MANY opportunities for retrieval practice.

Of course, almost anything can be done badly – and preparation for final exams is no exception. But – done well – final exams invite exactly the kind of desirable difficulty that cognitive science champions.

Slam Dunk?

Perhaps, then, we have answered my colleague’s question: schools should—no, schools MUST—use cumulative final exams to enact cognitive science principles.

…Insert sound of record scratch…

That statement overlooks the second part of the revised question above:

“What set of cognitive experiences consolidates learning most effectively? And: how does the answer to that question depend on the specific context of my school?”

In this case, my colleague works at a school that champions a progressive educational philosophy. At a school like hers, final exams sound like a terrible idea.

Her school has long favored cumulative capstone projects. And even a cursory discussion makes it clear that such projects – like cumulative final exams – invite spacing, interleaving, and retrieval.

(Yes, yes: capstone projects can be designed very badly. So can final exams. Both can also be done well.)

As long as those capstone projects deliberately and thoughtfully enact all those promising desirably difficult principles, they too can consolidate a year’s worth of learning.

TL; DR

My collegial conversation suggests that cognitive science research neither forbids nor requires final exams.

Instead, that research gives us ways to think about the summary work that we do with students. We can adapt these promising principles to align with our own school philosophy. That conceptual combination – more than a specific research study – will guide us most wisely.

Do Gestures Improve Learning? Baby Steps Toward a Conclusion
Andrew Watson
Andrew Watson

The idea that “gestures improve learning” has become increasingly popular in recent years. I first heard about this idea from Dr. Susan Goldin-Meadow at a Learning and the Brain conference back in 2015, and the claim has gotten more emphatic with every passing year.

Drawings of men and women making a variety of hand gestures

This emphasis on gesture — within the broader field of “embodied cognition” — is approaching certainty in the field.

As is so often the case, I myself am less confident than others about many of these claims.

In this post, I want to

  • Explain my concerns about the “gestures promote learning” claims
  • Explore a recent study that pushes back against my concerns, and
  • Highlight the features of that study that support my concerns.

In other words: we’re taking baby steps here. The study I’ll explain does offer modest support to this claim — and offers some useful specifics.

But it certainly does not make the “gestures promote learning” claim a slam dunk. And it flatly contradicts some of the most vehement statements you’ll hear when people champion embodied cognition and gestures.

Initial Concerns

I am certainly open to the idea that gestures might promote learning. No, honestly: it could be true!

At the same time, I want to

  • understand the SPECIFICS of that claim, and
  • have confidence in the research that supports it.

These requests feel reasonable to me.

So, specifically:

Who is making the gesture? The teacher? The learner? Both? A video?

Does some kind of learning get more benefit from gestures than others?

For example, I know how I could use a gesture to reinforce the word “hatchet.” (A hatchet gesture is easy to envisage.)

But: how would I make a gesture to help students learn “protagonist” and “antagonist”? Or, “pleistocene”? Or “tradition”?

What if students learn the same gesture in two different classes? Say: the “hatchet” gesture means “hatchet” in my Spanish class but “guillotine” in my history class.

Do gestures help students of all ages? Are there cultural differences?

In other words: what are the rules to govern the creation and use of gestures? And: what research supports each specific component?

Last year, I wrote about a study that starts answering some of these questions. But: we’ve got a long way to go.

Frankly, the questions listed above fit quite nicely under the heading of “boundary conditions”: we need to know the “boundaries” within which teaching suggestions work, and outside of which they don’t. I don’t quite understand why people look vexed when I ask such questions.

A Recent Study Strengthens the Case

Because I have outlier views on this topic, I’m always on the lookout for research that will change my mind.

Recently, I came across a study from 2021 that offers lots of helpful information.

This study does, in fact, give me some increased confidence in the idea that gestures can help. And, it offers helpful specifics about ways that gestures MIGHT NOT help. That is: it clarifies those boundary conditions.

This research has LOTS of moving parts, so I’ll try to summarize it accurately and briskly.

Twenty-two German adults spent 4 days studying foreign language vocabulary.

Specifically, they learned 90 word pairs: a word in German, and its “translation” in a made-up language called “Vimmi.” (Researchers used made-up words so that the participants’ prior knowledge wouldn’t muddle their results.)

45 of those word pairs were accompanied by a gesture. (The participants saw someone making the gesture; and they also made the gesture themselves.)

The other 45 pairs were accompanied by a drawing of the word; in other words, not a gesture.

This group of non-gesture word pairs gives the researchers a useful comparison point. They can ask not just “did gestures help participants learn?” but “did the gestures help compared to the pictures?”

Now things get REALLY interesting.

Researchers then used a fancy gizmo to DISRUPT NEURAL PROCESSING in a part of the brain that coordinates gestures.

Specifically, several days after participants learned the word-pairs, researchers asked participants to recall them out loud. While participants did so, researchers used the (perfectly safe) neuro-scrambler for half of the words; they then faked the neuro-scrambling for the other half of the words. The participants, in other words, didn’t know when the scrambler was on or off.

Let’s add two more cool variables — variables that help answer my pesky questions:

Researchers tested participants five days and five months later. So: they could see how long these effects lasted.

And, they also tested two kinds of words: concrete nouns (“window”) and abstract nouns (“benefaction”).

Based on all these moving pieces, you can see that this study might finally convince me that gestures help — or give us some ideas about boundary conditions.

Yup: (Some) Gestures Help (Some) Learning (Somewhat)

When researchers tested the participants five days later, they found that seeing and producing gestures helped participants learn concrete nouns.

Specifically, the neuro-scrambler (presumably) reduced the neural encoding of the gestures for half of the words. And participants took longer to remember those words than the un-scrambled ones.

The likeliest interpretation: part of the memory of the word pair had been encoded in the motor cortex. When we interfere with that part of the memory, we interfere with the memory overall.

So, even though I’m naturally cautious about such things, I’m comfortable saying:

“This research supports the idea that seeing and making gestures about concrete objects helps people learn those words in the short term.”

Baby steps.

Yup: Important Limits Exist

At the same time, you’ve probably noticed the boundary conditions in that statement.

  • The gestures did help participants remember concrete nouns (“window,” “hatchet”); at the same time, participants did not remember abstract nouns (“benefaction,” “protagonist/antagonist”) as smoothly over time. (I should be clear: this statement summarizes a very complex set of findings; others might look at the findings and disagree with that statement.)
  • The gestures did help after 5 days, but not after five months. So: short-term memory got a boost. But long-term memory — not so much.

I should also say that this study design (like ALL study designs) includes some features that require our caution.

  • 22 is not a huge number of participants. I wouldn’t often make big decisions based on such a small group. (Of course, as more and more small studies agree, they might add up to compelling recommendations.)

Also, this study pushes back against one of the more common statements within the broader field of “embodied cognition,” and also in the narrower field of “gestures promote learning.”

“Learning doesn’t just happen in the brain; it happens in the body.”

Now, I spend LOTS of time on this blog warning readers against over-reliance on neuroscience. We spend far too much time saying “ooh, look, BRAIN.” (I think we should focus much more on psychology: the study of the mind.)

This study, however, requires that we take the brain itself seriously. Those brain scramblers, after all, scrambled signals in the motor cortex: a part of the brain. They did not do anything to the arm or the rest of the body.

We can, I think, reasonably say that the learning happened WITH the body. But it didn’t happen IN the body; more precisely, it happened IN the part of the body called “the brain.” The arm didn’t change and learn. The brain changed. The mind learned.

TL; DR

This small study adds to a growing body of research that suggests seeing and making the right kind of gesture can help people learn some things, at least for a few days.

It does not reach grander conclusions than those.

And, it strongly contradicts the idea that “learning happens in the body.” Bodily movements can help the brain change; but the brain itself changes, and the mind itself learns.



A Final Note

This study includes substantially more complicated procedures and calculations than I typically try to summarize on the blog. As a result, reasonable readers might believe I have over-simplified the study and its conclusions; you might even think I have MIS-summarized both.

I have, quite genuinely, tried to do justice to the study and its conclusions. And, I include this link so you can check for yourself.


Mathias, B., Waibel, A., Hartwigsen, G., Sureth, L., Macedonia, M., Mayer, K. M., & von Kriegstein, K. (2021). Motor cortex causally contributes to vocabulary translation following sensorimotor-enriched training. Journal of Neuroscience, 41(41), 8618-8631.

Questions, Questions (First of a Series)
Andrew Watson
Andrew Watson

Because teachers spend so much time asking questions, we’d be grateful if we had research to guide us as we do so.

On many pedagogy topics, we face a tricky problem: not enough research to paint a clear picture.

On this topic — questions — we face a completely different problem: SO MUCH RESEARCH.

In fact, we’ve got so many studies on so many different sub-topics here, we can easily get lost in the muddle of terminology, recommendations, and limitations.

My goal in this series of posts is to sort through the several kinds of research focused on questions, trying to nudge them into useful categories. I’ll also try to point out some easy mistakes to make along the way.

To organize this post, I’ll focus on three points:

When to ask this kind of question?

Who benefits most immediately from doing so?

What do we do with the answers?

Before the Thinking, part I

Daniel Willingham tells us that “memory is the residue of thought.” If we prompt our students to do the right kind of thinking, they’ll get the right kind of residue: useful long-term knowledge.

The first group of questions comes well before we want our students to do that “right kind of thinking.”

To teach an effective unit, teachers need to know our students’ prior knowledge on the subject.

To learn what the word “tragedy” means, students need to know who a “protagonist” is. Heck: they need to know what a “play” is. Do they? I should ask.

To learn about covalent bonds, students need to know what elements are, and what electrons are. Do they? I should ask.

And so forth.

Practically ANY unit draws on this kind of prior knowledge. If my students don’t already know these essential facts, ideas, or skills, they’ll experience working memory overload when the unit gets underway. (If you’re interested in the importance of prior knowledge, check out Graham Nuthall’s book The Hidden Lives of Learners.)

A female high school teacher pointing at a student who is raising her hand

So: the very first questions I ought to ask my students explore the depth and breadth of their knowledge on a topic they’ll be learning about next week, next month, next semester.

Notice who benefits first from these questions: I — the teacher — do. Because…

What do I do with the answers? Once I evaluate my students’ prior knowledge, I can design a more effective and targeted lesson—one that neither bores nor overwhelms my students.

Recap of “prior knowledge” questions:

When? Before the unit – sometimes well before the unit.

Who benefits? Initially, the teacher – who now has information that will help plan the full lesson.

What to do with the answers? Design upcoming instruction effectively.

Before the Thinking, part II

Of course, not all questions that precede the lesson are for my benefit. We have a growing body of research into “prequestions.”

Notice this distinction:

I ask “prior knowledge questions” to find out what my students do and don’t know.

I ask “prequestions” about important information I’m already sure my students DON’T KNOW.

In fact, I can’t ask effective “prequestions” until I check their prior knowledge; otherwise, my prequestion might ask them about important information they DO already know.

Now, why would I do such a thing? Why ask students to define the Five Pillars of Islam if I’m sure they just don’t know?

Well, this growing body of research suggests that such questions prime students for upcoming learning.

They are likelier to learn those Five Pillars if I ask prequestions about them – even though they almost certainly can’t and won’t answer those questions correctly.

Let’s be honest: this strategy seems peculiar. Asking students questions when I’m sure they don’t know the answer feels a little mean. But, if this research pool is correct, doing so gets them mentally ready to learn new stuff.

Recap of “prequestions”:

When? Before the unit, probably right at the beginning.

Who benefits? The student – who is now better primed to learn this new information.

What to do with the answers? Don’t sweat the incorrect answers. We assume they’ll get incorrect answers! Instead, carry on with the lesson, knowing that students are mentally prepared for this new learning.

Digging Deep

Even this brief review suggests important distinctions.

I should ask both kinds of questions “before the thinking”; that is, before the students have started learning this topic.

But I will plan these questions quite differently.

If I’m asking “prior knowledge” questions, I need to consider quite a broad range of possibilities. Because I don’t yet know what my students don’t know, I should probably ask …

… both conceptual questions and factual questions,

… questions that seem easy to me, and questions that seem hard to me,

… questions from early in the upcoming unit, and questions later in the unit.

And so forth.

However, if I’m asking “prequestions,” I should focus quite narrowly on…

… information that is most important in the upcoming unit, and

… information that I’m confident most of my students don’t already know.

Also, I will respond to the answers quite differently.

If I’m asking “prior knowledge” questions, my students’ answers are simply data. If I ask the right questions, their answers tell me what they do and don’t know — and in this way guide me as I design the upcoming lesson.

If I’m asking “prequestions,” my response will be rather different. Because I’m almost sure my students don’t know the answers, I expect that most answers will be wrong.

That wrongness is a feature, not a bug. (If most students get the answer right, then I didn’t accomplish the goal of priming future learning.)

I should probably be on the lookout for “prior misconceptions.” That is: if several students answer a “prequestion” with a commonly held false belief, that information will be important and helpful to me.

Devils in Details

As I write about these topics, I think two details merit attention.

First: I’ve written above about “prequestions” — asking students information I’m sure they don’t know.

I should confess that most people have a different name for this technique; they call it “pretesting.”

Now, I think “pretesting” is a TERRIBLE name. No teacher and no student wants anything to do with tests. And, the technique isn’t a test! No grading required!!!

In this post and others, I’m trying to rebrand “pretesting” as “prequestioning.” However, you’ll see the term “pretesting” more often.

Second: you may know Graham Nuthall’s famous finding: students already know about 50% of what we’re going to teach them, and each one knows a different 50%.

If that’s true, this finding makes both “prior knowledge” questions and “pretesting” questions very tricky.

I myself worry less about this finding than others do.

While Nuthall did report this statistic, his finding hasn’t been replicated (as far as I know). We should remember Dan Willingham’s motto: “one study is just one study, folks.”

By the way, Nuthall’s methodology is so extraordinarily complex that I’d be surprised if it could be replicated frequently.

I also suspect that this 50% statistic will vary widely from situation to situation.

If you teach Spanish 4 — well, it’s likely that most of your students have successfully completed Spanish 3. They’ll have LOTS of prior knowledge in common.

But if you teach 9th grade history in a school that draws from many different feeder schools, you might well work with students whose prior knowledge varies quite widely.

TL;DR

BEFORE we start teaching a unit, we should ask students two different kinds of questions.

Prior knowledge questions help us learn what our students already know.

“Prequestions” help prime students to learn new information and concepts.

Although all these questions are, in fact, questions, their purpose, form, and result differ in important ways. We should plan accordingly.

Experts, Expertise, and Teachers (and Students!)
Andrew Watson
Andrew Watson

Researchers often focus on experts and expertise. And bloggers like me often follow their leads.

You’ll read about the novice-expert continuum, the differences between novices and experts, and the expertise-reversal effect.

A substantial collection of tools organized on a peg board above a workbench

But let’s pause for a minute and ask: what is an expert? What is this “expertise” that novices gradually acquire on their way to becoming an expert?

A recent book by Roger Kneebone — Expert: Understanding the Path to Mastery — takes on these fascinating questions.

Biography, and Beyond

Kneebone himself followed an unusual trajectory to this set of questions. He started his professional life training to be a surgeon; his stories of emergency surgery in South Africa will set the squeamish on edge.

By the way, while not slicing his way through gory neck wounds, Kneebone also spent time learning how to fly at a local airport. Here again, his mishaps as a pilot provide important examples for his investigation of expertise.

After some number of years as a surgeon, he decided to retool himself as a general practitioner in rural England — the kind of doctor we would now call a “primary care provider.”

That is: instead of snipping gall bladders out of patients he barely knows, he discusses hang-nails with patients he’s known for years.

Oh, by the way, he also takes up playing the harpsichord — he even builds one of his own. You guessed it: this pursuit also informs his book.

He finally ends up with yet another career: helping to found a program for training surgeons. He is — rather curiously — an expert in expertise.

Sample Size

To explore the nature of expertise, Kneebone reaches outside his own experience to talk to a remarkable variety of experts. As in:

An expert taxidermist

An expert tailor

An expert harpsichord maker

An expert magician

An expert fighter pilot

An expert ceramicist

And so forth.

In these conversations, Kneebone finds remarkably consistent patterns. That is: the path to becoming an expert surgeon is surprisingly like the path to being an expert tailor or an expert magician — even though the actual work of these professions differs substantially.

In his book, he maps out this path, using examples and stories from all those professions.

I won’t trace the entire path from “apprentice” to “journeyman*” to “master*” (you should read Kneebone’s book if you want the details), but I do want to share a few of his insights.

First, Kneebone sees the phase transition from apprentice to journeyman as a change in focus. An apprentice teacher (for example) focuses on what s/he is doing: what does my lesson plan look like? Am I covering my learning objectives?

A journeyman teacher focuses on the effect of those actions on students. Are they learning? Did they understand that example? How do their mistakes this week compare to their mistakes last week?

As a developing teacher, I can’t do the second part (focusing on students) until I’ve made the first part (focusing on myself) routine. But that switch really makes all that initial work worthwhile.

Second: the phase change from journeyman to mastery — if I’m understanding Kneebone correctly — involves another such change in focus. Journeyman teachers focus on their students. Master teachers focus on helping other teachers help their students. They switch to a meta-level, and think about the profession itself: how to pass on — and improve! — professional skills, norms, and knowledge.

Once again, this journeyman-to-mastery switch can’t happen until after MANY years of journeyman-level effort. And, in fact, lots of people never make this second leap: they stay focused on the proximate, not the ultimate, effects of their work.

If you’ve been teaching for a while, perhaps you can see these steps in your work, and your colleagues’. Certainly I can see that progression in the schools where I have worked.

Teaching Implications

As teachers, we’re understandably tempted to ask: “How should I think about helping my students along this path? How can I help my students arrive at expertise?”

Kneebone doesn’t address this question directly, but I suspect I know part of the answer.

In Kneebone’s model, the path from apprentice to journeyman to mastery takes…literally…years. Probably decades.

Kneebone doesn’t object to repetitive drudgery; in fact, he considers it an essential step in the process of developing mastery.

For instance: the master tailor he interviews spent literally months sewing a specialized part of a pocket…over and over (and over) again. While he was doing so, he often felt irritated and confused — all too aware of the seeming pointlessness of the exercise. Only once he’d travelled further along the path did he recognize all the subtleties he had absorbed along the way.

So, I suspect Kneebone would tell me: “Andrew, get real. Your high-school sophomores will not become experts at writing — or Shakespeare, or grammar — in a year. Becoming an expert in Shakespeare — in anything — takes DECADES.”

Instead, I found Kneebone’s book to be most helpful as we think about teacher training: how we can reasonably expect apprentices in our profession to explore and sift their experiences on their way to later stages of expertise.

A Final Distinction

While I think Kneebone’s book gives better guidance for training teachers (over several years) than teaching students (over several months), I do think the terms “novice” and “expert” are useful in understanding our day-to-day classroom work.

Specifically, we should be aware that our students (almost always) know much less than we do about the topic we’re teaching; they are, relatively speaking, “novices.” We should not act as if they’re experts; doing so will almost certainly overwhelm their working memory.

And, we should not abandon “expertise” as a goal — as long as we focus on “relative expertise.”

That is: my sophomores won’t be Shakespeare experts at the end of the year. But — if I’m doing my job right — they will have more expertise than they did before.

They’re better at parsing Shakespearean syntax.

They know more about King James I’s obsession with witches, and with deception. (Hello, Gunpowder Plot.)

They’re on the lookout for the words “do,” “done,” and “deed” as they make their way through the poetry.

They’re not experts, but they’re relative experts: that is, experts relative to themselves at the beginning of the year.

As long as we keep the goal of “relative” expertise in mind, then the novice/expert distinction provides lots of useful guidance for our work with students.

As long as we recognize that Kneebone’s insights apply more to teacher training than to student instruction, I think his book provides important and helpful insights into the nuances, trials, and joys of our work.


* These terms, of course, raise questions. Kneebone considers them, and sticks with this terminology.


Kneebone, R. (2020). Expert: Understanding the path to mastery. Penguin UK.