
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

The Best Way to Take Class Notes
Andrew Watson

Teachers often ask me: “how should my students take notes?”

That question typically springs from a heated debate. Despite all the enthusiasm for academic technology, many teachers insist on hand-written notes. (Long-time readers know: I have a provocative opinion on this topic.)

For the time being, let’s set that debate aside.

Instead, let’s ask a more important question: what kind of mental processing should my students do while they take notes?

If students get the mental processing right, then perhaps the handwriting/laptop debate won’t matter so much.

Possibilities and Predictions

To study complicated questions, we start by simplifying them. So, here’s one simplification: in class, I want my students to…

…learn specific facts, ideas, and procedures, and

…learn connections and relationships among those facts, ideas, and procedures.

Of course, class work includes MANY more complexities, but that distinction might be a helpful place to start.

So: should students’ note-taking emphasize the specific facts? OR, should it emphasize the connections and relationships?

The answer just might depend on my teaching.

Here’s the logic:

If my teaching emphasizes facts, then students’ notes should focus on relationships.

If my teaching emphasizes relationships, then their notes should focus on factual specifics.

In these cases, the note-taking strategy complements my teaching, ensuring that students think both ways.

Of course, if both my teaching and students’ notes focus on facts, then mental processing of relationships and connections would remain under-developed.

In other words: we might want notes to be complementary, not redundant, when it comes to mental processing.

In fact, two researchers at the University of Louisville — Dr. David Bellinger and Dr. Marci DeCaro — tested such a prediction in recent research.

Understanding Circulation

Bellinger and DeCaro had college students listen to an information-heavy lecture on blood and the circulatory system.

Some students used guided notes that emphasized factual processing. This note-taking system — called “cloze notes” — includes a transcript of the lecture, BUT leaves words out. Students filled in the words.
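To make that format concrete, here’s a minimal sketch — my own illustration, not material from the study — that builds a cloze-style line by blanking out key terms in a hypothetical transcript sentence:

```python
import re

# A minimal sketch of the cloze-notes format: take a transcript sentence
# and replace key terms with blanks for students to fill in.
# The sentence and term list below are hypothetical examples.
transcript = ("The heart pumps blood through two circuits: "
              "the pulmonary circuit and the systemic circuit.")
key_terms = ["pulmonary", "systemic"]

def make_cloze(text, terms):
    """Replace each key term with a blank of matching length."""
    for term in terms:
        text = re.sub(term, "_" * len(term), text, flags=re.IGNORECASE)
    return text

print(make_cloze(transcript, key_terms))
# The heart pumps blood through two circuits: the _________ circuit
# and the ________ circuit.
```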


Other students used guided notes that emphasized conceptual/relational processing. These notes — “outline notes” — organized the lecture’s ideas into conceptual hierarchies, which the students filled out.

And, to be thorough, Bellinger and DeCaro used both “more challenging” and “less challenging” versions of these note systems; the more challenging versions leave much larger blanks for students to fill in.

So, which note-taking system helped students more?

Because the lecture was “information heavy,” a note-taking system that highlights facts (the “cloze notes”) would be “redundant,” while a system that highlights conceptual relationships (the “outline notes”) would be “complementary.”

That is: students would get facts from the lecture, and see relationships highlighted in the outline notes.

For this reason, Bellinger and DeCaro predicted that the outline notes would help more in this case.

And, sure enough, students remembered more information — and applied it more effectively — when they used the challenging form of the outline notes.

Classroom Implications

Based on this study, do I recommend that you use outline notes with your students?

NO, READER, I DO NOT.

Remember, the “outline notes” worked here because (presumably) they complemented the factual presentation of the lecture.

If, however, the lecture focused more on relationships and connections, then (presumably) “cloze notes” would help more. They would be “complementary.”

As is so often the case, I don’t think we teachers should DO what research says we should DO.

Instead, I think we should THINK the way researchers help us THINK.

In this case, I should ask myself: “will my classroom presentation focus more on facts, or more on relationships and connections?”

Honestly: that’s a difficult question.

In the first place, I lecture only rarely.

And in the second place, my presentations (I hope) focus on both facts and relationships.

But, if I can figure out an answer — “this presentation focuses on relationships among the characters” — then I should devise a complementary note system. In this case, “cloze notes” would probably help, because they highlight facts (and my presentation highlights connections).

In other words: this research — and the theory behind it — doesn’t offer a straightforward, simple answer to the question that launched this post: “how should my students take notes?”

Because learning is complicated, such a usefully intricate answer might be all the more persuasive.


Bellinger, D. B., & DeCaro, M. S. (2019). Note-taking format and difficulty impact learning from instructor-provided lecture notes. Quarterly Journal of Experimental Psychology, 72(12), 2807-2819.

What is “Mind, Brain, Education”? Defining the Undefinable…
Andrew Watson

Here at Learning and the Brain, we bring together psychology (the study of the MIND), neuroscience (the study of the BRAIN), and pedagogy (the study of EDUCATION).

That is: we bring together THREE complex fields, and try to make sense of their interactions, differences, and commonalities.

Such interdisciplinary work creates meaningful challenges.

In any one of those fields, scholars argue about basic definitions and concepts. So, you can imagine the debates that rage when all 3 disciplines come together. (Quick: what does the word “transfer” mean? Each field defines that word quite differently…)

So, who decides what “we” think in the field of MBE? What core beliefs hold us together, and how do we know?

One Answer: Ask Delphi

To solve this puzzle, Dr. Tracy Tokuhama-Espinosa, Dr. Ali Nouri, and Dr. David Daniel organized a “Delphi Panel.”

That is: they asked 100+ panelists to respond to several statements about the intersection of psychology, neuroscience, and education. (Full disclosure: I’m almost sure I was 1 of the 100 — but I don’t have specific memories of my contributions.)

They then crunched all those answers to determine a) the panelists’ points of agreement, and b) their enduring concerns about those points.

For instance, 95% of the panelists agreed with this statement:

Human brains are as unique as human faces. While the basic structure of most humans’ brains is the same (similar parts in similar regions), no two brains are identical. The genetic makeup unique to each person combines with life experiences and free will to shape neural pathways.

However, several participants disagreed with the inclusion of the phrase “free will” — including some who agreed with the statement overall.

This Delphi Panel method, in other words, BOTH looks for points of consensus, AND preserves nuanced disagreements about them.

 21 Tenets, and Beyond…

So, what do “we” in the world of MBE believe?

The Delphi Panel supported 6 principles and 21 tenets across a wide range of topics: motivation, facial expression, tone of voice, sleep, stress, novelty, even nutrition. (91% of panelists agreed with the statement “NUTRITION influences learning. Basic nutritional needs are common to all humans, however, the frequency of food intake and some dietary needs vary by individual.”)

Taken all together, they add up to several Key Concepts — almost all of which matter to teachers who read this blog.

For instance:

Teachers should understand some basic definitions, and beware of some enduring neuromyths. (“Learning styles,” here’s looking at you.)

We should know that attention networks can improve, and so can executive functions. (I’m a little concerned about this last statement, as it implies false hopes about working memory training.)

Teachers should know that affect matters as much as cognition; that retrieval practice and spacing really work; that growth mindset is a thing; that interleaving helps.

Excellent Timing

In fact, several of this Delphi Panel’s conclusions align with our upcoming conference on Calming Anxious Brains (starting November 19).

For instance:

STRESS influences learning. However, what stresses one person and how may not stress another in the same way. (95% agreement)

ANXIETY influences learning. However, what causes anxiety in one person may not cause anxiety in another. (97% agreement)

In other words: our students aren’t little learning computers. Their emotional systems — when muddled by the stress and anxiety of Covid times — influence learning profoundly.

Teachers should attend to our students’ emotional lives not because of some misguided mushiness; instead, we do so because those lives can make learning much harder, or much more fluent and natural.

MBE research, and the Delphi Panel, say so.


As a bonus, here’s Dr. Tokuhama-Espinosa explaining “The Difference between Mind, Brain and Education, Educational Neuroscience and the Learning Sciences”:

Changing the System: Where Do We Start?
Andrew Watson

I recently spent two hours talking with a group of splendid teachers from Singapore about Mindset Theory.

We talked about “charging” and “retreating.” We discussed “performance goals” and “learning goals.” Of course, “precise praise” merited lots of attention.

At the end of our session, several of their insightful questions focused on systemic change:

How can we help teachers (not just students) develop a growth mindset?

How can we change our grading system to promote GM goals?

What language should we use throughout the school to talk about learning and development?

These questions — and others like them — got me thinking:

We know that psychology and neuroscience research has so much to offer teachers, learners, and education. What systems should be in place to spread the word? 

Thinking Big

This question gets complicated quickly.

In the first place, teaching will (almost) always be INDIVIDUAL work taking place within a complex SYSTEM.

In some cases, we want teachers to have lots of freedom — say, to try out teaching strategies suggested by cognitive science.

In other cases, we want teachers to follow their school leaders’ guidance — say, when leaders follow wise psychology research.

How can we get that balance right?

  • In England, I believe, a national agency (OFSTED) has evaluation standards that apply to all schools and teachers.
  • France is in the process of creating a Council to vet research-based advice to schools and teachers. (LatB speaker Stanislas DeHaene is taking a leading role.)

In the US, of course, local control of schools makes such a system hard to imagine.

What might we do instead? What levers can we push?

I know of one organization — Deans for Impact — that focuses on teacher education.

Their logic makes great sense.

If we can ensure that teacher training programs incorporate cognitive science wisely, we can change the beliefs and practices of a generation of teachers.

Now THAT would — as they say — “move the needle.”

D4I has published a number of immensely useful summaries and reports. This one, for instance, briskly summarizes six core principles of learning: the research behind them, and their classroom implications.

Focus on Schools

Instead of teacher training, we might focus on schools as systems.

Eric Kalenze (blog here) has written a splendid book about creating a school within a school. What The Academy Taught Us doesn’t focus on cognitive science, but it does offer a chalk-in-hand view of building new systems from scratch.

In Kalenze’s telling, a supportive and inspiring principal created just the right combination to allow for meaningful change. (And a school district’s overly rigid policies brought this hopeful experiment to an end.)

I know of several independent schools that are doing exactly this work. The Center for Transformative Teaching and Learning at St. Andrew’s School has been guiding their faculty — and teachers across the country — for over a decade.

The Peter Clark Center for MBE at the Breck School and the Kravis Center for Excellence in Teaching at Loomis Chaffee (the school where I work) both do excellent work in this field.

Perhaps this “Center” model will spread widely throughout schools in the US. If so, these highly local “Deans for Impact”-like initiatives just might — gradually but powerfully — shape the future of teaching.

One By One

At the same time, my own experience suggests the importance of working teacher by teacher.

I attended my first Learning and the Brain conference in 2008. Inspired by the possibilities of combining psychology, neuroscience, and education, I began my own independent exploration.

Although I don’t run a school or supervise teachers, I’m able to spread the word — both as a classroom teacher, and in my work as a consultant (hello Singapore!).

And here’s where Learning and the Brain conferences continue to be so valuable.

The more individual teachers who attend — the more groups of teachers who pool together to share excitement and ideas — the more we can expand networks and create the movement we need.

Perhaps the best way to change the complex system is: one teacher at a time.

I hope you’ll join us in Boston in November!

Understanding Adolescents: Emotion, Reason, and the Brain
Andrew Watson

Kurt Fischer — who helped create Learning and the Brain, and the entire field of Mind, Brain, and Education — used to say: “when it comes to the brain, we’re all still in kindergarten.”

He meant: the brain is so FANTASTICALLY complicated that we barely know how little we know.

Yes, we can name brain regions. We can partially describe neural networks. Astonishing new technologies let us pry into all sorts of secrets.

And yet, by the time he left the program he founded at Harvard, Dr. Fischer was saying: “when it comes to the brain, we’re now just in 1st grade.”

The brain is really that complicated.

Fascinating Questions

Adolescents — with their marvelous and exasperating behavior — raise all sorts of fascinating questions.

In particular, we recognize a real change in their ability to think abstractly.

Unlike their younger selves, teens can often “infer…system-level implications…and lessons that transcend the immediate situation.”

We can say in a general way that, well, teens improve at this cognitive ability. But: can we explain how?

More specifically, can we look at their brains and offer a reasonable explanation? Something like: “because [this part of the brain] changes [this way], teens improve at abstract thinking.”

A research team at the University of Southern California wanted answers.

Networks in the Brain

These researchers showed 65 teens brief, compelling videos about “living, non-famous adolescents from around the world.” They discussed those videos with the teens, and recorded their reactions.

And then they replayed key moments while the teens lay in an fMRI scanner.

In this way, they could (probably) see which brain networks were most active when the teens had specific or abstract reactions.

For example, the teen might say something specific and individual about the teen in the video, or about themselves: “I just feel so bad for her.”

Or, she might say something about an abstract “truth, lesson, or value”: e.g., “We have to inspire people who have the potential to improve society.”

If some brain networks correlated with specific/individual statements, and other networks with abstract/general statements, that correlation might start to answer this question.

As usual, this research team started with predictions.

They suspected that abstract statements would correlate with activity in the default mode network.

And, they predicted that concrete statements would correlate with activity in the executive control network.

What did they find?

Results and Conclusions

Sure enough, the results aligned with their predictions. In the study’s brain images, the orange blobs show the teens’ heightened neural activity when they made abstract statements.

And: those blobs clearly overlap with well-established regions associated with the Default Mode Network.

[Figure: Neural correlates of abstract construals. A whole-brain analysis reveals regions whose activity while responding to the documentary-style stories positively correlates with abstract construal scores from the interview (N = 64); the highlighted regions — inferior/posterior posteromedial cortices, dorsomedial prefrontal cortex, and ventromedial prefrontal cortex — sit within the default mode network. © The Author(s) 2021, CC BY 4.0.]

The study includes a second (even more intricate!) picture of the executive control network — and its functional overlap with concrete statements.

The headline: we can see a (likely) brain basis for concrete and abstract thought in teens.

Equally important, a separate element of the study looks at the role of emotion in adolescent cognition. (One of the study’s authors, Dr. Mary Helen Immordino-Yang, has worked on this topic for years.)

In brief, emotions don’t necessarily limit thinking. They can focus and motivate thinking:

“Rather than interfering with complex cognition, emotion in the context of abstract thinking may drive adolescents’ thinking forward.”

The much-discussed emotionality of teenage years might not be a bug, but a feature.

A Final Note

I’m especially happy to share this research because its lead author — Dr. Rebecca Gotlieb — has long been the book reviewer for this blog.

If you’ve ever wondered how she knows so much about the books she reviews, well, now you know.

Because of the work that she (and so many other researchers) are doing, Dr. Fischer could now say that we’re entering 2nd grade in our understanding of the brain…


A Final Final Note

Neuroscience studies always include more details than can be clearly summarized in a blog post. For those of you who REALLY want to dig into the specifics, I’ll add three more interesting points.

First: knowing that scientific research focuses too much on one narrow social stratum, the researchers made a point to work with students who aren’t typically included in such studies.

In this case, they worked with students of lower “socio-economic status” (SES), as measured by — among other things — whether or not they received free or reduced-price lunch. Researchers often overlook low-SES students, so it’s exciting that this team made a point to widen their horizons.

Second: researchers found that IQ didn’t matter to their results. In other words, IQ doesn’t capture “abstract social reasoning” — and so might be less important than some claim it to be.

Third: teachers typically think of “executive function” as a good thing. In this study, LOWER activity in the executive control network ended up helping abstract social thought.

Exactly what to make of this result — and how to use it in the classroom — is far from clear. But it underlines the dangers of oversimplification of such studies. Executive functions are good — obviously! But they’re not always beneficial for everything.


Gotlieb, R., Yang, X.-F., & Immordino-Yang, M. H. (2021). Default and executive networks’ roles in diverse adolescents’ emotionally engaged construals of complex social issues. Social Cognitive and Affective Neuroscience, nsab108. https://doi.org/10.1093/scan/nsab108

Let’s Get Practical: Signaling a Growth Mindset
Andrew Watson

Most teachers know about Mindset Theory: the idea that students’ beliefs about intelligence shape their success in learning.

Specifically:

If I think that intelligence (whatever that is) can’t change, I learn less.

If I think that intelligence can change, I learn more.

Once widely believed and championed, this theory now faces real doubts — especially following two meta-analyses by Sisk and Burgoyne showing that mindset strategies produce (on average) negligibly small effects.

Alas, Mindset debates often fall into two extreme camps:

“Tell students about growth mindsets — they’ll learn more!” or,

“Mindset research is nonsense; skip the whole thing.”

Ugh.

Perhaps we can do better?

Doing Better

Dan Willingham (I believe) has argued that contrary findings about growth mindset don’t exactly “disprove” Mindset Theory. Instead, they remind us that getting mindset strategies right takes precision and care.

We shouldn’t blithely think: “I’ll just do some mindset stuff now.”

Instead, we should think: “I need to ensure my mindset strategy aligns with research, and with my students, quite precisely.”

For instance: I’m skeptical that simply telling students about mindset — the most common strategy I hear about — has much enduring effect.

Instead, I think we need quiet and consistent classroom policies and procedures that reinforce Growth Mindset messages.

Obviously, if we tell our students that intelligence CAN change and act as if we believe it CAN’T, our actions reveal what really matters to us.

One Recent Example

One research group from Washington State wondered if the syllabus of a college course might be enough to communicate a professor’s mindset.

They created two mindset versions of a Calculus syllabus.

The Fixed Mindset Syllabus, for instance, said:

“If you have not mastered these concepts, you should consider dropping the course.”

“I do not take attendance in class [because] I do not penalize students with strong math abilities.”

It also had one heavily-weighted final exam.

The Growth Mindset Syllabus, by contrast, said:

“If you have not mastered these concepts, you should see me or a teaching assistant and we will provide resources.”

“All students will learn something new and attending class is the best way to learn.”

This syllabus had many exams, equally weighted.

Sure enough: both men and women assumed a) that the professor who wrote the FM syllabus indeed had a fixed mindset, and b) that this professor probably assumed that women are “naturally worse at math” than men.

And, women (but not men) who read the FM syllabus did worse on a subsequent math test than those who read the GM syllabus.

Beyond the Syllabus

These perceptions, it turns out, influenced learning beyond the syllabus.

This research team had students rate their professors’ mindsets.

In 46 courses across the university, students — both male and female — rated their STEM professors’ mindsets similarly. That is: some professors rated strongly at the fixed mindset end of the scale — and the students’ gender didn’t matter in that rating.

And, both male and female students assumed that fixed-mindset professors believed that “women struggle to do well in advanced math.”

Sure enough: men had higher average grades in classes taught by FM professors. Women had higher average grades in classes taught by GM professors.

In other words: those syllabus policies — combined with other classroom factors — influence students’ learning.

It might be hard to identify exactly what causes this effect, but mindset certainly seems to be an important part of the equation.

What Should K-12 Teachers Do?

Few pre-college teachers have a syllabus with the gravitas of a college syllabus.

We do, however, have policies and procedures, and we talk about them with our students. This study (and many others) encourages us to rethink those policies with their mindset implications in view.

For instance: does our rewrite policy suggest we think that students can get smarter? (I say to my students: “If you say to me you want to work harder and learn more, why would I say no to that? OF COURSE you can revise the essay!”)

Do we have different policies for “smart students” than “other-than-smart students”?

Do we — heaven help us — have a program for students we call “Gifted”?

In brief: we should not think of mindset as a topic we discuss once in a special program.

Instead, we should consider — bit by bit, day by day — what signals we send to our students. If the language we use, the policies we communicate, the procedures we follow all demonstrate “I think you can get smarter,” our students just might believe us.

If we think they can, they will.

How Do Experts Think?
Andrew Watson

Perhaps you’ve heard the saying: “To a hammer, everything looks like a nail.”

It means, more or less, we see what we’re trained to see.

If I bring a problem to a plumber, she’ll think about it like a plumbing problem. An economist, like an economics problem. A general, a military problem.

What does research tell us about this insight? And, does that research give us guidance about teaching and learning?

The Geoscientists and the Balloon

A research team led by Dr. Micah Goldwater wanted to explore this topic.

So, they asked a few hundred people these questions:

“A balloon floating is like _________ because _________.”

“Catching a cold is like _________ because _________.”

Those who answered the question fell into four distinct groups:

Expert geoscientists — who had an MA or PhD in geoscience

Intermediate geoscientists — who were studying geoscience

Expert vision scientists — who had an MA or PhD in vision science

Non-expert adults — who had not studied science in college

Goldwater’s team wanted to know: how often would people offer causal analogies? “A balloon floating is like hot water rising in a cold sea because they result from the same underlying causal principle.”

Deeper still, they wanted to know how often people offer those causal analogies spontaneously, and how often they need to be prompted to do so. (The research details get tricky here, so I’m simplifying a bit.)

Archimedes Catches a Cold

Sure enough, expert geoscientists spontaneously offered causal analogies for the balloon question — because they have a relevant geoscientific rule, called “Archimedes’ principle.”

However, expert vision scientists did not spontaneously give causal analogies, because their branch of science does not include a causally relevant analogy.

And neither group spontaneously proposed many causal analogies for “catching a cold,” because neither field builds on underlying relevant principles.

This finding — along with other parts of Goldwater’s research — suggests this conclusion: hammers typically see nails.

That is: experts spontaneously perceive, contemplate, and understand new information (“floating balloons”) through core principles of their field (“Archimedes’ principle”) — even though balloons don’t come up very often in geoscience.

Teaching Implications: Bad News, and Good

As I visit schools, I often hear teachers say “I want my students to think like historians” or “think like scientists” or “think like artists.” To accomplish this goal, some pedagogies encourage us to give students “expert tasks.”

Alas, Goldwater’s findings (and LOTS of other research) suggest that this bar might be MUCH too high. It takes years — decades? — to “think like a researcher” or “think like a coach.”

Even people with PhD’s in vision science don’t think causally about floating balloons unless explicitly prompted to do so.

As Dan Willingham writes in Why Don’t Students Like School?, “cognition early in training is fundamentally different from cognition late in training” (127).

This message often feels like bad news.

All those authentic tasks we’ve been giving students might not have the results we had hoped for. It’s extraordinarily difficult for students to think like a mathematician, even when we give them expert math tasks.

However, I see glimmers of hope in this gloomy conclusion.

My students (I teach high school English) won’t think like literary critics. However, I think they can and do become “experts” in much smaller sub-sub-sub-fields of English. (Warning: I’m about to switch from summarizing research to speculating about a classroom anecdote.)

When Comedy is Tragic

For instance: I recently gave my students a fairly complex definition of “comedy and tragedy.” This section of the unit required LOTS of direct instruction and LOTS of retrieval practice. After all: I’m the expert, and they’re novices.

My students then read a short story by Jhumpa Lahiri called “A Temporary Matter.” I asked them to look for elements of comedy and tragedy in that story.

Not only did they find those elements, they SPONTANEOUSLY pointed out Lahiri’s daring: she uses traditionally comic symbols (food, music, celebration, childbirth) as indicators of tragedy (“death and banishment”).

And, since then, they’ve been pouncing on tragic/comic symbolism, and other potentially innovative uses thereof.

These students aren’t (yet) expert literary critics. But on this very narrow topic, they’re starting to be flexible and inventive — a sign of budding expertise.

As long as I have a suitably narrow definition, a focused kind of pre-expertise is indeed a reasonable and achievable goal.

In Sum

Like lots of research in the field of “novices and experts,” Goldwater’s study warns us that experts really do think differently from novices, and that true expertise takes years to develop.

However, that insight shouldn’t scare us away from well-defined tasks that build up very local subsections of developing expertise. Our students aren’t yet capital-E Experts. And, the right-sized educational goals can move them towards ultimate Expertise.

 

Teachers’ Gestures Can Help Students Learn
Andrew Watson

Over the years, I’ve written about the importance of “embodied cognition.”

In other words: we know with our brains, and we know with and through our bodies.

Scholars such as Dr. Susan Goldin-Meadow and Dr. Sian Beilock have done splendid and helpful work in this field.

Their research suggests that students might learn more when they make the right kind of gesture.

Other scholars have shown that — in online lectures — the right kind of pointing helps too.

What about the teachers’ gestures? Can we help students learn through the way we use our hands?

Dr. Celeste Pilegard wanted to find out.

Steamboats, East and West

Pilegard invited college students to watch brief video lectures. The topic: the differences between Eastern and Western steamboats. (You think I’m joking. I’m not joking.)

These students watched one of four versions:

In the first version, the teacher’s gestures focused on the surface features of the steamboats themselves (how deep they sit in the water, for instance).

In the second version, the gestures focused on the structure of the lesson (“Now I’m talking about Eastern steamboats, and NOW I’m talking about Western steamboats.”).

Third version: gestures emphasized BOTH surface AND structural features.

Fourth version: a control group saw a video with neutral, content-free gestures.

Did those gestures make a difference for learning?

Pilegard, in fact, measured learning in two ways:

Did the students remember the facts?

Could the students apply those facts by drawing inferences?

So, what did she discover?

No, but Yes

Researchers typically make predictions about their findings.

In this case, Pilegard predicted that neither the surface gestures (about steamboats) nor the structural gestures (about the logic of the lesson) would help students remember facts.

But, she predicted that the structural gestures would help students draw inferences. (“If a steamboat operates on a shallow river, what does that tell you about the pressure of the steamboat’s engine?”) Surface gestures, she predicted, would not improve inferences.

Sure enough, Pilegard was 2 for 2.

Watching gestures didn’t help students remember facts any better. However, students who watched structural gestures (but not surface gestures) did better on inference questions. (Stats types: the Cohen’s d was 0.39; an impressive bonus for such a small intervention.)
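For readers who want a feel for that statistic: Cohen’s d is simply the difference between two group means, divided by their pooled standard deviation. Here’s a minimal sketch — with made-up scores, not Pilegard’s data — showing how a d of roughly that size is computed:

```python
import statistics

# Hypothetical inference-test scores (NOT Pilegard's data), chosen so
# the effect size lands near the d = 0.39 reported in the study.
structural_gestures = [72, 80, 85, 78, 90, 83]
surface_gestures    = [70, 78, 82, 76, 87, 81]

def cohens_d(group_a, group_b):
    """Mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

print(round(cohens_d(structural_gestures, surface_gestures), 2))  # 0.39
```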

When Pilegard repeated the experiment with a video on “innate vs. acquired immunity,” she got the same results.

Implications and Cautions

As teachers, we know that every little bit helps. When we use gestures to reinforce the underlying logical structure of our explanations, doing so might help students learn more.

As we plan, therefore, we should be consciously aware of our lesson’s logical structure, and think a bit about how gestures might reinforce that structure.

At the same time, regular readers know that all the usual cautions apply:

We should look at groups of studies, not just one study.

Pilegard’s research focused on college students. Will this strategy work with other students? We don’t know for sure.

These video lessons were quite short: under two minutes each. Will this strategy work over longer periods of time? We don’t know for sure.

In other words: this research offers a promising strategy. But to have greater confidence, we need more research — with students who resemble our own, and with lessons that last longer.

I myself do plan to think about gestures for upcoming lessons. But I won’t ignore all the other teaching strategies (retrieval practice, cognitive load management, etc.). Here’s hoping that future research can point the way…


By the way:

Teachers often ask how they can get copies of research to study it for themselves.

Easy answer #1: Google Scholar.

If that doesn’t work, I recommend easy answer #2: email the researcher.

In this case, I emailed Dr. Pilegard asking for a copy of the study — and she emailed it to me 11 minutes later.

In honor of her doing so, I’m creating the Pilegard Award for Prompt Generosity in Sharing Research with People who Email You Out of the Blue.

No doubt it will be much coveted.

 

Handwriting Improves Learning, Right?
Andrew Watson

Here’s a good rule for research: if you believe something, look for research that contradicts your belief.

So, if you think that retrieval practice helps students learn, see if you can find research showing the opposite.

If you disapprove of cold-calling, see if any studies support its use.

If you think that hand-written notes help students more than notes taken on a laptop, try to find research that disagrees with you.

In this last case, you might even find me. Most teachers I know believe that handwritten notes are superior, and they cite a well-known study to support that belief.

I’ve argued for years that this research assumes students can’t learn how to do new things – a very odd belief for a teacher to have. If you believe students can learn how to do new things, well, this study actually suggests that laptop notes will help more than handwritten notes.

However, the “good rule” described above applies to me too. If I believe that we don’t know whether handwriting or keyboarding is better for learning, I should look for evidence that contradicts my belief.

For that reason, I pounced on a recent science news headline. The gist: recent research by Robert Wiley and Brenda Rapp shows that students who wrote by hand learned more than those who used laptops.

So, does their research finally contradict my belief?

Learning Arabic Letters

Wiley and Rapp had college-age adults learn Arabic letters.

12 of them learned by pressing the right key on a keyboard.

12 learned by looking at the letters closely and confirming they were the same.

And, 12 learned by writing the letters.

Did these distinct learning strategies make a difference several days later?

YES THEY DID.

The hand-writers learned a lot more, and learned a lot faster.

In fact – here’s a cool part – their learning transferred to new, related skills.

These participants practiced with letters. When Wiley and Rapp tested them on WORDS, the hand-writers did better than the other two groups – even though they hadn’t practiced with words.

So: sure enough, handwriting helped students learn more.

Boundary Conditions

Given the strength and clarity of these findings, you might think that I’m going to change my mind.

Reader, I am not. Here’s why:

This research shows that writing by hand helps people learn how to write by hand. It also helps people learn to do things immediately related to writing by hand – like, say, saying and writing words.

We should notice the narrow boundaries around that conclusion.

People who write by hand learn how to write by hand.

That research finding, however, does NOT demonstrate that writing by hand helps people learn things unrelated to handwriting itself.

For instance: do handwritten notes help people learn more about history or psychology or anatomy than laptop notes? This research does not answer that question, because that question falls outside the boundaries of the research.

In a similar way: practicing scales on the piano surely helps you play piano scales better than – say – watching someone else do so.

But: does practicing piano scales make me better at other tasks requiring manual dexterity? Knitting? Keyboarding? Sculpting?

To answer those questions, we have to research those questions. We can’t extrapolate from piano scales to knitting and sculpting. (Well: we can, but we really shouldn’t.)

So, What’s The Answer?

Is handwriting really a better way to learn than keyboarding?

Honestly, I just don’t think we know. (In fact, Wiley and Rapp don’t claim that handwriting helps anywhere other than learning and reading letters and words.)

In fact, I suspect we need to explore MANY other variables:

the content being learned,

the teacher’s strategy for presenting it,

the student’s preference,

the student’s age –

perhaps even the relative complexity of writing vs. keyboarding. (I’m not an expert in this topic, but I understand that some languages require very intricate steps for accurate keyboarding.)

We can say – thanks to Wiley and Rapp – that handwriting helps learn how to write by hand. But until we explore those other precise questions precisely, we shouldn’t offer strong answers as if they have research support.

 

Why Don’t My High-School Students Just Follow My Advice?
Andrew Watson

I’ve been teaching for several centuries now. You’d think my students would believe me when I tell them how to make their sentences better. Or how to interpret literary passages. Or how to succeed in life.

Why don’t they?

Recent research suggests one potential answer: because my advice isn’t very good.

Here’s the story…

London Calling

A research team in London, led by PhD student Madeleine Moses-Payne, conducted research into adolescent metacognition: teens’ ability to assess the correctness of their own judgments.

And, they looked at teens’ willingness to accept advice — good and bad — from adults.

In this case, the metacognition and “advice” were about a kind of space-themed video game. The participants had to determine — as quickly as possible — if there were more of species X or species Y on a planet.

The species were simply blobs in different colors. So, the participants made a snap judgment: are there more blue or more yellow blobs on the screen? (You can see some images from the study here.)

After the participants made their guess, they then rated their own confidence in their judgment; that’s the metacognition part.

And occasionally they got guidance from a “space advisor,” saying either “there are more blue blobs” or “more yellow blobs.” Most of the time (70%), the advisor was correct; 30% of the time, it was wrong.
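To see how that advice structure works, here’s a toy simulation — my own sketch, not the researchers’ code — of an advisor who is right on 70% of trials:

```python
import random

# Toy simulation of the study's "space advisor": correct on 70% of
# trials, wrong on the other 30%. (An illustration, not the study's code.)
random.seed(1)

def advisor_says(truth):
    """Report the true answer with probability 0.7; otherwise, the wrong one."""
    if random.random() < 0.7:
        return truth
    return "yellow" if truth == "blue" else "blue"

trials = ["blue" if random.random() < 0.5 else "yellow" for _ in range(1000)]
accuracy = sum(advisor_says(t) == t for t in trials) / len(trials)
print(f"Advisor accuracy over 1000 trials: {accuracy:.0%}")  # ~70%
```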

What did researchers learn by putting all these variables together?

This Depends on That

Moses-Payne’s methodology included 3 age groups: children (8-9 years old), early adolescents (12-13), and late adolescents (16-17).

She wanted to know if data patterns changed with time. Here’s what she found:

First: adolescents (both early and late) were better at metacognition. That is, their confidence in their judgment aligned more precisely with the quality of their guesses, compared to the children.

Second: adolescents rejected more adult advice than children did.

And, here’s the kicker:

Third: adolescents rejected more bad advice.

That is: children lacked metacognitive certainty in the correctness of their judgments. Therefore, they let adult advice — even bad advice — sway their decision making.

However, adolescents had more accurate metacognitive confidence in their judgment. Therefore, they accepted good advice when they weren’t certain, but rejected bad advice when they were certain.

In Moses-Payne’s pithy summary:

adolescents, in contrast to children, take on others’ advice less often, but only when the advice is misleading.

So: why do my students resist my advice? Maybe they resist it when I’m wrong…

Not So Fast

So far, this research design makes a lot of sense, and leads to a helpful — and usefully provocative — conclusion.

At the same time, I think we should notice the important limitations of its conclusions.

In this research, the “advice” was either a correct or an incorrect answer about perceiving the relative number of colored blobs on a screen.

It was not, say, advice about career choice, or about the best strategy to use when solving a math problem, or about when to listen to your mother. (ALWAYS listen to your mother.)

Most of the time, in fact, we don’t use the word “advice” to describe information that’s factually correct or incorrect. “Advice” is usually an experienced-based opinion, not the correct answer to a question.

And so: this research does provide a helpful look at adolescent development.

Teens improve their metacognitive awareness of their own right/wrong answers.

They can use that information to guide decision making effectively.

It does NOT, however, give us a comprehensive new framework for thinking about advising teens (“Don’t worry if they reject your advice — it must have been wrong if they did!”).

I suspect adults will still give teens advice. And, they’ll accept some and reject some. And we’ll still be puzzled when they do.

And — if we’re high school teachers — we’ll still think they’re awesome anyway.

Let’s Get Practical: What Works Best in the Classroom?
Andrew Watson

At times, this blog explores big-picture hypotheticals — the “what if” questions that can inspire researchers and teachers.

And, at times, we just want practical information. Teachers are busy folks. We simply want to know: what works? What really helps my students learn?

That question, in fact, implies a wise skepticism. If research shows a teaching strategy works well, we shouldn’t just stop with a study or two.

Instead, we should keep researching and asking more questions.

Does this strategy work with …

… older students as well as younger students?

… history classes as well as music classes as well as sports practice?

… Montessori classrooms, military academies, and public school classrooms?

… this cultural context as well as that cultural context?

And so forth.

In other words, we want to know: what have you got for me lately?

Today’s News

Long-time readers know of my admiration for Dr. Pooja Agarwal.

Her research into retrieval practice has helped clarify and deepen our understanding of this teaching strategy.

Her book, written with classroom teacher Patrice Bain, remains one of my favorites in the field.

And she’s deeply invested in understanding the complexity of translating research into the classroom.

That is: she doesn’t just see if a strategy works in the psychology lab (work that’s certainly important). Instead, she goes the next step to see if that strategy works with the messiness of classrooms and students and schedule changes and school muddle.

So: what has she done for us lately? I’m glad you asked.

Working with two other scholars, Agarwal asked all of those questions I listed above about retrieval practice.

That is: we think that retrieval practice works. But: does it work with different ages, and various subjects, in different countries?

Agarwal and Co. wanted to find out. They went through an exhaustive process to identify retrieval practice research in classrooms, and studied the results. They found:

First: yup, retrieval practice really does help. In 57% of the studies, the Cohen’s d value was 0.50 or greater. That’s an impressively large result for such a simple, low-cost strategy.

Second: yup, it works in different fields. By far the most research is done in science and psychology (19 and 16 studies, respectively), but it works in every discipline where we look — including, say, history or spelling or CPR.

Third: yup, it works at all ages. Most research is done with college students (and, strangely, medical students), but it works in K-12 as well.

Fourth: most retrieval practice research is done with multiple choice. (Yes: a well-designed multiple choice test can be retrieval practice. “Well-designed” = “students have to THINK about the distractors.”)

Fifth: we don’t have enough research to know what the optimal gap is between RP and final test.

Sixth: surprisingly, not enough classroom research focused on FEEDBACK. You’d think that would be an essential component…but Team Agarwal didn’t find enough research here to draw strong conclusions.

Seventh: Of the 50 studies, only 3 were from “non-Western” countries. So, this research gap really stands out.

In brief: if we want to know what really works, we have an increasingly clear answer: retrieval practice works. We had good evidence before; we’ve got better evidence now.

Examples Please

If you’re persuaded that retrieval practice is a good idea, you might want to be sure exactly what it is.

You can always use the “tags” menu on the right; we blog about retrieval practice quite frequently, so you’ve got lots of examples.

But, here’s a handy description (which I first heard in Agarwal and Bain’s book):

When students review, they put information back into their brains. So: “rereading the textbook” = “review,” because students try to redownload the book into their memory systems.

When students use retrieval practice, they take information out of their brains. So, “flashcards” = “retrieval practice,” because students have to remember what that word means.

So:

Reviewing class notes = review.

Outlining the chapter from memory = retrieval practice.

Short answer questions = retrieval practice.

Watching a lecture video = review.

When you strive for retrieval practice, the precise strategy is less important than the cognitive goal. We want students to try to remember before they get the correct answer. That desirable difficulty improves learning.

And, yes, retrieval practice works.