
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

“Students Simply Cannot Improve”: Handwritten Notes vs. Laptop Notes
Andrew Watson

I disagree with the title of this blog post. I believe students CAN improve at taking notes. This post is my attempt to convince you that they can, and that we teachers can help them.


Over the years, I’ve written about the Laptop-Notes vs. Handwritten-Notes Debate several times.

I’ve tried to persuade readers that although people really like the idea that handwritten notes are superior, the research behind that claim isn’t persuasive:

I’ve argued that the well-known study with a clever name (“The Pen is Mightier than the Keyboard”) is based on a bizarre assumption: “students cannot learn how to do new things.” (If they can’t, why do schools exist?)

I’ve also argued that the recent study about neural differences between the two note-taking strategies doesn’t allow us to draw strong conclusions, for two reasons:

First: we can’t use it to say that handwriting helps students remember more than keyboarding because the researchers didn’t measure how much students remembered. (No, honestly.)

Second: the students who typed did so in a really unnatural way — one-finger hunt-n-peck. Comparing a normal thing (handwriting) to an abnormal thing (hunt-n-peck) doesn’t allow for strong claims.

So, I’ve been fighting this fight for years.

Both of the research approaches described above overlook the most straightforward strategy of all: measure who learns more in real classrooms — handwriters or keyboarders!

A group of researchers recently asked this sensible question, and did a meta-analysis of the studies they found.

The results:

Students who use laptops write more words; 

Students who take handwritten notes score higher on tests and exams.

So: there you have it. Handwritten notes REALLY DO result in more learning than laptop notes. I REALLY HAVE BEEN WRONG all this time.

Case closed.

One More Thing…

Like the TV detective Columbo, however, I have just a few more questions I want to explore.

First: I think the case-closing meta-analysis shows persuasively that handwritten notes as students currently take them are better than laptop notes as students currently take them.

But it doesn’t answer this vital question: can students learn to take notes better?

After all, we focus SO MUCH of our energy on teaching better; perhaps students could also do good things better.

If the answer to that vital question is “no” — students CAN’T take better notes — then obviously they should stick with handwriting. This meta-analysis shows that that’s currently the better strategy.

But if the answer is “yes” — students CAN take better notes than they currently do — then that’s really important news, and we should focus on it.

I myself suspect the answer to that question is “yes.” Here’s why:

The 2014 study — “The Pen is Mightier than the Keyboard” — argues that students benefit from taking notes when they do two things:

First: when they write more words. In their research, students who wrote more words remembered more information later on.

Second: when they reword what the teacher said. Students who copied the teacher’s words more-or-less verbatim remembered LESS than those who put the teacher’s ideas into their own words.

This second finding, by the way, makes lots of sense. Rewording must result from thinking; unsurprisingly, students who think more remember more.

1 + 1 > 1

Let’s assume for a moment that these research findings are true; students benefit from writing more words, and they benefit from rethinking and rewording as they write.

At this point, it simply makes sense to suspect that students who do BOTH will remember even more than students who do only one or the other.

In other words:

“Writing more words” + “rewording as I write”

will be better than

only “writing more words” or

only “rewording as I write.”

Yes, this is a hypothesis — but it’s a plausible one, no?

Alas, in the current reality, students do one or the other.

Handwriters can’t write more words — it’s physically impossible — but they do lots of rewording. (They have no choice: because they can’t write as fast as the teacher speaks, they must reword the concepts to get them down.)

Keyboarders write more words than handwriters (because typing is faster than handwriting). But they don’t have to reword — they can write as fast as the teacher says important things.

But — wait just a minute!

Keyboarders DON’T reword…but they could learn to do so.

If keyboarders write more words (which they’re already doing) and put the teacher’s idea into their own words (which they’re not currently doing), then they would get BOTH BENEFITS.

That is: if we teach keyboarders to reword, they will probably get both benefits…and ultimately learn more.

In brief: it seems likely to me that laptop notes — if correctly taken — will result in more learning than handwritten notes. If that hypothesis (untested, but plausible) is true, then we should teach students how to take laptop notes well.

I should say: we have specific reason to suspect that students can learn to use both strategies (more words + rewording) at the same time: because students can learn new things! In fact: schools exist to help them do so.

Contrary to my blog post’s title, students really can improve if we help them do so.

Optimism and Realism

I hear you asking: “Okay, what’s your actual suggestion? Get specific.” That’s a fair question.

I – optimistically – think schools should teach two skills:

First: keyboarding. If students can touch-type, they’ll be able to type MANY more words than untrained keyboarders or handwriters.

Remember, the recent meta-analysis shows that students who keyboard – even if they aren’t touch typists – write more words than handwriters. Imagine the improvement if they don’t have to hunt-n-peck to find the letter “j,” or the “;”.

Second: explicitly teach students the skill of rewording as they type. This skill – like all new and counter-intuitive skills – will require lots of explanation and lots of practice. Our students won’t change their behavior based on one lesson or one night’s homework.

However, if we teach the skill, and let them practice over a year (or multiple years) students will gradually develop this cognitive habit.

The result of these two steps: students will touch-type LOTS more words, and they will reword their notes as they go. Because they get BOTH benefits, they will learn more than the students who do only one or the other.

Now, I can hear this realistic rejoinder: “Oh come on: we simply don’t have time to add anything to the curriculum. You want us to teach two more things? Not gonna happen.”

I have two responses:

First: “developing these two skills will probably help students learn other curricular topics better. Extra effort up front will probably speed up learning (in some cases/disciplines/grades) later on.”

If my untested hypothesis is correct, that up-front investment will pay off as faster progress later in the year.

Second: “I accept the argument that perhaps we can’t add anything to the curriculum. However, we should admit that handwriting is the second-best option. Keyboarding – correctly done – is probably better than handwriting for notes; handwriting is the fallback position because we prioritize other skills.”

In brief:

The title of this blog post is incorrect. Students CAN learn how to do new things – like take better notes by keyboarding well. We might choose not to teach them how to do so, but we should be honest with ourselves that the limitation is in our curriculum, not in our students’ abilities.


Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159-1168.

Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: a high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Flanigan, A. E., Wheeler, J., Colliot, T., Lu, J., & Kiewra, K. A. (2024). Typed versus handwritten lecture notes and college student achievement: A meta-analysis. Educational Psychology Review, 36(3), 78.

A Skeptic Converted? The Benefits of Narrative
Andrew Watson

Let’s imagine that I tell you about a cool new research finding: singing helps students learn!

On the one hand, I suspect you’d be really excited. After all, learning is really difficult. Almost ANYTHING that helps students learn is worth a teacher’s attention.

On the other hand…you probably have some important questions:

Who is doing the singing? The teacher, or the students? Or, are they singing together? Or, are they listening to a recording of someone else singing?

What are they singing? A song about the material they’re about to learn (say, the periodic table)? Or, can they sing any old song? (Sondheim’s “Moments in the Woods” is worth endless singing!)

How many times should the students sing to get the learning benefits?

And so forth.

If we don’t know the answers to these questions, how will we make this singing technique work?

On this note: Several months ago, I asked this same set of questions about the common statement that “narrative is psychologically privileged…”

More Questions

I regularly hear that students learn material better if it’s presented as a story than as exposition.

On the one hand, it seems clear that stories are easy to remember — they are “psychologically privileged.”

But this claim helps teachers if and only if teachers know what “a narrative” is, and can easily translate our content into a narrative.

If I want my 10th graders to know the difference between a gerund and a participle, how can I make that distinction into a story?

Or…wait a minute…can I just have my students read a story and point out the gerunds and the participles within it? Does that count?

Or…wait a minute…can a math teacher read a story that says “Gretel told Hansel that 3 + 3 = 6”? Does that count?

If we teachers don’t know exactly what counts as a story, and how to translate exposition into stories, then the “psychological privilege” of narrative is interesting but not helpful. (SO MUCH of education research is interesting but not helpful…)

In my earlier blog post, I asked people for guidance on that question — preferably research-informed guidance showing how to define narrative, and/or translate exposition into narrative.

Honestly, I didn’t get much.


But recently I found a research study that just might provide these essential insights.

Promising Steps

A thoughtful writer in this field recently highlighted this study: one that looks at the benefits of narrative for helping students learn.

Bingo! (Or, at least, potential bingo.)

This research offered at least two obvious benefits.

First: it provides at least an initial definition of narrative.

A narrative, according to Table 2, has a narrator; it takes place in the past; its events have a three-act structure: with a setup, a confrontation, and a resolution; it centers on humans who act and react; and so forth.

Second: it compares several options.

Students in Group A read a textbook-like, logically-structured explanation of stochastic molecular motion. Let’s call this “exposition only”:

The thermal energy leads not only to rotations or vibrations, whereby covalently bonded atoms move back and forth relative to each other, but also to translational movements, which move molecules from one location to another…

Students in Group B read a narrative version of that information, where the scientists who discovered the relevant information were the protagonists of a three-act story. Let’s call this “narrative only”:

Maud Menten quickly realized that the thermal motion of particles that Robert Brown had seen in his light microscope also resembled the random motion of molecules in cells…

Students in Group C got a combined approach. A narrative passage introduced the ideas and the problem; a subsequent passage — more expository — went into the technical details. We’ll call this strategy “narrative-then-exposition.”

In my thinking, this example provides a crucial detail. Notice that the narrative is NOT about the particles themselves. (“The particle got excited by the thing, and so did something cool!”) Instead, the narrative focuses on the scientists who made the relevant discoveries.

In other words, we might have some additional latitude when putting our narratives together. The story doesn’t have to be about the content itself.

However, that optimistic finding holds true only if these stories helped the students learn. So, did they?

PLOT TWIST

The narrative benefit depended on the students’ prior knowledge.

That is: students who knew relatively less about stochastic movement benefitted from the “narrative only” version.

Students who knew relatively more benefitted from the “narrative-then-exposition” version.

Now, the researchers have plausible theories about this finding. (Check out the “Coda” below if you’re curious.)

But to me, the more important news is that we can start to put together a framework for converting exposition (traditional textbook explanations) into narrative (story).

Step one: we’ve got a checklist of elements that make up a narrative; Table 2 in this study. If you want to translate exposition to narrative, you can start there.

Step two: you do NOT have to make the story about the content itself (“The poor green finch couldn’t break open the seeds. It was very sad that its genes would not be passed on…”).

Instead, you can focus on the PEOPLE who figured out the content (“Then Darwin realized that the finches with the thicker beak had an advantage over the ones with the narrower beak, so their genes were likelier to be passed on to the next generation…”).

Step three: prior knowledge matters. The more your students already know about a subject, the less important it is to recast it as a story.

Although I was initially skeptical, I’m starting to hope we might develop comprehensive guidelines for the exposition-to-narrative conversion process.

Don’t Stop Now

Of course, these initial guidelines need A LOT MORE WORK.

Some unanswered questions:

Does this psychological privilege apply in pre-college learning? (The studies I’ve found focus on adults.)

Does it hold true in all disciplines? I see how history and literature and theology (and even science) might be translated to narrative. But LOTS of math and foreign language and art instruction might defy this strategy. (How, exactly, would we make noun declensions into a story?)

When will narrative (potentially good!) spill over into “seductive details” (potentially bad!)?

And so forth.

Yes, I’m more optimistic than I was before I read this study. And: as far as I know, we’re only in early stages of understanding how best to use narratives to enhance learning.


Coda

I promised I’d explain the researchers’ theories about the differing benefits of “narrative” (for relative beginners) or “narrative-then-exposition” (for relative experts).

Here goes:

Students who knew relatively less, the researchers suspect, benefitted from the working-memory reduction created by the familiarity of a story structure. (Remember those 3 acts…)

Students who knew relatively more, in turn, didn’t need the WM reduction, but benefitted from the reactivation of prior knowledge prompted by the initial narrative. With that prior knowledge reactivated, they could then concentrate on the factual additions to their schema.

Interesting!


Tobler, S., Sinha, T., Köhler, K., & Kapur, M. (2024). Telling stories as preparation for learning: A Bayesian analysis of transfer performance and investigation of learning mechanisms. Learning and Instruction, 92, 101944.


The Rare Slam Dunk? Blue Light Before Bed
Andrew Watson

I spend A LOT of time on this blog debunking “research-based” certainties.

No, handwriting isn’t obviously better than laptops for taking notes.

No, the “jigsaw method” isn’t a certain winner.

No, “the ten minute rule” isn’t a rule, and doesn’t last ten minutes.

No, dopytocin isn’t worth your attention. (Ok, I made up the word “dopytocin.” The blog post explains why.)

And so forth.

For that reason, I was DELIGHTED to discover a strong “research-based” claim that might hold up to scrutiny. Here’s the story…

The Quest Begins…

A colleague posted a simple question on eXTwitter; it went something like this:

I’ve always heard that “blue light from screens before bed” interferes with sleep — in fact, I’ve heard “science says so.” But: I just realized that I don’t actually know anything about the science. Now I’m wondering if it’s true…

Well, NOW YOU’VE GOT MY ATTENTION.

I too have frequently heard the “blue light before bed” claim. I too had understood that “research says so.” I too had never explored that research. It’s obviously time to saddle up.

I confess I did so with a small sense of dismay.


By nature, I’m not a contrarian. I don’t want to go around saying “that thing that everyone believes is ‘based on science’ has no good research support that I can find.” I’m not trying to lose friends and upset people.

But, as you can see from the list above, LOTS of people say “you should do X because research proves Y” — even though research REALLY does not consistently support Y. (Did I mention the “ten minute rule”?)

Fearing that I would — once again — find myself contradicting commonly held wisdom, I dove in.

As is so often the case, I started at elicit.org. (Important note: NOT illicit.org. I have no idea what’s at illicit.org…but the idea makes me nervous. This is a G-rated blog!)

I put this question into its search box: “Does blue light from screens interfere with sleep?”

And I held my breath…

Treasure Unveiled

Come to find out: elicit.org has strong opinions!

Blue light from screens has been shown to interfere with sleep in several ways. It can shorten and worsen sleep quality, delay the onset of sleep, and increase feelings of fatigue (Kurek 2023). However, using amber filters on smartphone screens can improve sleep quality by blocking the short-wavelength blue light (Mortazavi 2018).

Wearing color-tinted lenses to filter short-wavelength light exposure before sleep may also improve sleep, particularly in individuals with certain conditions (Shechter 2020). Exposure to blue-enriched light at low room light levels can impact homeostatic sleep regulation, reducing frontal slow wave activity during the first non-rapid eye movement episode (Chellappa 2013).

I like Elicit because it provides an ENORMOUS amount of data. I won’t attempt to list or summarize all the studies it highlighted. But I’ll point to a few factors that made this claim especially compelling.

Boundary conditions: studies have found this claim to be true across a variety of age groups. This conclusion doesn’t apply simply to a niche-y cohort (say, people in Atlanta who own bison farms); it applies wherever we look.

Positive and negative findings: studies show both that more blue light interferes with sleep and that blocking blue light improves sleep.

Recency: these studies all come from the last decade. In other words: Elicit didn’t pull them up from the late ’80s. We’re talking about the most recent findings here.

I could go on.

Don’t Stop Now

I believe it was Adam Grant who wrote:

Be the kind of person who wants to hear what s/he doesn’t want to hear.

That is: when I start to believe a particular “research-based” claim, I should look hard for contradictory evidence.

If the evidence in favor outweighs the evidence against — and only if the evidence in favor outweighs the evidence against — can I start to believe the claim.

So, I started looking for contradictory evidence.

Here’s an easy strategy: google the claim with the word “controversy.” As in:

“Blue light interferes with sleep controversy”

or

“blue light interferes with sleep myth”

If such a controversy exists, then that google search should find it.

Sure enough, those two searches started to raise some interesting doubts.

I found two articles in the popular press — one in Time magazine, the other in Salon — pointing to this recent study. In it, researchers studied mice and found that yellow light — not blue light — seems the likelier candidate to be interfering with sleep.

Honestly, I found the technical language almost impenetrable, but I think the argument is: yellow light looks more like daylight, and blue light looks more like twilight. So: yellow light (but not blue light) is likelier to interfere with the various chemical processes that point the brain toward sleep. And: that’s what they found with the mice.

Of course, studies in mice are interesting but never conclusive. (One of my research mottos: “Never, never, never change your teaching practice based on research into non-human animals.”)

So, what happens when we test the yellow-light/blue-light hypothesis in humans?

So Glad You Asked…

Inspired by that mouse study, another researcher checked out the hypothesis in humans. She measured the effects of evening exposure to light along various wavelengths — yellow, yellow/blue, blue — and found…

nada.

As in, no wavelength combo had a different effect than any other.

However — this is an important “however” — the study included exactly 16 people. So, these results deserve notice, but don’t overturn all those other studies about the dangers of blue light.

After all this back-n-forth, where do all these research findings leave us?

First: we do indeed have LOTS of research suggesting that blue light interferes with sleep.

Second: that research has been questioned recently and plausibly. But, those plausible questions don’t (yet) have lots of mojo. Mice + 16 people don’t add up to a fully persuasive case.

By this point, I’ve spent about three hours noodling over this question, and I’m coming around to this point of view:

Maybe the problem isn’t the blue light; maybe it’s the BRIGHT light. Yellow, blue, pink, whatever.

So, rather than buy special glasses or install light filters, I should put down my iPad and read from paper once I get in bed.

I should say that this conclusion isn’t exactly “research based.” Instead, it’s a way of accepting a muddle in the scientific results, and trying to find a good way forward.

This approach guides me in my classroom work, and now it will guide me when it comes to my Kindle as well.

When Experience Contradicts Research: The Problem with Certainty
Andrew Watson

A friend recently told me about his classroom experience using mindfulness to promote thoughtful and effective writing.


He started the year by explaining the benefits of mindfulness to his students. After that introduction, he began each class period with five minutes of mindful silence.

Although he wasn’t running a research study, he kept two kinds of “data.”

First: his own impression is that students got appreciably better at adding insight, depth, and detail to their writing.

For instance, instead of saying “the mood was optimistic,” they might write “the sun came out” or “I could hear lively music in the background.”

They also “got stuck” less often during the writing process.

Second, he surveyed his students at the end of the year.

He got LOTS of positive responses. One wrote:

I was surprised by how much I looked forward to meditation as the weeks went on, it helped calm me down before big assignments (like practice exams or actual tests) or just gave me a breather during stressful moments.

Another:

I thought meditation was very helpful in class this year because it helped me focus on my clarity of mind. I especially liked it before writing essays because it relaxed me and helped my thoughts flow clearer I think.

A third:

I would start the year by doing it everyday. I’ve started implementing it in my home life and have felt more present and conscious not only my daily interactions but also my thought process and decision making.

I could go on quoting like this for a few pages.

Based on this experience, my friend asked me what research shows about the effects of mindfulness in the classroom…

What Research Shows…

Alas, research into the classroom benefits of mindfulness doesn’t obviously align with my friend’s experience.

Yes, we do have some encouraging research about the ways that mindfulness can reduce stress.

Yes, we do have some correlational research showing a relationship between mindfulness and academic accomplishment.

But my honest opinion is that — so far — we don’t have a strong enough research pool to make inclusion of mindfulness programs a “research-supported” practice in schools.

In particular, we have an ENORMOUS recent study (over 8000 students!) showing that mindfulness training provided students with NO BENEFITS AT ALL, and perhaps (very small) increases in the likelihood of a few problems.

I’m game for the idea that mindfulness training might help students and teachers. But I don’t see enough consistent, high-quality research findings to champion the practice myself.

A Friend’s Quandary

So, a quandary:

My friend’s EXPERIENCE suggests that “brief mindfulness exercises help.”

But RESEARCH suggests that “mindfulness training doesn’t necessarily do anything.”

What should he do?

Equally important: what should YOU do if research suggests that one of your teaching practices a) doesn’t help, or b) might actually hurt?

Let me suggest a few steps.

STEP ZERO: even before we begin answering this question, I think it’s essential to admit that it’s both a challenging and an essential question. (So challenging and important that I wrote a book about it.)

Someone might say to you, “there’s an obviously easy answer to that question.” I think that person is wrong.

STEP ONE: recognize that both kinds of knowledge have their own benefits and weaknesses.

For instance, research helps us see long-term effects that we teachers most often miss.

We know that “short-term performance is an unreliable indicator of long-term learning.” (Thanks to Dr. Nick Soderstrom, @NickSoderstrom on Twitter.)

So, research can help us see when things that feel really good in the classroom right now don’t actually produce the long-term benefits we’re hoping for.

Likewise, research helps us overcome our biases. My friend works REALLY HARD to make his mindfulness intervention work. He devotes LOTS of time to it. (So do his students!)

NO WONDER he sees all the benefits! Well…research can help us see past that motivated reasoning.

Not So Fast…

At the same time, a teacher’s classroom experience provides insights that research just can’t spot.

Teachers combine variables. Researchers isolate variables. That is: we see combinations that researchers rarely explore — combinations like “daily mindfulness + writing.”

Also: research always exists within “boundary conditions.”

A research study done with 3rd grade Montessori students learning long division might — but might not! — apply to dyslexic 11th graders studying history at a military academy.

Unless we have SO MUCH research on a particular topic, a research-only perspective might miss the places that a technique DOES work because we’ve seen that it DOESN’T help in all these other places.

Teachers — however — might discover those places.

Don’t Stop Now

Okay, so we know that this quandary is an important question and requires complex answers (step 0); and we know that research and experience provide separate and useful kinds of knowledge (step 1).

What’s next?

STEP TWO: Get specific.

In that 8000+ person study: what exactly did they do? And: how well does “what they did” align with “what my friend did”?

In the 8000+ person study, they had students practice mindfulness for 10 weeks. They wanted to know if a) the students would keep doing mindfulness on their own after the 10 weeks, and b) if their mindfulness practice would help them — or not — according to 28 different criteria.

The answers were a) “nope, not really” and b) “nope, not at all.”

But: OF COURSE the students didn’t get the benefits of mindfulness (that’s b) because they didn’t continue the mindfulness exercises at home (that’s a).

Notice, however, that this research doesn’t align with my friend’s strategy. His students DID continue the mindfulness because he started every class with time for mindfulness.

True: students who don’t practice mindfulness don’t benefit from it; but my friend’s students might benefit because they had time to practice it.

In other words: that big study shouldn’t necessarily discourage my friend, because his strategy differs from their strategy in meaningful ways.

STEP THREE: Combine humility with determination.

Here’s the trickiest part.

As I’ve just argued, this big study might not apply to my friend’s approach.

AND: my friend’s “biased” perspective (we ALL have “biased” perspectives) might make it difficult for him to recognize the shortcomings in his approach.

For this reason, I think we have to push ourselves relentlessly to balance humility (“I should really focus on and respect research guidance!”) with determination (“My classroom experience is valuable and I should give it weight in my decision making!”).

But, gosh, that’s a difficult balancing act.

It’s tempting to pick one side or the other:

I shall do what research tells me!

or

My training and instincts matter most!

Instead, we should strive to give both sources of knowledge their due…and always doubt our own certainties.

An Excellent Example

Note, by the way, that my friend was doing just that.

After all, his own classroom experience — and his students’ enthusiastic feedback! — gave him all sorts of reasons to be entirely confident.

He might easily have said “research, schmeesearch” and gone ahead with his mindfulness routine.

Instead, he looked for a reason to doubt his own certainty; that is, he asked me what research has to say … knowing that it might not support his experience. (He had, after all, just finished reading my book on evaluating research-based teaching claims.)

He now has to decide the best way to proceed. And: in my view, he will do so all the more effectively because he allowed himself to doubt.

In this field, certainty is the enemy of improvement and excellence.

Should Students Annotate Their Texts? A Research Perspective
Andrew Watson

A few years ago, I visited an English Department meeting at a well-known high school. The topic under discussion: a recently published “English Department Guide to Annotation.”


After the meeting, one of the authors asked me what I thought of the guide and the research behind it.

I asked: “Well, what IS the research behind it?”

The teacher answered: “Oh, ALL the research says this.”

Although this answer (“ALL the research shows…”) is quite common, it always makes me nervous.

In the first place, it sounds like a dodge, doesn’t it? It’s the sort of answer I might give if I didn’t actually know much about the research.

In the second place: it’s never true. Because teaching and psychology and research are all so complicated, researchers NEVER get precisely the same answer when they study interesting, complex, and important questions about teaching.

Even retrieval practice — one of the most research-supported teaching strategies we have! — doesn’t have unanimous support in the research literature.

So: what does research say about annotating texts? I recently stumbled across a study that explores this question…

A Promising Start

A recently published study looks quite specifically at the benefits of teaching annotation. In this study, 125 8th grade students in a social studies class learned a specific method for annotation; another 125 served as the control group.

4 annotated paragraphs, with notes written in the margins, highlighted passages, and additional notes on a sticky note.
Example of annotation from Lloyd, Z. T., Kim, D., Cox, J. T., Doepker, G. M., & Downey, S. E. (2022). Using the annotating strategy to improve students’ academic achievement in social studies. Journal of Research in Innovative Teaching & Learning, 15(2), 218-231.

The students who learned this new method got LOTS of practice: at least 100 minutes over 6 weeks. The study used a “business as usual” control group, which means that the teachers just taught as they usually do for the students who didn’t learn about annotation.

The result: impressive!

In brief, the students who annotated thought the method was relatively easy to use. And, they scored higher on a reading-comprehension test.

If you speak stats, you’ll be impressed to see that the Cohen’s d was 0.46: an attention-getting number for a 6-week intervention.
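For readers who don’t speak stats: Cohen’s d is simply the difference between the two groups’ average scores, divided by their pooled standard deviation. Here’s a minimal sketch of that arithmetic in Python. (The scores below are hypothetical numbers I invented purely to illustrate the calculation; they are not the study’s actual data.)

```python
import math

def cohens_d(mean_treatment, mean_control, sd_treatment, sd_control,
             n_treatment, n_control):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treatment - 1) * sd_treatment**2 + (n_control - 1) * sd_control**2)
        / (n_treatment + n_control - 2)
    )
    return (mean_treatment - mean_control) / pooled_sd

# Hypothetical example: two groups of 125, means of 78 vs. 72, SD of 13 in each
print(round(cohens_d(78.0, 72.0, 13.0, 13.0, 125, 125), 2))  # → 0.46
```

The point of the standardization: a d of 0.46 means the annotating group scored almost half a standard deviation higher than the control group, regardless of the test’s raw point scale.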

So far, this study gives us reason to focus on teaching annotation.

Subsequent Concerns

And yet, I do have concerns: some specific to this study, and some more generally about applying research to classrooms.

SPECIFIC concern:

As you can see in the image above, the annotating method includes highlighting. Alas, researchers have looked at highlighting specifically, and found it to be largely unhelpful.

For instance, when John Dunlosky et al. evaluated the efficacy of various study strategies, they didn’t find much to recommend highlighting:

On the basis of the available evidence, we rate highlighting and underlining as having low utility. In most situations that have been examined and with most participants, highlighting does little to boost performance.

It may help when students have the knowledge needed to highlight more effectively, or when texts are difficult, but it may actually hurt performance on higher level tasks that require inference making. (Emphasis added.)

If this annotating method includes a widely-discredited strategy, I worry about the research behind it.

GENERAL concerns

As I’ve written before, I want to offer “research based teaching advice” only if LOTS of research supports the advice. You know your school, curriculum, and students better than I do — so I need SUBSTANTIAL reason to say “do this, not that.”

Alas, we have precious little research into the question of annotation.

I’ve used my go-to resources (scite.ai, connectedpapers.com, elicit.org), and found almost no research pointing one way or the other. (The study described above does mention other experiments … but as far as I can discover none of them focuses precisely on annotation.)

So, yes: we have ONE study saying that this annotation method helped 8th graders learn more in a social studies class. But I don’t think we should be very sure of that narrow finding…much less confident about extrapolating.

That is: do these results tell us anything about annotating in a high-school English class? I don’t think so.

Niche-y, But Important

A final concern merits a brief discussion here.

As noted above, the study uses a “business as usual” control group. That is: some students got A SHINY NEW THING (for 100 minutes!). And some students got … nothing special.

As you can imagine, we might easily conclude that the SHINY NOVELTY — not the annotation specifics — helped the students.

The study would have been more persuasive if the control group had learned a different reading comprehension strategy instead of annotation. In this case, we would have more confidence about the benefits (or lack of benefits) of annotation.

Generally speaking: when a study compares Something to Something Else, not Something to Nothing, it gives us greater reason to rely on its findings.

In Brief

When our students read, we want them to think about the text they’re reading.

We want them to …

… learn the ideas in the text,

… associate it with their prior knowledge,

… update their schema on the topic,

… consider weaknesses, questions, and omissions,

and so forth.

Is annotating the best way to ensure they do so? I’m not yet persuaded.

If you know of good research on the topic, I hope you’ll let me know!


Lloyd, Z. T., Kim, D., Cox, J. T., Doepker, G. M., & Downey, S. E. (2022). Using the annotating strategy to improve students’ academic achievement in social studies. Journal of Research in Innovative Teaching & Learning, 15(2), 218-231.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

Summer Plans: How Best to Use the Next Few Weeks
Andrew Watson
Andrew Watson

The summer stretches before you like a beach of relaxing joy. With a guilty-pleasure novel in one hand and an umbrella drink in the other, how should you best plan for the upcoming school year?


Let’s be honest:

You might want to give yourself a break. School is STRESSFUL. Some down time with your best friends — perhaps a refreshing walk in the woods — getting back into a fitness routine … all these sound like excellent ideas to me.

If, however, you’re the sort of person who reads education blogs in the summer, well, you might be looking for some ideas on refreshing your teaching life.

Since you asked…

The Essential Specifics Within the Big Picture

The good news about research-based teaching advice?

We have LOTS and LOTS of helpful suggestions!

The bad news about research-based teaching advice?

Well: we have LOTS and LOTS of helpful suggestions!! Probably too many suggestions to keep track of.

If only someone would organize all those suggestions into a handy checklist, then you might strategically choose just a few of those topics that merit your attention. If this approach sounds appealing to you, I’ve got even more good news:

You can check out Sherrington and Caviglioli’s EXCELLENT book Walkthrus. This book digests substantial research into dozens of specific classroom topics (how to value and maintain silence; how to create a “no opt out” culture). It then offers 5-step strategies to put each one into practice.

In a similar vein, Teaching and Learning Illuminated, by Busch, Watson*, and Bogatchek, captures all sorts of teaching advice in handy visuals. Each one repays close study — in the same way you might closely study a Walkthru.

With these books, you can do a deep dive into as many — or as few — topics as you choose.

School Policy

The hot topics in education policy world are a) cell phones and b) AI.

As everyone knows, Jonathan Haidt’s recent book has made a strong case for heavily restricting cell phone usage for children.

I think it’s equally important to know that LOTS of wise people worry that Haidt is misinterpreting complex data.

Schools and teachers no doubt benefit from reading up on this debate. My own view, however, is that we should focus on the effects that phones (and other kinds of technology) have in our own schools and classrooms. Create policies based on the realities you see in front of you — not abstract data about people who might (but might not) resemble your students.

As for Artificial Intelligence: I think the field is too new — and evolving too rapidly — for anyone to have a broadly useful take on the topic.

In my brief experience, AI-generated results are too often flukily wrong for me to rely on them in my own work. (Every word of this blog is written by me; it’s a 100% AI-free zone.)

Even worse: the mistakes that AI makes are often quite plausible — so you need to be a topic expert to see through them.

My wise friend Maya Bialik — one-time blogger on this site, and founder of QuestionWell AI — knows MUCH more about AI than I do. She recommends this resource list, curated by Eric Curts, for teachers who want to be in the know.

A Pod for You

I’m more a reader than a pod-er, but:

If you’re in the mood for lively podcasts, I have two recommendations:

First, the Learning Scientists routinely do an EXCELLENT job translating cognitive science research for classroom teachers.

Unsurprisingly, their wise podcast is still going strong after many years.

Second, Dr. Zach Groshell’s podcast — Progressively Incorrect — explores instructional coaching, math and reading instruction, current debates in education, and other essential topics.

You might start with his interview with fan favorite Dan Willingham.

(Full disclosure: I have appeared on both podcasts, and am friends with the people who run them.)

The Journey Ends at Its Beginning

But, seriously, give yourself a break. You’ve worked hard. Take the summer off. I bet you’ve got A LOT of shows to binge-watch in your queue…


* A different “Watson”: EDWARD Watson. As far as I know, we’re not related.


The Best Ways to Use ‘One-Pagers’
Andrew Watson
Andrew Watson

In recent years, our field has seen a great increase in ‘one-pagers’: handy summaries of research topics and suggestions — complete with headings and friendly graphics.

Given their rising popularity, I thought it would be helpful to offer guidance on using them well.

So: what strategies will help you (and through you, your students) benefit the most from one-pagers?

Strategy #1: Make Your Own

No, seriously.

The person who learns the most from a one-pager is the person who makes the one-pager.

For instance: to make a 1-pager on retrieval practice, you first need to …

… learn a lot about retrieval practice;

… organize all your findings, preferably with lively and helpful graphics;

… understand its boundary conditions and best uses;

… summarize your findings in clear, consistent, understandable language;

… decide what information to include, and what to leave out…

In other words, you need to do all those generative learning activities that are most likely to ensure that you learn and remember new ideas. (Click here for a brisk review of the Ensers’ excellent book on generative learning.)

So, my most emphatic recommendation for 1-pagers: make ’em.

Strategy #2: Edit a 1-Pager

Let’s say I find a 1-pager on taking good classroom notes.

Because I know this research pool fairly well, I can both read the 1-pager and have a conversation with it.

That is: I can comment on the passages where I agree (“I’m so glad someone else has found this study helpful!”) and also note those passages where I don’t (“No, the Mueller and Oppenheimer study really does not support that claim.”).

I could re-prioritize its suggestions, reword a sentence or two, even sketch in a quick graphic that better encapsulates the ideas.

In effect, this editing strategy simply riffs on the first one above (“Make Your Own 1-Pager”). By collaborating or arguing with another 1-pager, I’m getting many of the same cognitive benefits.

Notice one key point here:

To succeed at strategies 1 and 2, I need to know a fair amount about the topic.

That is, relatively speaking I’m an expert.

If I were a novice on the topics of retrieval practice and note-taking, I couldn’t really create or respond to a 1-pager very substantively.

Strategy #3: Use a Single 1-Pager Intensively Over Time

A few recent books have focused on 1-pager summaries of cognitive science in the classroom.

For instance, consider Walkthrus by Tom Sherrington and Oliver Caviglioli. This book chooses several dozen specific classroom topics — cold calling, seating charts — and offers precise 5-step guidelines for examining, honing, and reviewing our practice.

Each of these Walkthrus is, in effect, a 1-pager; each one invites us to spend several days steeping in its advice.

So, if you choose the Walkthru on … say … “scaffolding,” you’ll start by reviewing all five steps. Then you’ll focus on step one: “map out the components of a task”.

This step will take at least a day or two, and during those 48 hours you’ll spend a lot of time thinking about that step.

So too with step 2 (“provide supports at a detailed level”), and then again with step 3 (“provide supports at an overview level”).

As you can easily see, this strategy means you’ll spend a lot of time — days and days! — thinking about the information, ideas, and suggestions on this particular 1-pager. The result: you’ll learn a lot about this topic.

Notice the key here: focused work over time. We don’t get the benefit of walkthrus by reading Walkthrus. We get the benefit by choosing one of them and dwelling on it for meaningful periods of time.

Less Productive 1-pager Strategies

I’m offering the list above because I worry that the most common use of a 1-pager is also the least effective use.

Here’s the common, ineffective use:

Step 1: I don’t know much about (for example) worked examples.

Step 2: I come across a 1-pager on worked examples.

Step 3: I read it, and think “wow, that makes so much sense!”

Step 4: I go on about my day.

This process creates the illusion of understanding; I believe that I have learned something about worked examples.

Alas, as we well know, learning almost never happens this way. I don’t learn about worked examples by reading a 1-pager about them, but by grappling substantively with the concept over time.

In other words, I fear a 1-pager paradox:

They DO benefit the people who create them (strategy #1);

They DO benefit people who already know a great deal about a topic (strategy #2);

They DO benefit novices who devote days to intensive study and practice (strategy #3);

But they DON’T benefit beginners who use them the way their form invites: as a quick read.

If we teachers want to learn about any meaningful educational topic (working memory, entry routines, growth mindset, turn-n-talk, spacing and interleaving), we can’t do so simply by reading a 1-pager. We have to read a book, attend a workshop, take a class…and then think and practice and fail and start over.

We know that students learn by this iterative process of engaging, grappling, struggling, and transforming. Teachers learn this way too.



Adam Boxer (on ex/Twitter @AdamBoxer1) is one of the few people who also worries about this trend in public. Here’s a brief recent thread.

Piece-by-Piece PowerPoint: Exploring the “Dynamic Drawing Principle”
Andrew Watson
Andrew Watson

The plan for this blog post:

First: I’ll describe a particular teaching practice — one that I’ve used in my own work for quite a while.

Then: I’ll describe the research behind the practice — which I discovered just recently.

Finally: I’ll zoom back to offer a larger perspective.

The Potential Paradox of Dual Coding

Long-time readers know that I’m a fan of “dual coding.”

As Oliver Caviglioli has explained in his marvelous book, students learn more easily if our explanations include both visual and verbal representation.


That is: teachers should combine a description in words with a “stable visual representation” — a diagram or outline or graph. (Caviglioli offers dozens — probably hundreds — of examples.)

This technique, although mightily helpful, requires lots of thought to pull off effectively. One problem in particular has raised challenges in my own consulting work.

On the one hand, I want to clarify the various cognitive science concepts I’m describing — say, tripartite attention, or schema theory, or even dual coding itself — by creating “stable visual representations” of them.

On the other hand, those concepts include LOTS of important pieces. A rich understanding of them requires grokking those pieces and the relationships among them.

For this reason, my “stable visual representation” — which exists to reduce working memory load — might in fact overwhelm working memory. SO MANY pieces. SO MANY important relationships.

What’s a consultant to do?

It Just Might Work…

Over the years, I’ve developed a technique that a) makes sense to me, and b) just seems to work.

I start by creating the final version of the “stable visual representation” I want teachers to understand.

For instance, I regularly show Dr. Efrat Furst’s “pyramid model” to explain schema theory:

Furst Pyramid Model

To explore this model in depth, you can check out Dr. Furst’s EXCELLENT website here.

As you can see, this model includes an ENORMOUS number of concepts (and relationships among them).

For that reason, I don’t start with the model itself; instead, I break it down into incremental pieces and add to them one at a time. (This is the teaching technique I promised at the top.)

So, my first slide simply has the four large grey rectangles: the barest visual outline of the diagram.

Only then — one at a time — do I add in “novice” and then “advanced” and then “expert.” (I don’t even mention “teacher” at this point — because the core distinction between “expert” and “teacher” won’t make any sense until the first three stages do.)

With the rectangles and the labels in place, then I’ll start talking about the three “novice” triangles on the left — in fact, only the two dark gray ones on the lower row. Teachers have to understand the concepts they represent before adding the third top triangle will be helpful.

And so forth.

In other words: presenting this diagram takes probably 10 PowerPoint slides and several dozen (more than 100?) animations.

In this way — I hope — I can successfully dual code the concept of schema theory.

I’m saying the words, and gradually … very gradually … unveiling piece by piece by piece the stable visual representation.

Of course, I’m not the only person to work this way. For instance: in Adam Boxer’s top-notch book Teaching Secondary Science, Boxer describes his method for explaning the anatomy and function of the heart with just such a step-by-step unveiling. If you haven’t read this book yet, you REALLY should.

But Wait, We’ve Got Research…

Usually in this field I find research that gives me new teaching ideas.

Occasionally I find research that supports teaching practices I’m already doing.

In this case: I’ve long hoped that research supports my step-by-step dual coding practice. But, to be honest, I never knew of any such research.

Recently, a friend mentioned something called the “dynamic drawing principle” — and sure enough, I struck research gold (or something close to it).

The first set of studies on this topic was done by research greats Logan Fiorella and Richard Mayer.

Over a series of four studies Fiorella and Mayer reached several tentative conclusions:

Students learn more when they see a professor draw diagrams representing the concepts under discussion — rather than seeing the professor talk about an already-drawn diagram.

This effect held whether students saw the professor’s full body while drawing, or simply the hand as it drew.

In some cases, the students’ prior knowledge mattered to these results; in others, it didn’t.

In another, more recent study, Selen Turkay found that students learned more from animated drawing than from a video of a lecture, an audio recording of a lecture, or the finished drawings without the animation.

Although we don’t have lots-n-lots of research here, we have initial support for the teaching practice described above. That is: students seem to learn more when the dual coding diagram is created bit by bit in front of them, rather than revealed all at once.

That is, in a phrase, the “dynamic drawing principle.”

Not So Fast

If you’ve been reading closely, you’ve spotted a flaw in my claim that the dynamic drawing principle supports my piece-by-piece-PowerPoint method.

The flaw? Drawing.

My technique unveils and moves objects on the slides, but I don’t actually draw these diagrams live in front of the audience.

If you look at the complexity of Furst’s diagram, you can see it would be nearly impossible to do so. (And, I’m a terrible drawer. Friends even make fun of my handwriting…)

In fact, Fiorella and Mayer tested this very question. In their third experiment, some students watched a lecture with completed drawings, and others saw the drawings appear without a hand drawing them. (Fiorella and Mayer used an iPad drawing capture technology.)

Sure enough, students did NOT learn more from these handless live drawings than from completed drawings.

So, what will I do?

The Perfect Is the Enemy of the Good

As I see it, we have a small number of research studies which suggest…

… my piece-by-piece-PowerPoint idea is a good idea, but

… it might not help without live drawing.

That tentative research-based advice faces off against compelling experiences:

First: I simply could not possibly create these diagrams live. Not gonna happen. No ma’am.

Second: I regularly get compliments on my piece-by-piece method.

No, really. When I show my step-by-step visual explaining the functioning of working memory, for instance, teachers occasionally interrupt me to say “that was SO helpful.” (I am not making this up.)

And while I understand why live drawing might be better than PowerPoint revealing, I think my “dynamic-kind-of-drawing” technique is probably better than a Big Reveal of a complete, complex diagram.

For all these reasons, I’m going to keep going with my current practice.

And — I promise! — I’ll keep my eye out for further research that might persuade me to change my ways.


Fiorella, L., & Mayer, R. E. (2016). Effects of observing the instructor draw diagrams on learning from multimedia messages. Journal of Educational Psychology, 108(4), 528.

Turkay, S. (2022). Comparison of dynamic visuals to other presentation formats when learning social science topics in an online setting. Australasian Journal of Educational Technology, 38(5), 12-26.

Research Advice That’s New + Useful: Improve Learning by Reappraising Emotions?
Andrew Watson
Andrew Watson

Research benefits teachers if it gives us new, useful ideas.

We can feel relief and gratitude if research simply confirms our prior beliefs — that is, if it doesn’t give us “new” ideas — but we don’t necessarily reap substantial benefit from such confirmations.


Likewise, research that offers a new perspective but doesn’t inform our teaching feels disappointing. If I can’t do something with the research-based perspective, I’m not sure why I should dwell on it very long. (Most teachers just don’t have time for pointless dwelling…)

So: our sweet spot is, “research that gives new, useful advice.”

Strangely, research into emotions and learning can struggle to fit both those criteria.

For instance, we’ve got lots of research saying—in effect—“don’t be mean to your students.”

That advice sounds useful (criterion #2), but not particularly new (criterion #1). How many people come to Learning and the Brain conferences thinking: “I wonder if research encourages me to taunt my nine-year-olds…”?

So too, I’m glad to see research saying that “the teacher’s sense of humor can lift students’ spirits.” At the same time, that research doesn’t offer much new information; does anyone seriously think that humor is a bad thing?

And I’m not sure how useful such research is. If a teacher isn’t especially funny, the advice “You, be funny!” doesn’t sound very practical. (It’s hard to learn to be taller; it’s also hard to learn to be funnier.)

Because I don’t often find emotion research in this “new + useful” sweet spot, I don’t often write on this topic.

Today’s News

One researcher who does work in the “new + useful” zone is Dr. Sarah Rose Cavanagh, currently at Simmons University.

In a recent study, she and colleagues explored this sensible logical chain:

First: if students feel better during class, they just might learn more.

Second: we’ve got strategies to help students feel better.

Third: so, let’s see if those “feel better” strategies work in class, and do help students learn more!

This plan sounds so sensible. In fact, depending on the study’s findings, it might give us advice that is “new + useful”!

To check out this possibility, Team Cavanagh used two different “feel better” strategies.

The first included “cognitive reappraisal.” Students got brief training in deliberately rethinking their negative experiences. For instance, they were shown this prompt:

“IF I find myself becoming irritated and frustrated with my progress, my professor, or my peers, or find myself feeling lost and confused, THEN I will instead think that the best rewards in learning occur by working through initial confusion.”

You can feel the deliberate reappraisal process here: “instead of thinking THIS, I’ll choose to think THAT.”

The second strategy to help students manage negative emotions is the (more familiar) mindful meditation perspective. As part of their training, students got this prompt:

“IF I find myself becoming nervous about my performance in answering questions in class or on quizzes or tests, or about my grade in the class, THEN I will instead let this nervousness be, accepting it as it is, not trying to change it or make it go away.”

Cavanagh’s team also included a control group.

So, here are some of the key questions:

Did these “feel better strategies” work? Did the students rate their emotional state more positively after receiving them?

Did they help students learn more in the short term — that is, at the end of class?

How about the long term — that is, on the final exam?

What did Cavanagh’s team find?

So Many Envelopes

As you can see, Cavanagh’s study produced LOTS of data, and requires careful parsing.

To focus on a simple summary, Cavanagh found that most of those questions get the clear answer “NO.”

As in:

No, neither cognitive reappraisal nor mindful meditation improved the students’ ratings of their mood (compared to the control condition);

No, students didn’t think they learned any more — and (based on quiz results at the end of class) they didn’t learn any more.

Amidst all this “no” news, Cavanagh did get one “YES”:

Yes, students who used cognitive reappraisal (but not mindful meditation) remembered more information on the final exam.

In this one sentence, it seems to me, we’ve found research-based advice that’s both new + useful.

NEW: Although I’ve read about cognitive reappraisal in the past, I’ve never thought to train my students in doing so.

USEFUL: This intervention sounds quite simple to do…and produced the results I care about: long-term learning!

That’s a powerfully tempting combination.

Now I’m A Believer?

I don’t typically make strong recommendations based on one study. In this case, I’ve checked out my usual sources (scite.ai, connectedpapers.com, elicit.org), and found…not much. We just don’t have lots of research on the benefits of cognitive reappraisal in typical classrooms.

I am, however, drawn to this study for a few reasons.

First: the modesty of its conclusions inspires trust. Cavanagh’s own research disconfirmed most of her hypotheses — so I’m likelier to trust her and her team for the one that came through.

Second: it rhymes with other research I trust.

Specifically, mindful meditation has many enthusiastic proponents; I know lots of people who believe it will cure all sorts of school-based ills. However, as I wrote in 2022, an ENORMOUS study (8000+ participants!) showed essentially no benefit to mindfulness practices in schools.

I understand why this study included mindfulness as an option, and I don’t doubt there was real enthusiasm for this strategy. But Cavanagh got the same results as that 8,000-person study. This congruence — in the face of such potential pressures — increases my confidence.


For all these reasons, I will keep an eye out for more research on cognitive reappraisal and its classroom benefits. If you try this strategy in your classroom, I hope you’ll let me know how it goes.


Cavanagh, S. R., Lang, J. M., Birk, J. L., Fulwiler, C. E., & Urry, H. L. (2021). A multicourse, multisemester investigation of the impact of cognitive reappraisal and mindfulness instruction on short- and long-term learning in the college classroom. Scholarship of Teaching and Learning in Psychology, 7(1), 14.

Change My Mind, Please!
Andrew Watson
Andrew Watson

When was the last time you changed your mind about something important?

Perhaps you rethought a core political belief: gun control, or abortion, or a stance on international relations.

Maybe you gave up on one teaching practice to adopt a once-unthinkable alternative.

Just possibly, after a lifetime aversion to vegetables, you realized they’re delicious. (Wrap pencil-thin asparagus in prosciutto, give it a glug of olive oil and a sprinkle of parmesan, pop it on the grill…heaven.)

Now think back: what led you to change your mind? How did you stop believing one thing and start believing a contradictory position? What made the magic happen?

This broad question has specific importance here at Learning and the Brain.

After all: if we’re doing our job right, teachers will often leave our conferences and workshops thinking:

“Wow, I should really try this new way. Now that I know more about psychology and neuroscience research, I’ve got LOTS of ideas for improving my work, my classroom, and my school!”

To say the same thing in reverse: if everyone leaves a LatB event thinking EXACTLY what they believed before the event … well … why have the conference? What good did it serve?

Understandable Passion

To be fair, we at LatB have it relatively easy.

Most people come to our conferences wanting to get new ideas. Attendees are — for the most part — hoping that they’ll have new ways of thinking about teaching and learning. Mind-changing is a feature, not a bug.


In many (most?) educational spheres, however, our profession often prefers sturdy conviction over openness to new ideas.

Whether we’re debating the best approach for reading instruction, or high- vs. low- structure pedagogy, or the right way to use technology in schools, we have strong opinions.

And, because we have strong opinions, we want others to share those opinions.

This desire for others to share my opinions simply makes sense. For example:

If I know for certain that my approach to reading instruction is better than my colleague Lloyd’s approach, I want Lloyd to change his approach! Because mine is better!!

Everyone — from the individual student to society at large — suffers because of Lloyd’s wrongness. He MUST BE STOPPED. Even better, he must be CONVERTED.

You can understand my passion here…

The Problem with Passion

Although this passion is understandable, it also creates problems of its own.

Specifically, my passion might prompt me to say:

“Lloyd, listen up. You’re obviously wrong. I’m obviously right. So, do it my way. Pronto.”

Now, in my defense, it’s entirely possible that I am right and Lloyd is wrong. So, this statement could be factually accurate.

However, whether or not the statement is true, it creates real problems.

At the beginning of this blog post, I asked you to recall a time when you did in fact change your mind.

Did you do so because someone said: “I’m right, you’re wrong, so do it my way?”

I’m guessing your answer is “no.”

It might be “heck no.”

The answer, I suspect, is rarely “heck yes.”

In other words: although “I’m right, you’re wrong” might be a TRUE statement, it is almost certainly a WILDLY INEFFECTIVE statement…because it doesn’t accomplish the goal: changing someone’s mind.

If I passionately want to persuade Lloyd to change his mind, and I knowingly adopt a strategy that almost certainly won’t change his mind…well, my passion has in fact defeated my purpose.

Beyond Hypotheticals

I’m writing this blog post because of a poster I’ve seen several times recently (where else?) on eX-Twitter.

It says, basically:

Because we’re talking about education, beliefs don’t matter; evidence matters.

Now, on the one hand, I typically agree with the research-based conclusions of the people who highlight this poster.

That is: to the degree I’m on a team in the world of cognitive science, my teammates are the likeliest to make this statement.

On the other hand, I wince every time I see it. I do so because I think this statement makes it conspicuously less likely that my team will convince anyone to do anything.

To me, the poster sounds — more or less — like it’s saying this:

Your beliefs don’t matter to me; my evidence must matter to you.

And yet, who will listen to me after I’ve said “your beliefs don’t matter to me”? Who should listen to me after I’ve said such a thing? Why would someone who experiences my open contempt listen to and trust me?

I just don’t think people work that way.

Alternative Strategy, Take 1

If I’m going to object to the “you’re wrong; I’m right” strategy, I should probably propose an alternative. I will, in fact, propose two.

When I go to do PD work at a school or conference, I usually begin by saying something like this:

“I’m not here to tell you what to do. I don’t know enough about teaching 6th grade history — or 1st grade math, or soccer coaching — to be able to do that.

Honestly, I don’t think cognitive science can tell teachers what to do.

Instead, I’m here with some ideas about how to think about what you do.

Once you hear these new cognitive-science ways of thinking, you will have wise discussions about how to apply them to your work. That is: you will figure out what to do.

Cognitive science is, I think, really good at helping this way.”

This introduction has, I believe, at least two benefits.

First: it’s true. No cognitive scientist knows enough to tell me how to teach grammar well. But, many can help me think about teaching it well. (E.g.: “having too many choices might overwhelm a student’s working memory, so design practice exercises to align with their prior knowledge!”)

Second: this introduction sets a tone. I am, in effect, saying:

“We all bring different kinds of expertise to this day.

I know a lot about, say, Posner and Rothbart’s theory of attention.

You know your curriculum, and your students, and your school culture, and your society’s culture.

When we pool all that expertise, we’re likelier to get to good answers. Let’s work together.”

Rather than tell people to abandon their beliefs in favor of my evidence, I invite them to find the best ways to combine both.

I’m not certain that they’ll do exactly what I think they should do.

But I think they’re MUCH LIKELIER to take small steps in a good direction than if I start by dismissing everything they’ve believed up until now.

Alternative Strategy, Take 2

An alternative to my strategy is: I could be completely wrong.

No, really.

Although I doubt that telling people “my evidence trumps your beliefs” does much good, others might have found this strategy to be highly effective.

Heck: you might know of research showing that insisting on evidence (and dismissing prior belief) does cause teachers to change their minds and adopt better teaching practices.

If you have that experience — or know of such research — please share it with me! I would LOVE to be able to try this approach with confidence.

After all, as XKCD has told us, people are often wrong. I’m hoping to be better at changing minds…including my own.


Sadly, we’ve been having lots of trouble with the “comments” feature on this blog. We get dozens (hundreds) of spammy comments, wanting to advertise all sorts of sketchy products. And the comment filter just might prevent you from responding to this post.

So: if you have reason to advocate for the “my evidence > your belief” strategy, please reach out to me at my first name (Andrew) and then the “at” sign and then the name of my own company (TranslateTheBrain) and then “.com”

Because, I really do hope you will change my mind. Please!