“Students Simply Cannot Improve”: Handwritten Notes vs. Laptop Notes
Andrew Watson

I disagree with the title of this blog post. I believe students CAN improve at taking notes. This post is my attempt to convince you that they can, and that we teachers can help them.


Over the years, I’ve written about the Laptop-Notes vs. Handwritten-Notes Debate several times.

I’ve tried to persuade readers that although people really like the idea that handwritten notes are superior, the research behind that claim isn’t really persuasive:

I’ve argued that the well-known study with a clever name (“The Pen is Mightier than the Keyboard”) is based on a bizarre assumption: “students cannot learn how to do new things.” (If they can’t, why do schools exist?)

I’ve also argued that the recent study about neural differences between the two note-taking strategies doesn’t allow us to draw strong conclusions, for two reasons:

First: we can’t use it to say that handwriting helps students remember more than keyboarding because the researchers didn’t measure how much students remembered. (No, honestly.)

Second: the students who typed did so in a really unnatural way — one-finger hunt-n-peck. Comparing a normal thing (handwriting) to an abnormal thing (hunt-n-peck) doesn’t allow for strong claims.

So, I’ve been fighting this fight for years.

Both of these research approaches described above overlook the most straightforward research strategy of all: let’s measure who learns more in real classrooms — handwriters or keyboarders!! 

A group of researchers recently asked this sensible question, and did a meta-analysis of the studies they found.

The results:

Students who use laptops write more words; 

Students who take handwritten notes score higher on tests and exams.

So: there you have it. Handwritten notes REALLY DO result in more learning than laptop notes. I REALLY HAVE BEEN WRONG all this time.

Case closed.

One More Thing…

Like the TV detective Columbo, however, I have just a few more questions I want to explore.

First: I think the case-closing meta-analysis shows persuasively that handwritten notes as students currently take them are better than laptop notes as students currently take them.

But it doesn’t answer this vital question: can students learn to take notes better?

After all, we focus SO MUCH of our energy on teaching better; perhaps students could also do good things better.

If the answer to that vital question is “no” — students CAN’T take better notes — then obviously they should stick with handwriting. This meta-analysis shows that that’s currently the better strategy.

But if the answer is “yes” — students CAN take better notes than they currently do — then that’s really important news, and we should focus on it.

I myself suspect the answer to that question is “yes.” Here’s why:

The 2014 study — “The Pen is Mightier than the Keyboard” — argues that students benefit from taking notes when they do two things:

First: when they write more words. In their research, students who wrote more words remembered more information later on.

Second: when they reword what the teacher said. Students who copied the teacher’s words more-or-less verbatim remembered LESS than those who put the teacher’s ideas into their own words.

This second finding, by the way, makes lots of sense. Rewording must result from thinking; unsurprisingly, students who think more remember more.

1 + 1 > 1

Let’s assume for a moment that these research findings are true; students benefit from writing more words, and they benefit from rethinking and rewording as they write.

At this point, it simply makes sense to suspect that students who do BOTH will remember even more than students who do only one or the other.

In other words:

“Writing more words” + “rewording as I write”

will be better than

only “writing more words” or

only “rewording as I write.”

Yes, this is a hypothesis — but it’s a plausible one, no?

Alas, in the current reality, students do one or the other.

Handwriters can’t write more words — it’s physically impossible — but they do lots of rewording. (They have to; because they can’t write as fast as the teacher speaks, they have to reword the concepts to write them down.)

Keyboarders write more words than handwriters (because typing is faster than handwriting). But they don’t have to reword — because they can type as fast as the teacher says important things.

But — wait just a minute!

Keyboarders DON’T reword…but they could learn to do so.

If keyboarders write more words (which they’re already doing) and put the teacher’s idea into their own words (which they’re not currently doing), then they would get BOTH BENEFITS.

That is: if we teach keyboarders to reword, they will probably get both benefits…and ultimately learn more.

In brief: it seems likely to me that laptop notes — if correctly taken — will result in more learning than handwritten notes. If that hypothesis (untested, but plausible) is true, then we should teach students how to take laptop notes well.

I should say: we have specific reason to suspect that students can learn to use both strategies (more words + rewording) at the same time: because students can learn new things! In fact: schools exist to help them do so.

Contrary to my blog post’s title, students really can improve if we help them do so.

Optimism and Realism

I hear you asking: “Okay, what’s your actual suggestion? Get specific.” That’s a fair question.

I – optimistically – think schools should teach two skills:

First: keyboarding. If students can touch type, they’ll be able to type MANY more words than untrained keyboarders, or handwriters.

Remember, the recent meta-analysis shows that students who keyboard – even if they aren’t touch typists – write more words than hand-writers. Imagine the improvement if they don’t have to hunt-n-peck to find the letter “j,” or the “;”.

Second: explicitly teach students the skill of rewording as they type. This skill – like all new and counter-intuitive skills – will require lots of explanation and lots of practice. Our students won’t change their behavior based on one lesson or one night’s homework.

However, if we teach the skill, and let them practice over a year (or multiple years) students will gradually develop this cognitive habit.

The result of these two steps: students will touch-type LOTS more words, and they will reword their notes as they go. Because they get BOTH benefits, they will learn more than the students who do only one or the other.

Now, I can hear this realistic rejoinder: “Oh come on: we simply don’t have time to add anything to the curriculum. You want us to teach two more things? Not gonna happen.”

I have two responses:

First: “developing these two skills will probably help students learn other curricular topics better. Extra effort up front will probably speed up learning (in some cases/disciplines/grades) later on.”

If my untested hypothesis is correct, students will learn more slowly at first (while they develop these skills) and more quickly thereafter.

Second: “I accept the argument that perhaps we can’t add anything to the curriculum. However, we should admit that handwriting is the second-best option. Keyboarding — correctly done — is probably better than handwriting for notes; handwriting is the fallback position because we prioritize other skills.”

In brief:

The title of this blog post is incorrect. Students CAN learn how to do new things – like take better notes by keyboarding well. We might choose not to teach them how to do so, but we should be honest with ourselves that the limitation is in our curriculum, not in our students’ abilities.


Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159-1168.

Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: A high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.

Flanigan, A. E., Wheeler, J., Colliot, T., Lu, J., & Kiewra, K. A. (2024). Typed versus handwritten lecture notes and college student achievement: A meta-analysis. Educational Psychology Review, 36(3), 78.

A Skeptic Converted? The Benefits of Narrative
Andrew Watson

Let’s imagine that I tell you about a cool new research finding: singing helps students learn!

On the one hand, I suspect you’d be really excited. After all, learning is really difficult. Almost ANYTHING that helps students learn is worth a teacher’s attention.

On the other hand…you probably have some important questions:

Who is doing the singing? The teacher, or the students? Or, are they singing together? Or, are they listening to a recording of someone else singing?

What are they singing? A song about the material they’re about to learn (say, the periodic table)? Or, can they sing any old song? (Sondheim’s “Moments in the Woods” is worth endless singing!)

How many times should the students sing to get the learning benefits?

And so forth.

If we don’t know the answers to these questions, how will we make this singing technique work?

On this note: Several months ago, I asked this same set of questions about the common statement that “narrative is psychologically privileged…”

More Questions

I regularly hear that students learn material better if it’s presented as a story than as exposition.

On the one hand, it seems clear that stories are easy to remember — they are “psychologically privileged.”

But this claim helps teachers if and only if teachers know what “a narrative” is, and can easily translate our content into a narrative.

If I want my 10th graders to know the difference between a gerund and a participle, how can I make that distinction into a story?

Or…wait a minute…can I just have my students read a story and point out the gerunds and the participles within it? Does that count?

Or…wait a minute…can a math teacher read a story that says “Gretel told Hansel that 3 + 3 = 6”? Does that count?

If we teachers don’t know exactly what counts as a story, and how to translate exposition into stories, then the “psychological privilege” of narrative is interesting but not helpful. (SO MUCH of education research is interesting but not helpful…)

In my earlier blog post, I asked people for guidance on that question — preferably research-informed guidance showing how to define narrative, and/or translate exposition into narrative.

Honestly, I didn’t get much.


But recently I found a research study that just might provide these essential insights.

Promising Steps

A thoughtful writer in this field recently highlighted this study: one that looks at the benefits of narrative for helping students learn.

Bingo! (Or, at least, potential bingo.)

This research offered at least two obvious benefits.

First: it provides at least an initial definition of narrative.

A narrative, according to Table 2, has a narrator; it takes place in the past; its events have a three-act structure: with a setup, a confrontation, and a resolution; it centers on humans who act and react; and so forth.

Second: it compares several options.

Students in Group A read a textbook-like, logically-structured explanation of stochastic molecular motion. Let’s call this “exposition only”:

The thermal energy leads not only to rotations or vibrations, whereby covalently bonded atoms move back and forth relative to each other, but also to translational movements, which move molecules from one location to another…

Students in Group B read a narrative version of that information, where the scientists who discovered the relevant information were the protagonists of a three-act story. Let’s call this “narrative only”:

Maud Menten quickly realized that the thermal motion of particles that Robert Brown had seen in his light microscope also resembled the random motion of molecules in cells…

Students in Group C got a combined approach. A narrative passage introduced the ideas and the problem; a subsequent passage — more expository — went into the technical details. We’ll call this strategy “narrative-then-exposition.”

In my thinking, this example provides a crucial detail. Notice that the narrative is NOT about the particles themselves. (“The particle got excited by the thing, and so did something cool!”) Instead, the narrative focuses on the scientists who made the relevant discoveries.

In other words, we might have some additional latitude when putting our narratives together. The story doesn’t have to be about the content itself.

However, that optimistic finding holds true only if these stories helped the students learn. So, did they?

PLOT TWIST

The narrative benefit depended on the students’ prior knowledge.

That is: students who knew relatively less about stochastic movement benefitted from the “narrative only” version.

Students who knew relatively more benefitted from the “narrative-then-exposition” version.

Now, the researchers have plausible theories about this finding. (Check out the “Coda” below if you’re curious.)

But to me, the more important news is that we can start to put together a framework for converting exposition (traditional textbook explanations) into narrative (story).

Step one: we’ve got a checklist of elements that make up a narrative: Table 2 in this study. If you want to translate exposition to narrative, you can start there.

Step two: you do NOT have to make the story about the content itself (“The poor green finch couldn’t break open the seeds. It was very sad that its genes would not be passed on…”).

Instead, you can focus on the PEOPLE who figured out the content (“Then Darwin realized that the finches with the thicker beak had an advantage over the ones with the narrower beak, so their genes were likelier to be passed on to the next generation…”).

Step three: prior knowledge matters. The more your students already know about a subject, the less important it is to recast it as a story.

Although initially skeptical, I’m starting to hope we might develop comprehensive guidelines for the exposition-to-narrative conversion process.

Don’t Stop Now

Of course, these initial guidelines need A LOT MORE WORK.

Some unanswered questions:

Does this psychological privilege apply in pre-college learning? (The studies I’ve found focus on adults.)

Does it hold true in all disciplines? I see how history and literature and theology (and even science) might be translated to narrative. But LOTS of math and foreign language and art instruction might defy this strategy. (How, exactly, would we make noun declensions into a story?)

When will narrative (potentially good!) spill over into “seductive details” (potentially bad!)?

And so forth.

Yes, I’m more optimistic than I was before I read this study. And: as far as I know, we’re only in early stages of understanding how best to use narratives to enhance learning.


Coda

I promised I’d explain the researchers’ theories about the differing benefits of “narrative” (for relative beginners) or “narrative-then-exposition” (for relative experts).

Here goes:

Students who knew relatively less, the researchers suspect, benefitted from the working-memory reduction created by the familiarity of a story structure. (Remember those 3 acts…)

Students who knew relatively more, in turn, didn’t need the WM reduction, but benefitted from the reactivation of prior knowledge prompted by the initial narrative. With that prior knowledge reactivated, they could then concentrate on the factual additions to their schema.

Interesting!


Tobler, S., Sinha, T., Köhler, K., & Kapur, M. (2024). Telling stories as preparation for learning: A Bayesian analysis of transfer performance and investigation of learning mechanisms. Learning and Instruction, 92, 101944.


Visual Thinking by Temple Grandin
Erik Jahner, PhD

Many of us think with words, solving problems and imagining scenarios by coding information verbally. Our culture is designed to select and promote people who do this well, but this is not the only way of processing the world. To think so neglects the significant neurodiversity that makes humans (and the animal kingdom) amazing. Temple Grandin’s Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions invites readers into this neglected side of reality, drawing on her personal experiences and extensive research to explore the unique cognitive styles that shape our understanding and interaction with the world through visual thinking. As a prominent advocate for autism awareness and a leading figure in animal science, Grandin offers a compelling examination of visual thinking’s profound impact.

The journey begins with Grandin’s realization that not everyone shares her ability to think in pictures. She distinguishes between visual and verbal thinkers and introduces two types of visual thinkers: object visualizers and spatial visualizers. Object visualizers, like Grandin, think in detailed images, while spatial visualizers think in patterns and abstractions. A visual learner may struggle to understand what thinking verbally even is, and why they see the world differently from society’s expectations. She helps you identify how you might think with surveys and questions that encourage you to pause and reflect; these surveys were gleaned from her own research as an intensely curious and scientifically minded individual.

She shows that although the system has selected verbal learning as the gateway to academic success, visual thinking can be a significant asset in fields such as art, design, engineering, and architecture among others. However, she also addresses the challenges visual thinkers face in a society that often prioritizes verbal thinking, especially within the education system. The decline of hands-on learning and the emphasis on standardized testing have marginalized many visual thinkers, hindering their potential and depriving society of their innovative contributions.

Blending personal anecdotes, historical examples, and scientific research, Grandin highlights the importance of nurturing visual thinkers. She introduces a number of historical figures whose stories impacted her development, how she saw herself, and the heights she could reach. She emphasizes the value of diverse cognitive styles and neurodiversity in fostering creativity and problem-solving. Furthermore, Grandin explores the broader implications of neglecting visual thinkers, such as the impact on national innovation and the potential for preventing disasters through their keen attention to detail. She also showcases visual thinkers who have bucked the trend and benefited society, despite not always being valued as they grew up.

Grandin’s writing is both engaging and informative, making complex ideas accessible to a broad audience. Her ability to combine personal experiences with scientific insights creates a compelling narrative that underscores the importance of understanding and valuing different cognitive styles.

Grandin not only identifies problems but also offers solutions, advocating for educational reforms and societal changes that could better accommodate and utilize the strengths of visual thinkers. Her call for a more inclusive approach to education and the workforce is both timely and necessary, urging readers to rethink current systems.

Of course, Grandin weaves in her personal passion for animals. The question of animal consciousness has been debated for a long time, with some scientists and philosophers historically viewing animals as simply reacting on instinct, without the emotional depth of humans. This idea often comes from a bias toward verbal thinking, where language is seen as the key to consciousness. Because animals can’t communicate like humans, they’ve been unfairly dismissed as not having feelings or emotions, leading to their mistreatment and use in harmful experiments.

In the past, studying animal behavior through tests and observations in captivity reinforced this limited view. However, recent studies observing animals in their natural environments have shown they are incredible visual thinkers. They can navigate, communicate, solve problems, and even mourn, proving they have rich emotional lives. This new approach helps us see animals not just as instinct-driven beings but as creatures with deep emotional and cognitive capabilities.

Visual Thinking is a thought-provoking and essential read for educators, parents, and anyone interested in cognitive diversity. Temple Grandin’s unique perspective and deep understanding of visual thinking provide a valuable lens through which to view the world. By championing the strengths of visual thinkers, Grandin makes a compelling case for a more inclusive and innovative society, encouraging us to embrace and cultivate diverse ways of thinking for the betterment of all.

The Rare Slam Dunk? Blue Light Before Bed
Andrew Watson

I spend A LOT of time on this blog debunking “research-based” certainties.

No, handwriting isn’t obviously better than laptops for taking notes.

No, the “jigsaw method” isn’t a certain winner.

No, “the ten minute rule” isn’t a rule, and doesn’t last ten minutes.

No, dopytocin isn’t worth your attention. (Ok, I made up the word “dopytocin.” The blog post explains why.)

And so forth.

For that reason, I was DELIGHTED to discover a strong “research-based” claim that might hold up to scrutiny. Here’s the story…

The Quest Begins…

A colleague posted on eXTwitter a simple question; it went something like this:

I’ve always heard that “blue light from screens before bed” interferes with sleep — in fact, I’ve heard “science says so.” But: I just realized that I don’t actually know anything about the science. Now I’m wondering if it’s true…

Well, NOW YOU’VE GOT MY ATTENTION.

I too have frequently heard the “blue light before bed” claim. I too had understood that “research says so.” I too had never explored that research. It’s obviously time to saddle up.

I confess I did so with a small sense of dismay.


By nature, I’m not a contrarian. I don’t want to go around saying “that thing that everyone believes is ‘based on science’ has no good research support that I can find.” I’m not trying to lose friends and upset people.

But, as you can see from the list above, LOTS of people say “you should do X because research proves Y” — even though research REALLY does not consistently support Y. (Did I mention the “ten minute rule”?)

Fearing that I would — once again — find myself contradicting commonly held wisdom, I dove in.

As is so often the case, I started at elicit.org. (Important note: NOT illicit.org. I have no idea what’s at illicit.org…but the idea makes me nervous. This is a G-rated blog!)

I put this question into its search box: “Does blue light from screens interfere with sleep?”

And I held my breath…

Treasure Unveiled

Come to find out: elicit.org has strong opinions!

Blue light from screens has been shown to interfere with sleep in several ways. It can shorten and worsen sleep quality, delay the onset of sleep, and increase feelings of fatigue (Kurek 2023). However, using amber filters on smartphone screens can improve sleep quality by blocking the short-wavelength blue light (Mortazavi 2018).

Wearing color-tinted lenses to filter short-wavelength light exposure before sleep may also improve sleep, particularly in individuals with certain conditions (Shechter 2020). Exposure to blue-enriched light at low room light levels can impact homeostatic sleep regulation, reducing frontal slow wave activity during the first non-rapid eye movement episode (Chellappa 2013).

I like Elicit because it provides an ENORMOUS amount of data. I won’t attempt to list or summarize all the studies it highlighted. But I’ll point to a few factors that made this claim especially compelling.

Boundary conditions: studies have found this claim to be true across a variety of age groups. This conclusion doesn’t apply simply to a niche-y cohort (say, people in Atlanta who own bison farms); it applies wherever we look.

Positive and negative findings: studies show both that more blue light interferes with sleep and that blocking blue light improves sleep.

Recency: these studies all come from the last decade. In other words: Elicit didn’t pull them up from the late ’80s. We’re talking about the most recent findings here.

I could go on.

Don’t Stop Now

I believe it was Adam Grant who wrote:

Be the kind of person who wants to hear what s/he doesn’t want to hear.

That is: when I start to believe a particular “research-based” claim, I should look hard for contradictory evidence.

If the evidence in favor outweighs the evidence against — and only if the evidence in favor outweighs the evidence against — can I start to believe the claim.

So, I started looking for contradictory evidence.

Here’s an easy strategy: google the claim with the word “controversy.” As in:

“Blue light interferes with sleep controversy”

or

“blue light interferes with sleep myth”

If such a controversy exists, then that google search should find it.

Sure enough, those two searches started to raise some interesting doubts.

I found two articles in the popular press — one in Time magazine, the other in Salon — pointing to this recent study. In it, researchers studied mice and found that yellow light — not blue light — seems the likelier candidate to be interfering with sleep.

Honestly, I found the technical language almost impenetrable, but I think the argument is: yellow light looks more like daylight, and blue light looks more like twilight. So: yellow light (but not blue light) is likelier to interfere with the various chemical processes that point the brain toward sleep. And: that’s what they found with the mice.

Of course, studies in mice are interesting but are never conclusive. (One of my research mottos: “Never, never, never change your teaching practice based on research into non-human animals.”)

So, what happens when we test the yellow-light/blue-light hypothesis in humans?

So Glad You Asked…

Inspired by that mouse study, another researcher checked out the hypothesis in humans. She measured the effects of evening exposure to light along various wavelengths — yellow, yellow/blue, blue — and found…

nada.

As in, no wavelength combo had a different effect than any other.

However — this is an important “however” — the study included exactly 16 people. So, these results deserve notice, but don’t overturn all those other studies about the dangers of blue light.

After all this back-n-forth, where do all these research findings leave us?

First: we do indeed have LOTS of research suggesting that blue light interferes with sleep.

Second: that research has been questioned recently and plausibly. But, those plausible questions don’t (yet) have lots of mojo. Mice + 16 people don’t add up to a fully persuasive case.

By this point, I’ve spent about three hours noodling this question about, and I’m coming around to this point of view:

Maybe the problem isn’t the blue light; maybe it’s the BRIGHT light. Yellow, blue, pink, whatever.

So, rather than buy special glasses or install light filters, I should put down my iPad and read from paper once I get in bed.

I should say that this conclusion isn’t exactly “research based.” Instead, it’s a way of accepting a muddle in the scientific results, and trying to find a good way forward.

This approach guides me in my classroom work, and now it will guide me when it comes to my Kindle as well.

When Experience Contradicts Research: The Problem with Certainty
Andrew Watson

A friend recently told me about his classroom experience using mindfulness to promote thoughtful and effective writing.


He started the year by explaining the benefits of mindfulness to his students. After that introduction, he began each class period with five minutes of mindful silence.

Although he wasn’t running a research study, he kept two kinds of “data.”

First: his own impression is that students got appreciably better at adding insight, depth, and detail to their writing.

For instance, instead of saying “the mood was optimistic,” they might write “the sun came out” or “I could hear lively music in the background.”

They also “got stuck” less often during the writing process.

Second, he surveyed his students at the end of the year.

He got LOTS of positive responses. One wrote:

I was surprised by how much I looked forward to meditation as the weeks went on, it helped calm me down before big assignments (like practice exams or actual tests) or just gave me a breather during stressful moments.

Another:

I thought meditation was very helpful in class this year because it helped me focus on my clarity of mind. I especially liked it before writing essays because it relaxed me and helped my thoughts flow clearer I think.

A third:

I would start the year by doing it everyday. I’ve started implementing it in my home life and have felt more present and conscious not only my daily interactions but also my thought process and decision making.

I could go on quoting like this for a few pages.

Based on this experience, my friend asked me what research shows about the effects of mindfulness in the classroom…

What Research Shows…

Alas, research into the classroom benefits of mindfulness doesn’t obviously align with my friend’s experience.

Yes, we do have some encouraging research about the ways that mindfulness can reduce stress.

Yes, we do have some correlational research showing a relationship between mindfulness and academic accomplishment.

But my honest opinion is that — so far — we don’t have a strong enough research pool to make inclusion of mindfulness programs a “research-supported” practice in schools.

In particular, we have an ENORMOUS recent study (over 8000 students!) showing that mindfulness training provided students with NO BENEFITS AT ALL, and perhaps (very small) increases in the likelihood of a few problems.

I’m game for the idea that mindfulness training might help students and teachers. But I don’t see enough consistent, high-quality research findings to champion the practice myself.

A Friend’s Quandary

So, a quandary:

My friend’s EXPERIENCE suggests that “brief mindfulness exercises help.”

But RESEARCH suggests that “mindfulness training doesn’t necessarily do anything.”

What should he do?

Equally important: what should YOU do if research suggests that one of your teaching practices a) doesn’t help, or b) might actually hurt?

Let me suggest a few steps.

STEP ZERO: even before we begin answering this question, I think it’s essential to admit that it’s both a challenging and an essential question. (So challenging and important that I wrote a book about it.)

Someone might say to you, “there’s an obviously easy answer to that question.” I think that person is wrong.

STEP ONE: recognize that both kinds of knowledge have their own benefits and weaknesses.

For instance, research helps us see long-term effects that we teachers most often miss.

We know that “short-term performance is an unreliable indicator of long-term learning.” (Thanks, Dr. Nick Soderstrom (on Twitter @NickSoderstrom).)

So, research can help us see when things that feel really good in the classroom right now don’t actually produce the long-term benefits we’re hoping for.

Likewise, research helps us overcome our biases. My friend works REALLY HARD to make his mindfulness intervention work. He devotes LOTS of time to it. (So do his students!)

NO WONDER he sees all the benefits! Well…research can help us see past that motivated reasoning.

Not So Fast…

At the same time, a teacher’s classroom experience provides insights that research just can’t spot.

Teachers combine variables. Researchers isolate variables. That is: we see combinations that researchers rarely explore — combinations like “daily mindfulness + writing.”

Also: research always exists within “boundary conditions.”

A research study done with 3rd grade Montessori students learning long division might — but might not! — apply to dyslexic 11th graders studying history at a military academy.

Unless we have SO MUCH research on a particular topic, a research-only perspective might miss the places where a technique DOES work, simply because it didn’t help in the places researchers happened to look.

Teachers — however — might discover those places.

Don’t Stop Now

Okay, so we know that this quandary is an important question and requires complex answers (step 0); and we know that research and experience provide separate and useful kinds of knowledge (step 1).

What’s next?

STEP TWO: Get specific.

In that 8000+ person study: what exactly did they do? And: how well does “what they did” align with “what my friend did”?

In the 8000+ person study, they had students practice mindfulness for 10 weeks. They wanted to know if a) the students would keep doing mindfulness on their own after the 10 weeks, and b) if their mindfulness practice would help them — or not — according to 28 different criteria.

The answers were a) “nope, not really” and b) “nope, not at all.”

But: OF COURSE the students didn’t get the benefits of mindfulness (that’s b) because they didn’t continue the mindfulness exercises at home (that’s a).

Notice, however, that this research doesn’t align with my friend’s strategy. His students DID continue the mindfulness because he started every class with time for mindfulness.

True: students who don’t practice mindfulness don’t benefit from it; but my friend’s students might benefit because they had time to practice it.

In other words: that big study shouldn’t necessarily discourage my friend, because his strategy differs from their strategy in meaningful ways.

STEP THREE: Combine humility with determination.

Here’s the trickiest part.

As I’ve just argued, this big study might not apply to my friend’s approach.

AND: my friend’s “biased” perspective (we ALL have “biased” perspectives) might make it difficult for him to recognize the shortcomings in his approach.

For this reason, I think we have to push ourselves relentlessly to balance humility (“I should really focus on and respect research guidance!”) with determination (“My classroom experience is valuable and I should give it weight in my decision making!”).

But, gosh, that’s a difficult balancing act.

It’s tempting to pick one side or the other:

I shall do what research tells me!

or

My training and instincts matter most!

Instead, we should strive to give both sources of knowledge their due…and always doubt our own certainties.

An Excellent Example

Note, by the way, that my friend was doing just that.

After all, his own classroom experience — and his students’ enthusiastic feedback! — gave him all sorts of reasons to be entirely confident.

He might easily have said “research, schmeesearch” and gone ahead with his mindfulness routine.

Instead, he looked for a reason to doubt his own certainty; that is, he asked me what research has to say … knowing that it might not support his experience. (He had, after all, just finished reading my book on evaluating research-based teaching claims.)

He now has to decide the best way to proceed. And: in my view, he will do so all the more effectively because he allowed himself to doubt.

In this field, certainty is the enemy of improvement and excellence.