The Benefits of Direct Instruction: Balancing Theory with Practice
Andrew Watson

When teachers hear that “research shows we should do X,” we have at least two broad questions:

First Question: what’s the research?

Second Question: what EXACTLY does X look like in the classroom?

People who have the expertise to answer the first question (researchers) might not have the K-12 classroom experience to answer the second question.

And, of course, people who can make it work in the classroom (teachers) might not know or understand the research.

Wouldn’t it be great if we could find one book that answers both sets of questions?

In fact, it would be especially great if that book focused on a controversial topic. In that case, we could see a complete argument – both the why and the how – before we make a judgment about the controversy.

Does that sound tempting? I have good news…

Embracing Controversy

A feisty battle has raged in edu-circles for many years now: “direct instruction” vs. “constructivist pedagogy.” *

In one corner, “constructivists” argue that problems or projects or independent inquiries help students discover and build enduring understanding. And, such exploration fosters authentic motivation as well.

In the other corner, “direct instruction” advocates argue that working memory limitations sharply constrain students’ cognitive workspace. For that reason, teachers must explicitly shape learning experiences with small steps and carefully-designed practice.

Both approaches can be – and frequently are – parodied, misunderstood, and badly practiced. So, a book explaining the WHY (research) and the HOW (classroom practice) would be enormously helpful.

Sage on the Page

Adam Boxer teaches chemistry at a school in London, and has been blogging about his work for some time now. (If you follow our twitter account, @LearningandtheB, you’ve seen links to his work before.)

In his book Explicit & Direct Instruction: An Evidence-Informed Guide for Teachers, Boxer gathers eleven essays that explain the research background and then get SUPER specific with classroom suggestions.

In the first chapter, Kris Boulton tells the history of “Project Follow Through,” a multi-decade program to discover the best way of teaching children.

Researchers tracked more than 200,000 children in 13 different programs over several years, and compared their learning across three dimensions: basic skills, cognitive skills, and affective skills.

Which approach proved most effective?

Direct Instruction, created by Siegfried Engelmann.** It was, in fact, the only program of the 13 that benefitted students in all three dimensions.

When advocates of Direct Instruction (and direct instruction) insist that research shows its effectiveness, they reasonably enough point to Project Follow Through. (Can others critique this study? Of course…)

Both Boulton and Greg Ashman (in the second chapter) then emphasize the alignment of direct instruction with psychology models: cognitive load theory, schema theory, and so forth.

In brief: we’ve got LOTS of research explaining why direct instruction should work, and showing that it does work.

Let’s Get Practical

After Boulton and Ashman explain the why, the next several chapters deliver on the classroom how.

For me, the book’s great success lies in the number, variety, and specificity of these chapters.

What does direct instruction look like for teaching math?

How about science?

How about writing?

What’s the best number of examples to use?

And so forth.

I especially enjoyed Sarah Cullen’s chapter on fading. Cullen begins with an important question/critique:

How, then, can a teaching method that so depends on instruction – on teachers leading learning and controlling the content to which pupils are exposed – foster autonomy?

Her answer focuses on having scaffolds and removing scaffolds – aka, “fading.”

In particular, Cullen wisely conceptualizes fading over many different time spans: fading across grades (which requires planning across years), fading within a term’s curriculum (requiring planning across months), and fading within a lesson (requiring skill, insight, and practice).

Like the book’s other chapters, Cullen’s offers many specific examples for each of her categories. In other words, she grounds theoretical understanding in highly specific classroom realities.

In Brief

If you already think direct instruction sounds right, you’ll be glad to have a how-to guide.

If you think it sounds suspect (or even oppressive), you’ll be glad to read a straightforward explanation of the research behind the approach. (You might not be persuaded, but you’ll understand both sides of the argument more clearly.)

And, if you want realistic classroom examples explained with loving detail, this book will launch 2022 just right.


* I’ve put those labels in quotation marks because both are familiar, but neither one really works.

** Direct Instruction (with capital letters) is the name of Engelmann’s specific program. On the other hand, direct instruction (without capital letters) is a broader approach to thinking about teaching and learning.

The Best Kind of Practice for Students Depends on the Learning Goal
Andrew Watson

In some ways, teaching ought to be straightforward. Teachers introduce new material (by some method or another), and we have our students practice (by some method or another).

Result: THEY (should) LEARN.

Alas, both classroom experience and psychology/neuroscience research suggest that the process is MUCH more complicated.

For instance:

When we “introduce new material,” should we use direct instruction or more of an inquiry/problem-based pedagogy? *

When we “have our students practice,” what’s the very BEST kind of practice?

Around here, we typically offer two answers to that 2nd question: retrieval practice and interleaving.

Retrieval practice has gotten lots of love on this blog — for instance, here. I have written less about interleaving, mostly because we have less research on the topic.

But I’ve found some ripping good — and very practical — research to share here at the end of 2021.

“What?,” “Why?,” and Other Important Questions

Let’s start with definitions.

Let’s say I teach a particular topic today: “adjectives.” And tomorrow I teach “adverbs.” Next day, “prepositions.” Next: “coordinating conjunctions.”

How should I structure students’ homework?

They could do 20 adjective practice problems tonight. Then 20 adverb problems the next night. Then 20 prepositions. And so forth.

Let’s call that homework schedule blocking.

Or, they could do 5 adjective problems a night for the next 4 nights. And 5 adverb problems a night starting tomorrow night. And so forth.

If I go with this system, students will practice multiple different topics (adjectives, adverbs, prepositions…) at the same time. So, let’s call that homework schedule interleaving.
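
If it helps to see the two schedules side by side, here’s a minimal sketch (my own illustration, not drawn from any study) of the nightly homework under each system:

```python
# Toy comparison of the two homework schedules described above.
# Blocking: 20 problems on a single topic each night, in teaching order.
# Interleaving: 5 problems per night on every topic taught so far,
# so each topic's practice spreads across multiple nights.

topics = ["adjectives", "adverbs", "prepositions", "conjunctions"]

blocked = [[(topics[night], 20)] for night in range(4)]
interleaved = [[(t, 5) for t in topics[:night + 1]] for night in range(4)]

for night in range(4):
    print(f"Night {night + 1}:")
    print("  blocked:    ", blocked[night])
    print("  interleaved:", interleaved[night])
```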

For the most part, when we compare these two approaches, we find that interleaving results in more learning than blocking. (Lots of info here. Also in this book.)

That’s an interesting conclusion, but why is it true?

In the first place, probably, interleaving is a desirable difficulty. Students must THINK HARDER when they interleave practice, so they learn more.

In the second place, well, we don’t exactly know. Our confusion, in fact, stems in part from an arresting truth: interleaving usually helps students learn, but not always.

Of course, NOTHING ALWAYS WORKS, so we’re not fully surprised. But if the exceptions helped explain the rule, that could be mightily helpful…

An Intriguing Possibility…

Two scholars — Paulo F. Carvalho and Robert Goldstone — have been studying a potential explanation.

Perhaps blocking and interleaving enhance different kinds of memories. And so, research produces contradictory results because researchers use different kinds of memory tests.

Specifically, they propose that:

During blocked study, attention and encoding are progressively directed toward the similarities among successive items belonging to the same category,

whereas during interleaved study attention and encoding are progressively directed toward the differences between successive items belonging to different categories.

In other words: blocking focuses students on the properties of a particular category (“adjectives”). Interleaving focuses students on the distinctions among different categories (“adjectives, adverbs, prepositions”).

And so: if I want students to DEFINE ONE topic or idea or category (“adjectives”), blocking will help them do that well.

If I want students to COMPARE/CONTRAST MANY topics or ideas or categories, interleaving will help them do that well.

To repeat the title of this blog post: “the best kind of practice for students depends on the learning goal.”

In their most recent study, Carvalho and Goldstone test this possibility.

Sure enough, they find that students who block practice do better at defining terms, whereas those who interleave practice do better at multiple-choice questions.

The study gets splendidly intricate — they work hard to disprove their own hypothesis. But when they can’t do so, they admit that they just might be right.

Caveats and Classroom Implications

Caveat #1: “one study is just one study, folks.” (Dan Willingham.)

Although, to be fair, Carvalho and Goldstone have been building a series of studies looking at this question.

Caveat #2: The researchers worked with adults (average age in the 30s) studying psychology topics.

Does their conclusion hold true for K-12 students learning K-12 topics? Maybe…

Caveat #3: Practically speaking, this research might focus on a distinction that evaporates over time.

In truth, I always want my students to know specific definitions — like “tragedy” — well. And, I want them to compare those well-known definitions flexibly to other definitions — like, say, “comedy.”

As an English teacher, I — of course! — want my students to define adjective. AND I — of course!! — want them to compare that definition/concept to other related ideas (adverbs; participles; prepositional phrases acting as adjectives).

In other words, I suspect the ultimate teaching implication of this research goes like this:

We should have students BLOCK practice until they know definitions to some degree of confidence, and then have them INTERLEAVE practice to bring those definitions flexibly together.

To be clear: I’m extrapolating, based on my classroom experience and on my reading in this field.

Until my interpretation gets more research behind it, Carvalho and Goldstone’s research suggests this general plan:

START BY DECIDING ON THE GOAL.

If you mostly want your students to know individual concepts, have them block their practice.

If you mostly want them to bring several topics together, have them interleave practice.

As your goal changes, their homework changes too.

As is so often the case, this research doesn’t tell teachers what to do. It helps us think more clearly about the work we’re doing.

In my view, that’s the most helpful research of all.


* I think that’s a false choice; both approaches make sense under different circumstances. More on that in another blog post.


Carvalho, P. F., & Goldstone, R. L. (2021). The most efficient sequence of study depends on the type of test. Applied Cognitive Psychology, 35(1), 82-97.

When Does Technology Distract Students? The Benefits of Research that Contradicts My Beliefs
Andrew Watson

I spoke with several hundred students last week about research-based study strategies.

As always, students were fascinated to hear about psychology and neuroscience research: for instance, the benefits of retrieval practice.

And, as always, they did not love my alarming news about multi-tasking. My advice goes like this:

“If you want to study less and learn more, do one thing at a time.”

No insta-snap-gram-tweet-flix-chat-tok while you’re studying. One thing at a time.

Since that talk, I’ve found some interesting research about the LIMITS of that advice, so I thought I’d share it here.

Tech Problems, Tech Benefits

Our reasons to worry about technology use during class seem perfectly obvious. If I am paying attention to my tweets, I am not paying attention to my academic work.

Divided attention = less learning. Obviously.

At the same time, we can easily see ways that technology benefits learning.

If — during a lecture — students text one another to reinforce their understanding of the material (“What did the prof just say?”), they might solidify their learning.

If they look up complementary information on the interwebs, their quest might boost their comprehension. (I’ve been surprised how often my students want to do this in class, and I occasionally allow them to do so.)

So, we need a more precise question than “is technology good or bad?” We need to know — under what precise circumstances does it help? Or hurt?

Technology and Higher Order Thinking

For instance: does off-topic texting during a lecture interfere with both “lower order” and “higher order” thinking, as defined by Bloom?

And, by the way, what role does note-taking play?

A study from 2018 explores this question.

The details, of course, get complicated, but the short version goes like this. Students watched a 24-minute lecture about psychiatric diagnoses: PTSD, ADHD, OCD, and so forth. They also took notes.

Some students received and answered off-topic texts during the lecture — one about every 2 minutes.

After the lecture, students took a test.

Some of those test questions focused simply on recalling details: “How long must an individual display symptoms of PTSD in order to be diagnosed?”

The researchers designed these questions to measure knowledge and comprehension — that is, “Bloom’s level 1 & 2.”

Four questions, however, focused on deeper understanding: “Compare PTSD and ADHD. How do these disorders differ? Are there ways in which they are similar?”

That is: these questions aimed to measure application and analysis: Bloom’s level 3 & 4.

So: what effect did the OFF-TOPIC TEXTS have on the students’ NOTES, and on their UNDERSTANDING?

The Envelope Please

The researchers’ results surprised them — and certainly surprised me.

Unsurprisingly, students distracted by texts took less complete notes.

And, also unsurprisingly, they did substantially less well on the factual questions. Texters averaged a 62 on those questions, while non-texters averaged a 71.  (If you speak stats, the Cohen’s d was 0.64. That’s an impressively large difference, at least to me.)
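
For readers who want the arithmetic behind that effect size: Cohen’s d is the difference between the group means divided by the pooled standard deviation. Back-solving from the reported scores (the pooled SD below is my inference, not a number the study reports):

$$ d = \frac{\bar{x}_{\text{non-texters}} - \bar{x}_{\text{texters}}}{s_{\text{pooled}}} \qquad \Rightarrow \qquad 0.64 \approx \frac{71 - 62}{s_{\text{pooled}}} \qquad \Rightarrow \qquad s_{\text{pooled}} \approx 14 \text{ points} $$

In other words, off-topic texting cost students roughly two-thirds of a standard deviation on the factual questions.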

Here’s the surprise: researchers did NOT find a statistically significant difference between students’ scores on the application and analysis questions.

How do we explain this finding?

First: let’s admit the possibility that texting distractions do interfere with factual recall but not analysis.

Honestly, I would not have anticipated that finding, but it could be true.

Second: perhaps the timing matters. That is: these students took the test immediately after the lecture. Perhaps application and analysis — unlike mere factual recall — require more time for steeping.

That is, if the “higher order thinking skills” had been tested the next day, perhaps we would have seen a difference in those scores.

Third: perhaps the number of questions mattered. Because the researchers asked only 4 application/analysis questions, they might not have had enough data to discern a difference.

Perhaps a greater number of questions would have revealed a difference.

The Big Picture(s)

Based on this research, will I tell students “it’s okay to text during lectures”?

NO I WILL NOT.

Here’s why:

First, facts matter. If off-topic texting interferes with factual learning, that finding itself means that texting during lectures is bad.

Second, taking notes properly (almost certainly) matters. If texting  interferes with good note-taking, that finding itself should dissuade students from doing so.

Third, I’m willing to believe that texting doesn’t interfere with application/analysis, but only if other studies — with more questions and later tests — consistently demonstrate that result.

Another point also jumps out at me from this research. This study contradicts my firmly held belief that multitasking vexes learning.

I genuinely believe that IT’S A GOOD THING when research contradicts my firmly held beliefs.

If research never contradicted my beliefs, then I would never learn anything from it.

In fact, I would never need to look at research because it shows me only what I already know.

Research might prove most useful to us when it contradicts our beliefs.

Who knows, maybe I’ll go back to those students and update my advice…


Waite, B. M., Lindberg, R., Ernst, B., Bowman, L. L., & Levine, L. E. (2018). Off-task multitasking, note-taking and lower- and higher-order classroom learning. Computers & Education, 120, 98-111.

Why I Still Love Learning and the Brain Conferences
Andrew Watson

I attended my first Learning and the Brain in 2008; I believe the topic was “The Science of Attention.”

Since then, I’ve attended at least two dozen: in New York, Chicago, Washington, San Francisco. Discussing Stress, and Memory, and Ethics, and Technology. And, of course, learning.

At some point, you might reasonably think, I’d get tired of the handouts and the slides and the coffee.

But, no: I still can’t get enough.

Reason #1: Old Friends

Being an interdisciplinary endeavor, the field of Mind, Brain, and Education is dramatically large…and comfortably small. When you come back, you start recognizing folks right away.

John Almarode presents regularly (and, with his bow tie, vivaciously) about applying cognitive science to the classroom. A post-presentation chat with John is one of the great learning experiences you’ll ever have.

I met Sarah Flotten — currently the interim Director of the Peter Clark Center for Mind Brain Education — through a friend several years ago. It’s now an annual event to catch up with her insights, her school, and her center.

Joanna Christodoulou (a former professor of mine) combines knowledge of neuroscience, knowledge of reading, and enthusiasm so compellingly that I’m still learning from her. I get to catch up with her every year or so at LatB.

This list could go on at length: Pooja Agarwal and Ellen Anderson, and even David Daniel (who doesn’t like it when I mention him in the blog).

In brief: if you want to find colleagues who think the way you do about teaching and learning, you’ll find them here. Even better: you’ll build relationships and coalitions that grow over the years.

Reason #2: New Friends

Once you enter the world of Mind, Brain, and Education — on this blog, on twitter, at the conferences — you’ll start meeting people from (literally) across the globe.

At this most recent conference in Boston, I FINALLY got to meet people I’d been chatting with online for years.

Beth Hawks (twitter handle @PhysicsHawk) — a science teacher, who blogs here — offers a rare twitter presence. She is encouraging, wise, well-informed, and unwilling to be bamboozled by uplifting-but-empty slogans. I’ve been liking her posts for years, and got to meet her in Boston.

Kristin Simmers (@KristinASimmers) — currently studying the intersection of neuroscience and education — reached out to me about my first book AGES ago, and we’ve been in e-conversation ever since. Perhaps 2 years after that first e-exchange, we got to have lunch at the conference. Where else would I get to meet her in person?

Your MBE colleagues are out there — sometimes a continent away. You can meet them at the conferences.

Reason #3: SO MUCH TO LEARN

Of course, depending on your interests, this could be reason #1.

Even after 14 years, I still have so much to learn in this field. The speakers explore their research and insights — challenging me (and each other) to rethink settled ideas in light of new information.

For instance: on the very first day of the Boston conference, two speakers (politely, curiously) squared off on this important question: can we use conscious strategies to respond to stressful environments?

If the answer is “yes,” then we can guide our students (and our colleagues, and ourselves) down one path.

If the answer is “no” — because “stress turns off the pre-frontal cortex” — then we need a different path entirely.

What’s the correct answer? Honestly: check out Judson Brewer and Bessel van der Kolk to see whose analysis you find more persuasive.

https://www.youtube.com/watch?v=gv-CmqMecVY

https://www.youtube.com/watch?v=d_YApSkqsxM

The best place I know to hear these debates and have these conversations: Learning and the Brain.

Beyond FOMO

If you’re worried that you’ve missed out, I’ve got good news: the schedule for the February Conference in San Francisco has been posted!

Is “Cell Phone Addiction” Really a Thing?
Andrew Watson

I recently read a tweet asserting “the fact that cell phones are proven to be as addictive as drugs.”

Of course, people casually use the word “addictive” about all sorts of things: chocolate, massages, pumpkin-spice lattes. (No doubt somewhere Twitter is being described as “addictive.” My mother tells me that this blog is addictive.)

But all that casual language doesn’t add up to “proving the fact” that cell phones are “as addictive as drugs.” So I started wondering: has this “fact” been “proven”?

Good News, Bad News (Good News, Meh News)

Over the years I’ve adopted a simple strategy. When someone makes a factual claim about research, I ask for the research.

In this case, I simply asked the tweep for the research behind the claim.

Good news: He answered!

I’ve been amazed over the years how often people make “research-based” claims and then refuse to provide research to support them.

In this case, he did in fact point toward research on the topic. (I found one website claiming that 16% of adolescents, and 6.3% of the population, are addicted; alas, no sources cited. Happily, they do provide treatment…)

The tweep’s willingness to cite evidence enhances his credibility. Let’s check it out…

Bad news: hmm. “Evidence,” in this case, means “links to newspaper articles.” Generally speaking, USA Today and Vice.com aren’t the best places to find research. Research is published in journals. (Heck, I’ve even criticized the New York Times for its research credulity.)

So: the tweep’s credibility clicks down slightly.

Good news: All three of the links do, in fact, point to underlying research! I didn’t get a direct connection to the promised research, but I can keep digging to find it.

Credibility clicks back up.

Meh news: it turns out that all three articles point to the same underlying research. That is: I didn’t find three studies supporting the claim that “cell phones are proven to be as addictive as drugs”; I got one.

Now: one study isn’t nothing. But [checks notes] one isn’t three.

This Just In: Correlation Isn’t…

Given how much is riding on this one study, let’s check it out.

First off, we can see right there in the title that the study focuses on correlation. As you’ve no doubt heard dozens (thousands?) of times, “correlation isn’t causation.”

In this case, the authors asked 48 people questions about their cell-phone usage. Based on their answers, they categorized some of those people as “addicted.” And they then found brain differences between the “addicted” and “not addicted” people.

This quick summary leads to several concerns.

First: one study of 48 people doesn’t “prove a fact.” It might be an interesting data point, but that’s all.

Second: this study doesn’t claim to “prove a fact.” Using a questionnaire, it DEFINES some folks as addicted and others as not addicted.

Third: “brain differences” always seems like a big deal, but trust me — they might not be.

People who throw the javelin probably have a different “average muscular profile” than people who run marathons, because they’re using different muscles.

People who play the piano probably have different neural profiles than people who dance ballet, because they’re spending more time using THIS part of the brain than THAT part.

It seems likely people who score high on this “cell-phone addiction” questionnaire behave differently than those who don’t; so it’s not dramatically surprising that their brains are different.

Did phone use cause the brain differences, or did the brain differences cause the phone use? We don’t know. (Because, “correlation isn’t …”)

Important to Note

One interesting point does jump out. The brain differences found by this research team do — in some ways — align with plausible predictions about addiction.

Now, the researchers don’t make strong claims here: the word “drugs” appears only once in the body of the study.

This finding isn’t a big surprise to me. Very roughly, the  brain differences have to do with “our ability to control what we pay attention to.” It’s not hugely surprising that heavy cell-phone users have brain differences there (and that people addicted to drugs do too).

Don’t Stop Now

If the tweep’s study doesn’t support the claim that “cell phones are proven to be addictive,” does other research?

To answer that question, I did a simple google search (“cell phone addiction real”). The first scholarly article that pops up says…not so much.

Here’s their summary:

Although the majority of research in the field declares that smartphones are addictive or takes the existence of smartphone addiction as granted, we did not find sufficient support from the addiction perspective to confirm the existence of smartphone addiction at this time.

The behaviors observed in the research could be better labeled as problematic or maladaptive smartphone use and their consequences do not meet the severity levels of those caused by addiction.

In brief: “maladaptive,” yes; “addictive,” no.

As I continued clicking, I found other skeptical reviews (for instance, here), and also found some that embrace the category (with some open questions, here).

Oh, and, by the way: “cell phone addiction” isn’t included in the DSM-5.

In other words, I think we can reasonably describe the category of “cell phone addiction” as an active scholarly debate. To be clear, this conclusion means we can’t reasonably describe it as “a proven fact.”

Why I Care

I am, believe it or not, open to the idea that cell phones might be addictive. If they are — if at some point research “proves that fact” — then this label might help us treat a real problem effectively.

My objection springs from another source entirely.

I worry when debate about measurable claims sinks to applying insulting labels.

If I think that asking students to memorize is a bad idea, I could study research on the topic. Or, I could dismiss it as “drill and kill.” The insulting label replaces the argument.

If I think that teacher talk is bad, I could study research on the topic. Or, I could mock it as “sage on the stage.” The dismissive label replaces the argument.

If I think that cell-phone usage is bad for teens, I could study research on the topic. Or, I could call it “an addiction.” The alarming label replaces the argument.

If we’re going to rely on research to make decisions about teaching and education (which is, after all, the GOAL of our organization) we should never replace research with labels.

Instead, let’s try something else. Let’s replace labels with research…


Horvath, J., Mundinger, C., Schmitgen, M. M., Wolf, N. D., Sambataro, F., Hirjak, D., … & Wolf, R. C. (2020). Structural and functional correlates of smartphone addiction. Addictive Behaviors, 105, 106334.

Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252-259.

Billieux, J., Maurage, P., Lopez-Fernandez, O., Kuss, D. J., & Griffiths, M. D. (2015). Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research. Current Addiction Reports, 2(2), 156-162.

Gutiérrez, J., Rodríguez de Fonseca, F., & Rubio, G. (2016). Cell phone addiction: A review. Frontiers in Psychiatry, 7, 175.

The Best Way to Take Class Notes
Andrew Watson

Teachers often ask me: “how should my students take notes?”

That question typically springs from a heated debate. Despite all the enthusiasm for academic technology, many teachers insist on hand-written notes. (Long-time readers know: I have a provocative opinion on this topic.)

For the time being, let’s set that debate aside.

Instead, let’s ask a more important question: what kind of mental processing should my students do while they take notes?

If students get the mental processing right, then perhaps the handwriting/laptop debate won’t matter so much.

Possibilities and Predictions

To study complicated questions, we start by simplifying them. So, here’s one simplification: in class, I want my students to…

…learn specific facts, ideas, and procedures, and

…learn connections and relationships among those facts, ideas, and procedures.

Of course, class work includes MANY more complexities, but that distinction might be a helpful place to start.

So: should students’ note-taking emphasize the specific facts? OR, should it emphasize the connections and relationships?

The answer just might depend on my teaching.

Here’s the logic:

If my teaching emphasizes facts, then students’ notes should focus on relationships.

If my teaching emphasizes relationships, then their notes should focus on factual specifics.

In these cases, the note-taking strategy complements my teaching to be sure students think both ways.

Of course, if both my teaching and students’ notes focus on facts, then mental processing of relationships and connections would remain under-developed.

In other words: we might want notes to be complementary, not redundant, when it comes to mental processing.

In fact, two researchers at the University of Louisville — Dr. David Bellinger and Dr. Marci DeCaro — tested such a prediction in recent research.

Understanding Circulation

Bellinger and DeCaro had college students listen to an information-heavy lecture on blood and the circulatory system.

Some students used guided notes that emphasized factual processing. This note-taking system — called “cloze notes” — includes a transcript of the lecture, BUT leaves words out. Students filled in the words.

[Figure: sample note pages from Bellinger and DeCaro’s study. Examples A and B leave much larger blanks than examples C and D.]

Other students used guided notes that emphasized conceptual/relational processing. These notes — “outline notes” — organized the lecture’s ideas into conceptual hierarchies, which the students filled out.
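
To see the contrast concretely, here’s a made-up pair of my own (not the study’s actual materials) for the same point in the circulation lecture:

Cloze note: “Blood leaving the right ventricle travels through the __________ artery to the lungs.”

Outline note:
I. Pulmonary circulation
    A. Right ventricle → __________ → lungs
    B. Purpose: __________

The cloze version directs attention to the missing fact; the outline version directs attention to where that fact sits in the larger structure.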

And, to be thorough, Bellinger and DeCaro used both “more challenging” and “less challenging” versions of these note systems. As you can see, examples A and B above leave much larger blanks than examples C and D.

So, which note-taking system helped students more?

Because the lecture was “information heavy,” a note-taking system that highlights facts (the “cloze notes”) would be “redundant,” while a system that highlights conceptual relationships (the “outline notes”) would be “complementary.”

That is: students would get facts from the lecture, and see relationships highlighted in the outline notes.

For this reason, Bellinger and DeCaro predicted that the outline notes would help more in this case.

And, sure enough, students remembered more information — and applied it more effectively — when they used the challenging form of the outline notes.

Classroom Implications

Based on this study, do I recommend that you use outline notes with your students?

NO, READER, I DO NOT.

Remember, the “outline notes” worked here because (presumably) they complemented the factual presentation of the lecture.

If, however, the lecture focused more on relationships and connections, then (presumably) “cloze notes” would help more. They would be “complementary.”

As is so often the case, I don’t think we teachers should DO what research says we should DO.

Instead, I think we should THINK the way researchers help us THINK.

In this case, I should ask myself: “will my classroom presentation focus more on facts, or more on relationships and connections?”

Honestly: that’s a difficult question.

In the first place, I lecture only rarely.

And in the second place, my presentations (I hope) focus on both facts and relationships.

But, if I can figure out an answer — “this presentation focuses on relationships among the characters” — then I should devise a complementary note system. In this case, “cloze notes” would probably help, because they highlight facts (and my presentation highlights connections).

In other words: this research — and the theory behind it — doesn’t offer a straightforward, simple answer to the question that launched this post: “how should my students take notes?”

Because learning is complicated, such a usefully intricate answer might be all the more persuasive.


Bellinger, D. B., & DeCaro, M. S. (2019). Note-taking format and difficulty impact learning from instructor-provided lecture notes. Quarterly Journal of Experimental Psychology, 72(12), 2807-2819.

What is “Mind, Brain, Education”? Defining the Undefinable…
Andrew Watson

Here at Learning and the Brain, we bring together psychology (the study of the MIND), neuroscience (the study of the BRAIN), and pedagogy (the study of EDUCATION).

That is: we bring together THREE complex fields, and try to make sense of their interactions, differences, and commonalities.

Such interdisciplinary work creates meaningful challenges.

In any one of those fields, scholars argue about basic definitions and concepts. So, you can imagine the debates that rage when all three disciplines come together. (Quick: what does the word “transfer” mean? Each field defines that word quite differently…)

So, who decides what “we” think in the field of MBE? What core beliefs hold us together, and how do we know?

One Answer: Ask Delphi

To solve this puzzle, Dr. Tracy Tokuhama-Espinosa, Dr. Ali Nouri, and Dr. David Daniel organized a “Delphi Panel.”

That is: they asked 100+ panelists to respond to several statements about the intersection of psychology, neuroscience, and education. (Full disclosure: I’m almost sure I was 1 of the 100 — but I don’t have specific memories of my contributions.)

They then crunched all those answers to determine a) the panelists’ points of agreement, and b) their enduring concerns about those points.

For instance, 95% of the panelists agreed with this statement:

Human brains are as unique as human faces. While the basic structure of most humans’ brains is the same (similar parts in similar regions), no two brains are identical. The genetic makeup unique to each person combines with life experiences and free will to shape neural pathways.

However, several participants disagreed with the inclusion of the phrase “free will” — including some who agreed with the statement overall.

This Delphi Panel method, in other words, BOTH looks for points of consensus, AND preserves nuanced disagreements about them.

21 Tenets, and Beyond…

So, what do “we” in the world of MBE believe?

The Delphi Panel supported 6 principles and 21 tenets across a wide range of topics: motivation, facial expression, tone of voice, sleep, stress, novelty, even nutrition. (91% of panelists agreed with the statement “NUTRITION influences learning. Basic nutritional needs are common to all humans, however, the frequency of food intake and some dietary needs vary by individual.”)

Taken all together, they add up to several Key Concepts — almost all of which matter to teachers who read this blog.

For instance:

Teachers should understand some basic definitions, and beware of some enduring neuromyths. (“Learning styles,” here’s looking at you.)

We should know that attention networks can improve, and so can executive functions. (I’m a little concerned about this last statement, as it implies false hopes about working memory training.)

Teachers should know that affect matters as much as cognition; that retrieval practice and spacing really work; that growth mindset is a thing; that interleaving helps.

Excellent Timing

In fact, several of this Delphi Panel’s conclusions align with our upcoming conference on Calming Anxious Brains (starting November 19).

For instance:

STRESS influences learning. However, what stresses one person and how may not stress another in the same way. (95% agreement)

ANXIETY influences learning. However, what causes anxiety in one person may not cause anxiety in another. (97% agreement)

In other words: our students aren’t little learning computers. Their emotional systems — when muddled by the stress and anxiety of Covid times — influence learning profoundly.

Teachers should attend to our students’ emotional lives not because of some misguided mushiness; instead, we do so because those lives can make learning much harder, or much more fluent and natural.

MBE research, and the Delphi Panel, say so.


As a bonus, here’s Dr. Tokuhama-Espinosa explaining “The Difference between Mind, Brain and Education, Educational Neuroscience and the Learning Sciences”:

Changing the System: Where Do We Start?
Andrew Watson

I recently spent two hours talking with a group of splendid teachers from Singapore about Mindset Theory.

We talked about “charging” and “retreating.” We discussed “performance goals” and “learning goals.” Of course, “precise praise” merited lots of attention.

At the end of our session, several of their insightful questions focused on systemic change:

How can we help teachers (not just students) develop a growth mindset?

How can we change our grading system to promote GM goals?

What language should we use throughout the school to talk about learning and development?

These questions — and others like them — got me thinking:

We know that psychology and neuroscience research has so much to offer teachers, learners, and education. What systems should be in place to spread the word? 

Thinking Big

This question gets complicated quickly.

In the first place, teaching will (almost) always be INDIVIDUAL work taking place within a complex SYSTEM.

In some cases, we want teachers to have lots of freedom — say, to try out teaching strategies suggested by cognitive science.

In other cases, we want teachers to follow their school leaders’ guidance — say, when leaders follow wise psychology research.

How can we get that balance right?

  • In England, I believe, a national agency (OFSTED) has evaluation standards that apply to all schools and teachers.
  • France is in the process of creating a Council to vet research-based advice to schools and teachers. (LatB speaker Stanislas Dehaene is taking a leading role.)

In the US, of course, local control of schools makes such a system hard to imagine.

What might we do instead? What levers can we push?

I know of one organization — Deans for Impact — that focuses on teacher education.

Their logic makes great sense.

If we can ensure that teacher training programs incorporate cognitive science wisely, we can change the beliefs and practices of a generation of teachers.

Now THAT would — as they say — “move the needle.”

D4I has published a number of immensely useful summaries and reports. This one, for instance, briskly summarizes six core principles of learning: the research behind them, and their classroom implications.

Focus on Schools

Instead of teacher training, we might focus on schools as systems.

Eric Kalenze (blog here) has written a splendid book about creating a school within a school. What The Academy Taught Us doesn’t focus on cognitive science, but it does offer a chalk-in-hand view of building new systems from scratch.

In Kalenze’s telling, a supportive and inspiring principal created just the right combination to allow for meaningful change. (And a school district’s overly rigid policies brought this hopeful experiment to an end.)

I know of several independent schools that are doing exactly this work. The Center for Transformative Teaching and Learning at St. Andrew’s School has been guiding their faculty — and teachers across the country — for over a decade.

The Peter Clark Center for MBE at the Breck School and the Kravis Center for Excellence in Teaching at Loomis Chaffee (the school where I work) both do excellent work in this field.

Perhaps this “Center” model will spread widely throughout schools in the US. If so, these highly local “Deans for Impact”-like initiatives just might — gradually but powerfully — shape the future of teaching.

One By One

At the same time, my own experience suggests the importance of working teacher by teacher.

I attended my first Learning and the Brain conference in 2008. Inspired by the possibilities of combining psychology, neuroscience, and education, I began my own independent exploration.

Although I don’t run a school or supervise teachers, I’m able to spread the word — both as a classroom teacher, and in my work as a consultant (hello Singapore!).

And here’s where Learning and the Brain conferences continue to be so valuable.

The more individual teachers who attend — the more groups of teachers who pool together to share excitement and ideas — the more we can expand networks and create the movement we need.

Perhaps the best way to change the complex system is: one teacher at a time.

I hope you’ll join us in Boston in November!

Understanding Adolescents: Emotion, Reason, and the Brain
Andrew Watson

Kurt Fischer — who helped create Learning and the Brain, and the entire field of Mind, Brain, and Education — used to say: “when it comes to the brain, we’re all still in kindergarten.”

He meant: the brain is so FANTASTICALLY complicated that we barely know how little we know.

Yes, we can name brain regions. We can partially describe neural networks. Astonishing new technologies let us pry into all sorts of secrets.

And yet, by the time he left the program he founded at Harvard, Dr. Fischer was saying: “when it comes to the brain, we’re now just in 1st grade.”

The brain is really that complicated.

Fascinating Questions

Adolescents — with their marvelous and exasperating behavior — raise all sorts of fascinating questions.

In particular, we recognize a real change in their ability to think abstractly.

Unlike their younger selves, teens can often “infer…system-level implications…and lessons that transcend the immediate situation.”

We can say in a general way that, well, teens improve at this cognitive ability. But: can we explain how?

More specifically, can we look a their brains and offer a reasonable explanation? Something like: “because [this part of the brain] changes [this way], teens improve at abstract thinking.”

A research team at the University of Southern California wanted answers.

Networks in the Brain

These researchers showed 65 teens brief, compelling videos about “living, non-famous adolescents from around the world.” They discussed those videos with the teens, and recorded their reactions.

And then they replayed key moments while the teens lay in an fMRI scanner.

In this way, they could (probably) see which brain networks were most active when the teens had specific or abstract reactions.

For example, the teen might say something specific and individual about the teen in the video, or about themselves: “I just feel so bad for her.”

Or, she might say something about an abstract “truth, lesson, or value”: e.g., “We have to inspire people who have the potential to improve society.”

If some brain networks correlated with specific/individual statements, and other networks with abstract/general statements, that correlation might start to answer this question.

As usual, this research team started with predictions.

They suspected that abstract statements would correlate with activity in the default mode network.

And, they predicted that concrete statements would correlate with activity in the executive control network.

What did they find?

Results and Conclusions

Sure enough, the results aligned with their predictions. The orange blobs show the teens’ heightened neural activity when they made abstract statements.

And: those blobs clearly overlap with well-established regions associated with the Default Mode Network.

[Figure: Neural correlates of abstract construals. A whole-brain analysis (N = 64) identified regions whose activity while responding to the documentary-style stories correlated positively with abstract construal scores. The identified regions overlap with default mode network ROIs, including the inferior/posterior posteromedial cortices and the dorsomedial and ventromedial prefrontal cortices.]

The study includes a second (even more intricate!) picture of the executive control network — and its functional overlap with concrete statements.

The headline: we can see a (likely) brain basis for concrete and abstract thought in teens.

Equally important, a separate element of the study looks at the role of emotion in adolescent cognition. (One of the study’s authors, Dr. Mary Helen Immordino-Yang, has worked on this topic for years.)

In brief, emotions don’t necessarily limit thinking. They can focus and motivate thinking:

“Rather than interfering with complex cognition, emotion in the context of abstract thinking may drive adolescents’ thinking forward.”

The much-discussed emotionality of teenage years might not be a bug, but a feature.

A Final Note

I’m especially happy to share this research because its lead author — Dr. Rebecca Gotlieb — has long been the book reviewer for this blog.

If you’ve ever wondered how she knows so much about the books she reviews, well, now you know.

Because of the work that she and so many other researchers are doing, Dr. Fischer could now say that we’re entering 2nd grade in our understanding of the brain…


A Final Final Note

Neuroscience studies always include more details than can be clearly summarized in a blog post. For those of you who REALLY want to dig into the specifics, I’ll add three more interesting points.

First: knowing that scientific research focuses too much on one narrow social stratum, the researchers made a point to work with students who aren’t typically included in such studies.

In this case, they worked with students with a lower “socio-economic status” (SES), as measured by — among other things — whether or not they received free or reduced-price lunch. Researchers often overlook low-SES students, so it’s exciting this team made a point to widen their horizons.

Second: researchers found that IQ didn’t matter to their results. In other words, “abstract social reasoning” isn’t measured by IQ — which might therefore be less important than some claim it to be.

Third: teachers typically think of “executive function” as a good thing. In this study, LOWER activity in the executive control network ended up helping abstract social thought.

Exactly what to make of this result — and how to use it in the classroom — is far from clear. But it underlines the dangers of oversimplification of such studies. Executive functions are good — obviously! But they’re not always beneficial for everything.


Gotlieb, R., Yang, X.-F., & Immordino-Yang, M. H. (2021). Default and executive networks’ roles in diverse adolescents’ emotionally engaged construals of complex social issues. Social Cognitive and Affective Neuroscience, nsab108. https://doi.org/10.1093/scan/nsab108

Let’s Get Practical: Signaling a Growth Mindset
Andrew Watson

Most teachers know about Mindset Theory: the idea that students’ beliefs about intelligence shape their success in learning.

Specifically:

If I think that intelligence (whatever that is) can’t change, I learn less.

If I think that intelligence can change, I learn more.

Once widely believed and championed, this theory now faces real doubts — especially following two meta-analyses by Sisk and Burgoyne showing that mindset strategies produce (on average) negligibly small effects.

Alas, Mindset debates often fall into two extreme camps:

“Tell students about growth mindsets — they’ll learn more!” or,

“Mindset research is nonsense; skip the whole thing.”

Ugh.

Perhaps we can do better?

Doing Better

Dan Willingham (I believe) has argued that contrary findings about growth mindset don’t exactly “disprove” Mindset Theory. Instead, they remind us that getting mindset strategies right takes precision and care.

We shouldn’t blithely think: “I’ll just do some mindset stuff now.”

Instead, we should think: “I need to ensure my mindset strategy aligns with research, and with my students, quite precisely.”

For instance: I’m skeptical that simply telling students about mindset — the most common strategy I hear about — has much enduring effect.

Instead, I think we need to have quiet and consistent classroom policies and procedures that reinforce Growth Mindset messages.

Obviously, if we tell our students that intelligence CAN change and act as if we believe it CAN’T, our actions reveal what really matters to us.

One Recent Example

One research group from Washington State wondered if the syllabus of a college course might be enough to communicate a professor’s mindset.

They created two mindset versions of a Calculus syllabus.

The Fixed Mindset Syllabus, for instance, said:

“If you have not mastered these concepts, you should consider dropping the course.”

“I do not take attendance in class [because] I do not penalize students with strong math abilities.”

It also had one heavily-weighted final exam.

The Growth Mindset Syllabus, by contrast, said:

“If you have not mastered these concepts, you should see me or a teaching assistant and we will provide resources.”

“All students will learn something new and attending class is the best way to learn.”

This syllabus had many exams, equally weighted.

Sure enough: both men and women assumed a) that the professor who wrote the FM syllabus indeed had a fixed mindset, and b) that this professor probably assumed that women are “naturally worse at math” than men.

And, women (but not men) who read the FM syllabus did worse on a subsequent math test than those who read the GM syllabus.

Beyond the Syllabus

These perceptions, it turns out, influenced learning beyond the syllabus.

This research team had students rate their professors’ mindsets.

In 46 courses across the university, students — both male and female — rated their STEM professors’ mindsets similarly. That is: some professors rated strongly at the fixed mindset end of the scale — and the students’ gender didn’t matter in that rating.

And, both male and female students assumed that fixed-mindset professors believed that “women struggle to do well in advanced math.”

Sure enough: men had higher average grades in classes taught by FM professors. Women had higher average grades in classes taught by GM professors.

In other words: those syllabus policies — combined with other classroom factors — influence students’ learning.

It might be hard to identify exactly what causes this effect, but mindset certainly seems to be an important part of the equation.

What Should K-12 Teachers Do?

Few pre-college teachers have a syllabus with the gravitas of a college syllabus.

We do, however, have policies and procedures. We talk about them with our students. This study (and many others) encourages us to rethink those policies with their mindset implications in view.

For instance: does our rewrite policy suggest we think that students can get smarter? (I say to my students: “If you say to me you want to work harder and learn more, why would I say no to that? OF COURSE you can revise the essay!”)

Do we have different policies for “smart students” than “other-than-smart students”?

Do we — heaven help us — have a program for students we call “Gifted”?

In brief: we should not think of mindset as a topic we discuss once in a special program.

Instead, we should consider — bit by bit, day by day — what signals we send to our students. If the language we use, the policies we communicate, the procedures we follow all demonstrate “I think you can get smarter,” our students just might believe us.

If we think they can, they will.