L&B Blog – Page 9 – Education & Teacher Conferences
“Comprehensive and Manageable”: Walkthrus Has It All
Andrew Watson

Teachers who want to rely on cognitive science to improve our teaching have SO MANY good options to choose from:

The best ways to help students practice,

The best ways to help explain new material,

Even the best ways to help students feel connected to one another.

This good news, however, can quickly become bad news.

How can we keep track of all this guidance?

How can we balance and combine all these suggestions?

As I’ve written elsewhere, we’re lucky to have an increasing number of books that bring all these pieces together. (I wrote about Teaching and Learning Illuminated just a few weeks ago.)

Another EXCELLENT candidate in this field has been published for a US audience in recent months: Walkthru: 5-Step Guides to Build Great Teaching by Tom Sherrington and Oliver Caviglioli.

The cover of Walkthru: a bright yellow and white cover, with a drawing of two teachers thinking and talking together

Many books in this field summarize and organize research into coherent topics and flow charts.

Sherrington and Caviglioli – long time educators, both – take a different approach. They start from the assumption that teachers want to do something practical with the research right now.

With that in mind, they sort dozens of ideas into “Walkthrus”: a series of five concrete steps that teachers can take to focus on and improve a particular part of their teaching practice.

You want to be better at cold calling?

You want a new way to think about seating charts?

Maybe you’d like to create routines that foster a sense of classroom belonging?

For each of these goals – and DOZENS of others — you can pick a Walkthru and get down to work.

Here’s the fuller story:

The Background

Sherrington and Caviglioli build their Walkthrus on conceptual work done by many other scholars in this field. And – helpfully – they highlight these conceptual frameworks in the first section of their book, entitled “Why?”

MANY – perhaps most – of these frameworks will be familiar to long-time readers.

You’ve already heard about Caviglioli’s own work on dual coding.

I interviewed Blake Harvard for this blog many years ago.

Peps Mccrea’s book on Motivation gets the focus it deserves.

All the greats appear in this first section: Dan Willingham, and Generative Learning, and Shimamura’s MARGE model, and Cognitive Load Theory, and …

In effect, these 30 pages briskly summarize the essential thinkers, models, and frameworks of recent decades.

You might think of this “Why” section as a Hall of Fame for this field.

Getting Practical

This sort of brisk summary can be inspiring, but it can also be overwhelming. What should we teachers DO with SO MUCH information?

Fear not!

Sherrington and Caviglioli spend the next 200+ pages answering exactly that question.

As a teacher – or instructional leader – you might pick one of the book’s broader sections: say, “Questioning and Feedback,” or “Behavior and Relationships,” or “Mode B Teaching.”

Or, you might pick one of the individual Walkthrus.

To take one example – literally at random – you might decide to work on helping students read. Happily, one Walkthru focuses on “Building a Culture of Reading.” Steps here include:

Read Across the Curriculum, and

Embrace Reading Aloud, and

Embed Reading in Homework Tasks.

You can work through these steps at your own pace in an iterative cycle, which Sherrington and Caviglioli call “ADAPT” (see page 290).

In other words: teachers don’t need to do everything all at once. And we don’t need to figure out how to structure the application process.

Instead, Walkthrus walks us through the translation from theory (the “Why” section) to practice (the “What” section).

This strategy means that an enormous amount of research-based advice is repackaged into brief and manageable steps.

Some Important Notes

First: The USA version of Walkthrus distills the greatest hits from a 3-volume version published in the UK. If you’re REALLY into Walkthrus, you might look for that larger set.

Second: Sherrington and Caviglioli – of course! – make decisions about what to include (and not). Not all teachers or leaders will agree with all these decisions.

However, you can easily find points of agreement and focus on those. The book’s structure, in fact, encourages you to do so.

Third: I share a publisher (John Catt) with these authors; in fact, I wrote a “blurb” for the book. I don’t think these factors have influenced my review, but you should have those facts as you weigh my opinions.

TL;DR

You’re looking for a resource that sums up the most important ideas for applying cognitive science to the classroom?

You’d like it to be simultaneously comprehensive and manageable?

Walkthrus just might be the book for you.

How to Change Students’ Minds? Create Surprise…
Andrew Watson

Sometimes teaching is challenging. And sometimes, it’s REALLY challenging.

For instance:

Because I’m an English teacher, I want my students to know the word “bildungsroman.” (It means, “a novel of character formation.” Their Eyes Were Watching God depicts Janie’s formation as a complete person — so, it’s a bildungsroman.)

Alas, students find that word to be disturbingly odd: “bildungswhat???” they cry.

And the definition is at times perplexing. Are the Harry Potter novels examples of a bildungsroman? How about The Book Thief?

So, learning that definition presents a challenge.

But, other literary terms create a bigger learning challenge.

As an English teacher, I also want my students to know the definition of the word “comedy.”

In this case, my students and I face a much different problem. That is: my students think they already know what ‘comedy’ means.

They think it means, basically, “a story that’s funny.”

In the world of literary analysis, however, “comedy” doesn’t mean funny.

Basically, the definition goes like this: “ ‘tragedy’ ends in death or banishment; ‘comedy’ ends in marriage, implying birth.” (Lots more to say, but that’s a good place to start.)

So: according to this definition, sitcoms aren’t comedy.

And all sorts of stories can be comic, even if they’re not even a little bit funny. (I just read a murder mystery which has a comic ending: one of the protagonists goes on a date — implying the potential for marriage.)

In research world, we call this problem a “prior misconception.”

That is: my students think they know the correct answer (“comedy” = funny), but the question really has a different answer (“comedy” = ending implying marriage).

Sadly, prior misconceptions make learning harder. Students’ prior misconceptions complicate the process of learning correct answers or concepts.

So: what’s a teacher to do?

A Scientific Method?

Although the examples I’ve offered focus on teaching English literary terminology, this question gets most research attention for teaching scientific concepts.

A brightly colored beach ball floating in a vibrantly blue pool

For instance: imagine pushing a solid ball underwater. How much liquid will it displace?

Younger students have an important misconception about this question. They typically think that the amount of water depends on the WEIGHT of the ball, not the SIZE of the ball.

This misconception about “displacement” will get in the way of later scientific learning, so teachers should correct it as quickly as we can. How best to do so?
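The physics here, briefly: a fully submerged solid ball displaces exactly its own volume of water, so only its size matters. This quick sketch (my own illustration, not from the study) makes the point with two balls of the same size but very different weights:

```python
import math

def displaced_volume(radius_cm):
    """Water displaced (in cubic cm) by a fully submerged solid ball."""
    # Archimedes: a submerged object displaces its own volume of fluid.
    # Notice that the ball's weight never appears in this formula.
    return (4 / 3) * math.pi * radius_cm ** 3

# A light plastic ball and a heavy steel ball, both 5 cm in radius:
plastic = displaced_volume(5)
steel = displaced_volume(5)

print(round(plastic, 1))  # 523.6
print(round(steel, 1))    # 523.6; identical, despite the weight difference
```

The weight-based intuition feels right to children precisely because weight matters for whether the ball *floats*; it just doesn’t matter for how much water a fully submerged ball pushes aside.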

A research team in Germany approached this question with a specific strategy: using surprise.

These researchers showed a video to 6- to 9-year-olds, whom they met at a natural history museum.

Half of the children were asked to predict how much water would be displaced when balls of various sizes and materials were submerged. Then they saw the actual results.

Sure enough: the children who made predictions  — based on their prior misconceptions — were more surprised than those who didn’t. (Believe it or not, surprise in this case is measured by pupil dilation!)

And, those children learned more from the experiment than those who didn’t make predictions.

That is: they scored higher on subsequent tests about displacement. And — even better — they scored higher on transfer tests of this concept.

So, one potential strategy to help students overcome their prior misconceptions about the natural world:

Step one: ask them to make predictions based on those misconceptions.

Step two: surprise them with real-world experiences that contradict those misconceptions.

Boom: minds changed.

Strengths, and Doubts

When I first saw it, this study appealed to me for a number of reasons.

First, one author — Garvin Brod — has worked on several studies and papers that I admire. (I’ve written about another one here.)

So, when I see Dr. Brod’s name on the byline, I sit up and take notice.

Second: for a variety of technical reasons, I admire the study design. The researchers have taken great care to get the tricky details just right. (For instance: their active control condition makes sense to me.)

However, I do have concerns. (To be clear: Brod & Co. acknowledge both these concerns in their “Limitations” section.)

Concern #1: duration.

For understandable reasons, researchers measured the students’ learning right away. (The entire process took about 30 minutes.)

But we don’t want our students to change their prior misconceptions right now. We want them to change misconceptions — as much as possible — FOREVER.

This problem creates concerns because prior misconceptions are stubborn. To ensure that the “surprise” method works, it would be GREAT if we could retest participants weeks or months later.

Concern #2: contradiction.

I have seen other authors and writers raise a plausible concern. If we invoke students’ prior misconceptions before contradicting them, we run the risk of strengthening those misconceptions.

That is: students will naturally filter the new/contradictory experience through the distorting lens of their misconceptions. And that lens is EVEN MORE DISTORTING because we just asked students to activate it.

Now at this point I have a confession: I simply can’t remember where I read that. But I remember thinking: “that sounds really plausible to me.”

So at this point, I’m honestly kind of stumped. A well-conceived study suggests the “surprise” strategy will work (at least in the short term). But other scholars in this field have plausible doubts.

Truthfully, I’m hoping one of you will know the “don’t invoke prior misconceptions!” research pool and point it out to me. If/when that happens, I’ll share it with you.

TL;DR

This study suggests that asking students to make predictions based on their prior misconceptions increases their surprise when those misconceptions are contradicted by experience.

And: that feeling of surprise helps them learn a correct conception — at least in the field of science.

However, I myself am not fully persuaded by this approach. I’ll keep a lookout for other studies in the field, and share them with you.


 

Theobald, M., & Brod, G. (2021). Tackling scientific misconceptions: The element of surprise. Child Development, 92(5), 2128–2141.

Guest Post: “My Learning and the Brain Story”
Guest Post

Beth Hawks has taught science for 25 years. She now serves as the science department chair at Grace Christian School in Raleigh, North Carolina. A graduate of Oral Roberts University, Beth has taught 8th grade Physical Science, Physics, Chemistry, and Algebra IB, among other courses. She frequently provides professional development to colleagues in her role as resident brain enthusiast.


When I started teaching at my current school twenty-one years ago, one of the “areas for improvement” on my year-end evaluation was that I didn’t seek out professional development.  I couldn’t disagree.  Back then, it never occurred to me to seek it out.  Both schools I had previously taught in said, “It is professional development day.  Go over there and learn.”  

Headshot of author and teacher Beth Hawks

While I read about teaching, I did not attend very many workshops or conferences that took place on school days.  I hated being out of my classroom, and every teacher knows what a pain it is to develop sub plans.  

That all changed when a brochure showed up in my mailbox for the 2018 Learning and the Brain conference.  

Teachers get a lot of brochures advertising workshops and conferences from a variety of sources, and we ignore most of them. I was on my way to the recycling bin with this one when a keynote speaker whose book I had just read caught my attention, so I opened the brochure.  As I looked at the names and credentials of speakers and the topics of sessions, I was impressed.  

By the end of that morning’s door duty, I was prepared to beg.  I said to my administrator, “You know how I never want to go to anything?  I’ve never wanted to go to anything more than this!”  They approved my request provided that I would teach my academic team what I had learned upon my return.  

Deal.

What I found when I got to Boston was not just one great speaker, but an incredible collection of researchers, scientists, and educators with deep knowledge.  More importantly, they were down to earth and transparent, willing to answer questions and talk and follow up with me after the conference because they cared about improving my practice, not just getting a speaking gig.  I couldn’t get enough of being taught by these experts.  

When I returned home, I was given time in a faculty meeting to present what I had learned. I had so much to present that I asked if I could have two meetings. It was lovely knowing that what I had learned would benefit classrooms other than my own.

In 2019, I attended the Boston conference again. While the first year had been great, this was the year that I absolutely fell in love with the science of learning. From the keynote presentations of Barbara Oakley, David Daniel, David Rose, and Sarah-Jayne Blakemore to the sessions of John Almarode and Marcia Tate, I came home with both theory and practical advice that changed my classroom practice dramatically.

  • I thought about encoding and engagement in new ways.  
  • I made a clumsy attempt at interleaving.  
  • I instituted more pre-questions and retrieval than ever before.  

Trying new methods was energizing because I was doing it with more sense of purpose and intention, knowing it was based on evidence.  

I emailed my principal from the airport and told her how much I had learned and that I was pretty sure I could develop a six-session course for my colleagues. She got it approved by our accrediting organization for CEU credit, and by January, nineteen of my fellow teachers were getting the benefit of my Learning and the Brain experience. We had a great time learning together and brainstorming new ideas for our classes, which ranged from transitional kindergarten to AP Calculus.

When we were all thrown into virtual instruction just two months later, it was reassuring to have a basis for decision making about what mattered regardless of location.  

  • Encoding looked different than it had in my classroom, but I could still adapt what I had learned about encoding to this new context.  
  • Retrieval would be done through an online chat rather than mini-whiteboards, but I could be confident about the fact that it was still important.  

Those who had taken the course expressed the same feelings, and I was grateful that I had been able to provide them with that reassurance by being a conduit of the Learning and the Brain experience. 

Because of this conference, I became a voracious consumer of cognitive science research. Last year, when the school allowed me to run the six-week course again, I had trouble staying within my time limits because I had learned even more from books, blogs, and podcasts that I would not have sought out without the experience of Learning and the Brain. Now, I find myself making resource recommendations to the teachers around me several times a week.

This November, I had the awesome opportunity of bringing a friend with me to the conference.  Watching him appreciate the experience deepened my love for it even more.  I enjoyed introducing him to people I respect, and we both got to meet some of our academic heroes.  Talking through what we had learned on our walks back to the hotel helped us both learn even more.  While I appreciate that the virtual option exists for those whose budgets won’t allow travel, I greatly value the things that happen in person.  The ability to talk directly to someone whose work I admire (e.g. Daniel Willingham) can’t be replicated digitally.  I would recommend to anyone that they attend the conference in person if they can. 

I am writing this during Thanksgiving week, so let me end with this.  I am thankful for Learning and the Brain.  If that brochure had not appeared in my box five years ago, I would not have grown in my teaching as much as I have or been able to help my colleagues grow in theirs.  I am thankful so many educators get to learn and grow and communicate with each other, and I am grateful to call them friends.

Classroom Cognition Explained, or, Dual Coding Just Right
Andrew Watson

The Good News: research into cognitive science can be SPECTACULARLY USEFUL to teachers. (That’s why we have Learning and the Brain conferences….)

Book Cover for Teaching & Learning Illuminated

The Less Good News: ideas that come from cognitive science can be MISUNDERSTOOD and MISAPPLIED with alarming frequency.

For example: as I’ve written elsewhere, dual coding has lots of potential benefits for reducing working memory load — and thereby helping students learn. That’s the good news.

But — less good news — dual coding has too often been interpreted to mean “put icons on things to make them better.”

Wouldn’t it be great if someone could bring together LOTS of ideas from cognitive science, AND explain them with well-executed dual coding?

Yes; Yes It Would…

Well, someone has done exactly that. Three someones, in fact.  Bradley Busch, Edward Watson (no relation), and Ludmila Bogatchek have written Teaching and Learning Illuminated: the Big Ideas, Illustrated.

As that title promises, this book illuminates (that is, dual codes) the greatest hits from cognitive science: retrieval practice, cognitive load theory, Rosenshine’s principles, mindset, and a few dozen more.

Each section combines a pithy description of a particular concept with a visual representation of its core ideas.

So, for instance, page 35 summarizes dozens of studies looking at the benefits of spreading practice out (“spacing”) and practicing related topics together (“interleaving”).

And, the facing page offers a carefully considered graph that depicts learning over time. One path (“cramming”) looks good because it works so well in the short term. But the second path (“spacing and interleaving”) results in more learning over time.

Voila: “desirable difficulties” in one thoughtful graph.

Unlike so many examples of dual coding of the “put-an-icon-somewhere” school, Busch, Watson, and Bogatchek create substantial, meaty visuals that both require and reward careful study.

I never looked at the illustrations and thought: “gosh, that’s pretty.”

Instead, I thought:

Oh, gosh, I need to stop and study this for a bit.

Wait, why is that line there?

Ok, now I get it. Presumably this axis is labeled…oh, right, so cool!

In other words, the visuals both require thought and support thought. The result: readers understand these complex ideas even better.

So Many Shelves

I’ve written in the past that the “best book to read” depends on the reader’s current knowledge.

If you’re somewhat of a beginner in this field, I think you should probably read a book that focuses on just one topic: long-term memory, or attention, or cognitive load theory.

Once you understand lots of the pieces, it’s time to read the books that put them all together.

Teaching and Learning Illuminated looks like an easy read — so many cool pictures! At the same time, it includes an ENORMOUS number of research-based insights and suggestions.

For that reason, I think of it as an “early-advanced” book more than one for those who are new to the field. Those illustrations are welcoming, but they also create cognitive demands of their own.

Full Disclosure

Because this field is relatively small, I know one of the three authors — Bradley Busch — a bit. (I recently recorded some brief video snippets for his website.)

I don’t believe our conversations have influenced this review, but the reader should know of them in making that evaluation.

I’ll also note: yes, I have written a book about Mindset; and yes, this book includes a mindset chapter called “The Watson Matrix.” But: their matrix isn’t about my summation of mindset theory.

 

An Argument Against “Chunking”
Andrew Watson

Learning and the Brain exists so that we can talk about good teaching together.

Although such conversations can provide great benefits, they also run into problems.

We might disagree with each other’s beliefs.

Or, we might disagree about research methods.

Even when we do agree, we might struggle to communicate effectively about shared beliefs.

For example: jargon.

When specialists talk with each other about “theory of mind” or “p3” or “element interactivity,” the rest of us often think “what the heck does THAT mean?”

Effective communication stops when words don’t have recognizable meanings.

Another, subtler problem also hampers communication:

Effective communication stops when we use the same word to mean different things.

Sometimes this problem happens between disciplines.

The word “transfer,” for instance, has different meanings in neuroscience, education, and psychology.

Other words get us all tangled up, even within the same discipline.

I’m looking at you, “chunking.”

Television for All

I believe I first heard the word “chunking” to describe this mental phenomenon:

Imagine I ask you to memorize this list of letters:

CN NAB CFO XHB OCB S

Or, I might ask you to memorize THIS list of letters:

CNN ABC FOX HBO CBS

From one perspective, those lists are identical. They are the same letters in the same order. I just moved the spacing around a bit.

But, when I moved those spaces, I “chunked” the letters.

Penguins grouped together into the shape of a heart

That is: I organized those letters to align with your prior knowledge.

As teachers, we can reduce working memory load by “chunking”: that is, by aligning new ideas/information with ideas/information our students already have.

“Chunking” means “alignment with prior knowledge.”

Cool.

Or, wait a moment…

Curiouser and Curiouser

I’ve also heard “chunking” used in entirely different ways.

The second meaning: “break larger pieces down into smaller pieces.”

If I’ve got a list of ten instructions I want my students to follow, that list will almost certainly overwhelm their working memory. So, I could break that list down.

Three instructions.

Then three more.

An additional two, followed by the final two.

VOILA, I “chunked” the instructions.

Of course, this kind of chunking (breaking down into smaller bits) doesn’t mean the same thing as the first kind of chunking (aligning with prior knowledge).

Nor does it mean the same thing as the THIRD kind of chunking: forming a link with prior knowledge.

That is:

You could learn that “hamster” is another “mammal” that people keep as a “pet.”

You’ve formed a new “chunk”: mammals that are pets.

Or, you could learn that “Saratoga” is another surprising military victory, like “Agincourt” and “Thermopylae.”

You’ve formed a new “chunk”: unlikely military victories.

You see the problem here?
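To see just how differently the first two meanings behave, here’s a small sketch (my own illustration; the function names and the greedy matching approach are simply assumptions for demonstration, not anything from the research literature):

```python
def chunk_by_prior_knowledge(letters, vocabulary):
    """Meaning #1: regroup a letter string around items the learner already knows."""
    chunks, i = [], 0
    while i < len(letters):
        for word in vocabulary:
            if letters.startswith(word, i):  # a familiar unit, e.g. "CNN"
                chunks.append(word)
                i += len(word)
                break
        else:
            chunks.append(letters[i])  # no match: left as a lone letter
            i += 1
    return chunks

def chunk_by_size(items, size):
    """Meaning #2: break a long list into consecutive groups of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Meaning #1 recovers the network names from the example above:
print(chunk_by_prior_knowledge("CNNABCFOXHBOCBS", ["CNN", "ABC", "FOX", "HBO", "CBS"]))
# ['CNN', 'ABC', 'FOX', 'HBO', 'CBS']

# Meaning #2 just splits ten instructions into groups; no prior knowledge involved:
print(chunk_by_size(["step %d" % n for n in range(1, 11)], 3))
```

Both functions reduce working memory load, but they do it in entirely different ways, which is exactly why using one word for both invites confusion.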

In Sum

So, as far as I can tell, “chunking” means either…

… aligning new information with prior knowledge, or

… breaking large information dumps into smaller pieces, or

… connecting new information with well-known information (which sounds like the first meaning, but isn’t exactly the same thing).

If I tell a colleague, “I think that part of the lesson would have benefitted from more chunking,” s/he doesn’t really know what I mean.

Even worse: s/he might THINK that s/he knows — but might understand chunking one way when I mean it another.

Ugh.

To be clear: I am IN FAVOR of all three strategies.

After all: all three ideas reduce working memory load. And, I’m a BIG FAN of reducing WM load.

However, when we use the word “chunking” to describe three different teaching strategies, we make our advice harder to understand.

That is: we increase the working memory demands of understanding strategies to reduce working memory demands. The paradox is both juicy and depressing.

So, I am enthusiastically in favor of all the strategies implied by the word “chunking,” but I think we should stop calling them “chunking.”

Instead, we should use more precise vocabulary to label our true meaning.

Do Animations Improve Learning? A Definitive Answer, Please…
Andrew Watson

Recently I discussed working memory overload with a group of wise and thoughtful teachers.

I showed them one of my favorite GIFs:

a glass (representing working memory),

slowly filling up with stuff (teaching methods, complex information),

so that there is ultimately no room left in the glass (that is: no room left for understanding).

VOILA: working memory overload in one handy animation.

I love this GIF, and show it often.

Young woman draws an animated storyboard

Yet when I gave these teachers time to discuss this animation, they honestly didn’t like it very much. They had lots of specific (and insightful) suggestions, but the overall message was: thumbs down.

So: should I ditch the GIF?

Where to Start

For a guy who writes a blog about research-informed teaching, the next step seems obvious: find out what the research says!

Surely I can find an answer — maybe even a definitive one.

Alas, I quickly stumbled into a quandary.

On the one hand, we’ve got lots of good research suggesting that — on the whole — students do NOT learn more from animated information.

One of the best known studies — led by the much-esteemed Richard Mayer — supports the static media hypothesis: “static illustrations with printed text reduce extraneous processing and promote germane processing as compared with narrated animations.”

In this study, researchers used animations about everything from lightning formation to toilet tanks to see if they helped students understand.

These animations never helped, and often hurt, student learning.

On the other hand, a substantial meta-analysis of 40 studies finds a “beneficial effect of the presence of animated display for learning dynamic phenomena.”

So: what to do when we’ve got persuasive — and contradictory — evidence?

A Feature, Not a Bug

For people unfamiliar with research-world, this kind of contradiction might seem like a failure. If the people who do the research can’t agree on an answer, surely we should just ignore them.

I would offer a different interpretation.

Teaching is complicated. Learning is complicated. PEOPLE are complicated.

So, any time we do research about people teaching and learning, we’re looking at enormously complicated questions.

Some disagreement is inevitable.

And — here’s the surprise — the fact that we found contradictions means that we’ve been looking hard enough. (If we don’t find contradictory research, we probably haven’t looked very hard…)

What, then, should we do to resolve the (inevitable, helpful) contradictions?

One useful step: get granular.

In this case: presumably some kinds of animations are helpful under some kinds of circumstances. But others: not so much.

We need to know more about the specifics.

Okay, Some Specifics

With that in mind, I found a more recent study trying to understand when and why animations might hinder understanding.

The study, in effect, looked at two questions:

Are the animations essential to understanding the topic, or are they basically “decorative”?

and

Is the material being studied cognitively challenging?

Two scholars — Annabel Pink and Philip Newton — had students study slides with information on them. Some slides had animations; others didn’t.

And — useful to know — the slides covered complex material: human physiology and enzyme kinetics.

Sure enough, students remembered LESS information from the slides with animations. And they rated those slides as cognitively MORE challenging.

In other words:

When deciding whether or not to break out the GIFs, we can ask ourselves:

Am I just decorating the slide, or does animation help clarify the meaning of the material?

and

Is this material a cognitive heavy lift?

When I ask these questions about my working memory overload GIF, I arrive at these answers:

The GIF illustrates a complex process: it’s not decorative, but meaningfully connected to an understanding of the ideas.

BUT

The ideas are — in fact — quite complicated.

The animation, in other words, might add cognitive load to an already mentally challenging concept. Hence the teachers’ unhappiness.

Small, Medium, and Big Pictures

What should we teachers do with this information?

Narrowly stated, we can consistently ask the two questions above: a) is the animation “decorative”? and b) is the material cognitively challenging?

If either answer is “yes,” then we should hesitate to add animations.

More broadly, we should continue to look for detailed guidance about when to use, and when to avoid using, animations to help students learn.

As far as I can tell, we just don’t have a clear picture about the boundary conditions within which they help students learn.

The big picture looks like this.

Psychology research rarely gives us an absolute, definitive answer to questions like: “should we add animations or not?”

Teachers always need to look at research specifics, compare them to the classroom conditions where we work, and use our own expert judgment to analyze the goodness of fit.


Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11(4), 256–265. https://doi.org/10.1037/1076-898x.11.4.256

Berney, S., & Bétrancourt, M. (2016). Does animation enhance learning? A meta-analysis. Computers & Education, 101, 150–167.

Pink, A., & Newton, P. M. (2020). Decorative animations impair recall and are a source of extraneous cognitive load. Advances in Physiology Education.

The Whole Toolbox in One (Free) Download
Andrew Watson

If you want to learn more about improving teaching with psychology research, I’ve got good news:

There are SO MANY excellent books to read.

I’ve also got bad news:

There are SO MANY excellent books to read, we can struggle to manage them all.

In fact, as I’ve written elsewhere, I think the “best book to read” depends on the category of book you’re looking for.

At the beginning of your research+education journey, you probably want a book devoted to one topic: say, working memory, or motivation, or attention.

As you get more familiar with different categories of research, you might instead want a book that brings many topics together.

Today I’d like to recommend a book from the second category: the Great Teaching Toolkit: Evidence Review from Evidence Based Education. (You can read about it and download it here.)

Step One: How to Begin?

Anyone striving to write a book that “brings many topics together” starts with an enormous challenge: how to organize such a behemoth?

We have SO MUCH pertinent research on SO MANY topics: how can we possibly tidy this muddle?

The Toolkit’s authors devise a sensible sorting strategy. They believe research gives teachers strong guidance in four areas:

What sorts of knowledge do teachers need?

How can we make classrooms emotionally safe?

How can we structure classroom work and routines efficiently?

What teaching strategies require students to think hard?

Now, other authors organize their thinking in other ways. (For instance: Dan Willingham’s Why Don’t Students Like School focuses on nine key principles from cognitive science that should guide instruction.)

But I think you can see right away why the Toolkit’s organizational structure sounds so helpful and sensible.

Step Two: Break It Down

Within each of these categories, the authors offer between 3 and 6 specific principles: everything from “teachers should know common misconceptions in their discipline” to “strategies for asking questions effectively.”

This structure, in turn, allows for a straightforward teacher-development plan.

If I were using this Toolkit with a faculty, I would have teachers select one of these sixteen topics: preferably one where they feel the least confident and successful.

Each teacher would then dig into the research-based suggestions provided right there in the Toolkit.

Even better: the Toolkit reviews the research it summarizes. Teachers and school leaders who want to know exactly why this strategy or topic has been prioritized get all the info they need to dig deeper and discover more.

Examples, Please

You have, no doubt, heard that feedback is essential for student learning.

Imagine that a teacher reviews the Toolkit’s list and determines that she really needs to work on this specific part of her craft.

Turning to section 4.4, this teacher quickly gathers several useful insights about the role of feedback in our work.

In the first place, the Toolkit draws a helpful distinction between feedback that helps the teacher — by giving us information about how much our students know and understand — and feedback that helps the student — by giving them structured ways to improve.

That simple distinction sounds almost too obvious to state out loud…but in my experience isn’t emphasized nearly often enough.

In the second place, the teacher will find several thoughtful prompts for further thought.

As the authors wisely say: “there is no simple recipe for giving powerful feedback.”

Should the teacher remind the student of the success criteria, or point out gaps between the current work and those criteria?

The Toolkit doesn’t offer prescriptive answers because research can’t do that. Research can provide us with options, and let teachers sort out the best ways to put all those options together.

And, if you’re a research nerd (as I am), you’ll be delighted to find almost 20 pages of discussion on their sources for these ideas, and their methods for sorting them all together.

TL;DR

You already know several specific cognitive-science-informed teaching strategies? You want a bigger picture?

The Great Teaching Toolkit will be a feast for you. (And yes: you can download it free!)

The Cold-Calling Debate: Potential Perils, Potential Successes
Andrew Watson

Some education debates focus on BIG questions:

high structure vs. low structure pedagogy?

technology: good or bad?

how much should teachers focus on emotions?

Other debates focus on narrower topics. For instance: cold calling. (“Cold calling” means “calling on students who haven’t raised their hands.”)

Proponents generally see several benefits:

Cold calling helps broaden check-for-understanding strategies. That is: it lets teachers know that MANY students understand, not just those who raise their hands.

It increases accountability.

It adds classroom variety.

And so forth.

Opponents likewise raise several concerns. Primarily:

Cold-calling could stress students out — even the ones not being cold called. That is: even the possibility that I might be called on could addle me.

Also, cold calling signals a particular power dynamic — one that runs contrary to many school philosophies.

Because both sides focus on different measures of success or peril, this debate can be difficult to resolve.

The Story So Far

Back in 2020, a friend asked about the cold calling debate. I looked for research, and — honestly — didn’t find much. The result of that search was this blog post.


In brief, the only study I found (focusing on college sophomores) found more benefits and fewer perils.

Students who had been cold-called a) asked more questions later on, and b) felt less stress.

But, one study is just one study. And, if you don’t teach college sophomores, you might not want to rely on research with that age group.

Today’s News

Research might offer teachers useful guidance, but we shouldn’t accept all research without asking a few questions.

One way to ensure we’re getting GOOD research-based advice is to look for wide ranges of evidence: evidence from…

… primary school AND high school

… science class AND history class

… small AND large schools

… Stockholm AND Johannesburg

And so forth.

Similarly, teachers should feel especially confident when researchers use different methodologies to explore their questions.

For this reason, I was especially pleased to find a cold-calling study published just last year.

This study doesn’t go in for random assignment or control groups (staples of other research paradigms). Instead, it uses a technique called “multimodal interaction analysis.”

I haven’t run into this technique before, so I’m honestly a newbie here. But the headline is: researchers used videotapes to study 86 cold-calling interactions.

In their analysis, they break the interaction down into a second-by-second record — noting the spoken words, the hand gestures, the length of pauses, the direction of the teacher’s gaze. (In some ways, it reminds me of Nuthall’s The Hidden Lives of Learners.)

Heck, they even keep track of the teacher’s use of modal verbs. (No, I’m not entirely sure what modal verbs are in German.)

By tracking the interactions with such extraordinary precision, they’re able to look for nuances and patterns that go beyond simply: “the teacher did or didn’t cold call.”

Conclusions?

Perhaps unsurprisingly, the study’s broad conclusion sounds like this: details matter.

The researchers offer a detailed analysis of one cold call, showing how the teacher’s build-up to the moment created just the right support, and just the right tone, for the student to succeed.

They likewise detail another cold call where the teacher’s body language and borderline insulting framing (“do you dare to answer?”) seem to have alarmed a shy student, who answered only in monosyllables.

By implication, this research suggests that both opponents and proponents are missing a key point.

We needn’t ask: “is cold calling good or bad?”

Instead, we should ask: “what precise actions — what words, what gestures, what habits — set the student up for a positive interaction? Which precise actions do the opposite?”

Once we get good answers, we can focus and practice! Let’s do more of the good stuff, and less of the harmful stuff.

TL;DR

“Is cold calling good or bad?” is probably the wrong question.

Recent research focusing on nuances of technique suggests that teachers can reduce the perils of cold calling while fostering participation and enhancing learning.


Morek, M., Heller, V., & Kinalzik, N. (2022). Engaging ‘silent’ students in classroom discussions: A micro-analytic view on teachers’ embodied enactments of cold-calling practices. Language and Education, 1-19.

Navigating Complexity: When 1st Order Solutions Create 2nd Order Problems
Andrew Watson

Here’s a common classroom problem.

As I’m explaining a complex concept, a student raises a hand.

“Just a moment,” I say, and finish my explanation.

Now I turn and smile at the student: “What was your question?” I ask.

All too often, the student answers, “I forgot my question.”

What’s going on here?

As is so often the case, the answer is: working memory overload.

Working memory HOLDS and PROCESSES information. When a student fails to hold and process, that’s working memory overload.


In this case, my student was processing my explanation, and so failed to hold the question.

The solution?

It might seem simple. Don’t ask students to hold questions while they process explanations.

Instead, I should answer students’ questions right away. Problem solved….

When Solutions Create Problems

Wait just a moment.

This “solution” I just offered might solve the student’s problem.

At the same time, it might create new problems.

The student’s question — even a well-intentioned one — might throw my explanation off track.

My students might lose their tentative understanding of my complex explanation.

I might lose my own train of thought.

So I fixed one classroom problem but now have yet another one. YIKES.

What’s a teacher to do?

First Things First

This example — but one of many — might throw our entire project into question.

Teachers turn to psychology and neuroscience to solve classroom problems.

However, if these “research-based solutions” simply transform one problem into some other headache, why bother with the research?

We could save time by sticking with the old problem, right?

I think the fair answer to that question is: “actually, no.” Here’s why…

Teachers don’t need research to solve classroom problems. We need research to solve COMPLEX classroom problems.

When our classroom problems are simple, we just solve them on our own. We are — after all — teachers! We specialize in problem solving.

For that reason, we turn to research only when the problem isn’t simple.

And for that reason, we shouldn’t be surprised when the answer isn’t simple either.

OF COURSE we can’t fix the “questions-interrupting-my-explanation” problem with one easy research-based step.

If it were so simple a problem, we would have solved it without the research.

Changing the Lens

As I’ve explored this question with wise teachers in recent weeks, I’ve been struck by a pattern:

PROBLEM ONE requires SOLUTION ONE.

But: SOLUTION ONE creates PROBLEM TWO.

And: it’s often true that PROBLEM TWO comes from a different cognitive function than PROBLEM ONE.

So, in the example above, I started with a working memory problem: my student couldn’t hold and process information.

My solution (“take questions right away”) created another problem — but not a working memory problem.

When I answer questions mid-explanation, my students lose focus. That is, the working memory problem has been transformed into an attention problem.

To solve this second problem, I need to switch from working memory solutions to attention solutions.

In other words, I need to think about a separate cognitive function. I’ll find solutions to this 2nd order problem in a different research field.

Again with the Mantra

If you’ve ever heard me speak at a Learning and the Brain conference, you know my mantra: “don’t just do this thing; instead, think this way.”

In other words: psychology research can’t provide teachers with a list of “best practices.” The strategy that works in my 10th grade English classroom at a boarding school might not help 1st graders add numbers in a Montessori program.

But: the thought process I follow with my 10th graders might lead to beneficial solutions for those 1st graders. The answer (“do this thing”) might be different, but the mental pathway (“think this way”) will be the same.

The point I’m making here is: these thought processes might require us to leap from mental function to mental function in search of a more successful solution.

A solution to a long-term memory problem might uncover a motivational problem.

The solution to an alertness problem might prompt an orienting problem.

When I reduce my students’ stress, I might ramp up their working memory difficulties.

And so forth.

When we understand research into all these topics, we can anticipate that these solutions might unveil an entirely different set of troubles.

And by moving nimbly from research topic to research topic, we can ultimately solve that complex problem that once seemed intractable.

All this nimbling about takes practice. And, ironically, it might threaten our own working memory capacity.

But once we get used to thinking this new way, we will arrive at solutions that fit our classrooms, and that work.

Collaborative Learning and Working Memory Overload: Good News or Bad?
Andrew Watson

Consider the following paradox:

Teachers need to give students instructions — of course we do!

After all, instructions help students do what they need to do, so that they can learn what we want them to learn.


At the same time, too many instructions might very well overwhelm working memory.

After all, the student has to HOLD the instructions in memory while PROCESSING each one individually. And: “holding while processing” is one handy definition of working memory function.

In brief: the right number of instructions can help learning, but too many instructions can impede learning.

I recently asked a group of wise and experienced teachers this question:

“Can you think of other teaching practices — like instructions — that are beneficial in small amounts, but might create working memory overload in large amounts?”

After some time to think and discuss, one teacher answered: group work.

After all, he mused, collaboration might simplify some mental processes. But collaboration itself creates additional mental taxes — all that negotiating and delegating and coordinating and explaining.

And disagreeing.

And resolving.

Are there ways that teachers can reduce those “mental taxes” so that students get the benefits without the penalties?

If only we had a research-based answer to those questions…

Inspired by this teacher’s observation, I hunted up this study.

Quadratics in Quito

To explore this question, researchers in Quito, Ecuador worked with 15-year-olds solving quadratic equations.

Specifically, they wanted to know if practice collaborating helps students collaborate effectively.

As is always true, research design gets tricky. But the overall design makes sense.

Some students did practice solving quadratic equations collaboratively; others didn’t.

For a second round of math learning, all students were then sorted into groups for collaborative learning.

So, did students who practiced collaborating do better on later collaboration?

For complex equations: YES. Both three days later and seven days later, students who practiced collaborating did better solving problems than students who didn’t.

For simple equations: NO. If the mental work wasn’t very hard, students didn’t need to practice to collaborate effectively.

In light of these findings, the researchers’ recommendations make good sense.

If learners are novices, learning tasks are complex, and information distribution demands [extensive cooperation], teachers should prepare group members … using similar problems.

If task information does not demand [extensive cooperation], it is not necessary for the teachers to prepare learners to collaborate.

I want to highlight one part of this summary: “using similar problems.”

This research team emphasizes that “collaboration” is NOT a general skill. Collaboration will look different depending on the precise demands of the discipline and the topic.

So: students who “practiced” were given a VERY specific format for learning how to collaborate on this task.

If we want to get the benefits of practice for our own students, we should be sure to tailor the practice in very specific ways.

The Story Behind the Story: an Analogy and a Principle

A research article like this study always begins with a summary of earlier research findings, conclusions, and questions.

This summary includes a particularly helpful foundational inquiry.

When does collaboration increase WM load so much as to threaten learning?

When does collaboration reduce WM load enough to promote learning?

Is there some sort of taxonomy to consider or principle to explore?

To explain, I’ll start with an analogy.

Imagine I want to illuminate a yard at night.

For all sorts of reasons, it would be simplest to have one lamp to do so. Having multiple lamps just adds to the complexity and expense of the project.

So, if my yard is small enough to be covered by one lamp, then I should use one. Adding more lamps makes the project worse — more complicated, more expensive — not better.

But at some point, a yard gets big enough to need multiple lamps. If I use only one lamp, I just can’t illuminate the full yard.

In this case, the additional expense and complexity of having multiple lamps provides a meaningful benefit.

You can see where this is going.

Here’s a potential principle:

If a cognitive problem is simple enough, then one student can solve it on her own.

Adding other students (“collaborative learning”!) increases the WM complexity of the situation without providing any additional benefit.

In this case, each student’s mental effort has become less effective, not more effective.

If, on the other hand, a cognitive problem gets complex enough, then it goes beyond any one student’s cognitive capacity.

In that case, the problem benefits from additional students’ cognitive efforts — even though all those extra students do increase the complexity of the problem.

At some tipping point, when a problem gets complicated enough, it needs to be divided into sub-tasks — despite the complexity of managing that work.

At that tipping point, well-structured and precisely-practiced collaboration probably is more beneficial than harmful.

TL;DR

Groupwork (probably) increases WM demands on simple cognitive tasks, but reduces WM demands for complex cognitive tasks.

To get the most benefits from collaboration, students should practice that skill — and teachers should tailor the practice to the precise demands of the cognitive work.


Zambrano, J., Kirschner, F., Sweller, J., & Kirschner, P. A. (2019). Effects of group experience and information distribution on collaborative learning. Instructional Science, 47, 531-550.