Read This Post with Your Right Brain First…
Andrew Watson

My Twitter feed is suddenly awash with one of those “how does your brain work?” tests. (I should say, “tests.”)

If you look at the picture and see an angel, you’re right-brained.

If you see a helicopter, you’re left-brained.

This “test” has several important flaws.

Flaw #1: it’s not a helicopter or an angel — it’s obviously a dog.

Flaw #2: left-brain/right-brain is one of those zombie myths that just keeps coming back, no matter how many times we kill it.

Of all the myths in this field, this one puzzles me the most. Let me try to unpack my confusion.

Not True: The Brain

At the most basic level, this brain myth lacks any meaningful basis in neurobiology. In the world of theories about the brain, that’s a big flaw.

We can’t in any meaningful way find people who “use more of the right brain,” or “rely on left-brain thinking.”

If you’d like a detailed explanation of the wrongness here, I recommend Urban Myths about Learning and Education by de Bruyckere, Kirschner, and Hulshof.

If you’d rather click a link, check out this study. In the mild language of research, it concludes:

Our data are not consistent with a whole-brain phenotype of greater “left-brained” or greater “right-brained” network strength across individuals.

Translation: “people and brains just don’t operate that way. No seriously. They just don’t.”

Yes, yes: a few mental functions typically take place more on one side than another.


Back in grad school, we learned that 95% of right-handed people rely more on the left side of the brain for some reading functions. But 95% =/= 100%. And [checks notes] left-handed people do exist.

In any case, this finding doesn’t support the LB/RB claim — which is that some people rely more on these synapses, and others rely on those synapses.

Honestly: at the basic level of “how we use our brains,” we’re all “whole brained.” *

Not True: The Mind

Okay, so maybe the LB/RB claim isn’t exactly about “the brain” and more about “the mind.”

That is: some folks are more analytical (“left-brained”) and others are more creative (“right-brained”).

This version of the myth doesn’t use the word “brain” literally. (“Who knows precisely where those mental functions happen in the brain? We were just joshing, kind of poetically.”)

It simply argues that people think differently — and we can tidily divide them into two groups.

In other words, this version simply repeats the “learning styles” argument. These theories say we can divide students into distinct groups (visual/auditory/kinesthetic; or, creative/analytical; or, happy/grumpy/sleepy) and then teach them differently.

Of course, the LB/RB version of “learning styles” is no truer than the other versions; they all lack solid evidence to support them.

The Myers-Briggs Type Indicator sort of claims to measure this distinction (“thinking vs. feeling”). But here again, we just don’t have good evidence supporting this test. **

So, whether we’re talking about neuroscience or psychology, LB/RB ain’t true.

Beyond “True”

One of my favorite quotations is attributed to George Box:

All models are false; some models are useful.

In other words: psychologists can offer a good model for how — say — working memory works. That model is “useful” because it helps us teach better.

However, that model is a model. The staggering complexities of working memory itself defy reduction into a model.

So, maybe LB/RB isn’t true, but is useful?

Honestly, I just don’t see how it could be useful.

If the model were true (it’s not) and I could divide my students into left and right brained groups (I can’t), what would I then do differently?

Just maybe I could devise a “creative” lesson plan for one group and an “analytical” lesson plan for the other. (I’m not sure how, but I’m trying to make this work.)

Yet: doing so would be an enormous waste of time.

Neither group would learn any more than they would with a single shared lesson plan. And all the time I dumped into dual planning is time I can’t spend creating one effective lesson plan.

That sound you hear is George Box weeping.

TL;DR

Left-brain/right-brain claims are NEITHER true NOR useful.

Do not take teaching advice from people who make them.


* Yes, it’s true, some people have only one hemisphere. But that’s really rare, and not at all what the LB/RB myth rests upon.

** Some time ago, I tried quite earnestly to find evidence supporting the MBTI. To do so, I emailed the company that produces it asking for published research. They did not send me any research; they did, however, sign me up for their emails.


Nielsen, J. A., Zielinski, B. A., Ferguson, M. A., Lainhart, J. E., & Anderson, J. S. (2013). An evaluation of the left-brain vs. right-brain hypothesis with resting state functional connectivity magnetic resonance imaging. PLoS ONE, 8(8), e71275.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

You Should Not (or Should) Let Your Students Take Pictures of Slides
Andrew Watson

Back in October, I wrote a blog post about a surprise: it turns out that students REMEMBER STUFF BETTER when they take photos of lecture slides.

For several reasons — including common sense — I would have predicted the opposite. In fact, so did the researchers (led by Dr. Annie Ditta) who arrived at this conclusion.

But when Team Ditta ran their study and crunched their numbers, they found that slide photos improved students’ recall.


Having written that pro-photo blog post, I was genuinely alarmed to see a tweet from Prof. Dan Willingham — one of the greats in this field. He describes taking photos as “a terrible way to take notes.”

And Dr. Willingham should know. He’s just written a book focusing on study strategies — including note-taking.

What’s going on here? Have I given you terrible advice?

It turns out: Professor Willingham’s advice derives from this study, published in 2021 by Wong and Lim.

My blog post came from the Ditta study, published in 2022.

How do we explain — and choose between — studies that ask the same question and arrive at entirely different answers?

Untangling the Knot

Step 1: don’t panic.

It might seem that contradictory results explode the field of psychology. If THIS study shows “yes” and THAT study shows “no,” then the whole enterprise looks foolish and broken.

But here’s the thing:

Psychology is complicated.

Teaching and learning are complicated.

PEOPLE are complicated.

When psychology researchers study people who are teaching and learning, they’re studying FANTASTICALLY complicated topics.

For that reason, psychology researchers regularly produce contradictory results. That’s just how they roll.

And for that reason, no one study answers a question for good. To quote Dr. Willingham once again: “One study is just one study, folks.”

We should look not for one study to answer a question definitively, but for clusters of studies to point in a consistent direction.

If 10 studies show YES, and 2 studies show NO, and 2 more show CONFUSION — well then, “yes” strikes me as a plausible conclusion. (At least for now.)
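
If it helps to see that tallying spelled out, here’s a toy sketch in code. (The study labels and the 60% cutoff are my own inventions for illustration; real research synthesis weighs study quality, size, and design, not just a head count.)

```python
# A toy version of the "cluster of studies" heuristic described above.
# The study labels and the 60% threshold are invented for illustration.
from collections import Counter

findings = ["yes"] * 10 + ["no"] * 2 + ["confusion"] * 2

def plausible_conclusion(results: list[str], threshold: float = 0.6) -> str:
    """Return the majority direction if it clears the threshold, else 'unclear'."""
    label, count = Counter(results).most_common(1)[0]
    return label if count / len(results) >= threshold else "unclear"

print(plausible_conclusion(findings))  # -> "yes" (10 of 14 studies, roughly 71%)
```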

Start Here

How can we know if most researchers have arrived at Wong’s 2021 conclusion (“photos = bad”) or at Ditta’s 2022 conclusion (“photos = good”)?

Step 2: Get curious.

Replace advocacy (“I know for sure that photos are good/bad!”) with curiosity (“I wonder what I’ll find? This should be fun…”)

For my curiosity projects, I rely on three websites: scite.ai, connectedpapers.com, and elicit.org. *

They all have different approaches and yield different kinds of results. And, they all help answer the question: “do we yet have a cluster of studies that mostly point to the same conclusion?”

So, what did I find when I asked those resources about the Wong (“photos = bad”) study?

When I looked on connectedpapers.com … it identified exactly ZERO other studies that asked questions about taking photos of lecture slides.

When I asked elicit.org a question on the topic … it came up with nothing.

Scite.ai did identify one other study responding to Wong. Sure enough, it’s the Ditta study: “photos = good.”

So, unless I’m missing something, we just don’t have much research on this topic. We can’t know where a “cluster of studies” might point because we don’t have anything remotely like a cluster.

Getting Specific

We’ve got at least one more research avenue to pursue:

Step 3: explore the boundaries.

Let’s imagine for a minute that Wong did her study with 3rd graders, and found that photos = bad; and (still imagining), Ditta did her study with college students, and found that photos = good.

In that case, we could reasonably imagine that they got different results because they studied participants in different grades.

Or (more imagining) maybe Wong studied photos of slides during a music class, and Ditta studied photos during an art history class.

Here again we could make a reasonable guess: slide photos will help in some disciplines (art!) but not others (music).

Researchers call these “boundary conditions”: as in, “this finding applies to people within these boundaries, but not outside them.”

Potential examples: a conclusion applies to …

… math class but not history class, or

… a Montessori school but not a military academy, or

… dyslexic students but not neurotypical readers, or

… Icelandic culture but not Brazilian culture.

You get the idea.

When we look at Wong’s and Ditta’s studies, however, we find they’re very similar. Adults watch short-ish videos, and do (or don’t) take photos or notes.

The studies differ slightly — Wong looks at mind wandering as an important variable, for instance — but not enough to draw strong conclusions.

At this point, neither our online resources nor our exploration of boundary conditions gives us any reason to prefer one study to the other.

End at the Beginning

No matter how the journey goes up to this point, we always end with …

Step 4: Look to your experience, and your colleagues.

In other words: we teachers should be curious (step 2) and informed (step 3). And, we always ultimately rely on our own judgement.

In this case — in my view — we simply don’t have a good research consensus to push us strongly one way or another. So, relying on my experience, here’s the policy I would follow with my 10th grade English students:

You may take pictures of photos or complex diagrams — anything that would be hard to put into words.

However, if you can put the material into words, I’m going to ask you to do so.

Why?

Because the more time you spend processing the information, the likelier it is you will understand and remember it.

This policy would, of course, have nuances and exceptions. (E.g.: dysgraphic students shouldn’t have to write as much.)

I want to emphasize, however, that your policy needn’t resemble my policy.

If you teach different kinds of students, or teach in a photo-friendly discipline (art history!), or if your experience tells you something else…you should follow your own wisdom.

TL;DR

Should students take photos of slides as a way to remember the material?

At present, research on the topic is too sparse to answer that question — ESPECIALLY because the existing studies contradict one another.

Instead, we should rely on our research-informed judgement.


* As I’ve written elsewhere, I would not use ChatGPT for this kind of inquiry. In my first forays into that world, the website simply MADE UP citations. Ugh.


Ditta, A. S., Soares, J. S., & Storm, B. C. (2022). What happens to memory for lecture content when students take photos of the lecture slides? Journal of Applied Research in Memory and Cognition.

Wong, S. S. H., & Lim, S. W. H. (2021). Take notes, not photos: Mind-wandering mediates the impact of note-taking strategies on video-recorded lecture learning performance. Journal of Experimental Psychology: Applied.

Beware the Experts: The Danger of Popular Science Writing
Andrew Watson

Here’s a little expert advice on nutrition:

Michael Phelps — the most decorated Olympic athlete in any sport ever — obviously had to take EXCELLENT care of his body. He thought A LOT about fitness and nutrition.

While he was training for the Olympics, he ate roughly 10,000 calories a day.

So: if I want to attain peak fitness, I too should eat 10,000 calories a day.

If it’s good enough for Olympic medal winners, it’s good enough for me.

Wait a minute. [insert sound of record scratch]

That’s terrible advice.

10,000 calories per day might have been a good idea for Phelps. However — physically speaking — he and I have very little in common.

During his Olympic career, Phelps was in his teens and 20s.  I’m 57.

He was in peak human physical condition. I am — well — in very average physical condition.

He (I assume) undertook ferociously vigorous physical exercise — and burned calories — most of the day. I spend much of my day sitting here writing blog posts.

Basing my nutritional plan on Phelps’s example just makes no sense.

Simply put: stories of extreme human performance fascinate us. Alas, they rarely produce useful models for everyday life — or for teaching.

Danger, Will Robinson

That last paragraph, sadly, creates real problems for popular science writers.

In my experience, their formula goes something like this:

“Here’s a fascinating story about something EXTRAORDINARY that happened.

Now that I’ve got your attention, notice this AMAZING X FACTOR in my story.

Here’s some wonky research roughly related to Amazing X.

You should enact Amazing X in your life, too.”

Whether the extraordinary story focuses on burning planes or impossible inventions or heroic feats, those stories — we’re asked to believe — all have something to tell us about improving our lives.


But if it’s true, as I wrote above, that “stories of extreme human performance rarely produce useful models for teaching,” then the narrative structure above invites — heck, demands — our skepticism.

Amazing X might benefit extraordinary folks in outlier conditions. But, by definition, few of us teach in outlier conditions. Amazing X just won’t help us much. It might, in fact, be a very bad idea in our classrooms. (10,000 calories, anyone?)

Don’t Start Here

You have, perhaps, heard the story of the Mann Gulch Fire. (If not, you should check it out. It’s an AMAZING story.)

In brief:

Back in 1949, a group of trained “smoke jumpers” battled a wildfire that was burning toward the Missouri River. The fire abruptly turned towards them, and they realized they were trapped … and likely doomed.

In an instant, the group’s leader — “Wag” Dodge — came up with an astonishing solution. He set his own fire, and then stepped into its “shadow”: the area that his fire had burned clear. The wildfire burned around him — but not over the area that his fire had scorched.

Sadly, none of his men followed him into the shadow. Two other men outran the fire; the rest died.

This story appears in more than one book I know. The message: we want our students to think the way Dodge thought. We want them to be creative thinkers, who can come up with novel solutions to important problems.

I agree with those goals. I want my students to be able to think for themselves, and think past the knowledge that I have.

However: Dodge’s example tells us exactly nothing about helping students develop that capacity.

Dodge was a highly experienced smokejumper. And his life was in immediate danger.

Our students are not highly experienced in the topic we’re teaching them. (If they were, we wouldn’t need to be teaching them.) And — except in very rare circumstances — they don’t face immediate peril.

Dodge’s thought process, in other words, has almost nothing to do with our students’ thinking. Until they know as much as Dodge knew, and have roughly as much experience as he had, we should have no expectation that they can “think the way he thought.”

We shouldn’t use his example to inform our work — even if it’s a great story.

Familiar Problems

Another example, from another popular science book:

Dr. K reads X-rays for a living. He found that he got bored and tired as the day progressed. He worried — reasonably enough — that he was also getting sloppy.

So, he installed a “walking desk” in his office. He walked at a moderate pace as he read the X-rays, and felt much more alert and perceptive.

Dr. K wondered: does this technique benefit others?

He ran a study, and — sure enough!! — found that Doctors Who Walked spotted suspicious masses more often than Doctors Who Sat.

Clearly, walking is good for thinking. Therefore, teachers should have students walk as they learn.

Please insert a second [record scratch] here.

Once again: a great story about experts doesn’t meaningfully apply to the work we do in schools.

Doctors who read X-rays are highly trained experts. They’ve been in school for roughly two decades.

And: reading X-rays is a perception task.

If walking helps highly trained experts stay alert enough to perceive patterns better, we can ask if walking helps students learn better.

But both the people involved (experts vs. novices) and the cognitive task (perceiving established patterns vs. learning new patterns) are meaningfully different.

We really need research looking at this question directly before we make strong recommendations.

Based on the research I know — and my experience as a classroom teacher:

Yes: exercise is good for the body, and good for the brain.

Yes: physical activity before learning provides lots of benefits. (Link)

No: physical activity during learning hasn’t been studied much. (Link)

And: based on my classroom experience, walking my students around outside while trying to discuss Macbeth with them seems like a deeply bad idea.*

Dr. K’s treadmill might help him and his colleagues; I don’t think it does much of anything for teachers and students.

TL;DR

When reading popular science books that include teaching advice, be aware:

The stories about extraordinary people doing extraordinary things fascinate and compel us.

However:

Before we make changes to our teaching practice, we should see research that looks at students like ours studying a topic like ours.

If we don’t, we’ll end up doing the teaching equivalent of eating 10,000 calories a day.


* Yes, of course, if students are studying something that is in fact outside, it makes sense to go outside and look at it.

For instance: when I taught Where The Crawdads Sing — a book that relies heavily on the symbolism of marshes and swamps — I took my class out to see the marshes on school property.

I’m not saying: never take students for a walk. I am saying: do so with a very specific pedagogical purpose in mind.

Think, Pair, Share: Does It Help? If Yes, Why?
Andrew Watson

On some days, I find myself drawn to esoteric research studies.

A few months ago, for example, I wrote about the effect of earworms on sleep. (Yes, scholars really do research earworms.)

Today, I’ve found as straightforwardly practical a study as I’ve seen.

Teachers everywhere have been encouraged to have students “think, pair, and share.”

That is:

I ask my students a question: “what are the metaphors in this poem?”

We all pause for several seconds, so that students can think about their individual answers.

Students then pair up: “okay, everyone, chat with the person next to you about your answers.”

Finally, I ask students to share their thoughts: “who has spotted a metaphor they want to discuss?”

Voila: they thought, they paired up, they shared.

The Bigger Picture

In truth, LOTS of classroom strategies have such popular currency that we don’t really think to question them.

Is it a good idea to have students write answers on the board? (I’ve never thought to ask; after all, ALL teachers have students write on the board.)

Should I really do those handshake dances at the door? (My colleagues LOVE a good handshake dance.)

College professors everywhere are banning laptops, because handwritten notes are superior. (Aren’t they? Like, obviously?)

In other words, we don’t need to have a research basis for absolutely everything we do in the classroom.

At the same time, when a scholar does explore my classroom practice in a research-y way, I ought to be at least a little curious about the results. (If you clicked on those links above, you’ll notice that our teacherly instincts might be wrong…)

So, what happens when researchers turn to “Think, Pair, Share”?

Will our beloved habit get the research seal of approval? Or, do we need to rethink this standard practice…?

Worth Checking

Researchers in Erfurt, Germany — led by Lukas Mundelsee — undertook a straightforward study with 9th graders.

Researchers introduced students to a topic, and then asked questions.

In some cases, they just asked students to raise their hands (SHARE only).

In some cases, students THOUGHT individually, and then SHARED.

And, of course, they sometimes THOUGHT and PAIRED and SHARED.

Finally, the researchers measured other useful variables — for instance, how shy do students report themselves to be?

So, what do you think Team Mundelsee found?

Sure enough, “think, pair, share” led to more handraising than “share” alone.

And, in particular, this strategy helped students who reported higher levels of shyness.

In other words: researchers got the result that (I suspect) most of us predicted. And, the underlying explanation makes sense.

If I’m shy, I don’t want to get the answer wrong in public. But if I can pair up to discuss my answer first, then I’m less worried about my potential wrongness.

Beyond the Research

Since “sharing” is under discussion here, I’ll share my own approach to “think, pair, share.”

When it comes to the final step — “who wants to share with the group” — I myself cold call.

That is, I don’t ask for someone to volunteer; I call on a student at random.

Now, I should be clear:

First: Mundelsee’s research does NOT investigate this approach. He’s looking at voluntary sharing.

Second: “cold-calling” does generate some controversy. Some folks consider it stress-inducing, even (brace yourself) ‘carceral.’

Now, I don’t doubt that cold-calling can be done badly. (If pizza can be bad, anything can be bad.)

But I explain my thinking to my students at the beginning of the year, and they seem to get in the groove fairly easily.

In this case, I worry that students need a little incentive to think. After all, if the student knows s/he has a pair coming up, then s/he can simply use the other student’s idea as a share.

ESPECIALLY because students have time to test-drive their ideas when they pair, I think cold-calling should be low stakes enough for them to feel plenty comfortable sharing.

Of course, my classroom climate might not match yours; teachers always adapt and balance to get their teaching strategies just right.

TL;DR

Yes, “think, pair, share” helps students feel comfortable sharing.

And, yes, it does so by reducing anxiety.


Hat tip to Alex Quigley, who first pointed out this study to me.


Mundelsee, L., & Jurkowski, S. (2021). Think and pair before share: Effects of collaboration on students’ in-class participation. Learning and Individual Differences, 88, 102015.

Cultural Field Trips: Do They Really Enhance SEL?
Andrew Watson
Andrew Watson
Here at Learning and the Brain, we like research-informed teaching suggestions.

At the same time, we remember Prof. Dan Willingham’s timeless motto: “one study is just one study, folks.”


That is: one study might show a particular conclusion – but one study isn’t entirely persuasive.

Instead, we’d like SEVERAL studies looking at roughly the same question. If those studies all point the same direction, then we can feel increasingly confident that this conclusion has merit.

For instance:

Several years ago I blogged about an impressive study by Dr. Jay Greene. It suggested …

… that attendance at live theater improved students’ knowledge of the play (not a big surprise)

… and, it improved students’ tolerance (surprise!)

… and, it enhanced their desire to see the world from another’s perspective.

Because I REALLY WANT those statements to be true, I looked at this research with extra skepticism. But I was ultimately persuaded that the study design made lots of sense.

Of course, “one study is just one study.”

It would be GREAT to see more research on this topic. Well, I have good news…

Take Two

I’ve recently come across a second study looking at the benefits of live theater — this one led by Angela Watson (no relation that I know of).*

As is often the case, this research team looked at some of the same questions, and some additional questions as well.

So, they asked:

Does live theater provide benefit for students? How about art museums? How about symphonies?

Do these cultural field trips benefit students in different demographic groups?

The Greene study looked at tolerance and “social perspective taking” – the desire to see the world from another’s perspective. Watson’s research also asked questions about conscientiousness and empathy.

So, what did they find?

Methods and Conclusions

As you can imagine, these questions require lots of logistical complexity.

We can study – say – “retrieval practice” fairly easily in a classroom. But, schlepping large groups of students to plays, museums, and symphonies takes a lot more coordination, planning, and cooperation.

Basically, these researchers had students attend different numbers of plays, symphonies, and museums: as few as one trip, or as many as six.

And, they had students fill out questionnaires. And, they followed up for two years!

So, do additional “cultural” field trips make a difference for these students?

As is so often the case, the answer is both “yes” and “no.”

So, for instance, attending the theater/symphony/museum did NOT make students more interested in going back.

And, contrary to Greene’s study, it did NOT make students more “empathetic.”

Specifically, students responded to statements like these: “After seeing a play or a movie, I have felt as though I were one of the characters.”

Their responses did not vary depending on the number of field trips they went on.

At the same time, Team Watson did find some benefits.

Attending field trips did increase “social perspective taking.”

That is: students were asked questions like “When you are angry at someone, how often do you try to ‘put yourself in his or her shoes’?”

Students who attended more field trips put on those shoes more often.

Also, female students — although not males — demonstrated higher levels of conscientiousness after participating in more cultural field trips. (Alas, this change didn’t last as long as the others.)

The news isn’t all good, but it isn’t all bad either.

What About Tolerance?

The case of “tolerance” is especially tricky.

When Watson & Co. ran the numbers, these field trips didn’t affect the students’ answers to questions like:

“Some people have views you oppose very strongly. Do you agree that these people should be allowed to come to your school and give a speech?”

So, strictly speaking, field trips didn’t increase tolerance.

However, Watson speculates that the wording of these questions was especially complicated. And, she notes that the students in this cohort had relatively low reading scores.

So, perhaps, students really did feel more tolerant, but the question’s complexity masked that change.

Watson supports this hypothesis by noticing that the students at the high end of reading comprehension DID have higher tolerance scores.

In other words: the students who understood the question better expressed higher levels of tolerance — and their answers might reflect a truer picture of the field trips’ effects.

TL;DR

We now have even more research showing some SEL benefits for field trips to cultural events.

These trips…

… consistently help students “put themselves in someone else’s shoes,”

… might increase students’ tolerance of others’ opinions,

… enhance girls’ conscientiousness (albeit temporarily).

We have less certain evidence that field trips enhance empathy, or that they encourage students to attend more cultural events.

Equally important: having MORE studies on one topic allows us to contemplate these SEL benefits with greater nuance and sophistication.


* To be precise, Watson’s study is a “working paper.” It hasn’t been peer reviewed in an academic journal.


Watson, A., Greene, J., Holmes Erickson, H., & Beck, M. I. (2019). Altered attitudes and actions: Social-emotional effects of multiple arts field trips.

Should We Teach Math and English the Same Way?
Andrew Watson

Because we teachers are a busy lot, we sometimes want simplicity and clarity:

I’m honestly too busy to sort through all the options and variables; just tell me what to do.

In fact, when I went to my first Learning and the Brain conference in 2008, that was exactly my plan.

The researchers would tell me what to do.

I would do it.

VOILA: brain-based teaching.

The more time I spend in this field, the more I doubt that logical chain. In fact, I frequently warn people against that kind of thinking.


My regular mantra:

“Don’t just DO this thing. Instead, THINK this way.”

In other words, we teachers always have to translate research-based advice to our own context.

Today’s News

I recently came across a study looking at twelve different instructional activities in English and math classrooms.

In this study, “instructional activities” include …

… open discussion among teacher and students,

… use of whiteboard by teacher

… students working in groups

… one-to-one teaching

… students copying from the whiteboard

And so forth.

The research team asked: do students benefit from the same instructional activities in both disciplines?

To answer this question, researchers had observers keep track of teachers’ instructional activities. These observers — during more than 2500 visits! — recorded whether teachers did these 12 activities “none or very little,” “some of the time,” or “most or all of the time.”

And, they then looked at the students’ scores on national exams in English and math. (This study was done in England, where most students take the GCSE when they’re 16 years old.)

So, what did they find when they put all those pieces together?

Fascinating Results

First, this research team found that teachers do different things:

Some teachers spend much of class time using traditional direct instruction, including lecturing and the use of textbooks, while other teachers devote more class time to students working with their classmates or individual practice.

For instance: one third of teachers use “open discussion” most or all of the time, but one quarter don’t do so at all.

Second, those different instructional activities matter.

In math classes, students benefit from a) practicing on their own, and b) teachers’ checking for understanding.

Students who engage in these activities “all or most of the time” score significantly higher than those who do so “some of the time.” (In this case, “significantly higher” is a bit hard to describe. Probably the easiest way to say this is: both statisticians and the students themselves would notice the difference.)
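
If you want a number for “noticeable,” one common yardstick is Cohen’s d: the gap between two group means, measured in units of their pooled standard deviation. Here’s a minimal sketch with invented scores (the study reports its results differently; this is just the standard formula):

```python
# Cohen's d: one common way to quantify how "noticeable" a difference is.
# The exam scores below are invented for illustration, NOT from the study.
import statistics

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Standardized mean difference using the pooled sample standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

all_or_most = [72, 75, 78, 80, 83]      # hypothetical exam scores
some_of_the_time = [65, 68, 70, 73, 74]

# Rules of thumb: d of 0.2 is "small," 0.5 "medium," 0.8 "large."
print(round(cohens_d(all_or_most, some_of_the_time), 2))  # -> 1.91
```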

In English classes, however, students benefit from working and talking with each other (and the teacher).

So, to answer the question in this post’s title: at least according to this study, we shouldn’t teach all disciplines in the same way.

What This Finding DOES Mean

If you teach math or English to high-school students in England, I think you should give this study a careful look to guide your classroom practice.

That is: I’ve given an introduction — but the study includes A LOT more information that could be practically helpful to you.

Even more important:

If you don’t fit in that teaching category, this study means that research-based teaching advice always requires translation and adaptation.

Students benefit from different instructional activities in math and English. And, presumably, in other disciplines as well.

That is: you might go to a conference session that highlights the importance of mind-maps. (I’m picking this example at random.) That session shows research about its effectiveness in helping students learn.

However, this study clearly reminds us that we might need to adapt that advice to our own classrooms.

High-school English teachers might have students create mind-maps together; remember, students benefit from “working and talking together.”

High-school math teachers might have them create mind-maps solo; students benefit from “working on their own.”

More generally, this study might prompt you to ask some direct questions during that mind-map session. Was the research done with students in different grades? In different school or community cultures? Studying different topics? With diagnosed learning differences?

In other words: this specific research finding reminds us of a general lesson. We should be curious about and open to research-based suggestions. AND, we should check and be sure the research aligns with our teaching context before we make drastic changes.

What This Finding DOESN’T Mean

The flipside of my last point is: this research should encourage you to adapt your teaching practices only if your classrooms look like these classrooms.

Do you teach history? This research might not (or might) talk directly to you.

Do you teach second grade? Ditto.

Perhaps you teach in a different cultural context — say, Korea, Cairo, or Kansas.

Perhaps your school has a specific teaching philosophy (a Montessori school; a military academy) that rules out these approaches.

In other words: don’t just DO what this research tells you to do. THINK about your teaching practice with these ideas in mind — and see if they fit and make sense in your world.


Burgess, S. M., Rawal, S., & Taylor, E. S. (2022). Teachers’ use of class time and student achievement (No. w30686). National Bureau of Economic Research.

When Prior Knowledge Bites Back: The Dangers of Knowing Too Much
Andrew Watson

In this blog, we typically highlight the benefits of prior knowledge.

For example: if a student knows a lot about baseball, she’ll be much more successful in understanding a reading passage about baseball.


That same student could struggle mightily with a passage about cricket. What’s an “over”? A “wicket”? A “badger”?

In the world of cognitive load theory, prior knowledge helps because it reduces working memory load.

An expert knows relevant definitions, concepts, procedures – and the relationships among them.

And because experts have all that knowledge in long-term memory, they don’t need to noodle it around as much in working memory.

The teaching implications of this insight:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

NB: This second insight has important implications for many project pedagogies.

This conclusion is well settled in cognitive load theory. But: is it always true?

Is it possible that prior knowledge might increase working memory load? Could it make thinking and problem solving more difficult?

Thinking the Unthinkable

Here’s a question:

“To mitigate the effects of climate change, would it be a good idea to plant more Douglas fir, oak, and beech trees in the Black Forest?”

I know a bit about climate change, and a bit about trees, and I’m generally inclined to say “yes.” Because I’m a novice – that is, I don’t have lots of prior knowledge on these topics – the question strikes me as straightforward.

However, if I were an expert, I might draw on my prior knowledge to see additional complexities in the question.

For instance…

…those trees might be vulnerable to particular diseases or pests,

…they might harm the ecosystem in the Black Forest,

…they might – paradoxically – do some tree thing or another that would ultimately exacerbate climate change rather than mitigate it.

In this case, an expert’s prior knowledge could introduce complicating variables – and thereby increase working memory load.

A research team, made up of scholars from Germany and Australia*, tested this hypothesis.

As you would expect, they asked forestry experts and forestry non-experts to consider (roughly) the tree-planting question above.

The experts considered the question more complicated than the novices did. That is: they said that it required more thought, more simultaneous contemplation of variables, and more complex thinking.

And – here’s the kicker – their answers weren’t any better than the novices’ answers.

In Other Words

Putting all these pieces together…

Forestry experts’ higher level of prior knowledge increased their perception of the problem’s complexity;

It did so (probably) because they thought of additional variables not included in the question;

These additional variables increased working memory load;

Because of additional strain on working memory, these experts didn’t benefit from their prior knowledge – and didn’t answer the question more effectively than novices.

Wow.

The research team then went ahead and tested this same idea with 4th graders in Australia.

As often happens in research, the details get complicated. The headline is: when they tested a classroom analog of the same problem, they got somewhat similar results.

Students with higher levels of prior knowledge DID perceive the cognitive load to be higher.

However, when those students solved problems, they scored higher than when they did not have relevant prior knowledge. (Remember: for the forestry example, higher cognitive load eliminated the experts’ advantage in solving the problem.)

In other words: the potential dangers of prior knowledge do show up in the classroom, not just in abstract research exercises.

Teaching Implications, Take II

Above I wrote:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

Based on this study, I think we should add another implication:

Third: stop and consider – how might a student’s expert prior knowledge interfere with their learning of this concept? What other concepts or procedures might they draw into a question in ways that unhelpfully complicate their thinking?

At this point, I don’t think we have enough research into the dangers of prior knowledge to have refined or thorough guidance in answer to those new questions.

I do think, however, we should get in the habit of asking them.

TL;DR

Typically, prior knowledge benefits students by reducing working memory load.

Therefore, typically, we should ensure they have relevant prior knowledge before starting a topic.

In some cases – according to this research – prior knowledge can complicate thinking when experts bring in too many ideas from their knowledge base.

In these cases, we should be sure to think through those potential dangers, and head them off as best we can.

And: we should follow this research pool. It’s an intriguing topic!


* One of the researchers here is none other than Ollie Lovell, who wrote an EXCELLENT book on Cognitive Load Theory for teachers. You can read my review here.


Endres, T., Lovell, O., Morkunas, D., Rieß, W., & Renkl, A. (2022). Can prior knowledge increase task complexity? – Cases in which higher prior knowledge leads to higher intrinsic cognitive load. British Journal of Educational Psychology.

Practical Advice for Students: How to Make Good Flashcards
Andrew Watson

Flashcards feel to me like a research sweet-spot.

In the first place: for the most part, students believe that they help — and are even willing to make them!

In the second place: flashcards should help. After all, flashcards promote retrieval practice. And as you know, research shows that retrieval practice really helps students learn.

So, if we can find specific research about flashcards, it should be especially useful in our work.


It would be even better if one of the researchers were Mark McDaniel — who co-authored make it stick: one of the great books on memory research for teachers.

If you agree with me on these points, I’ve got some good news for you today!

Starting with Questions

Far and away the most common flashcard question I hear is: “does it matter if students make the flashcards themselves?”

The logic behind this question makes sense. When students think about the material in order to make good flashcards, then that thought might promote learning.

In other words: flashcard making isn’t just the bad kind of “active learning” (students are BUSY!) but the good kind of “active learning” (students are THINKING!).

I have two doubts about this thought process.

First: students might not know enough to make good flashcards.

If their cards prompt them to recall relatively unimportant ideas and processes, then the subsequent retrieval practice won’t really help.

Second: making flashcards takes time.

If students have access to good flashcards — ones that highlight the core concepts, procedures, and facts — then studying with those cards will (perhaps) be more efficient than taking time to make their own.

Two other questions also suggest themselves:

What kind of questions should be on the flashcards?

Most students make detail flashcards. That is: flashcards that focus on facts, definitions, dates, and so forth.

They might also — or instead — make “conceptual” flashcards. That is: flashcards that combine details in compare/contrast patterns, or process steps.*

Question #3:

Do flashcards help some students more than others?

The good news: a recent study explores all those questions.

The First Question

Researchers started with a straightforward experiment. They had students read textbook passages – one about biological anthropology, the other about geology – and then study for a quiz.

The students were divided into four groups, based on how they studied:

Group A studied however they chose.

Group B received 12 flashcards prepared by the textbook makers.

Group C made their own 12 flashcards. They didn’t get any instructions about them.

Group D made their own 12 flashcards. They did get special instructions: “include 4 ‘conceptual’ questions” – that is, questions that compare/contrast, or that highlight several steps in a process.

Let’s return to the first question I asked above: did the students who made their own flashcards learn more than those who studied with pre-made flashcards?

Nope.

That is: students in Groups C & D (who made their own cards) did NOT learn more than those in Group B (who received pre-made flashcards).

Even worse: they DID spend more time.

So, at least in this experiment, asking students to make their own flashcards just isn’t very efficient. They DO spend more time, but DON’T learn more. A bad combination.

Okay, but what about the second question I asked above?

Did the students who made “conceptual” flashcards learn more than those who got no special instructions?

Again, Nope.

Students in Group C — who got no special instructions — mostly made “detail” flashcards. Students in Group D — who were instructed to make 4 “conceptual” flashcards – basically followed those instructions; they made 4 “conceptual” and 8 detail flashcards.

But: both groups spent the same amount of time, and got the same score on the quiz.

Digging Deeper

This research team had hypothesized that the “conceptual” flashcards would benefit learning, and were therefore surprised by the results of this first experiment.

However, they quickly saw a plausible explanation.

C-Group students – who got no instructions – made 12 flashcards. On average, 10 of them were detail flashcards, and the other 2 were “conceptual.”

D-Group students – instructed to make 4 conceptual flashcards – did so: 8 detail cards and 4 “concept” cards.

So you can see, not much of a difference there.

In their next experiment, these researchers doubled down on these two different strategies.

Two new groups of students read the same two passages.

Group E received detail-only flashcards.

Group F received “conceptual”-only flashcards.

Now is there a difference?

By George I think they’ve got it.

Sure enough, when they make up a large enough share of the deck, “conceptual” flashcards do help students learn more.

Now We’re Getting Somewhere

Based on these first two experiments, I think we have some useful answers to our initial questions:

First: at least so far, we don’t have good evidence that students learn more when they make their own flashcards. Alas, this strategy seems inefficient, based on experiment #1.

Second: conceptual flashcards do help students more than detail flashcards, as long as there are enough of them.

I do think this second conclusion requires further nuance.

In the first place, some disciplines really do require lots of detail knowledge. When I studied Czech, I had (literally) HUNDREDS of vocabulary flashcards. Other fields might require a similarly detail-heavy flashcard deck.

In the second place, I do think K-12 students might need detail flashcards more than college students. After all, college students already know more details than younger students do – especially at the highly selective college where this study was performed.

Finally, the distinction between “detail” and “conceptual” might be overdrawn. Here’s a technique I’ve used in my own work.

Step 1: ask a student to draw two vocabulary flashcards, and to define those words.

In my English class, the student might define the words “protagonist” and “sympathetic.”

Step 2: ask the student to make a connection between the two words.

Now the student might say: “Well, a protagonist is typically a sympathetic character – like Janie Mae Crawford. But not always: Macbeth certainly is the protagonist, and certainly isn’t a sympathetic character.”

With this technique, two “detail” flashcards combine to require “conceptual” thinking – at least as defined by the researchers.

TL;DR

As long as we allow for nuance, and the possibility that future research will invite us to rethink these conclusions, this study suggests:

A: Students don’t benefit from making their own flashcards – as long as we provide them with good ones, and

B: Students DO benefit from flashcards that ask them to combine and organize information, not simply recall free-standing facts.

These suggestions – and others that have good research support – give us useful pointers to pass along to our students.


A Final Note:

Eagle-eyed readers will have noticed that I didn’t answer my third question: “do flashcards benefit some students more than others?”

This study does point to an answer, but … I don’t fully understand it.

In brief, “high structure-building” students don’t benefit as much from conceptual flashcards, because they already do better than “low structure-building” students – who DO benefit from conceptual flashcards.

Sadly, I don’t understand exactly what “high and low structure-building” means here. Those words sound like a proxy for “high scoring” and “low scoring,” but not exactly. Rather than pretend I know, I’m simply fessing up that there’s an extra variable here.

If you figure it out, please let me know!


* The authors of the study I’m describing acknowledge that this definition of “conceptual” is incomplete. They’re using that word as a handy shorthand for “flashcards that go beyond single details.”  In this blog post, I put the word “conceptual” in quotation marks to highlight the narrow specificity of their definition.


Lin, C., McDaniel, M. A., & Miyatsu, T. (2018). Effects of flashcards on learning authentic materials: The role of detailed versus conceptual flashcards and individual differences in structure-building ability. Journal of Applied Research in Memory and Cognition, 7(4), 529-539.

ChatGPT and Beyond: The Best Online Resources for Evaluating Research Claims
Andrew Watson

If we’re going to make “research-based” claims about education — and we are! — then we better have the research goods.

So, how can we at Learning and the Brain — and you as teachers and administrators — find and evaluate research?

For a few years now, I’ve relied on two great sources.

Scite.ai helps me see if a particular study is an outlier, or in alignment with other research findings.

Connectedpapers.com shows a spider web of research studies on any given topic.

They’re great, and you should check them both out. (I’m on those sites several times a week.)

But, we’ve heard A LOT about AI in recent weeks. Can we rely on newer websites to help us out?

I spent an hour or so on Monday playing around on two sites; here’s what I found.

The Inevitable ChatGPT

I won’t rehash all the Twitter agita on this subject, but gosh ChatGPT has gotten lots of attention. (Twitter tells me it’s either the savior of education, the end of education, a deranged stalker of New York Times reporters, or a nothingburger.)

Can it help us explore research into education?

To answer this question, I asked ChatGPT to describe “schema theory.” Here are the first two paragraphs of its answer:

“Schema theory” is a cognitive psychology theory that describes how people understand, organize, and store information about the world. The theory suggests that people use mental frameworks or “schemas” to categorize information and simplify their understanding of the world around them.

A schema can be thought of as a mental structure that represents knowledge about a particular category of information, such as objects, events, or social interactions. For example, a person might have a schema for a “restaurant” that includes information about the typical layout, menu items, and social norms associated with dining out.

So far, so good. (The restaurant example of schema is quite common.)

I asked it to dig a bit deeper, and to identify problems or limitations with schema theory. Again, I got a solid set of answers. For instance, a limitation:

Overgeneralization: Schemas can sometimes lead to overgeneralization, causing people to perceive things in a stereotypical or biased manner. For example, people with a schema of a “criminal” may unconsciously associate criminal behavior with certain groups.

When I asked it for research supporting schema theory, its first three answers were several decades old, so I asked it for research from the last 20 years.

Of the three it listed, this one looked most promising:

Lee, J. J., & Anderson, N. (2015). The role of schema in the interpretation of ambiguous scenes. Memory & Cognition, 43(8), 1186-1197.

But, strangely, when I tried to find that study on Google Scholar, it didn’t seem to exist.

Stranger still, when I went to the website for the journal Memory and Cognition, the article wasn’t there either. (To be clear: the journal exists; the article doesn’t exist in it.)

You’re not going to believe this, but it seems that ChatGPT simply MADE UP a research study supporting schema theory.
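
By the way, you can automate this kind of spot-check. Here’s a minimal sketch (my illustration, not a tool mentioned in this post) that asks Crossref’s free public API for the closest real publications matching a citation. It assumes Python with the requests library, and a human still has to eyeball whether any candidate actually matches:

```python
# Sanity-check a citation against the public Crossref API.
# Crossref returns its closest bibliographic matches; if none of them
# resemble the reference, that's a strong hint the reference was invented.
import requests

def crossref_candidates(reference: str, rows: int = 5) -> list[str]:
    """Return the closest-matching real publications for a free-text reference."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    candidates = []
    for item in resp.json()["message"]["items"]:
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        title = (item.get("title") or ["(no title)"])[0]
        candidates.append(f"{year}: {title}")
    return candidates

# The citation ChatGPT invented; nothing close should appear in the output.
for hit in crossref_candidates(
    "Lee & Anderson (2015). The role of schema in the interpretation "
    "of ambiguous scenes. Memory & Cognition, 43(8)"
):
    print(hit)
```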

The Chat Gets Awkward

I pointed out to ChatGPT that the source doesn’t exist, and it responded:

I apologize, it seems I made a mistake. The study by Lee and Anderson (2015) that I mentioned in my previous answer does not seem to exist. As a language model, I strive to provide accurate and up-to-date information, but sometimes I may make errors or misunderstand the information given to me. I appreciate your feedback and will take this into account in my future responses.

Okay, I’m not happy about this. But it did fess up to its mistake, and promise to do better.

So, the conversation continued:

Me: Can you identify recent studies — within the last 20 years — that study the cognitive psychology of schema theory specifically for students learning academic topics?

ChatGPT: Yes, there have been recent studies that have investigated the role of schemas in students’ learning of academic topics.* Here are a few examples:

Kail, R. (2017). Schema theory and its implications for the design of learning environments. Review of Educational Research, 87(1), 93-135.

Well, that first one looks promising. But, I don’t want to be fooled twice, so I asked the obvious question:

Me: Does the Kail 2017 study really exist?

ChatGPT: Yes, the Kail (2017) study does exist. It was published in the Review of Educational Research, a highly respected peer-reviewed journal in the field of education.

Reader, the Kail study does NOT exist! ChatGPT just MADE UP A SECOND STUDY. And, having made it up, it then stuck by its (untrue) story.

I won’t go through all the iterations of this journey in detail.

But, yes, I asked about a third study. Yes, ChatGPT assured me it exists. No, it really doesn’t.

So, at this point, I’m kinda done. Any artificial intelligence that simply conjures sources out of thin air… I won’t be relying on it anytime soon.

If only there were another option…

Another Option!

At about this time, my friend Rob McEntarffer asked me if I’d tried Elicit.org. (Important note: bad things might happen if you type “illicit” instead of “elicit.” I haven’t checked, but: be careful out there.)

Rob is wise in the ways of MBE, and so I tried it.

At this point, I’m as hopeful about Elicit.org as I am discouraged about ChatGPT.

Elicit asks users to frame fairly specific questions. It then looks for study abstracts that seem relevant to those questions, and reports back.

So, I asked:

“Can schema theory improve classroom instruction?”

I got a list of seven studies. All seven sounded exactly on topic. And — here’s some surprisingly good news — at least four of the studies exist!

I know because I downloaded them and printed them out. They are, in fact, in my “read this right now” pile.

Now, Elicit has a somewhat easier job than ChatGPT, because it’s answering narrow questions about research studies, not broad questions about ANYTHING.

But, I found it easy to use and — at least on my first test drive — reliable and helpful.

TL;DR

If you’re looking for online sources to find and evaluate research-based claims:

Add Elicit.org to scite.ai and connectedpapers.com as useful research resources.

Until it stops making stuff up, avoid ChatGPT.


* Notice, by the way, that ChatGPT got the possessive (“students’ learning”) correct in this answer. That’s an unusual rule — “use the possessive case before gerunds” — and a difficult apostrophe: plural possessive. So, it’s not great with sources, but it knows from grammar and punctuation!

“Seductive Details” meet “Retrieval Practice”: A Match Made in Cognitive Heaven
Andrew Watson

Here’s a common problem: your job today is to teach a boring topic. (You don’t think it’s boring, but your students always complain…)

What’s a teacher to do?

One plausible strategy: You might enliven this topic in some entertaining way.

You’ve got a funny video,

or a clever cartoon,

or a GREAT anecdote about a colleague’s misadventure.

Okay, so this video/cartoon/anecdote isn’t one of today’s learning objectives. BUT: it just might capture your students’ interest and help them pay attention.

However tempting, this strategy does create its own problems. We’ve got lots of research showing that these intriguing-but-off-topic details can get in the way of learning.

That is: students remember the seductive details (as they’re known in the research literature), but less of the actual content we want them to know.

Womp womp.

Some time ago, I wrote about a meta-analysis showing that — yup — seductive details ACTUALLY DO interfere with learning: especially for beginners, especially in shorter lessons.

What could we do to fix this problem? If we can’t use our anecdotes and cartoons, do we just have to bore our students?

“Right-Sized” Retrieval Practice

Here’s one approach we might try: right-sized retrieval practice.

What does “right-sized” mean? Here goes:

One retrieval practice strategy is a brain dump. The instructions sound something like this: “write down everything you remember about today’s grammar lesson.”

Another retrieval practice strategy calls for more specific questions: “what’s the difference between a gerund and a participle?” “How might a participle create a dangling modifier?”

A group of scholars in Germany studied this hypothesis:

If teachers use the brain dump approach, students will remember the seductive detail — and it will become a part of their long-term memory.

If, on the other hand, teachers ask specific questions, students will remember the important ideas of the lesson — and not consolidate memory of the seductive detail.

They ran a straightforward study, considering a topic close to every teacher’s heart: coffee.

100+ college students in Germany read a lengthy passage on coffee: information about the coffee plant, its harvesting, its preparation, and its processing.

Half of them read a version including fun-but-extraneous information. For instance: do you know how coffee was discovered?

Turns out: goat herders noticed that their goats ate the coffee beans and then did a kind of happy dance. Those herders wondered: could we get the same happy effects? Thus was born today’s coffee industry…

Remembering the GOAT

After reading these coffee passages — with or without seductive details — students answered retrieval practice questions.

Some got a “brain dump” prompt: “What do you remember about coffee?”

Others got the specific questions: “What harvesting methods do you remember, and how do they differ?”

So, what effect did those specific questions have on memory of seductive details one week later?

Sure enough, as the researchers had hypothesized, students who answered specific retrieval practice questions remembered MORE of the lesson’s meaningful content.

And, they remembered LESS (actually, NONE) of the seductive details. (Of course, the details get complicated, but this summary captures the main idea.)

BOOM.

So, what’s a classroom teacher to do?

As is so often the case, we should remember that researchers ISOLATE variables and teachers COMBINE variables.

We always have to think about many (many!) topics at once, while research typically tries to find out the importance of exactly one thing.

Putting all these ideas together, I’d recommend the following path:

If I have to teach a topic my students find dull, I can indeed include some seductive details (Ha ha! Goats!) to capture their interest — as long as I conclude that lesson with some highly specific retrieval practice questioning.

And, based on this earlier post on seductive details, this extra step will be especially important if the lesson is short, or the students are beginners with this topic.

TL;DR

Seductive details can capture students’ interest, but also distract them from the important topics of the lesson.

To counteract this problem, teachers should plan for retrieval practice including specific questions — not just a brain dump.


By the way: I first heard about this “retrieval practice vs. seductive details” study from Bradley Busch (Twitter: @BradleyKBusch) and Jade Pearce (Twitter: @PearceMrs). If you’re not familiar with their work, be sure to look them up!


Eitel, A., Endres, T., & Renkl, A. (2022). Specific questions during retrieval practice are better for texts containing seductive details. Applied Cognitive Psychology, 36(5), 996-1008.

Sundararajan, N., & Adesope, O. (2020). Keep it coherent: A meta-analysis of the seductive details effect. Educational Psychology Review, 32(3), 707-734.