Cultural Field Trips: Do They Really Enhance SEL?
Andrew Watson

Here at Learning and the Brain, we like research-informed teaching suggestions.

At the same time, we remember Prof. Dan Willingham’s timeless motto: “one study is just one study, folks.”


That is: one study might show a particular conclusion – but one study isn’t entirely persuasive.

Instead, we’d like SEVERAL studies looking at roughly the same question. If those studies all point the same direction, then we can feel increasingly confident that this conclusion has merit.

For instance:

Several years ago I blogged about an impressive study by Dr. Jay Greene. It suggested …

… that attendance at live theater improved students’ knowledge of the play (not a big surprise)

… and, it improved students’ tolerance (surprise!)

… and, it enhanced their desire to see the world from another’s perspective.

Because I REALLY WANT those statements to be true, I looked at this research with extra skepticism. But I was ultimately persuaded that the study design made lots of sense.

Of course, “one study is just one study.”

It would be GREAT to see more research on this topic. Well, I have good news…

Take Two

I’ve recently come across a second study looking at the benefits of live theater — this one led by Angela Watson (no relation that I know of).*

As is often the case, this research team looked at some of the same questions, and some additional questions as well.

So, they asked:

Does live theater provide benefit for students? How about art museums? How about symphonies?

Do these cultural field trips benefit students in different demographic groups?

The Greene study looked at tolerance and “social perspective taking” – the desire to see the world from another’s perspective. Watson’s research also asked questions about conscientiousness and empathy.

So, what did they find?

Methods and Conclusions

As you can imagine, answering these questions involves lots of logistical complexity.

We can study – say – “retrieval practice” fairly easily in a classroom. But, schlepping large groups of students to plays, museums, and symphonies takes a lot more coordination, planning, and cooperation.

Basically, these researchers had students attend different numbers of plays, symphonies, and museums: as few as one trip, or as many as six.

And, they had students fill out questionnaires. And, they followed up for two years!

So, do additional “cultural” field trips make a difference for these students?

As is so often the case, the answer is both “yes” and “no.”

So, for instance, attending the theater/symphony/museum did NOT make students more interested in going back.

And, contrary to Greene’s study, it did NOT make students more “empathetic.”

Specifically, students responded to statements like these: “After seeing a play or a movie, I have felt as though I were one of the characters.”

Their responses did not vary depending on the number of field trips they went on.

At the same time, Team Watson did find some benefits.

Attending field trips did increase “social perspective taking.”

That is: students were asked questions like “When you are angry at someone, how often do you try to ‘put yourself in his or her shoes’?”

Students who attended more field trips put on those shoes more often.

Also, female students — although not males — demonstrated higher levels of conscientiousness after participating in more cultural field trips. (Alas, this change didn’t last as long as the others.)

The news isn’t all good, but it isn’t all bad either.

What About Tolerance?

The case of “tolerance” is especially tricky.

When Watson & Co. ran the numbers, these field trips didn’t affect the students’ answers to questions like:

“Some people have views you oppose very strongly. Do you agree that these people should be allowed to come to your school and give a speech?”

So, strictly speaking, field trips didn’t increase tolerance.

However, Watson speculates that the wording of these questions was especially complicated. And, she notes that the students in this cohort had relatively low reading scores.

So, perhaps, students really did feel more tolerant, but the question’s complexity masked that change.

Watson supports this hypothesis by noting that the students at the high end of reading comprehension DID have higher tolerance scores.

In other words: the students who understood the question better expressed higher levels of tolerance, and their answers might reflect a truer picture of the field trips’ effects.

TL;DR

We now have even more research showing some SEL benefits for field trips to cultural events.

These trips…

… consistently help students “put themselves in someone else’s shoes,”

… might increase students’ tolerance of others’ opinions,

… enhance girls’ conscientiousness (albeit temporarily).

We have less certain evidence that field trips enhance empathy, or that they encourage students to attend more cultural events.

Equally important: having MORE studies on one topic allows us to contemplate these SEL benefits with greater nuance and sophistication.


* To be precise, Watson’s study is a “working paper.” It hasn’t been peer reviewed in an academic journal.


Watson, A., Greene, J., Holmes Erickson, H., & Beck, M. I. (2019). Altered attitudes and actions: Social-emotional effects of multiple arts field trips.

Should We Teach Math and English the Same Way?
Andrew Watson

Because we teachers are a busy lot, we sometimes want simplicity and clarity:

I’m honestly too busy to sort through all the options and variables; just tell me what to do.

In fact, when I went to my first Learning and the Brain conference in 2008, that was exactly my plan.

The researchers would tell me what to do.

I would do it.

VOILA: brain-based teaching.

The more time I spend in this field, the more I doubt that logical chain. In fact, I frequently warn people against that kind of thinking.


My regular mantra:

“Don’t just DO this thing. Instead, THINK this way.”

In other words, we teachers always have to translate research-based advice to our own context.

Today’s News

I recently came across a study looking at twelve different instructional activities in English and math classrooms.

In this study, “instructional activities” include …

… open discussion among teacher and students,

… use of whiteboard by teacher,

… students working in groups,

… one-to-one teaching,

… students copying from the whiteboard,

And so forth.

The research team asked: do students benefit from the same instructional activities in both disciplines?

To answer this question, researchers had observers keep track of teachers’ instructional activities. These observers — during more than 2500 visits! — recorded whether teachers did these 12 activities “none or very little,” “some of the time,” or “most or all of the time.”

And, they then looked at the students’ scores on national exams in English and math. (This study was done in England, where most students take the GCSE when they’re 16 years old.)

So, what did they find when they put all those pieces together?

Fascinating Results

First, this research team found that teachers do different things:

Some teachers spend much of class time using traditional direct instruction, including lecturing and the use of textbooks, while other teachers devote more class time to students working with their classmates or individual practice.

For instance: one third of teachers use “open discussion” most or all of the time, but one quarter don’t do so at all.

Second, those different instructional activities matter.

In math classes, students benefit from a) practicing on their own, and b) teachers’ checking for understanding.

Students who engage in these activities “all or most of the time” score significantly higher than those who do so “some of the time.” (In this case, “significantly higher” is a bit hard to describe. Probably the easiest way to say this is: both statisticians and the students themselves would notice the difference.)

In English classes, however, students benefit from working and talking with each other (and the teacher).

So, to answer the question in this post’s title: at least according to this study, we shouldn’t teach all disciplines in the same way.

What This Finding DOES Mean

If you teach math or English to high-school students in England, I think you should give this study a careful look to guide your classroom practice.

That is: I’ve given an introduction — but the study includes A LOT more information that could be practically helpful to you.

Even more important:

If you don’t fit in that teaching category, this study means that research-based teaching advice always requires translation and adaptation.

Students benefit from different instructional activities in math and English. And, presumably, in other disciplines as well.

That is: you might go to a conference session that highlights the importance of mind-maps. (I’m picking this example at random.) That session shows research about its effectiveness in helping students learn.

However, this study clearly reminds us that we might need to adapt that advice to our own classrooms.

High-school English teachers might have students create mind-maps together; remember, students benefit from “working and talking together.”

High-school math teachers might have them create mind-maps solo; students benefit from “working on their own.”

More generally, this study might prompt you to ask some direct questions during that mind-map session. Was the research done with students in different grades? In different school or community cultures? Studying different topics? With diagnosed learning differences?

In other words: this specific research finding reminds us of a general lesson. We should be curious about and open to research-based suggestions. AND, we should check and be sure the research aligns with our teaching context before we make drastic changes.

What This Finding DOESN’T Mean

The flipside of my last point is: this research should encourage you to adapt your teaching practices only if your classrooms look like these classrooms.

Do you teach history? This research might not (or might) talk directly to you.

Do you teach second grade? Ditto.

Perhaps you teach in a different cultural context — say, Korea, Cairo, or Kansas.

Perhaps your school has a specific teaching philosophy (a Montessori school; a military academy) that rules out these approaches.

In other words: don’t just DO what this research tells you to do. THINK about your teaching practice with these ideas in mind — and see if they fit and make sense in your world.


Burgess, S. M., Rawal, S., & Taylor, E. S. (2022). Teachers’ use of class time and student achievement (No. w30686). National Bureau of Economic Research.

When Prior Knowledge Bites Back: The Dangers of Knowing Too Much
Andrew Watson

In this blog, we typically highlight the benefits of prior knowledge.

For example: if a student knows a lot about baseball, she’ll be much more successful in understanding a reading passage about baseball.


That same student could struggle mightily with a passage about cricket. What’s an “over”? A “wicket”? A “badger”?

In the world of cognitive load theory, prior knowledge helps because it reduces working memory load.

An expert knows relevant definitions, concepts, procedures – and the relationships among them.

And because experts have all that knowledge in long-term memory, they don’t need to noodle it around as much in working memory.

The teaching implications of this insight:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

NB: This second insight has important implications for many project pedagogies.

This conclusion is well settled in cognitive load theory. But: is it always true?

Is it possible that prior knowledge might increase working memory load? Could it make thinking and problem solving more difficult?

Thinking the Unthinkable

Here’s a question:

“To mitigate the effects of climate change, would it be a good idea to plant more Douglas fir, oak, and beech trees in the Black Forest?”

I know a bit about climate change, and a bit about trees, and I’m generally inclined to say “yes.” Because I’m a novice – that is, I don’t have lots of prior knowledge on these topics – the question strikes me as straightforward.

However, if I were an expert, I might draw on my prior knowledge to see additional complexities in the question.

For instance…

…those trees might be vulnerable to particular diseases or pests,

…they might harm the ecosystem in the Black Forest,

…they might – paradoxically – do some tree thing or another that would ultimately exacerbate climate change rather than mitigate it.

In this case, an expert’s prior knowledge could introduce complicating variables – and thereby increase working memory load.

A research team, made up of scholars from Germany and Australia*, tested this hypothesis.

As you would expect, they asked forestry experts and forestry non-experts to consider (roughly) the tree-planting question above.

The experts considered the question more complicated than the novices did. That is: they said that it required more thought, more simultaneous contemplation of variables, and more complex thinking.

And – here’s the kicker – their answers weren’t any better than the novices’ answers.

In Other Words

Putting all these pieces together…

Forestry experts’ higher level of prior knowledge increased their perception of the problem’s complexity;

It did so (probably) because they thought of additional variables not included in the question;

These additional variables increased working memory load;

Because of additional strain on working memory, these experts didn’t benefit from their prior knowledge – and didn’t answer the question more effectively than novices.

Wow.

The research team then went ahead and tested this same idea with 4th graders in Australia.

As often happens in research, the details get complicated. The headline is: when they tested a classroom analog of the same problem, they got somewhat similar results.

Students with higher levels of prior knowledge DID perceive the cognitive load to be higher.

However, when those students solved problems, they scored higher than when they did not have relevant prior knowledge. (Remember: for the forestry example, higher cognitive load eliminated the experts’ advantage in solving the problem.)

In other words: the potential dangers of prior knowledge – at least the higher perceived cognitive load – do show up in the classroom, not just in abstract research exercises.

Teaching Implications, Take II

Above I wrote:

First: find out how much prior knowledge students have on any given topic.

Second: ensure students have the prior knowledge they need before starting on any given topic. Don’t start it until they do.

Based on this study, I think we should add another implication:

Third: stop and consider – how might a student’s expert prior knowledge interfere with their learning of this concept? What other concepts or procedures might they draw into a question in ways that unhelpfully complicate their thinking?

At this point, I don’t think we have enough research into the dangers of prior knowledge to have refined or thorough guidance in answer to those new questions.

I do think, however, we should get in the habit of asking them.

TL;DR

Typically, prior knowledge benefits students by reducing working memory load.

Therefore, typically, we should ensure they have relevant prior knowledge before starting a topic.

In some cases – according to this research – prior knowledge can complicate thinking when experts bring in too many ideas from their knowledge base.

In these cases, we should be sure to think through those potential dangers, and head them off as best we can.

And: we should follow this research pool. It’s an intriguing topic!


* One of the researchers here is none other than Ollie Lovell, who wrote an EXCELLENT book on Cognitive Load Theory for teachers. You can read my review here.


Endres, T., Lovell, O., Morkunas, D., Rieß, W., & Renkl, A. (2022). Can prior knowledge increase task complexity? Cases in which higher prior knowledge leads to higher intrinsic cognitive load. British Journal of Educational Psychology.

Practical Advice for Students: How to Make Good Flashcards
Andrew Watson

Flashcards feel to me like a research sweet-spot.

In the first place: for the most part, students believe that they help — and are even willing to make them!

In the second place: flashcards should help. After all, flashcards promote retrieval practice. And as you know, research shows that retrieval practice really helps students learn.

So, if we can find specific research about flashcards, it should be especially useful in our work.


It would be even better if one of the researchers were Mark McDaniel — who co-authored Make It Stick, one of the great books on memory research for teachers.

If you agree with me on these points, I’ve got some good news for you today!

Starting with Questions

Far and away the most common flashcard question I hear is: “does it matter if students make the flashcards themselves?”

The logic behind this question makes sense. When students think about the material in order to make good flashcards, then that thought might promote learning.

In other words: flashcard making isn’t just the bad kind of “active learning” (students are BUSY!) but the good kind of “active learning” (students are THINKING!).

I have two doubts about this thought process.

First: students might not know enough to make good flashcards.

If their cards prompt them to recall relatively unimportant ideas and processes, then the subsequent retrieval practice won’t really help.

Second: making flashcards takes time.

If students have access to good flashcards — ones that highlight the core concepts, procedures, and facts — then studying with those cards will (perhaps) be more efficient than taking time to make their own.

Two other questions also suggest themselves:

What kind of questions should be on the flashcards?

Most students make detail flashcards. That is: flashcards that focus on facts, definitions, dates, and so forth.

They might also — or instead — make “conceptual” flashcards. That is: flashcards that combine details in compare/contrast patterns, or process steps.*

Question #3:

Do flashcards help some students more than others?

The good news: a recent study explores all those questions.

The First Question

Researchers started with a straightforward experiment. They had students read textbook passages – one about biological anthropology, the other about geology – and then study for a quiz.

The students were divided into four groups, based on how they studied:

Group A studied however they chose.

Group B received 12 flashcards prepared by the textbook makers.

Group C made their own 12 flashcards. They didn’t get any instructions about them.

Group D made their own 12 flashcards. They did get special instructions: “include 4 ‘conceptual’ questions” – that is, questions that compare/contrast, or that highlight several steps in a process.

Let’s return to the first question I asked above: did the students who made their own flashcards learn more than those who studied with pre-made flashcards?

Nope.

That is: students in Groups C & D (who made their own cards) did NOT learn more than those in Group B (who received pre-made flashcards).

Even worse: they DID spend more time.

So, at least in this experiment, asking students to make their own flashcards just isn’t very efficient. They DO spend more time, but DON’T learn more. A bad combination.

Okay, but what about the second question I asked above?

Did the students who made “conceptual” flashcards learn more than those who got no special instructions?

Again, Nope.

Students in Group C — who got no special instructions — mostly made “detail” flashcards. Students in Group D — who were instructed to make 4 “conceptual” flashcards – basically followed those instructions; they made 4 “conceptual” and 8 detail flashcards.

But: both groups spent the same amount of time, and got the same score on the quiz.

Digging Deeper

This research team had hypothesized that the “conceptual” flashcards would benefit learning, and were therefore surprised by the results of this first experiment.

However, they quickly saw a plausible explanation.

C-Group students – who got no instructions – made 12 flashcards. On average, 10 of them were detail flashcards, and the other 2 were “conceptual.”

D-Group students – instructed to make 4 conceptual flashcards – did so: 8 detail cards and 4 “concept” cards.

So, as you can see, not much of a difference there.

In their next experiment, these researchers doubled down on these two different strategies.

Two new groups of students read the same two passages.

Group E received detail-only flashcards.

Group F received “conceptual”-only flashcards.

Now is there a difference?

By George, I think they’ve got it.

Sure enough, when “conceptual” flashcards make up a large enough share of the deck, they do help students learn more.

Now We’re Getting Somewhere

Based on these first two experiments, I think we have some useful answers to our initial questions:

First: at least so far, we don’t have good evidence that students learn more when they make their own flashcards. Alas, this strategy seems inefficient, based on experiment #1.

Second: conceptual flashcards do help students more than detail flashcards, as long as there are enough of them.

I do think this second conclusion requires further nuance.

In the first place, some disciplines really do require lots of detail knowledge. When I studied Czech, I had (literally) HUNDREDS of vocabulary flashcards. Other fields might require a similarly detail-heavy flashcard deck.

In the second place, I do think K-12 students might need detail flashcards more than college students. After all, college students already know more details than younger students do – especially at the highly selective college where this study was performed.

Finally, the distinction between “detail” and “conceptual” might be overdrawn. Here’s a technique I’ve used in my own work.

Step 1: ask a student to draw two vocabulary flashcards, and to define those words.

In my English class, the student might define the words “protagonist” and “sympathetic.”

Step 2: ask the student to make a connection between the two words.

Now the student might say: “Well, a protagonist is typically a sympathetic character – like Janie Mae Crawford. But not always: Macbeth certainly is the protagonist, and certainly isn’t a sympathetic character.”

With this technique, two “detail” flashcards combine to require “conceptual” thinking – at least as defined by the researchers.
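(If your students study at a keyboard anyway, this technique is easy to script. Here’s a minimal sketch in Python – my own illustration, not anything from the study – that draws two “detail” cards at random and prompts the student to connect them.)

```python
import random

# Toy deck of "detail" cards: (term, definition) pairs.
# The terms echo the English-class example above; the deck itself is invented.
detail_cards = [
    ("protagonist", "the main character of a story"),
    ("sympathetic", "a character who invites the reader's compassion"),
    ("antagonist", "a character who opposes the protagonist"),
    ("dynamic character", "a character who changes over the course of a story"),
]

def connection_prompt(deck):
    """Step 1: retrieve two definitions. Step 2: prompt for a connection."""
    (term_a, def_a), (term_b, def_b) = random.sample(deck, 2)
    input(f"Define '{term_a}' aloud, then press Enter...")
    print(f"  -> {def_a}")
    input(f"Define '{term_b}' aloud, then press Enter...")
    print(f"  -> {def_b}")
    print(f"Now connect them: how does '{term_a}' relate to '{term_b}'? Give one example.")

connection_prompt(detail_cards)
```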

TL;DR

As long as we allow for nuance, and the possibility that future research will invite us to rethink these conclusions, this study suggests:

A: Students don’t benefit from making their own flashcards – as long as we provide them with good ones, and

B: Students DO benefit from flashcards that ask them to combine and organize information, not simply recall free-standing facts.

These suggestions – and others that have good research support – give us useful pointers to pass along to our students.


A Final Note:

Eagle-eyed readers will have noticed that I didn’t answer my third question: “do flashcards benefit some students more than others?”

This study does point to an answer, but … I don’t fully understand it.

In brief, “high structure-building” students don’t benefit as much from conceptual flashcards, because they already do better than “low structure-building” students – who DO benefit from conceptual flashcards.

Sadly, I don’t understand exactly what “high and low structure-building” means here. Those words sound like a proxy for “high scoring” and “low scoring,” but not exactly. Rather than pretend I know, I’m simply fessing up that there’s an extra variable here.

If you figure it out, please let me know!


* The authors of the study I’m describing acknowledge that this definition of “conceptual” is incomplete. They’re using that word as a handy shorthand for “flashcards that go beyond single details.” In this blog post, I put the word “conceptual” in quotation marks to highlight the narrow specificity of their definition.


Lin, C., McDaniel, M. A., & Miyatsu, T. (2018). Effects of flashcards on learning authentic materials: The role of detailed versus conceptual flashcards and individual differences in structure-building ability. Journal of Applied Research in Memory and Cognition, 7(4), 529-539.

Belonging by Geoffrey Cohen
Erik Jahner, PhD

Geoffrey Cohen, a professor of Psychology at Stanford University, explores the science of self and sense of belonging in work, school, politics, relationships, and society at large. He works from an intervention perspective, attempting to understand not through observation alone but by subtly adjusting the environment to change behavioral outcomes and seeing how those adjustments interact with what individuals bring to situations. In his most recent book, Belonging: The Science of Creating Connections and Bridging Divides, Geoffrey shares his years of empirical research, inviting you to implement a variety of concrete recommendations for building better, more inclusive relationships and illuminating what generates group divisions, social cohesiveness, and flourishing.

The book addresses what Geoffrey calls “situation-crafting” and “wise” interventions, which are not about manipulating individuals but about “catalyzing” the development of prosocial behaviors and beliefs that can lead to more psychologically healthy and productive working and learning environments. This involves not only understanding the environment but also understanding what individuals bring to the situation through their diverse histories and current frames of mind. Geoffrey explains how this “dance” between the individual and the context can steer individuals toward both antisocial and prosocial behaviors. Through this book you will reflect on what you are doing that might be catalyzing undesired behaviors in your communities, and then learn how you as a manager, teacher, partner, friend, or member of a family can better create situational opportunities so that people can express their potential and feel valued for it – creating situations for belonging and thriving.

The author does an excellent job of framing research in ways that generate curiosity and deeper understanding. Regardless of your level of expertise, you will learn something new from his insights into infamous research like the Stanford Prison Experiment, which demonstrated how situation-crafting can lead everyday people to engage in deplorable behaviors toward fellow humans. I found myself consistently impressed and inspired by his and his colleagues’ creative and lively experiments, devised to uncover mediating factors in slippery social psychological phenomena. Moreover, because Geoffrey adopts the philosophy that to truly understand something you must try to change it, his work is very translatable to improving everyday life.

Reducing or eliminating racial profiling in hiring, reducing recidivism rates among parolees, and intervening to reduce peer bullying in schools can often be accomplished through simple science-backed procedural changes like writing a couple of sentences, ten-minute writing activities, group reflections, or one-hour training modules that lead to greater connection across groups. The tools presented here also include subtle conversational adjustments such as adding “I think…” to position statements that invite discussion rather than an oppositional stance. There is also an enlightening, in-depth analysis of what leads to the radicalization of individuals and what types of experiences lead them to a reevaluation of their actions and purpose. Overall, you will begin to view the social challenges in your personal and professional lives as opportunities to implement the new tools you will learn through this book.

As if this were not enough reason to read the book, the concepts are not only embedded in university research and metacognitive discussion, but they are entertainingly couched in historical examples, current tumultuous politics, personal parenting experience of the author, examples from science fiction miniseries like Black Mirror, popular literature, and numerous quotes from popular music. The book not only helps you understand the topic but offers these tapas of history and culture, making it an enjoyable and broad learning experience.

The powerful desire to belong is a mighty lever that has generated modern cultural divides and solidarity, terrorism and altruism, and destabilization and productive collaboration. Recognizing these dynamics will renew your commitment to life as a constructive and creative member of many communities.

ChatGPT and Beyond: The Best Online Resources for Evaluating Research Claims
Andrew Watson
Andrew Watson
If we’re going to make “research-based” claims about education — and we are! — then we better have the research goods.

So, how can we at Learning and the Brain — and you as teachers and administrators — find and evaluate research?

For a few years now, I’ve relied on two great sources.

Scite.ai helps me see if a particular study is an outlier, or in alignment with other research findings.

Connectedpapers.com shows a spider web of research studies on any given topic.

They’re great, and you should check them both out. (I’m on those sites several times a week.)

But, we’ve heard A LOT about AI in recent weeks. Can we rely on newer websites to help us out?

I spent an hour or so on Monday playing around on two sites; here’s what I found.

The Inevitable ChatGPT

I won’t rehash all the Twitter agita on this subject, but gosh ChatGPT has gotten lots of attention. (Twitter tells me it’s either the savior of education, the end of education, a deranged stalker of New York Times reporters, or a nothingburger.)

Can it help us explore research into education?

To answer this question, I asked ChatGPT to describe “schema theory.” Here are the first two paragraphs of its answer:

“Schema theory” is a cognitive psychology theory that describes how people understand, organize, and store information about the world. The theory suggests that people use mental frameworks or “schemas” to categorize information and simplify their understanding of the world around them.

A schema can be thought of as a mental structure that represents knowledge about a particular category of information, such as objects, events, or social interactions. For example, a person might have a schema for a “restaurant” that includes information about the typical layout, menu items, and social norms associated with dining out.

So far, so good. (The restaurant example of schema is quite common.)

I asked it to dig a bit deeper, and to identify problems or limitations with schema theory. Again, I got a solid set of answers. For instance, a limitation:

Overgeneralization: Schemas can sometimes lead to overgeneralization, causing people to perceive things in a stereotypical or biased manner. For example, people with a schema of a “criminal” may unconsciously associate criminal behavior with certain groups.

When I asked it for research supporting schema theory, its first three answers were several decades old, so I asked it for research from the last 20 years.

Of the three it listed, this one looked most promising:

Lee, J. J., & Anderson, N. (2015). The role of schema in the interpretation of ambiguous scenes. Memory & Cognition, 43(8), 1186-1197.

But, strangely, when I tried to find that study on Google Scholar, it didn’t seem to exist.

Stranger still, when I went to the website for the journal Memory and Cognition, it wasn’t there either. (To be clear: the journal exists; the article doesn’t exist in it.)

You’re not going to believe this, but it seems that ChatGPT simply MADE UP a research study supporting schema theory.

The Chat Gets Awkward

I pointed out to ChatGPT that the source doesn’t exist, and it responded:

I apologize, it seems I made a mistake. The study by Lee and Anderson (2015) that I mentioned in my previous answer does not seem to exist. As a language model, I strive to provide accurate and up-to-date information, but sometimes I may make errors or misunderstand the information given to me. I appreciate your feedback and will take this into account in my future responses.

Okay, I’m not happy about this. But it did fess up to its mistake, and promise to do better.

So, the conversation continued:

Me: Can you identify recent studies — within the last 20 years — that study the cognitive psychology of schema theory specifically for students learning academic topics?

ChatGPT: Yes, there have been recent studies that have investigated the role of schemas in students’ learning of academic topics.* Here are a few examples:

Kail, R. (2017). Schema theory and its implications for the design of learning environments. Review of Educational Research, 87(1), 93-135.

Well, that first one looks promising. But, I don’t want to be fooled twice, so I asked the obvious question:

Me: Does the Kail 2017 study really exist?

ChatGPT: Yes, the Kail (2017) study does exist. It was published in the Review of Educational Research, a highly respected peer-reviewed journal in the field of education.

Reader, the Kail study does NOT exist! ChatGPT just MADE UP A SECOND STUDY. And, having made it up, it then stuck by its (untrue) story.

I won’t go through all the iterations of this journey in detail.

But, yes, I asked about a third study. Yes, ChatGPT assured me it exists. No, it really doesn’t.
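(A side note for the technically inclined: you can run this kind of existence check yourself against Crossref’s public REST API, the free index behind most DOIs. Here’s a minimal sketch; the helper name is my own invention, and note that Crossref always returns its closest matches, so you still have to eyeball the titles it prints rather than trust an automatic yes/no.)

```python
import requests

def check_citation(title, author_surname):
    """Ask Crossref's public index for its closest matches to a citation.

    Crossref returns its best guesses even for nonexistent works, so the
    point is to scan the printed titles yourself.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "query.bibliographic": title,
            "query.author": author_surname,
            "rows": 5,
        },
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = (item.get("title") or ["(untitled)"])[0]
        print(f"{found_title} -> doi.org/{item.get('DOI')}")

# Try it on the citation ChatGPT invented; see whether any returned title matches.
check_citation("The role of schema in the interpretation of ambiguous scenes", "Lee")
```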

So, at this point, I’m kinda done. Any artificial intelligence that simply conjures sources out of thin air… I won’t be relying on it anytime soon.

If only there were another option…

Another Option!

At about this time, my friend Rob McEntarffer asked me if I’d tried Elicit.org. (Important note: bad things might happen if you type “illicit” instead of “elicit.” I haven’t checked, but: be careful out there.)

Rob is wise in the ways of MBE, and so I tried it.

At this point, I’m as hopeful about Elicit.org as I am discouraged about ChatGPT.

Elicit asks users to frame fairly specific questions. It then looks for study abstracts that seem relevant to those questions, and reports back.

So, I asked:

“Can schema theory improve classroom instruction?”

I got a list of seven studies. All seven sounded exactly on topic. And — here’s some surprisingly good news — at least four of the studies exist!

I know because I downloaded them and printed them out. They are, in fact, in my “read this right now” pile.

Now, Elicit has a somewhat easier job than ChatGPT, because it’s answering narrow questions about research studies, not broad questions about ANYTHING.

But, I found it easy to use and — at least on my first test drive — reliable and helpful.

TL;DR

If you’re looking for online sources to find and evaluate research-based claims:

Add Elicit.org to scite.ai and connectedpapers.com as useful research resources.

Until it stops making stuff up, avoid ChatGPT.


* Notice, by the way, that ChatGPT got the possessive (“students’ learning”) correct in this answer. That’s an unusual rule — “use the possessive case before gerunds” — and a difficult apostrophe: plural possessive. So, it’s not great with sources, but it knows from grammar and punctuation!

“Seductive Details” meet “Retrieval Practice”: A Match Made in Cognitive Heaven
Andrew Watson

Here’s a common problem: your job today is to teach a boring topic. (You don’t think it’s boring, but your students always complain…)

What’s a teacher to do?

One plausible strategy: You might enliven this topic in some entertaining way.

You’ve got a funny video,

or a clever cartoon,

or a GREAT anecdote about a colleague’s misadventure.

Okay, so this video/cartoon/anecdote isn’t one of today’s learning objectives. BUT: it just might capture your students’ interest and help them pay attention.

However tempting, this strategy does create its own problems. We’ve got lots of research showing that these intriguing-but-off-topic details can get in the way of learning.

That is: students remember the seductive details (as they’re known in the research literature), but less of the actual content we want them to know.

Womp womp.

Some time ago, I wrote about a meta-analysis showing that — yup — seductive details ACTUALLY DO interfere with learning: especially for beginners, especially in shorter lessons.

What could we do to fix this problem? If we can’t use our anecdotes and cartoons, do we just have to bore our students?

“Right-Sized” Retrieval Practice

Here’s one approach we might try: right-sized retrieval practice.

What does “right-sized” mean? Here goes:

One retrieval practice strategy is a brain dump. The instructions sounds something like this: “write down everything you remember about today’s grammar lesson.”

Another retrieval practice strategy calls for more specific questions: “what’s the difference between a gerund and a participle?” “How might a participle create a dangling modifier?”

A group of scholars in Germany studied this hypothesis:

If teachers use the brain dump approach, students will remember the seductive detail — and it will become a part of their long-term memory.

If, on the other hand, teachers ask specific questions, students will remember the important ideas of the lesson — and not consolidate memory of the seductive detail.

They ran a straightforward study, considering a topic close to every teacher’s heart: coffee.

100+ college students in Germany read a lengthy passage on coffee: information about the coffee plant, its harvesting, its preparation, and its processing.

Half of them read a version including fun-but-extraneous information. For instance: do you know how coffee was discovered?

Turns out: goat herders noticed that their goats ate the coffee beans and then did a kind of happy dance. Those herders wondered: could we get the same happy effects? Thus was born today’s coffee industry…

Remembering the GOAT

After reading these coffee passages — with or without seductive details — students answered retrieval practice questions.

Some got a “brain dump” promt: “What do you remember about coffee?”

Others got the specific questions: “What harvesting methods do you remember, and how do they differ?”

So, what effect did those specific questions have on memory of seductive details one week later?

Sure enough, as the researchers had hypothesized, students who answered specific retrieval practice questions remembered MORE of the lesson’s meaningful content.

And, they remembered LESS (actually, NONE) of the seductive details. (Of course, the details get complicated, but this summary captures the main idea.)

BOOM.

So, what’s a classroom teacher to do?

As is so often the case, we should remember that researchers ISOLATE variables and teachers COMBINE variables.

We always have to think about many (many!) topics at once, while research typically tries to find out the importance of exactly one thing.

Putting all these ideas together, I’d recommend the following path:

If I have to teach a topic my students find dull, I can indeed include some seductive details (Ha ha! Goats!) to capture their interest — as long as I conclude that lesson with some highly specific retrieval practice questioning.

And, based on this earlier post on seductive details, this extra step will be especially important if the lesson is short, or the students are beginners with this topic.

TL;DR

Seductive details can capture students’ interest, but also distract them from the important topics of the lesson.

To counteract this problem, teachers should plan for retrieval practice including specific questions — not just a brain dump.


By the way: I first heard about this “retrieval practice vs. seductive details” study from Bradley Busch (Twitter: @BradleyKBusch) and Jade Pearce (Twitter: @PearceMrs). If you’re not familiar with their work, be sure to look them up!


Eitel, A., Endres, T., & Renkl, A. (2022). Specific questions during retrieval practice are better for texts containing seductive details. Applied Cognitive Psychology, 36(5), 996-1008.

Sundararajan, N., & Adesope, O. (2020). Keep it coherent: A meta-analysis of the seductive details effect. Educational Psychology Review, 32(3), 707-734.

Starting Class with “Prequestions”: Benefits, Problems, Solutions
Andrew Watson

We’ve known for many years now that retrieval practice works.


That is: after we have introduced students to a topic, we might REVIEW it with them the next day. However, they’ll remember it better if we ask them to try to RETRIEVE ideas and procedures about it.

As Dr. Pooja Agarwal and Patrice Bain write, we want students to “pull information out of their brains” (retrieve) not “put information back into their brains” (review).

Sadly, we know that students’ intuition contradicts this guidance. They really want to reread or review their notes, rather than ask themselves questions.

In this (very sad) study, for instance, Dr. Nate Kornell and Dr. Lisa Son found that students think review works better than retrieval even when they do better on quizzes following retrieval!

Yes, even the experience of learning more doesn’t persuade students that they learned more.

YIKES.

The More Things Change…

Let’s take this retrieval practice idea one step further.

I wrote above that answering questions helps students learn AFTER they have been introduced to a topic.

But: does answering questions help students learn a topic even BEFORE they study it?

On the one hand, this suggestion sounds very strange. Students can’t get these “prequestions” right, because they haven’t yet studied the topic.

On the other hand, we’ve got research showing that this strategy works!

In one of my favorite studies ever, Dr. Lindsay Richland found that “prequestions” help students learn. And, she then worked really hard to disprove her own findings. When she couldn’t explain away her conclusions, she finally accepted them.*

Similarly, a more recent study suggests that learning objectives framed as questions (“Where are mirror neurons located?”) help students learn more than LOs framed as statements (“You will learn where mirror neurons are located.”).

Although this prequestion strategy hasn’t been studied as much as retrieval practice, I do think it has enough research behind it to merit teachers’ respectful attention.

However, I do think this approach has a practical classroom problem…

Sustaining Motivation

For the most part, my high-school students are an amiable lot. If I ask them to do something … say, answer retrieval practice questions … they’ll give it a go.

And, they almost certainly want to get those questions right.

In a class discussion about Their Eyes Were Watching God, for instance, we might compare Janie’s three “husbands.” If I ask a student the following day to list some points of comparison from memory (retrieval practice!), they’ll feel that they ought to remember an answer or two.

Let’s try this logic with prequestioning.

Imagine I ask my students this prequestion: “Why do you think the novel’s protagonist will have the nickname ‘Alphabet’?”

My students will gamely try some answers.

However, I worry that – over time – they’ll start losing interest.

They almost never get these answers right.

And, there’s no “penalty” for getting them wrong, or reward for getting them right. (We don’t want students to focus on rewards and penalties, but schools typically work this way…)

From the student perspective, in other words, the whole prequestion strategy feels like an exercise in futility.

Why should they bother to think seriously about these un-answerable questions? They feel like wasted mental effort…

Two Solutions

First: I’ve tried in the past to solve this problem by using the strategy infrequently.

If my students don’t experience this quirky frustration too often, I hope, they won’t mind participating in this odd ritual.

Recent research, however, offers a second solution – a more honorable solution than mine.

In this study, by Dr. Steven Pan and Dr. Michelle Rivers, prequestions consistently helped students learn.

However, students didn’t really notice the benefit of prequestions – even when they learned more from answering them. (This result sounds a lot like the Kornell and Son study about retrieval practice; students don’t register the benefits they experience.)

So, Pan and Rivers tried several solutions. Specifically, they found benefits to a multi-step approach:

Step 1: have students learn some info with prequestions, and some without.

Step 2: give them a no-stakes quiz on the info.

Step 3: let them see that they remembered information better after prequestions.

Step 4: next time, ask students to recall how well they remembered after answering prequestions.

In other words: students need to experience the benefits and to have them repeatedly pointed out. This combination, probably, helps students believe that prequestions really do help.

This insight (probably?) helps with the motivation problem that has been troubling me in the past.

In other words: students who believe that prequestions will help are much likelier to participate in the curious mental exercise of trying to answer questions whose answer they can’t yet know.

TL;DR

When students answer questions about information they’re about to learn, they remember that information better – even if they get the answers wrong.

This strategy might be effective in the short term, but hamper motivation over time. After all, why should students even try to answer questions if they’re unlikely to know the answer?

To counteract this motivational problem, take students through Pan & Rivers’s procedure for them to experience and remember the benefits that prequestions provide.

We don’t have LOTS of research on this strategy, but we do have enough to make it a plausible approach.


* Sadly, the “prequestion” strategy has frequently been called “pretesting.” Of course, the presence of the stem “test” both confuses the strategy (there’s no testing!) and disinclines people from participating (who wants more testing?).

So, let me emphasize: “prequestions” are simply questions. They’re not a test.

BTW: I’ve recently seen the word “pretrieval” as a way to avoid the “pretest” moniker. You might like it better than “prequestions.”


Agarwal, P. K., & Bain, P. M. (2019). Powerful teaching: Unleash the science of learning. John Wiley & Sons.

Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493-501.

Pan, S. C., & Rivers, M. L. (2023). Metacognitive awareness of the pretesting effect improves with self-regulation support. Memory & Cognition, 1-20.

Richland, L. E., Kornell, N., & Kao, L. S. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243.

Sana, F., Forrin, N. D., Sharma, M., Dubljevic, T., Ho, P., Jalil, E., & Kim, J. A. (2020). Optimizing the efficacy of learning objectives through pretests. CBE—Life Sciences Education, 19(3), ar43.

The Trad/Prog Debate Gets Weird
Andrew Watson

Few debates rage hotter in education circles than that between educational progressives and educational traditionalists. (I’m emphasizing “educational” in these phrases, because they don’t necessarily align with political trad/prog divides. This blog doesn’t do politics.)

One recent summary — relying heavily on Dewey — describes the debate this way:

Educational traditionalists “argue that teachers should carefully select and sequence the best knowledge from their subject areas and then deliver it directly to the whole class, while maintaining order.”


Whereas

Educational progressives “argue that teachers should focus on facilitating individualised learning experiences in which pupils can explore their natural inclinations, thus nurturing their interests and developing general thinking skills.”

Of course, the debate gets MUCH more complicated than these summaries, but it’s handy to have a quick definition. According to the summary article cited above, roughly 2/3 of teachers side with one or the other of these two positions.

But: do we have research favoring one approach or the other?

Problems and Solutions

Because both educational philosophies encompass substantial sets of teaching ideas — everything from pedagogy to curriculum to motivation to metacognition — they resist efforts to evaluate them in their entirety.

We might run a study that shows … say … this “ed prog” unit motivates 6th graders more strongly than the analogous “ed trad” unit. However, such a study doesn’t necessarily mean that the philosophy itself works for all students, all subjects, all cultures, and all definitions of “works.”

So, what to do?

Education scholars Dr. Sam Sims and Dr. John Jerrim have found an ENORMOUS data set from Germany that just might make this possible.

It shows how much academic progress several thousand German students made over several years.

It measures their expressed motivation for studying German and math.

Heck, it even tracks their metacognitive facility.

And,

It asks questions about their teachers’ expressed place on the ed prog/ed trad continuum.

It also asks questions about the teachers’ educational practices (PBL, lecture), to see if they align with those expressed beliefs.

WOW.

If we crunch the numbers just right, we should be able to answer several questions:

Do students learn more in one or the other of these approaches?

Does their motivation vary depending on them?

Is one approach or another better for more or less successful students?

So, what do Sims and Jerrim discover when they run their equations?

Not What I Was Expecting

The blog title promises something “weird.” So, here goes…

Because the ed prog/ed trad debate has an ethical valance to it, it often prompts dramatic calls:

“This way is the right way, so our profession really must do it right! After all, anything else is wrong.”

This study — weirdly — comes to an astonishingly bland conclusion.

That is:

Which philosophy helped students learn more?

Honestly, both worked equally well.

But wait: which one helped struggling learners more?

Meh. Both worked equally well.

But surely one fostered student motivation more than the other!

Well, kinda. Educationally traditional teaching practices resulted in slightly higher levels of motivation in German. But, not in math. And, not much even in German.

Um, metacognition?

Again, no big difference — although a slight benefit for team ed trad.

So, this ferocious debate we’ve been having for decades? Maybe we’ve been arguing about the wrong topics…

Trying to Process

Honestly, I’m still trying to wrap my head around this research. (By the way, I heard about it from Peps Mccrea. If you haven’t signed up for his “Research Snacks,” do so NOW. And, you can hear his webinar on motivation March 18th.)

Here’s a random grab-bag of my early thoughts:

First, I don’t doubt that LOTS of people will simply reject these findings. One easy way to do so: they’re published not in a peer reviewed journal, but as a working paper.

Debates about the relative merits of peer review rage in the scholarly community. Readers who typically reject working papers for their lack of formal peer review might simply set this one aside.

Second, if these two approaches basically work equally well, then we shouldn’t focus on choosing one or the other: we should focus on doing both as well as we possibly can.

My own view is that cognitive science will help us do so. For instance: no matter my educational philosophical stance, my students will benefit if I understand how working memory works.

Third, the finding about motivation seems especially surprising — or at least provocative.

Champions of progressive education typically trumpet motivation as one of its obvious benefits. (Hence the contemptuous phrase “drill and kill” to mock direct instruction.)

This research suggests that educationally traditional methods produce slightly higher levels of motivation (in one of the two subjects they measured).

But wait: if students in ed trad classes felt more motivation but didn’t learn more, something very strange is going on.

Perhaps (dare I write it?), motivation doesn’t matter for learning? (How can that possibly be?)

Perhaps (dare I write it?), ed trad methods produce slightly more motivation but slightly less learning — and those two effects balance each other out? (All my ed trad friends just howled in rage.)

Or perhaps there’s some other obvious explanation I’m missing?

Currently, I’m stumped.

Next Steps

Me, I’m going to watch the responses to Sims and Jerrim’s working paper, and see what additional wisdom shakes out.

If you’ve got additional or alternative perspectives, I hope you’ll share them in the comments.

 

Chatter by Ethan Kross
Erik Jahner, PhD

The founder and director of the Emotional and Self-Control Laboratory at the University of Michigan, Ethan Kross has been a leading voice in a field that is helping us understand the workings of the conscious mind and how its mechanisms can be harnessed to live happier and more fulfilled lives. While much of our daily life is spent mind wandering and listening to our inner voice, we do not always think about the dynamic ways that voice is directly linked to our daily experiences. The chatter of our internal voice can seem a distracting and destructive cacophony of internal thought. In Chatter: The Voice in Our Head, Why It Matters, and How to Harness It, Kross synthesizes his own and others’ research on this inner voice from a scientific perspective, producing a book sorely needed to help us understand and take advantage of this all too human condition.

The rich narratives of research, mini-bios, and the wonderings and personal experiences of the author give the reader the sense that they are sitting down and having an intriguing dinner conversation with Kross. We hear about chatter through various anecdotes that we can all relate to, and then learn how individuals overcome debilitating chatter and move toward a constructive internal discourse. These relatable narratives include a distracted baseball player, a neuroscientist who lost her inner voice after a stroke, and an anxious applicant for a job at the NSA, among many others. While still theoretically laden and packed tight with empirical research, this book reads much more like friendly storytelling – always a refreshing approach to science.

This is not just a book explaining what the inner voice is; it is a book about our conversations with ourselves and those around us. How are those conversations affecting that inner voice, and how is our inner voice affecting those conversations? It also demonstrates the intrinsic connection between chatter and the environment, suggesting ways we can improve our ability to manage chatter by changing our surroundings and some of our basic daily habits. These little nudges to our daily practice are summarized at the end of the book in a set of concrete tools, but the real joy of these is in the narrative support the author gives throughout the text.

Beyond the rich, relatable, and entertaining stories, this is also an exceptional example of translational research, bringing together neuroscience, psychology, psychobiology, and sociology in a truly interdisciplinary endeavor. The artful interweaving of the book’s main ideas across conceptual levels demonstrates the importance of this type of interdisciplinary work.

But this book also hit me in a personal way, enriching my own conversations. I could not help but send an uncontrolled stream of texts to friends as I read the book. It captured the essence of many conversations about self-improvement, but it reframes the discussion, grounding it in research while also asking us to consider experimenting in our own lives. It was immediately accessible and curiosity-inducing to family, friends, and colleagues. And there is something authentic for every reader, from advice for the psychotherapist to how best to support yourself and your friends. Our internal voice is so visible, and yet our ability to reflect on it is limited. Kross gives us a window into the relationships we can improve with ourselves and those around us, and he clearly sends the message that chatter is socially embedded, not an individual endeavor.

This short book could easily be read in an afternoon of cerebral escapism, tickling your curiosity about your own mind and filling your stores of knowledge with fun and personal narratives easily shared with friends. But it’s a must-read for anyone listening to their inner crickets.