February 2023 – Education & Teacher Conferences
Belonging by Geoffrey Cohen
Erik Jahner, PhD

Geoffrey Cohen, a professor of psychology at Stanford University, explores the science of the self and the sense of belonging in work, school, politics, relationships, and society at large. He works from an intervention perspective, attempting to understand not through observation alone but by subtly adjusting the environment to change behavioral outcomes and to see how those adjustments interact with what individuals bring to situations. In his most recent book, Belonging: The Science of Creating Connections and Bridging Divides, Geoffrey shares his years of empirical research, inviting you to implement a variety of concrete recommendations for building better, more inclusive relationships while illuminating what generates group divisions, social cohesiveness, and flourishing.

The book addresses what Geoffrey calls “situation-crafting” and “wise” interventions, which are not about manipulating individuals but about “catalyzing” the development of prosocial behaviors and beliefs that can lead to more psychologically healthy and productive working and learning environments. This involves understanding not only the environment but also what individuals bring to the situation through their diverse histories and current frames of mind. Geoffrey explains how this “dance” between the individual and the context can steer individuals toward either antisocial or prosocial behaviors. Through this book you will reflect on what you might be doing that catalyzes undesired behaviors in your communities, and then learn how you, as a manager, teacher, partner, friend, or member of a family, can better create situational opportunities so that people can express their potential and feel valued for it, creating situations for belonging and thriving.

The author does an excellent job of framing research in ways that generate curiosity and deeper understanding. Regardless of your level of expertise, you will learn something new from his insights, which engage directly with infamous research like the Stanford Prison Experiment, demonstrating how situation crafting led everyday people to engage in deplorable behaviors toward fellow humans. I found myself consistently impressed and inspired by his and his colleagues’ creative and lively experiments, devised to uncover mediating factors in slippery social-psychological phenomena. Moreover, because Geoffrey adopts the philosophy that to truly understand something you must try to change it, his work translates readily to improving everyday life.

Reducing or eliminating racial bias in hiring, reducing recidivism rates among parolees, and intervening to reduce peer bullying in schools can often be accomplished through simple, science-backed procedural changes: writing a couple of sentences, ten-minute writing activities, group reflections, or one-hour training modules that lead to greater connection across groups. The tools presented here also include subtle conversational adjustments, such as adding “I think…” to position statements, which invites discussion rather than an oppositional stance. There is also an enlightening in-depth analysis of what leads to the radicalization of individuals and what types of experiences lead them to reevaluate their actions and purpose. Overall, you will begin to view the social challenges in your personal and professional lives as opportunities to implement the new tools you will learn through this book.

As if this were not reason enough to read, the concepts are not only embedded in university research and metacognitive discussion but are entertainingly couched in historical examples, current tumultuous politics, the author’s personal parenting experiences, examples from science-fiction miniseries like Black Mirror, popular literature, and numerous quotes from popular music. The book not only helps you understand the topic but offers these tapas of history and culture, making it an enjoyable and broad learning experience.

The powerful desire to belong is a mighty lever that has generated modern cultural divides and solidarity, terrorism and altruism, and destabilization and productive collaboration. Recognizing these dynamics will renew your commitment to life as a constructive and creative member of many communities.

ChatGPT and Beyond: The Best Online Resources for Evaluating Research Claims
Andrew Watson

If we’re going to make “research-based” claims about education — and we are! — then we better have the research goods.

So, how can we at Learning and the Brain — and you as teachers and administrators — find and evaluate research?

For a few years now, I’ve relied on two great sources.

Scite.ai helps me see if a particular study is an outlier, or in alignment with other research findings.

Connectedpapers.com shows a spider web of research studies on any given topic.

They’re great, and you should check them both out. (I’m on those sites several times a week.)

But, we’ve heard A LOT about AI in recent weeks. Can we rely on newer websites to help us out?

I spent an hour or so on Monday playing around on two sites; here’s what I found.

The Inevitable ChatGPT

I won’t rehash all the Twitter agita on this subject, but gosh, ChatGPT has gotten lots of attention. (Twitter tells me it’s the savior of education, the end of education, a deranged stalker of New York Times reporters, or a nothingburger.)

Can it help us explore research into education?

To answer this question, I asked ChatGPT to describe “schema theory.” Here are the first two paragraphs of its answer:

“Schema theory” is a cognitive psychology theory that describes how people understand, organize, and store information about the world. The theory suggests that people use mental frameworks or “schemas” to categorize information and simplify their understanding of the world around them.

A schema can be thought of as a mental structure that represents knowledge about a particular category of information, such as objects, events, or social interactions. For example, a person might have a schema for a “restaurant” that includes information about the typical layout, menu items, and social norms associated with dining out.

So far, so good. (The restaurant example of schema is quite common.)

I asked it to dig a bit deeper, and to identify problems or limitations with schema theory. Again, I got a solid set of answers. For instance, a limitation:

Overgeneralization: Schemas can sometimes lead to overgeneralization, causing people to perceive things in a stereotypical or biased manner. For example, people with a schema of a “criminal” may unconsciously associate criminal behavior with certain groups.

When I asked it for research supporting schema theory, its first three answers were several decades old, so I asked it for research from the last 20 years.

Of the three it listed, this one looked most promising:

Lee, J. J., & Anderson, N. (2015). The role of schema in the interpretation of ambiguous scenes. Memory & Cognition, 43(8), 1186-1197.

But, strangely, when I tried to find that study on Google Scholar, it didn’t seem to exist.

Stranger still, when I went to the website for the journal Memory and Cognition, it also doesn’t exist. (To be clear: the journal exists; the article doesn’t exist in it.)

You’re not going to believe this, but it seems that ChatGPT simply MADE UP a research study supporting schema theory.

The Chat Gets Awkward

I pointed out to ChatGPT that the source doesn’t exist, and it responded:

I apologize, it seems I made a mistake. The study by Lee and Anderson (2015) that I mentioned in my previous answer does not seem to exist. As a language model, I strive to provide accurate and up-to-date information, but sometimes I may make errors or misunderstand the information given to me. I appreciate your feedback and will take this into account in my future responses.

Okay, I’m not happy about this. But it did fess up to its mistake, and promise to do better.

So, the conversation continued:

Me: Can you identify recent studies — within the last 20 years — that study the cognitive psychology of schema theory specifically for students learning academic topics?

ChatGPT: Yes, there have been recent studies that have investigated the role of schemas in students’ learning of academic topics.* Here are a few examples:

Kail, R. (2017). Schema theory and its implications for the design of learning environments. Review of Educational Research, 87(1), 93-135.

Well, that first one looks promising. But, I don’t want to be fooled twice, so I asked the obvious question:

Me: Does the Kail 2017 study really exist?

ChatGPT: Yes, the Kail (2017) study does exist. It was published in the Review of Educational Research, a highly respected peer-reviewed journal in the field of education.

Reader, the Kail study does NOT exist! ChatGPT just MADE UP A SECOND STUDY. And, having made it up, it then stuck by its (untrue) story.

I won’t go through all the iterations of this journey in detail.

But, yes, I asked about a third study. Yes, ChatGPT assured me it exists. No, it really doesn’t.

So, at this point, I’m kinda done. Any artificial intelligence that simply conjures sources out of thin air… I myself won’t be relying on it anytime soon.

If only there were another option…

Another Option!

At about this time, my friend Rob McEntarffer asked me if I’d tried Elicit.org. (Important note: bad things might happen if you type “illicit” instead of “elicit.” I haven’t checked, but: be careful out there.)

Rob is wise in the ways of MBE, and so I tried it.

At this point, I’m as hopeful about Elicit.org as I am discouraged about ChatGPT.

Elicit asks users to frame fairly specific questions. It then looks for study abstracts that seem relevant to those questions, and reports back.

So, I asked:

“Can schema theory improve classroom instruction?”

I got a list of seven studies. All seven sounded exactly on topic. And — here’s some surprisingly good news — at least four of the studies exist!

I know because I downloaded them and printed them out. They are, in fact, in my “read this right now” pile.

Now, Elicit has a somewhat easier job than ChatGPT, because it’s answering narrow questions about research studies, not broad questions about ANYTHING.

But, I found it easy to use and — at least on my first test drive — reliable and helpful.

TL;DR

If you’re looking for online sources to find and evaluate research-based claims:

Add Elicit.org to scite.ai and connectedpapers.com as useful research resources.

Until it stops making stuff up, avoid ChatGPT.


* Notice, by the way, that ChatGPT got the possessive (“students’ learning”) correct in this answer. That’s an unusual rule — “use the possessive case before gerunds” — and a difficult apostrophe: plural possessive. So, it’s not great with sources, but it knows from grammar and punctuation!

“Seductive Details” meet “Retrieval Practice”: A Match Made in Cognitive Heaven
Andrew Watson

Here’s a common problem: your job today is to teach a boring topic. (You don’t think it’s boring, but your students always complain…)

What’s a teacher to do?

One plausible strategy: You might enliven this topic in some entertaining way.

You’ve got a funny video,

or a clever cartoon,

or a GREAT anecdote about a colleague’s misadventure.

Okay, so this video/cartoon/anecdote isn’t one of today’s learning objectives. BUT: it just might capture your students’ interest and help them pay attention.

However tempting, this strategy does create its own problems. We’ve got lots of research showing that these intriguing-but-off-topic details can get in the way of learning.

That is: students remember the seductive details (as they’re known in the research literature), but less of the actual content we want them to know.

Womp womp.

Some time ago, I wrote about a meta-analysis showing that — yup — seductive details ACTUALLY DO interfere with learning: especially for beginners, especially in shorter lessons.

What could we do to fix this problem? If we can’t use our anecdotes and cartoons, do we just have to bore our students?

“Right-Sized” Retrieval Practice

Here’s one approach we might try: right-sized retrieval practice.

What does “right-sized” mean? Here goes:

One retrieval practice strategy is a brain dump. The instructions sound something like this: “write down everything you remember about today’s grammar lesson.”

Another retrieval practice strategy calls for more specific questions: “what’s the difference between a gerund and a participle?” “How might a participle create a dangling modifier?”

A group of scholars in Germany studied this hypothesis:

If teachers use the brain dump approach, students will remember the seductive detail — and it will become a part of their long-term memory.

If, on the other hand, teachers ask specific questions, students will remember the important ideas of the lesson — and not consolidate memory of the seductive detail.

They ran a straightforward study, considering a topic close to every teacher’s heart: coffee.

100+ college students in Germany read a lengthy passage on coffee: information about the coffee plant, its harvesting, its preparation, and its processing.

Half of them read a version including fun-but-extraneous information. For instance: do you know how coffee was discovered?

Turns out: goat herders noticed that their goats ate the coffee beans and then did a kind of happy dance. Those herders wondered: could we get the same happy effects? Thus was born today’s coffee industry…

Remembering the GOAT

After reading these coffee passages — with or without seductive details — students answered retrieval practice questions.

Some got a “brain dump” prompt: “What do you remember about coffee?”

Others got the specific questions: “What harvesting methods do you remember, and how do they differ?”

So, what effect did those specific questions have on memory of seductive details one week later?

Sure enough, as the researchers had hypothesized, students who answered specific retrieval practice questions remembered MORE of the lesson’s meaningful content.

And, they remembered LESS (actually, NONE) of the seductive details. (Of course, the details get complicated, but this summary captures the main idea.)

BOOM.

So, what’s a classroom teacher to do?

As is so often the case, we should remember that researchers ISOLATE variables and teachers COMBINE variables.

We always have to think about many (many!) topics at once, while research typically tries to find out the importance of exactly one thing.

Putting all these ideas together, I’d recommend the following path:

If I have to teach a topic my students find dull, I can indeed include some seductive details (Ha ha! Goats!) to capture their interest — as long as I conclude that lesson with some highly specific retrieval practice questioning.

And, based on this earlier post on seductive details, this extra step will be especially important if the lesson is short, or the students are beginners with this topic.

TL;DR

Seductive details can capture students’ interest, but also distract them from the important topics of the lesson.

To counteract this problem, teachers should plan for retrieval practice including specific questions — not just a brain dump.


By the way: I first heard about this “retrieval practice vs. seductive details” study from Bradley Busch (Twitter: @BradleyKBusch) and Jade Pearce (Twitter: @PearceMrs). If you’re not familiar with their work, be sure to look them up!


Eitel, A., Endres, T., & Renkl, A. (2022). Specific questions during retrieval practice are better for texts containing seductive details. Applied Cognitive Psychology, 36(5), 996-1008.

Sundararajan, N., & Adesope, O. (2020). Keep it coherent: A meta-analysis of the seductive details effect. Educational Psychology Review, 32(3), 707-734.

Starting Class with “Prequestions”: Benefits, Problems, Solutions
Andrew Watson

We’ve known for many years now that retrieval practice works.

That is: after we have introduced students to a topic, we might REVIEW it with them the next day. However, they’ll remember it better if we ask them to try to RETRIEVE ideas and procedures about it.

As Dr. Pooja Agarwal and Patrice Bain write, we want students to “pull information out of their brains” (retrieve) not “put information back into their brains” (review).

Sadly, we know that students’ intuition contradicts this guidance. They really want to reread or review their notes, rather than ask themselves questions.

In this (very sad) study, for instance, Dr. Nate Kornell and Dr. Lisa Son found that students think review works better than retrieval even when they do better on quizzes following retrieval!

Yes, even the experience of learning more doesn’t persuade students that they learned more.

YIKES.

The More Things Change…

Let’s take this retrieval practice idea one step further.

I wrote above that answering questions helps students learn AFTER they have been introduced to a topic.

But: does answering questions help students learn a topic even BEFORE they study it?

On the one hand, this suggestion sounds very strange. Students can’t get these “prequestions” right, because they haven’t yet studied the topic.

On the other hand, we’ve got research showing that this strategy works!

In one of my favorite studies ever, Dr. Lindsay Richland found that “prequestions” help students learn. And, she then worked really hard to disprove her own findings. When she couldn’t explain away her conclusions, she finally accepted them. *

Similarly, a more recent study suggests that learning objectives framed as questions (“Where are mirror neurons located?”) help students learn more than LOs framed as statements (“You will learn where mirror neurons are located.”).

Although this prequestion strategy hasn’t been studied as much as retrieval practice, I do think it has enough research behind it to merit teachers’ respectful attention.

However, I do think this approach has a practical classroom problem…

Sustaining Motivation

For the most part, my high-school students are an amiable lot. If I ask them to do something … say, answer retrieval practice questions … they’ll give it a go.

And, they almost certainly want to get those questions right.

In a class discussion about Their Eyes Were Watching God, for instance, we might compare Janie’s three “husbands.” If I ask a student the following day to list some points of comparison from memory (retrieval practice!), they’ll feel that they ought to remember an answer or two.

Let’s try this logic with prequestioning.

Imagine I ask my students this prequestion: “Why do you think the novel’s protagonist will have the nickname ‘Alphabet’?”

My students will gamely try some answers.

However, I worry that – over time – they’ll start losing interest.

They almost never get these answers right.

And, there’s no “penalty” for getting them wrong, or reward for getting them right. (We don’t want students to focus on rewards and penalties, but schools typically work this way…)

From the student perspective, in other words, the whole prequestion strategy feels like an exercise in futility.

Why should they bother to think seriously about these un-answerable questions? They feel like wasted mental effort…

Two Solutions

First: I’ve tried in the past to solve this problem by using the strategy infrequently.

If my students don’t experience this quirky frustration too often, I hope, they won’t mind participating in this odd ritual.

Recent research, however, offers a second solution – a more honorable solution than mine.

In this study, by Dr. Steven Pan and Dr. Michelle Rivers, prequestions consistently helped students learn.

However, students didn’t really notice the benefit of prequestions – even when they learned more from answering them. (This result sounds a lot like the Kornell and Son study about retrieval practice; students don’t register the benefits they experience.)

So, Pan and Rivers tried several solutions. Specifically, they found benefits to a multi-step approach:

Step 1: have students learn some info with prequestions, and some without.

Step 2: give them a no-stakes quiz on the info.

Step 3: let them see that they remembered information better after prequestions.

Step 4: next time, ask students to recall how well they remembered after answering prequestions.

In other words: students need to experience the benefits and to have them repeatedly pointed out. This combination, probably, helps students believe that prequestions really do help.

This insight (probably?) helps with the motivation problem that has troubled me in the past.

In other words: students who believe that prequestions will help are much likelier to participate in the curious mental exercise of trying to answer questions whose answer they can’t yet know.

TL;DR

When students answer questions about information they’re about to learn, they remember that information better – even if they get the answers wrong.

This strategy might be effective in the short term, but hamper motivation over time. After all, why should students even try to answer questions if they’re unlikely to know the answer?

To counteract this motivational problem, take students through Pan & Rivers’s procedure for them to experience and remember the benefits that prequestions provide.

We don’t have LOTS of research on this strategy, but we do have enough to make it a plausible approach.


* Sadly, the “prequestion” strategy has frequently been called “pretesting.” Of course, the presence of the stem “test” both confuses the strategy (there’s no testing!) and makes people disinclined to participate (who wants more testing?).

So, let me emphasize: “prequestions” are simply questions. They’re not a test.

BTW: I’ve recently seen the word “pretrieval” as a way to avoid the “pretest” moniker. You might like it better than “prequestions.”


Agarwal, P. K., & Bain, P. M. (2019). Powerful teaching: Unleash the science of learning. John Wiley & Sons.

Kornell, N., & Son, L. K. (2009). Learners’ choices and beliefs about self-testing. Memory, 17(5), 493-501.

Pan, S. C., & Rivers, M. L. (2023). Metacognitive awareness of the pretesting effect improves with self-regulation support. Memory & Cognition, 1-20.

Richland, L. E., Kornell, N., & Kao, L. S. (2009). The pretesting effect: Do unsuccessful retrieval attempts enhance learning? Journal of Experimental Psychology: Applied, 15(3), 243.

Sana, F., Forrin, N. D., Sharma, M., Dubljevic, T., Ho, P., Jalil, E., & Kim, J. A. (2020). Optimizing the efficacy of learning objectives through pretests. CBE—Life Sciences Education, 19(3), ar43.