Tea and Macbeth: Autobiographical vs. Semantic Memory
Andrew Watson

A few years ago, a former student named Jeremy invited me out for coffee. (I haven’t changed his name, because I can’t think of any reason to do so.)

We were reminiscing about the good old days–in particular, the very fun group of students in the sophomore class with him.

At one point he said: “You know what I remember most vividly about your class?”

I waited.

“Instead of using a spoon, you’d wrap your teabag string around your pen to wring it out into the mug. That always AMAZED me.”

In my early days as a teacher, I would have been horrified by this comment.

We had done such good work in this class! We analyzed the heck out of Macbeth. Jeremy had become a splendid writer–he could subordinate a quotation in an appositive like a pro. We had inspiring conversations about Their Eyes Were Watching God.

And all he remembered was the way I wrung out a tea bag?

The Hidden Compliment

Jeremy’s comment might seem like terrible news, but I think it’s good news. Here’s why:

The goal of sophomore English is for Jeremy to learn particular skills, facts, and habits of mind.

That is: he should remember–say–how to write a topic sentence with parallel abstract nouns.

However, he need not remember the specific tasks he undertook to learn that skill.

For example, when he wrote his essay about The Grapes of Wrath, he got better at writing essays. Whether or not he remembers the argument he made in that paper, he honed his analytical habits and writing skills. (How do I know? His next paper was better. And the next.)

He doesn’t remember the day he learned how to do those things. But, he definitely learned how to do them.

Many Memories

When psychologists first began studying memory, they quickly realized that “memory” isn’t one thing. We’ve got lots of different kinds of memory.

Those distinct memory systems remember different kinds of things. They store those memories in different places.

For instance: I’ve written a lot about working memory. That essential cognitive system works in a very particular way, with very important strengths and limitations.

But procedural memory, to take another example, works very differently. Procedural memory helps us remember how to do things: like, say, ride a bike, or form the past tense of an irregular verb.

These distinctions help me understand Jeremy’s memories of my class.

Jeremy had a strong autobiographical memory: my wringing out a teabag with my pen.

As the name suggests, autobiographical memories are rich with details about the events and people and circumstances.

You have countless such memories:

The time you poured coffee on your boss’s desk;

The first time you met your current partner;

The time you forgot your lines on stage.

You can call up vivid specifics with delicious–or agonizing–precision.

At the same time, Jeremy has lots of semantic memories from our class. As Clare Sealy describes them, semantic memories are “context free.” They “have been liberated from the emotional and spatiotemporal context in which they were first acquired.”

For instance:

Jeremy knows the difference between a direct object and a subject complement.

Having read The Ballad of the Sad Cafe, he knows how to analyze love triangles in literature.

Knowing how we define the word “romance” in English, he can explain the (many) bizarrenesses of The Scarlet Letter.

However, those semantic memories have an entirely different feel from autobiographical memories. They lack the vivid specifics.

Jeremy knows that a subject complement “renames or describes” the subject. But he can’t tell you the tie I was wearing when I first explained that. He can’t tell you the (probably gruesome) example I used to make the distinction clear.

If he could, they would be autobiographical memories as well as semantic memories.

Why The Distinction Matters

As teachers, we’re tempted–often encouraged–to make our classes dramatically memorable. We want our students to remember the time that we…

Surprisingly, that approach has a hidden downside.

As Clare Sealy explains in a recent essay, we can easily use information in semantic memory in a variety of circumstances. That is: transfer is relatively easy with semantic memory.

However, that’s not true for autobiographical memory. Because autobiographical memory is bound up with the vivid specifics of that very moment on that very day (in that very room with those very people), students can struggle to shift the underlying insight to new circumstances.

In other words: the vivid freshness of autobiographical memory impedes transfer.

Sealy explains this so nimbly that I want to quote her at length:

Emotional and sensory cues are triggered when we try to retrieve an autobiographical memory. The problem is that sometimes they remember the contextual tags but not the actual learning.

Autobiographical memory is so tied up with context, it is no good for remembering things once that context is no longer present.

This means that it has serious limitations in terms of its usefulness as the main strategy for educating children, since whatever is remembered is so bound up with the context in which it was taught. This does not make for flexible, transferable learning that can be brought to bear in different contexts and circumstances.

By the way, in the preceding passage, I’ve used the phrase “autobiographical memory” when Sealy wrote “episodic memory.” The two terms mean the same thing; I think that “autobiographical memory” is a more intuitive label.

To Sum Up

Of course we want our students to remember us and our class: the fun events, the dramatic personalities, the meaningful milestones.

And, we also want them to remember the topics and ideas and processes they learned.

Crucially, the word “remember” means something different in those two sentences; the first is autobiographical memory, the second is semantic.

Teaching strategies that emphasize remembering events might (sadly) make it harder to remember ideas and processes.

So, we should use teaching strategies that foster the creation of semantic memories.

Happily, the autobiographical memories will take care of themselves.


Clare Sealy’s essay appears in The researchED Guide to Education Myths: An Evidence-Informed Guide for Teachers. The (ironic) title is “Memorable Experiences Are the Best Way to Help Children Remember Things.”

Inquiry- and Problem-Based Pedagogy: Dramatic Results in South America (?)
Andrew Watson

A recent study, published by the Center for Effective Global Action, sees big benefits from teaching built around student collaboration, inquiry, and problem-solving.

Working in four countries (!), in ten different schools (!!), with over 17,000 students (!!!), researchers find that K-4 students made more progress in math and science when they explored questions, compared with students who listened to lectures.

They report these results in stats-y language that doesn’t translate well: after 7 months, students averaged 0.18 standard deviations higher in math, and 0.14 in science. After four years, those differences bloomed to 0.39 and 0.23.

That’s not as sexy sounding as, say, “they scored X% higher on a standardized test.” But, however you look at it, those are eye-catching numbers.
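For a rough sense of scale, a standardized effect size can be translated into percentile terms, if we assume normally distributed scores. Here’s a minimal sketch of that conversion (the normality assumption and the calculation are mine, not the study’s):

```python
# A rough, unofficial translation of the reported effect sizes into
# percentile terms, assuming normally distributed scores (my assumption,
# not a calculation from the study itself).
from statistics import NormalDist

effect_sizes = {
    "math, 7 months": 0.18,
    "science, 7 months": 0.14,
    "math, 4 years": 0.39,
    "science, 4 years": 0.23,
}

for label, d in effect_sizes.items():
    # An average student who moves up by d standard deviations lands at this
    # percentile of the comparison-group distribution.
    percentile = NormalDist().cdf(d) * 100
    print(f"{label}: 50th percentile -> {percentile:.0f}th percentile")
```

By that rough conversion, the average treated student moves from the 50th percentile of the comparison group to somewhere around the 56th to 65th percentile, depending on subject and time point.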

Inquiry Learning vs. What?

Despite these dramatic numbers and claims, I don’t think the study supports the strong claims made by these researchers.

Here’s why.

First, the research purports to study the difference between “inquiry and problem based pedagogy” and “traditional instruction.”

If you look over the description of the classes, however, I think you’ll quickly see that it studies the difference between “good” teaching and “bad” teaching.

So, for instance, in a “traditional” unit on the skeletal system in Argentina:

[S]tudents copy facts about bone tissues and the names of 206 bones of the human skeleton that teachers have written on the blackboard into notebooks.

That’s not traditional. That’s absurd. They copy the names of two hundred and six bones? The mind boggles.

And, by the way, the “inquiry and problem based pedagogy” [IPP] is full of good, old-fashioned direct instruction:

When done well, IPP includes elements of explicit instruction and scaffolding.

Teachers facilitate learning by guiding students through a series of steps and explicitly relating learning to students’ prior knowledge and experiences.

Teachers guide learners through complex tasks with explicit instructions that are relevant to the problems at hand.

They provide structure and scaffolding that help students not only carry out specific activities, but also comprehend why they are doing those activities and how they are related to the set of core concepts they are exploring.

So, yes, these students are inquiring and problem solving. And, they’re getting lots of explicit teacherly guidance.

So, again, the labels used in this study don’t fully align with what we typically mean by them.

Compared to Whom?

A second question jumps out here as well.

The teachers who used IPP methods got impressive training and support. For instance:

They got 20 hours of professional training in these methods. (When was the last time your school provided twenty hours of training on one topic?)

They got lesson plans. They got teaching materials.

They got “continuous in-school teacher support.”

What did the teachers in the control-group schools get? The study doesn’t say.

That silence leads to the possibility that they got…nothin’.

Which is to say: the study compares teachers who got lots and lots (and lots) of support, with teachers who didn’t get any support.

So, the difference might have come from the specifics of the teaching method: in this case, “IPP.”

Or, it might have come from the energizing effects of working at a school getting so much researcher support and attention.

We simply don’t know. And, if I’m right that this was a “business as usual” control group, then the study design doesn’t let us know.

Strong Conclusions

Based on this study, I think we can conclude that…

4th graders should not have to copy 206 vocabulary words into their notebooks. (I’ll go out on a limb and say NO ONE should have to do that.)

Some level of explicit teacherly support and guidance is essential.

Once foundational knowledge has been established, an appropriate level of independent questing can solidify and extend knowledge.

Most of us, I suspect, would have agreed with all of those statements before we read the study.

I don’t think, however, we can conclude from this study that “Inquiry and Problem Based Pedagogy” (as we typically use those words in the US) is the best approach. Because: that’s not what this study tested and measured.

Today’s Neuro-Nonsense: Reading Brainwaves in the Classroom
Andrew Watson

A friend recently sent me this link to a video story in the Wall Street Journal.

The 6-minute video discusses a program in China that uses a…well…gizmo to measure students’ attention levels during class.

Everything I know about this particular program comes from the video, so it’s possible I’m missing important information. But, based on the video, I render this temporary verdict:

THIS IS BONKERS.

Electroencephalography simply doesn’t work this way. That is: the gizmo is far too crude (and subject to irrelevant stimuli) to produce meaningful measurements.

And, attention is far too complex a cognitive process to be measured by three contact points with the skull. (Not joking. THREE.)

If I gave you a stethoscope and said, “Here. Please measure that person’s mental health,” I suspect you’d refuse.

You’d say something like this: “I can gather data, and then pretend that those data say something meaningful about a really complex network of mental functions. But you and I both know that this tool simply can’t do what you’re asking it to do.”

You’d be right.

One More Point

Of course, there’s LOTS to say about the social policy implications of using this gizmo. This isn’t a blog about social policy, so I’ll leave that to other analysts.

I’m answering the very basic technical question: “can we really do that?” The answer, as far as I know, is “certainly not.”

 

h/t Scott MacClintic

Getting Research to Work in Schools
Andrew Watson

Teachers work one classroom at a time.

If I learn how to teach better, I can help this group of students right here in front of me for as long as they’re here.

My devotion to Learning and the Brain conferences began when I saw how much better I taught when psychology research informed my teaching.

But what if we have bigger aspirations? What if we’d like to help whole grades of students? What if we’d like to help the entire school?

Scaling It Up

One idea has become increasingly popular in recent years: the “research lead.”

In this model, one teacher (or, perhaps, a group) becomes an expert in the magic of teaching+research.

That teacher, the research lead, then works with colleagues and administrators to shape practice and policy.

The research lead can coach other teachers. Inform discussions about school start times. Participate in hiring committees. Shape in-school professional development.

In general, this person spreads the good research word.

This model has several potential strengths. Primarily, it builds a research perspective into the structure of the school, rather than having it be an occasional add-on.

I know of several schools in the US that are adopting this model. The question is: does it work?

“No.” Or, Better Said, “Not Yet.”

The Education Endowment Foundation (EEF) in Great Britain wanted to know the answer to that question, so they commissioned a substantial study.

40 schools added research leads, and the EEF compared their students’ progress to students at schools that didn’t have research leads. They measured national exam results one and two years later.

The effect: basically nothing. There were differences, but they weren’t statistically significant.

This research isn’t encouraging, but I don’t think it’s as discouraging as it seems at first. Here’s why…

I worked as an academic administrator for several years, and I know from hard experience that change takes time.

Remember: the “research lead” model works–if it works–by coaxing. One colleague encourages another. She sits on a committee and adds an opinion to the mix. He steers people in one direction over another.

How long will it take for that coaxing to influence learning? Especially “learning” as measured by a standardized national test? I’m thinking the answer is: longer than two years.

After all: the teachers at these schools who wanted to follow research-based teaching suggestions may well have been doing so already. The research lead has to work with the less-than-all-in colleagues. And if you work in a school, you know that such work doesn’t produce immediate results.

My Suggestions

First: set reasonable expectations. I don’t know what the right amount of time is. But 1 or 2 years is too little.

Second: the model will probably have more success if the research leads have more power. Colleagues can coax colleagues, but leaders can do more than coax. (I know of at least one school where the head said: “If you’re going to work at this school, you’re going to base your teaching on research. If you don’t want to do that, we’ll help you find another job.” Now, THAT led to change.)

Third: the model will certainly be more effective if the leads stay in their roles a long time. In the EEF study, fully 40% of them stopped after one year. That turnover both highlights and compounds the difficulty of the position.

Of course, I don’t know that the “research lead” model will work. And, I certainly hope that research-based teaching becomes a broad interest: not one narrowly limited to a few people in a school.

Until that happens, I certainly think the research lead model deserves a longer test-drive.

 

If you’d like to learn more about research leads, and this EEF study, check out this article at Blog on Learning & Development.

Prior Knowledge: Building the Right Floor
Andrew Watson

Take a gander at this passage from Michael Jones’s recent biography of The Black Prince:

“In the fourteenth century England used a silver standard of currency. The unit of account was the pound sterling (£) which was equal to one and a half marks of silver. The pound was divided into twenty shillings (s), each of twelve pence (d). There was also, from 1344, a gold coinage based on the noble, which was conventionally worth 6s 8d, but was rarely used. It would, however, be significant in the calculation of the ransom of King John II and also in the introduction of gold coinage into Gascony and then the principality of Aquitaine by the Black Prince.”

Many readers, I suspect, felt tempted to give up relatively quickly. (Don’t blame yourself if you did.) Unless you’re really up to speed on 14th century English currency–both silver and gold!–the paragraph quickly becomes overwhelming.

The vocabulary in this passage probably doesn’t strain our cognition. Except for the phrase “marks of silver,” I know what all those words mean. (And, I can guess from context that a “mark” is some unit of measurement.)

However, the passage does place several mental demands on the reader.

First, it invites you to undertake several rapid mathematical calculations. (Quick: how many shillings in a mark? I work out the answer below.)

Second, it requires you to learn abbreviations as you go. To understand the fourth sentence, you need to remember the (wildly counter-intuitive) abbreviation of “pence” as “d” from the third sentence.

Third, it assumes you recall several events and places unfamiliar–I suspect–to most Americans. Who was King John II? Why was he ransomed…was he kidnapped? Where are Gascony and Aquitaine? They don’t sound very English — why did an English prince introduce coinage to them? Actually: why is a prince empowered to introduce new currency?
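For the curious, here’s the arithmetic that first demand implies, worked out from the conversions the quotation supplies (my own working, not Jones’s):

$$
1\ \text{pound} = 20\ \text{shillings} = 1.5\ \text{marks}
\;\Longrightarrow\;
1\ \text{mark} = \frac{20\ \text{shillings}}{1.5} = 13\tfrac{1}{3}\ \text{shillings} = 13\text{s}\ 4\text{d}.
$$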

Essential Background Knowledge

I thought of this paragraph when I read a recent article by Robert Pondiscio. In it, Pondiscio summarizes a study trying to determine how much background knowledge is necessary for comprehension.

In this study, students who scored higher than a 59% on a background knowledge test understood a reading passage substantially better than those who scored below 59%.

As summarized by Pondiscio, the study’s authors see some clear teaching implications here.

First, we can meaningfully measure our students’ relevant background knowledge.

Second, students who fall short on that measure will benefit A LOT if we provide them with the essentials.

For instance, students who understood that “habitat,” “species,” and “ecosystems” were relevant vocabulary for the study of ecology understood the reading passage more deeply. (The study included 3500 students, so I believe they controlled for various confounds. I haven’t read the study itself–it’s behind a paywall.)

I think those conclusions point to another:

Third: models of teaching that focus on “pure discovery” will create substantial challenges for students who lack background knowledge. Students who don’t know the basics of a topic simply can’t understand the field of inquiry within which they’re meant to discover.

And, they won’t feel motivated by curiosity to find out. They’ll feel discouraged by their confusion. (Few readers, I suspect, were motivated by the paragraph above to learn more about medieval English currency.)

A Final Thought

This study finds that 59% was the essential tipping point. Students who scored lower than 59% on the prior knowledge test found themselves in a different cognitive category than those who scored above.

However, that percentage does not necessarily apply to all circumstances.

In other words: we shouldn’t give our students prior-knowledge tests, and focus only on those who score 58% and below.

Instead, we should plan our lessons and units knowing that some floor-level of knowledge will be crucial for learning most things.

In every case–as you hear me say so often–we’ll have to rely on the teacher’s judgment to discover that level.

Researchers can remind us that the floor exists. But they can’t identify it for every teacher in every classroom. Ultimately, with that research guidance in mind, we’ll find the right place for the floor. And, we’ll build it.

Faster Learners Remember Better (Perhaps)
Andrew Watson

As a society, we often conflate speed with cognitive skill. People who learn faster seem smarter, somehow.

As a teacher, it took me years to get past that belief. (The short version of the story: I had a student who got 100% extra time on tests because her processing speed percentile was in the single digits. When I read her first essay I thought: you could have given any of my other students 100% extra time–they never would have come up with this analysis. She wasn’t fast, but good heavens she was insightful.)

I’ve recently found research suggesting NOT that speed = smarts, but that speedy learning = better memory.

Here’s the story.

The Study

Researchers had adults learn 45 pairs of English and Lithuanian nouns. (Relatively few people speak Lithuanian, and so it’s a good test language.)

Basically, they used computer-based flashcards to study the word pairs. Then, they went through the pairs and tried to fill in the blanks correctly.

If the adults got the right answer, the card was dropped from the deck. If they got it wrong, they could study the correct answer, so they could get it right the next time.

Researchers wanted to know, first: how long did it take them to get all the words in the deck right? That is: how many times did they cycle through the deck before they finished the task?

In other words: how quickly did they learn?

And, second, how well did they do on a test of those word pairs the following day?

In other words: how well did they learn?
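To make the procedure concrete, here’s a minimal sketch of that drop-out flashcard loop (the word pairs, the simulated learner, and all the names here are my own illustration, not the researchers’ actual software):

```python
# A minimal sketch of the drop-out flashcard procedure described above.
# The word pairs and the simulated "learner" are invented for illustration.
import random

def cycles_to_learn(deck, probability_of_recall=0.3, seed=0):
    """Count passes through the deck until every card has been answered correctly once."""
    rng = random.Random(seed)
    remaining = list(deck)
    cycles = 0
    while remaining:
        cycles += 1
        still_missed = []
        for english, lithuanian in remaining:
            # Stand-in for the learner typing an answer; correct with some probability.
            recalled = rng.random() < probability_of_recall
            if recalled:
                continue  # right answer: the card is dropped from the deck
            still_missed.append((english, lithuanian))  # wrong: restudy it and keep it in the deck
        remaining = still_missed
    return cycles

# Toy word pairs, purely for illustration.
deck = [("book", "knyga"), ("sun", "saulė"), ("water", "vanduo")]
print(cycles_to_learn(deck))  # fewer cycles = faster learning, the study's key predictor
```

The number this returns, cycles needed to get every pair right once, is the “how quickly” measure; the next-day test supplies the “how well” measure.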

The Findings

As you can see in the graph, the fast learners remembered better. Check out the bottom half of the image below.

Zerr, C. L., Berg, J. J., Nelson, S. M., Fishell, A. K., Savalia, N. K., & McDermott, K. B. (2018). Learning efficiency: Identifying individual differences in learning rate and retention in healthy adults. Psychological Science, 29(9), 1436-1450.

The top 25% learned the words in about 7 cycles through the deck. And, the next day, they remembered about 89% of the word pairs. (I’m eyeballing these numbers from the graph.)

The bottom 25% learned the words in about 16 cycles. And, the next day, they remembered about 56%.

If you want to predict how well these adults would remember the word pairs the next day, the best question you could ask would be: “how quickly did they learn those word pairs today?”

And, here’s the crazy part:

4 of these researchers had used the Lithuanian word-pair method in a different study about 3 years before.

They recontacted those participants, and many of them agreed to be retested.

Sure enough, the people who learned relatively quickly (and remembered well) 3 years ago still did so. The people who learned relatively slowly (and didn’t remember so well) 3 years ago showed the same pattern again.

In other words: this is a relatively stable finding. The pattern holds not just from day to day, but over months and years.

Big Questions

First, of course, we’d like to know why this is true. The study’s authors speculate that the fast learners might be better at attentional control. Or, they might have higher working memory capacity. Or, they might come up with better memory strategies.

At present, we just don’t know.

Second, would this finding hold true for other kinds of learning? After all, we rarely ask our students to learn foreign-language word pairs in a single rapid-fire session. (Even foreign language teachers let students learn vocabulary over time.)

So: do people who learn letters quickly remember them better? Do people who learn science formulas quickly remember them better?

Does this finding matter for other kinds of learning?

We don’t know.

Third, is this true if we test students’ memories more than a day later? In both halves of this research, participants learned words one day, and the final test was the next day.

What would we find if the final test was two weeks later?

Once again: we just don’t know.

What Should Teachers Do?

For the time being, I don’t think this research creates emphatic teaching guidelines. We simply have too many important questions to let our practice change much based on this study.

We should, of course, keep our eyes out for further exploration of this topic.

If anything, I’d suggest this strategy. If you have a student or two who take an unusually long time to learn something, you might check in with them the next day to see how well those memories have consolidated.

They just might need an extra boost.

This suggestion is, of course, speculation; I’m not insisting that all readers adopt it. However, it seems to me the most plausible application of potentially important research.

Motivation = “Self-Determination” + Common Sense
Andrew Watson

Self-Determination Theory, one of the most important theories of motivation, tends to operate behind the scenes.

That is: researchers often use self-determination theory to explain why something else works.

The theory itself argues that humans are motivated by a desire for three basic things.

Autonomy

Relatedness

Competence

Unlike many terms in psychology, those three mean exactly what you think they mean. So, “competence” means, basically, the feeling that I’m skillful at whatever I’m doing. “Relatedness” means, basically, “connected with others.” And so forth.

When giving teachers advice, researchers often turn to self-determination theory to explain why a particular set of suggestions might help students learn.

Goals and Feedback

Common sense tells teachers that we should make goals clear to our students. And, we should offer them specific feedback.

But, why might those two things help? Specifically, why might they promote motivation?

Researchers in Belgium and The Netherlands hypothesized that clear goals and specific feedback might encourage self-determination.

Specifically:

If I, as a student, know what the goals are, I can work more independently to achieve them. That will make me feel autonomous, and competent.

Likewise, specific feedback will allow me to work effectively–that is, competently.

And, of course, goals and (especially) feedback will increase my sense of relatedness with my teacher.

To test this hypothesis, the researchers worked with 500+ high school students taking PE classes. They surveyed them 6 times about their classes, asking about clarity of goals and feedback, and measuring their feelings of autonomy, relatedness, and competence.

What did they find?

(Too Much Of) A Good Thing

Sure enough, they found that clear goals and precise feedback helped students feel “in charge of their learning processes”: that is, autonomous.

They also felt more competent, and more connected and cared for.

In brief: goals and feedback can help students in a number of ways. In the world of motivation theory, they boost the three key components of self-determination theory.

Perhaps the most interesting part of this research puts an asterisk on that finding. While feedback helps, lots and lots of feedback reduces feelings of both competence and relatedness.

In fact, these findings make sense. If my teacher has to give me lots of feedback, the implication is that I’m not very good at what I’m doing–that is, not very competent.

And, that hovering might well feel irritating–reducing rather than increasing relatedness.

In other words, as is so often true, teachers have to apply research-based advice skillfully. We want to have clear goals and helpful feedback. And, we want to ensure that “helpful feedback” doesn’t tip over into excessive feedback.

Paradoxically, too much of a good thing can convert motivation into demotivation.

What if a Research-Supported Educational Idea is Unconstitutional?
Andrew Watson

As a reader of this blog, you know we’ve got lots (and LOTS) of research showing that sleep benefits learning.

A good night’s sleep consolidates memories. Naps improve learning. Heck: sleep even helps us forget things we want to forget.

What if it turned out that—for some reason—giving students time to nap were unconstitutional? What if lawyers argued we can’t delay high-school start times to allow for more sleep, because the constitution requires school to begin before 8:00 am?

Although this hypothetical example is obviously silly—why would naps be unconstitutional?—the deeper question matters. We are, after all, a country of laws. If a particular educational practice—like, say, “separate but equal”—is unconstitutional, we don’t do it.

Obviously.

A Current Example

The Tricycle is a magazine that promotes Buddhism, and for that reason takes real interest in mindfulness. You can, for instance, read about “The Buddha’s Original Teachings on Mindfulness” here.

Back in May, Tricycle editor James Shaheen posted a surprising interview on their website.

In this conversation, Shaheen talks with Dr. Candy Gunther Brown: a religious studies professor who argues that mindfulness can’t be separated from its religious origins.

For that reason, requiring mindfulness is—in fact—requiring a kind of religious activity: a requirement that, in Brown’s view, violates the US Constitution.

Digging Deeper

Shaheen’s conversation with Brown lasts almost an hour, and so can’t be summarized in a brief blog post. I encourage you to listen to the whole thing.

To give a flavor of her argument, I’ll briefly mention a few of her points.

First: mindfulness often encourages participants to adopt a detached view of the events around them, and their emotional responses to those events. Rather than respond, we should “take a mental step back” and simply notice our bodily response to our feelings.

Brown notes that this seemingly “neutral stance” might well spill over into a religious perspective. After all, many religions ask their adherents not simply to notice their internal responses, but to go into the world and do something about those events.

For that reason, mindfulness isn’t a religiously neutral practice.

Second: Brown quotes research suggesting that people who practice mindfulness have religious experiences at a higher rate than those who don’t. And, people who have those experiences often end up moving away from a monotheistic perspective.

Which is to say, mindfulness might in fact change the religious views of the people participating in it.

If we know that to be true (and, to be clear, I haven’t reviewed the research Brown cites), then requiring students to practice mindfulness might both have all the school and health benefits we like and indirectly encourage a particular religious framework within schools.

Of course, Brown discusses other concerns as well. Some are explicitly legal. Others focus on the motives of (some) people and organizations that promote mindfulness: phrases like “stealth Buddhism” make her suspicions amply clear.

You can read her own summary of her argument here.

What Should We Do?

Brown doesn’t want to banish or forbid mindful practices from schools. Instead, she wants schools that have them to ensure such programs are voluntary.

They should not even be “opt-out” programs that students must decline—risking their relationships with peers and teachers.

Instead, they should be “opt-in” programs that students sign up for and attend when they choose to.

In her view, this framework—especially if it offers appropriate alternatives—would both provide the good stuff that mindfulness allows and pass constitutional muster.

Another important point stands out in this debate.

Note the source of this anti-mindfulness interview: a magazine and website devoted to Buddhism and mindfulness. (Okay, it’s not exactly an anti-mindfulness interview, but I suspect some people will see it that way.)

You might think that this magazine would be hostile to Brown’s position. You might think it would try to ignore her work, or shout it down.

Instead, editor Shaheen interviews Brown sympathetically and politely and knowledgeably. (He’s clearly read her book.)

So, the second thing we should do is emulate Shaheen’s example. As I’ve written elsewhere: when we hear about evidence that contradicts our beliefs, we should not ignore it or decry it.

We should, instead, learn as much as we can from it.

Whatever you think about the constitutionality of mindfulness, I hope you admire, and follow, Shaheen’s example.

When Introverts Act Like Extraverts (and Vice Versa)
Andrew Watson

Susan Cain’s 2012 book Quiet focused teacherly attention on students’ introversion.

In Cain’s telling, schools valorize extraversion over introversion.

We praise and reward the outspoken student. We worry over the quiet student. Often we champion group discussion and teamwork, and look askance at soft-spoken, individual effort.

Whatever the truth of these concerns, this framework rests on the unspoken assumption that introverts and extraverts remain constant in their identities. Introverts act introverted most (or all) of the time. Extraverts are the life of every party.

What if that weren’t true? What if we could deliberately act more one way or the other?

But, Why Are You Asking?

Before we investigate those questions, we should ask a more basic one: why bother? Why encourage extraverts to act like introverts?

It turns out that, on average, extraverts feel happier than introverts do. (Psychologists typically speak of “well being” more than happiness. And, they’ve got fancy terms to define and measure it. But, at the end of the day, extraverts experience more of the good stuff than the introverts do.)

So, if we could help introverts be more extraverted, would they feel happier?

And, by the way, happy people get other benefits. In Csikszentmihalyi’s terminology, they experience flow state more often. In Deci and Ryan’s terminology, they experience connectedness, autonomy, and competence more often.

And so, researchers ask this question as a way to promote happiness, and all the good things that come with it.

The Research, the Results

Seth Margolis and Sonja Lyubomirsky asked 130 college students to act more extraverted for a week, and then more introverted for a week. (Half went in that order; the other half went in the reverse order.)

They did all the things you’d want researchers to do. For instance: they took care to describe introverted and extraverted behavior equally positively. (That’s hard to do, in a culture that valorizes extraversion.)

What did they find?

Margolis and Lyubomirsky kept track of roughly 2 dozen variables, and so they’ve got LOTS of results to report. The headlines:

When these students acted more extraverted, they experienced more positive affect. When they acted more introverted, they experienced less.

Also, back to Csikszentmihalyi: they experienced flow more often when acting extraverted, and less often when acting introverted.

Back to Deci & Ryan: they experienced connectedness more often when extraverted.

Extraversion, however, had weak or inconsistent effects on life satisfaction and on Deci & Ryan’s competence.

But basically, extraversion–even forced extraversion–produced lots of benefits over the week.

Lots of Caveats

Margolis and Lyubomirsky have done unusual work here. For one thing, psychology studies rarely last two full weeks.

For that reason, we’ll want to look out for follow-up studies to see if other researchers arrive at similar conclusions.

Also, they emphasize that their data come from self-report–a kind of measurement that’s inherently less reliable than other kinds.

Those caveats (and many others) acknowledged, I think this study highlights encouraging possibilities.

First: people can successfully change their behavior. Even though I might incline to introversion, I can push myself to act extraverted. And when I do, I get the happiness benefits that extraverts get.

Second: in highlighting extraversion, schools might not be harming introverts as much as Cain worried. If, in fact, extraverts feel happy (and experience flow; and feel greater connectedness) more often than introverts, we might not be harming introverts by nudging them out of their comfort zones.

I should emphasize, this second point is my own: neither stated nor implied by Margolis and Lyubomirsky. And, I could be entirely wrong. Perhaps a week-long experiment in extraversion is beneficial for introverts, but more than a week is draining. Or, perhaps voluntary extraversion produces these benefits, but school-required extraversion doesn’t.

In any case, I think we can usefully rethink questions about introverts and extraverts in school.

Admitting My Bias

As you read this post–which seems to champion extraversion–you might wonder where I myself fall in this dichotomy. That is: am I promoting extraversion simply because I’m an extravert?

The answer is: I am a little bit of both.

If you meet me as a presenter at a Learning and the Brain conference, you’ll see my extraverted side. In that professional setting, I’m comfortable putting on my loud-in-public persona.

If you meet me at a cocktail party, you’ll definitely see my introverted side. I’ll be in the corner having a deep conversation with one person. I certainly won’t be introducing myself to strangers, and telling raucous jokes to a room of on-lookers. (Who am I kidding? I rarely go to cocktail parties, because my introverted side doesn’t like small talk.)

In brief: I’m not championing extraversion because I’m an extravert.

I’m inviting readers to rethink the very belief that extraverts and introverts are two different species. I think we’re all a bit of both.

And, if we can help our students (and ourselves) by encouraging extraversion, then schools and teachers should know the good we can do.

The Deepest Well: Healing the Long-Term Effects of Childhood Adversity by Nadine Burke Harris
Rebecca Gotlieb

Nadine Burke Harris explains that she wrote The Deepest Well: Healing the Long-Term Effects of Childhood Adversity to help parents and caregivers provide their children with the best opportunity in life, even when they face difficulties. This book is a critical and eye-opening read for those invested in supporting the health and education of young people. As Harris chronicles her own career as a researcher, pediatric clinician, and founder and CEO of the Center for Youth Wellness in San Francisco, she leads the reader through her process of discovering that childhood adversities cause profound and lasting changes in the body and that adverse childhood experiences (ACEs) are widespread and transmitted from one generation to the next. Using her own successes at the Center for Youth Wellness as an example, and with suggestions for future efforts, she advocates for a public health response that includes prevention of adverse childhood experiences and quick, sensitive screening for ACEs, in conjunction with a medical and mental health response for treating the psychological and physical effects of trauma. While we need to learn more about how to recover from adversities, six factors we know to be helpful are sleep, mental health, healthy relationships, exercise, nutrition, and mindfulness.

Early in her career, Harris researched the biochemical basis of stress in tadpoles, finding that exposure to stress-related corticosterone early in development inhibited growth and decreased health. Later, when she served as a pediatrician in a community facing poverty, discrimination, and other hardships, she noticed a similar pattern in her young patients. She explains that the stress response can be beneficial, even lifesaving, in instances that call for acute stress. However, when the stress response is activated intensely for a prolonged period, it damages health. A study conducted in 1985 showed that the more exposure adults had before they were 18 to emotional, physical, or sexual abuse and violence, physical or emotional neglect, substance abuse or mental illness, divorce, or criminal behavior, the worse their immune function and heart health, and the higher their cancer risk. ACEs have neurological, hormonal, and immunological consequences. People who have experienced six or more of these ACEs have a life expectancy that is 20 years shorter than people who have experienced none. Among kids, exposure to four or more of the ACEs is associated with a 32-fold increase in the likelihood of being diagnosed with a learning or behavioral problem. Indeed, ACEs are at the root of many issues in public education.

Harris carefully and deliberately explains that toxic stress can be experienced by anyone. Communities of color and communities facing poverty are more likely to be in a constant state of arousal resulting in more trauma symptoms. What biological research shows, however, is that everyone is equally susceptible to the health effects of trauma when adversity strikes, and everyone is equally in need of help when that happens.

What does Harris propose can be done to prevent and mitigate the ill effects of ACEs? When children have safe, stable, and nurturing caregivers, even if the children are exposed to stressful or dangerous communities, these caregivers can act as an epigenetic force buffering against cellular aging and other adverse effects of trauma. A focus on prevention through caring adults is much more effective than treating the effects of trauma after it has occurred. Nonetheless, treatment is important. While talking about ACEs may feel taboo, universally screening all patients for the number of ACEs they have suffered should be standard practice. Mental health services should be available as part of primary care clinicians’ practice to make receiving these services easy. Exercise and nutrition can help improve brain functioning and the immune system. Sleep and mindfulness promote the healing of a dysregulated stress response.

Today 39 states and the District of Columbia collect data about ACEs. These data have revealed that more than half of the population has at least one ACE and at least thirteen percent have four or more. Harris notes that many have experienced positive effects of adversity—e.g., developing greater empathy or the ability to persevere. While she accepts that this is true, and has even experienced it in her own life, she reminds the reader that we should not make character judgments of people who react poorly in the face of adversity. Given the prevalence of ACEs, Harris makes a compelling case for continuing to pursue more advanced ways to treat their health sequelae. This powerful book concludes with the ACE questionnaire, so that readers can determine their own ACE score or that of the children for whom they care.

Harris, N. B. (2018). The deepest well: Healing the long-term effects of childhood adversity. New York, NY: Houghton Mifflin Harcourt.