Sorting Hats, Myers-Briggs, and the Perils of False Classification
Andrew Watson

Over at the Hidden Brain podcast, host Shankar Vedantam worries about our habit of sorting people into distinct categories.

When it happens at Hogwarts with a talking hat, such sorting can be cute. When it happens in real life, it can create real damage.

Vedantam, although a skeptic, offers a helpfully balanced exploration of this question. He even interviews a psychologist who worked for the Myers-Briggs Foundation to get the best argument in favor of its responsible use.

Of course, in schools this sorting habit shows up most perniciously with learning styles. Three key points to remember about this theory:

Learning Styles Theory makes specific predictions. Research does not bear them out. To the degree that research can show a theory to be false, this theory is false.

Instead, each of us is a learning style of one.

Luckily, we are more alike than different as learners. We can help students by maintaining optimal (relatively low) levels of stress. And, by reducing distracting stimuli in the classroom. Also, by using strategies that create “desirable difficulties.”

In other words: teachers don’t need to sort students into false categories. Cognitive psychology research helps us teach our students all unsorted, just as they are.


Not All of Us Work Effectively in a “Memory Palace”
Andrew Watson

You’ve probably heard of the “method of loci,” or — more glamorously — the “memory palace.”

Here’s how the strategy works. If I want to remember several words, I visualize them along a path that I know well: say, the walk from my house to the square where I do all my shopping.

To recall the words, I simply walk along that path again in my mind. This combination of visuals — the more striking the better — will help me remember even a long list of unrelated words.

This method gets lots of love, most famously in Joshua Foer’s Moonwalking with Einstein.

Surely we should teach it to our students, no?

Palace Boundaries

We always look for boundary conditions here on the blog. That is, even good teaching ideas have limits, and we want to know what’s outside those limits.

So, for the “method of loci,” one question goes like this: how often do you ask your students to memorize long lists of unrelated words?

If the answer is, “not often,” then I’m not sure how much they’ll benefit from building a memory palace.

Dr. Christopher Sanchez wondered about another limit.

The “method of loci” relies on visualization. Not everyone is equally good at that. Does “visuospatial aptitude” influence the usefulness of building a memory palace?

One Answer, Many Questions

The study to answer this question is quite straightforward. Sanchez had several students memorize words. Some were instructed to use a memory palace; some were not. All took tests of their visuospatial aptitude.

Sure enough, as Sanchez predicted, students who used a memory palace remembered more words than those who didn’t.

And, crucially, palace builders with HIGH visuospatial aptitude recalled more words than those with LOW aptitude.

In fact, those with low aptitude said the memory-palace strategy made the memory task much harder.

This research finding offers a specific example of a general truth. Like all teaching strategies, memory palaces may help some students — but they don’t help all students equally.

This finding also leads to some important questions.

First: If a student has low visuospatial aptitude, how can we tell?

At this point, I don’t have an easy way to diagnose that condition. (I’ve asked around, but so far no luck.)

My best advice is: if a student says to you, “I tried that memory palace thing, but it just didn’t work for me. It’s so HARD!” believe the student.

Second: does this finding apply to other visualization strategies? More broadly, does it apply to dual coding theory?

Again, I think the answer is “probably yes.” Making information visual will help some students…but probably not all of them.

The Big Question (I Can’t Look…)

This next question alarms me a little; I hardly dare write it down. But, here goes…

As you know, learning styles theory has been soundly debunked.

However, might Sanchez’s research imply a kind of learning-anti-style?

That is, no one is a “visual learner.” But, perhaps some people don’t learn well from visual cues, and rely more on other ways of taking in information?

In other words: some students might have a diagnosed learning difference. Others might not have a serious enough difference to merit a diagnosis — but nonetheless struggle meaningfully to process information a particular way.

Those students, like Sanchez’s students with low visuospatial aptitude, struggle to process information in one particular way, and so prefer to use alternate means.

So, again, that’s not so much a “learning style” as a “learning anti-style”: “I prefer anything but visual, please…”

I haven’t seen this question asked, much less investigated. I’ll let you know what I find as I explore it further.

The Best Teaching Book to Read This Summer: Powerful Teaching
Andrew Watson

Let’s describe a perfect book for a Learning and the Brain conference goer:

First: it should begin with solid science. Teachers don’t want advice based on hunches or upbeat guesswork. We’d like real research.

Second: it should include lots of classroom specifics. While research advice can offer us general guidance, we’d like some suggestions on adapting it to our classroom particulars.

Third: it should welcome teachers as equal players in this field. While lots of people tell teachers to “do what research tells us to do” – that is, to stop trusting our instincts – we’d like a book that values us for our experience. And, yes, for our instincts.

And, while I’m making this list of hopes for an impossibly perfect book, I’ll add one more.

Fourth: it should be conspicuously well-written. We’d like a lively writing voice: one that gets the science right, but sounds more like a conversation than a lecture.

Clearly, such a book can’t exist.

Except that it does. And: you can get it soon.

Memory researcher Pooja Agarwal and teacher Patrice Bain have written Powerful Teaching: Unleash the Science of Learning. Let’s see how their book stacks up against our (impossible) criteria:

First: Begins with Research

If you attend Learning and the Brain conferences, you prioritize brain research.

We’re not here for the fads. We’re here for the best ideas that can be supported by psychology and neuroscience.

Happily, Powerful Teaching draws its classroom guidance from extensive research.

Citing dozens of studies done over multiple decades, Agarwal and Bain champion four teaching strategies: retrieval practice, spacing, interleaving, and metacognition.

(As frequent blog readers, you’ve read lots about these topics.)

Agarwal herself did much of the research cited here. In fact, (researcher) Agarwal did much of the on-the-ground research in (teacher) Bain’s classrooms.

And Agarwal studied and worked with many of the best-known memory researchers in the field: “Roddy” Roediger, Mark McDaniel, and Kathleen McDermott, among others.

(BTW: McDaniel will be speaking at the LatB conference this fall in Boston.)

In short: if you read a recommendation in Powerful Teaching, you can be confident that LOTS of quality research supports that conclusion.

Second: Offers Classroom Specifics

Powerful Teaching is written by two teachers. Bain taught grades 6-8 for decades. And Agarwal is currently a psychology professor.

For this reason, their book BOTH offers research-based teaching advice AND gives dozens of specific classroom examples.

What does retrieval practice look like in the classroom? No worries: they’ve got you covered.

This strength merits particular attention, because it helps solve a common problem in our field.

Teachers often hear researchers say, “I studied this technique, and got a good result.” We infer that we should try that same technique.

But, most research takes place in college classrooms. And, the technique that works with that age group just might not work with our students.

How should we translate these research principles to our classrooms? Over and over again — with specific, practical, and imaginative examples — Bain and Agarwal show us how.

Third: Welcomes Teachers

Increasingly in recent months, I’ve seen scholars argue that teacherly instincts should not be trusted. We should just do what research tells us to do.

As I’ve written elsewhere, I think this argument does lots of damage—because we HAVE to use our instincts.

How exactly do research-based principles of instruction work in thousands of different classrooms? Teachers have to adapt those principles, and we’ll need our experience —and our instincts—to do so.

Powerful Teaching makes exactly this point. As Bain and Agarwal write:

You can use Power Tools your way, in your classroom. From preschool through medical school, and biology to sign language, these strategies increase learning for diverse students, grade levels, and subject areas. There are multiple ways to use these strategies to boost students’ learning, making them flexible in your classroom, not just any classroom.

Or, more succinctly:

The better you understand the research behind the strategies, the more effectively you can adapt them in your classroom – and you know your classroom best.

By including so many teachers’ experiences and suggestions, Agarwal and Bain put teacherly insight at the center of their thinking. They don’t need to argue that teachers should have a role; they simply show us that it’s true.

Fourth: Lively Voice

Scientific research offers teachers lots of splendid guidance … but if you’ve tried to read the research, you know it can be dry. Parched, even.

Happily, both Bain and Agarwal have lively writing voices. Powerful Teaching doesn’t feel like a dry lecture, but a friendly conversation.

For example:

Learning is complex and messy, it’s not something we can touch, and it’s really hard to define. You might even say that the learning process looks more like a blob than a flowchart.

Having tried to draw many learning flowcharts, only to end up with blobs, I appreciate this honest and accurate advice.

What’s Not to Love?

As a reviewer, I really should offer at least some criticism of Powerful Teaching. Alas, I really don’t have much – at least not much that’s substantive.

Once or twice, I thought that the research behind a particular finding was more muddled than PT lets on. For example, as I’ve written about before, we’ve got contradictory evidence about the benefits of retrieval practice for unstudied material.

But, as noted above, Agarwal is an important researcher in this field, and so I’m inclined to trust her judgment.

Mostly, I think you should put Powerful Teaching at the top of your summer reading list. You might sign up for the summer book club. Keep an eye on the website for updates.

Pointing Out Online Mistakes Like a “Jerk”: More Misuses of Psychology Research
Andrew Watson

Headline writers face a difficult task, I suspect.

On the one hand, they want to capture the gist of the article. On the other hand, they really want you to click the link.

I thought about this puzzle when I read this recent headline:

People Who Point Out Grammar Errors Online Are Pretty Much Jerks, Study Finds

That’s an arresting claim. After all, the word “jerks” doesn’t often appear in psychology research papers…

Digging Deeper

So, what does this particular study say? Are people who “point out” online grammar errors “jerks”?

Researchers Boland and Queen asked themselves this question: does someone’s personality profile influence their response to written mistakes — such as typos or grammar errors?

(By the way: it would seem odd if the answer were “no.” If there is such a thing as a personality profile, shouldn’t it capture — among other things — the way people respond to one another’s errors?

But, in the field of psychology, we don’t just assume things. We research them. That’s what Boland and Queen do here.)

To answer their question, B&Q had 80+ people read short paragraphs: people’s responses to a “housemate wanted” ad.

Some of the responses were error free. Some included typos: “maybe we would mkae good housemates.” Some included grammatical errors: “If your someone who likes to play tennis…”

Participants then evaluated the authors of each paragraph. They also filled out a personality survey measuring “the big five” personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.

So, here’s the research question: did their personality traits predict their responses to grammatical errors and typos?

The Results

The answer is: a smidge.

For instance, people with higher ratings of agreeableness didn’t much care about grammatical errors. People with lower agreeableness ratings cared a bit.

How much?

Well, on average, people with lower agreeableness rated an error-free message at roughly 4.2. But, they rated a message with two grammar errors at roughly 4.0.

On a 7-point scale, does that 0.2 difference really matter? It was statistically significant. But, the researchers’ methodology makes it hard to evaluate the difference.

Here’s a hypothetical. When my students study using method A, they average an 80 on the unit test. When they study using method B, they average an 80.5.

Method B might be “better” in a way that’s statistically significant. But, it’s honestly not significant in the way that you and I use that word. If, for instance, method B takes 3 times as long as method A, that extra 0.5 point almost certainly wasn’t worth it.

So too in this case. The less agreeable folks might, on average, give lower ratings. But, 0.2 points hardly seems like a big enough deal to worry about.
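For readers who like to see that distinction in numbers, here is a minimal sketch in Python, using made-up scores along the lines of my hypothetical above (not data from the study). With enough students, even a half-point difference in average scores comes out “statistically significant,” while a standard effect-size measure shows the effect is trivially small. The means, standard deviation, and sample size are invented for illustration.

```python
# Minimal sketch: a tiny average difference can be statistically significant
# with a large enough sample, yet practically negligible. All numbers are
# hypothetical (method A averages 80, method B averages 80.5).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# 10,000 simulated test scores per study method, standard deviation of 10.
method_a = rng.normal(loc=80.0, scale=10.0, size=10_000)
method_b = rng.normal(loc=80.5, scale=10.0, size=10_000)

# Independent-samples t-test: is the difference "statistically significant"?
t_stat, p_value = stats.ttest_ind(method_a, method_b)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((method_a.var(ddof=1) + method_b.var(ddof=1)) / 2)
cohens_d = (method_b.mean() - method_a.mean()) / pooled_sd

print(f"p-value:   {p_value:.4f}")   # likely well below 0.05
print(f"Cohen's d: {cohens_d:.3f}")  # around 0.05 -- a trivially small effect
```

The p-value alone tells us a difference probably isn’t due to chance; it says nothing about whether the difference is big enough to matter.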

So, Are People Who Point Out Online Grammar Errors Jerks?

First: NO ONE POINTED OUT ANY ONLINE GRAMMAR ERRORS. It just didn’t happen.

Second: The study shows that people with a relatively low agreeableness rating feel more judgey about online grammar mistakes.

It does not show that people who comment on grammar mistakes have lower agreeableness scores.

And it certainly does not show that this particular person who just commented on a post has a low agreeableness score.

Those questions are related, but different. And, the differences really matter. Especially if you’re going to call someone a jerk.

Teaching Implications

When you see a headline like “Science Shows Group X Are Jerks,” have confidence it’s a wild overstatement.

So, when “science says” that …

“Teaching method X makes kids brilliant.”

“Cell phones make the world dumb and cruel.” (Or, “Cell phones will transform education and make classrooms perfect.”)

“This one habit will change your classroom forever.”

…follow up with the underlying research. See what the research says specifically. Decide whether or not it works for you and your students.

A Final Note

I’m honestly hoping that this article includes either a typo or a grammatical mistake. If it does, please point it out to me. I promise I won’t think you’re a jerk…

Today’s Unpopular Research Finding: Potential Perils of Mindfulness
Andrew Watson

Mindfulness has a great reputation.

Students and teachers can start meditation programs quite easily. And, we’ve heard about its myriad benefits: reduced stress, greater concentration, enhanced classroom cooperation.

If we can fix so many school problems for (essentially) no money, what’s not to love?

Today’s Headline: “Particularly Unpleasant” Experiences

We’ve heard about all the good things that mindfulness can produce. Does it lead to any bad things?

Several researchers in Europe wanted to know if it led to “particularly unpleasant” experiences: “anxiety, fear, distorted emotions or thoughts, altered sense of self or the world.”

In particular, they asked if these experiences occurred during or after meditating.

They surveyed 1200+ people who had practiced meditation for at least two months. (The average experience meditating was, in fact, six years.)

Amazingly, more than 300 of them — 25% — reported a “particularly unpleasant” experience.

And, their findings are in line with two earlier studies (here and here), which reported 25% and 32% of meditators had such experiences.

The rate was lower for religious meditators, and slightly higher for men than women. The kind of meditation mattered somewhat. And (surprisingly for me), the rate was higher among those who had attended meditation retreats.

Lots of other variables didn’t matter: for instance, years of meditation experience, or length of meditation session.

Classroom Implications: Don’ts and Do’s

Don’t Panic. If you’re currently running a mindfulness program, you don’t need to abandon ship.

Keep in mind:

This study asked respondents one question. We can’t draw extravagant conclusions from just one question.

The study focused on adults, not K-12 students.

We can’t draw causal links. That is: we don’t know, based on this study design, if the meditation led to the “particularly unpleasant” experience. We don’t even know what that rate would be for people in a control group.

We’re still VERY EARLY in exploring this question. We’ve now got 3 studies pointing this direction. But, we need more research — and more consistent ways of investigating this link — to know what to make of it.

Do’s

First: Use this research to improve the mindfulness program you have, or the one you’re planning.

That is: if you’ve got such a program, or have one under consideration, ask yourself: do you see signs that your students have unpleasant experiences?

Are you giving them permission and opportunity to say so?

Do the people running the mindfulness session know what to do if they get that kind of response?

After all, this research team isn’t asking schools and teachers to stop meditating. Like good scientists, they’re looking at both potential benefits and potential detriments.

Second: More generally, let this research be a healthy reminder. Almost all school changes lead to both good and bad results.

While mindfulness breaks might have lots of benefits, they might well have some downsides. So too with everything else.

We should always ask about the downsides.

When doesn’t retrieval practice help? Being outside might help some students learn something, but could it hamper others trying to learn other things?

When we actively seek out both the good and bad in the research-based practices we adopt, we’re likelier to use them more thoughtfully and effectively.

Right Brained Language Learning (And Other Reasons to Ignore Brain Myths)
Andrew Watson

As you know from reading this blog, cognitive psychology offers teachers dozens of helpful ideas.

We’re all better teachers when we enhance executive function and foster attention and manage working memory load.

Alas, over the years, many brain myths have gathered to clutter our thinking.

No, we don’t use only 10% of our brains.

No, the “learning pyramid” doesn’t tell you anything useful. (It doesn’t even make sense.)

No, learning styles aren’t a thing.

“Left-Brained” Skepticism

You might think I’m using my “rational, left-brained thinking skills” to offer these skeptical opinions.

Alas, the whole left brain/right brain distinction is itself another myth.

In some cases, brain functions happen more on one side of the brain than the other. But, even when we’re undertaking that particular function, we’re using brain regions from all over to get the mental job done.

A case in point…

“Lateralized” Language. Or, not.

Dedicated left-brain/right-brain advocates often point to language function to make their case.

For instance, Broca’s area — which helps us produce coherent speech — is in the left hemisphere. (So is Wernicke’s area, which helps us understand speech.)

Given these truths, they argue that speech is a “lateralized” brain function. In other words, it takes place in one hemisphere of the brain, not the other.

This claim, however, suffers from several flaws.

In the first place, Broca’s area is in the left hemisphere for 95% of right-handed people. But, that’s not 100%. And, that percentage falls to 50% for left-handed people.

Not so left-lateralized after all.

A second problem: language learning requires lots of right-hemisphere participation.

In a recent study, activity in the right hemisphere predicted participants’ later success in learning Mandarin. In fact, “enhanced cross-hemispheric resting-state connectivity [was] found in successful learners.”

Phrases like “cross-hemispheric resting-state connectivity” might cause your eyes to glaze over. But, this key point jumps out: we can’t meaningfully ascribe language function to one hemisphere or another.

All complex mental activities require activation across the brain.

Teaching Implications

If you get teaching advice that you should do XYZ because a particular mental function takes place in a particular hemisphere: STOP.

Almost certainly, this claim

a) isn’t meaningfully accurate, and

b) comes from sources who don’t know as much about brains as they think they do.

Instead, ask yourself: does this guidance make sense even without claims about lateralization?

If yes, go ahead! If no, don’t bother.

In other words: use your whole brain and be skeptical.

Visual & Verbal: Welcome to “Dual Coding”
Andrew Watson

Over at LoveToTeach87, Kate Jones has written a thoughtful and thorough exploration of Dual Coding.

What is “dual coding”? In brief, when we take care to present information in two formats — visual and verbal — we are dual coding.

Schools too often focus on verbal presentation of information. Dual coding theory reminds us to add visuals as well.

Jones’s post begins with a helpful overview of recent work in the field: in particular, the Learning Scientists, and also Oliver Caviglioli.

She then changes gears, and offers a variety of specific classroom examples for putting this concept into practice:

timelines

comics/storyboards

revised notes

summary cards

And so forth. Because her post includes so many splendid examples, I encourage you to check it out.

Behind the What, the Why

But, why exactly does dual coding help?

The short answer is: lots of reasons. But for me, the core answer comes back — as it so often does — to working memory.

Working memory allows brains to hold, reorganize, and combine information. In other words: all academic learning requires working memory.

And: we just don’t have very much. (Alas, there’s no artificial way to increase it. Yet.)

But, we do have a secret supply of extra working memory. More precisely, we have different WM stores for visual and auditory information.

If I present information only verbally, then students must hold all that information with the verbal part of their WM.

It’s like doing mental push-ups with one arm.

But, if I provide information both verbally and visually, they get to use both parts of their WM.

And, of course, two arm push-ups are much easier to do.

In sum: dual coding helps students learn, because it divides WM load between verbal and visual capacities.

A Brief Warning

This teaching advice sounds a bit like learning styles. It suggests that we’ve got distinct visual and verbal learning capacities.

PLEASE do not confuse these two theories. Learning styles theory has no support — it just ain’t true.

We ALL use visual WM. We ALL use auditory WM. (Those of us who have sight and hearing.)

There is no “style” here. This cognitive architecture supports learning for us all.

And so, dual coding benefits practically everyone.

Early Signs of Autism: “Joint Attention”
Andrew Watson

If you’re attending this weekend’s Learning and the Brain conference, you’ll have many opportunities to learn more about autism. In particular, you’ll hear how our understanding of autism gives us a broader understanding of human brains, cognition, and personality.

In this video, professor Simon Baron-Cohen discusses the importance of “joint attention” for early diagnosis of autism.

As you’ll see, joint attention occurs when the pre-verbal child points or looks at an object. Crucially, the child also checks to see if the parent is also looking. (The key passage begins at about 1:15 on the video.)

If you’re interested in joint attention, and especially its role in human evolution, I highly recommend Michael Tomasello’s book The Cultural Origins of Human Cognition.  In it, Tomasello does a masterly job sleuthing through primate behavior to discover uniquely human traits.


A Rose by Any Other Name Would Smell as Confusing
Andrew Watson

We have to admit it: when it comes to naming things, the field of psychology has no skills.

In many professions, we can easily distinguish between key terms.

The difference between a kidney and a pancreas? Easy.

The difference between a 2×4 and a 1×6? Easy.

The difference between an altimeter and a speed indicator? Easy.

But:

The difference between grit and resilience?

Between self-control and self-regulation?

Between an adolescent and a teen-ager? Um….

And, if we can’t define and distinguish among concepts easily, we’ll struggle to talk with each other sensibly about the work we’re doing.

I think of naming problems in several categories:

Sales-Pitch Names

Occasionally, psychologists come up with a name that seems to have been market tested for maximum sales.

Take, for instance, “wise feedback.”

Many researchers have explored a particular feedback structure that combines, first, an explicit statement of high standards, and second, an explicit statement of support.

For instance:

“I’ve made these suggestions on your essay because we have very high standards in the history department. And, I’m quite confident that – with the right kind of revision – this essay will meet those standards.”

(You can find research into this strategy here.)

I myself find the research quite persuasive. The strategy couldn’t be easier to implement. It couldn’t cost any less – it’s free! And, it’s particularly helpful for marginalized students.

But the phrase “wise feedback” rankles. Whenever I talk with teachers about this strategy, I feel like I’m participating in a late-night cable TV sales pitch.

Couldn’t we find a more neutral name? “Two-step feedback”? “Supportive standards feedback”?

Another example: “engagement.” Blake Harvard recently posted about this word, worrying that it’s too hard to define.

I agree. But, I also worry the name itself tries to prohibit debate. Who could be opposed to “engagement”?

In science world, however, we should always look for opposing viewpoints on any new suggestion. If a brand name – like “engagement” – feels too warm and fuzzy to oppose, the name itself inhibits scientific thinking.

By the way, almost everything that includes the word “brain” in it is a sales-pitch name: “Brain Gym.” “Brain Break.”

Of course, the right kind of exercise and activity do benefit learning. Short cognitive breaks do benefit learning. We don’t need to throw the word “brain” at those sentences to improve those strategies.

Poaching Names

If I’ve got a new idea, and no one pays attention to it, how might I get eyeballs on my website?

I know! I can use a pre-existing popular name, and staple it onto my concept – even if the two aren’t factually related to one another!

That way, readers will think that my new idea has links to that other well-known idea. Voila – instant credibility.

This “poaching” happens most often with “Mindset.”

You’ve probably read about an “empathy” mindset. Or a “technology” mindset. Or a “creative” mindset. Maybe, an “international” mindset. Or a “your product name here” mindset.

To be clear, these ideas might in fact help students learn. Empathy and creativity and an international perspective can certainly improve schools.

But, Dweck’s word “mindset” has a very particular meaning. She has done quite specific research to support a handful of quite specific theories.

Calling my new thing “a Watson mindset” implies that my work links with Dweck’s. But, that implication needs careful, critical investigation. If you trust Dweck, you don’t have to believe everything called “mindset.”

(Of course, not everyone does trust Dweck. But: that’s a different post.)

Confusing Names

These names make sense to the people who coin and use them. But, they’re not obviously connected to the concepts under discussion – especially to visitors in the field.

Here’s a crazy example: entity theorists.

Believe it or not, one of the best-known concepts in educational psychology used to distinguish between entity theorists and (not joking here) incremental theorists.

But then, in the late 1990s, Carol Dweck started a rebranding project, and now calls those things a fixed mindset and a growth mindset.

I rather suspect her ideas wouldn’t have gotten such traction without the new names.

(Imagine teachers earnestly encouraging their students: “remember to adopt an incremental theory!” I don’t see it…)

A Really Good Name

In the bad old days (the 2000s), psychologists did a lot of research into “the testing effect.” It’s a terrible name. No one in schools wants anything to do with more testing.

Let’s rebrand. How about “retrieval practice”?

That name has many strengths:

First: far from being confusing, it tells you exactly what it means. Practice by retrieving, not by reviewing. Couldn’t be clearer.

Second: far from being a sales pitch, it remains comfortably neutral. It’s not “awesome practice” or “perfect practice.” You get to investigate the research pro and con, and decide for yourself.

Third: rather than poaching (“students should develop a practice mindset!”), it stands on its own.

I don’t know who came up with this phrase. But, I tip my hat to a modest, clear, straightforward name.

We should all try to follow this clear and neutral example.


No, Brain Scans Can’t See You Think
Andrew Watson

Over at NPJ Science of Learning, Tracey Tokuhama-Espinosa debunks the myth that “brain scans see thought.”

In brief, Tokuhama-Espinosa argues that

Each brain imaging machine can, at best, measure a single dimension (electrical, chemical, or structural) of one sub-skill set …

No imaging machine can measure thought, only a sub-element of a thought.

The whole article is worth a read. Tokuhama-Espinosa has long made wise contributions to the field of Mind, Brain, Education. Her writing always merits attention.