working memory – Page 4 – Education & Teacher Conferences
Do Collaborative Projects Reduce or Increase Working Memory Stress?
Andrew Watson

Should teachers ask students to work on projects in teams?

This question generates a great deal of heat.

Many education thinkers advocate for the benefits of teamwork. Others insist that learning happens one brain at a time, and so should not be cluttered with interference from other brains.

Working Memory: Blessings and Curses

Working memory (WM) allows humans to hold and reorganize facts and ideas in temporary mental storage.

When you do a word problem, you must decide which parts should be translated into an equation. (Those decisions take WM.) You have to recall the appropriate equation to use. (Ditto.) And, you must plug the correct data into the correct formula before you can arrive at an answer. (Re-ditto.)

Composing a new sentence in a foreign language? Lots of working memory demands.

Comparing Paul Laurence Dunbar’s poetry with that of Countee Cullen? Yup.

Learning how to tell time? Once again – lots of working memory involved.

In other words, WM allows students to do practically everything that we want them to do in school.

And yet, this working memory blessing co-exists with a powerful curse: we just don’t have very much of it.

You probably can alphabetize five days of the work week. You probably can’t alphabetize 12 months of the year. The first task lies within WM limits; alas, the second goes way beyond them.

Collaboration’s WM Dangers

In a recent article, Paul Kirschner and others consider the WM benefits and perils of group work.

(These scholars, especially John Sweller, have elaborated “cognitive load theory” to explain the relationship between long-term memory, WM, and the external world of perception and experience. See here for a review.)

One important peril: the working memory demands created by collaboration. When students work together, they have to negotiate roles. They must create joint mental models. They have to schedule and prioritize and debate.

All these “musts” take up precious working memory space. The result might be that students get better at negotiating, modeling, and prioritizing. But, the WM devoted to those tasks might make it harder for them to learn the content at the heart of the project.

Of course: you might reasonably want your students to focus on the social-emotional skills. But, if you wanted them to focus on Shakespeare or Boyle’s law, then the project might not produce the results you hoped for.

Collaboration’s WM Benefits

At the same time, Kirschner & Co. also see working memory upsides to collaboration.

A particular cognitive task might include quite stiff WM demands. If the group includes members with the right kinds of background knowledge, then the WM chores can be divided up and managed more effectively.

Student A carries this part of the WM load.

Student B carries that part.

Student C takes care of the tricky last bit.

In this way, the WM whole can be greater than the sum of the parts.

In other words: if teachers can organize group projects so that a) the WM difficulties of collaboration remain low, and b) the benefits of sharing WM burdens remain high, then such collaboration can truly help students learn.

Putting It Together

Kirschner’s article concludes with a list of key variables for teachers to track: task complexity, domain expertise, team size, and so forth.

Be aware that cognitive load theory gets a little jargony, and you’ll need some time to learn the lingo before the article makes sense.

However, if you can devote that time, I think you’ll benefit from its practical suggestions, and helpful frameworks for planning students’ collaborative learning.

Praising Researchers, Despite Our Disagreements
Andrew Watson

This blog often critiques the hype around “brain training.” Whether it’s Lumosity or Tom Brady’s “brain speed” promises, we’ve seen time and again that they just don’t hold water.

Although I stand behind these critiques, I do want to pause and praise the determined researchers working in this field.

Although, as far as I can see, we just don’t have good research suggesting that brain training works*, it will be an AWESOME accomplishment if it someday comes to pass.

A Case In Point

I’ve just read a study that pursues this hypothesis: perhaps brain training doesn’t succeed because the training paradigms we’ve studied do only one thing.

So, a program to improve working memory might include cognitively demanding exercises, but nothing else. Or, brain stimulation, but nothing else. Or, physical exercise, but nothing else.

What would happen if you combined all three?

To test this question, Ward & Co. ran a remarkably complex study including 518 participants in 5 different research conditions. Some did cognitive exercises. Some also did physical exercises. And some also added neural stimulation.

The study even included TWO control groups.

And, each group participated in dozens of sessions of these trainings.

No matter the results, you have to be impressed with the determination (and organization) that goes into such a complex project.

Okay, but What Were The Results?

Sadly, not much. This study didn’t find that training results transferred to new tasks — which is the main reason we’d care about positive findings in the first place.

We might be inclined to think that the study “didn’t succeed.” That conclusion, however, misses the bigger point. The researchers pursued an entirely plausible hypothesis…and found that their evidence didn’t support it.

That is: they learned something highly useful, that other researchers might draw on in their own work.

Someday — we fervently hope — researchers will find the right combination to succeed in this task. Those who do so will have relied heavily on all the seemingly unsuccessful attempts that preceded them.

__________

* To be clear: the phrase “brain training” means “training core cognitive capacities, like working memory.”

From a different perspective, teaching itself is a form of brain training. When we teach our students, brains that once could not do something now can do that something.

Brains change all the time. “Brain training” aims for something grander. And, we haven’t yet figured out how to do it.

Andrew Watson

In a blog post, David Didau raises concerns about “the problem with teachers’ judgment.”

Here goes:

If a brain expert offers me a teaching suggestion, I might respond: “Well, I know my students, and that technique just wouldn’t work with them.”

Alas, this rebuttal simply removes me from the realm of scientific discussion.

Scientific research functions only when a claim can be disproven. Yet the claim “I know my students better than you do” can’t be disproven.

Safe in this “I know my students” fortress, I can resist all outside guidance.

As Didau writes:

If, in the face of contradictory evidence, we [teachers] make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please.

It’s important to note: Didau isn’t arguing with a straw man. He’s responding to a tweet in which a former teacher proudly announces: “I taught 20 years without evidence or research…I chose to listen to my students.”

(Didau’s original post is a few years old; he recently linked to it to rebut this teacher’s bluff boast.)

Beware Teachers’ Judgment, Part 2

In their excellent book Understanding How We Learn, the Learning Scientists Yana Weinstein and Megan Sumeracki make a related pair of arguments.

They perceive in teachers “a huge distrust of any information that comes ‘from above’” … and “a preference for relying on [teachers’] intuitions” (p. 22).

And yet, as they note,

There are two major problems that arise from a reliance on intuition.

The first is that our intuitions can lead us to pick the wrong learning strategies.

Second, once we land on a learning strategy, we tend to seek out “evidence” that favors the strategy we have picked. (p. 23)

Weinstein and Sumeracki cite lots of data supporting these concerns.

For instance, college students believe that rereading a textbook leads to more learning than does retrieval practice — even when their own experience shows the opposite.

The Problems with the Problem

I myself certainly agree that teachers should listen to guidance from psychology and neuroscience. Heck: I’ve spent more than 10 years making such research a part of my own teaching, and helping others do so too.

And yet, I worry that this perspective overstates its case.

Why? Because as I see it, we absolutely must rely on teachers’ judgment — and even intuition. Quite literally, we have no other choice. (I’m an English teacher. When I write “literally,” I mean literally.)

At a minimum, I see three ways that teachers’ judgments must be a cornerstone in teacher-researcher conversations.

Judgment #1: Context Always Matters

Researchers arrive at specific findings. And yet, the context in which we teach a) always matters, and b) almost never matches the context in which the research was done.

And therefore, we must rely on teachers’ judgments to translate the specific finding to our specific context.

For example: the estimable Nate Kornell has shown that the spacing effect applies to study with flashcards. In his research, students learned more by studying 1 pile of 20 flashcards than 4 piles of 5 flashcards. The bigger pile spaced out practice of specific flashcards, and thus yielded more learning.

So, clearly, we should always tell our students to study with decks of 20 flashcards.

No, we should not.

Kornell’s study showed that college students reviewing pairs of words learned more from 20-flashcard piles than 5-flashcard piles. But, I don’t teach college students. And: my students simply NEVER learn word pairs.

So: I think Kornell’s research gives us useful general guidance. Relatively large flashcard decks will probably result in more learning than relatively small ones. But, “relatively large” and “relatively small” will vary.

Doubtless, 2nd graders will want smaller decks than 9th graders.

Complex definitions will benefit from smaller decks than simple ones.

Flashcards with important historical dates can be studied in larger piles than flashcards with lengthy descriptions.

In every case, we have to rely on … yes … teachers’ judgments to translate a broad research principle to the specific classroom context.
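If it helps to see the arithmetic behind Kornell’s finding, here is a small sketch of why pile size changes the spacing between repetitions. (The pile sizes come from the example above; the round-robin scheduling detail is my own simplification, not Kornell’s exact procedure.)

```python
def study_order(num_cards, pile_size, passes=2):
    """Order in which cards are seen when num_cards flashcards are split
    into piles of pile_size, cycling through each pile `passes` times."""
    order = []
    for start in range(0, num_cards, pile_size):
        pile = list(range(start, start + pile_size))
        order.extend(pile * passes)
    return order

def gap_between_repeats(order, card):
    # Number of trials between the first and second sighting of a card.
    first = order.index(card)
    second = order.index(card, first + 1)
    return second - first

# One pile of 20: each card reappears only after 19 other cards.
print(gap_between_repeats(study_order(20, 20), 0))  # 20 trials apart
# Four piles of 5: each card reappears after just 4 other cards.
print(gap_between_repeats(study_order(20, 5), 0))   # 5 trials apart
```

The bigger pile forces a longer gap before each card comes around again; that gap is the “spacing” that boosts learning.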

Judgment #2: Combining Variables

Research works by isolating variables. Classrooms work by combining variables.

Who can best combine findings from various fields? Teachers.

So: we know from psychology research that interleaving improves learning.

We also know from psychology research that working memory overload impedes learning.

Let’s put those findings together and ask: at what point does too much interleaving lead to working memory overload?

It will be simply impossible for researchers to explore all possible combinations of interleaving within all levels of working memory challenge.

The best we can do: tell teachers about the benefits of interleaving, warn them about the dangers of WM overload – and let them use their judgment to find the right combination.

Judgment #3: Resolving Disputes

Some research findings point consistently in one direction. But, many research fields leave plenty of room for doubt, confusion, and contradiction.

For example: the field of retrieval practice is (seemingly) rock solid. We’ve got all sorts of research showing its effectiveness. I tell teachers and students about its benefits all the time.

And yet, we still don’t understand its boundary conditions well.

As I wrote last week, we do know that RP improves memory of specifically tested facts and processes. But we don’t know if it improves memory of facts and processes adjacent to the ones that got tested.

This study says it does. This one says it doesn’t.

So: what should teachers do right now, before we get a consistent research answer? We should hear about the current research, and then use our best judgment.

One Final Point

People who don’t want to rely on teacherly judgment might respond thus: “well, teachers have to be willing to listen to research, and to make changes to their practice based upon it.”

For example, that teacher who boasted about ignoring research is no model for our work.

I heartily – EMPHATICALLY – agree with that point of view.

At the same time, I ask this question: “why would teachers listen to research-based guidance if those offering it routinely belittle our judgment in the first place?”

If we start by telling teachers that their judgment is not to be trusted, we can’t be surprised that they respond with “a huge distrust of any information that comes ‘from above’.”

So, here’s my suggestion: the field of Mind, Brain, Education should emphasize equal partnership.

Teachers: listen respectfully to relevant psychology and neuroscience research. Be willing to make changes to your practice based upon it.

Psychology and neuroscience researchers: listen respectfully to teachers’ experience. Be up front about the limits of your knowledge and its applicability.

Made wiser by these many points of view, we can all trust each other to do our best within our fields of expertise.

Fool Me Twice, Shame on Me
Andrew Watson

We often post about the unreliability of “brain training.”

Heck, even though I live in Boston and am a Patriots fan, I made fun of Tom Brady’s website claiming to “increase brain speed” and other such nonsense. (I don’t even know what “increase brain speed” might mean.)

So, you’d think I’d be especially wary of these claims. But, even I can fall into such traps — at least temporarily. Last week, it happened TWICE.

Fool Me Once

Many researchers have claimed to be able to increase working memory capacity.

(It would be great if we could do so, because working memory is so important for all classroom learning.)

Alas, very consistently, we find that such programs don’t really work. (For instance, here and here.)

And so, I was very excited to see a new approach to the problem.

We have long known that the cerebellum helps control motor function. More recently, scientists have discovered that it also supports working memory performance.

Perhaps, we could strengthen cerebellar function, and that way enhance WM. Worth a try, no?

Although this explanation makes good sense, and the accompanying graphs looked impressive, I was drawn up short by a serious problem: the researchers didn’t measure working memory.

You read that right. Instead of a WM test, they gave participants a short-term memory test.

So, this research shows that cerebellar training might increase STM. But, it shows nothing about WM.

Brain training hopes dashed…

Fool Me Twice

Unlike WM training, we have had some luck with attention training.

For instance, Green and Bavelier have shown that playing certain computer games can increase various kinds of visual attention.

A recent study claimed that a specially designed iPad game could enhance sustained visual attention. I was gearing up to review the research so I could write about it here, when…

I learned that the test to measure students’ attention was very similar to the game itself. (H/t: Michael Kane)

In other words: participants might have gotten better because they (basically) practiced the test, not because their sustained attention improved.

To measure such progress, researchers would need a test that wasn’t similar to the game participants played.

Brain training hopes re-dashed…

The Big Take Away for Teachers

I’m basically an optimistic person, and I really don’t like being a grinch.

But, sometimes my job requires me to be grinchy.

At this point, I’ve been inspired by “brain training” claims so many times, only to be disappointed by an analysis of the research underlying those claims.

So, from now on, I’m just going to assume that new claims are highly likely to be false.

If brain training claims are subsequently replicated by many research teams; if the methodologies are scrutinized and approved by several scholars in the field; well, if that happens, I’ll relent.

For now, I don’t want to be fooled again.

10,000 People Talk About Sleep and Cognition
Andrew Watson

Most of the research studies I read include a few tens of people. Sixty or eighty is good; more than 100 is rare. I’ve seen published studies with an even dozen.


So when I hear about a study with over 10,000 participants, I sit up and take notice.

In this case, researchers in Canada asked people to fill out online surveys about sleep, and to take cognitive tests. Given their astonishing data pool, they can reach firm conclusions about the questions they’ve asked.

Sleep and Cognition: Firm Conclusions

Some of these conclusions will sound quite predictable. Others will surprise you. They certainly surprised me.

First, if you want optimal cognitive function, roughly 7-8 hours of sleep gives you the best results. (Assuming that “you” are an average person. Of course, not everyone is average.)

Second, that number doesn’t change with age. (See below for an important caveat.) That is: 30-year-olds and 80-year-olds think best with the same amount of sleep.

Third, too much sleep muddles cognition as much as too little sleep. As someone who likes sleeping, I’m sorry to say this but: the graphs don’t lie.

Fourth, non-optimal sleep doesn’t harm short-term memory. Researchers tested short-term memory with the “spatial span task.” Participants had to remember which boxes flashed green, and press them in the same order. Here’s an example:

https://www.youtube.com/watch?v=zWO_w3m4NQs

Instead, non-optimal sleep fuddles reasoning skills (like executive function and deductive reasoning) and verbal skills (like verbal working memory).

Of course, school requires A LOT of reasoning and verbal skill. No wonder sleep-deprived (or sleep-surfeited) students struggle.

(By the way, fifth, 48.9% of the participants didn’t get enough sleep.)

And, sixth: a good night of sleep really does help. That is: people who got even one good night’s sleep before the test saw a measurable uptick in their cognitive performance.

Caveats

From a researcher’s standpoint, it’s important to note that this team didn’t draw on a random sample. These participants volunteered by coming to a particular website.

And, all of the data here come from self-report. People could be deceiving the researchers. They could also be deceiving themselves.

From a teacher’s standpoint, we should note the age cut-off for this study: 18 years. K-12 students might see similar patterns. That is: their short-term memory might be fine after low-sleep nights, while their reasoning and verbal skills suffer.

Or, entirely plausibly, younger people might see different effects. We just don’t know.

A Final Note

In my experience as a high-school teacher, my colleagues (and I) experienced sleep deprivation as much as our students did.

We should, of course, encourage our students to get enough sleep. (We should also schedule the class day to fit our students’ sleep cycles.)

Now that we’ve seen this research into the connection between sleep and cognition, we should also take better care of ourselves.

Does Media Multitasking Really Interfere with Student Thinking?
Andrew Watson

To many teachers, it just seems obvious: all that screen time MUST be bad for student brains.

To many other teachers, it just seems obvious: technology will unleash academic possibilities and revolutionize education.

So, which is it? Does media multitasking damage students’ cognitive capabilities? Or, does it allow them new avenues to creative possibilities?

Here’s What We Know

In a recent analysis, Uncapher and Wagner surveyed research into this topic.

Sure enough, they found some troubling evidence.

In half of the studies they examined, people who often use multiple streams of technology scored lower on working memory tests than those who don’t.

In two studies, they had a harder time recalling information from long-term memory.

Studies also showed problems with sustained attention.

Here’s a place where media multitasking might help: task switching. Given all the practice that multitaskers get diverting attention from one gizmo to another, they might well get better at doing so.

Alas, most of the research that U&W examined didn’t support that hypothesis.

Here’s What We Don’t Know: A LOT

Although all of the sentences above are true, they don’t answer most questions with any certainty.

For example, if half of the studies showed that high multitaskers do worse on working memory tests, that means that half of the studies DON’T reach that conclusion.

(It’s important to note that NONE of the studies showed that high multitaskers were better at working memory tasks than their counterparts.)

Uncapher and Wagner repeatedly emphasize this point. We don’t have lots of studies — and those we do have don’t all point the same direction.

Another important question: causality. Perhaps multitasking reduces sustained attention. Or, perhaps people who have trouble sustaining attention multitask more often.

Firm Conclusions

At present, we can conclude with confidence that we don’t have enough evidence to conclude anything with confidence.

Overall, the evidence suggests heavy media multitasking might cause (or might result from) relative weaknesses in several cognitive functions.

We certainly don’t have evidence that encourages us to promote multi-gizmo use.

I myself try to stick to one device at a time. Until more evidence comes in, I’ll gently suggest my students do likewise.

(For thoughts on technology and attention, click here.)

Does Hands-On Learning Benefit Science Students?
Andrew Watson

Phrases like “inquiry learning” or “project-based learning” inspire both enthusiasm and skepticism.


In part, the difference of opinion results from a very basic problem: it’s hard to define either term precisely. What, exactly, are the essential elements of inquiry learning?

If we can’t even answer that question, it will be jolly hard for researchers to know if the method “really works.”

Questions without Answers; Hands-On Learning

A study published earlier this year focuses on two key elements of inquiry learning.

First: teachers should let students investigate a scientific phenomenon without telling them what they’ll find. It’s called inquiry learning because teachers withhold the correct answers.

Second: teachers should encourage hands-on learning. As much as possible, students should do the work themselves, not watch the teacher do it.

If you approach education with a constructivist lens, you’re likely to favor both approaches. Students who make sense of ideas on their own — with their own thoughts and their own hands, without too much teacher guidance — are likeliest to think deeply about concepts.

If instead you start with cognitive load theory, you’re likely to worry about these practices. Students have relatively little working memory with which to process new ideas. The absence of teacher guidance, and the need to manipulate physical objects might well overwhelm precious cognitive resources.

What They Did; What They Found

Researchers taught 4th and 5th graders about converting potential energy to kinetic energy. They used balls rolling down ramps of different heights to illustrate these concepts.

In one case, a teacher told the students what to expect: the higher the ramp, the farther the ball will roll. The students then watched the teacher do the experiment. (That is: “direct instruction.”)

In another, the teacher told students what to expect, but let them roll balls down the ramps.

In the third case, the teacher didn’t tell students what to expect, and let them do the experiment. (That is: “inquiry learning.”)

So: which combination of inquiry techniques yielded the most learning?

Direct instruction did. By a fair margin. (Cohen’s d was 0.59: not huge, but certainly respectable.)

In fact, in this paradigm, “inquiry learning” was the least effective at helping students take these concepts on board.

(To be complete: direct instruction helped students a) remember what they learned and b) reason with that new knowledge. On a third measure–applying this new knowledge to real world situations–both approaches worked equally well.)

At least in this one research paradigm, working memory limitations made constructivist pedagogy too difficult.
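For readers who want the effect-size measure spelled out: Cohen’s d is the difference between two group means, divided by their pooled standard deviation. A minimal sketch, using made-up illustrative numbers rather than the study’s actual data:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference between two groups."""
    # Pooled standard deviation, weighted by each group's degrees of freedom.
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical scores: direct-instruction group vs. inquiry group.
d = cohens_d(mean1=7.5, mean2=6.0, sd1=2.5, sd2=2.6, n1=50, n2=50)
print(round(d, 2))  # → 0.59
```

In other words, a d of 0.59 means the direct-instruction group scored about six-tenths of a standard deviation higher than the inquiry group.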

On The Other Hand…

When I first planned this post, I was excited to contrast Zhang’s study with a dramatic report from Washington State.

According to this report — here’s a one-page summary — 9th- and 10th-grade students who followed a constructivist inquiry curriculum (including hands-on learning) learned four extra months of science over two years.

That’s a simply staggering result.

I was hoping to argue that we should expect contradictory studies, and learn from the tensions between them.

In particular, the difference between a 1-shot study and a 2-year-long study should really get our attention.

Alas, I can’t make that argument here.

Compared to What?

In the ramp-and-ball study, Zhang’s three student groups learned under three equally plausible conditions. That is: she compared something to something else.

The Washington study, however, compares something to nothing.

That is: teachers at some schools got a shiny new curriculum and lots of dedicated professional development. Teachers at comparison schools got bupkis.

So, it’s entirely possible that the inquiry curriculum caused the extra learning.

It’s also possible that simply doing something new and exciting enlivened the teachers at the inquiry schools.

They might have been equally enlivened by some other kind of curriculum. Who knows: they might have found a well-designed direct-instruction curriculum inspiring.

Unless your control group is doing something, you can’t conclude that your intervention created the change. “Business as usual” — that’s what the researchers really called the control group! — doesn’t count as “doing something.”

An Invitation

Do you have a well-designed inquiry learning study that you love? Please send it to me: [email protected]. I’d love to write about it here…


Do Stress, Age, or Stereotypes Harm Your Working Memory?
Andrew Watson

We write a lot about working memory here on the blog, and so I was intrigued to see a review article summarizing 21 factors that might influence our WM performance.

Several of this article’s conclusions jumped out at me. Some reconfirm important points. Others just might surprise you. Certainly they surprised me.

Some headlines…

Gender

Debates about gender and learning, it seems, extend into the world of working memory research.

“No general consensus in the field exists when it comes to the relationship between gender and WM performance. Several researchers report that men have an advantage on spatial WM tasks and that women have an advantage on verbal WM tasks, some researchers report only a spatial advantage for men, and others report no differences at all between genders.”

Age

Unsurprisingly, working memory increases during adolescence — up until our early twenties.

To my surprise, Blasiman and Was report that declines in WM begin in our twenties. This decline is “constant and continuous.” That is: our working memory gets smaller gradually over time; it doesn’t drop off suddenly later in life.

Stress, and Beyond

I’m not surprised to see that stress interferes with WM. If I’m preoccupied with my bills, I won’t have as much WM left over to solve logic puzzles.

I am a bit surprised to read that dieting hampers WM. The authors suggest that my efforts to resist that Snickers distract me from the cognitive task at hand.

(Alas: even ineffective dieting produces this result. That is: I might have a lower WM score even if I’m not losing weight.)

By the way: we have lots of research connecting diet (not dieting) to WM. However, the review’s authors want further research to be persuaded. They currently consider evidence in this area to be “insufficient.”

Stereotypes

Many scholars have explored Claude Steele’s theory on Stereotype Threat.

Several researchers show that ST reduces working memory. Others have demonstrated that strategies to counteract ST help restore WM.

That is: once we identify the problem, we do have ways to fix it.

This conclusion strikes me as particularly interesting, given the recent skepticism about Steele’s theory. It is, of course, harder to argue that Stereotype Threat doesn’t exist if it has an effect on our working memory capacity.

We’ve Only Just Begun

Are you curious about the effect of mindfulness on WM?

Or, sleep?

How about temperature, or bilingualism?

Check out Blasiman and Was’s research here.

You Are a Learning Style of One
Andrew Watson

In the bad old days, schools seem to have thought about learning this way:

There are two kinds of students: smart ones, and not-smart ones. It’s easy to tell them apart.

If you teach it and I learn it, I’m a smart one.

If you teach it and I don’t learn it, I’m a not-smart one.

(To be clear: I’ve never heard anyone say that so crudely. But that tone suffuses the mythic past of our profession.)


Of course, this theory suffers from one deep flaw: it just ain’t true.

Those are simply false learning categories. We all can learn, but we all learn differently.

If I teach it and you don’t learn it, the problem may very well be with my teaching. You might well learn it some other way.

A Solution, A Bigger Problem

And yet, this optimistic reframe comes with perils of its own. If, in fact, “we all learn differently,” then teachers face an almost impossible challenge.

We have to figure out how each of our students learns, and then tailor all lessons for all of them. A class with 30 students requires 30 lesson plans.

How on earth can such a system work?

Another Solution?

Facing this baffling challenge, I would LOVE to sort my students into reasonable categories.

Instead of saying “there are smart students and not-smart students,” I’d rather say “students can be smart this way, or that way, or t’other way.”

With this framework, I can now have three lesson plans, not thirty. Or, I can have one lesson plan that teaches all three ways simultaneously.

For example: maybe left-handed students learn one way, right-handed students learn a different way, and ambidextrous students learn a third way. If true, this model allows me to honor my students’ differences AND create a coherent lesson plan.

As it turns out, people have proposed many (MANY) systems for sorting learners into “reasonable categories.”

Perhaps boys and girls learn differently.

Maybe introverts differ from extroverts.

Perhaps some people have interpersonal intelligence, while others have musical/rhythmic intelligence.

Maybe some learn concretely while others learn abstractly; some learn visually while others learn kinesthetically.

The list goes on.

Another Problem: False Learning Categories

Let’s add one more to that list:

Perhaps we can sort students according to the Myers-Briggs test. This student here is an ENTJ (extroverted, intuitive, thinking, and judging), while that student there is an ISFP (introverted, sensing, feeling, perceiving).

This system allows me to teach with distinct categories in mind, and so makes my teaching life easier.

Alas, this system suffers from a (familiar) deep flaw: it just ain’t true.

As Clemente I. Diaz explains, the Myers-Briggs Type Indicator doesn’t measure what it claims to measure.

In fact, it can’t. For example: the MBTI acts as if extroversion and introversion are two different personality types. In truth, we’ve all got some of both — and different settings bring out the introvert or extrovert in each of us.

All of the seemingly “reasonable categories” listed above are, in fact, false learning categories.

No: with very rare exceptions, boys and girls don’t learn differently.

No: introverts and extroverts don’t learn differently. (They don’t really exist. We’re all both, depending on the circumstances.)

No: we don’t have learning styles.

Here’s my advice:

Whenever a professed expert suggests that you divide students into different learning categories, assume those categories aren’t valid. Each of us learns our own way.

In a pithy sentence:

You are a learning style of one.

Replacing False Learning Categories with True Ones

That feel-good summary brings us back to the same problem. If each of my students learns differently, then I need to create 30 lesson plans. What to do?

Here’s the good news:

Although we all learn differently, we resemble each other more than we differ.

We all use working memory to learn. When teachers prevent working-memory overload, we benefit all our students. (Including the “introverts” and the “ENTJs.”)

We all use attention to learn. When teachers learn about alertness, orienting, and executive attention, we benefit all our students. (Including the “auditory learners” and the boys.)

Long-term memories form the same way for us all. Spacing, interleaving, and retrieval practice help (almost) all of us learn (almost) everything. (Yup: including the “abstract learners.”)

And so: teachers don’t need to pigeon-hole our students into particular learning categories.

Instead, we can focus on categories of cognitive function. The more we learn about the mental processes that enhance (or inhibit) learning, the more we truly benefit all of our students.