Head Start programs prepare young children — especially those from lower socio-economic cohorts — for success in school.
Can these programs help more if extended by the parents?
In the bad old days, schools seem to have thought about learning this way:
There are two kinds of students: smart ones, and not-smart ones. It’s easy to tell them apart.
If you teach it and I learn it, I’m a smart one.
If you teach it and I don’t learn it, I’m a not-smart one.
(To be clear: I’ve never heard anyone say that so crudely. But that tone suffuses the mythic past of our profession.)
Of course, this theory suffers from one deep flaw: it just ain’t true.
Those are simply false learning categories. We all can learn, but we all learn differently.
If I teach it and you don’t learn it, the problem may very well be with my teaching. You might well learn it some other way.
And yet, this optimistic reframe comes with perils of its own. If, in fact, “we all learn differently,” then teachers face an almost impossible challenge.
We have to figure out how each of our students learns, and then tailor all lessons for all of them. A class with 30 students requires 30 lesson plans.
How on earth can such a system work?
Facing this baffling challenge, I would LOVE to sort my students into reasonable categories.
Instead of saying “there are smart students and not-smart students,” I’d rather say “students can be smart this way, or that way, or t’other way.”
With this framework, I can now have three lesson plans, not thirty. Or, I can have one lesson plan that teaches all three ways simultaneously.
For example: maybe left-handed students learn one way, right-handed students learn a different way, and ambidextrous students learn a third way. If true, this model allows me to honor my students’ differences AND create a coherent lesson plan.
As it turns out, people have proposed many (MANY) systems for sorting learners into “reasonable categories.”
Perhaps boys and girls learn differently.
Maybe introverts differ from extroverts.
Perhaps some people have interpersonal intelligence, while others have musical/rhythmic intelligence.
Maybe some learn concretely while others learn abstractly; some learn visually while others learn kinesthetically.
The list goes on.
Let’s add one more to that list:
Perhaps we can sort students according to the Myers-Briggs test. This student here is an ENTJ (extroverted, intuitive, thinking, and judging), while that student there is an ISFP (introverted, sensing, feeling, perceiving).
This system allows me to teach with distinct categories in mind, and so makes my teaching life easier.
Alas, this system suffers from a (familiar) deep flaw: it just ain’t true.
As Clemente I. Diaz explains, the Myers-Briggs Type Indicator doesn’t measure what it claims to measure.
In fact, it can’t. For example: the MBTI acts as if extroversion and introversion are two different personality types. In truth, we’ve all got some of both — and different settings bring out the introvert or extrovert in each of us.
All of the seemingly “reasonable categories” listed above are, in fact, false learning categories.
No: with very rare exceptions, boys and girls don’t learn differently.
No: introverts and extroverts don’t learn differently. (They don’t really exist. We’re all both, depending on the circumstances.)
No: we don’t have learning styles.
Here’s my advice:
Whenever a professed expert suggests you divide students into different learning categories, assume those categories aren’t valid. Each of us learns our own way.
In a pithy sentence:
You are a learning style of one.
That feel-good summary brings us back to the same problem. If each of my students learns differently, then I need to create 30 lesson plans. What to do?
Here’s the good news:
Although we all learn differently, we resemble each other more than we differ.
We all use working memory to learn. When teachers prevent working-memory overload, we benefit all our students. (Including the “introverts” and the “ENTJs.”)
We all use attention to learn. When teachers learn about alertness, orienting, and executive attention, we benefit all our students. (Including the “auditory learners” and the boys.)
Long-term memories form the same way for us all. Spacing, interleaving, and retrieval practice help (almost) all of us learn (almost) everything. (Yup: including the “abstract learners.”)
And so: teachers don’t need to pigeon-hole our students into particular learning categories.
Instead, we can focus on categories of cognitive function. The more we learn about the mental processes that enhance (or inhibit) learning, the more we truly benefit all of our students.
Why do chess experts win more chess matches than novices?
This question has a perfectly straightforward answer: they know more about chess. Obviously.
Forty-five years ago, William Chase and Herbert Simon tested another hypothesis. Perhaps, they speculated, chess experts see the world differently than do chess novices.
They don’t just think differently. They literally see differently. Their chess knowledge changes their perception.
Sure enough, as Chase and Simon predicted, chess experts see chess boards as meaningful groups of chess pieces.
This chess board shows a modified French Dragon Attack.
That chess board shows a King-and-Bishop vs. King-and-Rook problem.
Chess novices, however, see chess boards as scatterings of individual pieces.
This chess board shows…a bunch of pieces.
That chess board shows…a different bunch of pieces.
Because the expert sees a different chess board, she sorts through her possible moves much more efficiently. And: she’s likelier to win the game.
Does this finding hold true for teachers? Does expert teacher vision differ from that of novice teachers?
Charlotte Wolff (and others) explored this question in a study that used eye-tracking software to understand where teachers look.
Sure enough, they found that expert teachers look at classrooms differently.
For instance: expert teachers “appear to be searching for activity between students,” even “following posture and body movements.”
Novices, on the other hand, focus on irrelevant details: for example, a student’s “fluorescent green shoelaces.”
When you look at the photos in the study, you’ll see that novices spend a disproportionate amount of time looking at unimportant details. A painting on the wall. People walking by in the hallway. Even an electrical outlet oddly placed in the wall.
Intriguingly, Wolff & Co found that experienced teachers used different words to describe what they saw. In particular, they commented more frequently on feelings, and on the events happening in the room.
For my taste, this part of the study needs further elaboration. I’d love to hear about the ways that experts describe their classrooms differently from novices.
Here’s why.
A novice teacher might reasonably ask this question: “How do I train myself to have expert teacher vision?”
The likeliest answer is: practice, practice, practice. We don’t know many good shortcuts for developing expertise. It just takes time.
However, if we knew more about the words that experts use, we might train new teachers to speak and think that way when they comment on classrooms. These verbal habits — a kind of deliberate teacherly practice — just might help novice teachers hone their visual skills.
Teachers who want to shape our practice with research find ourselves taking on extra responsibilities.
In particular, we should probably hone our skills at investigating the research we use.
Are we sure — or, sure enough — that the research is well done? How do we know whom to trust?
Over at The Effortful Educator, Blake Harvard has 3 very practical suggestions for considering “research based” advice offered during PD sessions.
Note, in particular, his emphasis on adapting research to your own situation. As I said in a post just 2 days ago: don’t just do this thing. Instead, PD might help you think this way.
Just after I wrote the short post above, I found a useful addition over at 3-Star Learning Experiences.
In this post, Mirjam Neelen & Paul A. Kirschner offer a usefully complex mental model for integrating research, technology, society and teaching.
And, they’ve got specific strategies for evaluating the evidence that might influence your teaching practice.
When you first try them out, these strategies will take some time, and some real mental effort. But: once you get into these habits, you’ll find yourself seeing past weak claims with increasing frequency.
As you may remember, I wrote about this study back in February.
In it, the authors claim that sleeplessness interferes with men’s working memory, but not women’s.
And yet, when you read the methodology section of the study, the flaw quickly becomes clear. Researchers made claims about working memory, but they tested short-term memory.
As you read more and more studies with Neelen and Kirschner’s guidance in mind, you’ll spot this kind of discrepancy with increasing ease.
Teachers love hearing about brain research because it offers us specific and persuasive guidance.
The researcher says: when I DID THIS THING, students learned more than when I DID THAT THING.
As a thoughtful teacher, I draw the obvious conclusion. I too should DO THIS THING.
And yet, you might reach a different conclusion. If you’re interested in using research well, you might even reach a better conclusion.
Here’s a specific example.
Back in 2011, Connor Diemand-Yauman published a study about unusual fonts. (You read that right. Font. As in: typeface.)
He had students learn some information in an easy-to-read font (Arial, 100% black). They learned other information in a harder-to-read font (for example, Bodoni MT, 60% black).
When retested, they remembered more of the information presented in the hard-to-read font.
Being a thorough researcher, Diemand-Yauman tried this hypothesis out in a high school. He had teachers use the Arial font in one of their sections, and the Bodoni MT in another.
Sure enough, the hard-to-read fonts (called “disfluent”) led to greater learning.
We teachers might take this study as an instruction to DO THIS THING. Given Diemand-Yauman’s results, that is, we should start using unusual fonts.
Instead of DOING THIS THING, however, I think Diemand-Yauman’s research should inspire us to THINK THIS WAY.
Specifically, we should think about finding the right level of difficulty.
When students take on relatively simple material, we can help them learn it better by adding a bit of challenge.
We might — for example — print that information in a disfluent font.
We might space out practice further than usual.
Or, we might interleave this topic with other, similar kinds of information.
But: when students learn complex material, we don’t want to make it any more difficult. In this case, the font should be as fluent as possible. We would space practice out, but not so far. We would interleave, but not so much.
In other words: Diemand-Yauman’s research doesn’t tell us to use quirky fonts (“do this thing”).
Instead, it gives us another option for creating desirable difficulty (“think this way”).
A just-published meta-analysis says: not so much. In the authors’ words:
“there is not enough evidence to show that it [using disfluent fonts] either stimulates analytic processing or increases extraneous cognitive load.”
In other words: hard-to-read fonts aren’t a desirable difficulty. And, they don’t stress working memory too much.
Although I haven’t looked at the nitty-gritty of this study (it’s behind a paywall), I have an alternate interpretation.
Perhaps in some cases disfluent fonts are a desirable difficulty. And, in other cases they stress working memory. If so, those two findings would offset each other in a meta-analysis. The result would be — as this study finds — no consistent effect.
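To see how offsetting effects can wash out in aggregation, here’s a minimal sketch in Python. (The effect sizes below are made-up, purely illustrative numbers — not data from the actual meta-analysis.) Some hypothetical studies find disfluency helps; others find it hurts; the pooled average lands near zero even though every individual study found a real effect.

```python
import statistics

# Hypothetical effect sizes (illustrative only, NOT from the real meta-analysis).
# Positive = disfluent font helped learning (acted as a desirable difficulty);
# negative = disfluent font hurt learning (overloaded working memory).
helpful_studies = [0.30, 0.25, 0.20]
harmful_studies = [-0.28, -0.22, -0.25]

# A simple, unweighted pooled average, standing in for a meta-analytic estimate.
pooled_effect = statistics.mean(helpful_studies + harmful_studies)

print(f"Pooled effect: {pooled_effect:.2f}")  # near zero, despite six real effects
```

The point isn’t the arithmetic; it’s that a near-zero average can hide two genuine, opposite effects — which is exactly the alternate interpretation offered above.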
If I’m right, a disfluent font might improve learning. Or, it might hinder learning.
So: who decides when to use one?
The answer is clear: THE TEACHER DECIDES. Only you know if the material is already hard enough (in which case, use a fluent font). Only you know if it needs some extra cognitive challenge to make it stick (in which case, think about a disfluent font).
No researcher can answer that question, because no researcher knows your curriculum, your school, and your students.
Rather than ask researchers to tell you what to do, let them guide you in thinking about teaching problems in more effective ways.
If you’re especially interested in desirable difficulties, here’s an article about a potentially desirable difficulty that turns out to be…not.
The Harvard Gazette recently posted a soothing headline: “When Wandering Minds are Just Fine.”
The subhead offers even more encouraging news: “The practice has no detrimental effects in some situations, study says.”
No detrimental effects. That sounds kind of great.
It also sounds a bit confusing. Here’s the title of another recent journal article:
So: which is it? Mind wandering is just fine? Or, mind wandering hampers learning and creates misunderstandings?
These two studies arrived at dramatically different conclusions because they studied dramatically different situations.
In the language of psychology, they used different research paradigms.
One study had students read a science passage, and then write about the information and concepts they had learned.
The other study had participants look at a clock and press the space bar whenever it showed exactly 12:00. The clock was designed to show 12:00 every twenty seconds.
Now: which one of those two studies sounds more like the work you do every day?
Given that these two studies consider such different activities, we can easily decide which one applies to us. Focus on the study that resembles your students’ work.
For the most part — I suspect — we’re interested in the research about students who read the science passage. After all, that study looks a lot like school.
In that study, researchers found that students who spent more time mind-wandering learned less science.
They also held on to more misconceptions about the science that they studied.
In brief: mind wandering harms much of the work that students do in school.
At the same time, the 20-second-clock study can also give us some useful parameters.
That study makes a simple point. Sometimes — when the clock is approaching 12:00 — we need to focus clearly. Other times — when we’ve got 10-15 seconds to go — it’s just fine to mind wander a bit.
So: if your students can confidently predict when they’ll need to refocus, you don’t need to worry if they take some mental time off.
To explore this question further, you might look at Kevin Kent’s article called Drifting Away. In it, he explores the settings where mind-wandering just might benefit learning.
At one of the first Learning and the Brain conferences I attended, a speaker briefly mentioned that learning styles theory doesn’t have much good evidence to support it.
That comment turned into a heated debate. Several attendees asked vexed, unhappy questions. The speaker held her ground.
When I got in the elevator at the end of that session, I heard one attendee curtly dismiss the speaker’s objection: “well, it’s all just statistics.”
Well, it IS all statistics.
In the worlds of neuroscience and psychology, researchers rely on statistical methods to ensure their recommendations aren’t simply hunches.
Anyone can stand behind a microphone and have an opinion. But: if you’re going to do scientific research, your numbers have to add up.
And, as researchers look at valid statistical models, they just don’t find good support for the idea that — for instance — some people are visual learners and others are auditory learners.
The numbers just don’t add up. Or, in this case: if you teach “visual learners” “visually,” they don’t learn any more than if you had taught them “auditorily” or “kinesthetically.”
Instead, the content itself often offers guidance on the best way to teach. If you’re teaching a French or Spanish or Japanese accent, that content is — by its nature — auditory.
If you’re teaching geography, that content is visual.
Free throws? Kinesthetic.
Most content, however, can be taught in multiple ways.
For example: I’m thinking of an actress. She’s Australian. She played Virginia Woolf in that movie. And, she was married to Tom Cruise.
If you’re shouting NICOLE KIDMAN, you’re right. Notice that I gave you three entry points to the neural network that encodes this memory: her country of origin, a role she played, and her marriage.
So: “teaching to learning styles” helps because you probably teach your content in different ways — auditorily, visually, and kinesthetically. Those three different approaches give distinct connections to the memory you want your students to form.
This approach to teaching helps not because of a student’s learning style, but because all your students now have multiple ways to access that memory.
In other words, the theory helps students learn — but not for the reason it claims to.
Daniel Willingham — one of the early debunkers of learning styles myths — has recently posted his current thoughts on learning styles. The short version:
Nope. Learning Styles still don’t exist. Really.
Learners should “tune their thinking to the task.” That is: learn about geography visually — even if you think you’re not a “visual learner.”
More than many researchers, Willingham gets teachers and teaching. So: if you’re still a learning-styles believer, I encourage you to check out his article.
In related news: Greg Ashman argues that, no, rejecting learning styles theory isn’t sexist. After all, LOTS of thoughtful female researchers reject the theory.
And: the Learning Scientists have a great take on this debate. We shouldn’t focus simply on rejecting learning styles theory. Instead, we should replace it with a better theory. They have thoughts on how to do so…
[Update, 6/25/18] Finally, Scientific American has a recent article showing that most students don’t use the learning styles that they believe would benefit them. And, when they do, those strategies don’t help them learn.
I’ve posted a good bit recently about the dangers of working memory overload. (For instance: here and here.)
Teachers can understand the dangers of WM overload. However, we rarely experience WM overload in school. Because we’re in charge of the lesson, we keep it comfortably within our own mental limits.
(Of course, faculty meetings can tax our working memory. As well as our patience.)
I think it’s occasionally helpful for teachers to feel WM overload, so we can recognize what our students experience all too often.
For that reason, I’m linking to the Ted Ed video below.
Take six minutes. When you try to solve the riddle, you’ll abruptly recognize that baffled-and-stunned look you see on your students.
You might even feel that way when you listen to the solution.
Enjoy!
https://www.youtube.com/watch?v=dh4nEuhZBgg
Unless you’ve been napping under a LatB rock, you’ve heard about the importance of research-based study habits.
In particular, you know that students should spread practice out over time rather than bunching practice all together. (The benefits are called the spacing effect.)
And, you know that students should not simply look over what they already know. Instead, they should quiz themselves to see what they can actively retrieve from memory. (That’s called retrieval practice.)
Here’s a little secret you might not know: most of the research about the spacing effect and retrieval practice takes place in psychology labs.
What happens in the real world? Do students who use these techniques actually learn more than those who don’t?
In a recent study, Fernando Rodriguez and colleagues surveyed students about their study practices.
Do these students space practice over time? Do they do all of their studying all in one session?
Perhaps they quiz themselves on what they know? Or, perhaps they reread the textbook?
Rodriguez &amp; Co. then compared these answers to the students’ grades in the class. By this method, they could tease out the effects of spacing and retrieval practice on actual learning.
So: did these research-endorsed study habits translate into classroom learning?
Rodriguez found mixed results.
Study habits that spaced practice out didn’t make any difference. Students who crammed and students who studied material in several brief sessions got the same final grade.
(I’ll propose an explanation for this finding below.)
However, retrieval practice made a clearly measurable difference. Students who reviewed material averaged a B-. Those who self-tested averaged a B.
Given that both study techniques take the same amount of time, it obviously makes sense to self-test. Students who do so learn more. Retrieval practice just works.
If we’ve got so much research showing the benefits of spacing, why didn’t it help students in this class?
We don’t know for sure, but one answer stands out as very probable: the professor already did the spacing for the students.
That is: the syllabus included frequent review sessions. It had several cumulative tests. The class structure itself required students to think about the material several times over the semester.
Even if students wanted to cram, they couldn’t wait until the last moment to review. The test schedule alone required them to review multiple times.
So: the students’ own additional spacing study habits didn’t help.
However, in a class where the professor hadn’t required spacing, it most likely would have helped.
This possibility, in my view, underlines a bigger point about spacing and retrieval practice:
For the most part, students have primary responsibility for retrieval practice, whereas teachers have primary responsibility for spacing.
That is: students — especially older students — should learn to review by using retrieval practice strategies. (Of course, especially with younger students, teachers should teach RP strategies. And, offer frequent reminders.)
Teachers — in our turn — should design our courses to space practice out. (Of course, students should do what they can to space practice as well.)
In other words: retrieval practice is largely a study habit. Spacing is largely a teaching habit.
Students will get the most benefit from this research when we divide up responsibility this way.
Teenagers often baffle adults.
The moody sulkiness that erupts into warmth and affection. The impressive academic accomplishment that precedes an idiotic, immature decision.
(How often have you had this conversation:
YOU: What on earth were you thinking?
TEEN: That’s just it. I wasn’t thinking.)
Of course, teenagers often baffle themselves. And according to recent research, some of that confusion may result from difficulty understanding their own emotions.
Researchers who study emotions often focus on our ability to distinguish among them.
For instance: when I see a picture of rotting food, I might be disgusted and upset. However, I’m not angry or scared.
Researchers call this ability “emotion differentiation.” Unsurprisingly, individuals who succeed at emotion differentiation see other kinds of success. They’re good at coping with difficult emotional situations. They’re less likely to rely on alcohol to get through tough times.
To understand adolescent emotion, we might ask: how good are teens at emotion differentiation?
Are they better at it than children? Than adults? In other words, how does this capacity develop over time?
Erik Nook recently investigated this question, and came up with an intriguing answer.
He and his colleagues showed people (age range 5 to 24) pictures, and asked them to rate their emotional responses to them. Focusing on negative emotions, Nook asked participants how “angry, upset, sad, disgusted, and scared” each picture made them feel.
For example: if a participant gave the highest rating to all five emotions, that response pattern showed little emotional differentiation. All five emotions were experienced equally.
If, however, he gave a high rating to “scared,” a medium rating to “disgusted,” and a low rating to the other three, that pattern showed high emotional differentiation.
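As a rough illustration of how such rating patterns might be scored, here’s a short Python sketch. (Assumption: I’m using a simple spread-based score of my own for clarity; the study itself uses a more sophisticated correlational statistic computed across many pictures.)

```python
import statistics

def differentiation_score(ratings):
    """Spread of ratings across the five negative emotions.
    Zero = all emotions rated identically (low differentiation);
    higher = some emotions clearly stronger than others."""
    return statistics.pstdev(ratings)

# Ratings for [angry, upset, sad, disgusted, scared], on an assumed 1-5 scale.
low_differentiation = [5, 5, 5, 5, 5]   # everything felt equally, maximally strong
high_differentiation = [1, 1, 1, 2, 5]  # scared high, disgusted medium, rest low

print(differentiation_score(low_differentiation))   # 0.0
print(differentiation_score(high_differentiation))  # clearly above zero
```

A participant whose ratings all pile up at the same level gets a score of zero; one who clearly separates “scared” from “disgusted” from the rest gets a high score.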
What did they find?
Children and adults distinguish among emotions better than adolescents do.
That is: children and adults can say “I’m feeling upset, but not scared.” Adolescents, however, have a harder time drawing those distinctions. Their negative emotions swirl together in a chaotic muddle.
But why is this so?
Nook & Co. investigated several competing hypotheses. Their answer reveals a hidden strength in adolescent emotion processing.
It turns out that children are good at distinguishing among emotions because they don’t really understand it’s possible to experience more than one emotion at a time.
In other words: young children report that they’re feeling disgusted but not sad because they don’t recognize it’s possible to feel both disgusted and sad.
Adolescents, however, DO recognize the possibility of feeling multiple emotions. And yet, because this understanding arrives freshly with adolescence, teens don’t yet have much practice differentiating among them.
As Nook and colleagues write:
children have high emotion differentiation because they experience emotions one at a time, whereas adults have high emotion differentiation potentially because of increased ability to specifically identify co-experienced emotions.
Adolescents, however, fall between these two stools. They do recognize the possibility of experiencing multiple emotions, but don’t yet have enough practice at sorting out which is which.
As so often happens, this research guides us in two directions. Teachers should both think this way and do this thing.
Think this way. With this clearer understanding of adolescent emotion, we can clear our own heads when we cross paths with a teen in an emotional Tasmanian-devil vortex.
Rather than say to ourselves “why is this 17-year-old melting down like a child?,” we can say “Aha! He’s aware that he’s experiencing multiple emotions, but he’s not sure which is which. That confusion has led to an atypical emotional outburst.”
This simple understanding may help us stay calm despite adolescent angst.
Do this thing. Adolescents know that they’re feeling many things, but they don’t yet have much experience naming them simultaneously. We can help them.
In the emotional moment itself, we can ask guiding questions and offer potential labels. As always, teacherly guidance can show teens the way in difficult moments.
Also, in our teaching, we can highlight moments of emotional complexity. In Their Eyes Were Watching God, does Janie admire Jody Starks, or hate him? Fear him, or pity him? Perhaps, all at the same time?
In this way, the curriculum that we teach can help adolescents develop emotion differentiation.
Two final notes.
First: we’ve written about work from Leah Somerville’s lab before. If you want to know more about adolescence, look here or here.
Second: one of the co-authors of this study is Stephanie Sasse, one-time editor of this blog. Congratulations!