What a blast! Although I have dived into countless neuroscience introductions, this journey felt uniquely enjoyable, resonating with me both as an educator and an eager neuroscience explorer. Chantel Prat’s The Neuroscience of You: How Every Brain Is Different and How to Understand Yours effortlessly blends captivating storytelling with profound insights into the emerging understandings and mysteries of the human brain. Prat’s background as a professor of neuroscience at the University of Washington, her expertise in a variety of cross-disciplinary fields, and her translation of psychology and neuroscience for several popular science outlets make her book feel deeply personal and conversational. Through personal anecdotes and reflective questions and surveys that make the book relevant to you, Prat makes neuroscience accessible and appealing to newcomers and seasoned enthusiasts alike.
The first part of the book introduces neuroscience basics in relatable terms, avoiding overwhelming terminology while still challenging experts with nuanced concepts. Prat ensures the content remains current, reflecting modern understandings rather than outdated perspectives. Delving into familiar core topics like brain localization, hemisphere specialization, neurochemical dynamics, and brain rhythms, she leaves readers feeling empowered to explore their own brain’s unique mix. She also adds some amazing flavor to the topic, as is clear in her reference to the neurochemical makeup of the brain as “mixology.” In each section, as you apply the concepts, there is a necessary and important qualification: “it depends.” You walk away feeling ready to start being your own mixologist, recognizing the environmental, developmental, and genetic variation involved in the process of designing you. You are set up for some fun life designing and biohacking. Along with this preparation, you are set to participate in understanding the developing field of neuroscience. She emphasizes that you are right in the middle of this scientific journey and prepares you to take your own journeys into the wide field of emerging studies.
Building on this foundation, the second part of the book delves deeply and personally into essential cognitive skills, offering robust theory and engaging narratives to help readers understand and utilize their brain’s inner workings. From focusing and adapting to navigating, predicting, and building curiosity, the book covers key aspects of cognitive function, concluding with a fascinating exploration of the brain’s social nature and its role in human interaction. This second part of the book is wonderfully up to date, adding nuance and understanding to the science currently under evaluation. Hopefully, you walk away from these sections with new questions and ideas: you will better understand your interactions with the world and be all the more curious about it.
One of the book’s most important themes is that each individual is unique. Our unique mix of genes and environment has prepared each of us to interact with our world in our own way. But Prat also points out that it is this same mix that unites us.
This book deals artfully with the tension between freedom and determinism, between scientific terminology and layperson accessibility, and between abstract theory and personal relevance. It is well-rounded, so regardless of your expertise level, you are bound to get something enjoyable from this text.
Above all, Prat’s passion for neuroscience shines through and is contagious, infusing the book with the excitement of a favorite theme park ride, promising endless returns for those eager to deepen their understanding. Additionally, the book serves as an excellent introduction to cutting-edge research and notable researchers in the field, making it an invaluable resource for anyone curious about the latest developments in neuroscience.
If you teach children for several hours a day, you just know that sometimes they’re with you…and sometimes not.
They might be focused on your description of the “angle-side-angle” theorem; or, they might be thinking about the Oscars. (What a speech!)
So we might reasonably ask: “Is their mind-wandering a problem? Do they learn less?”
We might be tempted by an uplifting answer: “mind-wandering allows students to make fresh and helpful connections.” If they link angle-side-angle to the Oscars, after all, they have made connections that will help them consolidate this new geometry information.
Or, we might be worried about a tough-minded answer: “it seems sort of obvious that if students aren’t focusing, they almost certainly aren’t learning.”
Which is it?
We’ve got a fair amount of research with adolescents and adults; for them, mind-wandering hampers learning.
But, what about younger students?
Pharaohs and Dinosaurs
As always, research details matter.
In this recent study, researchers asked 8- to 9-year-olds to listen to two stories: one about pharaohs, the other about dinos.
These stories — about 12 minutes long — were interrupted every 90 seconds or so. The students answered whether they were …
… focusing on the story,
… thinking about something unrelated to the story (“It was fun being at the zoo yesterday”),
… thinking about their interest in — or abilities relative to — the story (“I’m not very good at this,” “I’m really interested in this”), or
… distracted by the environment (a slamming door).
Researchers also asked the students how interested they were in the content of the stories.
And — of especial interest — they measured the students’ understanding of the stories both immediately after the story and also one week later.
I’d Rather Know than Not Know
The results include lots of useful information: some surprising, some not.
First: unsurprisingly (to me), students who mind-wandered remembered less.
And, crucially, they remembered less both right away AND a week later.
This point really matters. We know from Nick Soderstrom’s work that initial performance isn’t a reliable indicator of long-term learning.
If we had only short-term results, we might optimistically think that short-term memory problems would give way to long-term improvements.
But: nope.
Students who reported more mind wandering didn’t learn as much.
Second: surprisingly (to me), the students’ interest level didn’t matter.
That is: even the students who REALLY LIKE DINOS didn’t learn as much if they mind-wandered.
Interest doesn’t protect students from the dangers of mind-wandering.
Third: horrifyingly (to me), students lose focus roughly 25% of the time.
In this study, they spent…
… about 10% of their time thinking about something else (“the zoo”),
… about 10% of their time thinking about their ability/interest (“I bet I won’t remember this part”), and
… about 5% of the time distracted by the environment (the slamming door).
If we want students to learn 100% of the material, and they’re mentally elsewhere for 25% of the time…well, that distraction puts a firm cap on what they can learn.
To be clear: this study took place during the pandemic, so students were at home and participating on Microsoft Teams. We therefore can’t take this finding as an entirely reliable measurement of their off-task thoughts during class.
However, I honestly worry that they might be mentally off task even more during school hours. The average classroom has LOTS more people in it, and features fewer dinosaur videos…
Teaching Implications
I think this study (especially if others confirm its findings) encourages us to accept several tough-minded conclusions:
Mind-wandering really does interfere with learning.
It happens A LOT.
Students’ intrinsic interest doesn’t mitigate this problem.
Each of us will respond to those conclusions differently, but they do offer several suggestions:
First: check for understanding even more frequently than we might think necessary. Doing so will a) help us know if they are mind-wandering, and b) help keep them focused.
Second: remain vigilant even if the topic seems intrinsically interesting. I might think that dinosaurs will keep students’ focus…but in this study they didn’t.
More broadly: I might spend some time looking in the mirror. How distracted am I? How much mind-wandering is a part of my thought routine?
After all: if mind-wandering hampers my own understanding, that result is as bad for me as it is for my students.
Cherry, J., McCormack, T., & Graham, A. J. (2023). Listen up, kids! How mind wandering affects immediate and delayed memory in children. Memory & Cognition, 1-17.
Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176-199.
Imagine this conversation that you and I might have:
ANDREW: The fastest way to drive from here to the school is via South Street.
YOU: It is? That seems like a long detour. Why would I go that way?
ANDREW: I didn’t say it was the fastest; I said it was the best because it’s the prettiest.
YOU: You DID say it was fastest…wait, the prettiest? It’s basically junk yards and construction sites.
ANDREW: Yes, but because of all the bakeries, it smells really nice.
YOU: What does that have to do with fastest/prettiest?
ANDREW: Why are you being so unpleasant and difficult? South Street is the best route…
I suspect you would think: “this conversation is very frustrating and unhelpful because the goal posts keep moving.”
That is: I initially claimed that South Street is the fastest…but keep moving my claims as soon as you object. (And, oddly, I’m mad at you for being unreasonable.)
I routinely notice this pattern when I ask questions about the claim that “handwriting is better than laptops for note taking.”
Watch the goalposts move:
CLAIM: Handwriting is better than laptops for note taking. This study says so.
ANDREW: That study starts with the BIZARRE assumption that students can’t learn how to do new things — like, how to take notes correctly. And, research since then has routinely complicated or contradicted it.
CLAIM: I didn’t say handwriting is better because of this study. It’s because writing by hand changes neural networks. This research says so.
ANDREW: That research says that writing by hand helps students learn to write by hand. Of course it does.
But that doesn’t mean that writing by hand helps students learn other things — like, say, history or chemistry or German. Can you show me research supporting that claim?
CLAIM: I can’t, but when students write on laptops they distract students around them.
ANDREW: Yes, but that’s a completely different claim than the one you started with.
CLAIM: Why are you being so unpleasant and difficult? Writing by hand is better than taking notes on laptops!
Once again, I find this conversation frustrating and unhelpful. SO MANY MOVING GOALPOSTS.
I am entirely open to the idea that handwriting is better. But if someone makes that claim, and says it’s “research-based,” I’d like them to provide research that actually shows this claim to be true.
So far, that turns out to be a big ask.
This idea that “handwriting is better than keyboarding” has popped up again (I suspect because of a recent study), so I want to re-investigate this claim — with a keen eye on those goalposts.
Reasonable Start
If you see a headline that says, “Why Writing by Hand Is Better for Memory and Learning,” you might interpret that claim roughly this way:
Students who take handwritten notes — in their 6th grade history class, say, or their 10th grade science class — remember more of that material after 2 weeks than students who took notes on laptops.
Yes, I conjured up some of those specifics: “6th grade history,” “two weeks later.” But those seem like reasonable extrapolations. What else could the claim substantively mean?
Briefly: plausible goalpost = “students remember more history 2 weeks later.”
So, let’s look at the recent research being used to support this claim.
Here’s a very basic question: “how did the researchers measure how much the students learned and remembered?”
Did the students take a quiz two weeks later? Did they undertake a “brain dump” the following day? How, precisely, do we know what they learned?
The answer is:
The researchers did not measure how much the students learned/remembered.
Honestly. No quiz. No brain dump. Nothing.
And yet, even though the study doesn’t measure memory or learning, it is being used to argue that handwriting enhances memory and learning.
I find this astonishing.
Instead, the study measures activity “in brain regions associated with memory and learning.”
Did you notice something?
Goalpost plausibly was: “students remember more history 2 weeks later.”
Goalpost now is: “more activity in important brain regions.”
Grrr.
Getting Specific
When evaluating “research-based” claims, it’s helpful to know exactly what the participants in the research did.
So, these 36 participants wrote the same fifteen words multiple times. Sometimes they wrote with a stylus on a tablet; sometimes they typed using only their right index finger. (BTW: all the participants were right-handed.)
Now, this insistence on “right index finger” makes sense from a neuro-research perspective. If both “handwriters” and “keyboarders” are using one hand, then the researchers reduce lots of confounding variables.
At the same time, this emphasis also leads to highly artificial circumstances.
Presumably some people type with one finger. But, I’m guessing that most people who want to take laptop notes don’t. I suspect they want to take laptop notes because they have some degree of facility on a keyboard.
So:
Goalpost initially was: “students remember more history 2 weeks later.”
Goalpost then was: “more activity in important brain regions.”
Goalpost now is: “more activity in important brain regions when participants write as they usually do than when they type in a really, really unnatural way.”
Double grrr.
It is, of course, helpful to know about these differences in neural responses. But I don’t think they plausibly add up to “students remember more.” Because — remember — no one measured learning.
Lest I Be Misunderstood
In such conversations, I’m often misunderstood to be confident about the right answer. That is: I might seem to be saying “I’m confident that laptops are better than handwriting for learning.”
I am NOT saying that.
Instead, I’m asking for research that directly measures the claim being made.
If I say to you: “research shows that handwriting is better for learning than laptops,” I should be able to show you research that directly measures that claim.
If, instead, I have research showing that handwriting develops neural networks that might be beneficial for learning, I should say that.
My frustration about this point stems from a broader concern.
Over and over, I find that non-teachers cite research — especially neuroscience research — to boss teachers around. While I certainly do believe that teachers should know about pertinent research findings (that’s why I write this blog!), I also believe that we need to acknowledge the limits of our research-based knowledge.
I just don’t think that research (yet) demonstrates that handwritten notes generate more learning than laptop notes.
Overall, I’m inclined to believe:
Practicing fine motor skills (by, say, handwriting) is really important for young learners.
Practicing handwriting makes us better at handwriting — and other word-related skills.
As students get older and more facile with a keyboard, the benefits of handwriting vs. keyboarding will probably depend on the student, the subject, the kind of notes being taken, etc.
And if I see more than one study directly testing the claim that handwriting helps people learn better, I’m entirely open to that possibility.
But at least so far, that claim is not — by any definition that seems reasonable to me — “research-based.”
Van der Weel, F. R., & Van der Meer, A. L. (2024). Handwriting but not typewriting leads to widespread brain connectivity: a high-density EEG study with implications for the classroom. Frontiers in Psychology, 14, 1219945.
I live in Boston, and we just had an ENORMOUS snow storm. TWELVE INCHES of snow fell in just a few hours. It was, as we say, “a monstah.”
Oh, wait a minute, that didn’t happen.
The FORECAST said we’d get a monstah. In reality, by the end of the day, exactly 0.0 inches of snow had accumulated on my sidewalk. It was as close to “nothing” as was the Patriots’ chance of winning the Super Bowl this year.
You can imagine the public response:
Hah! All the “experts” with all their science-y equipment and equations and models and colorful images … they all got it wrong. AGAIN!
That’s it: I’m done with all this weather forecasting nonsense. I’ll rely on my trick knee to tell me when the weather is a-changing.
While that response is predictable, I also think it’s unfair. In fact, believe it or not, it reminded me of the work we do at Learning and the Brain.
In most ways, weather forecasting has almost nothing to do with cognitive science. But the few similarities might help explain what psychology and neuroscience research can (and can’t) do for teachers.
I want to focus on three illustrative similarities.
Spot the Butterfly
First, both meteorologists and cognitive scientists focus on fantastically complex systems.
In the world of weather:
As the butterfly effect reminds us, small changes over here (a butterfly flapping its wings in my backyard) could cause enormous changes over there (a typhoon in Eastern Samar).
In the world of education:
Whether we’re looking at neurons or IEPs or local school budgets or working memory systems or mandated annual testing, we’ve got an almost infinite number of interconnected variables.
Research might tell us to “do this thing!”, but the effect of that recommendation will necessarily depend on all those other variables.
We should not be shocked, therefore, that a one-step intervention (e.g.: growth mindset training) doesn’t have exactly the effect we want it to. That one intervention interacts with all those other complex systems.
The research-based suggestion isn’t necessarily wrong, but it also can’t briskly overcome all the other forces that influence learning.
Possibilities and Probabilities
Second: like weather forecasts, research-based suggestions focus on probabilities.
That is: the weather channel didn’t say “Boston is going to get 12 inches of snow!”
If you looked past the simplified headline, it said:
“We’ve seen conditions more-or-less like this 100 times before.
2 of those times, we got less than 2 inches
8 times, we got 2-6 inches
25 times, 6-10 inches
45 times, 10-14 inches
15 times, 14-18 inches
5 times, more than 18 inches.
Make plans accordingly.”
They don’t know for sure; they’re making predictions based on previous cases — and those previous cases provide a range of possibilities.
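To make that concrete, here’s a minimal sketch of how historical cases turn into a probability forecast. The snowfall categories and counts are the made-up numbers from the forecast quoted above; everything here is illustrative, not an actual forecasting model.

```python
# Hypothetical prior outcomes: 100 storms that "looked like" this one.
# Counts are the invented numbers from the forecast quoted above.
historical_counts = {
    "under 2 inches":  2,
    "2-6 inches":      8,
    "6-10 inches":    25,
    "10-14 inches":   45,
    "14-18 inches":   15,
    "over 18 inches":  5,
}

total_storms = sum(historical_counts.values())  # 100 prior cases

# A "forecast" is just each outcome's share of those prior cases.
for outcome, count in historical_counts.items():
    print(f"{outcome}: {count / total_storms:.0%} chance")

# Note that the least likely outcome (under 2 inches) still happens
# about 2% of the time -- which is roughly what Boston actually got.
```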
Notice, by the way, that the forecasters weren’t exactly wrong. New York and Philly got pounded; they got the “monstah” we were expecting.
But — because a butterfly somewhere flapped its wings — the storm went slightly south and left us alone.
So, too, with psychology and neuroscience research aimed at the classroom.
Researchers can say: “this strategy helped students score 5% higher on the end-of-year exam … ON AVERAGE.”
That means the strategy (probably) helped more students than it hurt. But the effects were different student-by-student.
Who knows: the strategy could have made learning harder for some students.
We’re looking at probabilities, not panaceas.
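Here’s an equally minimal sketch of that “ON AVERAGE” point, using invented score changes for ten hypothetical students. The average works out to +5 points even though two of those students did worse.

```python
# Invented per-student changes in exam score (percentage points)
# after using the strategy -- purely illustrative numbers.
score_changes = [12, 9, 8, 7, 6, 5, 4, 3, -1, -3]

average_change = sum(score_changes) / len(score_changes)
helped = sum(1 for change in score_changes if change > 0)
hurt = sum(1 for change in score_changes if change < 0)

print(f"Average change: +{average_change:.0f} points")       # +5
print(f"Students helped: {helped}; students hurt: {hurt}")    # 8 helped, 2 hurt
```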
The Bigger the Claim…
Third: expert forecasters get their predictions right more often than they get them wrong. And — this is crucial — the “wrong” results come more often for big, outlier events.
Sunny days in June? Glum rain in November?
Relatively easy to predict.
A once-in-a-generation hurricane? A monstah snow storm?
MUCH harder to predict. We just have less data about unusual events because…they’re unusual!
So too in the world of research-based teaching advice.
I honestly think that researchers get their advice “right” much of the time — at least within the narrow confines of the context they describe.
That is: a large collection of well-designed studies probably merits careful consideration.
At the same time, if researchers loudly announce a big, outlier conclusion, we should be ready for that claim to collapse upon further investigation.
Imagine that researchers claim…
… dancing a hornpipe helps students learn fractions, or
… standing in a “power pose” does something worthwhile/important, or
… teachers don’t need to know anything about a topic to teach it well.
In each of these cases, the extremity of the claim should prepare us for doubts.
The same holds if “research shows” that a particular teaching strategy has a HUGE effect on learning.
It’s possible, but honestly kinda rare.
For instance, as I wrote recently, I found a meta-analysis claiming that the “jigsaw” method has a Cohen’s d value of 1.20. As stats people know, that’s simply ENORMOUS.
It’s possible…but I wasn’t at all surprised to find very little support for that claim. I honestly can’t think of any teaching intervention that makes that much of a difference on its own.
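For readers who aren’t “stats people”: Cohen’s d is simply the difference between two group means divided by their pooled standard deviation. The sketch below uses invented exam scores (not data from any actual study) just to show the calculation; a d above 1 means the average student in one group outscored the vast majority of the other group, which is why 1.20 is such a striking claim.

```python
import statistics

# Invented exam scores (out of 100) for a hypothetical "jigsaw" group
# and a hypothetical control group -- illustration only.
jigsaw  = [72, 85, 78, 90, 84, 81, 74, 88]
control = [70, 75, 62, 82, 58, 77, 66, 80]

def cohens_d(group_a, group_b):
    """Difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

print(f"d = {cohens_d(jigsaw, control):.2f}")   # ~1.3 for these invented scores

# For context: a d of 1.20 would mean the average "jigsaw" student
# outscored roughly 88% of the control group -- a very rare effect size
# for any single classroom intervention.
```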
TL;DR
Like weather forecasting, psychology and neuroscience research…
… looks at enormously complicated systems,
… offers conclusions best understood as probabilities, and
… is likeliest to be right when it makes modest claims.
In brief: this field can be fantastically useful to classroom teachers, as long as we understand its challenges and limitations.
Our teacherly “trick knee” might be right from time to time. But wisely considered research will probably be better.
Last week, I wrote about the often-testy debates that surround “constructivism.”
One possible cause for the heat behind these debates: we have at least two different meanings for the word “constructivism” in education.
Definition A emphasizes the physical activity and social interactivity of the classroom.
If students are up and moving about, if they’re working and talking and building ideas together, that’s “constructivism.”
Definition B emphasizes the mental activity within the student.
If the student is thinking and learning, then by definition they have “constructed” new meaning.
If students aren’t moving about and aren’t interacting with peers, but they ARE LEARNING — well, that’s constructivism. (Of course, if they learn while moving and interacting, then that’s also constructivism.)
Can we use these distinct definitions to develop research that sheds light on this debate?
Guides and Sages
A study published in 2019 — led by Dr. Elizabeth Fischer and Dr. Martin Hänze — explores this question:
Is the definition A constructivism (students moving and interacting) the best way to accomplish the definition B constructivism (students learning)?
To answer this question, they undertake a remarkably thorough and complex study with German college students.
As I will explore in detail below, I do think the study has important limitations — limitations, by the way, which the authors themselves highlight.
But I do think …
… that they’ve undertaken a good-faith effort to answer this question with a perfectly reasonable approach, and
… that they’ve taken great care with the nitty-gritty research specifics.
In other words: this study might not be perfect (no study is), but it’s not somehow a partisan hack job.
Some highlights: Fischer and Hänze surveyed over 1700 students in 80 different college courses — from physics to social sciences to language to music. So, they have a large sample size from an unusually diverse range of disciplines.
These surveys — both at the beginning and the end of the term — measured everything from the students’ initial interest in the topic, to their sense of how much they learned, to the amount of mental effort that the course required.
And the researchers themselves evaluated these 80 courses for two different approaches to teaching:
They measured “student-activating methods,” where all students were working “without direct involvement of the teacher”: reading, exercises, group work, games.
They also measured “teacher-guided methods,” which focused primarily on the teacher’s instruction: lecture, demonstration, videos.
Here’s the logical chain they explored:
First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?
Second: which classroom approach — “student activating methods” or “teacher-guided methods” — resulted in more mental effort?
In this way, Fischer and Hänze could distinguish between the two kinds of constructivism. Did “student-activating methods” (definition A) lead to more learning (definition B)?
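In very rough terms, that logical chain looks something like the sketch below. The numbers are invented course-level averages, and the real study used far more sophisticated models; this is only meant to show the two correlations being chained together.

```python
import numpy as np

# Invented per-course survey averages (1-7 scales) and invented
# percentages of teacher-guided class time -- illustration only.
perceived_effort   = np.array([3.2, 4.1, 4.8, 5.5, 2.9, 4.4, 5.1, 3.8])
perceived_learning = np.array([3.0, 4.3, 4.6, 5.8, 2.7, 4.5, 5.3, 3.6])
teacher_guided_pct = np.array([40,  55,  70,  80,  35,  60,  75,  50])

# Step 1: does more perceived mental effort go with more perceived learning?
r_effort_learning = np.corrcoef(perceived_effort, perceived_learning)[0, 1]

# Step 2: does more teacher-guided time go with more perceived effort?
r_guided_effort = np.corrcoef(teacher_guided_pct, perceived_effort)[0, 1]

print(f"effort ~ learning: r = {r_effort_learning:.2f}")
print(f"teacher-guided time ~ effort: r = {r_guided_effort:.2f}")
# Remember: correlations like these show association, not causation.
```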
Tentative Conclusions
Unsurprisingly, given the amount of data they gathered, these researchers reached a great many detailed conclusions.
For instance: teachers devoted A LOT more time to “teacher-guided methods” (64% of class time) than to “student-activating methods” (13.8%).
Let’s focus on the two questions listed above:
First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?
Unsurprisingly: yes. Students who felt that they thought harder also thought that they learned more. (Teachers everywhere are breathing a sigh of relief.)
Second: which classroom approach — “student activating methods” or “teacher-guided methods” — resulted in more mental effort?
The results were clear: students reported thinking harder and learning more with more “teacher-guided methods,” and less with “student-activating methods.”
In other words: in these college classes, definition A constructivism (student activity/interaction) resulted in less definition B constructivism (student learning) than more “teacher-guided” methods.
Recommendations and Limitations
Given their findings, you might expect Fischer and Hänze to demand an end to “student-activating methods.”
THEY DON’T.
Do our findings indicate that university teachers should stop being the guide on the side and return to being the sage on the stage? We refrain from deducing this kind of prescription.
Instead, they want to slow the rush in the other direction. They don’t think their evidence is strong enough to forbid guide-on-the-side teaching, but it certainly doesn’t recommend it.
Part of their hesitation stems from the limitations to their approach — limitations which they themselves highlight.
First: their data show correlation, not causation. (Always an important distinction.)
Second: they focus on student survey data. Such data are, by definition, subjective. For instance, students report that they learned more from “teacher-guided methods,” but did they? We don’t know.
By the way, researchers had good methodological reasons to rely on surveys. But that reliance does require cautious interpretation.
Third: recall that “teacher-guided methods” accounted for 64% of class time, far more than “student-activating methods” (13.8%).
The researchers reasonably wonder if “student-activating methods” are less effective because the professors haven’t practiced them very much, and aren’t very good at them yet.
That’s a reasonable question.
Long-time readers know my own hypothesis. I suspect that “teacher-guided methods” probably help students more early in their learning process, and “student-activating methods” help more later in that process. This study doesn’t measure that variable, and doesn’t even try to answer the question.
So: Constructivism, or Constructivism?
When these German researchers make a good-faith effort to answer that question in college courses, they arrive at two tentative findings:
Unsurprisingly, students learn more when they think more. Thinking causes “construction” of new knowledge.
Perhaps surprisingly, “student activating methods” do not help students think more than “teacher guided methods.” Instead, students think more when they sit and listen to a well-prepared talk than they do while moving about and interacting with one another.
Although Fischer and Hänze are appropriately modest in their recommendations, their research does suggest that we should slow down and ask challenging questions before we insist on too much student movement and interaction.
Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.
I have recently been reviewing human-technology interactions with a team at Brookings, and it is abundantly clear that we all have strong opinions about technology. Questions about technology surface with trepidation, curiosity, and personal doubt: How is technology affecting our development? What should we be worried about? What are the policy implications? What is happening to society? What is fact and what is driven by fear of the unknown? Then I was introduced to a book that addressed just what we needed.
Meet Russell Neuman, the friendly face behind the media technology scene at New York University. As a founding faculty member of the MIT Media Laboratory and a former Senior Policy Analyst in the White House Office of Science and Technology Policy, Russell brings a wealth of experience and expertise to the table. In his exciting new book, Evolutionary Intelligence: How Technology Will Make Us Smarter, Neuman points out that behind the questions above lies not only a need to understand the technology but also a greater need to understand ourselves (our cognition and our history).
We are evolving with our technology. Our brains have adapted to help us cover the globe, but those adaptations are limited by the stretch of our genes and the bounds of our biology. With technology, we move beyond the genetic and beyond social biases, developing new skills as well as new worlds to adapt to. But we are the ones with the agency, ability, and drive to push ourselves forward and create our own futures, for better or worse. As Neuman points out, we need to take the reins of responsibility, not with fear but by getting to know ourselves.
In this book, readers are invited on a captivating exploration of the intricate interplay between emerging technologies and human cognition. The first chapter unveils the transformative potential inherent in these advancements, emphasizing the symbiotic relationship between humans and machines as the cradle of developing intelligence. Rather than merely admiring technological marvels, the book underscores the paramount importance of human agency and historical context in shaping our technological trajectory. Neuman then offers an intriguing overview of the mechanics behind upcoming technologies, including innovative modes of human-machine interaction like direct-to-brain communication, accompanied by a measured skepticism regarding their implications. And although we are often fearful of AI taking over our lives, there may be some places where the technology can actually overcome our more detrimental natures. The author tackles this head-on, illuminating the potential of AI to rectify systemic prejudices in human perception—a formidable challenge in its own right. The fourth (appropriately lengthy) chapter then aptly warns that great innovation demands great responsibility, addressing urgent concerns such as privacy breaches, social disparities, and the erosion of human autonomy in the digital era.
As the author points out, we often do not recognize social revolutions while we are in the middle of them. From this perspective, Neuman adds some fascinating historical context to his argument. Our interactions with new technologies have always been fraught with fear, trepidation, excitement, and misunderstanding. Drawing parallels to Darwin’s insights on evolution, Chapter 6 underscores the pervasive presence of evolutionary intelligence in contemporary society. We are then taken on an exploration of how technology may interact with the domains of our lives, from healthcare to finance, highlighting the need for thoughtful navigation in this rapidly evolving landscape. The book closes with discussions of our personal responsibility for technology and the ethics of how we use and advance it.
Like me, you will fill the book’s pages with highlights and notes that point to important aspects for parents, students, policymakers, and teachers. You will also savor the historical and research tidbits that can make you the life of the party. But it is the framing that makes this a worthwhile read and a personally reflective experience.
So, will AI replace us? It’s not a war but a coevolutionary dance: it’s evolutionary intelligence. This is a great primer on the important issues, preparing you for the so-called singularity, the point at which technology allows us to ‘transcend the limitations of our bodies.’ When is that supposed to be? Futurists agree: this is only about 20 years away. Best to be prepared—join Russell Neuman in a fresh perspective and get to know yourself and the technology that will evolve with you and the new generation of humans.
If you want to launch a feisty debate in your next faculty meeting, stand up and ask “who here supports ‘constructivism’?”
This question all but guarantees heated discussion, because …
… MANY people identify strongly with one side or another (“I definitely DO!” “Well, I certainly DON’T!!”)
and
… people on one side often think ill of people on the other (“How can you possibly hold that immoral position?”).
We’re talking Hatfields and McCoys here.
Typically, this blog would look to research to resolve such a debate. Alas, the debate resists a research-based answer, because we can easily find research supporting both positions.
Whether you’re championing PBL or direct instruction or inquiry learning or Direct Instruction or guiding from the side or a knowledge-rich curriculum, you’ve got research to bolster your claim.
Is there any way out of this tangle? Will we always disagree?
A Fresh Place to Start?
In many fields, basic definitions can be helpfully straightforward.
If I’m a carpenter, I can tell you what a “nail” is. In fact, I can show you several. They’re right there.
If I’m a surgeon, I can tell you what a “pancreas” is. I can tap it with my scalpel. It’s right there.
When a mechanic repairs my car, she can show me the “rust” on my “carburetor.” See? Right there.
And so forth.
In psychology and education research, alas, basic definitions get really tricky.
How, exactly, do I define “attention”? Or, “engagement”? Or, heck, how do I define “learning”?
If I can’t define these core concepts, then I can’t develop a precise way to measure them. And research relies on precise measurements…
For that reason, we might start reexamining our Hatfield/McCoy feud about “constructivism” by looking at basic definitions.
What, exactly, does the word “constructivism” mean?
Take Two
This potentially pedantic question turns out to have helpful results.
First: physical + social activity = mental change.
If students are up and DOING carefully designed activities — if they’re moving about and (preferably) interacting with other students — this combination of physical and social work leads to the construction of new mental models.
This kind of “active” classroom work — unlike the “passive” classrooms where students sit and listen to a teacher explain — results in learning.
In other words: we can tell by looking at the students in the classroom whether or not “constructivist teaching” is happening.
Second: mental change = mental change.
This second definition focuses on learning more than on teaching.
It says: OF COURSE students learn by constructing their own mental models. (Unlike The Matrix, reality does not allow us to download mental models into our students.)
If students have learned, they have constructed. ALL LEARNING is by definition constructivist.
So the core question is: what teaching methods produce that result?
If, say, a lecture results in learning, then a lecture is “constructivist.”
If a jigsaw activity results in learning, then it too is “constructivist.”
In other words: we can’t tell by looking at the students whether or not constructivist teaching is happening.
Instead, we can tell only if we determine how much they learned. If they learned, they constructed. Voila: constructivism.
Rethinking Strong Claims
This definitional muddle might explain some of the “are you a constructivist” debate.
According to the first definition, I’m a constructivist teacher if I teach in particular ways: my students are up and about, working and talking with each other and — as much as possible — finding their own way through a particular concept.
According to the second definition, I’m a constructivist teacher if my students learn.
In other words: EVERYONE wants to be a constructivist according to the second definition. The first definition, however, prompts the feud.
Champions of the first definition believe those methods produce the second. That is: “physical + social” constructivism (“guide on the side”) should result in more learning.
Critics of the first definition believe that “sage on the stage” teaching results in more learning — more construction of mental models.
Once we separate these two ways of thinking about “constructivism,” we can ask:
“How might we use this distinction in research? Can we determine which approach — saging or guiding — results in more construction of new mental models?”
I’ve found a study that tries to do just that. I’ll summarize it in next week’s blog post…
Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.
The basic question: should students make their own flashcards, or use premade flashcards?
In my experience, people really like the answer “students should make their own.” This approach feels more active/good, compared with the passive/bad strategy of using someone else’s.
I have had two concerns with this popular answer.
First: how will students know what flashcards to make? Because students don’t know what they don’t know, they probably don’t know what they should be studying with flashcards.
Second: making flashcards takes time. If students have only 30 minutes to study, and spend 5-10 of those minutes making cards, then they have much less time to review.
And, as I wrote back in March of 2023, the most on-point study I could find confirmed those suspicions. Students who made their own flashcards did not learn more — although they DID spend more time.
If you can learn the same amount in less time, why not do that?
For all these reasons, I was surprised to read the title of the study my friend recommended:
“User-Generated Digital Flashcards Yield Better Learning Than Premade Flashcards”
What’s going on? Have I been giving you bad advice?
A Feature, Not a Bug
Before I explore this new study — led by Dr. Steven Pan — I want to focus on the seemingly vexing problem of conflicting studies.
That is: Lin & McDaniel’s study from 2018 says “making your own flashcards DOESN’T help you learn.” Pan’s study from 2024 says “making your own flashcards DOES help you learn.”
What’s going on here?
I started coming to Learning and the Brain because I wanted research-based answers to my teaching questions. (Perhaps you started coming for the same reason.)
The process, I thought, would go like this:
I have a question.
Researchers tell me the correct answer.
I do what the researchers tell me.
So simple.
Here’s the problem. Psychology is complicated. People are COMPLICATED. Teaching and learning are COMPLICATED.
When we start using psychology research methods to study people who are teaching and learning, we’re studying fantastically complicated systems.
OF COURSE we get contradictory answers to specific questions.
This result doesn’t mean something is wrong or broken. It means that we’ve done enough research to unearth the genuine complexity of the questions we’re exploring: psychology + people + teaching + learning.
The fact that we’ve found contradictory results means that we’re doing this right. In other words: if all the research I find tells me the same thing, I probably haven’t looked at enough research.
So: if contradictory findings are the norm, what do we actually do in our classrooms?
Well: we start by looking at the specifics of the research…
Specifics of the Research
I’ve got two different studies with contrasting headlines. To resolve the differences, let’s get specific.
First, Pan’s research team executed all the steps we want them to.
They’ve got precise hypotheses and control groups and abstruse calculations and colorful graphs. The conflict between Lin (2018) and Pan (2024), in other words, doesn’t result from unprofessional work.
Over the course of their six studies, Pan’s team found that students remembered more when they came up with their own definitions of words than they did when they used flashcards with pre-written definitions.
Likewise, students remembered more when they wrote their own examples on flashcards than when they used pre-written examples.
And so forth.
So: YES, students spent more time making their own cards. But because they thought harder while they did so, they ultimately remembered and applied the concepts better 2 days later.
How then can we explain the difference between the two studies?
Focus, Focus, Focus
Because these two studies are different, we can — of course! — find lots of differences.
Here’s a key difference in my view:
In the (2024) Pan study, the researchers told the students which words to study.
In the (2018) Lin and McDaniel study, they didn’t.
So, students learn more useful stuff when we tell them what stuff is useful — that’s the recent Pan study.
They don’t learn more useful stuff when they decide on their own what’s useful — that’s the older Lin & McDaniel study.
My own hypothesis, thus, goes like this:
Students will probably learn more from substantive flashcards that they generate (paraphrase, examples, etc.) than from pre-written flashcards … as long as they get good guidance about the terms and concepts that merit flashcards.
In the absence of that guidance, they probably won’t learn more — because the pre-written flashcards will focus on more useful words and concepts.
Now, my hypothesis hasn’t been tested directly, as far as I know. But it both makes sense and explains the differences between these sets of studies.
TL;DR
Should students make their own flashcards?
As long as
First: they generate new information (rather than simply copying it down), and
Second: we tell them which words and concepts merit flashcards,
then, probably yes.
Given the complexity of this research field, however, we should probably keep our eyes out for further research.
Pan, S. C., Zung, I., Imundo, M. N., Zhang, X., & Qiu, Y. (2022). User-generated digital flashcards yield better learning than premade flashcards. Journal of Applied Research in Memory and Cognition.
The “jigsaw” method sounds really appealing, doesn’t it?
Imagine that I’m teaching a complex topic: say, the digestive system.
Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.
So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.
Group A studies the liver.
Group B, they’ve got the small intestine.
Group C focuses on the duodenum.
Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.
This strategy has at least two potential advantages:
First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)
Second, by inviting students to work together, it potentially increases engagement.
Sadly, both those advantages have potential downsides.
First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:
… students must figure out their organ themselves, and
… they have to explain their organ (that’s really complicated!), and
… they have to understand other students’ explanations of several other organs!
Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?
After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.
They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)
In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.
If only we had research on the subject…
Research on the Subject
A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)
It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.
In this case, that “research-based” claim falls apart almost immediately.
The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?
Well:
… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and
… it’s published in a journal that doesn’t focus on education or psychology research, and
… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.
For that reason, we know nothing about the included studies.
Do they include a control condition?
Were they studying 4th graders or college students?
Were they looking at science or history or chess?
We just don’t know.
So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.
Next Steps
Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!
I next went to my go-to source: elicit.com. I asked it to look for research answering this question:
Does “jigsaw” teaching help K-12 students learn?
The results weren’t promising.
Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…
… already know a great deal,
… are especially committed to education,
… and have higher degrees of cognitive self-control than younger students.
So, they’re not the most persuasive source of information for K-12 teachers.
One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.
After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!
Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”
Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.
One study, however (Stanczak and colleagues, cited below), focused on 6th graders — so, it’s probably more relevant to K-12 teachers.
It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)
This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:
The study begins with a helpful summary of the research so far. (TL;DR: lots of contradictory findings!)
The researchers worked carefully to provide appropriate control conditions.
They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.
It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.
Their conclusions? How much extra learning did the jigsaw method produce?
Exactly none.
Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.
The whole process averaged out to no difference in learning whatsoever.
The Last Word?
So, does this recent study finish the debate? Should we cancel all our jigsaw plans?
Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter — start jigsawing. Here’s why:
First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.
Second: I suspect that further research will help us use this technique more effectively.
That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.
When we know more about those boundary conditions, we will know if and when to jigsaw with our students.
I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!
Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.
For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.
In sum:
If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and
If you really like its potential other benefits (cooperation, motivation),
then you can make an informed decision about using this technique well.
At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.
As far as I know, we just don’t have a clear research picture on how to do it well.
Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.
Researchers tend to focus on particular topics in education.
Some folks study attention, while others look at motivation.
A research team here might look at working memory, while that team over there considers sleep.
And: let’s not forget about technology!
Of course, it’s useful to have specialists in each of these fields; each one is highly complicated.
At the same time, as teachers, we need to understand how all the pieces fit together.
After all, if I’m successfully managing my students’ working memory load, but they’re not paying attention, then all my working-memory efforts have gone to waste.
For this reason, we’d like greater clarity about putting the pieces together.
For instance, you might wonder: what’s the relationship between cognitive load and motivation?
I’m so glad you asked…
Calling Australia
In a recently published study, an international group of researchers asked almost 1300 Australian students in grades 7-10 to fill out surveys about their academic experience.
Some questions asked students to rate their teachers’ efforts to reduce the complexity of the material (that is, the “instructional load”):
On a scale of 1-7, they responded to statements like:
“When we learn new things in class, the teacher makes it easy at first.”
“As we work on tasks or activities in this class, the teacher gives good assistance.”
“In this class, the teacher makes sure we get enough practice before moving on to new tasks or activities.”
Other statements focused on the mental work the students were doing (that is, their “cognitive load”):
“The work in this class is very difficult for me.”
“The way information is presented in this class is too complex.”
Still others inquired about the teachers’ motivational strategies, and the students’ experience of motivation:
“The teacher communicates which learning goals he/she expects you to accomplish by the end of the lesson;”
“The teacher doesn’t plan or organize too much. The lesson will just happen;”
“The teacher offers a very interesting, highly engaging lesson;”
“The teacher insists that you have to finish all your required work—no exceptions, no excuses”
As you can see right away, these researchers have an ENORMOUS amount of data to crunch as they calculate the relationships among all these variables.
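To give a loose sense of what “crunching the relationships” involves, here’s a minimal sketch with invented student-level scores and hypothetical scale names (the real study used many more items and far more elaborate models):

```python
import pandas as pd

# Invented per-student scale scores -- illustration only.
survey = pd.DataFrame({
    "load_reducing_instruction": [5.5, 4.0, 6.2, 3.1, 5.8, 4.6],   # 1-7 scale
    "cognitive_load":            [2.8, 4.5, 2.1, 5.9, 2.5, 4.0],   # 1-7 scale
    "motivation":                [5.9, 4.2, 6.4, 3.0, 6.1, 4.8],   # 1-7 scale
    "achievement":               [82,  71,  88,  60,  85,  74],    # exam score
})

# Pairwise correlations among all the scales: the raw material for the
# structural models the researchers actually report.
print(survey.corr().round(2))
```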
By the way, we should note the researchers’ method here: they’re considering survey data. Some people — quite reasonably — worry that survey data can’t really capture classroom reality.
For instance, a student might perceive that “the teacher doesn’t plan or organize too much. The lesson will just happen.” But that perception might be entirely inaccurate.
With that caveat in mind, what did the researchers find?
The (Basic) Results: Reversing Assumptions
Given all the variables measured above, we can expect LOTS of findings — reported in graphs and equations and declarations.
What does it all boil down to?
The simple summary reverses a common belief.
Teachers often assume that “when my students feel motivated, they will learn more.”
These data suggest that “when my students learn more, they feel motivated.”
More precisely: according to the survey data, teachers who ensure that cognitive load remains manageable help students learn more. That additional learning correlates with higher subsequent motivation.
This finding makes a rough-n-ready sense. For example: my students rarely clamor to learn grammar; they are, honestly, not super motivated to do so.
However, part of their demotivation results from the fact that grammar learning is notoriously difficult. (“Object complements,” anyone?) My students just feel bad when they keep failing at it.
If I teach well — that is, if I reduce the cognitive load of learning grammar — they are likelier to succeed at doing so. Result: they feel less demotivated. Heck, they might even feel motivated.
The (Advanced) Results: All That Lingo
Research ain’t research if it doesn’t include lots of niche-y terminology.
Unfortunately, as is often the case, the terminology here gets rather confusing.
Because the research team draws on two very different fields (working memory, motivation), and two different theories (cognitive load, self-determination), we can easily get bogged down in the terminological niceties.
For example, the researchers consider a teacher’s motivational style along two axes: do the teachers support or thwart students’ needs; are they highly directive or not. The resulting four quadrants are broken down into further sub-categories, resulting in — I’m not joking here — EIGHT different teaching styles: “attuning,” “guiding,” “clarifying,” “demanding,” “domineering,” and so forth.
The word “autonomy” — quite an important word in self-determination theory — leads to particular puzzles.
We might reasonably think that “autonomy” means “the teacher encourages students by giving them relatively more freedom to explore and solve problems on their own.” However, in this terminological world:
“Autonomy support … directs attention towards activities that are necessary for learning, thus reducing the proportion of cognitive load that is extraneous.”
Which is to say, teachers support autonomy by “directing” students in specific ways. Lots of student freedom might sound “autonomous,” but that’s not what the word “autonomy” means in this context.
To Sum Up
I’ve focused on this terminology because I think the study’s results are easy to misunderstand.
Here is their conclusion, quoted from the abstract:
We conclude that by using load-reducing strategies and a motivating style characterized by structure and autonomy support, teachers can reduce students’ cognitive load and improve their self-regulated motivation, engagement, and achievement.
In that sentence, “autonomy support” does NOT mean “give students lots of freedom” or “be sure to emphasize voice and choice.” Instead, it means “students can think effectively and autonomously when teachers direct them towards specific and manageable cognitive work.”
That effective thinking, in turn, results in more learning, and higher levels of motivation.
The big picture: by putting together two distinct research fields — cognitive load theory and self determination theory — we can start to get a clearer picture of complex classroom realities.
A final note:
The title of this blog post includes the phrase “teaching style” because the researchers use it.
That phrase, however, does NOT mean that “learning styles” are a thing. They’re not.
Evans, P., Vansteenkiste, M., Parker, P., Kingsford-Smith, A., & Zhou, S. (2024). Cognitive Load Theory and Its Relationships with Motivation: a Self-Determination Theory Perspective. Educational Psychology Review, 36(1), 7.