February 2024 – Education & Teacher Conferences
Constructivism, or Constructivism? Part II
Andrew Watson

Last week, I wrote about the often-testy debates that surround “constructivism.”

One possible cause for the heat behind these debates: we have at least two different meanings for the word “constructivism” in education.

Definition A emphasizes the physical activity and social interactivity of the classroom.

If students are up and moving about, if they’re working and talking and building ideas together, that’s “constructivism.”

Definition B emphasizes the mental activity within the student.

If the student is thinking and learning, then by definition they have “constructed” new meaning.

If students aren’t moving about and aren’t interacting with peers, but they ARE LEARNING — well, that’s constructivism. (Of course, if they learn while moving and interacting, then that’s also constructivism.)

Can we use these distinct definitions to develop research that sheds light on this debate?

Guides and Sages

A study published in 2019 — led by Dr. Elisabeth Fischer and Dr. Martin Hänze — explores this question:

Is definition A constructivism (students moving and interacting) the best way to accomplish definition B constructivism (students learning)?

To answer this question, they undertook a remarkably thorough and complex study with German college students.

As I will explore in detail below, I do think the study has important limitations — limitations, by the way, which the authors themselves highlight.

But I do think …

… that they’ve undertaken a good-faith effort to answer this question with a perfectly reasonable approach, and

… that they’ve taken great care with the nitty-gritty research specifics.

In other words: this study might not be perfect (no study is), but it’s not somehow a partisan hack job.

Some highlights: Fischer and Hänze surveyed over 1700 students in 80 different college courses — from physics to social sciences to language to music. So, they have a large sample size from an unusually diverse range of disciplines.


These surveys — both at the beginning and the end of the term — measured everything from the students’ initial interest in the topic, to their sense of how much they learned, to the amount of mental effort that the course required.

And the researchers themselves evaluated these 80 courses for two different approaches to teaching:

They measured “student-activating methods,” where all students were working “without direct involvement of the teacher”: reading, exercises, group work, games.

They also measured “teacher-guided methods,” which focused primarily on the teacher’s instruction: lecture, demonstration, videos.

Here’s the logical chain they explored:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Second: which classroom approach — “student activating methods” or “teacher-guided methods” — resulted in more mental effort?

In this way, Fischer and Hänze could distinguish between the two kinds of constructivism. Did “student-activating methods” (definition A) lead to more learning (definition B)?

Tentative Conclusions

Unsurprisingly, given the amount of data they gathered, these researchers reached a great many detailed conclusions.

For instance: teachers devoted A LOT more time to “teacher-guided methods” (64% of class time) than to “student-activating methods” (13.8%).

Let’s focus on the two questions listed above:

First: did the students’ “amount of perceived mental effort” in fact lead to a greater sense of learning?

Unsurprisingly: yes. Students who felt that they thought harder also thought that they learned more. (Teachers everywhere are breathing a sigh of relief.)

Second: which classroom approach — “student-activating methods” or “teacher-guided methods” — resulted in more mental effort?

The results were clear: students reported thinking harder and learning more with more “teacher-guided methods,” and less with “student-activating methods.”

In other words: in these college classes, definition A constructivism (student activity/interaction) resulted in less definition B constructivism (student learning) than more “teacher-guided” methods.

Recommendations and Limitations

Given their findings, you might expect Fischer and Hänze to demand an end to “student-activating methods.”

THEY DON’T.

Do our findings indicate that university teachers should stop being the guide on the side and return to being the sage on the stage? We refrain from deducing this kind of prescription.

Instead, they want to slow the rush in the other direction. They don’t think their evidence is strong enough to forbid guide-on-the-side teaching, but it certainly doesn’t recommend it.

Part of their hesitation stems from the limitations to their approach — limitations which they themselves highlight.

First: their data show correlation, not causation. (Always an important distinction.)

Second: they focus on student survey data. Such data are, by definition, subjective. For instance, students report that they learned more from “teacher-guided methods,” but did they? We don’t know.

By the way, researchers had good methodological reasons to rely on surveys. But that reliance does require cautious interpretation.

Third: recall that “teacher-guided methods” filled 64% of class time, far more than “student-activating methods” (13.8%).

The researchers reasonably wonder if “student-activating methods” are less effective because the professors haven’t practiced them very much, and aren’t very good at them yet.

That’s a reasonable question.

Long-time readers know my own hypothesis. I suspect that “teacher-guided methods” probably help students more early in their learning process, and “student-activating methods” help more later in that process. This study doesn’t measure that variable, and doesn’t even try to answer the question.

So: Constructivism, or Constructivism?

When these German researchers make a good-faith effort to answer that question in college courses, they arrive at two tentative findings:

Unsurprisingly, students learn more when they think more. Thinking causes “construction” of new knowledge.

Perhaps surprisingly, “student-activating methods” do not help students think more than “teacher-guided methods.” Instead, students think more when they sit and listen to a well-prepared talk than they do while moving about and interacting with one another.

Although Fischer and Hänze are appropriately modest in their recommendations, their research does suggest that we should slow down and ask challenging questions before we insist on too much student movement and interaction.


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Evolutionary Intelligence by Russell Neuman
Erik Jahner, PhD

I have recently been reviewing human-technology interactions with a team at Brookings, and it is abundantly clear that we all have strong opinions about technology. Questions about technology surface with trepidation, curiosity, and personal doubt: How is technology affecting our development? What should we be worried about? What are the policy implications? What is happening to society? What is fact and what is driven by fear of the unknown? Then I was introduced to a book that addressed just what we needed.

Meet Russell Neuman, the friendly face behind the media technology scene at New York University. As a founding faculty member of the MIT Media Laboratory and a former Senior Policy Analyst in the White House Office of Science and Technology Policy, Russell brings a wealth of experience and expertise to the table. In his exciting new book, Evolutionary Intelligence: How Technology Will Make Us Smarter, Neuman points out that behind the questions above lies a need not only to understand the technology but also, even more, to understand ourselves (our cognition and our history).

We are evolving with our technology. Our brains have adapted to help us cover the globe, but those adaptations are also limited by the stretch of our genes and the limits of our biology. With technology, we move beyond the genetic and beyond social biases, developing new skills as well as new worlds to adapt to. But we are the ones with the agency, the ability, and the drive to push ourselves forward and create our own futures, for better or worse. As Neuman points out, we need to take the reins of responsibility, not with fear but by getting to know ourselves.

In this book, readers are invited on a captivating exploration of the intricate interplay between emerging technologies and human cognition. The first chapter unveils the transformative potential inherent in these advancements, emphasizing the symbiotic relationship between humans and machines as the cradle of developing intelligence. Rather than merely admiring technological marvels, the book underscores the paramount importance of human agency and historical context in shaping our technological trajectory. Neuman then offers an intriguing overview of the mechanics behind upcoming technologies, including innovative modes of human-machine interaction like direct-to-brain communication, accompanied by a measured skepticism regarding its implications. But while we are often fearful of AI taking over our lives, there may be some places where the technology can actually overcome our more detrimental natures. The author tackles this head-on, illuminating the potential of AI to rectify systemic prejudices in human perception—a formidable challenge in its own right. However, the fourth chapter, appropriately lengthy, aptly warns that great innovation demands great responsibility, addressing urgent concerns such as privacy breaches, social disparities, and the erosion of human autonomy in the digital era.

As the author points out, we often do not recognize social revolutions while we are in the middle of them. From this perspective, Neuman adds some fascinating historical context to his argument. Our interactions with new technologies have always been fraught with fear, trepidation, excitement, and misunderstanding. Drawing parallels to Darwin’s insights on evolution, Chapter 6 underscores the pervasive presence of evolutionary intelligence in contemporary society. We are then taken on an exploration of the domains of our lives, from healthcare to finance, and how technology may interact with them, highlighting the need for thoughtful navigation in this rapidly evolving landscape. The book is rounded off with discussions of our personal responsibility for technology and the ethics of how we use it and move it forward.

Like me, you will fill the book’s pages with highlights and notes that point to important aspects for parents, students, policymakers, and teachers. You will also savor the historical and research tidbits that can make you the life of the party. But it is the framing that makes this a worthwhile read and a personally reflective experience.

So, will AI replace us? It’s not a war but a coevolutionary dance: it’s evolutionary intelligence. This is a great primer on the important issues, preparing you for the so-called singularity, the point at which technology allows us to ‘transcend the limitations of our bodies.’ When is that supposed to be? Futurists agree: it is only about 20 years away. Best to be prepared—join Russell Neuman in a fresh perspective and get to know yourself and the technology that will evolve with you and the new generation of humans.

Constructivism, or Constructivism? Part I
Andrew Watson

If you want to launch a feisty debate in your next faculty meeting, stand up and ask “who here supports ‘constructivism’?”

This question all but guarantees heated discussion, because …

… MANY people identify strongly with one side or another (“I definitely DO!” “Well, I certainly DON’T!!”)

and

… people on one side often think ill of people on the other (“How can you possibly hold that immoral position?”).

We’re talking Hatfields and McCoys here.

Typically, this blog would look to research to resolve such a debate. Alas, the debate resists a research-based answer, because we can easily find research supporting both positions.


Whether you’re championing PBL or direct instruction or inquiry learning or Direct Instruction or guiding from the side or a knowledge-rich curriculum, you’ve got research to bolster your claim.

Is there any way out of this tangle? Will we always disagree?

A Fresh Place to Start?

In many fields, basic definitions can be helpfully straightforward.

If I’m a carpenter, I can tell you what a “nail” is. In fact, I can show you several. They’re right there.

If I’m a surgeon, I can tell you what a “pancreas” is. I can tap it with my scalpel. It’s right there.

When a mechanic repairs my car, she can show me the “rust” on my “carburetor.” See? Right there.

And so forth.

In psychology and education research, alas, basic definitions get really tricky.

How, exactly, do I define “attention”? Or, “engagement”? Or, heck, how do I define “learning”?

If I can’t define these core concepts, then I can’t develop a precise way to measure them. And research relies on precise measurements…

For that reason, we might start reexamining our Hatfield/McCoy feud about “constructivism” by looking at basic definitions.

What, exactly, does the word “constructivism” mean?

Take Two

This potentially pedantic question turns out to have helpful results.

We’ve got (at least) two fundamentally different ways of thinking about constructivism.

First: physical + social activity = mental change.

If students are up and DOING carefully designed activities — if they’re moving about and (preferably) interacting with other students — this combination of physical and social work leads to the construction of new mental models.

This kind of “active” classroom work — unlike the “passive” classrooms where students sit and listen to a teacher explain — results in learning.

In other words: we can tell by looking at the students in the classroom whether or not “constructivist teaching” is happening.

Second: mental change = mental change.

This second definition focuses on learning more than on teaching.

It says: OF COURSE students learn by constructing their own mental models. (Unlike The Matrix, reality does not allow us to download mental models into our students.)

If students have learned, they have constructed. ALL LEARNING is by definition constructivist.

So the core question is: what teaching methods produce that result?

If, say, a lecture results in learning, then a lecture is “constructivist.”

If a jigsaw activity results in learning, then it too is “constructivist.”

In other words: we can’t tell by looking at the students whether or not constructivist teaching is happening.

Instead, we can tell only if we determine how much they learned. If they learned, they constructed. Voila: constructivism.

Rethinking Strong Claims

This definitional muddle might explain some of the “are you a constructivist” debate.

According to the first definition, I’m a constructivist teacher if I teach in particular ways: my students are up and about, working and talking with each other and — as much as possible — finding their own way through a particular concept.

According to the second definition, I’m a constructivist teacher if my students learn.

In other words: EVERYONE wants to be a constructivist according to the second definition. The first definition, however, prompts the feud.

Champions of the first definition believe those methods result in the second result. That is: “physical + social” constructivism (“guide on the side”) should result in more learning.

Critics of the first definition believe that “sage on the stage” teaching results in more learning — more construction of mental models.

Once we separate these two ways of thinking about “constructivism,” we can ask:

“How might we use this distinction in research? Can we determine which approach — saging or guiding — results in more construction of new mental models?”

I’ve found a study that tries to do just that. I’ll summarize it in next week’s blog post…


Fischer, E., & Hänze, M. (2019). Back from “guide on the side” to “sage on the stage”? Effects of teacher-guided and student-activating teaching methods on student learning in higher education. International Journal of Educational Research, 95, 26-35.

Should Students Make Their Own Flashcards, Take II
Andrew Watson

A respected colleague recently posted a study about making flashcards.

The basic question: should students make their own flashcards, or use premade flashcards?

In my experience, people really like the answer “students should make their own.” This approach feels more active/good, compared with the passive/bad strategy of using someone else’s.

I have had two concerns with this popular answer.

First: how will students know what flashcards to make? Because students don’t know what they don’t know, they probably don’t know what they should be studying with flashcards.

Second: making flashcards takes time. If students have only 30 minutes to study, and spend 5-10 of those minutes making cards, then they have much less time to review.

And, as I wrote back in March of 2023, the most on-point study I could find confirmed those suspicions. Students who made their own flashcards did not learn more — although they DID spend more time.

If you can learn the same amount in less time, why not do that?

For all these reasons, I was surprised to read the title of the study my friend recommended:

“User-Generated Digital Flashcards Yield Better Learning Than Premade Flashcards”

What’s going on? Have I been giving you bad advice?

A Feature, Not a Bug

Before I explore this new study — led by Dr. Steven Pan — I want to focus on the seemingly vexing problem of conflicting studies.

That is: Lin and McDaniel’s study from 2018 says “making your own flashcards DOESN’T help you learn.” Pan’s study from 2024 says “making your own flashcards DOES help you learn.”

What’s going on here?

I started coming to Learning and the Brain because I wanted research-based answers to my teaching questions. (Perhaps you started coming for the same reason.)

The process, I thought, would go like this:

I have a question.

Researchers tell me the correct answer.

I do what the researchers tell me.

So simple.

Here’s the problem. Psychology is complicated. People are COMPLICATED. Teaching and learning are COMPLICATED.

When we start using psychology research methods to study people who are teaching and learning, we’re studying fantastically complicated systems.

OF COURSE we get contradictory answers to specific questions.

This result doesn’t mean something is wrong or broken. It means that we’ve done enough research to unearth the genuine complexity of the questions we’re exploring: psychology + people + teaching + learning.

The fact that we’ve found contradictory results means that we’re doing this right. In other words: if all the research I find tells me the same thing, I probably haven’t looked at enough research.

So: if contradictory findings are the norm, what do we actually do in our classrooms?

Well: we start by looking at the specifics of the research…

Specifics of the Research

I’ve got two different studies with contrasting headlines. To resolve the differences, let’s get specific.

First, Pan’s research team executed all the steps we want them to.

They’ve got precise hypotheses and control groups and abstruse calculations and colorful graphs. The conflict between Lin (2018) and Pan (2024), in other words, doesn’t result from unprofessional work.

Over the course of their six studies, Pan’s team found that students remembered more when they came up with their own definitions of words than they did when they used flashcards with pre-written definitions.

Likewise, students remembered more when they wrote their own examples on flashcards than when they used pre-written examples.

And so forth.

So: YES, students spent more time making their own cards. But because they thought harder while they did so, they ultimately remembered and applied the concepts better 2 days later.

How then can we explain the difference between the two studies?

Focus, Focus, Focus

Because these two studies are different, we can — of course! — find lots of differences.

Here’s a key difference in my view:

In the (2024) Pan study, the researchers told the students which words to study.

In the (2018) Lin and McDaniel study, they didn’t.

So, students learn more useful stuff when we tell them what stuff is useful — that’s the recent Pan study.

They don’t learn more useful stuff when they decide on their own what’s useful — that’s the older Lin & McDaniel study.

My own hypothesis, thus, goes like this:

Students will probably learn more from substantive flashcards that they generate (paraphrase, examples, etc.) than from pre-written flashcards … as long as they get good guidance about the terms and concepts that merit flashcards.

In the absence of that guidance, they probably won’t learn more — because the pre-written flashcards will focus on more useful words and concepts.

Now, my hypothesis hasn’t been tested directly, as far as I know. But it both makes sense and explains the differences between these sets of studies.

TL;DR

Should students make their own flashcards?

As long as

First: they generate new information (rather than simply copying it down), and

Second: we tell them which words and concepts merit flashcards,

then, probably yes.

Given the complexity of this research field, however, we should probably keep our eyes out for further research.


Pan, S. C., Zung, I., Imundo, M. N., Zhang, X., & Qiu, Y. (2022). User-generated digital flashcards yield better learning than premade flashcards. Journal of Applied Research in Memory and Cognition.

The Jigsaw Advantage: Should Students Puzzle It Out?
Andrew Watson

The “jigsaw” method sounds really appealing, doesn’t it?

Imagine that I’m teaching a complex topic: say, the digestive system.

Asking students to understand all those pieces — pancreas here, stomach there, liver yon — might get overwhelming quickly.

So, I could break that big picture down into smaller pieces: puzzle pieces, even. And, I assign different pieces to subgroups of students.

Group A studies the liver.

Group B, they’ve got the small intestine.

Group C focuses on the duodenum.

Once each group understands its organ — its “piece of the puzzle” — they can explain it to their peers. That is: they re-assemble the larger puzzle from the small, understandable bits.

This strategy has at least two potential advantages:

First, by breaking the task down into smaller steps, it reduces working memory load. (Blog readers know that I’m a BIG advocate for managing working memory load.)

Second, by inviting students to work together, it potentially increases engagement.

Sadly, both those advantages have potential downsides.

First: the jigsaw method could reduce working memory demands initially. But: it also increases working memory demands in other ways:

… students must figure out their organ themselves, and

… they have to explain their organ (that’s really complicated!), and

… they have to understand other students’ explanations of several other organs!

Second: “engagement” is a notoriously squishy term. It sounds good — who can object to “engagement”? — but how do we define or measure it?

After all, it’s entirely possible that students are “engaged” in the process of teaching one another, but that doesn’t mean they’re helpfully focused on understanding the core ideas I want them to learn.

They could be engaged in, say, making their presentation as funny as possible — as a way of flirting with that student right there. (Can you tell I teach high school?)

In other words: it’s easy to spot ways that the jigsaw method could help students learn, or could interfere with their learning.

If only we had research on the subject…

Research on the Subject

A good friend of mine recently sent me a meta-analysis purporting to answer this question. (This blog post, in fact, springs from his email.)

It seems that this meta-analysis looks at 37 studies and finds that — YUP — jigsaw teaching helps students learn.


I’m always happy to get a research-based answer…and I always check out the research.

In this case, that “research-based” claim falls apart almost immediately.

The meta-analysis crunches the results of several studies, and claims that jigsaw teaching has a HUGE effect. (Stats people: it claims a Cohen’s d of 1.20 — that’s ENORMOUS.)
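For readers who want a sense of how big that number is: Cohen’s d expresses the difference between two group means in units of their pooled standard deviation. The formula below is simply the standard textbook definition (the group labels are illustrative, not drawn from the meta-analysis itself):

```latex
d = \frac{\bar{x}_{\text{jigsaw}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}
```

By Cohen’s own rules of thumb, a d of roughly 0.2 counts as small, 0.5 as medium, and 0.8 as large. A d of 1.20 would mean the average student in the jigsaw condition outscored about 88% of the control group.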

You’ve probably heard Carl Sagan’s rule that “extraordinary claims require extraordinary evidence.” What evidence does this meta-analysis use to make its extraordinary claim?

Well:

… it doesn’t look at 37 studies, but at SIX (plus five student dissertations), and

… it’s published in a journal that doesn’t focus on education or psychology research, and

… as far as I can tell, the text of the meta-analysis isn’t available online — a very rare limitation.

For that reason, we know nothing about the included studies.

Do they include a control condition?

Were they studying 4th graders or college students?

Were they looking at science or history or chess?

We just don’t know.

So, unless I can find a copy of this meta-analysis online (I looked!), I don’t think we can accept it as extraordinary evidence of its extraordinary claim.

Next Steps

Of course, just because this meta-analysis bonked doesn’t mean we have no evidence at all. Let’s keep looking!

I next went to my go-to source: elicit.com. I asked it to look for research answering this question:

Does “jigsaw” teaching help K-12 students learn?

The results weren’t promising.

Several studies focus on college and graduate school. I’m glad to have that information, but college and graduate students…

… already know a great deal,

… are especially committed to education,

… and have higher degrees of cognitive self-control than younger students.

So, they’re not the most persuasive source of information for K-12 teachers.

One study from the Philippines showed that, yes, students who used the jigsaw method did learn. But it didn’t have a control condition, so we don’t know if they would have learned more doing something else.

After all, it’s hardly a shocking claim to say “the students studied something, and they learned something.” We want to know which teaching strategy helps them learn the most!

Still others report that “the jigsaw method works” because “students reported higher levels of engagement.”

Again, it’s good that they did so. But unless they learned more, the “self-reports of higher engagement” argument doesn’t carry much weight.

Recent News

Elicit.com did point me to a highly relevant and useful study, published in 2022.

This study focused on 6th graders — so, it’s probably more relevant to K-12 teachers.

It also included control conditions — so we can ask “is jigsaw teaching more effective than something else?” (Rather than the almost useless question: “did students in a jigsaw classroom know more afterwards than they did before?” I mean: of course they did…)

This study, in fact, encompasses five separate experiments. For that reason, it’s much too complex to summarize in detail. But the headlines are:

The study begins with a helpful summary of the research so far. (Tl;dr : lots of contradictory findings!)

The researchers worked carefully to provide appropriate control conditions.

They tried different approaches to jigsaw teaching — and different control conditions — to reduce the possibility that they’re getting flukey results.

It has all the signs of a study where the researchers earnestly try to doubt and double-check their own findings.

Their conclusions? How much extra learning did the jigsaw method produce?

Exactly none.

Over the course of five experiments (some of which lasted an entire school term), students in the jigsaw method group learned ever-so-slightly-more, or ever-so-slightly-less, than their control group peers.

The whole process averaged out to no difference in learning whatsoever.

The Last Word?

So, does this recent study finish the debate? Should we cancel all our jigsaw plans?

Based on my reading of this research, I do NOT think you have to stop jigsawing — or, for that matter, start jigsawing. Here’s why:

First: we’ve got research on both sides of the question. Some studies show that it benefits learning; others don’t. I don’t want to get all bossy based on such a contradictory research picture.

Second: I suspect that further research will help us use this technique more effectively.

That is: jigsaw learning probably helps these students learn this material at this point in the learning process. But it doesn’t help other students in other circumstances.

When we know more about those boundary conditions, we will know if and when to jigsaw with our students.

I myself suspect that we need to focus on a key, under-discussed step in the process: when and how the teacher ensures that each subgroup understands their topic correctly before they “explain” it to the next group. If they misunderstand their topic, after all, they won’t explain it correctly!

Third: let’s assume that this recent study is correct; jigsaw teaching results in no extra learning. Note, however, that it doesn’t result in LESS learning — according to these results, it’s exactly the same.

For that reason, we can focus on the other potential benefits of jigsaw learning. If it DOES help students learn how to cooperate, or foster motivation — and it DOESN’T reduce their learning — then it’s a net benefit.

In sum:

If you’re aware of the potential pitfalls of the jigsaw method (working memory overload, distraction, misunderstanding) and you have plans to overcome them, and

If you really like its potential other benefits (cooperation, motivation),

then you can make an informed decision about using this technique well.

At the same time, I certainly don’t think we have enough research to make jigsaw teaching a requirement.

As far as I know, we just don’t have a clear research picture on how to do it well.


Stanczak, A., Darnon, C., Robert, A., Demolliens, M., Sanrey, C., Bressoux, P., … & Butera, F. (2022). Do jigsaw classrooms improve learning outcomes? Five experiments and an internal meta-analysis. Journal of Educational Psychology, 114(6), 1461.