Exploring the Nuances of Peer Feedback
Andrew Watson

Over at the Learning Scientists, Katie Marquardt digs into peer feedback.

On the one hand, we can see many reasons that peer feedback would be beneficial.

It means that students are doing more of the work than we are–and, as we know, “the one who does the work does the learning.”

And, the opportunity to give peer feedback provides students with the responsibility and autonomy we want to be teaching.

On the other hand, those benefits don’t always materialize.

As Marquardt writes:

my colleagues express skepticism about peer review, because of the poor quality of feedback students sometimes give each other, and the challenges of managing peer review activities in the lessons.

This is valid criticism, and I have seen these shortcomings in my own lessons, particularly when working with English language learners who may lack the writing skills to give their classmates good feedback.

If we can imagine good and bad sides to peer feedback, what does the research say?

What The Research Says…

If you read this blog often, you can predict what I’m about to say: we need a narrower question.

Surely the effects of peer feedback depend substantially on the peers, and the feedback.

Marquardt’s post does a great job exploring lots of specific research examples. For that reason, I encourage you to read it. As you do, ask yourself: which of the studies she describes best matches your students, and your methodology for fostering peer feedback?

To take a compelling example: one study found that students who gave feedback improved their own second drafts of an assignment more than those who received feedback.

Crucially, this finding held true for the students who “commented more on the strength of macro-meaning and the weakness of micro-meaning” of the drafts they reviewed.

To decide whether or not this study applies to you, you’ll need to know what “micro-meaning” and “macro-meaning” actually mean.

And, you’ll have to decide if research done with college physics students writing up lab reports might reasonably apply to your students.

In other words: this topic is a great example of a broader principle. When we look for research to guide our teaching, we should be sure that the people and the specific methods in the research helpfully match our teaching work and our teaching world.

Even More Good News about Mindfulness
Andrew Watson

Last week, I described a small but persuasive study about the benefits of mindfulness.

This study combined techniques from both psychology and neuroscience to show that mindfulness really can help students manage stress.

And, it even had an active control group. Just what a research wonk would desire.

As I noted at the time, however, this study focused on stress and not on grades. 

Of course, stress is important. (Let me say that again. Stress is important.) But, as teachers, we probably care about grades too.

We’d love to see another study: one that includes information on topics other than stress. Like, say, learning.

We’d also be delighted if it were larger. 40 people is nice…but several hundred would be even more persuasive.

Today’s News

Sure enough, a just-published study focused on mindfulness and several academic measures:

Grades

Attendance

Standardized math and literacy tests

Number of suspensions

Yup: mindfulness correlated with more of the good stuff (higher grades and test scores) and less of the bad stuff (suspensions).

And, this study included 2000 students in grades 5-8.

This study is, in fact, the first to show strong connections between mindfulness and these academic measures.

A Reminder

We might be tempted to jump to a strong conclusion. If

Study #1: mindfulness interventions reduce stress, and

Study #2: higher mindfulness correlates with better academic outcomes,

then we’re tempted to conclude that

Mindfulness interventions lead to better academic outcomes.

But, as we remind ourselves daily:

Correlation is not causation.

Until we run a large study (with active controls and random assignment) which shows that students who practiced mindfulness ended up with more learning, we can’t be sure of that conclusion.

However, that’s an increasingly plausible possibility, given these two studies.
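
To make the warning concrete, here’s a toy simulation. It is entirely my own illustration (not data from either study): if some hidden third factor (say, a supportive home environment) boosts both mindfulness and grades, the two will correlate even though neither causes the other.

```python
# A toy simulation, NOT data from either study: a hidden factor
# ("support") drives both mindfulness and grades, so they correlate
# even though mindfulness has zero causal effect on grades here.
import random

random.seed(1)

pairs = []
for _ in range(2000):                             # 2000 students, as in study #2
    support = random.gauss(0, 1)                  # hidden confound (my invention)
    mindfulness = 0.7 * support + random.gauss(0, 1)
    grades = 0.7 * support + random.gauss(0, 1)   # note: no mindfulness term at all
    pairs.append((mindfulness, grades))

# Pearson correlation, computed by hand
n = len(pairs)
mean_m = sum(m for m, _ in pairs) / n
mean_g = sum(g for _, g in pairs) / n
cov = sum((m - mean_m) * (g - mean_g) for m, g in pairs) / n
sd_m = (sum((m - mean_m) ** 2 for m, _ in pairs) / n) ** 0.5
sd_g = (sum((g - mean_g) ** 2 for _, g in pairs) / n) ** 0.5
print(round(cov / (sd_m * sd_g), 2))              # a solidly positive correlation
```

Random assignment is the cure: when researchers assign mindfulness training by coin flip, a hidden factor like “support” can no longer generate the correlation on its own.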

A Final Note

Both these studies were supervised by John Gabrieli, at MIT. He’ll be speaking at this fall’s Learning and the Brain conference. If you’d like to learn more about the connection between mindfulness and school, come join us (and Dr. Gabrieli) in Boston.


What (De)Motivates Struggling Math Students?
Andrew Watson

We want our students to learn. And: we want our students to want to learn.

So, the more we know about motivation, the better our schools will be.

Here’s one possibility: perhaps teachers’ beliefs about learning can motivate students. Or, sadly, demotivate them.

If that’s true, then we can un-de-motivate them — that is, we can MOTIVATE them — by realigning those beliefs.

Researchers in Germany wanted to explore this possibility.

Background Theory #1

Of course, psychologists have several theories about motivation.

In their work on Self-Determination Theory, for example, Edward Deci and Richard Ryan have argued that we’re motivated by a desire for three things:

Autonomy: that is, age-appropriate independence

Relatedness: that is, connection to other people

Competence: that is, the experience of effectiveness and even mastery

The German researchers focused particularly on the last of these: competence.

In schools, students probably feel competent when they get good grades. So, students who get bad grades need something else to feel some sense of effectiveness and mastery.

They might need a teacher who helps them see past grades to look at other parts of their development.

But, not all teachers will be able to see past grades. In particular, the researchers hypothesized that some teachers think success in math requires innate ability. If a student doesn’t have that innate ability, s/he just won’t learn very much math.

Teachers who focus on innate ability won’t bother to encourage students who get low grades.

But, teachers who don’t focus on innate ability will want to encourage students who get low grades. That encouragement might provide the feeling of competence that–according to Self-Determination Theory–provides motivation.

The Research, The Findings

To explore this causal chain, researchers investigated over 800 4th graders, taught by 56 different teachers across many different schools.

If their hypothesis is correct, then students with low grades should feel less motivated IF their teachers think math requires innate ability. But, they should feel more motivated IF their teachers think it doesn’t.

And, students with high grades should feel motivated NO MATTER their teachers’ beliefs. (After all, their high grades provide a feeling of competence–which motivates by itself.)

Sure enough, that’s what the researchers found.

Because of the research methods, the results show up in particularly opaque stats-y language, so I don’t have graphs to post or comprehensible numbers to cite.

But the simple version is: students who struggle in math felt less motivation IF their teachers believed in the importance of innate ability than if their teachers didn’t.
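
For the statistically curious: with students nested inside teachers’ classrooms, this kind of prediction is typically tested with a multilevel model that includes a grade-by-belief interaction. Here’s a minimal sketch of that style of analysis; the simulated data, variable names, and model specification are my own illustration, not the authors’ actual analysis.

```python
# A minimal multilevel-model sketch with simulated data; the variable
# names and model specification are illustrative, not the study's own.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

rows = []
for t in range(56):                      # 56 teachers, as in the study
    belief = rng.normal()                # teacher's innate-ability belief
    for _ in range(15):                  # ~15 students per class -> 840 students
        grade = rng.normal()
        # simulate the hypothesized pattern: low grades sap motivation
        # mainly when the teacher believes in innate ability
        motivation = 0.5 * grade - 0.3 * belief * (grade < 0) + rng.normal()
        rows.append({"teacher": t, "grade": grade,
                     "belief": belief, "motivation": motivation})

df = pd.DataFrame(rows)

# Students are nested within teachers, so we fit a mixed model with a
# random intercept per teacher; the grade-by-belief interaction term
# carries the hypothesis test.
model = smf.mixedlm("motivation ~ grade * belief", df, groups=df["teacher"])
print(model.fit().summary())
```

If the interaction coefficient comes out reliably negative, low-graded students report less motivation specifically under high-belief teachers, which is the pattern the researchers describe.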

Background Theory #2

The researchers don’t use the word “mindset” here. But, of course, you can see mindset theory all over this work.

At the most obvious level: the belief that success in math requires “innate ability” is itself about as fixed a mindset as we can get.

Of course, on the other hand, teachers who believe that math success doesn’t require innate ability presumably think students can improve. That’s a growth mindset.

I mention this point because: you have no doubt seen many stories in the last few months claiming that mindset theory is all but dead.

As you’ve seen on this blog before: I think mindset theory is often badly used. (No: inspiring posters ain’t enough.) But, properly understood, it can be a powerful force for good.

Here’s an example:

If teachers accept mindset theory, they’re less likely to think that success in math requires innate ability.

And, according to this research, that means their struggling students will feel higher levels of motivation.

To me, that sounds like an easy win.

Yes or No: “Video Games Can Promote Emotional Intelligence”?
Andrew Watson

Video games stir up passionate debates among teachers.

Some of your colleagues (probably) argue that video games curdle our students’ wits, addle their morality, and disrupt their attention. (For instance: here.)

Others (probably) argue that games are the future of education, and we should be getting on board as fast as we can. (For instance: here.)

As is so often the case, I think we should avoid sweeping generalizations. Instead, let’s look carefully at each specific research claim, and see what trends develop over time.

A recent example: “can videogames be used to promote emotional intelligence in teenagers?”

Recent Claims

That suggestion, in fact, is the title of a recent study based on research in Italy. (In other words: I’m not exaggerating the claim. Those are their very words.)

This study, alas, is behind a (steep!) paywall, so I can’t be sure of all the specifics.

At the same time, the study design looks promising. Some high-school seniors played 12 hours of a video game called “EmotivaMenta,” designed to be an “experienced based learning tool” to promote emotional intelligence.

Compared to a control group, they improved at recognizing their own emotions. And, they got better at managing their emotions by cognitive revaluation. (That means what it sounds like: deliberately thinking your way through a problem to which you initially had a strong emotional reaction.)

So, here’s one potential answer. Can video games promote emotional intelligence?

YES.

Another, Better Answer

Let’s dig a little deeper.

First, researchers note that these students got better at recognizing their emotions in the short term. But, when retested 3 months later, they were no different from the control group. (The trend-line for the “cognitive revaluation” isn’t clear.)

Second, the status of the control group isn’t clear. (Drat that paywall!) Was it an active control group? That is, did they do something similar to a video game for 12 hours? Or, was it a “business as usual” control group: just a bunch of students in the same school who didn’t do anything special?

Of course, we’ll be more persuaded by an active control group than a BAU group.

Third, notice that this was a specially designed video game.

When I read the title of the research, my first thought was that researchers had identified a commercially available game that, when used or framed the right way, increased emotional intelligence.

That’s not what happened.

Instead, it seems, they created a lesson about emotional intelligence in the form of a video game.

So, here’s a different answer to our revised question. Can a lesson about emotional intelligence in the form of a video game influence Italian high-school students?

In the short term, YES–assuming the control group is active. But, in the longer term, it seems no.

Case Closed?

Given those caveats, should we give up this effort? Should we conclude that video games can benefit perceptual capabilities, but not emotional ones?

My own view is: let’s keep looking.

After all, these researchers did have some success. Their study wasn’t a home run, but they did get some positive results.

So, perhaps this game would work better if …

…students played over a longer period of time, or

…it were played by younger students, or

…it were redesigned to include some cool new element.

After all, if we can help adolescents with their emotional self-regulation, that’s a real win. ESPECIALLY if we can do it by having them play a game they enjoy.

Simply put, I DON’T think we yet know the answer to this question. But, we DO have reason to believe that video games might be a promising avenue to continue investigating.

Why, and When, Does Music Interfere with Reading?
Andrew Watson

We all know that listening to music makes life better.

And, we teachers all know that listening to music while you study makes studying harder and less effective.

For instance, in this study, adults who read in silence scored more than 20% higher on a quiz about that reading passage than others who listened to music with lyrics.

Indeed. 20% higher. (You can read more about that study here.)

Even though we’ve seen this research finding many times, we might want a deeper understanding of this question.

For instance: are there particular points during reading that are particularly vulnerable to interference from music?

Answer #1: New Songs

To answer this question, researchers used eye-tracking technology to see how readers behaved with background music playing.

One answer that jumped out: the change from one song to the next interrupted fluent eye movements.

This finding, of course, makes intuitive sense.

When a new song comes on, we automatically perk up our ears. Even subliminally, we notice a change in our background circumstances. The attention we devote to that change makes it harder to attend to our reading.

The result: less fluent eye movements.

Professor Todd Rose (at Harvard’s Graduate School of Education) used to suggest that–if students insisted on listening to music–they should make a playlist of songs. Those songs should have no lyrics.

And, crucially, students should not press shuffle. They should, in other words, listen to those songs in the same order each time. Over time, students will habituate to those songs in that order, and be less distracted by the switch.

This research supports Rose’s suggestion.

Answer #2: Vocabulary

The second point at which music particularly distracts readers: when they face an unusual word. As the authors poetically put it:

“An irrelevant auditory signal may impair sublexical processing of low-frequency words during first-pass reading.”

“An irrelevant auditory signal” means “music,” and “low-frequency words” means “difficult vocabulary.”

So, if you were listening to music while you read that paragraph, you’d face particular difficulties. After all, it included several low-frequency words.

Based on this observation, I think we should worry more about homework that includes complex vocabulary–and, I’m guessing, even more so about homework that includes foreign-language vocabulary.

In other words: while listening to music is bad for reading comprehension, it’s especially bad for comprehension of passages with tricky vocab.

To Sum Up

We’ve always known that students make their cognitive lives harder when they listen to music during homework.

Now we have even more evidence showing when, and why.

An Exciting Event in Mindfulness Research
Andrew Watson

Let’s imagine a GREAT study on the benefits of mindfulness.

As school people, we’re happy that mindfulness might be helpful at home or at work, but we really want it to be helpful to students. So, we’d love for this study to take place at school.

We’d like the study to show that mindfulness changes mental processes. For instance, we’d love to know that it helps students feel less stress.

And, we’d like the research to look at brains as well as minds. That is: we’d like to have some fMRI data showing relevant changes in brain regions.

At the same time that students report they feel less stress (that’s the mind), we might see neural modulation typical of less stress (that’s the brain).*

Finally, the study’s methodology would hold up to scrutiny. It would, for instance, include a plausible control group. (I’ve written about problems with control groups, including this study about mindfulness.)

Lo and Behold

Sure enough, this study exists!

Working with 6th graders at a school outside Boston, Clemens Bauer randomly assigned half to a mindfulness program and half to a coding training program.

Both groups devoted 45 minutes, four times a week, to this effort for 8 weeks. And, by the way, students in both groups enjoyed this time equally. (So: AT LAST we’ve got a plausible and active control group.)

Bauer’s team had students fill out a stress survey before and after this 8-week stretch. (Sample question: “In the last month, how often have you been upset because of something that happened unexpectedly?”)

And, he performed fMRI scans on them before and after as well.

When looking at those scans, Bauer’s team had a specific prediction. High stress responses typically include elevated amygdala activation. Often, we can manage that stress response by using the prefrontal cortex–the part of the brain right behind your forehead.

If mindfulness helps manage stress, we would expect to see…

…greater connectivity between the prefrontal cortex and the amygdala, and

…concomitantly reduced activity in the amygdala.

That is, we’d be able to see that mindfulness strengthened the connection between self-control systems in the prefrontal cortex and the amygdala. In turn, this increase in self-control would help mitigate stress responses in the amygdala.

Of course, I’m offering a very simplified version of a fantastically complex neural story. Books have been written on these connections, and it’s not a blog-friendly kind of information.

Results, Please

If you’re a fan of mindfulness, you’re going to LOVE these results.

Students who practiced mindfulness reported less stress than those in the control group.

They showed higher levels of prefrontal cortex connectivity with the amygdala.

They showed lower levels of amygdala activity when they looked at angry faces.

So: both in their mental activity (reported stress level) and in the neural activity (in the amygdala, between the amygdala and the prefrontal cortex), eight weeks of mindfulness led to beneficial results for these students.

Technically speaking, that’s a home run.

What’s Next

First: to repeat, this study is powerful and persuasive. We can simply revel in its conclusions for a while.

Second: as teachers, we’re glad that student stress levels are lower. The next question is: do students learn more? We can assume they do, but we should measure as well. (To be clear: I think lower stress is an important goal on its own, whether or not it leads to more learning.)

Third: as the study’s authors acknowledge, the sample size here is relatively small. I hope they get funding to repeat it on a much larger scale.

As noted in this study, there’s a disappointing history in the world of mindfulness research. Small studies–often lacking random assignment or a control group–come to promising conclusions. But, the bigger the study–and the better the methodology–the smaller the results.

So: now that we’ve gotten strong effects with a randomized study and a plausible control group, I hope to see these same results at a much larger scale.

I might go sit quietly for a while, and try to clear my mind of extraneous thoughts.


* This sentence has been revised to read “neural modulation” rather than “neural behavior.” (9/18/19)

Brain Words: How the Science of Reading Informs Teaching by J. Richard Gentry and Gene P. Oullette
Rebecca Gotlieb

Far too many children are not learning to read well. New research about reading has not sufficiently informed teaching practices. In Brain Words: How the Science of Reading Informs Teaching, J. Richard Gentry and Gene P. Ouellette, expert reading researchers and education consultants, use the new science of reading to suggest ways to support students in becoming strong readers. They explain recent brain- and behavior-based findings about how kids learn to read.

Brain words, as used by the authors, are words for which students know the pronunciation, meaning, and spelling, such that they can read, write, and use the word correctly and comfortably. This book seeks to help instructors guide students in building brain words by offering ways to assess reading abilities as well as scientifically backed practices for teaching reading. The authors especially emphasize the overlooked importance of teaching spelling. They offer specific, practical tips for teaching reading in kindergarten through sixth grade, and conclude with advice for schools and parents about how to support students with dyslexia.

Learning to read does not happen automatically. In fact, reading is effortful and, as others such as Maryanne Wolf have explained, the brain’s distributed reading circuitry is not present at birth but rather develops with exposure to and instruction in reading. Gentry and Ouellette state that most teachers are not trained in effective literacy instruction practices, and many do not have access to science-based teaching resources. As such, the authors review best practices for teaching reading in light of current research.

As Daniel Willingham and other reading experts have argued also, Gentry and Ouellette state that using both phonics and whole-word approaches to teaching reading is more effective than relying on only one of these strategies. Phonics is necessary for building reading skills, while whole-word reading provides motivation for engaging in active reading. Spelling is a critical step on the road to reading with comprehension, and yet accountability assessments do not measure spelling competence. As a result, many schools do not have spelling curricula. The authors call for a spell-to-read approach to reading instruction. They offer reflective questions that teachers can consider to improve their reading instruction.

Gentry and Ouellette detail a quick and effective way to determine students’ developmental reading phase based on a carefully designed spelling test. Students’ performance on this test can be parsed into phases. The non-alphabetical phase involves children using shapes that might resemble letters but not writing in any recognizable form. The pre-alphabetical phase involves using letters but the letters the child writes do not systematically correspond to sounds. The partial alphabetical phase involves some matching between letters and spoken language. In the full alphabetical phase children spell with one letter to represent each sound. When children can spell nearly or completely correctly, they can begin to read independently. With an understanding of students’ reading and spelling abilities it is possible to optimally facilitate reading instruction.

The authors suggest a “listen first” approach to learning spelling and reading in which students first hear a word, then say the word, write the word, read it, and use it. In older grades a spelling pretest, which students correct themselves while reflecting about the reasons for their mistakes, is an effective teaching tool. The most important measure for improving students’ vocabulary and reading abilities is to support the students in reading more.

Between five and twenty percent of the population is affected by dyslexia. This reading disorder has a neurobiological and a genetic basis. People with dyslexia are not less intelligent nor are they less hard working. The authors explain common signs of dyslexia at different ages (e.g., abnormal spelling, trouble articulating words, or trouble with arbitrary sequences). Early identification of dyslexia is very important for helping these students learn to read and achieve academically. Gentry and Ouellette conclude with suggestions for how parents and schools can support students with dyslexia.

Brain Words will inform educators about recent advances in the science of learning while also offering practical and effective techniques for improving reading instruction. This book can help educators help more students learn to read well.

Gentry, J. R., & Ouellette, G. (2019). Brain words: How the science of reading informs teaching. Stenhouse Publishers.

Trying to Prove Yourself Wrong
Andrew Watson

You want the best research to inform your teaching. That’s why you’re reading this blog.

What’s the best way to be sure–or, as sure as you can reasonably be–that you’ve reached the most researchy conclusion?

For example: what should you do if you discover contradictory research?

That’s the problem that Blake Harvard faced over at Effortful Educator.

Here’s the story…

The Setup

Harvard teaches psychology to high school students. He knows A LOT about the mind and the brain. He’s careful to base his teaching practices on deep knowledge of research.

In fact, he even tries occasional experiments to study different teaching practices in a (relatively) controlled manner. In this post, for instance, he writes about his exploration of flexible classroom seating.

In brief, he knows his stuff.

Harvard’s conclusions, at times, challenge current trends. For instance: he describes himself as a relatively traditional teacher: more persuaded by research on direct instruction than by constructivist approaches.

You might not agree with those conclusions. But, if you read his blog, you’ll be impressed by his command of the research.

So, what did Harvard do when he came across research seeming to contradict his beliefs?

What if, for instance, a study headline says that students learn more from (“constructivist”) active learning than from a (direct-instruction-y) lecture?

Heck: the study was even highlighted in the Harvard Gazette. (To be clear: the Blake Harvard I’m writing about doesn’t work at Harvard, the university in Cambridge, MA.)

Key Lesson #1: Try to Prove Yourself Wrong

After a moment of understandable trepidation, Harvard forced himself to do what he tells his psychology students to do: confront their biases.

That is: Harvard (the teacher) thinks that the right kind of lecture will result in more learning than most active learning paradigms: exploratory discussions, for example, or projects.

When he found research that purported to show the opposite, he had a great opportunity: he could disconfirm his prior convictions.

This may be the very best strategy to achieve the goal at the top of this post: to base our teaching on excellent research.

If you think that strategy X will result in the most learning for your students, you should:

First: assume that someone has found contradictory evidence (someone always has), and

Second: actively seek out that contradictory evidence. (Try Google Scholar.)

When you find it, give that evidence a thoughtful read. You’ll end up facing one of a few options.

Option 1: the contradictory evidence is more persuasive than the evidence you’ve been following. As a result, you’ll be able to improve your teaching practice. That’s great news!

Option 2: the contradictory evidence isn’t very persuasive. As a result, you know you’ve been doing it right up to now. That’s great news!

Option 3: both evidence pools are equally convincing. Now you know that your former certainty isn’t supported by the best evidence. You can try out both approaches with your students. You’ll find the answer that works best in your context. That’s great news!

In any case, your scrupulous attempt to prove yourself wrong will lead to a better teaching result.

Key Lesson #2: Precise Definitions Really Matter

As it turns out, when Harvard tried to prove himself wrong by reviewing the research, he ended up focusing carefully on the study’s definition of “lecture” and “active learning.”

His ultimate conclusion–whether or not he changed his mind–came down to a very precise understanding of the specific teaching techniques used in those two classes.

For instance: if you read a study saying that “metacognition improves learning,” you should find out exactly what the researchers DID. What, precisely, was the metacognitive strategy that students employed?

And: does that technique make sense for you and your classroom?

Until we know the answers to those questions, we can’t know if this research makes sense in our specific classrooms.

A Final Point

You’ve noticed, I suspect, that I haven’t told you what (Blake) Harvard decided about Harvard (University’s) research.

Why?

Partly because I think you should read his post.

But also because the answer to that question–in my view–isn’t as important as these two broader conclusions.

Try to disprove your own beliefs.

Be sure you know exactly what happened in the research.

If you follow those two strategies, you can be increasingly certain that you’re following the best research-based advice around.

The result: your students will learn more.


What Helps After a Stressful Day? Mindfulness Apps or Digital Games?
Andrew Watson

In education research, TECHNOLOGY and MINDFULNESS exist in dramatically different realms.

The stereotypical technophile wants the very latest gizmo to connect with countless others as quickly as possible.

The stereotypical mindful-phile wants ancient traditions to help slow life down and disconnect from most everything.

The Venn diagram overlap between these two fields just isn’t very large.

So, what happens when we run a competition between them?

If we want to “recover” after a stressful day, is a mindfulness app more helpful than a digital game?

First Things First

As I’ve written before, we’re tempted to approach such questions as partisans.

That is:

If I’m on Team Mindfulness, I’m sure that the mindfulness app will be better (or that the study was badly designed).

If I’m on Team Tech, I’m sure that the digital game will promote recovery more effectively (if the research isn’t hideously biased).

Although those thoughts are entirely predictable, they’re probably not terribly helpful. If we really want to know the answer to the question, we should be aware of the biases we bring to this study.

My suggestion–as always–is to shift deliberately to a stance of curiosity. “What an intriguing question,” I push myself to say. “I wonder what the researchers will find. It could go either way, I suppose…”

An equally important point: the answer to the question will depend substantially on our definitions.

In this case: what exactly does “recovery” mean? (That’s why I keep putting it in quotation marks.)

For this study, researchers used two measurements.

First, they had participants fill out a survey of how tired or energetic they felt. So: “recovery” means “more energetic and less tired.”

Second, participants filled out a second survey covering four “aspects of recovery”:

Detachment–spending time not thinking about work

Relaxation

Mastery–the sense of gaining skills in something other than work

Control–the experience of having control within or over activities

In this study, then, participants “recover” better if they are energetic, detached from work, relaxed, and experiencing mastery and control.

That seems like a plausible definition–although, as I’ll note below, I’m not sure both teams are equally interested in all those outcomes.

The Studies, The “Answers”

Researchers did what you’d want them to do in order to answer these questions effectively.

In the first study, college students spent 15 minutes doing challenging arithmetic problems. Some of the students used a mindfulness app after this stressor, while others played the game Block! Hexa Puzzle. (A third group sat quietly, and had a fidget spinner handy if they wanted something to do.)

In the second study, researchers followed professionals coming home from a long/stressful day at work. For five days, these adults either used the mindfulness app or played the digital game. (No fidget spinners this time.)

What results did the researchers find?

Speaking precisely, they did get statistically significant results.

For the college students, the digital game led to higher energy levels on the first survey. However, there were no significant differences for the “recovery” survey of detachment, relaxation, and so forth.

For the adult professionals, there were no statistically significant results to report. The researchers argue that the digital game helped on the recovery survey increasingly as the week went along, whereas the meditation app helped less. (I’m sure that’s mathematically true, but the graph isn’t very compelling.)

Interpretations

How do we interpret these results?

If I’m on Team Tech, I’d read this study and say: Look! The digital game helped more! Take that!

If I’m on Team Mindfulness, I’d read this study and say: The differences were barely meaningful! And–they measured things our team doesn’t even care about! Bah!

But, I’m not on those teams. I’m on Team Curious. Here’s what I say:

In this research paradigm, both a mindfulness app and a digital game were (more or less) equally effective in helping adults recover after mental stress.

I mean, yes, there were minor differences. But there were A LOT more similarities.

For that reason, we don’t really need to push people one way or another. If a college student wants to recover through mindfulness–that’s great! If they want to recover by playing a digital game–that’s great! Either path should be helpful.

By switching from partisanship (“I’m sure THIS is correct”) to curiosity (“I wonder what we’ll learn here–so many possibilities are plausible!”), we can discover more useful and more honest interpretations of the research we discover.

A Final Note

Because this study works with college students and adults, I myself wouldn’t extrapolate to draw conclusions about younger students–especially much younger students.

It’s possible that “both work equally well” applies to–say–3rd graders. But, at this point, I don’t know of a research answer to that question.

My guess is: as is so often true, it will depend on the 3rd grader in question.


Flipping the Classroom: Asking the Right Question
Andrew Watson

When teachers hear about an intriguing new approach, like–say–“flipping the classroom,” we’re inclined to ask: “but does it work?”

Let me propose a different question: “under what circumstances does it work?”

After all, we should assume that many teaching techniques work for this teacher instructing these students in this topic. Alas, those same techniques might not work for that teacher teaching those students this other topic.

So, ask not “does flipping the classroom work?” Instead, ask “does flipping the classroom help seventh graders in Germany learn three basic algebraic principles?”

That question might sound obscure. (Okay, I’m sure it sounds obscure.)

But: research can answer that second question. It can answer the first only by answering the second dozens (or hundreds) of different ways.

So, Does It?

Here’s a very particular example. Doctors in Finland have to write very particular kinds of insurance certificates. Therefore, Finnish medical schools have to teach future doctors to write them.

So our question is: “Does flipping the classroom help Finnish medical students learn to write insurance certificates?”

To answer that question, researchers did everything you’d want them to do. They had one professor teach the lecture-only version of that skill. The med students then practiced at home.

For a different group of med students, the professor created a short video for students to watch at home. And, they practiced the skill in class with the professor’s guidance.

Which group learned better?

The Envelope, Please

The flipped classroom group learned better. A LOT BETTER. The Cohen’s d value was 2.85. (I’m typically delighted by a d value of 0.50 or higher. I can’t remember seeing another 2.85.)
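
For anyone who wants the math: Cohen’s d is simply the difference between the two group means, divided by the pooled standard deviation. Here’s a quick sketch of the computation, using made-up scores (I don’t have the study’s raw data):

```python
# Cohen's d: difference of group means over the pooled standard deviation.
# The scores below are invented purely to demonstrate the computation.
import statistics

def cohens_d(group_a, group_b):
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)   # sample variances
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

flipped = [88, 92, 85, 90, 87]   # hypothetical certificate-writing scores
lecture = [78, 82, 75, 80, 77]
print(round(cohens_d(flipped, lecture), 2))  # a d this large means the
                                             # score distributions barely overlap
```

A d of 2.85 means the average flipped-classroom student outscored the average lecture student by nearly three pooled standard deviations; the two score distributions barely overlap.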

So, clearly all teachers should start flipping the classroom–right?

NO WE SHOULD NOT.

This study showed that Finnish med students learned certificate writing better this way.

But, this is a niche-ey topic indeed.

These are fourth year med students. They’re nearing the end of a highly technical education. They’re as good at school as any students on the planet.

Also, they’re learning a discrete skill. I don’t know much about Finnish medical insurance, but I’m guessing it’s quite a distinct genre. The video covering this skill lasted four-and-one-half minutes.

In other words: if you’re teaching very advanced students a very narrow topic, then this study might encourage you to flip the classroom.

But, if you’re teaching beginners, or you’re teaching complex and abstract material, you might want to find other research before trying out this technique.

For instance: this study of students learning epidemiology showed that flipping the classroom made essentially no difference.

Final Thoughts

I have a research-adjacent (although not research-supported) opinion about flipping the classroom.

As always, I think the key variable is working memory (WM). The headline is: teachers should have students do the heavy WM work in the classroom.

So: I guess that the basic principles of insurance certificate writing are easy to understand. But, applying them to specific circumstances can be more challenging.

That is: the application takes more WM. For that reason, watching a video at home and practicing in class with the prof makes sense.

In the case of–say–analysis of literature, those demands are reversed. Students can read stories quite effectively on their own. So, that should be the homework. But, analyzing that literature requires lots of complex working memory. That sort of discussion should happen in class, with the teacher, and not online.

I’ve never seen research consider flipped classrooms from a WM perspective. But, that framework seems to offer reasonable guidelines–especially if you can’t find research that matches your situation.

After I drafted the post above, I found this recent meta-analysis. The headline is that it found modest benefits to flipping the classroom, but that they were subject-specific. Alas, the abstract doesn’t say which disciplines do and don’t benefit. I hope it becomes public soon, so we can find out!