Andrew Watson – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

The Best Teaching Method? Depends on the Student…
Andrew Watson

Should teachers show students how to solve a problem? Should we model the right way to do a task?

Or, should we let students figure solutions out on their own?

This set of questions has gotten LOTS of attention over the years. Sadly, as can happen all too often, the answers have become polarized.

You’ll read (emphatic) teaching advice that we must let students discover answers and processes on their own.

You’ll read (passionate) teaching advice that we have to explain and guide them every step of the way.

How can we escape from this all-or-nothing debate?

Asking a Better Question

Here’s one escape hatch: ask a more precise, more helpful question.

In other words: the answer to the question “what’s the best way to teach my students X?” is “it depends on your students.”

More specifically, it depends on your students’ level of expertise.

Once we rethink our teaching from this perspective, a common-sensical framework quickly comes into focus.

“Beginners”–that is, students with little-to-no expertise–need lots of explicit instruction and guidance.

If we’re not there to shepherd them through the early stages, they’re likely to experience working-memory overload. (If you followed our series on working memory this summer, you know working memory overload is baaaaad.)

However, “experts”–that is, students who have gone beyond the foundations of the topic–can explore, invent, and discover on their own. In fact, they’re likely to be distracted by too much explanation.

That last sentence sounds very odd. Why would an “expert” be distracted by explanation?

Here’s why. If you understand a topic, and then listen to me explain it, you have to realign your understanding of it to match my explanation.

That realignment process takes up…you guessed it…working memory.

By the way: this sub-field of cognitive science has its own lingo to describe working memory in action. Right now I’m describing the expertise reversal effect: that is, teaching practices that benefit novices actually impede learning for experts.

An Example. Or Two.

In this study, researchers in Australia had students learn new procedures in geometry and algebra.

Beginners–those who didn’t yet understand much in these areas–benefited from examples showing how to solve the problems. That is: they did better than their beginner peers who didn’t get those example solutions.

However, experts–who understood much more in these areas–did not benefit from those examples. In fact, they might even have learned less.

Other researchers have found similar results for students studying Shakespeare.

One Final Point

If I’ve persuaded you that beginners need explicit instruction, whereas experts benefit from greater freedom to explore and discover, you’re likely to have this question:

How can I distinguish novices from experts?

That question deserves a post of its own. For the time being, I think the simplest answer is the most obvious: the teacher will know.

That is: if your teaching expertise says “these students are ready to struggle at this higher level,” then go for it. If your teaching expertise says “they really need more guided practice, more time with the scaffolds up,” then go that route instead.

We can get some guidance from psychology research in making these decisions. But, ultimately, we have to use our best judgment.

In Defense of Other-Than-Passionate Teaching
Andrew Watson

I’m reading Tom Sherrington’s The Learning Rainforest: Great Teaching in Real Classrooms as I travel. Like many of his readers, I’m spending most of my time thinking a) that’s splendidly put, and b) why did it take me so long to start reading this book? It’s been on my “must read” shelf forever…

In brief, I heartily recommend it.

Sherrington opens the second section of Learning Rainforest with a plea for passionate teaching:

“Teach the things that get you excited about your subject. Read that special poem that gets you fired up, show that fascinating maths puzzle with the neat solution, enthuse about the extraordinary story, or talk about that cool exploding watermelon video.” (Yes: Sherrington is British, so he writes “maths” not “math.”)

Much of me wants to agree with this advice. Certainly I try to follow this guidance in my own teaching.

In the classroom, I regularly taught “difficult” texts—from Woolf to Morrison to Hopkins—because they move me so much. (Hopkins’s line “the just man justices” still makes me shiver. Who knew “justice” could be a verb?)

And now that I do PD work with teachers, I’m always grateful to get feedback about my enthusiasm and verve.

In brief, I try to practice what Sherrington is preaching.

And Yet…

As I think about this advice, though, I can practice it but not endorse it.

Here’s why:

I think most teachers do our best work when we enter the classroom as our authentic selves.

That is: some teachers are indeed funny. They enliven their classes and their subject matter with puckish wit.

However, many people just aren’t funny. If I try to make my teaching funny because funny works for someone else, the falsity of that performance may well have dreadful results.

Other teachers have, say, a den-mothery warmth. They can soothe and comfort, and bathe their classrooms with gentle balm.

But: those of us who aren’t naturally soothing might not be able to pull off that act. The pretense would be more disconcerting than calming.

Still other teachers, as Sherrington suggests, are passionate, enthusiastic, and entertaining. Like Robin Williams in Dead Poets Society, they leap about on desks and declaim in Laurence Olivier voices.

Like Sherrington (I imagine), they love showing videos of exploding watermelons. They “get fired up.” They “enthuse.”

And yet, again: some teachers just aren’t like that. Arm waving and zealous emotion simply don’t come naturally. As before, faking a teaching style that isn’t my own could backfire disastrously. The only thing worse than fake-funny is fake-enthusiastic.

An Example

In graduate school, one of my best professors taught with an almost studied blandness.

He sat at his desk, looking up occasionally from his notes. While he didn’t read directly from them, he was clearly tracking his outline closely. (We could tell, because his text-only PowerPoint slides often matched what he said, word-for-word.)

He rarely modulated his voice, and never (that I recall) cracked a joke.

And yet, he was fascinating.

Here’s why. First, he had a knack for explaining complex ideas with clarity and rigor. Even the most opaque topics seemed conspicuously clear once he’d explained them.

Second, he had a technique for answering questions that I’d never seen before.

A student might ask: “What do we know about the impact of music lessons on very young children?”

He’d think for a minute, and then say:

“So, you’re asking if anyone has done a study where one group of three-year-old children had music lessons, and another group spent the same amount of time on an equally active task—maybe dance lessons.

And then, when we tested them on—let’s say—verbal fluency six months later, did those music lessons make any difference?

That’s an interesting question, and as far as I know, no one has done that study…”

In other words: he didn’t so much answer the question as describe how it might be answered by psychology research. (Of course, if such a study had been done, he’d tell us about it.)

After about a month, the questions in class started changing.

My classmates would raise their hands and ask, “Has anyone ever done a study where one group of six-year-olds told stories they made up, while another group read someone else’s story aloud…”

That is: we learned from this professor not only about various psychology topics, but also how to investigate psychology in the first place.

And, to repeat: there was nothing remotely enthusiastic about this class. And yet, this method was remarkably effective, and surprisingly compelling. I always looked forward to his lectures.

In truth, I can think of many excellent teachers whom you’d never describe as “passionate.”

Two Theories

So, if I can’t quite champion excitement as an essential teaching strategy, what would I offer in its stead?

As noted above, I think the first key is authenticity.

If you’re a funny teacher, be funny. If you’re awe-struck and enthusiastic, own that. But if you’re not, don’t try to fake it. Be yourself in the classroom, not a pretend version of another teacher.

The second key: aligning that authenticity with the deep purposes of education.

Here’s what I mean.

I think I’d be a terrible lawyer because, at my core, I hate conflict. My ethical obligation to advocate zealously on my client’s behalf would run smack into my deep desire for everyone to get along.

That is: my authentic self doesn’t really align with the deep purpose of lawyering.

However: teacherly enthusiasm certainly can align with our teacherly goals. We want students to love what they learn, and enthusiasm can go a long way to help them do so.

So too a sense of humor.

A den-mother’s warmth, likewise, might help students face academic rigors that would otherwise stress them out.

And, my professor’s deepest interest—his fascination with the design of psychology studies—lined up beautifully with his teaching goals. He wasn’t enthusiastic. But his authentic self absolutely helped us learn.

In Sum

Should you be worried if your teaching isn’t passionate? Not necessarily.

Should you worry if you’re not classroom-funny? Nope.

Do you need to answer all questions with hypothetical research designs? Heck no.

Should you worry if your authentic self doesn’t foster student growth and learning?

Absolutely.

Exploring the Nuances of Peer Feedback
Andrew Watson

Over at the Learning Scientists, Katie Marquardt digs into peer feedback.

On the one hand, we can see many reasons that peer feedback would be beneficial.

It means that students are doing more of the work than we are–and, as we know, “the one who does the work does the learning.”

And, the opportunity to give peer feedback provides students with the responsibility and autonomy we want to be teaching.

On the other hand, those benefits don’t always materialize.

As Marquardt writes:

my colleagues express skepticism about peer review, because of the poor quality of feedback students sometimes give each other, and the challenges of managing peer review activities in the lessons.

This is valid criticism, and I have seen these shortcomings in my own lessons, particularly when working with English language learners who may lack the writing skills to give their classmates good feedback.

If we can imagine good and bad sides to peer feedback, what does the research say?

What The Research Says…

If you read this blog often, you can predict what I’m about to say: we need a narrower question.

Surely the effects of peer feedback depend substantially on the peers, and the feedback.

Marquardt’s post does a great job exploring lots of specific research examples. For that reason, I encourage you to read it. As you do, ask yourself: which of the studies she describes best matches your students, and your methods for fostering peer feedback?

To take a compelling example: one study found that students who gave feedback improved their own second drafts of an assignment more than those who received feedback.

Crucially, this finding held true for the students who “commented more on the strength of macro-meaning and the weakness of micro-meaning” of the drafts they reviewed.

To decide whether or not this study applies to you, you’ll need to know what “micro-meaning” and “macro-meaning” actually mean.

And, you’ll have to decide if research done with college physics students writing up lab reports might reasonably apply to your students.

In other words: this topic is a great example of a broader principle. When we look for research to guide our teaching, we should be sure that the people and the specific methods in the research helpfully match our teaching work and our teaching world.

Even More Good News about Mindfulness
Andrew Watson

Last week, I described a small but persuasive study about the benefits of mindfulness.

This study combined techniques from both psychology and neuroscience to show that mindfulness really can help students manage stress.

And, it even had an active control group. Just what a research wonk would desire.

As I noted at the time, however, this study focused on stress and not on grades. 

Of course, stress is important. (Let me say that again. Stress is important.) But, as teachers, we probably care about grades too.

We’d love to see another study: one that includes information on topics other than stress. Like, say, learning.

We’d also be delighted if it were larger. Forty people is nice…but several hundred would be even more persuasive.

Today’s News

Sure enough, a just-published study focused on mindfulness and several academic measures:

Grades

Attendance

Standardized math and literacy tests

Number of suspensions

Yup: mindfulness correlated with more of the good stuff (higher grades and test scores) and less of the bad stuff (suspensions).

And, this study included 2000 students in grades 5-8.

This study is, in fact, the first to show strong connections between mindfulness and these academic measures.

A Reminder

We might be tempted to jump to a strong conclusion. If

Study #1: mindfulness interventions reduce stress, and

Study #2: higher mindfulness correlates with better academic outcomes,

then we’re tempted to conclude that

Mindfulness interventions lead to better academic outcomes.

But, as we remind ourselves daily:

Correlation is not causation.

Until we run a large study (with active controls and random assignment) which shows that students who practiced mindfulness ended up with more learning, we can’t be sure of that conclusion.

However, that’s an increasingly plausible possibility, given these two studies.

A Final Note

Both these studies were supervised by John Gabrieli, at MIT. He’ll be speaking at this fall’s Learning and the Brain conference. If you’d like to learn more about the connection between mindfulness and school, come join us (and Dr. Gabrieli) in Boston.


What (De)Motivates Struggling Math Students?
Andrew Watson

We want our students to learn. And: we want our students to want to learn.

So, the more we know about motivation, the better our schools will be.

Here’s one possibility: perhaps teachers’ beliefs about learning can motivate students. Or, sadly, demotivate them.

If that’s true, then we can un-de-motivate them — that is, we can MOTIVATE them — by realigning those beliefs.

Researchers in Germany wanted to explore this possibility.

Background Theory #1

Of course, psychologists have several theories about motivation.

In their work on Self-Determination Theory, for example, Edward Deci and Richard Ryan have argued that we’re motivated by a desire for three things:

Autonomy: that is, age-appropriate independence

Relatedness: that is, connection to other people

Competence: that is, the experience of effectiveness and even mastery

The German researchers focused particularly on the last of these: competence.

In schools, students probably feel competent when they get good grades. So, students who get bad grades need something else to feel some sense of effectiveness and mastery.

They might need a teacher who helps them see past grades to look at other parts of their development.

But, not all teachers will be able to see past grades. In particular, the researchers hypothesized that some teachers think success in math requires innate ability. If a student doesn’t have that innate ability, s/he just won’t learn very much math.

Teachers who focus on innate ability won’t bother to encourage students who get low grades.

But, teachers who don’t focus on innate ability will want to encourage students who get low grades. That encouragement might provide the feeling of competence that–according to Self-Determination Theory–provides motivation.

The Research, The Findings

To explore this causal chain, researchers investigated over 800 4th graders, taught by 56 different teachers across many different schools.

If their hypothesis is correct, then students with low grades should feel less motivated IF their teachers think math requires innate ability. But, they should feel more motivated IF their teachers think it doesn’t.

And, students with high grades should feel motivated NO MATTER their teachers’ beliefs. (After all, their high grades provide a feeling of competence–which motivates by itself.)

Sure enough, that’s what the researchers found.

Because of the research methods, the results show up in particularly opaque stats-y language, so I don’t have graphs to post or comprehensible numbers to cite.

But the simple version is: students who struggle in math felt less motivation IF their teachers believed in the importance of innate ability than if their teachers didn’t.

Background Theory #2

The researchers don’t use the word “mindset” here. But, of course, you can see mindset theory all over this work.

At the most obvious level: the belief that success in math requires “innate ability” is itself about as fixed a mindset as we can get.

Of course, on the other hand, teachers who believe that math success doesn’t require innate ability presumably think students can improve. That’s a growth mindset.

I mention this point because: you have no doubt seen many stories in the last few months claiming that mindset theory is all-but dead.

As you’ve seen on this blog before: I think mindset theory is often badly used. (No: inspiring posters ain’t enough.) But, properly understood, it can be a powerful force for good.

Here’s an example:

If teachers accept mindset theory, they’re less likely to think that success in math requires innate ability.

And, according to this research, that means their struggling students will feel higher levels of motivation.

To me, that sounds like an easy win.

Yes or No: “Video Games Can Promote Emotional Intelligence”?
Andrew Watson

Video games stir up passionate debates among teachers.

Some of your colleagues (probably) argue that video games curdle our students’ wits, addle their morality, and disrupt their attention. (For instance: here.)

Others (probably) argue that games are the future of education, and we should be getting on board as fast as we can. (For instance: here.)

As is so often the case, I think we should avoid sweeping generalizations. Instead, let’s look carefully at each specific research claim, and see what trends develop over time.

A recent example: “can videogames be used to promote emotional intelligence in teenagers”?

Recent Claims

That suggestion, in fact, is the title of a recent study based on research in Italy. (In other words: I’m not exaggerating the claim. Those are their very words.)

This study, alas, is behind a (steep!) pay wall, so I can’t be sure of all the specifics.

At the same time, the study design looks promising. Some high-school seniors played 12 hours of a video game called “EmotivaMenta,” designed to be an “experience-based learning tool” to promote emotional intelligence.

Compared to a control group, they improved at recognizing their own emotions. And, they got better at managing their emotions by cognitive revaluation. (That means what it sounds like: deliberately thinking your way through a problem to which you initially had a strong emotional reaction.)

So, here’s one potential answer. Can video games promote emotional intelligence?

YES.

Another, Better Answer

Let’s dig a little deeper.

First, researchers note that these students got better at recognizing their emotions in the short term. But, when retested 3 months later, they were no different from the control group. (The trend-line for the “cognitive revaluation” isn’t clear.)

Second, the status of the control group isn’t clear. (Drat that paywall!) Was it an active control group? That is, did they do something similar to a video game for 12 hours? Or, was it a “business as usual” control group: just a bunch of students in the same school who didn’t do anything special?

Of course, we’ll be more persuaded by an active control group than a BAU group.

Third, notice that this was a specially designed video game.

When I read the title of the research, my first thought was that researchers had identified a commercially available game that, when used or framed the right way, increased emotional intelligence.

That’s not what happened.

Instead, it seems, they created a lesson about emotional intelligence in the form of a video game.

So, here’s a different answer to our revised question. Can a lesson about emotional intelligence in the form of a video game influence Italian high-school students?

In the short term, YES–assuming the control group was active. But in the longer term, it seems, no.

Case Closed?

Given those caveats, should we give up this effort? Should we conclude that video games can benefit perceptual capabilities, but not emotional ones?

My own view is: let’s keep looking.

After all, these researchers did have some success. Their study wasn’t a home run, but they did get some positive results.

So, perhaps this game would work better if …

…students played over a longer period of time, or

…it were played by younger students, or

…it were redesigned to include some cool new element.

After all, if we can help adolescents with their emotional self-regulation, that’s a real win. ESPECIALLY if we can do it by having them play a game they enjoy.

Simply put, I DON’T think we yet know the answer to this question. But, we DO have reason to believe that video games might be a promising avenue to continue investigating.

Why, and When, Does Music Interfere with Reading?
Andrew Watson

We all know that listening to music makes life better.

And, we teachers all know that listening to music while you study makes studying harder and less effective.

For instance, in this study, adults who read in silence scored more than 20% higher on a quiz about that reading passage than others who listened to music with lyrics.

Indeed. 20% higher. (You can read more about that study here.)

Even though we’ve seen this research finding many times, we might want a deeper understanding of this question.

For instance: are there particular points during reading that are particularly vulnerable to interference from music?

Answer #1: New Songs

To answer this question, researchers used eye-tracking technology to see how readers behaved with background music playing.

One answer that jumped out: the change from one song to the next interrupted fluent eye movements.

This finding, of course, makes intuitive sense.

When a new song comes on, we automatically perk up our ears. Even subliminally, we notice a change in our background circumstances. The attention we devote to that change makes it harder to attend to our reading.

The result: less fluent eye movements.

Professor Todd Rose (at Harvard’s Graduate School of Education) used to suggest that–if students insisted on listening to music–they should make a playlist of songs. Those songs should have no lyrics.

And, crucially, students should not press shuffle. They should, in other words, listen to those songs in the same order each time. Over time, students will habituate to those songs in that order, and be less distracted by the switch.

This research supports Rose’s suggestion.

Answer #2: Vocabulary

The second time that music particularly distracts readers: when they face an unusual word. As the authors poetically put it:

“An irrelevant auditory signal may impair sublexical processing of low-frequency words during first-pass reading.”

“An irrelevant auditory signal” means “music,” and “low-frequency words” means “difficult vocabulary.”

So, if you were listening to music while you read that paragraph, you’d face particular difficulties. After all, it included several low-frequency words.

Based on this observation, I think we should worry more about homework that includes complex vocabulary–and, I’m guessing, even more so about homework that includes foreign-language vocabulary.

In other words: while listening to music is bad for reading comprehension, it’s especially bad for comprehension of passages with tricky vocab.

To Sum Up

We’ve always known that students make their cognitive lives harder when they listen to music during homework.

Now we have even more evidence showing when, and why.

An Exciting Event in Mindfulness Research
Andrew Watson

Let’s imagine a GREAT study on the benefits of mindfulness.

As school people, we’re happy that mindfulness might be helpful at home or at work, but we really want it to be helpful to students. So, we’d love for this study to take place at school.

We’d like the study to show that mindfulness changes mental processes. For instance, we’d love to know that it helps students feel less stress.

And, we’d like the research to look at brains as well as minds. That is: we’d like to have some fMRI data showing relevant changes in brain regions.

At the same time that students report they feel less stress (that’s the mind), we might see neural modulation typical of less stress (that’s the brain).*

Finally, the study’s methodology would hold up to scrutiny. It would, for instance, include a plausible control group. (I’ve written about problems with control groups, including this study about mindfulness.)

Lo and Behold

Sure enough, this study exists!

Working with 6th graders at a school outside Boston, Clemens Bauer randomly assigned half to a mindfulness program and half to a coding training program.

Both groups devoted 45 minutes, four times a week to this effort, for 8 weeks. And, by the way, students in both groups enjoyed this time equally. (So: AT LAST we’ve got a plausible and active control group.)

Bauer’s team had students fill out a stress survey before and after this 8-week stretch. (Sample question: “In the last month, how often have you been upset because of something that happened unexpectedly?”)

And, he performed fMRI scans on them before and after as well.

When looking at those scans, Bauer’s team had a specific prediction. High stress responses typically include elevated amygdala activation. Often, we can manage that stress response by using the prefrontal cortex–the part of the brain right behind your forehead.

If mindfulness helps manage stress, we would expect to see…

…greater connectivity between the prefrontal cortex and the amygdala, and

…concomitantly reduced activity in the amygdala.

That is, we’d be able to see that mindfulness strengthened connections between the self-control systems in the prefrontal cortex and the amygdala. In turn, this increase in self-control would help mitigate stress responses in the amygdala.

Of course, I’m offering a very simplified version of a fantastically complex neural story. Books have been written on these connections, and it’s not a blog-friendly kind of information.

Results, Please

If you’re a fan of mindfulness, you’re going to LOVE these results.

Students who practiced mindfulness reported less stress than those in the control group.

They showed higher levels of prefrontal cortex connectivity with the amygdala.

They showed lower levels of amygdala activity when they looked at angry faces.

So: both in their mental activity (reported stress level) and in the neural activity (in the amygdala, between the amygdala and the prefrontal cortex), eight weeks of mindfulness led to beneficial results for these students.

Technically speaking, that’s a home run.

What’s Next

First: to repeat, this study is powerful and persuasive. We can simply revel in its conclusions for a while.

Second: as teachers, we’re glad that student stress levels are lower. The next question is: do students learn more? We can assume they do, but we should measure as well. (To be clear: I think lower stress is an important goal on its own, whether or not it leads to more learning.)

Third: as the study’s authors acknowledge, the sample size here is relatively small. I hope they get funding to repeat it on a much larger scale.

As noted in this study, there’s a disappointing history in the world of mindfulness research. Small studies–often lacking random assignment or a control group–come to promising conclusions. But, the bigger the study–and the better the methodology–the smaller the results.

So: now that we’ve gotten strong effects with a randomized study and a plausible control group, I hope to see these same results at a much larger scale.

I might go sit quietly for a while, and try to clear my mind of extraneous thoughts.


* This sentence has been revised to read “neural modulation” rather than “neural behavior.” (9/18/19)

Trying to Prove Yourself Wrong
Andrew Watson

You want the best research to inform your teaching. That’s why you’re reading this blog.

What’s the best way to be sure–or, as sure as you can reasonably be–that you’ve reached the most researchy conclusion?

For example: what should you do if you discover contradictory research?

That’s the problem that Blake Harvard faced over at Effortful Educator.

Here’s the story…

The Setup

Harvard teaches psychology to high school students. He knows A LOT about the mind and the brain. He’s careful to base his teaching practices on deep knowledge of research.

In fact, he even tries occasional experiments to study different teaching practices in a (relatively) controlled manner. In this post, for instance, he writes about his exploration of flexible classroom seating.

In brief, he knows his stuff.

Harvard’s conclusions, at times, challenge current trends. For instance: he describes himself as a relatively traditional teacher: more persuaded by research on direct instruction than by constructivist approaches.

You might not agree with those conclusions. But, if you read his blog, you’ll be impressed by his command of the research.

So, what did Harvard do when he came across research seeming to contradict his beliefs?

What if, for instance, a study headline says that students learn more from (“constructivist”) active learning than from a (direct-instruction-y) lecture?

Heck: the study was even highlighted in the Harvard Gazette. (To be clear: the Blake Harvard I’m writing about doesn’t work at Harvard, the university in Cambridge, MA.)

Key Lesson #1: Try to Prove Yourself Wrong

After a moment of understandable trepidation, Harvard forced himself to do what he tells his psychology students to do: confront their biases.

That is: Harvard (the teacher) thinks that the right kind of lecture will result in more learning than most active learning paradigms: exploratory discussions, for example, or projects.

When he finds research that purports to show the opposite, he had a great opportunity: he could disconfirm his prior convictions.

This may be the very best strategy to achieve the goal at the top of this post: to base our teaching on excellent research.

If you think that strategy X will result in the most learning for your students, you should:

First: assume that someone has found contradictory evidence (someone always has), and

Second: actively seek out that contradictory evidence. (Try Google Scholar.)

When you find it, give that evidence a thoughtful read. You’ll end up facing one of a few options.

Option 1: the contradictory evidence is more persuasive than the evidence you’ve been following. As a result, you’ll be able to improve your teaching practice. That’s great news!

Option 2: the contradictory evidence isn’t very persuasive. As a result, you know you’ve been doing it right up to now. That’s great news!

Option 3: both evidence pools are equally convincing. Now you know that your former certainty isn’t supported by the best evidence. You can try out both approaches with your students. You’ll find the answer that works best in your context. That’s great news!

In any case, your scrupulous attempt to prove yourself wrong will lead to a better teaching result.

Key Lesson #2: Precise Definitions Really Matter

As it turns out, when Harvard tried to prove himself wrong by reviewing the research, he ended up focusing carefully on the study’s definitions of “lecture” and “active learning.”

His ultimate conclusion–whether or not he changed his mind–came down to a very precise understanding of the specific teaching techniques used in those two classes.

For instance: if you read a study saying that “metacognition improves learning,” you should find out exactly what the researchers DID. What, precisely, was the metacognitive strategy that students employed?

And: does that technique make sense for you and your classroom?

Until we know the answers to those questions, we can’t know if this research makes sense in our specific classrooms.

A Final Point

You’ve noticed, I suspect, that I haven’t told you what (Blake) Harvard decided about Harvard (University’s) research.

Why?

Partly because I think you should read his post.

But also because the answer to that question–in my view–isn’t as important as these two broader conclusions.

Try to disprove your own beliefs.

Be sure you know exactly what happened in the research.

If you follow those two strategies, you can be increasingly certain that you’re following the best research-based advice around.

The result: your students will learn more.


What Helps After a Stressful Day? Mindfulness Apps or Digital Games?
Andrew Watson
Andrew Watson

In education research, TECHNOLOGY and MINDFULNESS exist in dramatically different realms.

The stereotypical technophile wants the very latest gizmo to connect with countless others as quickly as possible.

The stereotypical mindful-phile wants ancient traditions to help slow life down and disconnect from most everything.

The Venn diagram overlap between these two fields just isn’t very large.

So, what happens when we run a competition between them?

If we want to “recover” after a stressful day, is a mindfulness app more helpful than a digital game?

First Things First

As I’ve written before, we’re tempted to approach such questions as partisans.

That is:

If I’m on Team Mindfulness, I’m sure that the mindfulness app will be better (or that the study was badly designed).

If I’m on Team Tech, I’m sure that the digital game will promote recovery more effectively (if the research isn’t hideously biased).

Although those thoughts are entirely predictable, they’re probably not terribly helpful. If we really want to know the answer to the question, we should be aware of the biases we bring to this study.

My suggestion–as always–is to shift deliberately to a stance of curiosity. “What an intriguing question,” I push myself to say. “I wonder what the researchers will find. It could go either way, I suppose…”

An equally important point: the answer to the question will depend substantially on our definitions.

In this case: what exactly does “recovery” mean? (That’s why I keep putting it in quotation marks.)

For this study, researchers used two measurements.

First, they had participants fill out a survey of how tired or energetic they felt. So: “recovery” means “more energetic and less tired.”

Second, participants filled out a survey covering four “aspects of recovery”:

Detachment–spending time not thinking about work

Relaxation

Mastery–the sense of gaining skills in something other than work

Control–the experience of having control within or over activities

In this study, then, participants “recover” better if they are energetic, detached from work, relaxed, and experiencing mastery and control.

That seems like a plausible definition–although, as I’ll note below, I’m not sure both teams are equally interested in all those outcomes.

The Studies, The “Answers”

Researchers did what you’d want them to do in order to answer these questions effectively.

In the first study, college students spent 15 minutes doing challenging arithmetic problems. Some of the students used a mindfulness app after this stressor, while others played the game Block! Hexa Puzzle. (A third group sat quietly, and had a fidget spinner handy if they wanted something to do.)

In the second study, researchers followed professionals coming home from a long/stressful day at work. For five days, these adults either used the mindfulness app or played the digital game. (No fidget spinners this time.)

What results did the researchers find?

Speaking precisely, they did get statistically significant results.

For the college students, the digital game led to higher energy levels on the first survey. However, there were no significant differences for the “recovery” survey of detachment, relaxation, and so forth.

For the adult professionals, there were no statistically significant results to report. The researchers argue that the digital game helped on the recovery survey increasingly as the week went along, whereas the meditation app helped less. (I’m sure that’s mathematically true, but the graph isn’t very compelling.)

Interpretations

How do we interpret these results?

If I’m on Team Tech, I’d read this study and say: Look! The digital game helped more! Take that!

If I’m on Team Mindfulness, I’d read this study and say: The differences were barely meaningful! And–they measured things our team doesn’t even care about! Bah!

But, I’m not on those teams. I’m on Team Curious. Here’s what I say:

In this research paradigm, both a mindfulness app and a digital game were (more or less) equally effective in helping adults recover after mental stress.

I mean, yes, there were minor differences. But there were A LOT more similarities.

For that reason, we don’t really need to push people one way or another. If a college student wants to recover through mindfulness–that’s great! If they want to recover by playing a digital game–that’s great! Either path should be helpful.

By switching from partisanship (“I’m sure THIS is correct”) to curiosity (“I wonder what we’ll learn here–so many possibilities are plausible!”), we can discover more useful and more honest interpretations of the research we discover.

A Final Note

Because this study works with college students and adults, I myself wouldn’t extrapolate to draw conclusions about younger students–especially much younger students.

It’s possible that “both work equally well” applies to–say–3rd graders. But, at this point, I don’t know of a research answer to that question.

My guess is: as is so often true, it will depend on the 3rd grader in question.