When Does Technology Distract Students? The Benefits of Research that Contradicts My Beliefs
Andrew Watson

I spoke with several hundred students last week about research-based study strategies.

As always, students were fascinated to hear about psychology and neuroscience research: for instance, the benefits of retrieval practice.

And, as always, they did not love my alarming news about multi-tasking. My advice goes like this:

“If you want to study less and learn more, do one thing at a time.”

No insta-snap-gram-tweet-flix-chat-tok while you’re studying. One thing at a time.

Since that talk, I’ve found some interesting research about the LIMITS of that advice, so I thought I’d share it here.

Tech Problems, Tech Benefits

Our reasons to worry about technology use during class seem perfectly obvious. If I am paying attention to my tweets, I am not paying attention to my academic work.

Divided attention = less learning. Obviously.

At the same time, we can easily see ways that technology benefits learning.

If — during a lecture — students text one another to reinforce their understanding of the material (“What did the prof just say?”), they might solidify their learning.

If they look up complementary information on the interwebs, their quest might boost their comprehension. (I’ve been surprised how often my students want to do this in class, and I occasionally allow them to do so.)

So, we need a more precise question than “is technology good or bad?” We need to know — under what precise circumstances does it help? Or hurt?

Technology and Higher Order Thinking

For instance: does off-topic texting during a lecture interfere with both “lower order” and “higher order” thinking, as defined by Bloom?

And, by the way, what role does note-taking play?

A study from 2018 explores this question.

The details, of course, get complicated, but the short version goes like this. Students watched a 24 minute lecture about psychiatric diagnoses: PTSD, ADHD, OCD, and so forth. They also took notes.

Some students received and answered off-topic texts during the lecture — one about every 2 minutes.

After the lecture, students took a test.

Some of those test questions focused simply on recalling details: “How long must an individual display symptoms of PTSD in order to be diagnosed?”

The researchers designed these questions to measure knowledge and comprehension — that is, “Bloom’s level 1 & 2.”

Four questions, however, focused on deeper understanding: “Compare PTSD and ADHD. How do these disorders differ? Are there ways in which they are similar?”

That is: these questions aimed to measure application and analysis: Bloom’s levels 3 & 4.

So: what effect did the OFF-TOPIC TEXTS have on the students’ NOTES, and on their UNDERSTANDING?

The Envelope Please

The researchers’ results surprised them — and certainly surprised me.

Unsurprisingly, students distracted by texts took less complete notes.

And, also unsurprisingly, they did substantially less well on the factual questions. Texters averaged a 62 on those questions, while non-texters averaged a 71.  (If you speak stats, the Cohen’s d was 0.64. That’s an impressively large difference, at least to me.)
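(A quick aside for those who speak stats: Cohen’s d is just the gap between the two group averages divided by the pooled standard deviation. The standard deviation below is my back-of-the-envelope inference from the reported means and the reported d, not a figure taken from the paper.)

\[
d = \frac{M_{\text{non-texters}} - M_{\text{texters}}}{SD_{\text{pooled}}}
\qquad\Longrightarrow\qquad
0.64 \approx \frac{71 - 62}{SD_{\text{pooled}}}
\qquad\Longrightarrow\qquad
SD_{\text{pooled}} \approx 14 \text{ points}
\]

By the usual rule of thumb, a d near 0.5 counts as a medium effect and 0.8 as large, so 0.64 sits solidly on the “substantial” end of the scale.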

Here’s the surprise: researchers did NOT find a statistically significant difference between students’ scores on the application and analysis questions.

How do we explain this finding?

First: let’s admit the possibility that texting distractions do interfere with factual recall but not analysis.

Honestly, I would not have anticipated that finding, but it could be true.

Second: perhaps the timing matters. That is: these students took the test immediately after the lecture. Perhaps application and analysis — unlike mere factual recall — require more time for steeping.

That is, if the “higher order thinking skills” had been tested the next day, perhaps we would have seen a difference in those scores.

Third: perhaps the number of questions mattered. Because the researchers asked only 4 application/analysis questions, they might not have had enough data to discern a difference.

Perhaps a greater number of questions would have revealed a difference.

The Big Picture(s)

Based on this research, will I tell students “it’s okay to text during lectures”?

NO I WILL NOT.

Here’s why:

First, facts matter. If off-topic texting interferes with factual learning, that finding itself means that texting during lectures is bad.

Second, taking notes properly (almost certainly) matters. If texting  interferes with good note-taking, that finding itself should dissuade students from doing so.

Third, I’m willing to believe that texting doesn’t interfere with application/analysis, but only if other studies — with more questions and later tests — consistently demonstrate that result.

Another point also jumps out at me from this research. This study contradicts my firmly held belief that multitasking vexes learning.

I genuinely believe that IT’S A GOOD THING when research contradicts my firmly held beliefs.

If research never contradicted my beliefs, then I would never learn anything from it.

In fact, I would never need to look at research because it shows me only what I already know.

Research might prove most useful to us when it contradicts our beliefs.

Who knows, maybe I’ll go back to those students and update my advice…


Waite, B. M., Lindberg, R., Ernst, B., Bowman, L. L., & Levine, L. E. (2018). Off-task multitasking, note-taking and lower- and higher-order classroom learning. Computers & Education, 120, 98-111.

Is “Cell Phone Addiction” Really a Thing?
Andrew Watson

I recently read a tweet asserting “the fact that cell phones are proven to be as addictive as drugs.”

Of course, people casually use the word “addictive” about all sorts of things: chocolate, massages, pumpkin-spice lattes. (No doubt somewhere Twitter is being described as “addictive.” My mother tells me that this blog is addictive.)

But all that casual language doesn’t add up to “proving the fact” that cell phones are “as addictive as drugs.” So I started wondering: has this “fact” been “proven”?

Good News, Bad News (Good News, Meh News)

Over the years I’ve adopted a simple strategy. When someone makes a factual claim about research, I ask for the research.

In this case, I simply asked the tweep for the research behind the claim.

Good news: He answered!

I’ve been amazed over the years how often people make “research-based” claims and then refuse to provide research to support them.

In this case, he did in fact point toward research on the topic. (I found one website claiming that 16% of adolescents, and 6.3% of the population, are addicted; alas, no sources cited. Happily, they do provide treatment…)

The tweep’s willingness to cite evidence enhances his credibility. Let’s check it out…

Bad news: hmm. “Evidence,” in this case, means “links to newspaper articles.” Generally speaking, USA Today and Vice.com aren’t the best places to find research. Research is published in journals. (Heck, I’ve even criticized the New York Times for its research credulity.)

So: the tweep’s credibility clicks down slightly.

Good news: All three of the links do, in fact, point to underlying research! I didn’t get a direct connection to the promised research, but I can keep digging to find it.

Credibility clicks back up.

Meh news: it turns out that all three articles point to the same underlying research. That is: I didn’t find three studies supporting the claim that “cell phones are proven to be as addictive as drugs”; I got one.

Now: one study isn’t nothing. But [checks notes] one isn’t three.

This Just In: Correlation Isn’t…

Given how much is riding on this one study, let’s check it out.

First off, we can see right there in the title that the study focuses on correlation. As you’ve no doubt heard dozens (thousands?) of times, “correlation isn’t causation.”

In this case, the authors asked 48 people questions about their cell-phone usage. Based on their answers, they categorized some of those people as “addicted.” And they then found brain differences between the “addicted” and “not addicted” people.

This quick summary leads to several concerns.

First: one study of 48 people doesn’t “prove a fact.” It might be an interesting data point, but that’s all.

Second: this study doesn’t claim to “prove a fact.” Using a questionnaire, it DEFINES some folks as addicted and others as not addicted.

Third: “brain differences” always seems like a big deal, but trust me — they might not be.

People who throw the javelin probably have a different “average muscular profile” than people who run marathons, because they’re using different muscles.

People who play the piano probably have different neural profiles than people who dance ballet, because they’re spending more time using THIS part of the brain than THAT part.

It seems likely people who score high on this “cell-phone addiction” questionnaire behave differently than those who don’t; so it’s not dramatically surprising that their brains are different.

Did phone use cause the brain differences, or did brain differences cause the phone use? We don’t know. (Because, “correlation isn’t …”)

Important to Note

One interesting point does jump out. The brain differences found by this research team do — in some ways — align with plausible predictions about addiction.

Now, the researchers don’t make strong claims here: the word “drugs” appears only once in the body of the study.

This finding isn’t a big surprise to me. Very roughly, the  brain differences have to do with “our ability to control what we pay attention to.” It’s not hugely surprising that heavy cell-phone users have brain differences there (and that people addicted to drugs do too).

Don’t Stop Now

If the tweep’s study doesn’t support the claim that “cell phones are proven to be addictive,” does other research?

To answer that question, I did a simple google search (“cell phone addiction real”). The first scholarly article that pops up says…not so much.

Here’s their summary:

Although the majority of research in the field declares that smartphones are addictive or takes the existence of smartphone addiction as granted, we did not find sufficient support from the addiction perspective to confirm the existence of smartphone addiction at this time.

The behaviors observed in the research could be better labeled as problematic or maladaptive smartphone use and their consequences do not meet the severity levels of those caused by addiction.

In brief: “maladaptive,” yes; “addictive,” no.

As I continued clicking, I found other skeptical reviews (for instance, here), and also found some that embrace the category (with some open questions, here).

Oh, and, by the way: “cell phone addiction” isn’t included in the DSM-5.

In other words, I think we can reasonably describe the category of “cell phone addiction” as an active scholarly debate. To be clear, this conclusion means we can’t reasonably describe it as “a proven fact.”

Why I Care

I am, believe it or not, open to the idea that cell phones might be addictive. If they are — if at some point research “proves that fact” — then this label might help us treat a real problem effectively.

My objection springs from another source entirely.

I worry when debate about measurable claims sinks to applying insulting labels.

If I think that asking students to memorize is a bad idea, I could study research on the topic. Or, I could dismiss it as “drill and kill.” The insulting label replaces the argument.

If I think that teacher talk is bad, I could study research on the topic. Or, I could mock it as “sage on the stage.” The dismissive label replaces the argument.

If I think that cell-phone usage is bad for teens, I could study research on the topic. Or, I could call it “an addiction.” The alarming label replaces the argument.

If we’re going to rely on research to make decisions about teaching and education (which is, after all, the GOAL of our organization) we should never replace research with labels.

Instead, let’s try something else. Let’s replace labels with research…


Horvath, J., Mundinger, C., Schmitgen, M. M., Wolf, N. D., Sambataro, F., Hirjak, D., … & Wolf, R. C. (2020). Structural and functional correlates of smartphone addiction. Addictive Behaviors, 105, 106334.

Panova, T., & Carbonell, X. (2018). Is smartphone addiction really an addiction? Journal of Behavioral Addictions, 7(2), 252-259.

Billieux, J., Maurage, P., Lopez-Fernandez, O., Kuss, D. J., & Griffiths, M. D. (2015). Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research. Current Addiction Reports, 2(2), 156-162.

Gutiérrez, J., Rodríguez de Fonseca, F., & Rubio, G. (2016). Cell phone addiction: A review. Frontiers in Psychiatry, 7, 175.

How to Capture Students’ Attention for Online Readings (tl;dr)
Andrew Watson

When do students focus while reading online?

When do they lose focus and let their minds wander?

Does the length of the passage being read influence the answer to these questions?

Several researchers, including Dr. Noah Forrin, have been exploring this topic, and have some interesting — and helpful! — answers.

Here’s the story:

The Research Setup

Happily, this question can be explored with well-established methods.

Forrin and his colleagues had 80 college students read articles from Wikipedia: topics included “the galaxy,” “Pompeii,” and “Sartre.” The passages were at a 9th grade reading level, and ran about 500 words.

Students read half of these passages in one-sentence chunks (averaging about 12 words). The other half they read in two-to-six sentence chunks (averaging 30 words).

As students read, Forrin interrupted them to ask if they were thinking about the reading, or thinking about any topic other than the Wikipedia passage.

And — here’s a key point — Forrin’s team asked if the students were mind-wandering intentionally or unintentionally. (Yes: in this field, “mind wander” is a verb.)

Why ask that odd question?

If students mind-wander intentionally, they and their teachers can (presumably) have some control over that problem.

However, if mind wandering is unintentional, then we all might struggle to fix this problem.

As the researchers say:

“intentional processes are presumably more likely to be changed by instructions and conscious strategies than are unintentional processes.”

So, what did Team Forrin find?

The Results

Sure enough, the passage length mattered.

More precisely, it mattered for unintentional mind-wandering (but not intentional). When reading the one-sentence chunks, students unintentionally mind-wandered 19% of the time; when reading the longer chunks, they did so 24% of the time.

Forrin’s team speculates that long passages act as a signal that students might find the passage uninteresting. In their grim summary, they write that

students’ increase in mind-wandering while reading educational texts may (1) emerge rapidly, (2) persist over time, (3) harm comprehension, and (4) be related to a decrease in interest.

Ugh.

Next Steps

So, what should we DO with this glum news?

First, as is always the case, I think teachers should use our experience to apply research wisely to our circumstances. For instance, if you don’t have your students do online readings, don’t worry about Forrin’s findings!

If, however, your students spend LOTS of time reading online, then his conclusions merit your attention.

Second, I think these findings add to an increasingly clear research conclusion: online reading doesn’t promote learning as much as old-fashioned ink-on-paper does.

To my mind Dr. Lalo Salmeron’s meta-analysis remains the most useful exploration of this question. He goes through important findings (no, the age of the reader doesn’t matter; no, we aren’t getting better at this skill) and interesting exceptions (prose fiction).

Third, Forrin himself offers a practical suggestion. If we MUST assign online readings, and we CAN break them down into smaller paragraphs, then maybe we should. His research suggests that doing so reduces the amount of unintentional mind-wandering.

Potential result: students concentrate better and learn more.

If he’s right, then Forrin’s research will have been well worth reading — long paragraphs and all.

Teachers vs Tech?: The Case for an Ed Tech Revolution by Daisy Christodoulou
Rebecca Gotlieb

The dramatically increased reliance on technology to support students’ learning during the COVID-19 pandemic has brought to light educators’ need to understand how technology can support learning and how educators can make thoughtful decisions around the use of technology in schools. Daisy Christodoulou, Director of Education at No More Marking, draws on principles of human cognition and evidence about effective teaching and learning practices to offer suggestions for how technology can help bring about necessary improvements in education. Her book, Teachers vs Tech?: The Case for an Ed Tech Revolution, will be of interest to teachers, school technology managers, and EdTech entrepreneurs.

Lack of understanding and misconceptions about how people learn interfere with building technologies that can improve education. One especially important feature of human cognition is that while we have the ability to store vast amounts of information in long-term memory, we can only hold and manipulate a few pieces of information at a time—i.e., we have limited working memory capacity. Too often in-person or technology-based educational tools and techniques are ineffective because they tax our working memory. There is increasing pushback against teaching students facts since students can “just google it.” Christodoulou argues, however, that without sufficient content knowledge students’ working memory capacity would be quickly overwhelmed, they would not be able to understand the things they look up online, and they would easily fall prey to false information. A common misconception about learning is that students have different “learning styles.” This learning myth assumes that, for example, some students learn better with visual information while others learn better with auditory information.

While it is the case that technology could substantially help improve education by personalizing learning, doing so with technology that teaches to different learning styles or lets students guide their own learning based on their interests and assessments of their competency is not effective. Rather, technology could effectively personalize learning by providing targeted feedback and assessments based on students’ objective performance. Good educational technology can break down complex skills into smaller parts, provide helpful examples, and help students practice those skills repeatedly.

Christodoulou warns that a challenge with smart devices is that it is so easy to become distracted from educational work while using such devices. She suggests reducing device use, changing settings to reduce distractions, and potentially moving towards devices designed for a single learning purpose so that there are fewer possible distractions.

Christodoulou suggests that the path forward for EdTech should be to combine teacher expertise, for example in motivating students and evaluating complex ideas, with the ability of tech to do things like scale lectures, engage students in spaced, repetitive practice, and consistently apply rules to make grading fair. Further, teachers should receive training in using new technologies. Before adopting new EdTech, educators should investigate how the technology personalizes the learning experience, how it builds long-term memory, how it supports attention, and what evidence there is about its efficacy.

Christodoulou wisely concludes that change in education will only be possible when it is grounded in the realities of how people learn and the objectives that society and students have for school, and when it honors the expertise of teachers. Still, she argues that technologies that adapt to students’ performance and provide opportunities to practice challenging component skills provide an example of useful educational technology. In this moment when understanding the possibilities of EdTech is so important, Teachers vs Tech is a helpful read.

Christodoulou, D. (2020). Teachers vs Tech?: The case for an ed tech revolution. Oxford University Press-Children.

“Students Switch Their Screens Every 19 Seconds.” Sort Of…
Andrew Watson

I recently read an arresting claim: when students have web browsers open, they switch their screens — on average — every 19 seconds.

Such data boggle the mind with horror.

As teachers, we want our students to devote sustained thought to complex topics. Nineteen seconds just isn’t going to cut it.

As an English teacher, I think nineteen minutes won’t reveal the full joys of a Shakespearean sonnet. In nineteen seconds, my students won’t have time to find the main verb. The Horror.

Although it took a moment’s sleuthing to track down this claim (see Coda below), this statement does in fact have research support. Back in 2014, Leo Yeykelis and his intrepid team undertook quite a complex study — including screen monitoring and measures of skin conductance! — to arrive at this conclusion.

Clearly, it’s time to panic.

Don’t Panic Just Yet

Whenever we hear shocking “research based” conclusions  — ESPECIALLY conclusions that confirm our prior beliefs — we should look hard at that underlying research.

We need to know exactly what the researchers did before we jump to big conclusions about their work.

If you click the link above, you can read Yeykelis’s study. When I did, two points jumped out at me very quickly:

First: this study draws its conclusions based on research into 10 college undergraduates. TEN.

Now, researchers have very good reasons to run studies with only ten people in them. Research is pricey and time consuming, so it makes sense to do a small-ish study before ramping up to study hundreds or thousands of people.

In other words, they didn’t do anything wrong.

However, for the same reason you wouldn’t take a medicine that had been tested on only ten people, you shouldn’t make dramatic changes to your classroom based on that research.

Second: because this study has so few participants, we want to know what other studies have found on this topic.

Regular readers know: my go-to sites to answer that question are Scite.ai and ConnectedPapers.com.

When I plugged Yeykelis’s study into those search engines, I found some support — but not lots-n-lots. In other words: this field simply hasn’t been studied very much.

For instance: the ConnectedPapers grid doesn’t highlight any related research after 2017. (That finding doesn’t mean that no research has been done after that date, but that it hasn’t reached prominence yet.)

Don’t Panic; Do Respond

For all these reasons, I don’t think we should get too focused on “19 seconds.”

At the same time, the other studies highlighted by Scite.ai and ConnectedPapers.com do point consistently in the same direction: screen switching really is a thing.

A bad thing.

We don’t know exactly how prevalent the problem is, but it’s not isolated; it’s not trivial.

For that reason, I think we should consider our own teacherly responsibilities here.

ESPECIALLY during online classes, we can remind students to turn off other technologies. “Remember our class rules: please turn your cell phones off. And, close all the other tabs in your browser except this one, and your homework.”

We can explicitly teach students to purge tech from their study areas and habits. “When you start your work, have your book open, your pencil on your paper, and your computer off.”

We can include those reminders in homework assignments. “This paragraph is due Wednesday before class. Remember: have your web browser closed before you start writing.”

We can look in the mirror. How often do we switch screens? What prompts us to do so? For instance, as I write, I’ve got 7 browser tabs open. I am — REALLY — using them all to write this post. Based on this experience, what realistic guidance can I give my students about multi-screening?

Of course, you’ll translate such suggestions to fit your own teaching circumstances. In every case, we’re making reasonable and measured changes.

The precise number “19” doesn’t really matter. The big picture — learning requires sustained attention, so we should foster it — really does.

Coda

I wrote last week, improbably, about the benefits of Twitter. Yes, it can be mean-spirited and dreadful, but it can also provide helpful insight and useful resources.

The day after I wrote that blog post, my own Twitter experience highlighted that lesson.

I first read about the “19 Seconds” research in Doug Lemov’s excellent new book Teaching in the Online Classroom. (Lemov’s Twitter handle: @Doug_Lemov).

Curious, I tweeted at him asking for the citation.

Before Lemov could answer, another scholar — Christian Bokhove (@cbokhove) — BOTH provided me with Yeykelis’s 2014 study AND linked to a follow-up study with even more dire results.

And then — because Twitter can be great — Lemov himself followed up to be sure I’d gotten the research I needed.

I don’t mean to downplay its flaws. But I’ve found Twitter mighty useful in learning about teaching, and about research; and in finding a kind of community.

It’s ironic but: I’m encouraging you to OPEN another web browser and check out Twitter.

Overcoming Potential Perils of Online Learning [Repost]
Andrew Watson

In June of 2019, I wrote about Dr. Rachael Blasiman’s research into the effect of typical distractions on online learning.

Given the current health climate, I thought her work might be especially helpful right now.

The key take-aways here:

First: (unsurprisingly) distractions interfere with online learning, and

Second: (crucially) we can do something about that.

In brief, we should start our online classes by teaching students how to learn online…

Here’s the post from June.


Online learning offers many tempting — almost irresistible — possibilities. Almost anyone can study almost anything from almost anywhere.

What’s not to love?

A tough-minded response to that optimistic question might be:

“Yes, anyone can study anything, but will they learn it?”

More precisely: “will they learn it roughly as well as they do in person?”

If the answer to that question is “no,” then it doesn’t really matter that they undertook all that study.

Rachael Blasiman and her team wanted to know if common at-home distractions interfere with online learning.

So: can I learn online while…

…watching a nature documentary?

…texting a friend?

…folding laundry?

…playing a video game?

…watching The Princess Bride?

Helpful Study, Helpful Answers

To answer this important and practical question, Blasiman’s team first had students watch an online lecture undistracted. They took a test on that lecture, to see how much they typically learn online with undivided attention.

Team Blasiman then had students watch 2 more online lectures, each one with a distractor present.

Some students had a casual conversation while watching. Others played a simple video game. And, yes, others watched a fencing scene from Princess Bride.

Did these distractions influence their ability to learn?

On average, these distractions lowered test scores by 25 percentage points.

That is: undistracted students averaged an 87% on post-video quizzes. Distracted students averaged a 62%.

Conversation and The Princess Bride were most distracting (they lowered scores by ~30%). The nature video was least distracting — but still lowered scores by 15%.

In case you’re wondering: men and women were equally muddled by these distractions.

Teaching Implications

In this case, knowledge may well help us win the battle.

Blasiman & Co. sensibly recommend that teachers share this study with their students, to emphasize the importance of working in a distraction-free environment.

And, they encourage students to make concrete plans to create — and to work in — those environments.

(This post, on “implementation intentions,” offers highly effective ways to encourage students to do so.)

I also think it’s helpful to think about this study in reverse. The BAD news is that distractions clearly hinder learning.

The GOOD news: in a distraction-free environment, students can indeed start to learn a good deal of information.

(Researchers didn’t measure how much they remembered a week or a month later, so we don’t know for sure. But: we’ve got confidence they had some initial success in encoding information.)

In other words: online classes might not be a panacea. But, under the right conditions, they might indeed benefit students who would not otherwise have an opportunity to learn.


I’ve just learned that both of Dr. Blasiman’s co-authors on this study were undergraduates at the time they did the work. That’s quite unusual in research world, and very admirable! [6-11-19]

Dangerous Fluency: Performance Isn’t Always Learning
Andrew Watson

How often do you have this experience?

Your students obviously understood yesterday’s topic. You know this because, say, their exit tickets revealed a high level of progress.

And yet, when you begin class today, they have seemingly forgotten everything you discussed, and everything they learned. Or, “learned.”

Teachers experience this frustration all the time: short-term performance doesn’t necessarily translate into long-term learning. (I’ve written before about Nick Soderstrom’s AWESOME review article considering this tension.)

A Telling Example

Last year, Glass and Kang published an important study about the effect of technology use during college lectures.

G&K let students use technology–laptops and cellphones–during half of the lectures in a psychology class, and forbade them during the other half.

In fact, they taught two identical sections of the same course, and enforced this ban in each class on alternating days. So: they could compare students to themselves in the ban- vs. no-ban classes.

The result headlines go like this:

This tech ban had NO EFFECT on immediate quizzes: students scored equally well on average in the ban- and the no-ban classes.

And yet, it had a SUBSTANTIAL effect on the term-end exam. Students averaged 7 points lower on material they had learned when tech was allowed than when it was forbidden.

And, crucially, students scored lower in no-ban classes even if they didn’t use technology themselves. Presumably, their classmates’ technology use distracted them.

This study suggests several conclusions. (I’ll mention a particularly counter-intuitive one at the end of this post.)

Most People Don’t Judge Their Own Learning Well

Because we’re teachers, we are–relatively speaking–experts on learning. Especially if you’re reading this blog (and attending Learning and the Brain conferences), you probably know a lot more about the complexities of learning than most people do.

And, you know more about learning than your students do.

That’s a (surprisingly) controversial statement. But, consider the students’ perspective in Glass and Kang’s psychology lecture.

They might reasonably say:

“Look: I scored equally well on the daily quizzes whether or not I was using technology. Clearly I understand material just fine when I’m texting my friends.

Have a little faith in me as a learner. I know when the professor is saying important things, and I focus then. And, I know when she’s gone off on a cute-but-unessential anecdote, and I use that time to check in with my people.”

Everything in that hypothetical statement is accurate, or at least plausible.

However, it lacks the long-term perspective. Their performance on short-term quizzes does not predict their long-term learning and understanding.

Because we have G&K’s research, and see the longer perspective, we know that their texting had a subtle, harmful effect on retention. However well they did right away, students just didn’t learn that information deeply.

For this reason–among many others–I think teachers should be confident in claiming our expertise. When our students say “I know I learn better this way,” we can use our best judgment in evaluating that claim.

At times–especially if they have a particular diagnosis–they might well be right.

At other times–especially if they want to watch YouTube while doing homework, or claim that their learning style requires that they do X instead of Y–you can offer strong guidance based on cognitive science research.

Counter-Intuitive Conclusion

I promised above I’d offer a surprising interpretation of Glass and Kang’s study. Here goes:

Because students did worse in the no-ban classes whether or not they used technology, the obvious conclusion is that we should utterly ban technology from our classrooms.

However, that conclusion misses an essential part of G&K’s methodology. They didn’t exactly ban technology use. In fact, they required technology use.

You read that right.

Those “immediate quizzes” you’ve been reading about? Students had to take them on some kind of electronic device: a laptop or a cell phone.

So, the study does NOT compare performance in a ban vs. a no-ban condition. It DOES compare performance in classes where technology was required only for the quizzes with classes where, in addition to the quizzes, students used it however they liked.

In other words: the problem wasn’t USE of technology. It was MISUSE of technology.

Here again, I think this insight brings us back to teacher judgment.

Should you ban technology from your classroom?

If the topic you’re covering doesn’t benefit from technology, then you have plenty of reasons to do so.

But, if you’ve got some great way to enhance instruction with technology–and you can monitor their technology use as G&K did–then you might get the same benefits that Glass and Kang’s students did when they took those quizzes on laptops.

Research guidance can shape our thinking. And, we should always blend it with our own experience and classroom skill.

Today’s Neuro-Nonsense: Reading Brainwaves in the Classroom
Andrew Watson

A friend recently sent me this link to a video story in the Wall Street Journal.

The 6 minute video discusses a program in China that uses a…welll…gizmo to measure students’ attention levels during class.

Everything I know about this particular program comes from the video, so it’s possible I’m missing important information. But, based on the video, I render this temporary verdict:

THIS IS BONKERS.

Electroencephalography simply doesn’t work this way. That is: the gizmo is far too crude (and subject to irrelevant stimuli) to produce meaningful measurements.

And, attention is far too complex a cognitive process to be measured by three contact points with the skull. (Not joking. THREE.)

If I gave you a stethoscope and said, “Here. Please measure that person’s mental health,” I suspect you’d refuse.

You’d say something like this: “I can gather data, and then pretend that those data say something meaningful about a really complex network of mental functions. But you and I both know that this tool simply can’t do what you’re asking it to do.”

You’d be right.

One More Point

Of course, there’s LOTS to say about the social policy implications of using this gizmo. This isn’t a blog about social policy, so I’ll leave that to other analysts.

I’m answering the very basic technical question: “can we really do that?” The answer, as far as I know, is “certainly not.”

 

h/t Scott MacClintic

Yes or No: “Video Games Can Promote Emotional Intelligence”?
Andrew Watson

Video games stir up passionate debates among teachers.

Some of your colleagues (probably) argue that video games curdle our students’ wits, addle their morality, and disrupt their attention. (For instance: here.)

Others (probably) argue that games are the future of education, and we should be getting on board as fast as we can. (For instance: here.)

As is so often the case, I think we should avoid sweeping generalizations. Instead, let’s look carefully at each specific research claim, and see what trends develop over time.

A recent example: “can videogames be used to promote emotional intelligence in teenagers”?

Recent Claims

That suggestion, in fact, is the title of a recent study based on research in Italy. (In other words: I’m not exaggerating the claim. Those are their very words.)

This study, alas, is behind a (steep!) pay wall, so I can’t be sure of all the specifics.

At the same time, the study design looks promising. Some high-school seniors played 12 hours of a video game called “EmotivaMenta,” designed to be an “experience-based learning tool” to promote emotional intelligence.

Compared to a control group, they improved at recognizing their own emotions. And, they got better at managing their emotions by cognitive revaluation. (That means what it sounds like: deliberately thinking your way through a problem to which you initially had a strong emotional reaction.)

So, here’s one potential answer. Can video games promote emotional intelligence?

YES.

Another, Better Answer

Let’s dig a little deeper.

First, researchers note that these students got better at recognizing their emotions in the short term. But, when retested 3 months later, they were no different from the control group. (The trend-line for the “cognitive revaluation” isn’t clear.)

Second, the status of the control group isn’t clear. (Drat that paywall!) Was it an active control group? That is, did they do something similar to a video game for 12 hours? Or, was it a “business as usual” control group: just a bunch of students in the same school who didn’t do anything special?

Of course, we’ll be more persuaded by an active control group than a BAU group.

Third, notice that this was a specially designed video game.

When I read the title of the research, my first thought was that researchers had identified a commercially available game that, when used or framed the right way, increased emotional intelligence.

That’s not what happened.

Instead, it seems, they created a lesson about emotional intelligence in the form of a video game.

So, here’s a different answer to our revised question. Can a lesson about emotional intelligence in the form of a video game influence Italian high-school students?

In the short term YES–assuming the control group is active. But, in the longer term, it seems no.

Case Closed?

Given those caveats, should we give up this effort? Should we conclude that video games can benefit perceptual capabilities, but not emotional ones?

My own view is: let’s keep looking.

After all, these researchers did have some success. Their study wasn’t a home run, but they did get some positive results.

So, perhaps this game would work better if …

…students played over a longer period of time, or

…it were played by younger students, or

…it were redesigned to include some cool new element.

After all, if we can help adolescents with their emotional self-regulation, that’s a real win. ESPECIALLY if we can do it by having them play a game they enjoy.

Simply put, I DON’T think we yet know the answer to this question. But, we DO have reason to believe that video games might be a promising avenue to continue investigating.

What Helps After a Stressful Day? Mindfulness Apps or Digital Games?
Andrew Watson

In education research, TECHNOLOGY and MINDFULNESS exist in dramatically different realms.

The stereotypical technophile wants the very latest gizmo to connect with countless others as quickly as possible.

The stereotypical mindful-phile wants ancient traditions to help slow life down and disconnect from most everything.

The Venn diagram overlap between these two fields just isn’t very large.

So, what happens when we run a competition between them?

If we want to “recover” after a stressful day, is a mindfulness app more helpful than a digital game?

First Things First

As I’ve written before, we’re tempted to approach such questions as partisans.

That is:

If I’m on Team Mindfulness, I’m sure that the mindfulness app will be better (or that the study was badly designed).

If I’m on Team Tech, I’m sure that the digital game will promote recovery more effectively (if the research isn’t hideously biased).

Although those thoughts are entirely predictable, they’re probably not terribly helpful. If we really want to know the answer to the question, we should be aware of the biases we bring to this study.

My suggestion–as always–is to shift deliberately to a stance of curiosity. “What an intriguing question,” I push myself to say. “I wonder what the researchers will find. It could go either way, I suppose…”

An equally important point: the answer to the question will depend substantially on our definitions.

In this case: what exactly does “recovery” mean? (That’s why I keep putting it in quotation marks.)

For this study, researchers used two measurements.

First, they had participants fill out a survey of how tired or energetic they felt. So: “recovery” means “more energetic and less tired.”

Second, participants filled out a second survey covering four “aspects of recovery”:

Detachment–spending time not thinking about work

Relaxation

Mastery–the sense of gaining skills in something other than work

Control–the experience of having control within or over activities

In this study, then, participants “recover” better if they are energetic, detached from work, relaxed, and experiencing mastery and control.

That seems like a plausible definition–although, as I’ll note below, I’m not sure both teams are equally interested in all those outcomes.

The Studies, The “Answers”

Researchers did what you’d want them to do in order to answer these questions effectively.

In the first study, college students spent 15 minutes doing challenging arithmetic problems. Some of the students used a mindfulness app after this stressor, while others played the game Block! Hexa Puzzle. (A third group sat quietly, and had a fidget spinner handy if they wanted something to do.)

In the second study, researchers followed professionals coming home from a long/stressful day at work. For five days, these adults either used the mindfulness app or played the digital game. (No fidget spinners this time.)

What results did the researchers find?

Speaking precisely, they did get statistically significant results.

For the college students, the digital game led to higher energy levels on the first survey. However, there were no significant differences for the “recovery” survey of detachment, relaxation, and so forth.

For the adult professionals, there were no statistically significant results to report. The researchers argue that the digital game helped on the recovery survey increasingly as the week went along, whereas the meditation app helped less. (I’m sure that’s mathematically true, but the graph isn’t very compelling.)

Interpretations

How do we interpret these results?

If I’m on Team Tech, I’d read this study and say: Look! The digital game helped more! Take that!

If I’m on Team Mindfulness, I’d read this study and say: The differences were barely meaningful! And–they measured things our team doesn’t even care about! Bah!

But, I’m not on those teams. I’m on Team Curious. Here’s what I say:

In this research paradigm, both a mindfulness app and a digital game were (more or less) equally effective in helping adults recover after mental stress.

I mean, yes, there were minor differences. But there were A LOT more similarities.

For that reason, we don’t really need to push people one way or another. If a college student wants to recover through mindfulness–that’s great! If they want to recover by playing a digital game–that’s great! Either path should be helpful.

By switching from partisanship (“I’m sure THIS is correct”) to curiosity (“I wonder what we’ll learn here–so many possibilities are plausible!”), we can discover more useful and more honest interpretations of the research we discover.

A Final Note

Because this study works with college students and adults, I myself wouldn’t extrapolate to draw conclusions about younger students–especially much younger students.

It’s possible that “both work equally well” applies to–say–3rd graders. But, at this point, I don’t know of a research answer to that question.

My guess is: as is so often true, it will depend on the 3rd grader in question.