Chocolate and Cocoa Help You Learn, Right?
Andrew Watson


What’s not to love? The photo shows a mug of cocoa, with an already-nibbled chocolate bar in the background. Even better, the headline alerts us that both the cocoa and the chocolate “enhance cognitive abilities and memory.”

For once, this headline is not overselling the scientific article. In the abstract, the authors really do say

Although still at a preliminary stage, research investigating the relations between cocoa and cognition shows dose-dependent improvements in general cognition, attention, processing speed, and working memory.

WOW.

The authors even use the word “nutraceutical”–new to me–to emphasize that chocolate is both nutritious and pharmaceutically beneficial.

News that sounds this good can’t be true. Can it?

Maybe the News Really Is That Good

For their review, Valentina Socci’s team assembles a solid list of articles touting the physical benefits of cocoa flavanols: compared to control groups, those who have chocolate or cocoa over several days or weeks show improvements in blood pressure, insulin resistance, and brain blood flow.

They also show exciting changes in various kinds of brain activity. One study, looking at a particular measure of brain activity (SSVEP), showed

changes in SSVEP average amplitude and phase across several posterior parietal and centro-frontal sites that indicated an increased neural efficiency in response to the working memory task.

Increased neural efficiency on a working memory task! Now you’ve got my attention…

Then Again, Maybe Not…

All that chocolate may have changed SSVEP average amplitude and phase. However, as teachers, we don’t really care about that: we care about learning. Did this “increase in neural efficiency” actually improve working memory?

Nope.

Similarly, another study showed that chocolate improved neural activity “in various brain regions in response to an attention switching task.”

But, that improved neural activity didn’t make them any better at switching attention.

In fact, of the six studies that focus specifically on one-time doses (not weeks-long doses), two showed no meaningful cognitive differences for those who had chocolate/cocoa, and the others showed differences in some measures or some participants–but not in all.

In other words, the research is suggestive and interesting, but hardly persuasive.

Who Is Learning?

I suspect that most of the people reading this blog are in the world of PK-12 education. How many of the people being studied were PK-12 students?

None.

For the studies looking at one-time doses of cocoa, most participants were college students.

For the studies looking at daily shots, many (most?) of the participants were older than 55.

In fact, many of these studies focused on people with some kind of cognitive impairment: typically dementia.

Reasonable Conclusions

Based on the data gathered here, I think we can reasonably say that for older people–especially those with some cognitive problems–cocoa flavanols might have some physiological benefits (blood pressure, insulin levels), and might even offer some cognitive boosts as well.

That’s exciting and helpful if you teach people, and especially if you are taking care of someone, in that group. (If you’re looking after someone with dementia, by the way, don’t rely on a blog for medical advice: talk with a doctor.)

However, we have no good reason to think that chocolate offers cognitive benefits for PK-12 students. Perhaps it does–but this article simply doesn’t present direct evidence to support that conclusion.

At the same time, I am entirely willing to hypothesize that chocolate offers substantial emotional benefits. For this reason, S’mores will be served at the upcoming Learning and the Brain Conference…

Memory Training That Really (Sort of) Works
Andrew Watson


Imagine yourself following a route that you know quite well: perhaps your morning commute. You take your car out of your garage; drive past the Dunkin’ Donuts, past the old movie theater, past the grocery store; you park in your favorite spot, walk through the lobby, down the library corridor…

You can easily think of these places in order because you’ve followed this same path hundreds of times. Well, an ancient memory trick takes advantage of your well-rehearsed visual memory.

If you have–say–a list of words to memorize, you can take some time to associate each word with those places. For example, if you have to memorize the words “tomato, airplane, tuba,” you can create a vivid picture of a tomato splatted on your garage door, an airplane flying over the Dunkin’ Donuts, and a tuba band marching in front of the movie theater.

You can then recall those words simply by mentally following your morning commute to work.

Even if you have a very long list of words, this method still works; you can, after all, visualize many, many places along this familiar route.
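
If it helps to see the idea laid out mechanically, here’s a quick toy sketch (my own illustration in Python, not anything from the research): pair each word with the next landmark along your route, then “walk” the route to read the words back in order.

```python
# Toy illustration of the method of loci (not from the study itself):
# attach each word to a landmark on a familiar route, then recall in order.
route = ["garage", "Dunkin' Donuts", "movie theater", "grocery store",
         "parking spot", "lobby", "library corridor"]
words = ["tomato", "airplane", "tuba"]

# Encoding: each word gets a vivid image at the next landmark on the route.
memory_palace = list(zip(route, words))
for place, word in memory_palace:
    print(f"Picture a {word} at the {place}.")

# Recall: mentally retrace the route and read off the attached words.
recalled = [word for _, word in memory_palace]
print(recalled)  # ['tomato', 'airplane', 'tuba']
```

The point of the sketch is simply that the route supplies the ordering for free; the vivid images do the rest.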

The Research Questions:

This memory trick–called “the method of loci”–has been around for centuries. Memory champions typically win memory contests by using it. But, can just anyone do it? Do you need to be born with a special memory talent?

Martin Dresler’s research team answers some of these questions. He started by scanning the brains of memory champions while they did some memory feats, hoping to discern neural patterns associated with excellent memory.

He also scanned some non-memory experts as a baseline for comparison.

Sure enough, he found connectivity patterns that helped distinguish between these two groups.

Next, he trained those non-memory experts in two memory techniques. One group practiced the method-of-loci approach for 40 days, 30 minutes each day.

The other group used a well-established short-term memory exercise. (Perhaps you’ve heard of the n-back test.)

What did the researchers find?

The Research Answers:

First, the method of loci really helped. Those trained in this method more than doubled their ability to remember words on a list. (Those who did short-term memory training saw little more improvement than control subjects.)

Equally interesting: the method of loci training created the neural patterns that Dresler had found in the memory experts.

That is: this training paradigm BOTH helped participants remember more words AND changed their brain connectivity patterns.*

In other words: we have two really good reasons to believe that method of loci training helps people remember word lists.

The Inevitable Caveat

If you’ve read this blog for a while, you know I’m going to point out a downside sooner or later. That moment has arrived.

First, the method of loci helps students do something we don’t often ask them to do: remember lists of unrelated words. It’s a cool party trick, sure. But when do we ever really care whether our students can do such things?

For example: I suspect the method of loci could be used to help students learn all the elements in the periodic table in order. But–why would we want them to do that? Would such knowledge meaningfully improve their understanding of chemistry?

Second, notice the extraordinary amount of time the training took: 30 minutes a day for 40 days! Imagine what else you could do with those twenty hours.

So, I’m not exactly opposed to teaching the method of loci; I’m just unimpressed by it. The method requires lots of training time, and creates a benefit that doesn’t help very much.

If, by the way, you have a good use for this method, please let me know. I’d love to hear about its practical classroom uses.

_______________________________________________________

* Although it’s true that this training changed the brains of those who participated in it, it’s also true–as I’ve written before–that any activity repeated at length changes your brain. This finding is interesting, but not exactly surprising.

Bold Moves for Schools: How We Create Remarkable Learning Environments by Heidi Hayes Jacobs and Marie Hubley Alcock
Rebecca Gotlieb

Today’s learners have different needs than those of yesterday. Educators and policy makers, therefore, need to rethink optimal learning environments. Heidi Hayes Jacobs, founder and president of Curriculum Designers, and Marie Hubley Alcock, president of the education consulting company Learning System Associates, help educators and policy makers contemporize education spaces, curriculum, and pedagogical practices with their new book Bold Moves for Schools: How We Create Remarkable Learning Environments.

The authors argue for the need for updated learning principles, enumerate and explain the skills of the effective contemporary teacher, and describe options that have recently emerged for expanding learning environments. Jacobs and Alcock offer guidance about selecting and updating curricula and assessments, especially in light of the significant problems associated with current standardized assessments. They advocate for the benefits of more shared leadership in educational settings. Jacobs and Alcock offer a realistic and progressive vision for how the skilled educators of today can push education practice forward to help prepare students for tomorrow. They urge educators never to stop learning and developing. With the recommended readings and study guide questions in Bold Moves for Schools, this book can help educators do exactly that.

The authors argue that reformers can begin to effect change by considering education practices that are antiquated and should be eliminated, classical and ought to be preserved, and contemporary and ought to be formulated and expanded. For example, while we should discontinue thinking of students as empty vessels and teachers as disseminators of knowledge, we should think of students as budding creative and critical thinkers whom teachers are responsible for nurturing. Today’s learners need to develop skill in directing their own learning process, building social contracts, critiquing and producing media, innovating, and acting as global citizens. Teachers need to model these skills for their students and advocate for effective, contemporary learning practices. The authors offer bulleted lists of action steps to help teachers develop each of these skills.

In addition to changes in the teaching profession, changes must also be made to the content taught. Experiential learning, learning that takes place outside the classroom, and learning that is organized around a topic or issue rather than an academic subject can help energize students’ learning by allowing them to feel a sense of ownership and offering opportunities for personalization based on students’ needs.

Schools can update the way they think about learning space, teaching time, the grouping of learners, and the grouping of teachers to improve learning. For example, rather than grouping students based on age, students could be grouped by interest. Similarly, rather than grouping teachers based on the subjects they teach, groupings ought to be more dynamic, multi-dimensional, and collegial. The benefits of strong professional learning communities should be stressed to teachers. A lateral, collaborative leadership structure within schools and the education system might make more innovation possible. Making major changes in schools can be difficult. Jacobs and Alcock suggest setting up planning teams to help with changes, clarifying a school’s mission, and seeking feedback at multiple levels, including from students, before implementing changes.

According to the authors, one major issue that has stymied learning is our accountability system. The exaggerated focus on high-stakes tests makes teachers feel a lack of trust and respect, diminishes the appeal of the teaching profession, and discourages collaboration among teachers. It leads to untested, but valuable, subjects being eliminated or whittled down, and it stifles creative and critical thinking. Testing and accountability matter, but we need a radical shift in what assessments look like. They should be authentic, based on performance of real-world tasks and skills, focused on measuring innovation and student growth, extended over time, and informed by input from students.

We are leaving the industrial age for the information age. Learning environments need to change to keep up. With the help of Bold Moves for Schools, educators and policy makers can use the objective of improving learning as the starting point for modernizing learning environments.

 

Jacobs, H. H., & Alcock, M. H. (2017). Bold Moves for Schools: How We Create Remarkable Learning Environments. Alexandria, VA: ASCD.

Interrupting Skilled Students
Andrew Watson


Here’s a sentence that won’t surprise you: practice typically makes us more skilled at the activity we’re practicing.

Here’s a sentence that might surprise you: practice makes us more vulnerable to mistakes after an interruption.

So, for example, if my students have just learned how to solve for three variables with three equations, then an interruption will have some effect on them when they get back to work.

If, however, they have spent some time getting familiar with the process of solving for three variables with three equations, then an interruption will distract them even more.

Said a different way: an interruption may distract your relatively advanced students more than your less advanced students.

Counter-intuitive?

My first response to this research finding was straightforward puzzlement. Why are experienced students more distractible than neophytes?

As I’ve thought more about this study, I’ve had an idea. If I’m experienced at a step-by-step activity, then I’m probably not paying full attention to each step as I go through the process. After all, my experience lets me work almost by rote. In this case, an interruption is quite a problem, because I wasn’t really focused on my place in the list of steps.

However, if I’m a newbie, I’m likely to be focusing quite keenly on each step, and so–after a distraction–am likelier to remember where I left off.

Teaching Implications

In the first place, this study by Altmann and Hambrick is the only one I know of that reaches this conclusion. Until their results are replicated, we ought to be interested in, but not obsessed by, their findings.

Second, we should note that relative expertise does have occasional disadvantages. We shouldn’t assume that our accomplished students won’t be befuddled by a classroom interruption–in fact, they might be more so than their still-struggling peers.

Third, I for one will be on the lookout for this pattern in my own work. In theory at least, I’m the expert in my classroom, and so I might be more discombobulated than my students by a distraction during a rote task.

Given this research, I now know to turn to my least confident students for a reminder of where we were.

A Tale of Two Analyses
Lindsay Clements


For researchers and research-readers alike, the data analysis portion of a study is many things: complex, exciting, frustrating, intriguing, and sometimes even befuddling.

And, analytics are always on the move. With each new study, researchers are developing increasingly intricate and elegant ways to make meaning of their data. At the same time, powerful statistical software, like SPSS or Stata, is continuously expanding its capability to process such sophisticated research designs.

Certainly, many long hours go into choosing a study’s analytic approach. Researchers must develop and refine their hypotheses; organize their data in such a way that statistical software can read it; and choose a statistical method (i.e., a mathematical approach) to test their research questions.

That last part about choosing a statistical method is where things can get tricky. In general, different statistical methods are not simply multiple ways of doing the same thing. Whereas something like a division problem may use different techniques (e.g., long division, trial-and-error) to get the same result, different statistical methods can analyze the same data yet produce differing, and even contradictory, results.

Differences in design: A little goes a long way

 Just as French philosopher Jean-Paul Sartre liked to say “we are our choices,” in many ways our research results are our choices, too.

A study conducted by Burchinal & Clarke-Stewart illustrates this well. [1] These authors noticed that two different research teams had analyzed the same longitudinal data set, yet found (and published) substantially different results.

These two research teams analyzed data from the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care: a large, national study that followed the development of 1,364 children from six months of age. Both teams were also interested in the same question: what is the impact of mothers’ employment, and children’s subsequent nonmaternal child care, on children’s early cognitive growth?

The NICHD Early Child Care Researchers (Team 1) were first in line to test this question. After a series of analyses, this team concluded that the age at which children entered nonmaternal care, and the amount of time spent in such care, showed no relation to children’s cognitive performance up to age three. [2]

Next, Team 2 (Brooks-Gunn, Han, & Waldfogel, 2002) tested this same question. However, in contrast to Team 1, they concluded that mothers’ full-time employment during children’s first nine months was indeed associated with impaired cognitive functioning when the children were three years of age. [3]

Speaking different analytic languages

 The contradictory findings between these two research teams were not only curious, but also important to reconcile. After all, the difference between advising mothers of young children to work or not work is a big one. And, such a recommendation has implications for state and federal programs, such as Temporary Assistance for Needy Families, that assist young mothers in finding employment.

Burchinal & Clarke-Stewart therefore conducted a new, third study investigating how each team’s analytic design may have engendered the contradictory results.

Two approaches

First, Team 1 used a conservative, top-down analytic approach. This approach:

  • uses all available information, such as multiple outcome variables and data from all participants
  • begins with a general test of significant relations between variables and works its way down to more specific comparisons
  • helps researchers avoid exaggerating the significance of associations found when working with large data sets

Team 2, on the other hand, used an a priori comparison approach. This technique:

  • examines hypotheses and variable relations chosen by researchers before (a priori) data exploration
  • utilizes a small subset of participants and/or variables in order to conduct a small set of comparisons between explicitly chosen participants and/or variables
  • is helpful when theory or previous research strongly implies a relation between specific variables or constructs
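
To make the contrast concrete, here’s a minimal sketch in Python, using simulated data rather than the NICHD data, and standing in for neither team’s actual models: the top-down analysis starts with a general test across all participants, while the a priori approach runs a single, pre-chosen contrast on a small subset.

```python
# Simulated illustration of the two analytic approaches (not the real data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000

# Hypothetical variables: weekly hours of nonmaternal care in infancy,
# and a cognitive score at age three, generated with no built-in relation.
hours = rng.uniform(0, 45, size=n)
score = 100 + rng.normal(0, 15, size=n)

# Top-down: begin with an omnibus test over everyone.
r, p_all = stats.pearsonr(hours, score)

# A priori: compare only two pre-chosen subgroups (heavy care vs. almost none),
# using a deliberately small number of children per cell.
heavy = score[hours > 40][:50]
light = score[hours < 5][:50]
t, p_sub = stats.ttest_ind(heavy, light)

# The two tests ask different questions of the same dataset,
# so they need not agree about what the data "say."
print(f"omnibus:          r = {r:+.3f}, p = {p_all:.3f}")
print(f"planned contrast: t = {t:+.2f}, p = {p_sub:.3f}")
```

Neither approach is wrong; they simply trade breadth for focus, which is exactly the tension Burchinal & Clarke-Stewart set out to reconcile.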

Thus, it seemed likely that investigating a smaller group of participants, and analyzing a smaller set of outcome data, contributed to Team 2’s finding of a relation between maternal employment and children’s cognitive growth. On the other hand, utilizing the full set of study participants, and analyzing all possible child outcome data, seemed to result in Team 1’s lack of such a finding.

To confirm this hypothesis, Burchinal & Clarke-Stewart analyzed the same participants and variables that Team 2 did; but, they used the top-down approach this time. The result of these new analyses? No significant findings.

The authors therefore reported Team 1’s findings—that is: it doesn’t hurt young children for their mothers to get a job—as being a more reliable take-away.

A cautionary tale

 It is important to note that both the top-down approach and the a priori comparison approach are well-respected and well-established analytic techniques. And, as with all analytic techniques, each has strengths, weaknesses, and research questions for which its use is optimal.

But a study such as the one conducted by Burchinal & Clarke-Stewart provides an important cautionary tale. That is, when we, as consumers of research findings, draw conclusions from empirical work, it is important to remain attentive to the type of analyses that were used to engender such claims.

Of course, we probably won’t all end up being experts in all areas of analytic approach. But perhaps a good rule of thumb is that when we see a small amount of data being used to make big claims, it’s best to take a second look, get a second opinion, or see if the study has been replicated a second time.

References

[1] Burchinal, M.R. & Clarke-Stewart, K.A. (2007). Maternal employment and child cognitive outcomes: The importance of analytic approach. Developmental Psychology, 43, 1140-1155.

[2] National Institute of Child Health and Human Development Early Child Care Research Network. (2000). The relation of child care to cognitive and language development. Child Development, 71, 960–980.

[3] Brooks-Gunn, J., Han, W.J., & Waldfogel, J. (2002). Maternal employment and child cognitive outcomes in the first three years of life: The NICHD Study of Early Child Care. Child Development, 73, 1052–1072.

Dangerous Authenticity?
Andrew Watson


Here’s an odd question: is it ever a bad idea for teachers to be authentic?

In a recent study, Johnson and LaBelle surveyed students to discover the teacher behaviors that struck them as “authentic.”

By closely analyzing the students’ descriptions of “authentic” teachers, they came up with four broad categories. According to their findings, authentic teachers are…

Approachable (for example: they tell appropriate personal stories and jokes)

Passionate (they’re excited about their subject matter)

Attentive (they know their students’ names)

Capable (they stay on top of assignments)

Unsurprisingly, “inauthentic” teachers do the opposite (and, are Disrespectful to boot).

Johnson and LaBelle acknowledge that this research includes some methodological quirks.

In particular, paradoxically, the fact that students describe these behaviors as “authentic” doesn’t mean that they are authentically “authentic” for all teachers.

For example: “authentic” teachers are approachable, and approachable teachers tell jokes. But, what if you’re not a joker? Maybe your sense of humor is quieter than that. Or maybe, while you appreciate a good joke told by others, you’re just not comfortable telling them yourself.

Should you adopt “authentic” teacher behaviors even if they’re not authentic to you?

Zooming Out

This question–which Johnson and LaBelle raise but don’t answer–hovers over much of the research you’ll hear about at Learning and the Brain Conferences.

Let’s imagine that you come to the November LatB conference, which will focus on the intersection of teaching and technology. You might attend a session that warns about the distractions technology creates, and describes the attentional benefits that movement can provide.

On the one hand, this argument might make good sense to you. You know of several computer programs that might help your students, and you’re happy to know that they’ll be less distracted by technology if they’ve had a chance to move about recently.

On the other hand, as you listen to the speaker’s list of movement strategies (Have them move into small groups! Require students to change their seats every 20 minutes! Ask 5 students to write their answers on the board!), you might feel a growing dread.

Those strategies might seem like a good fit for the speaker. And, based on the fact that everyone around you is nodding energetically, you conclude they’re eager to give them a go.

But here’s the thing: that’s just not you. You simply can’t imagine directing your students about in some elaborate traffic-control exercise. You’re feeling a little embarrassed just thinking about it.

We’ve got good research showing the benefits of this particular teaching behavior. And, alas, that beneficial teaching behavior just doesn’t mesh with the persona you bring to the classroom.

So, what should you do?

Hard Questions, Tentative Answers

For starters, I think you should be suspicious of anyone who thinks this is an easy question.

On the one hand, research has powerful answers to lots of questions about good and bad teaching. On the other hand, research mostly looks at AVERAGES.

And here’s the thing: you are not average. Your students aren’t average either. Your school isn’t average.

You are an agglomeration of unique particulars, and some research-established average might not apply to you.

That hard truth goes double when the teaching practice under discussion runs counter to something deep in your personality.

Here’s the best answer I’ve got. In my view, you can decline particular teaching practices, but you shouldn’t ignore the broader topic within which those practices reside.

To go back to my “attention and movement” example: you can decide that you won’t rely on movement to focus your students. After all, that’s just not you.

But, you can’t overlook the topic of attention itself. There are MANY other teaching strategies you can use to foster attention, and–especially if you’re going to set this one strategy aside–you’ll need to be even more attentive and thoughtful about the other strategies that you have at hand.

Imagine a Venn diagram. One circle represents all the teaching practices that have research support. A second represents those that students find “authentic.” A third represents those that are, in fact, authentic to you.

Find the teaching practices that fit in all three of those circles–you’ve found the best place to be.

Regions or Networks, Take 2
Andrew Watson


Just yesterday, I posted some thoughts about “thinking both-ily”; that is, understanding that brain processing happens in both regions and networks.

Today, I found a Beebs video showing a remarkably detailed version of the neurons that make up brain networks.

You’re probably used to seeing images of the brain like the one above — one that emphasizes regions over networks. This video provides a useful counter-example — a way to visualize networks over regions.

If you can picture both together, you can get even better at thinking both-ily.

Brain Regions or Brain Networks?
Andrew Watson


Here’s an odd brain theory to start off your day:

Let’s assume that particular regions of your brain produce particular mental abilities or habits. For instance, let’s say that this part of your brain right here is the generosity center of the brain.

So, if I feel a bump on your head above the “generosity region” of the brain, I can assume that this part of your brain is unusually big, and that you are therefore unusually generous.

However bizarre this theory might sound, phrenology was quite popular in the early 1800s. Most often attributed to Franz Joseph Gall–and, of course, popularized and spread by others–this theory produced a mini-industry of diagnosing people’s characters by feeling the bumps on their heads.

The Larger Question

Phrenology is but the most extreme position in an ongoing debate.

On one side of this debate–the phrenology & co. side–we can think about mental activities taking place in very specific parts of the brain. We can say that the amygdala is the “fear center” of the brain, or that the hippocampus is the “memory center,” or the anterior cingulate cortex the “error-detection center.”

The other side of the debate argues that all brain functions take place in wide networks that spread across many parts of the brain. Memory isn’t just in the hippocampus: it includes the prefrontal cortex, and sensory regions of the neocortex, and the striatum, and the cerebellum…it happens all across the brain.

So, here’s the question: which side of the debate has it right?

A Very Specific Answer

I thought about this debate when I read a recent article about language areas in the brain. Here’s a one-paragraph summary of that article, written by the authors. (Don’t worry too much about the “BA” numbers; focus on the broader argument):

The interest in understanding how language is “localized” in the brain has existed for centuries. Departing from seven meta-analytic studies of functional magnetic resonance imaging activity during the performance of different language activities, it is proposed here that there are two different language networks in the brain: first, a language reception/understanding system, including a “core Wernicke’s area” involved in word recognition (BA21, BA22, BA41, and BA42), and a fringe or peripheral area (“extended Wernicke’s area:” BA20, BA37, BA38, BA39, and BA40) involved in language associations (associating words with other information); second, a language production system (“Broca’s complex:” BA44, BA45, and also BA46, BA47, partially BA6-mainly its mesial supplementary motor area-and extending toward the basal ganglia and the thalamus). This paper additionally proposes that the insula (BA13) plays a certain coordinating role in interconnecting these two brain language systems.

Got that? [I put those words in bold, btw.]

In brief, researchers argue that language requires two sets of neural networks. One network, “Wernicke’s area,” has both a core and a periphery. The other, “Broca’s,” is itself a complex. And, these two networks are coordinated by the insula.

(If you want to reread that paragraph now that you’ve seen the summary, it might make more sense.)

“Both-ily”

As I read that summary, I think the authors are saying that both theories about brain structure are partly true.

Understanding language takes place in Wernicke’s area–which is itself a pair of networks. Producing language takes place in Broca’s area–which is a complex. And those networks and complexes communicate through the insula.

In other words: specific mental functions take place in specific places, but those “places” are best thought of as interconnected networks.

In grad school, a discussion group I was in once debated the theories I outlined above. Our question was: “when we study brains, should we think locally or network-ily?” After an hour of heated discussion, we reached a firm conclusion: we should always think “both-ily.”

One More Famous Example

You probably know the story of Henry Molaison: a patient whose hippocampi were removed to cure his epilepsy.

The good news: his epilepsy was (largely) cured.

The tragic news: he could no longer form long-term declarative memories.

From H.M.’s example, we learned to think about long-term memory locally: clearly, the hippocampus is essential for creating new long-term declarative memories. After all, if you don’t have one, you can’t learn new things.

(This hypothesis has since been confirmed with a few other patients.)

But, from H.M.’s example, we also learned to think about long-term memory in networks. He didn’t learn things when told them, but he could learn new things.

For example: when asked how to get to the kitchen in his new house, he couldn’t answer. He just didn’t “know.” (That is: he didn’t know in a way that would allow him to explain the answer.)

But, when he wanted a cup of tea, he went to the kitchen and made one for himself. Clearly, he did “know.” (That is: he knew in a way that would allow him to get to the kitchen–as long as he didn’t have to explain how he got there.)

When thinking about H.M., should we think “locally” or “network-ily”? I say: think both-ily.

The Bigger Message

When you hear from self-proclaimed brain experts who tell you that “the wrinkle-bop is the juggling center of the brain,” beware.

If those “experts” go on to explain that this sentence is a crude shorthand for “the wrinkle-bop is a very important part of a complex network of areas involved in juggling,” then you’re okay.

But if those “experts” just stop there–in other words, if they really think “locally,” not “both-ily”–then you should be suspicious.

You might conclude that their teaching advice is valid, and decide to give it a try. But, don’t rely on their neuroscience expertise. They are, in effect, just reading the bumps on your skull…

Video Games and Empathy
Andrew Watson


Do violent video games reduce empathy?

If people spend lots of time pretending to beat up and shoot pretend people, will this experience reduce their empathy for human suffering? Will it make them more likely to really beat up and shoot real people?

We might get different answers to this question depending on the variables we decide to measure, and the tools we use to measure them.

In this study, researchers found 15 people who often played violent video games–typically “first person shooter” games involving automatic weapons–and 15 who had never played them.

These participants looked at sketches: some showed people by themselves while others depicted people in pairs. Half of the pictures showed mundane activities–two men carrying a cabinet–while the other half showed violent activities–one man forcibly holding another man’s head underwater.

As participants looked at these pictures, researchers used functional magnetic resonance imaging to measure neural responses.

The researchers reasoned as follows: if violent video games impair players’ empathy, these scans should reveal differences in brain networks associated with empathy. That is: gamers and non-gamers would respond similarly to the men carrying the cabinet, but the gamers would not respond with as much empathy as the non-gamers to the sight of human suffering. After all, in this hypothesis, the gamers would have been desensitized to human pain, and so would not have as strong an empathetic response.

How much difference did they find?

One Conclusion, and One More

No difference. Gamers and non-gamers were equally empathetic–and non-empathetic–when they looked at these images.

So: when these researchers answer this version of this question using these tools, they get this answer.

However: when these researchers answer this version of the question using meta-analysis, they get a radically different answer:

The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior.

The Takeaway

I hope this entry does NOT persuade you that video games do, or don’t, reduce empathy.

I hope, instead, to persuade you that it’s hard to answer that question once and for all. We have many ways to ask, and many tools with which to answer, such a question. Only by asking (and asking and asking), and then by looking for converging answers, can we start to move towards a conclusion.

This is Your Chess on Ritalin
Andrew Watson


In movies and on television, chess skill symbolizes “pure intelligence.” Characters who can outwit others on the chessboard are–obviously–just smarter than everyone else. (On The West Wing, President Bartlet routinely schools his staff on the nuances of the game.)

By implication, people who get better at chess seem to be getting smarter. So, if I can give you a drug that improves your chess score, you might conclude that this drug is making you more intelligent.

This approach, of course, has a controversial history. We have developed drugs (such as methylphenidate and modafinil) that benefit people who struggle during cognitive tasks. Will those same drugs benefit those who don’t typically struggle? If they do, is that benefit somehow unfair?

The Study: Setup

German researchers worked with 40 mid-level chess players. Following a remarkably detailed and precise research regimen, these players spent 4 days playing games against a chess program that had been matched to play at their level.

On each day, these chess players took either methylphenidate (Ritalin/Concerta), modafinil (Provigil), caffeine (yum), or a placebo. The schedule of these 4 drugs was varied among the group, to be sure that the order didn’t matter.
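
For a concrete picture of what “varying the schedule” can look like, here’s a small sketch of a Latin-square-style rotation; the paper’s actual randomization procedure isn’t spelled out in this post, so treat the details as purely illustrative.

```python
# Illustrative counterbalancing sketch (not the study's actual procedure):
# rotate the four conditions so each appears once per participant
# and once in each day position across a block of four participants.
conditions = ["methylphenidate", "modafinil", "caffeine", "placebo"]

def latin_square(items):
    n = len(items)
    return [[items[(start + day) % n] for day in range(n)] for start in range(n)]

for participant, schedule in enumerate(latin_square(conditions), start=1):
    print(f"participant {participant}: {schedule}")
```

Rotations like this are one common way to make sure no drug systematically lands on the first or last day.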

The Study: Results

How did they do? It’s a bit complicated…

Compared to the games they played on a placebo, the players slowed down when they took any of the three drugs. On average, they added nearly 2 minutes to the time they took (9:13 vs 7:17 per game); that’s a slowdown of about 25%.

When they took more time, these players often ran up against the time limit that had been set for each game. As a result, they lost lots of games by running out of time.

But, what happens when we look at the games when they didn’t run out of time?

They got better. It’s a little tricky to describe improvement in chess terms. You might say they had a 5% increased chance of winning. Or, you might say–as the lead researcher said:

If we correct for the slowest players, then the effect would be the equivalent of moving a player from say, number 5000 in the world ranking, to number 3500 in the world ranking. In a single game, the effect is the equivalent of having the white pieces, every time.

That’s quite the improvement.

The Study: Implications

So, what do we do with this information? Should we all rush right out and add some methylphenidate to our daily vitamins?

In my view, not yet.

First, this study looked at people playing chess. Although we associate chess with “intelligence in general,” we can’t be sure–based on this study alone–that the effects of these drugs will generalize to other cognitive activities.

Second, the study worked with an unusual subgroup of the population: the average IQ among the players was 125. (Of course, IQ isn’t the only–or necessarily the best–way to measure human cognitive capacity. But, it’s not meaningless.)

An IQ of 125 is more than one and a half standard deviations above average. This is, in other words, a select–even atypical–group of thinkers.

For these reasons, I wouldn’t do anything differently just yet.

And third: I stumbled across this study after I had completed this blog entry. The headline is that non-prescription use of Ritalin can muddle the dopamine system–at least in rats.

When I say “muddle,” I’m summarizing the following passage:

These changes in brain chemistry were associated with serious concerns such as risk-taking behaviors, disruptions in the sleep/wake cycle and problematic weight loss, as well as resulting in increased activity and anti-anxiety and antidepressive effects.

In other words, if these effects are true for humans as well as rats, that’s some serious muddling right there.

At the same time, I must tell you that this chess study gives me pause. In grad school, the orthodoxy about these drugs was that “they help people who struggle think more like typical learners, but they don’t help typical learners think like more extraordinary learners.”

(You might think of them as a mental knee brace. The brace helps you if you’re injured, but isn’t particularly beneficial if you’re not.)

This study, however, suggests that–for this atypical group of people doing this atypical thing–such drugs do provide a cognitive benefit.

An alternate explanation

I’m intrigued by the fact that chess players taking methylphenidate, modafinil, and caffeine slowed down.

Perhaps the reason they played better is not that the drugs helped them think better, but that they gave the players more time to think.

Could we get the same benefit by deliberately forcing ourselves to take more time with our thoughts? This study doesn’t answer that question. But, the possibility seems worth exploring.

______________________________________________________________

A final note, unrelated to the content of this study. In looking over the specifics of the research paradigm, I note that the team began work on this study in July of 2011, and that it was published only in 2017. That’s right: they’ve been working on this for over 6 years.

Wow.