Surprise: Screen Time (Even Before Bed) Doesn’t Harm Adolescents
Andrew Watson

We’ve got lots of research on the complexity of adolescent life. And: lots of research on the importance of sleep.

We’ve also got some research showing that technology can clutter our cognitive processes. (To be clear: technology might also be fantastically useful.)

So, what happens when you put all that together and ask about technology and adolescent well-being?

Predictions

I myself would have made two predictions:

One: except at the very extreme end of screen use, I would have doubted that technology time matters much for adolescent well-being. Over the years, I've seen plenty of studies suggesting that teens do just fine — even socially — when they're often online.

In brief: I’ve heard lots of exaggerated concerns, but little persuasive data behind them.

Two: sleep is, of course, essential for human well-being. We can’t think or learn well without it. Heck, we can’t function very well without it.

And, we’ve got research showing that the light from screens delays melatonin onset — and therefore makes it hard to fall asleep.

For those reasons, I would have predicted that screen time before bed — especially LOTS of screen time before bed — would make life hard for adolescents.

The Findings

According to this review, I’m half right. And: not the half I was confident about.

A study that looked at more than 17,000 adolescents in the US, England, and Ireland found that technology use generally didn’t affect adolescent well-being.

(More precisely, they found that screen time accounted for less than 1% of the difference in adolescent well-being.)

And — SURPRISE — they found that technology use before bed had no statistically significant effect.

Amazingly, even a full hour of screen time before bed produced no ill effects in this study.

What Teachers and Parents Should (and Should Not) Do

This study reconfirms the point that screen time — except extreme amounts — probably isn’t hurting teens. Even pre-bedtime screens aren’t such a big deal.

(If you’re still having trouble wrapping your head around that second point, don’t worry. I am too.)

So, what should we do?

Well, if we want to improve adolescent well-being, we should NOT focus our efforts on changing their technology habits. After all, if this study is correct, even an optimal change would improve their well-being by less than 1%.

That is: we should NOT be alarmed by the alarmists.

Instead, we should find out what really is stressing them out and focus on those problems instead.

As I find persuasive, research-based evidence to answer that question, this blog will let you know.

STOP THE PRESSES (And Yet, Remain Calm)
Andrew Watson

In the world of science, if you see the right kind of evidence, you have to change your mind.

As of this blog post, I might start changing my mind.

Regular readers know that I frequently decry false claims about “brain training.” In particular, when people claim to increase working memory capacity, we find that those claims don’t stand up to research scrutiny. (For instance: here and here and here.)

In my last post on the topic, I more-or-less gave up on the possibility. In fact, I wrote:

So, from now on, I’m just going to assume that new claims are highly likely to be false.

If brain training claims are subsequently replicated by many research teams; if the methodologies are scrutinized and approved by several scholars in the field; well, if that happens, I’ll relent.

For now, I don’t want to be fooled again.

But maybe — just maybe — researchers might have found a strategy to improve working memory. (I can’t believe I just wrote that sentence.)

April’s Big News

In a study just published in Nature Neuroscience, Reinhart and Nguyen might have found a way to boost working memory capacity.

We’ve got persuasive research showing that working memory overload causes brain waves in different regions to fall out of synch.

Reinhart and Nguyen, in effect, wondered if they could help resynchronize those brain waves.

In a multi-step study, they found that:

First: asynchrony of frontotemporal theta-phase waves corresponds with working-memory declines in 60- to 76-year-olds (compared to 20- to 29-year-olds).

(The findings get even more technical from there, so I’ll just stick with “brain waves” for now. If you want the details, click the link above.)

Second: the right kind of external electrical stimulation resynchronizes those waves.

Third: when the theta waves resynchronized, the WM function of the older subjects returned to levels typical for the younger subjects.

Technically speaking, THAT’S HUGE. The right kind of electrical stimulation improved WM.

What Happens Next?

A) Before we get too excited, we should let some expert skeptics weigh in. Although the concept is easy enough to understand — “the right kind of brain zaps restore WM to higher capacity!” — the specifics are fantastically complicated.

We should, in other words, let other scientists in this field kick the tires good and hard.

By the way: nine researchers have responded here. Several have suggestions for other populations to study: for instance, people diagnosed with dementia. But, none of them spot glaring errors in the methodology.

(For instance: in two studies I can think of, researchers made claims about improving working memory, but tested short term memory instead. This study doesn’t include that kind of switcheroo.)

B) Again before we get too excited, we should recall: this study isn’t about raising WM capacity for students. It is about restoring WM capacity for people who have experienced a decline.

That result might benefit each of us as we age. But, it doesn’t (yet) offer benefits to our students who have typically functioning WM.

However, this technique might help younger people with diagnosed WM deficits. That finding alone could be transformative for some students in our schools.

C) We don’t really know what this might look like outside of the neuroscience lab. As professor Robert Howard warns:

The “real world” benefits of any apparent improvements in experimental working memory function associated with the technique will also need to be evaluated together with the impact of any potential adverse effects of brain stimulation.

For example, induced improvements in working memory might come at the price of worsening of other areas of cognitive function.

D) Okay, you can now go ahead and get really excited. I have said for years that if we could improve WM capacity, that change would be like the printing press in its effect on human cognition.

For the first time in a decade, I’m starting to think that it just might happen.

If you’d like to learn more, this very helpful summary of Reinhart and Nguyen’s work is a good place to start.

How to (Un)Make System-Wide Changes Based on Research
Andrew Watson

If you’re reading this blog you already know: brain researchers can offer fascinating and practical advice to improve teaching and schooling.

There’s SO MUCH good advice, you might want to make lots of changes right away to get all those benefits.

Before you do that, let me offer two suggestions.

First: Take Incremental Steps

I worked with a school where the math department dramatically overhauled its entire program, with research guidance along the way.

The department adopted an inquiry based pedagogical model. And, it developed a syllabus designed to space and interleave practice. And it championed a group-work approach. And, to help students manage math anxiety, it adopted a mindful meditation program. And it incorporated lots of exciting technology.

The results: not good. By every measure, the students didn’t learn as much math. (In some cases, the results were quite alarming.) And: stress levels among teachers and students went up considerably.

Here’s the vexing point: the teachers didn’t know what to do next, because they didn’t know which of the changes they made created the problem.

Should they go back to direct instruction? Should they return to the traditional syllabus? Give up on group work and stop mindfulness breaks? Return all the iPads?

Because the grand combination of changes had produced those bad results, teachers didn’t know which one piece (or combination of pieces) to undo.

Potential solution: make one change at a time. Get a good measure of its success or failure before you add the next.

Second: Define Success and Failure in Advance

When we put lots of effort into a new project, we're naturally inclined to root for its success. In fact, we're inclined to exaggerate its success.

For example: if I commit to a systematic program of retrieval practice with my students, I’m likely to see its benefits. And: to overlook its shortcomings.

To compensate for these natural blind spots, I think, we should define success and failure before we make the changes.

For instance, I might decide that all my retrieval practice work is succeeding if class averages on weekly vocabulary quizzes go up by 5%. It’s failing if that average falls by 5%. Any result in between is inconclusive noise.

(No doubt, what you measure and how you measure it will differ considerably from this hypothetical.)
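That decision rule is simple enough to write down explicitly. Here's a minimal sketch of my hypothetical; the function name and the 5% thresholds come from my example above, not from any research-backed standard:

```python
def evaluate_intervention(baseline_avg, new_avg, threshold_pct=5.0):
    """Classify a change in class averages against thresholds set in advance."""
    change_pct = (new_avg - baseline_avg) / baseline_avg * 100
    if change_pct >= threshold_pct:
        return "success"
    if change_pct <= -threshold_pct:
        return "failure"
    return "inconclusive"

# Hypothetical weekly vocabulary quiz averages
print(evaluate_intervention(80.0, 85.0))  # 6.25% gain -> success
print(evaluate_intervention(80.0, 81.0))  # 1.25% gain -> inconclusive
```

The point of writing the rule down before the term starts is that the thresholds can't quietly drift once you've seen the data.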

At the end of the following term, I can check out my class averages, and have a plausible measurement of the results.

When I calculate those averages, I will hope that all my work paid off in more vocabulary learning. But I'll be less tempted to overhype success — "look, they're having so much more fun!" — if cold data throw water on my enthusiasm.

And, that cold water will be all the more bracing if I myself decided on that measurement — well before the experiment even began.

To Sum Up

We should indeed make changes to schools based on psychology and neuroscience research.

To ensure those changes produce the most benefit, we should introduce them at a reasonable pace.

And: we should know before we start how to define success.

Working Memory Overload Throws Neurons Out of Synch
Andrew Watson

My English classroom often includes discussions like these:

When we read Zora Neale Hurston’s Their Eyes Were Watching God, I might ask my students “who is the antagonist?”

To answer this question, my students must recall several bits of factual information:

the definitions of “antagonist” and “protagonist”

the major characters of the novel

their most important actions and goals

Once they’ve recalled those facts, my students have to rearrange all that information into new conceptual patterns.

Which character's actions and goals best align with the definition of "protagonist"? (In this case, that's an easy question. Janie Crawford is far and away the likeliest nominee.)

Who’s the antagonist? That is, which character’s actions and goals thwart Janie’s?

That’s a much harder question, and students must wrestle with several possibilities as they develop a plausible argument.

Let’s Talk About the Mind

Where do my students hold and process all this information?

For a psychologist, that’s an easy question: working memory.

Working memory allows students to select, hold, reorganize, and combine information held in long-term memory: in this case, the novel’s events.

It also allows them to select, hold, reorganize, and combine information perceived from the environment: the question I just asked about antagonists.

Because we constantly ask our students to hold and combine bits of information, our students use working memory all the time.

When we ask students to calculate the volume of a solid, or to compare historical figures, or to explain a trophic cascade, or to predict what will happen when I roll a ball down a ramp, we’re asking them to use working memory.

By the way: this truth holds for skills and processes as well. Why is learning to drive a stick shift so hard? Because you must hold, combine, and coordinate several distinct physical processes.

And, here’s an essential point: we don’t have lots of working memory to use.

Let’s Talk About the Brain

We know a lot about the mental processes involved in working memory. (I might have written a book about them.)

But, the neuroscience of working memory has been harder to study.

In the world of psychology, we know that WM can be easily overwhelmed.

But, in the world of neuroscience, we don’t know exactly what happens at that moment.

In other words: what’s happening in the physical object of the brain that accounts for the mental difficulty?

What happens, for example, when I can’t shift gears properly on this stupid manual car?

Are neurons somehow disconnecting from one another? Are electrical signals going haywire? Perhaps neurotransmitters are watching kitten videos on YouTube?

Today’s News

We’re starting to get an answer to that question.

New research suggests that successful working memory functioning requires that distinct brain regions operate synchronously.

When they reach overload, those regions fall out of synch.

Once those regions no longer synchronize, then students might struggle to solve math problems, or sound out a word with new phonics rules, or conjugate a verb in a freshly learned tense.

Like much neuroscience research, this study is fantastically complicated. Luckily, it’s been described quite well by Jordana Cepelewicz over at Quanta Magazine. (No need to worry about the “seven plus or minus two” formula.)

The good news here is clear: we’re starting to get a clearer picture about the neuroscience of working memory overload. Because teachers should be obsessed with working memory overload, we might well be intrigued by this news.

We should keep in mind, by the way, that this research so far has been done with monkeys. Whenever considering new research, always keep this rule in mind:

Never, never, never change your teaching practice based on research into non-human animals.

At some point, we might get neuroscience research that helps teachers manage working memory load. Although that day isn’t today, we should be glad that research possibility is clearer now than before.

Is Failure Productive? (Hint: We Should Ask a Better Question)
Andrew Watson

Do students learn better after they experience failure? Two recent studies over at The Science of Learning help us answer that question.

In the first study, professors in a Canadian college wanted to help their Intro Bio students learn difficult concepts more effectively. (Difficult concepts include, for example, the “structural directionality of genetic material.”)

They had one Intro Biology section follow a “Productive Failure” model of pedagogy. It went like this.

First, students wrestled with conceptual problems on these difficult topics.

Second, they got in-class feedback on their solutions.

Third, they heard the professor explain how an expert would think through those topics.

Another Intro Bio section followed these same steps but in a different order:

First, they heard the professor explain how an expert would think.

Second, students wrestled with conceptual problems.

Third, they got in-class feedback on their solutions.

So, all students did the same steps. And, they all followed an “active pedagogy” model. But, one group struggled first, whereas the other group didn’t.

Who Learned More?

This answer proves to be unusually complicated to determine. The researchers had to juggle more variables than usual to come up with a valid answer. (If you want the details, click the link above.)

The headlines are:

On the next major test, students who experienced productive failure learned more.

On the final exam, however, only the "low performing" students did better after productive failure. For the middle- and upper-tier students, both strategies worked equally well.

Conclusion #1:

So, we can’t really conclude that productive failure helps students learn.

Instead, we’re on safer ground to say that – over the longer term – productive failure helps “low performing” students learn (compared to other kinds of active learning).

But Wait, There’s (Much) More

Two weeks after they published the study about Canadian college students in Biology classes, Science of Learning then published a study about German fifth graders learning fractions.

(As we discussed in this post, watching students learn fractions helps researchers measure conceptual updating.)

In particular, these researchers wanted to know if students learned better after they struggle for a while. (Again, for details click the link.)

In this case, the answer was: nope.

So, we arrive at Conclusion #2:

Some college students, but not most, learned more from productive failure in a biology class – compared to those who learned via other active learning strategies.

However, fifth graders did not learn more about fractions – compared to those who learned via direct instruction.

Got that?

The Biggie: Conclusion #3

When teachers come to research-world, we can be tempted to look for grand, once-and-for-all findings.

A particular study shows that – say – students learn better when they use an iPad to study astronomical distances. Therefore, we should equip all our students with iPads.

But, that’s NOT what the study showed. Instead, it showed that a particular group of students studying a particular topic with a particular technology got some benefit – compared to a particular alternate approach.

So, Conclusion #3:

Teachers can often find helpful research on teaching strategies.

We should assume that results vary depending on lots of highly specific conditions. And therefore, we should seek out research that includes students (and classroom subjects) as much like our own as possible.

And so: if you teach biology to college students, you might give the first study a close look to see if its methods fit your students well. (Given that it worked particularly well with struggling students, that variable probably matters to you.)

If, however, you teach fractions to fifth graders, you should probably hold off on productive failure – unless you find several other studies that contradict this one.

In other words: teachers can learn the most from psychology and education research when we investigate narrow and specific questions.


A final thought. I’ve only recently come across the website that published these studies. Congratulations to them for emphasizing the complexity of these research questions by publishing these studies almost simultaneously.

I’m sure it’s tempting to make research look like the last word on a particular topic. Here, they’ve emphasized that boundary conditions matter. Bravo.

The Simplest Motivation Strategy that You’re (Probably) Not Using
Andrew Watson

In school as in life, sometimes we just need to get stuff done. And, truthfully, getting stuff done can be a real challenge.

For instance: I’m about to start writing a book. Based on previous book-writing experiences, I can predict the mundane problems that will get in my way.

My cats invariably need attention just as I’m starting to get in the zone.

The alerts from my email account lure me away from difficult writing passages.

I can never decide: stop for a snack now, or wait until lunch?

Luckily, we’ve got a remarkably simple strategy to get over these predictable hurdles.

Give Me Three Steps

Step 1: make a list of the potential problems. (I’ve already done that.)

Step 2: figure out the most plausible solutions.

So, for instance: instead of responding to my email alerts, I can simply close that browser. Problem solved.

Step 3: turn the first two steps into an “if-then” plan.

IF I get an email alert while working on my book, THEN I’ll close my email browser rather than look at the email.

Believe it or not, this simple process makes it much likelier that I will, in fact, ignore the email. (Or the cat, or my hunger.) And, because I've taken care of the most common obstacles, I'm much likelier to get my book written.

(Ask me six months from now how it’s going.)

Two More Steps?

This technique is even more effective when combined with another technique called “mental contrasting.”

In a recent article summarizing research in these fields, Marc Hauser describes mental contrasting this way:

In [mental contrasting], the individual first identifies and vividly describes a desired goal or wish. To be effective, this wish has to be feasible, but not easy.

Next, the individual identifies an obstacle that might get in the way of achieving this goal and vividly describes it [too].

Doing both together — vividly describing the goal AND vividly describing the obstacle — turns out to be much more helpful than doing just one or the other.

The Proof in the PSAT, and the Pudding

These techniques seem so simple that it’s hard to believe they work. In fact: why should we believe it?

Well, we’ve got some good research to persuade us. Hauser’s article, in fact, does a very helpful job summarizing both the theoretical background behind these strategies, and the studies that show their effectiveness.

For instance, Angela Duckworth (yes, that Angela Duckworth) worked with high-school students who wanted to prepare for the PSAT. Those who went through this process did 60% more practice problems than those who did a control task instead.

In fact, we’ve got good findings for non-academic tasks as well: limiting drinking, smoking, snacking, and so forth.

Practical Applications for Students

This technique, it seems to me, could be VERY easy for teachers to use. When we talk with our students about their homework habits, we can guide them through this process.

In fact, when I work with students in schools, I bring a specific form to guide them through the process.

(Here’s another approach from Ollie Lovell.)

Equally helpfully, we can use this technique to get our own work under control as well. We might not all have books to write, but we all have plenty of lesson-planning to do.

IF my phone rings while I’m preparing tomorrow’s class, THEN I’ll switch the phone to airplane mode without looking at the caller ID.

Problem solved!

The Debate Continues: Being Bilingual Doesn’t Improve Executive Function
Andrew Watson

Being bilingual, of course, offers lots of advantages. You can speak with more people, and — depending on potential cultural differences — gain valuable additional perspectives.

For a long time, researchers have wanted to know: does being bilingual offer additional cognitive benefits? Can it, in a meaningful way, help us think differently, and think better?

Some researchers — including my grad-school professor Gigi Luk — have argued that it increases cognitive control.

This possibility has been studied, for instance, with immigrants who learn a second language in Head Start.

However, this claim has long provoked controversy.

In this recent study, using highly sophisticated statistical procedures, researchers found that being bilingual did not improve executive function for Turkish immigrants to Germany.

What Should Teachers Do?

I have, over the years, seen studies on this topic go back and forth.

For instance, the Head Start research cited above seems quite persuasive. In that study, all students increased self-control during their Head Start year; after all, they were getting older.

Crucially, the students who also became bilingual showed greater increases in self-control. The likely explanation? The cognitive control required to be bilingual helps with other kinds of self-control as well.

And yet, as seen in the study of Turkish immigrants, that conclusion just might not be plausible.

So, my suggestions:

First: if your school currently makes strong claims about the executive-function benefits of bilingualism, you should seriously consider acknowledging the controversy in this field. That claim does have support. It also faces lots of strong counter-evidence.

Second: use this study as a reminder to seek out contradictory findings before you make changes to your classroom.

That is: if you hear persuasive research about topic X, be sure to look for anti-X research before you start X-ing.

Third: somewhat glumly, I wonder if this question ever can be answered finally and persuasively. The category “executive function” is alarmingly nebulous. And, the reasons that people become bilingual vary dramatically.

Given so many kinds of variety, I increasingly doubt we’ll be sure of an answer here.

With that point in mind, I think we should highlight this important point: learning a second language has value even if doing so doesn’t produce additional executive function benefits.

When we learn new languages, we create new opportunities to meet and connect with a world full of people. That benefit alone makes all that hard work worthwhile.

Factual Knowledge Must (Not?) Precede Higher Order Thinking
Andrew Watson

Last year, Tom Sherrington put out a recap of “evidence-informed ideas every teacher should know about.”

His statement about Bloom’s taxonomy goes like this:

“Never teach in a way that relegates knowing things to the bottom of the pile, placing creativity and ‘synthesis’ at the top, or get overly bogged down in ideas about ‘higher order thinking skills’ as if they are separate from knowing things. They aren’t.

Re-think your sense of Bloom’s taxonomy to view knowledge as the foundation of all else – and knowing things for the sake of it is good. Because there is always a sake and knowing things never stifles creativity; one fuels the other.”

In this summary, Sherrington makes a strong case for the primacy of factual knowledge. In this view, learners simply can’t undertake “higher order” thinking skills — like synthesis or creativity — without a strong foundation of factual knowledge.

Among teachers, this principle may be best known from Daniel Willingham’s Why Don’t Students Like School? He sums up the matter briskly as chapter 2’s core principle:

“Factual knowledge must precede skill.”

Of course, many scholars have rejected this perspective. Best known among them, Jo Boaler has insisted that math education currently relies excessively on memorization of disconnected facts.

Rather that drill times tables, she argues, teachers should prompt inquiry, exploration, and curiosity.

In other words: we can get to the top of the pyramid without worrying overly about the bottom layer.

Yes, but What Does Recent Research Show?

Researcher Pooja Agarwal specializes in cognitive science, with a focus on memory formation. In fact, she’s particularly keen on doing research in classrooms — not just psychology labs — to ensure that research findings generalize to real-world learning.

(We’ve interviewed Dr. Agarwal for the blog before. And, she’ll be offering a one-day Learning and the Brain seminar on powerful teaching in April.)

Agarwal recently explored the relationship between factual knowledge and skill. Her findings might surprise you. (They certainly surprised me.)

Contra Willingham, Agarwal found that …

“…building a foundation of factual knowledge via retrieval practice did not enhance students’ higher order learning.”

Instead, students did best when the form of the practice questions matched the form of the test questions. (‘Higher order’ here means ‘higher on Bloom’s taxonomy.’):

“Fact quizzes enhanced final fact test performance and higher order quizzes enhanced final higher order test performance.”

That is: when students didn’t review a particular set of facts, they could still reason with them — as long as they had practiced doing that kind of reasoning.

Ultimately, Agarwal ends up advocating for “mixed” practice quizzes, which include both factual and ‘higher order’ questions. (Here’s a link to her latest blog post summarizing this research.)

Lots More to Learn

Willingham has not yet responded to Agarwal’s study. I don’t doubt that he will; keep an eye out on his blog.

In fact: I haven’t seen any research response to this study. It will be a fascinating debate to watch.

I suspect one line of debate will go like this: Agarwal’s study creates a plausible way to measure the tension between “factual knowledge” and “higher-order thinking.” However, that difference as measured in this study might not be just what Sherrington and Willingham mean.

As you can infer, these differences get technical quickly. Rather than dig into them now, I think teachers should have two responses:

First: be very happy that thoughtful people will be rigorously testing this highly complicated question. We really do need to know the answer to this question…and we don’t yet.

In fact, we probably don’t even have the right vocabulary and the right categories to answer it yet.

Second: don’t yet make any big changes based on this research.

I hope that Agarwal’s study will launch a fresh round of investigation. We should wait to see where that leads us before we make big school plans.

The Better Choice: Open- or Closed-Book Quizzes
Andrew Watson

Psychology research offers lots of big ideas for improving student learning: self-determination theory, or the spacing effect, or cognitive load theory.

Once we make sense of that research, we teachers work to translate those big ideas into practical classroom strategies.

In some cases, we can simply do what the researcher did. In most cases, however, we have to adapt their test paradigm to our specific classroom world.

So, for example, Nate Kornell explored the spacing effect with flashcards. He found that 1 deck of 20 cards produced more learning than 4 decks of 5 cards. Why? Because a deck with 20 cards spaces practice out more than a deck with 5 cards.

That “big idea” gives teachers a direction to go.

But: we should not conclude that 20 is always the right number. Instead, we should adapt the concept to our circumstances. 20 flashcards might be WAY TOO MANY for 1st graders. Or, if the concepts on the cards are quite simple, that might be too few for college students studying vocabulary.
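The spacing arithmetic behind Kornell's finding is easy to see with a toy model. Assuming a student simply cycles through the deck in order (an idealization, not Kornell's actual procedure), here's where one particular card's repetitions land:

```python
def review_schedule(deck_size, passes):
    """Trial numbers at which one particular card is seen, assuming the
    student cycles through the whole deck in order on each pass."""
    return [p * deck_size for p in range(passes)]

# One deck of 20 cards: each card's repetitions arrive 20 trials apart
print(review_schedule(20, 4))  # [0, 20, 40, 60]

# A deck of 5 cards: repetitions arrive only 5 trials apart (more massed)
print(review_schedule(5, 4))   # [0, 5, 10, 15]
```

Same total number of reviews per card; the big deck just pushes them further apart, which is exactly what the spacing effect rewards.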

Translating Retrieval Practice

We know from many (many) studies that retrieval practice boosts learning.

In brief, as summarized by researcher Pooja Agarwal, we want students to pull ideas out of their brains, not put them back in.

So, students who study by rereading their notes don’t learn much; that’s putting ideas back in. Instead, they should quiz themselves on their notes; that’s pulling ideas out.

This big idea makes lots of sense. But, what exactly does that look like in our classrooms?

Over the years, teachers and researchers have developed lots of suggestions. (You can check out Dr. Agarwal’s site here for ideas.)

Thinking about retrieval practice, researchers in Germany asked a helpful question. In theory, closed-book quizzes ought to generate more learning than open-book quizzes.

After all: if my book is closed, I have to pull the information out of my brain. That’s retrieval practice.

If my book is open, I’m much likelier simply to look around until I find the right answer. That’s not retrieval practice.

These researchers wanted to know: does this sensible prediction come true?

The Results Please

Sure enough, closed-book quizzes do produce more learning. This research team retested students on the information twice: one week, and then eight weeks, after they heard it in a lecture.

Sure enough, the students who took closed-book quizzes did substantially better than those who took open-book quizzes. (The Cohen’s d values were above 0.80.)
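For readers who haven't met the statistic: Cohen's d expresses the gap between two group means in units of their pooled standard deviation, and values of 0.8 or more are conventionally called "large." A minimal sketch, using made-up quiz scores rather than the study's actual data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference between group means, divided by the pooled SD."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation, weighting each sample variance
    # by its degrees of freedom (n - 1)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical quiz scores -- NOT the study's data
closed_book = [82, 85, 88, 90, 79, 84]
open_book = [74, 77, 80, 72, 78, 75]
d = cohens_d(closed_book, open_book)  # well above the 0.8 "large effect" cutoff
```

So "above 0.80" means the closed-book group's advantage was large relative to how much students varied within each group.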

In brief: we now have one more research-supported strategy for creating retrieval practice.

As always, I think we should be careful to think about limits on such research.

In the first place, this study took place with college students. If you teach younger students, and your experience tells you that an open-book strategy will work better under particular circumstances, you might ask a trusted colleague for a second opinion. Research like this gives us excellent guidance, but it can’t answer all questions.

In the second place, other variables might come strongly into play. For instance: stress. If your school culture has always allowed open-book quizzes, your students might freak out at the prospect of a closed-book alternative. If so, the benefits of retrieval practice might be lost to anxiety overload.

In this case, you’ll need to take the time to explain your reasoning, and to ease your students into new learning habits.

In any case, we can be increasingly confident that many varieties of retrieval practice produce the desirable difficulties that help students learn. (For a fun exception to this rule, click here.)


The Limitations of Neuroscience in Guiding Teachers
Andrew Watson

At a professional development talk on long-term memory formation, a teacher politely scolded me: I should have spent more time discussing alpha waves and gamma waves.

After all, she said, that was the really important stuff when it came to brains and learning.

Of course, the differences between alpha and gamma waves can fascinate us. And, pictures of various graphs can look dramatic — especially if the graphic designer has made the colors particularly attractive.

And yet, this kind of neuroscience information offers almost no useful guidance to teachers.

Here’s why.

What Should Teachers Do?

Pretend for the moment that we can plausibly say “this brain region shows gamma waves when it is learning, and alpha waves when it isn’t.”

(By the way, we almost never can say that plausibly. But, we’re pretending here.)

What should teachers do with that information?

Presumably we should ask: how can we reduce alpha waves and enhance gamma waves?

The answer to that question will always include a particular teaching practice. We should use retrieval practice. Or, we should space out repetitions. Or, we should reduce working memory load.

In every case, we know about the effectiveness of those teaching techniques by studying psychology, not neuroscience.

We can, of course, see changes in brain activity when we use various classroom techniques.

But, we can determine their effectiveness only by measuring some behavioral outcome. Did the students do better on the test? Did they pay more attention to the stimulus? Or, did they demonstrate higher working memory scores? In every case, those are psychology questions.

Today’s News

I write about this topic every few months, because confusion between the two disciplines crops up fairly regularly.

For today, I want to highlight a blog post over at the Learning Scientists, where they’ve gathered several resources to explore this distinction.

Some of their resources explore the topic in a general way. The final link leads to a hot topic indeed: Daniel Willingham and Daniel Ansari challenge Jo Boaler and Tanya Lamar’s interpretation of neuroscientific data.

If you’ve been following debates about prior knowledge and math teaching, grab some popcorn and surf on over to that link.