Today’s Neuro-Nonsense: Reading Brainwaves in the Classroom
Andrew Watson

A friend recently sent me this link to a video story in the Wall Street Journal.

The six-minute video discusses a program in China that uses a…well…gizmo to measure students’ attention levels during class.

Everything I know about this particular program comes from the video, so it’s possible I’m missing important information. But, based on the video, I render this temporary verdict:

THIS IS BONKERS.

Electroencephalography simply doesn’t work this way. That is: the gizmo is far too crude (and subject to irrelevant stimuli) to produce meaningful measurements.

And, attention is far too complex a cognitive process to be measured by three contact points with the skull. (Not joking. THREE.)

If I gave you a stethoscope and said, “Here. Please measure that person’s mental health,” I suspect you’d refuse.

You’d say something like this: “I can gather data, and then pretend that those data say something meaningful about a really complex network of mental functions. But you and I both know that this tool simply can’t do what you’re asking it to do.”

You’d be right.

One More Point

Of course, there’s LOTS to say about the social policy implications of using this gizmo. This isn’t a blog about social policy, so I’ll leave that to other analysts.

I’m answering the very basic technical question: “can we really do that?” The answer, as far as I know, is “certainly not.”


h/t Scott MacClintic

What if a Research-Supported Educational Idea is Unconstitutional?
Andrew Watson

As a reader of this blog, you know we’ve got lots (and LOTS) of research showing that sleep benefits learning.

A good night’s sleep consolidates memories. Naps improve learning. Heck: sleep even helps us forget things we want to forget.

What if it turned out that—for some reason—giving students time to nap were unconstitutional? What if lawyers argued we can’t delay high-school start times to allow for more sleep, because the constitution requires school to begin before 8:00 am?

Although this hypothetical example is obviously silly—why would naps be unconstitutional?—the deeper question matters. We are, after all, a country of laws. If a particular educational practice—like, say, “separate but equal”—is unconstitutional, we don’t do it.

Obviously.

A Current Example

The Tricycle is a magazine that promotes Buddhism, and for that reason takes real interest in mindfulness. You can, for instance, read about “The Buddha’s Original Teachings on Mindfulness” here.

Back in May, Tricycle editor James Shaheen posted a surprising interview on their website.

In this conversation, Shaheen talks with Dr. Candy Gunther Brown: a religious studies professor who argues that mindfulness can’t be separated from its religious origins.

For that reason, requiring mindfulness is—in fact—requiring a kind of religious activity: a requirement that, in Brown’s view, violates the US Constitution.

Digging Deeper

Shaheen’s conversation with Brown lasts almost an hour, and so can’t be summarized in a brief blog post. I encourage you to listen to the whole thing.

To give a flavor of her argument, I’ll briefly mention a few of her points.

First: mindfulness often encourages participants to adopt a detached view of the events around them, and their emotional responses to those events. Rather than respond, we should “take a mental step back” and simply notice our bodily response to our feelings.

Brown notes that this seemingly “neutral stance” might well spill over into a religious perspective. After all, many religions ask their adherents not to notice their internal responses, but to go into the world and do something about those events.

For that reason, mindfulness isn’t a religiously neutral practice.

Second: Brown quotes research suggesting that people who practice mindfulness have religious experiences at a higher rate than those who don’t. And, people who have those experiences often end up moving away from a monotheistic perspective.

Which is to say, mindfulness might in fact change the religious views of the people participating in it.

If we know that to be true (and, to be clear, I haven’t reviewed the research Brown cites), then requiring students to practice mindfulness might both have all the school and health benefits we like and indirectly encourage a particular religious framework within schools.

Of course, Brown discusses other concerns as well. Some are explicitly legal. Others focus on the motives of (some) people and organizations that promote mindfulness: phrases like “stealth Buddhism” make her suspicions amply clear.

You can read her own summary of her argument here.

What Should We Do?

Brown doesn’t want to banish mindful practices from schools. Instead, she wants schools that offer them to ensure such programs are voluntary.

They should not even be “opt-out” programs that students must decline—risking their relationships with peers and teachers.

Instead, they should be “opt-in” programs that students sign up for and attend when they choose to.

In her view, this framework—especially if it offers appropriate alternatives—would both provide the good stuff that mindfulness allows and pass constitutional muster.

Another important point stands out in this debate.

Note the source of this anti-mindfulness interview: a magazine and website devoted to Buddhism and mindfulness. (Okay, it’s not exactly an anti-mindfulness interview, but I suspect some people will see it that way.)

You might think that this magazine would be hostile to Brown’s position. You might think it would try to ignore her work, or shout it down.

Instead, editor Shaheen interviews Brown sympathetically and politely and knowledgeably. (He’s clearly read her book.)

So, the second thing we should do is emulate Shaheen’s example. As I’ve written elsewhere: when we hear about evidence that contradicts our beliefs, we should not ignore it or decry it.

We should, instead, learn as much as we can from it.

Whatever you think about the constitutionality of mindfulness, I hope you admire, and follow, Shaheen’s example.

Trying to Prove Yourself Wrong
Andrew Watson

You want the best research to inform your teaching. That’s why you’re reading this blog.

What’s the best way to be sure–or, as sure as you can reasonably be–that you’ve reached the most researchy conclusion?

For example: what should you do if you discover contradictory research?

That’s the problem that Blake Harvard faced over at Effortful Educator.

Here’s the story…

The Setup

Harvard teaches psychology to high school students. He knows A LOT about the mind and the brain. He’s careful to base his teaching practices on deep knowledge of research.

In fact, he even tries occasional experiments to study different teaching practices in a (relatively) controlled manner. In this post, for instance, he writes about his exploration of flexible classroom seating.

In brief, he knows his stuff.

Harvard’s conclusions, at times, challenge current trends. For instance: he describes himself as a relatively traditional teacher: more persuaded by research on direct instruction than by constructivist approaches.

You might not agree with those conclusions. But, if you read his blog, you’ll be impressed by his command of the research.

So, what did Harvard do when he came across research seeming to contradict his beliefs?

What if, for instance, a study headline says that students learn more from (“constructivist”) active learning than from a (direct-instruction-y) lecture?

Heck: the study was even highlighted in the Harvard Gazette. (To be clear: the Blake Harvard I’m writing about doesn’t work at Harvard, the university in Cambridge, MA.)

Key Lesson #1: Try to Prove Yourself Wrong

After a moment of understandable trepidation, Harvard forced himself to do what he tells his psychology students to do: confront their biases.

That is: Harvard (the teacher) thinks that the right kind of lecture will result in more learning than most active learning paradigms: exploratory discussions, for example, or projects.

When he found research that purported to show the opposite, he had a great opportunity: he could disconfirm his prior convictions.

This may be the very best strategy to achieve the goal at the top of this post: to base our teaching on excellent research.

If you think that strategy X will result in the most learning for your students, you should:

First: assume that someone has found contradictory evidence (someone always has), and

Second: actively seek out that contradictory evidence. (Try Google Scholar.)

When you find it, give that evidence a thoughtful read. You’ll end up facing one of a few options.

Option 1: the contradictory evidence is more persuasive than the evidence you’ve been following. As a result, you’ll be able to improve your teaching practice. That’s great news!

Option 2: the contradictory evidence isn’t very persuasive. As a result, you know you’ve been doing it right up to now. That’s great news!

Option 3: both evidence pools are equally convincing. Now you know that your former certainty isn’t supported by the best evidence. You can try out both approaches with your students. You’ll find the answer that works best in your context. That’s great news!

In any case, your scrupulous attempt to prove yourself wrong will lead to a better teaching result.

Key Lesson #2: Precise Definitions Really Matter

As it turns out, when Harvard tried to prove himself wrong by reviewing the research, he ended up focusing carefully on the study’s definition of “lecture” and “active learning.”

His ultimate conclusion–whether or not he changed his mind–came down to a very precise understanding of the specific teaching techniques used in those two classes.

For instance: if you read a study saying that “metacognition improves learning,” you should find out exactly what the researchers DID. What, precisely, was the metacognitive strategy that students employed?

And: does that technique make sense for you and your classroom?

Until we know the answers to those questions, we can’t know if this research makes sense in our specific classrooms.

A Final Point

You’ve noticed, I suspect, that I haven’t told you what (Blake) Harvard decided about Harvard (University’s) research.

Why?

Partly because I think you should read his post.

But also because the answer to that question–in my view–isn’t as important as these two broader conclusions.

Try to disprove your own beliefs.

Be sure you know exactly what happened in the research.

If you follow those two strategies, you can be increasingly certain that you’re following the best research-based advice around.

The result: your students will learn more.


When Facing Dramatic Blog Headlines, Ask For Evidence
Andrew Watson

Over at the Blog on Learning and Development, they’ve penned a dramatic headline: Exams May Damage Teenagers’ Mental Health and Restrict Their Potential.

Damage mental health.

Restrict teenagers’ potential.

That’s got your attention.

Your response to such a headline might well depend on your current beliefs about exams.

If you already think that exams harm students, you might cry out a triumphant “I told you so!”  You might send a link to your principal, along with a proposal to cancel the lot of them.

If you already think that exams hold students (and teachers) beneficially accountable for the information and skills they ought to have mastered, you might dismiss the blog post as yet another refusal to maintain strict but helpful standards.

I have an alternate suggestion:

Don’t take sides.

Instead, ask yourself a reasonable and straightforward question:

What pertinent evidence does the blog post offer to support its claims?

After all, you’ve decided to join the Learning and the Brain world because you want to go beyond mere opinions to research-informed conclusions.

So, as you review the blog post beneath that dramatic headline, don’t look for statements you agree (or disagree) with. Instead, check out the quality of the evidence provided in support.

Which Door?

Let’s start by asking this question: which kind of evidence would you find most persuasive?

1. A survey of high school principals, focusing on student stress levels.

2. A study comparing the mental health of students who took exams to the health of those who didn’t.

3. An online poll of high school students and their parents, asking about the highs and lows of high school.

4. An opinion piece by a noted neuroscientist.

5. A survey of therapists who work with teens.

Presumably, given these choices, you’d prefer door #2: the research study.

In this hypothetical study, researchers would identify two similar groups of adolescent students. One group would take exams. The other wouldn’t.

When researchers evaluated these students later on, they would find higher rates of mental health diagnosis in the exam group than the no-exam group. (For a relevant parallel, check out this study on developing self-control.)

Such a study would indeed suggest–as the blog states–that “exams may damage teenagers’ mental health.”
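
If you’d like to see what that hypothetical study’s analysis might look like, here’s a minimal sketch in Python. To be clear: no such study is cited anywhere in the post, and every number below (group sizes, score scale, means) is invented purely for illustration.

```python
# A sketch of the hypothetical exam study described above. NOT a real study:
# group sizes, score scale, and means are all invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated "mental-health symptom" scores (higher = more symptoms).
exam_group = rng.normal(loc=52, scale=10, size=200)      # students who took exams
no_exam_group = rng.normal(loc=50, scale=10, size=200)   # students who didn't

t_stat, p_value = stats.ttest_ind(exam_group, no_exam_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value here would suggest the exam group fared worse. Only a
# design like this supports the causal story the headline implies.
```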

The other methods would, of course, reveal opinions. Those opinions might well be informed by different kinds of experience: the students’ experience, their parents’, their teachers’, their therapists’.

But, even well-informed opinions can’t root out the biases that well-designed research seeks to minimize.

Let the Sleuthing Begin

As you begin reviewing this blog post, you’ll find several links to research studies. That’s a good sign.

However–and this is a big however–those cited studies don’t investigate the blog’s central claim. That is: they don’t explore the effects of exams on teens.

Instead, they offer evidence that adolescence is an important time for neuro-biological development. That’s true and important, but it’s not the blog post’s central claim.

Once the author has developed the (important and true) claim that brains change a lot in adolescence, the blog arrives at its core assertion: “GCSEs [exams] impose unnecessary stress on adolescents.”

To support that claim, it offers this link.

Credible Sources

This link reveals good news, and bad.

Good News: the argument that “exams might damage teens’ mental health” comes from a newspaper article covering a neuroscientist’s speech. That scientist–Sarah-Jayne Blakemore–has done lots of research in the world of adolescent brains. She does splendid work.

In fact her most recent book, Inventing Ourselves, has been enthusiastically reviewed on this blog. Twice.

Bad News: the concern that exams might damage mental health is Blakemore’s (very well informed) opinion–but it’s an opinion. She’s giving a speech, not describing a study.

The hypothetical study outlined above–the one that was your first choice for evidence–hasn’t been done. (More precisely: it’s not cited by the blog, or by Blakemore.)

More Bad News: when Blakemore says that “exams” might damage mental health, she means very specific exams: the General Certificate of Secondary Education exams–a kind of mandatory SAT exam in Great Britain.

That is: Blakemore does not say that exams in general harm students. Despite the headline, nothing in this article even indirectly suggests that schools shouldn’t have final exams.

If you want to persuade your principal to cancel all exams, this article simply doesn’t help you make that case.

Back to the Beginning

Let’s return to the blog headline that got us started: Exams May Damage Teenagers’ Mental Health and Restrict Their Potential.

I think this headline sets up a reasonable expectation. I expect (and you should too) that researchers have done a relevant study, crunched some numbers, and arrived at that conclusion.

They don’t just have an opinion. They don’t just have relevant expertise. They’re not making a prediction.

Instead, they have gathered data, controlled for variables that might muddle their conclusion, done precise calculations, and arrived at a statistically significant finding.

In the absence of that study, it’s genuinely surprising that a blog (for an organization that champions brain research) has made such an emphatic claim.

Important Notes

First: I don’t know if the blog post’s author wrote the headline. Often those two jobs fall to different people. (In newspapers especially, that arrangement can lead to misunderstanding and exaggerated claims.)

While I’m at it, I should also acknowledge that I myself might be guilty of an occasional hyperbolic headline.

I try to stick to the facts. I try (very hard) to cite exactly relevant research. I try to limit my claims to the narrow findings of researchers.

If you catch me going beyond these guidelines, I hope you’ll let me know.

Second: You might reasonably want to know my own opinions about exams. Here goes:

I haven’t seen any research that persuades me one way or the other about their utility.

I suspect that, like so many things in education, they can be done very badly, or done quite well.

Can exams become hideous exercises in mere memorization, yielding lots of stress but no extra learning? Yes, I’m sure that happens.

Can exams be inspiring opportunities for students to show their deep mastery of complex material? Yes, I’m sure that happens.

As is so often the case, I think global conclusions (and alarming headlines) miss the point.

We should ask: what kind of learning do we want our students to do? What kind of learning climate do we want to create? And, we should ask what kind of exam–including, perhaps, no exam at all–produces that result for most of our students.

Debunking Education Myths (Without Accidentally Reinforcing Them…)
Andrew Watson

Sadly, lots of learning myths clutter the field of education.

Right-brain/left-brain thinking? Myth.

The “learning pyramid”? Myth.

And, of course, “learning styles”? Epic myth.

How can we best combat all these myths?

As teachers and school leaders, we see an obvious strategy. If we want people to stop thinking the wrong thing, we should teach them the right thing.

More information, more skill in processing that information, will save the day.

Or, Not…

Alas, we’ve got lots of research showing that this obvious strategy doesn’t work.

In fact, it produces “backfire effects.”

The more we talk about all the facts that rebut the myth, the more familiar the myth seems. Our attempts to undo a myth turn out to reinforce it — simply because people hear so much about it.

Another problem: the more facts we use to rebut myths, the less mental processing space people have to consider them. As is so often the case: when trying to rebut myths, less information is more powerful.

A Handy Resource

Happily, John Cook and Stephan Lewandowsky have produced “The Debunking Handbook” to help us end myths without reinforcing them.

The handbook focuses on highly practical strategies:

Using best alternative explanations

Using graphics

Limiting contradictory information

With this handbook as a guide, you can help your colleagues get past the quaint falsehoods that interfere with learning.

And as a result, you’ll clear up time for the teaching strategies that truly help students flourish.


Sorting Hats, Myers-Briggs, and the Perils of False Classification
Andrew Watson

Over at the Hidden Brain podcast, host Shankar Vedantam worries about our habit of sorting people into distinct categories.

When it happens at Hogwarts with a talking hat, such sorting can be cute. When it happens in real life, it can create real damage.

Vedantam, although a skeptic, offers a helpfully balanced exploration of this question. He even interviews a psychologist who worked for the Myers-Briggs Foundation to get the best argument in favor of its responsible use.

Of course, in schools this sorting habit shows up most perniciously with learning styles. Three key points to remember about this theory:

Learning Styles Theory makes specific predictions. Research does not bear them out. To the degree that research can show a theory to be false, this theory is false.

Instead, each of us is a learning-styles category of one.

Luckily, we are more alike than different as learners. We can help students by maintaining optimal (relatively low) levels of stress. And, by reducing distracting stimuli in the classroom. Also, by using strategies that create “desirable difficulties.”

In other words: teachers don’t need to sort students into false categories. Cognitive psychology research helps us teach our students all unsorted, just as they are.


Pointing Out Online Mistakes Like a “Jerk”: More Misuses of Psychology Research
Andrew Watson

Headline writers face a difficult task, I suspect.

On the one hand, they want to capture the gist of the article. On the other hand, they really want you to click the link.

I thought about this puzzle when I read this recent headline:

People Who Point Out Grammar Errors Online Are Pretty Much Jerks, Study Finds

That’s an arresting claim. After all, the word “jerks” doesn’t often appear in psychology research papers…

Digging Deeper

So, what does this particular study say? Are people who “point out” online grammar errors “jerks”?

Researchers Boland and Queen asked themselves this question: does someone’s personality profile influence their response to written mistakes — such as typos or grammar errors?

(By the way: it would seem odd if the answer were “no.” If there is such a thing as a personality profile, shouldn’t it capture — among other things — the way people respond to one another’s errors?

But, in the field of psychology, we don’t just assume things. We research them. That’s what Boland and Queen do here.)

To answer their question, B&Q had 80+ people read short paragraphs: people’s responses to a “housemate wanted” ad.

Some of the responses were error free. Some included typos: “maybe we would mkae good housemates.” Some included grammatical errors: “If your someone who likes to play tennis…”

Participants then evaluated the authors of each paragraph. They also filled out a personality survey measuring “the big five” personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.

So, here’s the research question: did their personality traits predict their responses to grammatical errors and typos?
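
To picture the kind of analysis that question implies, here’s a minimal sketch in Python. The sample, the trait scores, and the relationship below are all invented; the actual study’s statistics were surely more elaborate.

```python
# A sketch of "do personality traits predict ratings?" All data invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)

n = 83  # the post says "80+ people"
agreeableness = rng.normal(loc=3.5, scale=0.7, size=n)  # hypothetical trait scores

# Invented relationship: less agreeable readers rate error-laden messages lower.
ratings = 4.0 + 0.1 * (agreeableness - 3.5) + rng.normal(0.0, 0.5, size=n)

result = stats.linregress(agreeableness, ratings)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
```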

The Results

The answer is: a smidge.

For instance, people with higher ratings of agreeableness didn’t much care about grammatical errors. People with lower agreeableness ratings cared a bit.

How much?

Well, on average, people with lower agreeableness scored an error-free message as a ~4.2. But, they rated a message with two grammar errors as a ~4.0.

On a 7-point scale, does that 0.2 difference really matter? It was statistically significant. But, the researchers’ methodology makes it hard to evaluate the difference.

Here’s a hypothetical. When my students study using method A, they average an 80 on the unit test. When they study using method B, they average an 80.5.

Method B might be “better” in a way that’s statistically significant. But, it’s honestly not significant in the way that you and I use that word. If, for instance, method B takes 3 times as long as method A, that extra 0.5 point almost certainly wasn’t worth it.

So too in this case. The less agreeable folks might, on average, give lower ratings. But, 0.2 points hardly seems like a big enough deal to worry about.
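
If you want to see that distinction in numbers, here’s a minimal sketch in Python. The 4.2-versus-4.0 gap echoes the ratings above; the sample sizes and standard deviations are invented, to show how a tiny difference can reach statistical significance anyway.

```python
# A sketch of "statistically significant but practically trivial."
# The 4.2 vs 4.0 means echo the post; sample sizes and SDs are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

error_free = rng.normal(loc=4.2, scale=1.0, size=5000)   # ratings, clean messages
with_errors = rng.normal(loc=4.0, scale=1.0, size=5000)  # ratings, error-laden messages

t_stat, p_value = stats.ttest_ind(error_free, with_errors)

# Cohen's d: mean difference divided by the pooled standard deviation.
pooled_sd = np.sqrt((error_free.var(ddof=1) + with_errors.var(ddof=1)) / 2)
cohens_d = (error_free.mean() - with_errors.mean()) / pooled_sd

print(f"p = {p_value:.1e}   (significant, thanks to the huge sample)")
print(f"Cohen's d = {cohens_d:.2f}  (a conventionally 'small' effect)")
```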

So, Are People Who Point Out Online Grammar Errors Jerks?

First: NO ONE POINTED OUT ANY ONLINE GRAMMAR ERRORS. It just didn’t happen.

Second: The study shows that people with a relatively low agreeableness rating feel more judgey about online grammar mistakes.

It does not show that people who comment on grammar mistakes have lower agreeableness scores.

And it certainly does not show that this particular person who just commented on a post has a low agreeableness score.

Those questions are related, but different. And, the differences really matter. Especially if you’re going to call someone a jerk.

Teaching Implications

When you see a headline like “Science Shows Group X Are Jerks,” have confidence it’s a wild overstatement.

So, when “science says” that …

“Teaching method X makes kids brilliant.”

“Cell phones make the world dumb and cruel.” (Or, “Cell phones will transform education and make classrooms perfect.”)

“This one habit will change your classroom forever.”

…follow up with the underlying research. See what the research says specifically. Decide whether or not it works for you and your students.

A Final Note

I’m honestly hoping that this article includes either a typo or a grammatical mistake. If it does, please point it out to me. I promise I won’t think you’re a jerk…

No, Brain Scans Can’t See You Think
Andrew Watson

Over at NPJ Science of Learning, Tracey Tokuhama-Espinosa debunks the myth that “brain scans see thought.”

In brief, Tokuhama-Espinosa argues that

Each brain imaging machine can, at best, measure a single dimension (electrical, chemical, or structural) of one sub-skill set …

No imaging machine can measure thought, only a sub-element of a thought.

The whole article is worth a read. Tokuhama-Espinosa has long made wise contributions to the field of Mind, Brain, Education. Her writing always merits attention.

Building a Better Research Mousetrap: @justsaysinmice
Andrew Watson

If you keep up on brain news, you have frequently had this experience:

You read a newspaper headline or a blog link, and news of a potential discovery gives you an adrenaline rush:

Eating garlic may prevent age-related memory loss!

Radioactive tracer shows effectiveness of cancer treatment!!

Ketamine may relieve depression!!!

Filled with enthusiasm, you read the accompanying article. Only to discover: the research was done on mice. The resulting adrenaline crash might cause you to desire chocolate-chip cookies (according to research done in mice).

Today’s News

Of course, mouse research is super important to establish basic biological processes. But, it doesn’t give teachers useful guidance. Ever.

(Now might be a good time to repeat one of my few absolute rules:

NEVER, NEVER, NEVER,

Change your teaching practice

Based on research

Into non-human animals.)

To highlight the foolishness of the headline-hiding-the-mouse gambit, researcher James Heathers has created a vital new twitter account: @justsaysinmice.

That’s it. When you follow his account, you’ll get occasional updates with links to articles drawing breathless conclusions about research. Heathers wants you to know that the research shows results in mice.

As of this writing, Heathers’s account has 29 tweets, and north of 45,000 followers.

(By the way, we’ve written about Heathers’s skepticism before. He is developing a specialty in debunking inaccurate science communication.)

So, to tune up your skepticism skills, I encourage you to make @justsaysinmice a part of your twitter world.

Andrew Watson

My friend Cindy Nebel has a thoughtful post about a recent article at TES.

Here’s the backstory: a world-famous geneticist has dismissed research into Mindset as “bullshit” and “gimmicks.”

Now, reasonable people have their doubts about Mindset Theory. We’ve written about such doubts before.

But, as Nebel emphasizes in her post, wholesale rejection of the theory simply doesn’t make sense. For instance:

Disciplines Matter…

Geneticists know a lot about genetics. And, genes matter for teaching and learning.

(How much do they matter? A BIG controversy…)

But: most geneticists remember that psychology research is complicated. Knowledge and skill in one field don’t automatically translate to knowledge and skill in another.

In other words: psychologists will — most likely — have better insights into the strengths and weaknesses of psychology debates than will rocket scientists, or nuclear submariners, or even geneticists.

This point, of course, extends to other kinds of cross-disciplinary critiques. Here’s Nebel on the intersection of neuroscience and education:

A common misconception that we hear is that education and neuroscience are related disciplines and that those who study the brain must know how we learn.

While one can inform the other, I promise that training in neuroscience does NOT include an understanding of how those brain processes translate into classroom practices.

We often talk about a very necessary dialogue between educators and researchers, because very few individuals have extensive experience in both domains.

For all these reasons, neuroscientists (and psychologists) can provide teachers with useful perspectives. But, only teachers can decide what makes the most sense in the classroom.

…but Cost Doesn’t Matter

One of the stranger parts of the TES interview: the geneticist Robert Plomin’s insistence that only expensive changes benefit education.

“To think there is some simple cheap little thing that is going to make everybody fine, it is crazy,” he said in exclusive remarks published today.

“Good interventions are the most expensive and intensive.”

Pish posh.

If you’ve spent any time at a Learning and the Brain conference, you know that teachers can make all sorts of highly effective changes to their teaching at no cost whatsoever.

Using retrieval practice instead of simple review: no cost.

Managing students’ working memory load by…say…spreading instructions out over time: no cost.

Moderating students’ alertness levels by having them move: no cost.

Anyone who says we can’t improve teaching and learning without spending lots of money simply doesn’t understand teaching, learning, or the promise of educational psychology.