
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M.Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of “Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher.”

Can Multiple-Choice Tests Really Help Students?
Andrew Watson

Multiple-choice tests have a bad reputation. They’re easy to grade, but otherwise seem…well…hard to defend.

After all, the answer is RIGHT THERE. How could the student possibly get it wrong?

Given that undeniable objection, could multiple-choice tests possibly be good for learning?

The Benefits of Distraction

A multiple-choice test includes one correct answer, and other incorrect answers called “distractors.” Perhaps the effectiveness of a multiple-choice question depends on the plausibility of the distractors.

So, a multiple-choice question might go like this:

“Who was George Washington’s Vice President?”

a) John Adams

b) Mickey Mouse

c) Tom Brady

d) Harriet Tubman

In this case, none of the distractors could possibly be true. However, I could ask the same question a different way:

“Who was George Washington’s Vice President?”

a) John Adams

b) Thomas Jefferson

c) Alexander Hamilton

d) James Madison

In THIS case, each of the distractors could reasonably have held that role. In fact, all three worked closely with–and deeply admired–Washington. Two of the three did serve as vice presidents. (And the other was killed by a VP.)

Why would the plausibility of the distractor matter?

We know from the study of retrieval practice that pulling information out of my brain benefits memory more than repeatedly putting information into it.

So, we might hypothesize this way:

If the distractors are implausible, a student doesn’t have to think much to figure out the correct answer. No retrieval required.

But, if the distractors are plausible, then the student has to think about each one to get the answer right. That’s lots of retrieval right there.

In other words: plausible distractors encourage retrieval practice, and thereby might enhance learning.

Better and Better

This line of reasoning leads to an even more delicious possibility.

To answer that question about Washington’s VP, the student had to think about four people: Adams, Jefferson, Hamilton, Madison.

Presumably she’ll learn the information about Adams–who was the correct answer to the question.

Will she also learn more about the other three choices? That is: will she be likelier to answer a question about Alexander Hamilton correctly? (“Who created the first US National Bank as Washington’s Secretary of the Treasury?”)

If the answer to that question is YES, then one multiple-choice question can help students consolidate learning about several different facts or concepts.

And, according to recent research, the answer is indeed YES.

The research paradigm used to explore this question requires lots of complex details, and goes beyond the scope of a blog post. If you’re interested, check out the link above.

Classroom Implications

If this research holds up, we might well have found a surprisingly powerful tool to help students acquire lots of factual knowledge.

A well-designed multiple-choice question–that is: one whose plausible distractors require lots of careful thought–helps students learn four distinct facts or concepts.

In other words:

“Multiple-choice questions…

a) are easy to grade

b) help students learn the correct answer

c) help students learn information about the incorrect answers

or

d) all of the above.”

Me: I’m thinking d) sounds increasingly likely…

Andrew Watson

Earlier this month, I wrote about the distinction between autobiographical memory and semantic memory.

Both kinds help us live meaningful lives.

But, schools focus on semantic memory: we want our students to know facts and skills over the long term.

We don’t really need them to remember the class or the exercise (or even the teacher) that taught them those facts and skills. That’s autobiographical memory.

That blog post was inspired by Clare Sealy’s recent essay ironically entitled “Memorable Experiences Are the Best Way to Help Children Remember Things.”

Happily, Sealy is the guest on a recent EdNext podcast: you can hear her in-depth explanation.

Equally happily, that podcast page includes Sealy’s essay itself.

To understand Sealy’s argument, and its full implications, you can both have a look and have a listen.

Does Music Training Help Us Pay Attention?
Andrew Watson

Schools help students learn specific skills and facts: long division, and the preamble to the US Constitution, and the glorious mysteries of the sonnet.

Wouldn’t it be great if schools could improve general cognitive capabilities?

For instance, it would be AWESOME if we could artificially increase working memory capacity. (Alas, we can’t. Really.)

It would be great if we could teach general critical thinking skills. (Alas: although we can teach those skills in discrete disciplinary topics, we probably can’t teach critical thinking generally.)

It would be super helpful if we could improve our students’ ability to pay attention…wait a minute: maybe we can.

We know that musicians must concentrate intensely to accomplish their marvelous work. To focus on the sheet music, ignore myriad distractions, accomplish nimble finger skills—all these require impressive degrees of attention.

Does all that attending help musicians both play music better and pay attention better? In other words: can they use those attention skills in other parts of their life?

Defining Attention

To answer that question, we have to start by defining the concept of “attention.”

Surprisingly, psychologists and neuroscientists don’t see attention as one unified thing. Instead, they see it as a behavior that takes place when three other things are happening.

First, they measure alertness. That’s a basic biological readiness: are the students awake enough? Or, so wildly overstimulated that they can’t focus? Those questions examine alertness. (Notice: they don’t directly examine attention—alertness is one small part of that bigger picture.)

Second, they measure orienting. When we ask about orienting, we consider the stimuli that the student is consciously perceiving.

So, for instance, at this moment I’m orienting to the letters on the screen as I type, to the mug of tea to my right, and to my cat Pippin who keeps nudging my arm. I’m not orienting to—say—the comfy chair in the corner, or the color of paint on the ceiling, or the gentle thump of the laundry machine downstairs.

I know all that stuff is there, but I’m not consciously processing it. (Well, I suppose, now that I’m writing about it, I must be processing it. But, I wasn’t orienting to it until I tried to identify stimuli that I wasn’t orienting to…)

Finally, to define the third part of attention, we consider executive attention. That segment takes much more time to describe and define, and overlaps a lot with working memory. It also includes our ability to ignore unimportant stimuli. We deliberately decide to focus on this topic here, not that one there.

So, when we ask the question “does music training improve attention,” we’re really asking three questions:

“Does music training improve alertness?”

“Does music training improve orienting?”

“Does music training improve executive attention?”

With these three questions in mind, we know what to do next.

Musician Inhibition

To test attention, researchers often use the Attention Network Test (ANT) to measure all three sub-segments of our attentional processes.
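
For the curious, here’s a minimal sketch of how one test yields three separate scores. The standard ANT logic is pure subtraction: each attention network is estimated as a difference between average reaction times across trial conditions. (The function and the reaction times below are my invention for illustration; only the subtraction scheme follows the published test design.)

```python
# Minimal sketch of standard ANT scoring. Each network score is a
# difference between mean reaction times (in ms) across trial types.
# All numbers here are invented for illustration.

def ant_scores(mean_rt):
    """Estimate the three attention networks from mean reaction times."""
    return {
        # Alerting: how much a warning cue speeds responses.
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        # Orienting: how much a location cue helps beyond a central cue.
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        # Executive: the cost of ignoring conflicting flanker arrows --
        # a smaller number means better filtering of irrelevant stimuli.
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }

print(ant_scores({
    "no_cue": 560, "double_cue": 520,
    "center_cue": 540, "spatial_cue": 495,
    "congruent": 510, "incongruent": 610,
}))
# {'alerting': 40, 'orienting': 45, 'executive': 100}
```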

In this study, scholars in Chile worked with about 40 adults. Half were “professional pianists,” with an average of more than 12 years of music training. The other half had never taken music lessons, and couldn’t read sheet music.

Did the musicians outperform the non-musicians on the ANT?

No, no, and yes.

That is: musicians and non-musicians did equally well at the first two parts of attention: alertness and orienting.

But, musicians scored higher on the executive attention part of the test than the non-musicians did.

Basically, they ignored irrelevant stimuli better than their age-matched peers.

What Does This Research Mean in the Classroom?

You can probably anticipate all the reasons we shouldn’t over-react to this study.

It’s quite small: fewer than 40 people participated.

It doesn’t necessarily show cause and effect. It’s entirely possible that people who start with better executive attention are more likely to become professional musicians than people with lower executive attention.

The professional musicians had YEARS of musical experience: more than twelve, on average. So: even if music training does improve executive attention, it’s not a quick fix.

At the same time, this study does suggest something important: at least in this one case, we might be able to train a general cognitive capability.

That is: we can’t speed up our students’ working memory development. We can’t train a general critical thinking skill. We can’t improve processing speed.

But, maybe, we can find ways to strengthen executive attention.

Given how important attention is in the classroom, that’s potentially great news indeed.

Getting the Timing Right: Critical Thinking Online
Andrew Watson

If we want students to remember what we teach–and, what teacher doesn’t?–we’ve got a vital strategy: spread practice out over time.

We’ve got scads of research showing that the same number of practice problems results in a lot more learning if those problems are spread out over days and weeks, compared to being done all at once.

We call this the spacing effect, and it’s as solid a finding as we’ve got in the field of educational psychology.

As teachers interested in psychology research, we should always be asking: “yes, but does that work in my specific context?”

For instance: if research shows that college students learn stoichiometry better in a flipped-classroom model, that doesn’t necessarily mean that my 3rd graders will learn spelling better that way.

In the language of psychology research, we’re looking for “boundary conditions.” What are the limits of any particular technique?

The Spacing Effect Meets Critical Thinking

Researchers in Canada wanted to know: does the spacing effect apply to the teaching of critical thinking?

Of course, we want our students to be effective critical thinkers. But, there’s heated debate about the best way to teach this skill.

Lots of people doubt that critical thinking can be taught as a free-standing skill. Instead, they believe it should be nested in a specific curriculum.

That is: we can be critical thinkers about sonnets, or about football play-calling strategy, or about the design of bridges. But, we can’t learn to think critically in an abstract way.

The Canadian researchers start with that perspective, and so they teach critical thinking about a specific topic: the reliability of websites. And, they go further to ask: will the spacing effect help students be better critical thinkers?

In other words: if we spread out practice in critical thinking, will students ultimately practice their critical craft more effectively?

The Research; The Results

To answer this question, researchers used a 3-lesson curriculum exploring the credibility of websites. This curriculum asked 17 questions within 4 categories: the authority of the website’s authors, the quality of the content, the professionalism of the design, and so forth.

Half of the 4th-6th graders in this study learned this curriculum over 3 days. The other half learned it over 3 weeks.

Did this spacing matter? Were those who spread their practice out more proficient critical website thinkers than those who bunched their practice together?

In a word: yup.

When tested a month later, students who spread practice out were much likelier to use all four categories when analyzing websites’ reliability. And, they used more of the 17 questions to explore those four categories.

To Sum Up

This research leads us to two encouraging, and practical, conclusions.

First: we can help our students be better critical thinkers when they analyze websites. (Heaven knows that will be a useful skill throughout their lives.)

Second: we can improve their ability by relying on the spacing effect. As with so many kinds of learning, we get better at critical thinking when we practice over relatively long periods of time.

Can a Neuromyth Result in a Truce?
Andrew Watson

We teachers feel passionately about our work, and so–no surprise–our debates and disagreements get heated.

Few debates rage as fiercely as that between champions of direct instruction (with or without capital “D” and “I”), and champions of constructivism (in its many forms: project-based learning, student-centered learning, etc.).

In a recent essay, writer and blogger Tom Sherrington would like to soothe this ferocity by declaring the whole debate a myth.

As his title declares: it’s a myth that “teacher-led instruction and student-centred learning are opposites.” (Sherrington is British, so we can overlook the missing “e” from “centred.”)

In brief, he argues: no matter how passionately we disagree about pieces of this debate, almost everyone agrees on a sensible core of ideas. We’re arguing at the margins, but could just as easily refocus on our agreements at the center.

Passionate Debates

One well-known analysis sports this dramatic title: “Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.”

Not much grey area there.

But, as Sherrington notes in his essay (I’ve tweaked the punctuation to make it blog-friendly):

[The authors] present their case most strongly for novice and intermediate learners but they appear to concede that for students approaching a more expert position, the different approaches are at least ‘equally effective.’

This means the debate is more about sequencing approaches appropriately in the learning journey.

Students will reach a point where these approaches represent a genuine choice.

And, critics of that analysis also find a middle ground (again with the punctuation tweaking):

The more important questions to ask are: under what circumstances do these guided inquiry approaches work? What are the kinds of outcomes for which they are effective? What kinds of valued practices do they promote?

In other words: even the champions of the strongest claims concede that they see both approaches being appropriate at different times.

Specifically: novices need (relatively more) direct instruction. Experts benefit from (relatively more) open-ended, project-y methods.

Beyond Knowledge

Sherrington argues for a truce between direct instruction and PBL, first, because even strong advocates admit that the “other side’s” methods have a place under certain circumstances.

Teaching novices? Try direct instruction. Working with relative experts? Bring on the projects.

Second, he argues that schools exist both to help students acquire knowledge and to help them acquire social habits and practices we value.

As Sherrington writes: “there are many aspects of student activity and teacher-student engagement that are desirable simply because we value them as social constructs.”

So, for example: our society–heck, our very form of government–requires that people be able to work together effectively. For that reason, we benefit our students when we help them learn how to do so.

When we coach students along with group work, that teaches them skills that our society values–above and apart from the knowledge they gain while doing that work.

Of course, Sherrington’s essay includes many other thoughtful points beyond these two: it’s worth reading in full.

A Recent Example

Sherrington’s first argument struck me because I’ve been trying to make it for some time now.

Just ten days ago on this blog, I wrote about a huge study from South America purporting to show that collaborative, inquiry-based learning produced substantial advantages.

And yet, as I found when I read its methods, the study didn’t contrast student-centered teaching with teacher-centered teaching.

Instead, it contrasted good teaching (combining both explicit instruction and projects) with really bad teaching (“copy down the names of the 206 bones of the human body”). Unsurprisingly, bad teaching produces bad results.

In other words: I’d like to spread the word of Sherrington’s truce. I hope you’ll join me!


Sherrington’s essay appears in The researchED Guide to Education Myths: An Evidence-Informed Guide for Teachers, published by John Catt.

I wrote about Clare Sealy’s essay in this collection last week as well, so you can tell I think it’s got lots of quality work.

I don’t agree with everything I read in this guide, but neither does its editor (Craig Barton) nor the series editor (Tom Bennett). They want to foster the debate, and this volume does that admirably.

Welcome to Boston! (Almost)
Andrew Watson

This is me. (I’m the one on the left.)

I mostly stay out of the way on this blog: the research and the teachers are the stars.

But, I always enjoy the e-conversations I get to have with people from across the globe. (Just yesterday, an email from Australia!)  I’ve learned so much, even (especially?) when we disagree.

I’m in New Jersey and Philadelphia right now, talking about self-control and adolescence; attention and working memory; the benefits of optimistic skepticism.

And, I’m more excited day by day to catch up with my Learning and the Brain peeps in just a few days.

So, I hope you’ll come introduce yourselves to me at our November conference: Learning How to Learn. It will be wonderful to put names to faces!

If you’re interested, I’ll be talking about the science of motivation on Friday morning. I hope to see you there.

Fostering Curiosity in the Classroom: “What Percentage of Animals are Insects?”
Andrew Watson

As teachers, we know that learning works better when students are curious about the subject they’re studying.

Obviously.

So, what can we do to encourage curiosity?

We could choose a topic that (most) students find intrinsically interesting. Dinosaurs, anyone?

But, we can’t always work on that macro level. After all, many of us work within a set curriculum.

What strategies work on a smaller, more day-to-day level? In other words: is there anything we can do in the moment to ramp up students’ curiosity?

Before you read on, pause a moment to ask yourself that question. What do you predict might work?

Predictions, Please

According to a recent study, the very fact that I asked you to make a prediction increases your curiosity about the answer.

Here’s the story.

Researchers in Germany asked college students to look at a question, such as “X out of 10 animals are insects.”

Sometimes the students made a prediction: “4 out of 10 are insects.”

Sometimes they thought about an example of an insect: “mosquitoes.”

Sure enough, students rated their curiosity higher after they made a prediction than after they provided an example.

And…drum roll please…they also remembered those facts better when their curiosity levels were elevated.

Don’t Take My Word For It

By the way: how did the researchers know how curious the students were to find the answer?

First, they asked them to rate their curiosity levels. That’s a fairly standard procedure in a study like this.

But, they also went a step further. They also measured the dilation of the students’ pupils. (You may know that our pupils dilate when we’re curious or surprised.)

And, indeed, by both measures, making predictions led to curiosity. And, curiosity led to better memory of these facts.

What To Do Next?

On the one hand, this study included relatively few students: 33, to be precise.

On the other hand, we’ve got LOTS of research pointing this direction. Some studies show that pretesting helps students learn better, even if the students can’t possibly know the answer to the question on the test.

So, some kind of early attempt to answer a question (like, say, making a prediction) does seem to help learning.

At the same time, I think it would be quite easy to overuse this technique. If students always take a pretest, they’ll quickly learn that they aren’t expected to know the answers and (reasonably enough) won’t bother to try.

If students always make predictions, I suspect they’ll quickly pick up on this trick and their curiosity will wear down.

As teachers, therefore, we should know that this approach can help from time to time. If you’ve got a list of important facts you want students to learn, you might build predictions into your lesson plan.

I myself wouldn’t do it every time. But, I think it can be a useful tool–especially if you need to know how many animals are insects. (In case you’re wondering: the answer is, “7 out of 10.” Amazing!)

Tea and Macbeth: Autobiographical vs. Semantic Memory
Andrew Watson

A few years ago, a former student named Jeremy invited me out for coffee. (I haven’t changed his name, because I can’t think of any reason to do so.)

We were reminiscing about the good old days–in particular, the very fun group of students in the sophomore class with him.

At one point he said: “You know what I remember most vividly about your class?”

I waited.

“Instead of using a spoon, you’d wrap your teabag string around your pen to wring it out into the mug. That always AMAZED me.”

In my early days as a teacher, I would have been horrified by this comment.

We had done such good work in this class! We analyzed the heck out of Macbeth. Jeremy had become a splendid writer–he could subordinate a quotation in an appositive like a pro. We had inspiring conversations about Their Eyes Were Watching God.

And all he remembered was the way I wrung out a tea bag?

The Hidden Compliment

Jeremy’s comment might seem like terrible news, but I think it’s good news. Here’s why:

The goal of sophomore English is for Jeremy to learn particular skills, facts, and habits of mind.

That is: he should remember–say–how to write a topic sentence with parallel abstract nouns.

However, he need not remember the specific tasks he undertook to learn that skill.

For example, when he wrote his essay about The Grapes of Wrath, he got better at writing essays. Whether or not he remembers the argument he made in that paper, he honed his analytical habits and writing skills. (How do I know? His next paper was better. And the next.)

He doesn’t remember the day he learned how to do those things. But, he definitely learned how to do them.

Many Memories

When psychologists first began studying memory, they quickly realized that “memory” isn’t one thing. We’ve got lots of different kinds of memory.

Those distinct memory systems remember different kinds of things. They store those memories in different places.

For instance: I’ve written a lot about working memory. That essential cognitive system works in a very particular way, with very important strengths and limitations.

But, say, procedural memory works very differently. Procedural memory helps us remember how to do things: like ride a bike, or form the past tense of an irregular verb.

These distinctions help me understand Jeremy’s memories of my class.

Jeremy had a strong autobiographical memory: my wringing out a teabag with my pen.

As the name suggests, autobiographical memories are rich with details about the events and people and circumstances.

You have countless such memories:

The time you poured coffee on your boss’s desk;

The first time you met your current partner;

The time you forgot your lines on stage.

You can call up vivid specifics with delicious–or agonizing–precision.

At the same time, Jeremy has lots of semantic memories from our class. As Clare Sealy describes them, semantic memories are “context free.” They “have been liberated from the emotional and spatiotemporal context in which they were first acquired.”

For instance:

Jeremy knows the difference between a direct object and a subject complement.

Having read The Ballad of the Sad Cafe, he knows how to analyze love triangles in literature.

Knowing how we define the word “romance” in English, he can explain the (many) bizarrenesses of The Scarlet Letter.

However, those semantic memories have an entirely different feel from autobiographical memories. They lack the vivid specifics.

Jeremy knows that a subject complement “renames or describes” the subject. But he can’t tell you the tie I was wearing when I first explained that. He can’t tell you the (probably gruesome) example I used to make the distinction clear.

If he could, they would be autobiographical memories as well as semantic memories.

Why The Distinction Matters

As teachers, we’re tempted–often encouraged–to make our classes dramatically memorable. We want our students to remember the time that we…

Surprisingly, that approach has a hidden downside.

As Clare Sealy explains in a recent essay, we can easily use information in semantic memory in a variety of circumstances. That is: transfer is relatively easy with semantic memory.

However, that’s not true for autobiographical memory. Because autobiographical memory is bound up with the vivid specifics of that very moment on that very day (in that very room with those very people), students can struggle to shift the underlying insight to new circumstances.

In other words: the vivid freshness of autobiographical memory impedes transfer.

Sealy explains this so nimbly that I want to quote her at length:

Emotional and sensory cues are triggered when we try to retrieve an autobiographical memory. The problem is that sometimes they remember the contextual tags but not the actual learning.

Autobiographical memory is so tied up with context, it is no good for remembering things once that context is no longer present.

This means that it has serious limitations in terms of its usefulness as the main strategy for educating children, since whatever is remembered is so bound up with the context in which it was taught. This does not make for flexible, transferable learning that can be brought to bear in different contexts and circumstances.

By the way, in the preceding passage, I’ve used the phrase “autobiographical memory” when Sealy wrote “episodic memory.” The two terms mean the same thing; I think that “autobiographical memory” is a more intuitive label.

To Sum Up

Of course we want our students to remember us and our class: the fun events, the dramatic personalities, the meaningful milestones.

And, we also want them to remember the topics and ideas and processes they learned.

Crucially, the word “remember” means something different in those two sentences; the first is autobiographical memory, the second is semantic.

Teaching strategies that emphasize remembering events might (sadly) make it harder to remember ideas and processes.

So, we should use teaching strategies that foster the creation of semantic memories.

Happily, the autobiographical memories will take care of themselves.


Clare Sealy’s essay appears in The researchED Guide to Education Myths: An Evidence-Informed Guide for Teachers. The (ironic) title is “Memorable Experiences Are the Best Way to Help Children Remember Things.”

Inquiry- and Problem-Based Pedagogy: Dramatic Results in South America (?)
Andrew Watson

A recent study, published by the Center for Effective Global Action, sees big benefits from teaching built around student collaboration, inquiry, and problem-solving.

Working in four countries (!), in ten different schools (!!), with over 17,000 students (!!!), researchers find that K-4 students made more progress in math and science when they explored questions, compared with students who listened to lectures.

They report these results in stats-y language that doesn’t translate well: after 7 months, students averaged 0.18 standard deviations higher in math, and 0.14 in science. After four years, those differences bloomed to 0.39 and 0.23.

That’s not as sexy-sounding as, say, “they scored X% higher on a standardized test.” But, however you look at it, those are eye-catching numbers.
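
If you’d like a rough translation anyway, here’s my own back-of-envelope version (not the study’s): assuming score distributions are roughly normal, an effect size d places the average student in the treated group at the Φ(d) percentile of the comparison group.

```latex
% Back-of-envelope only; assumes roughly normal score distributions.
% The standard normal CDF \Phi converts an effect size d to a percentile:
\Phi(0.18) \approx 0.57 \quad \text{(math, 7 months: about the 57th percentile)}
\Phi(0.39) \approx 0.65 \quad \text{(math, 4 years: about the 65th percentile)}
```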

Inquiry Learning vs. What?

Despite these dramatic numbers and claims, I don’t think the study supports the strong claims made by these researchers.

Here’s why.

First, the research purports to study the difference between “inquiry and problem based pedagogy” and “traditional instruction.”

If you look over the description of the classes, however, I think you’ll quickly see that it studies the difference between “good” teaching and “bad” teaching.

So, for instance, in a “traditional” unit on the skeletal system in Argentina:

[S]tudents copy facts about bone tissues and the names of 206 bones of the human skeleton that teachers have written on the blackboard into notebooks.

That’s not traditional. That’s absurd. They copy the names of two hundred and six bones? The mind boggles.

And, by the way, the “inquiry and problem based pedagogy” [IPP] is full of good, old-fashioned direct instruction:

When done well, IPP includes elements of explicit instruction and scaffolding.

Teachers facilitate learning by guiding students through a series of steps and explicitly relating learning to students’ prior knowledge and experiences.

Teachers guide learners through complex tasks with explicit instructions that are relevant to the problems at hand.

They provide structure and scaffolding that help students not only carry out specific activities, but also comprehend why they are doing those activities and how they are related to the set of core concepts they are exploring.

So, yes, these students are inquiring and problem solving. And, they’re getting lots of explicit teacherly guidance.

So, again, the labels used in this study don’t fully align with what those terms typically mean.

Compared to Whom?

A second questions jumps out here as well.

The teachers who used IPP methods got impressive training and support. For instance:

They got 20 hours of professional training in these methods. (When was the last time your school provided twenty hours of training on one topic?)

They got lesson plans. They got teaching materials.

They got “continuous in-school teacher support.”

What did the teachers in the control-group schools get? The study doesn’t say.

That silence leads to the possibility that they got…nothin’.

Which is to say: the study compares teachers who got lots and lots (and lots) of support, with teachers who didn’t get any support.

So, the difference might have come from the specifics of the teaching method: in this case, “IPP.”

Or, it might have come from the energizing effects of working at a school getting so much researcher support and attention.

We simply don’t know. And, if I’m right that this was a “business as usual” control group, then the study design doesn’t let us know.

Strong Conclusions

Based on this study, I think we can conclude that…

4th graders should not have to copy 206 vocabulary words into their notebooks. (I’ll go out on a limb and say NO ONE should have to do that.)

Some level of explicit teacherly support and guidance is essential.

Once foundational knowledge has been established, an appropriate level of independent questing can solidify and extend knowledge.

Most of us, I suspect, would have agreed with all of those statements before we read the study.

I don’t think, however, we can conclude from this study that “Inquiry and Problem Based Pedagogy” (as we typically use those words in the US) is the best approach. Because: that’s not what this study tested and measured.

Today’s Neuro-Nonsense: Reading Brainwaves in the Classroom
Andrew Watson

A friend recently sent me this link to a video story in the Wall Street Journal.

The 6 minute video discusses a program in China that uses a…welll…gizmo to measure students’ attention levels during class.

Everything I know about this particular program comes from the video, so it’s possible I’m missing important information. But, based on the video, I render this temporary verdict:

THIS IS BONKERS.

Electroencephalography simply doesn’t work this way. That is: the gizmo is far too crude (and subject to irrelevant stimuli) to produce meaningful measurements.

And, attention is far too complex a cognitive process to be measured by three contact points with the skull. (Not joking. THREE.)

If I gave you a stethoscope and said, “Here. Please measure that person’s mental health,” I suspect you’d refuse.

You’d say something like this: “I can gather data, and then pretend that those data say something meaningful about a really complex network of mental functions. But you and I both know that this tool simply can’t do what you’re asking it to do.”

You’d be right.

One More Point

Of course, there’s LOTS to say about the social policy implications of using this gizmo. This isn’t a blog about social policy, so I’ll leave that to other analysts.

I’m answering the very basic technical question: “can we really do that?” The answer, as far as I know, is “certainly not.”

 

h/t Scott MacClintic