
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Sharing the Learning and the Brain Experience with Colleagues
Andrew Watson

You’ve just gotten back from a Learning and the Brain conference, and – frankly – you’re revved up!

You’ve learned so much new information and gained so many new perspectives, you just want to share it all with your colleagues right away.

Of course, it can be challenging to synthesize and organize all that information. Where do you begin? How do you fit all the pieces together? What was the name of that researcher again?

Happily, lots of people have done this synthesis work for you.

In the past, for instance, I’ve recommended this document by Deans for Impact. In a few brisk pages, it summarizes 6 key findings from the world of cognitive science.

Today’s News

I recently stumbled across another synthesis: this one with a twist. It doesn’t just boil lots of information down to easy-to-understand pages. (Although it certainly does do that.) This synthesis provides questions, examples, and activities to help you share the information with other teachers.

For instance: like many other writers (me included), this one focuses on the science of attention. Simply put, students don’t learn information they’re not attending to.

This Learning Curriculum (2.0) reinforces that crucial point with a video you might show your colleagues.

It describes teaching strategies that – once we understand the importance of attention – will clearly be more and less effective.

It offers specific classroom suggestions and a warning or two. (Videos get students’ attention. But, alas, students might attend to surface features and miss the core concept we want them to understand.)

It also provokes deeper thought with questions you might put to fellow teachers.

Beyond Attention

Of course, this Learning Curriculum goes beyond attention. It considers working memory (my obsession), and prior misconceptions, and retrieval practice, and a host of other important topics.

I don’t agree with everything written here. (Heck, I don’t agree with everything written anywhere.) But, I think this curriculum…

synthesizes and organizes lots of essential information,

offers specific examples without getting bogged down in details, and

creates something usefully new: a strategy for sharing this information with other teachers. (I particularly like the “hinge questions” designed to be sure teachers understand the principles in application.)

For those reasons, I encourage you to check it out. I hope you’ll let me know your thoughts.

“The” Effect of “Exercise” on “the Brain”
Andrew Watson

A month ago, I wrote about a Twitter feud focusing on exercise during learning.

When a PE teacher posted a video of his students reading on exer-cycles, edu-Twitter erupted with champions (“love it!”) and critics (“bonkers!”).

My response at the time was:

First: I rather suspect that exercise during learning will distract students from their reading; however,

Second: we don’t have research on this specific question; and

Third: in the absence of research, it’s probably good for PE teachers to be experimenting in this realm.

In other words: Edu-Twitter, relax.

Today’s Update

Since that mini-controversy, I’ve stumbled across an intriguing research addendum.

Researchers in Germany wanted to know: how does exercise affect the brain? More specifically, does a difference in intensity level matter?

That is: if I exercise 35% below lactate threshold, will that influence brain connectivity differently than if I exercise 20% above that threshold? (“Lactate threshold” measures intensity of exercise; the specifics aren’t super important here.)

To answer that question, they had about two dozen men exercise at those different levels on different days.

The specific results quickly turn into a tornado of acronyms. But, briefly summarized, the researchers found that:

Low intensity exercise enhanced connectivity in networks that help process cognitive and attentional functions, while

High intensity exercise enhanced connectivity in networks that help process emotional responses.

And surprisingly (to me), high intensity exercise also diminished connectivity in networks that process motor coordination.

In other words, “exercise” does not have “an” effect on “the brain.”

Instead, different kinds of exercise have distinct effects on particular brain regions and networks.

Core Conclusions

First: in the short term, different exercise intensities may influence brain regions differently.

Second: that “short term” caveat is important. Notice for instance that high intensity exercise muddles motor coordination networks. Why would that be? The study’s authors suggest it indicates temporary “motor fatigue.”

That is: exercise doesn’t make us worse at motor coordination over the long term — that would be bizarre. Instead, it temporarily tires us out.  Presumably, motor coordination bounces back after we stop exercising.

So, too, we might be tempted to enhance cognitive function with low-intensity exercise. But, just as the motor-fatigue effect was temporary, so too the cognitive-function effect might be temporary.

Third: back to those readers on bicycles. I don’t think this research applies directly to that classroom experiment. (Although, if low-intensity exercise really improves cognitive function — even temporarily — that finding makes those bikes somewhat more appealing. That is: low-intensity exer-cycling might improve the students’ cognition and focus while they read.)

Instead, I think it highlights the importance of patience and specificity. Until we have more research on this specific point, I don’t think we have nearly enough reason to cry either “bravo!” or “bonkers!”

Instead, let’s gather more data. And, in the meanwhile, we can encourage one another in reasonable classroom experiments.

And yes, I do mean even PE class experiments including exer-cycles.

Welcome to San Francisco!
Andrew Watson

We can’t wait to see you at our conference this weekend: Educating Anxious Brains.

Various reports have found that many children and teens are experiencing significant stress, anxiety, and mental health issues.

More than one in 20 children ages 6-17 (and one in three teens) suffer from anxiety disorders, according to a June 2018 study in the Journal of Developmental and Behavioral Pediatrics. A new UC Berkeley study finds that college students have seen a sharp rise in anxiety over the past decade, and an American College Health Association study reveals that 62 percent of college students report a significant level of anxiety. In addition, according to the 2016 National Survey of Children’s Health (NSCH), almost 45 percent of all children in the US have experienced at least one Adverse Childhood Experience (ACE).

Discover how toxic stress, anxiety, and trauma negatively affect developing brains, student behavior, and academic success.

This conference will help you reduce anxiety and stress in your schools and practice; address teen depression and challenging classroom behaviors; foster coping, calming, mindful practices; create trauma-sensitive schools; and improve school success by promoting positive teacher-student relationships.

If you’ll be there, you might want a quick preview of our upcoming speakers.

You can meet many of them at this link. And, many more right here.

If you can’t join us in San Francisco, perhaps you’re free the weekend of May 1st in New York City. We’ll be exploring the Schooling of the Self.

Interested in Action Research? Try This Instead
Andrew Watson

We don’t do a lot of cross-posting here at Learning and the Brain. I believe this is the first time we’ve done so while I’ve been editor.

I think the initiative below is very exciting, and you — Learning and the Brain readers — are just the right audience to take advantage of it.

In this post, Ben Keep and Ulrich Boser of the Learning Agency Lab explain how we teachers can do valuable research in our own classrooms.

If that grabs your attention: read on!


New technologies can help educators become high-quality researchers. ​

When it comes to teaching, there are a million questions to ask about the nature of instruction.

What examples to use? What analogies to draw on? In what sequence to teach new ideas? The people in the best position to both ask and answer these questions are often teachers.

Teacher-driven research isn’t new, but — at least in the U.S. — it’s relatively rare. Teaching loads are high and work hours are long, making teachers reluctant to lead education research projects, even when they want to. And, generally speaking, the U.S. school system is not set up to support teacher-driven research.

But in spite of the challenges, teachers want to engage in research. One survey found that over 90% of teachers wanted to influence the direction of research. And 59% wanted to participate in research themselves.

One way to engage is through action research, which certainly has its place in the field. And while the approach has clear benefits, it also has some limitations — like missing comparison groups.

A new kind of tool might help solve this problem. Over the next year, different learning platforms plan on offering tools to assist teachers in running their own research projects. Take the ASSISTments ETRIALS Project: there’s already a small community of teachers performing independent classroom research on ASSISTments, and it’s scheduled to expand.

RCE Coach also has plans to put out a version of their software this fall that will facilitate teacher use of the platform. They plan on fostering collaborations and providing workshops and other resources to support teacher research.

There’s also Carnegie Learning’s UpGrade platform. The company has plans to release an easy-to-use UI that lets teachers perform research on the platform. They’re particularly interested in testing whether letting students move ahead at their own pace benefits student outcomes.

These tools all help teachers run randomized controlled trials in their classrooms. That is, they help teachers to randomly assign students to different instructional conditions so that we can figure out which teaching approaches work best — and why.
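Under the hood, each of these tools performs the same basic step: shuffling a roster and splitting it into conditions. Here’s a minimal Python sketch of random assignment; the roster and condition labels are invented for illustration, and the real platforms handle this internally:

```python
import random

# Invented roster, for illustration only.
roster = ["Ana", "Ben", "Chloe", "Dev", "Elena", "Farid", "Grace", "Hiro"]

random.shuffle(roster)            # randomize the order of students
midpoint = len(roster) // 2
condition_a = roster[:midpoint]   # e.g., pencil-and-paper homework
condition_b = roster[midpoint:]   # e.g., on-screen homework

print("Condition A:", condition_a)
print("Condition B:", condition_b)
```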

Action Research Isn’t Action(able) Enough. Or: Why RCTs?

Current teacher-driven research efforts emphasize action research, which is an approach to deliberately reflecting on one’s own teaching practice with an aim to improve it. Under this model, teachers will often experiment with new teaching approaches, conduct interviews or surveys of students, and make detailed observations along the way. Often, the entire class makes a change, and the teacher reflects on whether the change was effective at improving learning outcomes.

This has led to a lot of fascinating work. But one of the limitations of action research is that, without a meaningful group comparison, it’s hard to know whether the proposed change made a difference.

Putting teachers in charge of running RCTs offers several intriguing benefits. First, teachers are likely to ask questions that researchers might not think of. The tests would also be in the context of a real classroom environment. And the results could be put into practice immediately.

Second, a wider group of teachers becoming involved with research might help bridge the research-practice divide. Teachers do not often learn about the science of learning during teacher training programs. Simultaneously, many teachers feel like existing education research is inaccessible, hard-to-understand, or simply not relevant.

Transparent randomized controlled trials would also give teachers the ability to hone their intuitions about instructional choices. By posting the study design before posting the results, teachers, researchers, and anyone else who was interested could make predictions about what’s likely to happen. This gives people the kind of practice they need to become expert forecasters.

Of course, the approach also comes with significant challenges. With average class sizes of around 25 students, a single class yields very small sample sizes for carrying out RCTs. Teachers also have varying experience with research methods. And it’s still unclear what platform features will best serve the teachers-as-researchers community, and which questions simply can’t be tested using learning platforms.

RCTs In Action

Do students benefit from solving math problems with pencil and paper (as opposed to on a computer)?

Suppose we had a group of students complete a homework assignment by solving problems with pencil and paper, while a comparable group of students solved the same homework problems on a computer (with no incentive to write them out). Would the first group learn more or less than the second?

A math teacher in Maine — Bill Hinkley — actually decided to test this very question, through an RCT. One group of students was encouraged to use paper and pencil, and had to turn in a piece of paper showing their work. The other group of students went through the homework problems as usual — through a computer screen. Both groups saw and submitted their answers through the same math platform: ASSISTments.

The result? Students who used paper and pencil outperformed those who didn’t by about 13 points. The difference was just shy of statistical significance, but suggestive given the small sample size (15 students in one condition, 12 in the other). Bill Hinkley plans to replicate and expand on the experiment in the near future.
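If you’re curious how such a result gets checked for significance, here’s a minimal sketch using a two-sample t-test. To be clear: the scores below are invented for illustration; they are not Bill Hinkley’s actual data.

```python
from scipy import stats

# Invented scores, sized like Hinkley's groups (15 vs. 12 students).
paper_scores  = [72, 85, 90, 68, 77, 95, 88, 80, 74, 91, 83, 79, 86, 70, 92]
screen_scores = [60, 75, 82, 58, 70, 88, 73, 65, 77, 69, 62, 80]

# An independent-samples t-test compares the two group means.
t, p = stats.ttest_ind(paper_scores, screen_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
```

With samples this small, p-values bounce around a lot, which is exactly why replication matters.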

Want To Join The RCT Teacher Research Community?

What would happen if we could scale up this style of research? There are 3.7 million teachers in the U.S. If just one percent of them started engaging in education research, there would be 37,000 teacher-researchers. The largest education research association, AERA, by comparison, has about 25,000 members.

Suppose each teacher-researcher only performed one experiment a year. That’s still 37,000 small experiments, run in realistic, noisy, classroom settings using rigorous research methods. Imagine what we might learn.

Interested in using RCTs in your classroom? Get in touch with us: Email Ulrich at [email protected]

We’re looking to build a community of teacher researchers who are doing this work in schools every day.

How Does Self-Control Really Work? Introducing a Debate
Andrew Watson

Every teacher I know wishes that our students could control themselves just a little bit better. Or, occasionally, a whole lot better.

Rarely do we worry that students have too much self-control.

All these observations prompt us to ask: how does this thing called self-control really work?

In the field of psychology, that question has led to a fierce debate. If you’d like to enter into that debate, well, I’ve got some resources for you!

A Very Brief Introduction

Roy Baumeister has developed a well-known theory about self-control. You can read about it in depth in his book Willpower: Rediscovering the Greatest Human Strength, written with John Tierney.

Think of self-control as a kind of inner reservoir. My reservoir starts the day full. However, when I come down for breakfast, I see lots of bacon. I know I…MUST…RESIST…BACON, and that self-control effort drains my reservoir a bit.

However, once I finish my oatmeal and leave the kitchen, the bacon no longer tempts me so strongly. I’ve stopped draining the reservoir, and it can refill.

Baumeister’s theory focuses on all the things that drain the reservoir, and all the strategies we can use to a) refill it, or b) expand it.

Baumeister calls this process by a somewhat puzzling name: “ego depletion.” The “depletion” part makes good sense: my reservoir is depleted. The “ego” part isn’t as intuitive, but we’ll get used to that over time.

The key point: in recent years, the theory of ego depletion has come under debate — especially as part of the larger “replication crisis” in psychology.

Some say the theory has (literally) hundreds of studies supporting it. Others note methodological problems, and worry that non-replications languish in file drawers.

Welcome Aboard

Because self-control is so important to teachers, you just might be intrigued and want to learn more.

One great resource is a podcast, charmingly titled “Two Psychologists, Four Beers.” A couple times a month, Yoel Inbar and Michael Inzlicht get together over a few brews and chat about a topic.

In this episode, they talk about this controversy at length and in detail. SO MUCH interesting and helpful information here.

One key point to know: Inzlicht himself is a key doubter of Baumeister’s research. He’s not a dispassionate observer, but an important critic.

Friendly On Ramp

However interested you are in the topic of self-control, you might not have 80 minutes to devote to it.

Or, you might worry it will be overly complex to understand the first time through.

Good news! Ahmad Assinnari has put together a point-by-point summary of the podcast. 

You could read it as an introduction to an upcoming debate, and/or follow along to be sure you’re tracking the argument clearly. (BTW: Assinnari refers to Inzlicht both as “Inzlicht” and as “Michael.” And, beware: it’s easy to confuse “Michael” with “Michel,” another scholar in the field.)

So, if you’d like to learn more, but you’re not sure you want to read Baumeister’s book, this post serves as an introduction to Assinnari’s summary. And, Assinnari’s summary introduces the podcast.

With these few steps, you’ll be up to speed on a very important debate.

A Fresh Approach to Evaluating Working Memory Training
Andrew Watson

Because working memory is SO IMPORTANT for learning, we would love to enhance our students’ WM capacity.

Alas, over and over, we find that WM training programs just don’t work (here and here and here). I’ve written about this question so often that I’ve called an informal moratorium. Unless there’s something new to say, or a resurgence of attempts to promote such products, I’ll stop repeating this point.

Recently I’ve come across a book chapter that does offer something new. A research team led by Claudia C. von Bastian used a very powerful statistical method to analyze the effectiveness of WM training programs.

This new methodology (which I’ll talk about below) encourages us to approach the question with fresh eyes. That is: before I read von Bastian’s work, I reminded myself that it might well contradict my prior beliefs.

It might show that WM training does work. And, if it shows that, I need to announce that conclusion as loudly as I’ve announced earlier doubts.

In other words: there’s no point in reading this chapter simply to confirm what I already believe. And, reader, the same applies for you. I hereby encourage you: prepare to have your beliefs about WM training challenged. You shouldn’t read the rest of this post unless you’re open to that possibility.

New Methodology

One problem with arguments about WM training is that sample sizes are so small. In one recent meta-analysis, the average sample size per study was 20 participants.

In a recent book on cognitive training, von Bastian, Guye, and De Simoni note that small sample sizes lead to quirky p-values. In other words, we struggle to be sure that the findings of small studies don’t result from chance or error.

Instead, von Bastian & Co. propose using Bayes factors: an alternate technique for evaluating the reliability of a finding, especially with small sample sizes. The specifics here go WAY beyond the level of this blog, but the authors summarize handy tags for interpreting Bayes factors:

1–3: Ambiguous

3–10: Substantial

10–30: Strong

30–100: Very strong

100+: Decisive

They then calculate Bayes factors for 28 studies of WM training.
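Once a Bayes factor has been computed, the interpretation step is mechanical. A tiny sketch, assuming the bands above (the helper function is my own, not from the chapter; values below 1 favor the null, with the same bands applied to the reciprocal):

```python
def interpret_bayes_factor(bf: float) -> str:
    """Map a Bayes factor onto the interpretation bands listed above."""
    if bf < 1:
        return "evidence favors the null (invert bf and reuse the bands)"
    if bf < 3:
        return "ambiguous"
    if bf < 10:
        return "substantial"
    if bf < 30:
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

print(interpret_bayes_factor(4.2))    # substantial
print(interpret_bayes_factor(0.25))   # evidence favors the null
```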

Drum Roll, Please…

We’ve braced ourselves for the possibility that a new analytical method will overturn our prior convictions. Does it?

Well, 2 of the 28 studies “very strongly” suggest WM training works. 1 of the 28 “substantially” supports it. 19 are “ambiguous.” And 6 “substantially” suggest that WM training has no effect.

In other words: 3 of the 28 show meaningful support for the hypothesis. The other 25 are neutral or negative.

So, in a word: “no.” Whichever method you use to evaluate the success of WM training, we just don’t have good reason to believe that it works.

Especially when such training takes a long time, and costs lots of money, schools should continue to be wary.

Three Final Notes

First: I’ve focused on p-values and Bayes factors in this blog post. But, von Bastian’s team emphasizes a number of problems in this field. For instance: WM training research frequently lacks an “active” control group. And, it often lacks a substantial theory, beyond “cognitive capacities should be trainable.”

Second: This research team is itself working on an intriguing hypothesis right now. They wonder if working memory capacity cannot be trained, but working memory efficiency can be trained. That’s a subtle but meaningful distinction, and I’m glad to see they’re exploring this question.

So far they’re getting mixed results, and don’t make strong claims. But, I’ll keep an eye on this possibility — and I’ll report back if they develop helpful strategies.

Third: I encouraged you to read von Bastian’s chapter because it might change your mind. As it turns out, the chapter probably didn’t. Instead it confirmed what you (and certainly I) already thought.

Nonetheless, that was an important mental exercise. Those of us committed to relying on research for teaching guidance should be prepared to change our approach when research leads us in a new direction.

Because, you know, some day a new WM training paradigm just might work.


von Bastian, C. C., Guye, S., & De Simoni, C. (2019). How strong is the evidence for the effectiveness of working memory training? In M. F. Bunting, J. M. Novick, M. R. Dougherty & R. W. Engle (Eds.), Cognitive and Working Memory Training: Perspectives from Psychology, Neuroscience, and Human Development (pp. 58–75). Oxford University Press.

Where Should Students Study?
Andrew Watson

We’ve got lots of advice for the students in our lives:

How to study: retrieval practice

When to study: spacing effect

Why study: so many answers

Where to study: …um, hold please, your call is very important to us…

As can happen, research provides counter-intuitive — and sometimes contradictory — answers to that last question.

I grew up hearing the confident proclamation that we should create a perfect study environment in one place, and always study there. (The word “library” was spoken in reverent tones.)

As I think about the research I’ve seen in the last ten years, my own recommendations to students have been evolving.

Classic Beginnings

In a deservedly famous study, Smith, Glenberg, and Bjork (1978) tried to measure the effect of environment on memory.

They found that, in the short run, I associate the words that I learn in a room with the room itself. That is: if I learn words in room 27, I’ll do better on a test of those words in room 27 than in room 52.

One way to interpret those findings is that we should teach in the place where students will be tested.

If the final exam, inevitably, is in the gym, I should teach my students in the gym. And they should study in the gym. This approach ensures that they’ll associate their new knowledge with the place they have to demonstrate that knowledge.

In this theory, students should learn and study in the place they’ll ultimately be tested.

Priority Fix #1

This interpretation of Smith’s work makes sense if — and only if — the goal of learning is to do well on tests.

Of course, that’s not my goal. I don’t want my students to think carefully about literature for the test; I want them to think carefully about literature for life.

I want them to have excellent writing skills now, and whenever in the future they need to write effectively and clearly.

We might reasonably worry that a strong association between the room and the content would limit transfer. That is: if I connect the material I’ve learned so strongly with room 27, or the gym, I might struggle to remember or use it anywhere else.

Smith worried about that too. And, sure enough, when he tested that hypothesis, his research supported it.

In other words, he found that students who study material in different locations can use it more flexibly elsewhere. Students who study material in only one location can’t transfer their learning so easily. (By the way: Smith’s research has been replicated. You can read about this in Benedict Carey’s How We Learn. Check out chapter 3.)

This finding leads to a wholly different piece of advice. Don’t do what my teachers told me to do when I was a student. Instead, study material in as many different places as reasonably possible. That breadth of study will spread learning associations as widely as possible, and benefit transfer.

That’s what I’ve been telling students for the last several years.

Voila. Generations of teaching advice overturned by research!

Priority Fix #2

Frequent readers have heard me say: “Researchers work by isolating variables. Schools work by combining variables.”

The longer I do this work, the more I think that this “where to study” advice makes sense only if I focus exclusively on that one variable.

If I start adding in other variables, well, maybe not so much.

True enough, research shows that I’ll remember a topic better if I study it in different places … as long as all other variables are held constant. But, in life, other variables aren’t constant.

Specifically, some study locations are noisier than others. Starbucks is louder than the library: it just is. And, some locations are visually busier than others.

And, as you would expect, noise — such as music — distracts from learning. So, too, do visually busy environments.

So, a more honest set of guidelines for students goes like this:

You should review material in different places. But, you want each of those places to be quiet. And, you don’t want them to have much by way of visual distraction.

You know what that sounds like to me? The library.

I suppose it’s possible for students to come up with several different study locations that are equally quiet and visually bland. Speaking as a high school teacher, I think it’s unlikely they’ll actually do that.

So, unless they’ve got the bandwidth to manage all those demands even before they sit down to study, then I think the traditional advice (“library!”) is as good as anything.

Final Thoughts

People occasionally ask me where I am in the “traditional vs. progressive” education debate.

The honest answer is: I’m indifferent to it. I (try to) focus on practical interpretations of pertinent psychology and neuroscience research.

If that research leads to a seemingly innovative suggestion (“study in many locations!”), that’s fine. If it leads to a traditional position (“library”), that’s equally fine.

I think that, for the most part, having teams in education (prog vs. trad) doesn’t help. If we measure results as best we can, and think humbly and open-mindedly about the teaching implications, we’ll serve our students best.

Today’s Humble Pie: 206 Bones
Andrew Watson

Back in early November, I wrote about a study seeming to contrast direct instruction with more constructivist approaches.

I argued that those labels simply didn’t apply to the actual teaching methodologies measured in the research.

So, the “inquiry and problem-based pedagogy” [IPP] used all sorts of direct instruction. Here’s the authors’ summary of that method; I’ve put some words in bold:

“When done well, IPP includes elements of explicit instruction and scaffolding.

Teachers facilitate learning by guiding students through a series of steps and explicitly relating learning to students’ prior knowledge and experiences.

Teachers guide learners through complex tasks with explicit instructions that are relevant to the problems at hand.

They provide structure and scaffolding that help students not only carry out specific activities, but also comprehend why they are doing those activities and how they are related to the set of core concepts they are exploring.”

So, as you can see, “direct instruction” techniques are built into this method.

And, the method described as “traditional” seems to me an obvious straw man. Again, quoting from the research:

“Students copy facts about bone tissues and the names of 206 bones of the human skeleton that teachers have written on the blackboard into notebooks.”

I mean, seriously, who does that? Copies the names of 206 bones? Except for Charles Dickens’s Gradgrind — “facts, facts, facts!” — who would do such a thing?

Slice of Pie

I was discussing this study with a friend recently, and it turns out: her college professor would do such a thing. Right here in Massachusetts — home of the very first Learning and the Brain conference!  — her vertebrate anatomy professor put up slides of skeletons and labeled all the bones.

Slide after slide after slide. After slide. (Apparently he interspersed them with funny stories, just to keep the students awake. In my friend’s telling: his students remembered the stories, but not the anatomy.)

Except for the funny stories, Gradgrind would be proud indeed.

In any case, it’s clear that this “traditional” method is NOT a straw man, and at least one professor seems to think it a good idea.

So, to be clear: I do think asking students to memorize some core facts is not only plausible but beneficial. Without essential information in long-term memory, working memory will be overloaded by too much external information.

But: I can’t think of any research-based argument for an entire class — much less an entire course! — devoted to listing bones. That’s not direct instruction. That’s purgatory.

Two Core Points

Point from November’s post: as I wrote back in November, we can’t use this research to champion a pure constructivist approach to learning, because IPP includes lots o’ direct instruction.

Point from today’s post: “direct instruction” does not mean “presenting unbroken lists of facts, and then calling for them to be repeated.” Even if that really happens [shudder], that’s a profound misunderstanding of research and terminology.

“Direct instruction” does mean introducing enough factual or conceptual information to allow students to work thoughtfully — and increasingly independently — on a well-scaffolded series of problems.

Of course, this definition can be expanded and formalized. But: whatever you call “copy the names of 206 bones,” please don’t call it direct instruction.

Is it Better to be a “Natural” or a “Striver”?
Andrew Watson

Consider top performers in a given field: inventors, artists, athletes, academics, and so forth.

Presumably, their elite performance results from some mysterious combination of innate ability and effortful practice.

But which of those two variables matters more, the ability or the practice?

And — here’s a super interesting question — does my answer to that explicit question line up with the implicit value judgments that I make in real life?

In other words: I might say I prefer ability (or practice), but end up valuing the practice (or ability).

How might we measure such a troubling possibility?

Expert Evaluators

Two researchers — Chia-Jung Tsay and Mahzarin Banaji — developed a clever strategy to answer this question.

Tsay and Banaji gave professional musicians brief bios of two pianists. One bio emphasized all the hard work that the pianist had put into her growth as a musician. (In the researchers’ language, she was a “striver.”)

The other bio emphasized the innate ability that the pianist had. (She was a “natural.”)

The expert musicians then heard brief excerpts of recordings of these two musicians. They rated the performances on various scales, including their “musical achievement,” and whether or not they would like to hear the performance again.

Finally, they answered questions asking them directly whether they valued “effortful training” or “natural talent.”

What did the researchers learn from all these questions and evaluations?

The Envelope, Please

Tsay and Banaji’s research paradigm includes a surprise: the two brief musical excerpts came from the same pianist playing the same piece. Heck, they were from the same recording.

In other words: they were of identical musical achievement. And, we would predict that the expert evaluators would be equally eager to hear these two performances again — because they were the same performance.

When asked explicitly, the evaluators said they valued practice more than talent. (The d value here is 0.57, which is noteworthy.) So, presumably, given this set of circumstances, they might prefer the performance by the striver.

But, nope.

They preferred the natural. (The d value here is 0.79. That’s really big.)

So, even though the performances were equally accomplished, and the evaluators said they valued effort, their evaluations suggest that they actually valued talent.
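For the curious: a d value is the difference between two group means, divided by their pooled standard deviation. Here’s a minimal sketch of that computation; the ratings below are invented, not Tsay and Banaji’s data.

```python
import statistics

def cohens_d(group1, group2):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

natural_ratings = [7.1, 6.8, 7.5, 6.9, 7.3]   # invented
striver_ratings = [6.2, 6.5, 6.0, 6.4, 6.1]   # invented
print(f"d = {cohens_d(natural_ratings, striver_ratings):.2f}")
```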

Teaching Implications

First: we shouldn’t panic. This is one study looking at a specific evaluation of a specific kind of expert performance. Yes: Tsay and Banaji did all the responsible things to test their hypothesis in different ways — I haven’t summarized two related experiments they did.

But: before we extrapolate too zealously, we should be curious about other research into this question.

Second: Specifically, I wonder how much this preference for “naturals” over “strivers” reflects a cultural influence. This research was done in an American cultural context. Are Americans unusually keen on talent over effort? What do we find when we look within other cultural norms?

Third: Even with these caveats, I myself will be even more skeptical about my ability to judge between talent and effort objectively. I’m sure that, if you ask me, I’ll tell you I value the effort. But, this research suggests I’ll make decisions based on my appreciation of your talent.

To take a provocative example: when I talk with people who manage “Gifted and Talented” programs, I often hear they value hard work as much as “gifts and talents.” In the future, I will encourage people with those (laudable) values to look under the hood.

Do they have systems in place to measure hard work? Do those measurements, in fact, influence program decisions? Do they — more specifically — benefit people who truly work harder?

In sum: if we in fact value striving, then we should be sure we reward striving — even though it might not feel natural to do so.

“How We Learn”: Wise Teaching Guidance from a Really Brainy Guy
Andrew Watson

Imagine that you ask a neuro-expert: “What’s the most important brain information for teachers to know?”

The answer you get will depend on the expertise of the person you ask.

If you ask Stanislas Dehaene, well, you’ll get LOTS of answers — because he has so many areas of brain expertise.

He is, for example, a professor of experimental cognitive psychology at the Collège de France, and Director of the NeuroSpin Center, where they’re building the largest MRI gizmo in the world. (Yup, you read that right. IN THE WORLD.)

He has in fact written several books on neuroscience: neuroscience and reading, neuroscience and math, even neuroscience and human consciousness.

He’s also President of a newly established council to ensure that teacher education in all of France has scientific backing: the Scientific Council for Education. (If the United States had such a committee, we could expunge Learning Styles myths from teacher training overnight.)

If that’s not enough, Dehaene is interested in artificial intelligence. And statistics. And evolution.

So, when he writes a book called How We Learn: Why Brains Learn Better than Any Machine…for Now, you know you’re going to get all sorts of wise advice.

Practical Teaching Advice

Dehaene wants teachers to think about “four pillars” central to the learning process.

Pillar 1: Attention

Pillar 2: Active engagement

Pillar 3: Error feedback

Pillar 4: Consolidation

As you can see, this blueprint offers practical and flexible guidance for our work. If we know how to help students pay attention (#1), how to help them engage substantively with the ideas under discussion (#2), how to offer the right kind of feedback at the right time (#3), and how to shape practice that fosters consolidation (#4), we’ll have masterful classrooms indeed.

Learning, of course, begins with Attention: we can’t learn about things we don’t pay attention to. Following Michael Posner’s framework, Dehaene sees attention not as one cognitive process, but as a combination of three distinct cognitive processes.

Helpfully, he simplifies these processes into three intuitive steps. Students have to know:

when to pay attention

what to pay attention to, and

how to pay attention.

Once teachers start thinking about attention this way, we can see all sorts of new possibilities for our craft. Happily, he has suggestions.

Like other writers, Dehaene wants teachers to focus on active engagement (pillar #2). More than other writers, he emphasizes that “active” doesn’t necessarily mean moving. In other words, active engagement requires not physical engagement but cognitive engagement.

This misunderstanding has led to many needlessly chaotic classroom strategies, all in the name of “active learning.” So, Dehaene’s emphasis here is particularly helpful and important.

What’s the best way to create cognitive (not physical) engagement?

“There is no single miraculous method, but rather a whole range of approaches that force students to think for themselves, such as: practical activities, discussions in which everyone takes part, small group work, or teachers who interrupt their class to ask a difficult question.”

Error Feedback (pillar #3) and Consolidation (#4) both get equally measured and helpful chapters. As with the first two, Dehaene works to dispel myths that have muddled our approaches to teaching, and to offer practical suggestions to guide our classroom practice.

Underneath the “Four Pillars”

These four groups of suggestions all rest on a sophisticated understanding of what used to be called the “nature/nurture” debate.

Dehaene digs deeply into both sides of the question to help teachers understand both the brain’s adaptability (“nurture”) and the limits of that adaptability (“nature”).

To take but one example: research with babies makes it quite clear that brains are not “blank slates.” We come with pre-wired modules for processing language, numbers, faces, and all sorts of other things.

One example in particular surprised me: probability. Imagine that you put ten red marbles and ten green marbles in a bag. As you start drawing marbles back out of that bag, a 6-month-old will be surprised — and increasingly surprised — if you draw out green marble after green marble after green marble.

That is: the baby understands probability. They know it’s increasingly likely you’ll draw a red marble, and increasingly surprising that you don’t. Don’t believe me? Check out chapter 3: “Babies’ Invisible Knowledge.”
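To see why the baby’s growing surprise is mathematically warranted, run the arithmetic: with ten green and ten red marbles drawn without replacement, an all-green streak becomes sharply less likely with every draw. A quick sketch:

```python
from fractions import Fraction

green, red = 10, 10
p_streak = Fraction(1)
for draw in range(1, 6):
    greens_left = green - (draw - 1)       # greens still in the bag
    total_left = green + red - (draw - 1)  # marbles still in the bag
    p_streak *= Fraction(greens_left, total_left)
    print(f"P({draw} greens in a row) = {float(p_streak):.3f}")
```

By the third green marble in a row, the streak has roughly a 10 percent chance; by the fifth, under 2 percent.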

Of course, Dehaene has fascinating stories to tell about the brain’s plasticity as well. He describes several experiments — unknown to me — where traumatized rats were reconditioned to prefer the room where the traumatizing shock initially took place.

He also tells the amazing story of “neuronal recycling.” That is: the neural real estate we repurpose when we learn to read initially housed other (evolutionarily essential) cognitive functions.

Human Brains and Machine Learning

Dehaene opens his book by contemplating definitions of learning — and by contrasting humans and machines in their ability to do so.

By one set of measures, computers have us beat.

For instance, one computer was programmed with the rules of the game Go, and then trained to play against itself. In three hours, it became better at the game than the human Go champion. And, it got better from there.

However, Dehaene still thinks humans are the better learners. Unlike humans, machines can’t generalize their learning. In other words: that Go computer can’t play any other games. In fact, if you changed the size of the Go board even slightly, it would be utterly stumped.

And, unlike humans, it can’t explain its learning to anyone else.

And, humans need relatively little data to start learning. Machines do better than us when they can crank millions of calculations. But, when they calculate as slowly as we do, they don’t learn nearly as much as we do.

As his subtitle reassures us, brains learn better than any machine. (And, based on my conversation with him, it’s clear that “…for now” means “for the long foreseeable future.”)

Final Thoughts

At this point, you see what I mean when I wrote that Dehaene has an impressive list of brain interests, and therefore offers an impressive catalog of brain guidance.

You might, however, wonder if this much technical information ends up being a little dry.

The answer is: absolutely not.

Dehaene’s fascination with all things brain is indeed palpable in this book. And, his library of amazing studies and compelling anecdotes keeps the book fresh and easy-to-read. I simply lost track of the number of times I wrote “WOW” in the margin.

This has been a great year for brain books. Whether you’re new to the field, or looking to deepen your understanding, I recommend How We Learn enthusiastically.

https://www.youtube.com/watch?v=23KWKoD8xW8