
About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

“Soft” vs. “Hard” Skills: Which Create a Stronger Foundation?
Andrew Watson

As teachers, should we focus on our students’ understanding of course content, or on our students’ development of foundational academic skills?

Do they benefit more from learning history (or chemistry or spelling or flute), or from developing the self-discipline (grit, focus, executive skills) to get the work — any work — done?

I’ve found a recent study that explores this question. It stands out for the rigor of its methodology, and the tough-mindedness of its conclusions.

Here’s the setup:

Daunting Problems; Clever Solutions

Researchers struggle to answer these questions because student choice can complicate the data.

When college students choose courses and professors, when they opt out of one section and opt into another, we can’t tell if the professor’s quality or the students’ preferences led to particular research results.

How to solve this problem? We find a school where students get no choices.

They must take the same courses.

They can’t change sections.

Students start the year randomly distributed, and they stay randomly distributed.

Where shall we find such a school? Here’s a possibility: the United States Naval Academy. All students take the same courses. They can’t switch. They can’t drop. Sir, yes sir!

Even better: several USNA courses are sequential. We can ask this question: how does the student’s performance in the first semester affect his/her performance in the second semester?

Do some 1st semester teachers prepare their students especially well — or especially badly — for the 2nd semester?

We can even fold in extra data. The website Rate My Professors lets students grade professors on many qualities — including the difficulty of the course, and their overall rating. Perhaps those data can inform our understanding of teacher effectiveness.

Provocative Conclusions

A research team has followed this logic and recently published their conclusions.

In their findings:

Easygoing teachers — who don’t demand lots of work, who don’t communicate high standards, who routinely give lots of high grades — harm their students. 

How so? Their students — quite consistently — do badly in subsequent courses in the field.

In other words: if I have an easygoing teacher for Calculus I, I’m likely to do badly in Calculus II — compared to my identical twin brother who had a different teacher.

On the other hand, tough-minded teachers — who insist on deadlines, who require extra work, who remain stingy with high grades — benefit their students.

How so? These students — like my identical twin — do better in subsequent courses than I do.

This research team calls such executive function topics — getting work done, even if it’s dull; prioritizing; metacognition — “soft skills.” In their analysis, professors who are tough-minded about these soft skills ultimately help their students learn more.

More Provocative Still

This logic certainly makes sense; we’re not shocked that students learn more when we insist that they work hard, focus, and set high standards.

Of course, professors who DON’T insist that their students work hard get lots of student compliments (on average). We teachers know that — all things being equal — students are happier when they get less work. Their RateMyProfessor scores average higher than those of their tough-minded peers.

In turn, colleges notice student popularity ratings. School leaders feel good when students praise particular teachers. They give them awards and promotions and citations. Why wouldn’t they? After all, those highly-praised professors give the college a good reputation.

In other words: according to this research team, colleges are tempted to honor and promote teachers who get high student ratings — even though those very professors harm their students’ long-term learning, and thereby diminish the quality of the academic program.

That’s a scathing claim indeed.

Caveats

Like everything I write about here, this finding comes with caveats.

First: although these students were randomly assigned once they got to the Naval Academy, admission to that Academy is very challenging indeed. (Google tells me that 8.3% of their applicants get in.)

So, a tough-minded approach might benefit this extremely narrow part of the population — who, let’s be honest, signed up for a rigorous academic program, rigorously delivered.

However, that finding doesn’t necessarily mean that this approach works for younger students, or a broader swath of the population, or students who didn’t apply for such demanding treatment.

It might. But, this study by itself shouldn’t persuade us to change our work dramatically. (Unless we work in a similar academic setting.)

Second: this report’s authors define “soft” and “hard” in a very specific way (see their page 3).

Your school might use these terms quite differently, so their claims might not apply directly to your terminology.

Equally important, the strategies they use to distinguish between “tough-minded” and “easy-going” professors require lots of intricate parsing.

I myself don’t have the stats skills to interrogate their process; I can imagine a more expert reader asking sharp questions about their methods.

Conclusion

In many parts of life, short-term challenges lead to long-term benefits.

We might not like exercise, but it helps us as we get older.

We might like bacon and ice cream, but leeks and salmon keep us fitter.

This research report suggests that we help our students in the long run by maintaining tough-minded high standards right now.

Doing so might not make us popular. Our administrative leaders don’t always recognize our wisdom. But if our students learn more, their strong “soft-skills” foundation really does help them thrive.

I’m Not Excited, YOU’RE Excited (OK: I’m Excited)
Andrew Watson

I’ve been going to Learning and the Brain conferences since 2008, so it takes a lot for a roster of speakers to WOW me. But this week I’m officially WOWed. Next weekend’s conference looks remarkable.

In some cases, I’m happy to see family favorites return to the LatB microphone:

Dan Willingham wrote the book that launched a thousand MBE careers. If you haven’t read Why Don’t Students Like School?, do so. If you HAVE read it, good news: the second edition is coming out soon.

Barbara Oakley has created some of the most popular online courses EVAH. Her topic: “learning how to learn.” Every time I hear her, I’m reminded why so many people rely on her wisdom and experience.

John Almarode and Doug Fisher both manage to apply the learning sciences to their own daily work in inspiring and unexpected ways. They make you think that good teaching really is possible: a reminder we all need these days.

I could go on. And on.

New Voices

However excited I am to hear these speakers again, I might be even more verklempt at the new speakers — or, more precisely, speakers new to Learning and the Brain.

Paul Kirschner is a real giant in this field. He constantly reminds us that teaching ideas don’t just need to sound good; they need to benefit students. His article Why Minimal Guidance Instruction Does Not Work [link], written with John Sweller and Richard Clark, remains a frequently cited manifesto for teaching methods that really help students learn.

Daisy Christodoulou has written several field-defining books, beginning with Seven Myths about Education. (I once described this book as having the highest mic-drop/page ratio I know of.) You can see our review of her latest book — Teachers vs. Tech: The Case for an Ed Tech Revolution — here.

Kenneth Wesson brings a neuroscience perspective to fields that have traditionally been the focus of psychology: for instance, reading instruction, or the importance of play for learning. I’m deeply curious to hear how his work on the brain can inform our understanding of the mind.

Dylan Wiliam (yes, that’s the correct spelling) helped launch the idea of assessment for learning, and he hasn’t stopped there. His reminder that — in the world of educational innovation — “everything works somewhere, but nothing works everywhere” keeps us humble and grounded.

Again, I could list many more.

In short, if you haven’t signed up yet, I truly recommend you do so.

Does MOVEMENT Help LEARNING?
Andrew Watson

In the exaggerated stereotype of an obsessively traditional classroom, students sit perfectly silent and perfectly still. They listen, and watch, and do nothing else.

Few classrooms truly function that way.

But, how far should we go in the other direction? Can teachers — and should teachers — encourage noise and movement to help students learn?

In recent years, the field of embodied cognition has explored the ways that we think with our bodies.

That is: movement itself might help students learn.

Of course, this general observation needs to be explored and understood in very specific ways. Otherwise, we might get carried away. (About a year ago, for instance, one teacher inspired a Twitter explosion by having his students read while pedaling exercycles. I’ve spent some time looking at research on this topic, and concluded … we just don’t know if this strategy will help or not.)

So, let’s get specific.

Moving Triangles

An Australian research team worked with 60 ten- and eleven-year olds learning about triangles. (These students studied in the intermediate math track; they attended a private high school, with higher-than-usual SES. These “boundary conditions” might matter.)

Students learned about isosceles triangles, and the relationships between side-lengths and angles, and so forth.

20 of the students studied in a “traditional way”: reading from the book.

20 studied by watching a teacher use software to manipulate angles and lengths of sides.

And, 20 studied by using that software themselves. That is: they moved their own hands.

Researchers wanted to know:

Did these groups differ when tested on similar (nearly identical) triangle problems?

Did they differ when tested on somewhat different problems?

And, did they rate their mental effort differently?

In other words: did seeing movement help students learn better? Did performing the movement themselves help?

The Envelope, Please

The software clearly helped. The actual movement sort-of helped.

Students who interacted with the software themselves, and those who watched the teachers do so, did better on all the triangle problems. (Compared — that is — to students who learned the traditional way.)

And, they said it took less mental effort to answer the questions.

HOWEVER:

Students who used the software themselves did no better than the students who watched the teachers use it. (Well: they did better on the nearly identical problems, but not the newer problems that we care more about.)

In other words: movement helped these students learn this material — but it didn’t really matter if they moved themselves, or if they watched someone else move.

The Bigger Picture

Honestly: research into embodied cognition could someday prove to make a big difference in schools.

Once we’ve done enough of these studies — it might be dozens, it might be hundreds — we’ll have a clearer picture explaining which movements help which students learn what material.

For the time being, we should watch this space. And — fingers crossed — within the next 5 years we’ll have an Embodied Cognition conference at Learning and the Brain.

Until then: be wise and cautious, and use your instincts. Yes, sometimes movement might help. But don’t get carried away by dramatic promises. We need more facts before we draw strong conclusions.


Bokosmaty, S., Mavilidi, M. F., & Paas, F. (2017). Making versus observing manipulations of geometric properties of triangles to learn geometry using dynamic geometry software. Computers & Education, 113, 313-326.

“Students Switch Their Screens Every 19 Seconds.” Sort Of…
Andrew Watson

I recently read an arresting claim: when students have web browsers open, they switch their screens — on average — every 19 seconds.

Such data boggle the mind with horror.

As teachers, we want our students to devote sustained thought to complex topics. Nineteen seconds just isn’t going to cut it.

As an English teacher, I think nineteen minutes won’t reveal the full joys of a Shakespearean sonnet. In nineteen seconds, my students won’t have time to find the main verb. The Horror.

Although it took a moment’s sleuthing to track down this claim (see Coda below), this statement does in fact have research support. Back in 2014, Leo Yeykelis and his intrepid team undertook quite a complex study — including screen monitoring and measures of skin conductance! — to arrive at this conclusion.

Clearly, it’s time to panic.

Don’t Panic Just Yet

Whenever we hear shocking “research based” conclusions  — ESPECIALLY conclusions that confirm our prior beliefs — we should look hard at that underlying research.

We need to know exactly what the researchers did before we jump to big conclusions about their work.

If you click the link above, you can read Yeykelis’s study. When I did, two points jumped out at me very quickly:

First: this study draws its conclusions based on research into 10 college undergraduates. TEN.

Now, researchers have very good reasons to run studies with only ten people in them. Research is pricey and time-consuming, so it makes sense to do a small-ish study before ramping up to study hundreds or thousands of people.

In other words, they didn’t do anything wrong.

However, for the same reason you wouldn’t take a medicine that had been tested on only ten people, you shouldn’t make dramatic changes to your classroom based on that research.

Second: because this study has so few participants, we want to know what other studies have found on this topic.

Regular readers know: my go-to sites to answer that question are Scite.ai and ConnectedPapers.com.

When I plugged Yeykelis’s study into those search engines, I found some support — but not lots-n-lots. In other words: this field simply hasn’t been studied very much.

For instance: the ConnectedPapers grid doesn’t highlight any related research after 2017. (That finding doesn’t mean that no research has been done after that date, but that it hasn’t reached prominence yet.)

Don’t Panic; Do Respond

For all these reasons, I don’t think we should get too focused on “19 seconds.”

At the same time, the other studies highlighted by Scite.ai and ConnectedPapers.com do point consistently in the same direction: screen switching really is a thing.

A bad thing.

We don’t know exactly how prevalent the problem is, but it’s not isolated; it’s not trivial.

For that reason, I think we should consider our own teacherly responsibilities here.

ESPECIALLY during online classes, we can remind students to turn off other technologies. “Remember our class rules: please turn your cell phones off. And, close all the other tabs in your browser except this one, and your homework.”

We can explicitly teach students to purge tech from their study areas and habits. “When you start your work, have your book open, your pencil on your paper, and your computer off.”

We can include those reminders in homework assignments. “This paragraph is due Wednesday before class. Remember: have your web browser closed before you start writing.”

We can look in the mirror. How often do we switch screens? What prompts us to do so? For instance, as I write, I’ve got 7 browser tabs open. I am — REALLY — using them all to write this post. Based on this experience, what realistic guidance can I give my students about multi-screening?

Of course, you’ll translate such suggestions to fit your own teaching circumstances. In every case, we’re making reasonable and measured changes.

The precise number “19” doesn’t really matter. The big picture — learning requires sustained attention, so we should foster it — really does.

Coda

I wrote last week, improbably, about the benefits of Twitter. Yes, it can be mean-spirited and dreadful, but it can also provide helpful insight and useful resources.

The day after I wrote that blog post, my own Twitter experience highlighted that lesson.

I first read about the “19 Seconds” research in Doug Lemov’s excellent new book Teaching in the Online Classroom. (Lemov’s Twitter handle: @Doug_Lemov).

Curious, I tweeted at him asking for the citation.

Before Lemov could answer, another scholar — Christian Bokhove (@cbokhove) — BOTH provided me with Yeykelis’s 2014 study AND linked to a follow-up study with even more dire results.

And then — because Twitter can be great — Lemov himself followed up to be sure I’d gotten the research I needed.

I don’t mean to downplay its flaws. But I’ve found Twitter mighty useful in learning about teaching, and about research; and in finding a kind of community.

It’s ironic, but true: I’m encouraging you to OPEN another web browser and check out Twitter.

EduTwitter Can Be Great. No, Really…
Andrew Watson

Twitter has a terrible reputation, and EduTwitter isn’t an exception.

The misinformation.

The name-calling.

The “team” rivalries: all heat and little light.

Did I mention the misinformation?

You might wonder: why bother? Honestly, I wouldn’t blame you if you didn’t. I myself was hesitant to sign up.

Despite all these flaws — none of which is exaggerated, by the way — I do find lots of benefits. This experience recently got my attention.

The Setup

On my personal Twitter account, I posted a link to research that had me puzzled. According to a small study, the motor cortex does not “remap” to represent prosthetic limbs.

Given all the research we have into neuroplasticity, I was genuinely shocked by that finding.

In fact, I’m currently reading Barbara Tversky’s book Mind in Motion, which talks about brains remapping in response to TOOL USE.

If brains remap because of tools, but not because of prosthetics — which are, from one perspective, tools that have been attached to the body — well: that’s very strange.

But, people on Twitter know things I don’t. I thought: maybe someone knows more about this research pool than I…

Rising Action

Soon after I posted that link, my Twitter friend Rob McEntarffer (@rmcenta) retweeted it, sharing my curiosity. (By the way: “Twitter friends” are really a thing. I know LOTS of people — too many to name here — whom I have come to respect and like entirely by “meeting” them on Twitter. I would NOT have predicted that.)

One of his Twitter followers — someone I have never met and don’t know — retweeted Rob’s retweet, with a question to her professor.

So, we’re now at 3 or 4 degrees of separation. What happens next?

The Payoff

Turns out: this professor — whom I also don’t know — has lots of expertise in this research field. He briskly explained why the study couldn’t draw strong conclusions. (If I understand him correctly, its measurement methodology doesn’t allow it to make those claims.)

In other words: within a few hours, I went from…

being ASTONISHED because a research finding dramatically contradicted my (fairly basic) understanding of neural remapping,

to…

having a SUCCINCT AND CLEAR EXPLANATION why that research shouldn’t concern me,

and…

feeling RELIEVED that my understanding of neuroplasticity wasn’t so wrongheaded.

And, what made those changes possible — or, at least, a whole lot easier? Twitter.

Caveats

To be clear, Twitter really does include (and produce) foul, cruel nonsense. If you look for that, you’ll find it. (Tom Lehrer says: “Life is like a sewer. What you get out of it depends [at least in part] on what you put into it.”)

At the same time, I routinely come across generous teachers & researchers. They freely share perspectives and resources and contacts and information.

If you can stand the background noise, you might give it a look.

One place to start: @LearningAndTheB. Perhaps I’ll see you there.

How Can We Help Students Study Better?
Andrew Watson

This story might sound familiar:

You attend a Learning and the Brain conference (like, say, our upcoming conference about Teaching During a Pandemic) and come away with FANTASTIC ideas.

You go back to your classrooms — in person, online, asynchronous — and tell your students all about the amazing research you saw. (Perhaps you discuss the importance of retrieval practice, which helps much more than old-fashioned review.)

Your students sound thrilled!

And yet, the very next day they ignore your retrieval practice suggestion, and go right back to rereading their notes. Ugh.

SO FRUSTRATING!

What can we do to help our students study correctly — which is to say: how can we help them learn more, and more effectively?

In a recent article, Mark McDaniel and Gilles Einstein offer a 4-step framework to help change students’ study behavior.

Called KBCP — which stands for “Knowledge, Belief, Commitment, and Planning” — this framework could make a real difference for long-term learning.

The Short Version

In brief:

Knowledge: we should tell students about the study strategy or technique that research has shown to be effective: say, spacing, or generative learning strategies.

Belief: students then undertake an exercise that demonstrates the benefits of this strategy.

Commitment: students get onboard with the idea. They don’t just know and believe; they buy in.

Planning: next, they make a specific and measurable plan to enact their commitment.

As McDaniel and Einstein’s article shows, each of these steps has good research behind it. Their contribution to this field: they bring them all together in a coherent system.

McDaniel and Einstein emphasize that teachers shouldn’t rely on just one or two of these steps. They all work together to help students learn more:

Our central premise is that all four components must and can be explicitly targeted in a training program to maximize self-guided transfer of effective learning strategies.

The problem with the story that began this blog post, in other words, is that it targets only the first of these four steps. To help our students learn, we need to do more and better.

One Example

This article makes for such compelling reading because the authors both explain the research behind each step and offer specific classroom examples to show what they mean.

For instance: the “belief” step encourages teachers to design an exercise that helps students really believe that the technique will work. What would such an exercise look like?

If, for instance, we want to encourage students to “generate explanations” as a memory strategy, what exercise would persuade them that it works?

M&E describe a strategy they’ve often used.

First: have students learn several simple sentences. For instance: “The brave man ran into the house.”

Second: for half of those sentences, encourage students to (silently) generate an explanation: perhaps, “to rescue the kitten from the fire.”

Third: when we test students on those sentences later, they will (almost certainly) remember the second group better than the first. That is: they’ll have reason to believe the strategy works because they experienced it themselves.

McDaniel and Einstein include such examples for each of their four steps.

And Beyond

This article gets my attention for another reason as well. The authors write:

There are many potentially effective ways to actualize the key components of the KBCP framework, and we offer the following as one possible example of a training program.

Frequent readers recognize my mantra here: “don’t just do this thing; instead, think this way.”

In other words, McDaniel and Einstein don’t offer readers a to-do list — a set of instructions to follow. Instead, they provide ideas for teachers to consider, and then to adapt to our own specific teaching context.

KBCP will look different in a 2nd grade classroom than a high-school classroom; different in a gym class than a tuba lesson; different in a Brazilian cultural context than a Finnish one.

Research can offer us broad guidance on the directions to go; it can’t tell us exactly what to do with our own students.

The KBCP framework creates another intriguing possibility.

I recently saw an article saying — basically — that “teaching study skills doesn’t work.”

Its provocative abstract begins:

This paper argues that the widespread approach to enhancing student learning through separate study skills courses is ineffective, and that the term ‘study skills’ itself has misleading implications, which are counterproductive to learning.

The main argument is that learning how to study effectively at university cannot be separated from subject content and the process of learning.

Having seen McDaniel and Einstein’s article, I wonder: perhaps these courses don’t work not because they can’t work, but because they’re currently being taught incorrectly.

Perhaps if study skills classes followed this KBCP framework, they would in fact accomplish their mission.

M&E acknowledge that their framework hasn’t been tested together as a coherent strategy. To me at least, it sounds more promising than other approaches I’ve heard.

Does Online Learning Work? Framing the Debate to Come…
Andrew Watson

With news that several very effective vaccines will be increasingly available over the upcoming months, we teachers can now start thinking about “a return to normal”: that is, in-person teaching as we (mostly) practiced it before February of 2020.

One question will inevitably be debated: did online learning work?

I suspect that the “debate” will go something like this. One voice will stake an emphatic opinion: ONLINE CLASSES WERE AN UNEXPECTED TRIUMPH! Some data will be offered up, perhaps accompanied by a few stories.

An equally emphatic voice will respond: ONLINE CLASSES FAILED STUDENTS, TEACHERS, AND PARENTS! More data. More stories.

This heated exchange will reverberate, perhaps improved by all of Twitter’s nuance and common sense.

A Better Way?

Rather than launch and participate in a BATTLE OF EXTREMES, I hope we can look for a more level-headed approach. As is so often the case when research meets teaching, a key question should be boundary conditions.

Whenever we look at a research finding (e.g.: drawing helps students learn!), we should ask: under what precise conditions is this true?

Does drawing help older students and younger ones? In math and in phonics? Autistic students, dyslexic students, aphantasic students, and neurotypical students?

We’re always looking for boundaries, because every research finding has boundaries. As Dylan Wiliam (who will be speaking at our February Conference) famously says: “When it comes to educational interventions, everything works somewhere. Nothing works everywhere.”

If we ask about boundary conditions for the strengths and weaknesses of online learning, we can have a much more productive discussion.

Places to Start

Age: I suspect we’ll find that — on average — older students did better with online classes than younger ones. My friends who teach college/high school don’t love online teaching, but they don’t seem quite as overwhelmed/defeated by those who teach younger grades.

Additional Technology: Is it better to have a simple Zoom-like platform with occasional breakout sessions? Does it help to use additional, elaborate programs to supplement online learning?

Discipline: Perhaps online teaching worked better with one kind of class (science?) than another (physical education?).

Personality: Although most high school students I know emphatically prefer in-person classes, I do know two who greatly prefer the online version. Both really struggle negotiating adolescent social networks; they’ve been frankly grateful to escape from those pressures and frustrations.

Teachers’ personalities could matter as well. Some of us comfortably roll with the punches. Some of us feel set in our ways.

Administration: Did some school leaders find more effective ways to manage transitions and support teachers and students? The question “does online learning work” might get different answers depending on the managerial skill supervising the whole process. (In my work, I find teachers appreciated decisiveness and clear communication above all else. Even when they didn’t like the decision itself, they liked knowing that a decision had been made.)

SES: No doubt the socio-economic status (SES) of school districts made a big difference. It’s hard to run online classes in schools and communities that don’t have money for technology, or infrastructure to support its use.

Pedagogy: Do some styles of teaching work better online? Or — a slightly different version of this question — do teachers and schools with experience “flipping the classroom” have greater success with an online model?

Teacher Experience: Perhaps well-seasoned teachers had more experience to draw on as they weathered the muddle? Or, perhaps younger teachers — comfortable with tech, not yet set in their ways — could handle all the transitions more freely?

Country/Culture: Do some countries or cultures manage this kind of unexpected social transition more effectively than others?

Two Final Points

First: We should, I think, expect complex and layered answers to our perfectly appropriate question.

In other words: online learning (a la Covid) probably worked well for these students studying this topic in this country using this technology. It was probably so-so for other students in other circumstances. No doubt it was quite terrible for still other students and disciplines and pedagogies.

Second: I myself have long been skeptical of the idea that “online learning is the future of education (and everything else)!”

And yet, I don’t think we can fairly judge the validity of that claim based on this last year’s experience.

After all: most teachers and schools and students didn’t get well-designed and deliberately-chosen online education. They got what-can-we-throw-together-with-grit-and-hope online education.

Of course that didn’t work as well as our old ways (for most students). Nothing worked well: restaurants struggled to adjust. The travel industry struggled. Retail struggled.

Yes: I think that — for almost everybody learning almost everything — in-person learning is likely to be more effective. But I myself won’t judge the whole question based on this year’s schooling.

We all benefit from forgiveness for our lapses and muddles during Covid times.

Let’s learn what we reasonably can about online education, and use that experience to improve in-person and remote learning in the future.

Seriously: What Motivates Teachers to Be Funny?
Andrew Watson

To start 2021 in the right spirit, let’s think about humor in the classroom.

It seems that, obviously, humor might be a good classroom strategy. When the lesson slows down, a joke or two might brighten the mood.

Once we begin studying this question the way researchers study things, well, it gets much more complicated. (I once heard the claim that “laughter improves learning 44%!” Unsurprisingly, so vague a statement doesn’t hold up to much scrutiny. For starters, the cited research says 28%, not 44%…)

We might study, for instance:

What kind of humor do teachers use?

Are there differences between K-12 teachers’ and college professors’ use of humor?

Are there gender differences in use of humor? (Believe it or not, there’s a controversy here.)

What motivates teachers to attempt humor?

I’ve recently found research trying to answer this question:

What motivation traits prompt college professors to try content-relevant humor? (“Content relevant” means they’re not just telling jokes; they’re being funny about the topic they’re discussing.)

What did the researchers find?

Kinds of Motivation

Psychologists often divide behavior up into plausible categories for analysis.

For instance, if you know Carol Dweck’s work on mindset, you know that some people feel motivated to learn more, while others feel motivated to demonstrate what they already know.

In this case, college professors might want to improve at professing, or they might want to demonstrate that they’re already top-notch professors.

Also, motivations can be either positive or negative.

In this case, a professor might want to demonstrate that they’re good (positive), or not reveal that they’re bad (negative).

Researchers have other motivational categories as well.

In this study, they wonder if professors use humor to improve relationships with students.

And, they wonder if a prof’s desire to avoid work influences their use of humor.

To start answering these questions, the researchers had more than 250 professors fill out surveys that give insight into their motivation. (Important: these data, in other words, come from self-report. Not everyone is persuaded by such data.)

They also asked students to rate — on a scale of 1 to 5 — their agreement with this statement: “Instructor enhances presentations with the use of humor.”

The Envelope, Please

So, what did they learn?

For instance: when did students endorse the statement that their professor “enhances presentations with the use of humor?”

Answer: when those professors themselves said they wanted to “build amicable and valued relationships with students.” That is: profs with relational goals use humor to build those relationships.

How about the reverse? When did students consistently reject that statement?

Answer: When their professors said they wanted to avoid looking bad. (If you’re keeping score, that’s a “negative performance” goal.)

In brief: professors who want to connect with students make content-related jokes. Those who fear looking incompetent remain humorless.

Three Interesting Points

First: the researchers here are scrupulous to distinguish between college professors and K-12 teachers. They don’t claim that these findings apply to earlier grades.

Second: This research team — contradicting others — finds that women use humor more often than men. (The controversy continues!)

Third: One word in particular jumps out at me: “enhances.” Students didn’t say that the professors “tried” to be funny, but that they “enhanced presentations with humor.”

That is: the students suggest that — for teachers who want to build relationships — humor really does make lectures better. The researchers don’t address that question directly, but — at least to me — that conclusion flows directly from this research.

Oh, what the heck. Let’s have another.

Fourth: In humor as in all things, personality matters. If you’re not a funny teacher, don’t feel that you have to start telling jokes to build relationships. You — almost certainly — have your own ways to do so. Use your own authentic strategies to connect with your students.

I can’t cite research, but I’m almost certain: your own honest self-presentation will be MUCH more effective at building relationships than forced humor.

The Best Teaching Advice We’ve Got
Andrew Watson

You want to improve your teaching with psychology research?

We’ve got good news, and bad news.

And more good news.

Good News: we have lots and LOTS of research. We can talk about attention, or working memory, or the spacing effect, or motivation, or stress…the list is long. And super helpful.

So much practical advice!

Bad News: actually, the bad news is the same as the good news. We’ve got SO MUCH good research that it’s honestly hard to keep track of it all.

I mean, seriously. Should you start by looking at attention research? Or stress research?

Should we think about the motivational effects of student-teacher relationships, or the perils of working memory overload, or the benefits of desirable difficulty?

Which is most important?

Honestly, I think our next priority is not so much finding out new truths about learning, but organizing all the information we already have.

More Good News

If you agree that we really need someone to sort all these suggestions into a coherent system, you’ll be delighted to read this article by Stephen Chew (Twitter handle: @SChewPsych) and William Cerbin (@BillCerbin).

Other scholars — for instance, Barak Rosenshine — have put together a coherent system based on learning principles. Chew and Cerbin, instead, organize their system around cognitive challenges.

That is:

If students feel anxiety about a topic or discipline, that emotion will interfere with their learning.

If students have prior misconceptions, those misconceptions will distort their understanding.

If classroom work or assignments go beyond working memory limits, students won’t learn effectively (or, at all).

When planning a course or a lesson or an assignment, teachers can think their way through these specific challenges. By contemplating each one, we can design our work to best facilitate learning.

Getting the Emphasis Right

If you’re thinking “this is such excellent news! It just can’t get any better!” — well — I’ve got some news: it gets better.

Chew and Cerbin write:

There is no single best teaching strategy for all students, topics, and situations. The proposed framework is not prescriptive … and can guide adaptation of teaching practice.

In other words, they’re not saying: here’s a list of things to do.

Instead, they are saying: here are several topics/problems to consider.

Teaching advice should not include “best practices.” (That’s a business concept.) It should include “best questions to ponder as we make decisions.” Chew and Cerbin make this point repeatedly.

Frequent readers know that I’ve been banging on for years with this mantra: “Don’t just do this thing; instead, think this way.”

We should think about our students’ working memory limitations. The strategies we use might differ for 1st graders and 8th graders.

We should think about the importance of transfer. A Montessori school and a KIPP school will (almost certainly) use differing strategies to reach that goal.

We should think about our students’ prior knowledge. The best way to measure that knowledge might be different for students with diagnosed learning differences.

Yes: we should consider these nine topics. But the ways we answer them must depend on our students, our schools, our curriculum, and ourselves.

For all these reasons, I recommend Chew and Cerbin’s article with great enthusiasm.

And, happily, you can meet Dr. Chew at our online conference in February! (In case you’re wondering: I was planning to write about this article before I knew he was joining the conference. A happy synchronicity.)

James Flynn Changed the Way We Think about Intelligence
Andrew Watson

In 1950, the average score on an IQ test was ~100. In 2020, the average score on an IQ test was ~100.

Nothing, it seems, had changed.

Those facts, however, disguise a surprising truth.

IQ tests are based on scaled scores. That is: the student’s raw test score is translated into an IQ score according to a formula. And — here’s the key part — that formula is readjusted every ten years.

So: the reason that average IQ scores haven’t changed is that the formula has changed to keep the average at 100. Meanwhile, raw scores on the underlying test have risen considerably over the history of the test.

If our grandparents’ raw scores were translated according to today’s formula, their average would be about 70. If our scores were translated according to our grandparents’ formula, the average would be about 130.
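The renorming arithmetic above can be sketched in a few lines. The standard IQ formula sets the mean at 100 with a standard deviation of 15; the raw-score means and SDs below are made-up illustrative numbers, not real test data, chosen so the cross-norm comparison lands near 70 and 130:

```python
def iq(raw, norm_mean, norm_sd):
    """Convert a raw test score to an IQ score using a given era's norms.

    IQ scores are scaled so each era's average raw score maps to 100,
    with a standard deviation of 15.
    """
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Hypothetical raw-score norms (illustrative only):
grandparents_norms = (50, 10)  # mean, SD circa 1950
todays_norms       = (70, 10)  # mean, SD circa 2020

# Each generation scores 100 by its own era's formula...
print(iq(50, *grandparents_norms))  # 100.0
print(iq(70, *todays_norms))        # 100.0

# ...but cross the norms and the Flynn effect appears:
print(iq(50, *todays_norms))        # 70.0  (grandparents scored on today's formula)
print(iq(70, *grandparents_norms))  # 130.0 (us scored on our grandparents' formula)
```

This is why average scores stay pinned at 100 even as underlying performance rises: the formula, not the test-takers, is what keeps the average constant.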

What’s going on here?

Are we really that much smarter than our grandparents?

https://www.ted.com/talks/james_flynn_why_our_iq_levels_are_higher_than_our_grandparents?language=en#t-524785

James Flynn, who first noted this astonishing increase, had a different way of thinking about this question.

He, instead, pointed to differences in schooling. For instance:

Children today, on average, get LOTS more schooling than did children when the IQ test was first developed.

Today’s schooling tends to focus more on abstract and hypothetical thinking than did school 100 years ago.

Flynn’s argument, of course, includes many more details and insights.

Even people who don’t care much for IQ as a way to measure intelligence admire the way Flynn has prompted us to think differently and deeply about the history of intelligence: how we measure it, how we develop it, how we understand it.

You can read Wikipedia’s article on the Flynn Effect here.

Today’s Sad News

James Flynn died last week in New Zealand, where he had been a professor for decades.

He’s being remembered not only as a giant in the field of intelligence research, but also as an unusual combination of intelligence and kindness.

Over on Twitter, Richard Haier — editor of the journal Intelligence — tweeted:

Jim Flynn became famous for showing a global secular rise in IQ scores but he was admired even more for engaging with critics as a gentleman. In a field raked with controversy and not without personal attacks, Jim sought out contrary opinions and engaged intellectually.

We have learned from Flynn’s insights into human cognition. We can also learn from his example of engaging with people we disagree with.