Prior Knowledge: Building the Right Floor [Updated]
Andrew Watson

When I first published this post back in 2019, the underlying research wasn’t publicly available; now it is!


Take a gander at this passage from Michael Jones’s recent biography of The Black Prince:

“In the fourteenth century England used a silver standard of currency. The unit of account was the pound sterling (£) which was equal to one and a half marks of silver. The pound was divided into twenty shillings (s), each of twelve pence (d). There was also, from 1344, a gold coinage based on the noble, which was conventionally worth 6s 8d, but was rarely used. It would, however, be significant in the calculation of the ransom of King John II and also in the introduction of gold coinage into Gascony and then the principality of Aquitaine by the Black Prince.”

Many readers, I suspect, felt tempted to give up relatively quickly. (Don’t blame yourself if you did.) Unless you’re really up to speed on 14th century English currency–both silver and gold!–the paragraph quickly becomes overwhelming.

The vocabulary in this passage probably doesn’t strain our cognition. Except for the phrase “marks of silver,” I know what all those words mean. (And, I can guess from context that a “mark” is some unit of measurement.)

However, the passage does place several mental demands on the reader.

First, it invites you to undertake several rapid mathematical calculations. (Quick: how many shillings in a mark? A worked answer appears below.)

Second, it requires you to learn abbreviations as you go. To understand the fourth sentence, you need to remember the (wildly counter-intuitive) abbreviation of “pence” as “d” from the third sentence.

Third, it assumes you recall several events and places unfamiliar–I suspect–to most Americans. Who was King John II? Why was he ransomed…was he kidnapped? Where are Gascony and Aquitaine? They don’t sound very English — why did an English prince introduce coinage to them? Actually: why is a prince empowered to introduce new currency?
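If you'd like to check the arithmetic yourself, here is a minimal sketch in Python that answers the parenthetical question above. The conversion rates come straight from the quoted passage; the variable names are mine.

```python
# Currency arithmetic from the quoted passage, using pence (d)
# as the common unit.
PENCE_PER_SHILLING = 12                                      # 1s = 12d
SHILLINGS_PER_POUND = 20                                     # £1 = 20s
PENCE_PER_POUND = SHILLINGS_PER_POUND * PENCE_PER_SHILLING   # 240d

# £1 = 1.5 marks, so one mark is two-thirds of a pound.
pence_per_mark = PENCE_PER_POUND / 1.5                       # 160d

# The noble was "conventionally worth 6s 8d."
pence_per_noble = 6 * PENCE_PER_SHILLING + 8                 # 80d

shillings, pence = divmod(pence_per_mark, PENCE_PER_SHILLING)
print(f"1 mark = {shillings:.0f}s {pence:.0f}d")             # 1 mark = 13s 4d
print(f"1 noble = {pence_per_noble / pence_per_mark} marks") # 1 noble = 0.5 marks
```

So a mark works out to 13s 4d, and the noble (6s 8d) to exactly half a mark. If you reached for a calculator just now, you've felt a bit of the working-memory load this passage imposes.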

Essential Background Knowledge

I thought of this paragraph when I read a recent article by Robert Pondiscio. In it, Pondiscio summarizes a study trying to determine how much background knowledge is necessary for comprehension.

In this study, students who scored above 59% on a background-knowledge test understood a reading passage substantially better than those who scored below that threshold.

As summarized by Pondiscio, the study’s authors see some clear teaching implications here.

First, we can meaningfully measure our students’ relevant background knowledge.

Second, students who fall short on that measure will benefit A LOT if we provide them with the essentials.

For instance, students who understood that “habitat,” “species,” and “ecosystems” were relevant vocabulary for the study of ecology understood the reading passage more deeply. (The study included 3500 students, so I believe they controlled for various confounds. I haven’t read the study itself–it’s behind a paywall.)

I think those conclusions point to another:

Third: models of teaching that focus on “pure discovery” will create substantial challenges for students who lack background knowledge. Students who don’t know the basics of a topic simply can’t understand the field of inquiry within which they’re meant to discover.

And, they won’t feel motivated by curiosity to find out. They’ll feel discouraged by their confusion. (Few readers, I suspect, were motivated by the paragraph above to learn more about medieval English currency.)

A Final Thought

This study finds that 59% was the essential tipping point. Students who scored lower than 59% on the prior knowledge test found themselves in a different cognitive category than those who scored above.

However, that percentage does not necessarily apply to all circumstances.

In other words: we shouldn’t give our students prior-knowledge tests, and focus only on those who score 58% and below.

Instead, we should plan our lessons and units knowing that some floor-level of knowledge will be crucial for learning most things.

In every case–as you hear me say so often–we’ll have to rely on the teacher’s judgment to discover that level.

Researchers can remind us that the floor exists. But they can’t identify it for every teacher in every classroom. Ultimately, with that research guidance in mind, we’ll find the right place for the floor. And, we’ll build it.

Assembling the Big Classroom Picture
Andrew Watson

The last 20 years have brought about powerful new ways to think about teaching and learning.

When teachers combine our experience, professional traditions, and instincts with the scientific insights of psychology and neuroscience research, we find new ways to understand our classroom work.

As I’ve written before, we currently suffer from an embarrassment of riches.

With SO MANY ideas that have strong and consistent research backing, how can we understand and prioritize all the suggestions coming our way? (One recent answer: Chew and Cerbin’s approach, focusing on the cognitive challenges students face.)

The Big 6

You’d like another way to think about all this research?

Over at The Learning Scientists, several wise scholars have been championing a blend of six research-based approaches to teaching: spacing, interleaving, retrieval practice, elaboration, concrete examples, and dual coding.

Frequent readers will recognize almost every strategy on that list. I write about at least one of those each month. (If you want more information on “retrieval practice,” for instance, click on the tag in the right column.)

Two of the Learning Scientists, Dr. Yana Weinstein and Dr. Megan Sumeracki, have written a great book: Understanding How We Learn: A Visual Guide. If you want to explore those big 6 in depth, you should put it on your “must read” pile right away.

If, however, you’d like a brisker introduction to these ideas, I’ve recently found a review article by Weinstein and Sumeracki (and Christopher Madan) that summarizes and illustrates all six.

Newcomers to this field can read its fifteen pages in an hour.

Veterans can instead scan it for a refresher, and for new insights and combinations.

For Example

Although I’ve written frequently about most of the Big Six, I don’t often discuss elaboration. More than the other concepts, it resists easy definition.

Luckily — more than most psychology terms (I’m looking at you, “theory of mind”) — the term means what it says. When we ask students to elaborate on new information or new skills, this deeper processing helps consolidate long-term memories.

For instance, in preparation for teaching Their Eyes Were Watching God, I took some time in Thursday’s class to define the terms “voice” and “agency” as English scholars use them. These concepts will be essential to understand Janie’s development over the course of the novel.

We then elaborated on those definitions by discussing two plays we had just finished reading: Fences and Macbeth.

When does Lady Macbeth have “voice”?

How does Troy claim “agency”?

What strategies does Rose use to claim voice and agency in her life?

How does the set of Fences represent limitations on agency?

And so forth.

By connecting these two new words to events they already knew and understood, students elaborated on their definitions. (And — as a bonus — got some retrieval practice about Fences and Macbeth.)

By the way, the Ensers’ handy book Generative Learning in Action offers LOTS of well-researched elaboration strategies.

In Sum

At Learning and the Brain, we bring teachers and researchers together for productive conversation.

Sometimes, we should focus on research details to ensure they’re persuasive enough for us to reshape our teaching practices.

At other times, we should refocus on the big picture — to understand how each of the techniques adds up to a coherent teaching practice.

Weinstein, Madan, and Sumeracki help us do just that.

Enjoy!

Retrieval Practice and Metacognition: What and How Do Students Think about This Powerful Learning Strategy?
Andrew Watson

Ask almost anyone in Learning and the Brain world, and they’ll tell you: retrieval practice benefits students.

More than most any other technique we have, this one both has lots of research support and can easily be integrated into our classrooms. (For a handy review of its benefits, check out Agarwal and Bain’s great book Powerful Teaching.)

Of course, because retrieval practice offers so many benefits, we want our students to use it.

Do they?

Metacognition and Retrieval Practice

The short answers are: not enough, and not very well.

Michelle Rivers wanted to know why, and so explored research into students’ metacognitive beliefs about retrieval practice. That is:

What do they believe about retrieval practice?

How do they use it to monitor their learning?

How do they use its insights to control their learning activities?

The more we understand students’ metacognitive perspective on their learning, the more wisely we can guide them.

What did she find?

Beliefs about Retrieval Practice

Sadly, most students don’t understand RP’s benefits.

In 10 studies that asked “why do you quiz yourself?”, only 26% of students said they do so in order to learn more.

Instead, most students (52%) do so “to figure out how well I’ve learned the information I’m studying.”

In other words: even the students who use RP most often do so for the wrong reasons.

Of course: they’re not harming themselves by using retrieval practice this way. But — and this is a big but — they’re not getting the benefits that RP can offer.

In fact, Rivers’s survey suggests one reason students might not use retrieval practice to help themselves learn. Studies suggest that when students try both methods, they don’t predict that they’ll remember more after retrieval practice. (Check out this study by Kornell and Son.)

I find this research pool baffling, even disheartening. Even when students experience greater success with RP than with simple rereading, they don’t internalize the broader lesson that active retrieval helps them learn.

Little wonder, then, that most students review material (43%) or copy their notes (11%) as a go-to strategy, rather than self-testing (8%).

Uses of Retrieval Practice

Given these flawed beliefs, how do students use RP?

Among Rivers’s findings: students try retrieval practice …

… when the questions are easy,

… right before a test,

… relatively late in the learning process, and

… relatively few times for any given pool of information.

Of course, retrieval practice benefits students when they do so…

… with questions that are challenging,

… well before a test (in fact, RP immediately before a test might be counterproductive),

… throughout the learning process, and

… several times for any given pool of information.

Simply put: even when students use this excellent study strategy, they do so in less-than-optimal ways.

Next Steps: Learning How to Learn

So far, this is quite the glum post. A potentially powerful learning strategy is largely going to waste.

What can we teachers do?

I’ve got two suggestions.

First, this recent post summarizes a promising approach from Mark McDaniel and Gilles Einstein. Their multi-step process not only works to persuade students of RP’s benefits; it encourages them to make specific retrieval practice plans and to follow through on them.

In other words: we shouldn’t just tell our students about its benefits. We shouldn’t just tell them to do it. We should go the next steps to create plans and structures.

Second, I’ve seen LOTS of online programs to help teachers and students with their retrieval practice.

For instance, Adam Boxer has created a program called Carousel. This program allows teachers to create retrieval questions, and to jumble them together in useful ways. It allows students to self-score their work (and teachers to correct any errors). It keeps track of right and wrong answers, so we can see how well our students are learning specific topics and questions.
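To be clear, I don’t know how Carousel is actually built. Purely as an illustration of the kind of bookkeeping such a tool does, here is a minimal sketch: a question bank that jumbles questions across topics and tallies right and wrong answers. All names here are hypothetical, not Carousel’s real design or API.

```python
import random
from collections import defaultdict

class RetrievalBank:
    """Toy sketch of a retrieval-practice question bank: jumbles
    questions across topics and tallies right/wrong answers per
    topic. (Hypothetical illustration, not Carousel's design.)"""

    def __init__(self):
        self.questions = []                         # (topic, prompt) pairs
        self.tallies = defaultdict(lambda: [0, 0])  # topic -> [right, wrong]

    def add(self, topic, prompt):
        self.questions.append((topic, prompt))

    def quiz(self, n):
        """Jumble questions together by sampling across all topics."""
        return random.sample(self.questions, min(n, len(self.questions)))

    def record(self, topic, correct):
        self.tallies[topic][0 if correct else 1] += 1


bank = RetrievalBank()
bank.add("geometry", "What makes a triangle isosceles?")
bank.add("history", "What happened on D-Day?")
for topic, prompt in bank.quiz(2):
    print(f"[{topic}] {prompt}")
bank.record("history", correct=True)
```

Even this toy version shows why such tools earn their keep: jumbling questions across topics builds in interleaving, and the per-topic tallies give teachers the feedback on specific topics described above.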

I have used Carousel enough to find it intriguing; I haven’t used it enough to make strong claims about it. (Given responses to it on Twitter, however, it seems that teachers kind of love it.)

Whichever program you choose, I think students will learn how to learn more effectively if we build tools like these into our teaching practice.

In Sum

A: We know that retrieval practice can help students learn, but only if they use it correctly.

B: We know that, for the most part, they don’t.

A + B = We should focus more on helping students use this strategy wisely. And, we’ve got the tools to do so!

 

“Kids These Days!”: A (Partial) Defense of Ignorance and Distractibility
Andrew Watson

You’ve seen the videos.

An earnest reporter wielding a microphone accosts a college student and asks extremely basic questions:

“What are the three branches of government?”

“What is the capital of France?”

“Who wrote the Declaration of Independence?”

When students can’t answer these questions, the reporter eyes the camera wryly, as if to say, “What’s wrong with Kids These Days?”

One such video made the rounds recently. Middle schoolers (I think) didn’t know what D-Day was: they hypothesized he might be a rapper.

So, really: what is wrong with these kids? How can they POSSIBLY not know about D-Day?

Beyond History

Our lament frequently goes beyond students’ lack of historical knowledge.

We worry about “kids and their devices.”

They’re always looking at screens! (I’m here to tell you: back in the ’80s, we never looked at screens.)

They’re always texting! (We never texted.)

They’re so distractible! (Nope. Not us.)

If students know so little and concentrate so badly, we have to wonder what’s up with them.

Distasteful Mirrors

I understand the frustration. (I’ve taught a class of well-educated students who didn’t know the story of Noah’s Ark. That was a shocker.)

At the same time, I think that question distracts us from the essential underlying point.

If an entire room of students didn’t know what D-Day was, it’s clear that the school system didn’t teach them about D-Day; or — at least — that it didn’t teach them well enough for them to consolidate that knowledge.

If we can easily find college students who don’t know from geography and history, we can blame the college students. But I think we should first pause to consider the education system in which it’s possible to complete high school without enduring knowledge of such things.

It is, in my view, simply not fair or helpful to blame students for being in the system that adults created.

Those videos shouldn’t make us condemn the students; they should instead make us look in the mirror.

We might not like what we see. But: their shortcomings tell us more about our education system than about our students.

The Ingenious Tech Switcheroo

The same argument, in my view, applies to laments about technology. Notice the impressive blame-shifting here:

Step 1: technology companies invent a must-have gadget. They market it to — say — 10 year olds.

Step 2: 10-year-olds want the gadget, and pester their parents to buy it. (The tech company doesn’t need to get money from adults; they persuade children to get money from their parents and give it to the company. BRILLIANT!)

Step 3: the tech industry then highlights the narrative that the 10-year-olds are to blame for being distractible. The problem is not in the adults’ behavior; it’s in the children! “Why oh why can’t these kids focus!”

Here again, the students’ behavior gives us essential feedback about adults.

If we want today’s students to concentrate better, maybe we should create — and provide them with — fewer distractions. Perhaps we should model the behavior we want to see. (Quick: how many browser tabs do you have open?)

Caveats, Always Caveats

One: not everyone worries when students don’t know stuff. (I do, but some don’t share that concern.) For adults who don’t emphasize factual knowledge, those videos seem trivial, not alarming.

Two: not all the data suggest that students are “more distractible.” Perhaps they simply have more distractions. (How many times did you check your cell phone in the last hour?)

Three: Of course, students bear responsibility for working effectively within the systems adults create. If twenty-four of my students learn something and one doesn’t, we can reasonably wonder what’s going on with that one. But, if none of my students know the importance of the Treaty of Versailles, I should think about the adult-driven systems, not “what’s up with kids these days.”

Four (this is a biggie): As I strive to refocus popular outrage away from the students and toward the system in which they learn, I might seem to be blaming teachers and school leaders. I very much don’t mean to do that.

In my experience, the great majority of both groups work extremely hard, and do so with the best intentions. (Few people say: “I went into teaching to become rich.”)

At the same time: our well-intentioned efforts simply aren’t producing the results we want. That feedback — evident in those videos — should prompt honest and searching self-reflection.

In Sum

I promised a (partial) defense of ignorance and distractibility. Here goes:

Of course we want our children to know important information and skills, and to be able to concentrate on them.

If most students don’t and can’t, the fault probably lies with the education system, not children who learn within it.

Children who don’t know what D-Day is don’t deserve to be ridiculed on Twitter. They do deserve a curriculum that fosters knowledge, skill, and concentration. They deserve pedagogy that helps them master all three.

At Learning and the Brain, we connect education with psychology and neuroscience in order to start conversations. Conversations that include those three perspectives can help create such a curriculum; can help foster such pedagogy.

We hope you’ll join us!

When Evidence Conflicts with Teachers’ Experience
Andrew Watson

Here’s an interesting question: do students — on average — benefit when they repeat a grade?

As you contemplate that question, you might notice the kind of evidence that you thought about.

Perhaps you thought: “I studied this question in graduate school. The research showed that answer is X.”

Perhaps you thought: “I knew a student who repeated a grade. Her experience showed that the answer is X.”

In other words: our teaching beliefs might rest on research, or on personal experience. Almost certainly, they draw on a complex blend of both research and experience.

So, here’s today’s question: what happens when I see research that directly contradicts my experience?

If I, for instance, think that cold calling is a bad idea, and research shows it’s a good idea, I might…

… change my beliefs and conclude it’s a good idea, or

… preserve my beliefs and insist it’s a bad idea. In this case, I might…

… generalize my doubts and conclude education research generally doesn’t have much merit. I might even…

… generalize those doubts even further and conclude that research in other fields (like medicine) can’t help me reach a wise decision.

If my very local doubts about cold-calling research spread beyond this narrow question, such a conflict could create ever-widening ripples of doubt.

Today’s Research

A research team in Germany, led by Eva Thomm, looked at this question, with a particular focus on teachers-in-training. These pre-service teachers, presumably, haven’t studied much research on learning, and so most of their beliefs come from personal experience.

What happens when research contradicts those beliefs?

Thomm ran an online study with 150+ teachers-in-training across Germany. (Some were undergraduates; others graduate students.)

Thomm’s team asked teachers to rate their beliefs on the effectiveness of having students repeat a year. The teachers then read research that contradicted (or, in half the cases, confirmed) those beliefs. What happened next?

Thomm’s results show an interesting mix of bad and good news:

Alas: teachers who read contradictory evidence tended to say that they doubted its accuracy.

Worse still: they started to rely less on scientific sources (research) and more on other sources (opinions of colleagues and students).

The Good News

First: teachers’ doubts did not generalize outside education. That is: however vexed they were to find research contradicting prior beliefs about repeating a year, they did not conclude that medical research couldn’t be trusted.

Second: teachers’ doubts did not generalize within education. That is: they might have doubted findings about repeating a year, but they didn’t necessarily reject research into cold calling.

Third: despite their expressed doubts, teachers did begin to change their minds. They simultaneously expressed skepticism about the research AND let it influence their thinking.

Simply put, this research could have discovered truly bleak belief trajectories. (“If you tell me that cold calling is bad, I’ll stop believing research about vitamin D!”) Thomm’s research did not see that pattern at work.

Caveats, Caveats

Dan Willingham says: “one study is just one study, folks.” Thomm’s research gives us interesting data, but it does not answer this question completely, once and for all. (No one study does. Research can’t do that.)

Two points jump out at me.

First, Thomm’s team worked with teachers in Germany. I don’t know if German society values research differently than other societies do. (Certainly US society has a conspicuously vexed relationship with research-based advice.) So, this research might not hold true in other countries or social belief systems.

Second, her participants initially “reported a positive view on the potency of research and indicated a higher appreciation of scientific than of non-scientific sources.” That is, she started with people who trusted in science and research. Among people who start more skeptical — perhaps in a society that’s more skeptical — these optimistic patterns might not repeat.

And a final note.

You might reasonably want to know: what’s the answer to the question? Does repeating a year help students?

The most honest answer is: I’m not an expert on that topic, and don’t really know.

The most comprehensive analysis I’ve seen, over at the Education Endowment Foundation, says: NO:

“Evidence suggests that, in the majority of cases, repeating a year is harmful to a student’s chances of academic success.” (And, they note, it costs A LOT.)

If you’ve got substantial contradictory evidence that can inform this question, I hope you’ll send it my way.

“Soft” vs. “Hard” Skills: Which Create a Stronger Foundation?
Andrew Watson

As teachers, should we focus on our students’ understanding of course content, or on our students’ development of foundational academic skills?

Do they benefit more from learning history (or chemistry or spelling or flute), or from developing the self-discipline (grit, focus, executive skills) to get the work — any work — done?

I’ve found a recent study that explores this question. It stands out for the rigor of its methodology, and the tough-mindedness of its conclusions.

Here’s the setup:

Daunting Problems; Clever Solutions

Researchers struggle to answer these questions because student choice can complicate the data.

When college students choose courses and professors, when they opt out of one section and opt into another, we can’t tell if the professor’s quality or the students’ preferences led to particular research results.

How to solve this problem? We find a school where students get no choices.

They must take the same courses.

They can’t change sections.

Students start the year randomly distributed, and they stay randomly distributed.

Where shall we find such a school? Here’s a possibility: the United States Naval Academy. All students take the same courses. They can’t switch. They can’t drop. Sir, yes sir!

Even better: several USNA courses are sequential. We can ask this question: how does the student’s performance in the first semester affect his/her performance in the second semester?

Do some 1st semester teachers prepare their students especially well — or especially badly — for the 2nd semester?

We can even fold in extra data. The website Rate My Professors lets students grade professors on many qualities — including the difficulty of the course, and their overall rating. Perhaps those data can inform our understanding of teacher effectiveness.

Provocative Conclusions

A research team has followed this logic and recently published their conclusions.

In their findings:

Easygoing teachers — who don’t demand lots of work, who don’t communicate high standards, who routinely give lots of high grades — harm their students. 

How so? Their students — quite consistently — do badly on subsequent courses in the field.

In other words: if I have an easygoing teacher for Calculus I, I’m likely to do badly in Calculus II — compared to my identical twin brother who had a different teacher.

On the other hand, tough-minded teachers — who insist on deadlines, who require extra work, who remain stingy with high grades — benefit their students.

How so? These students — like my identical twin — do better in subsequent courses than I do.

This research team calls such executive function topics — getting work done, even if it’s dull; prioritizing; metacognition — “soft skills.” In their analysis, professors who are tough minded about these soft skills ultimately help their students learn more.

More Provocative Still

This logic certainly makes sense; we’re not shocked that students learn more when we insist that they work hard, focus, and set high standards.

Of course, professors who DON’T insist that their students work hard get lots of student compliments (on average). We teachers know that — all things being equal — students are happier when they get less work. Their RateMyProfessor scores average higher than those of their tough-minded peers.

In turn, colleges notice student popularity ratings. School leaders feel good when students praise particular teachers. They give them awards and promotions and citations. Why wouldn’t they? After all, those highly-praised professors give the college a good reputation.

In other words: according to this research team, colleges are tempted to honor and promote teachers who get high student ratings — even though those very professors harm their students’ long term learning, and thereby diminish the quality of the academic program.

That’s a scathing claim indeed.

Caveats

Like everything I write about here, this finding comes with caveats.

First: although these students were randomly assigned once they got to the Naval Academy, admission to that Academy is very challenging indeed. (Google tells me that 8.3% of their applicants get in.)

So, a tough-minded approach might benefit this extremely narrow part of the population — who, let’s be honest, signed up for a rigorous academic program, rigorously delivered.

However, that finding doesn’t necessarily mean that this approach works for younger students, or a broader swath of the population, or students who didn’t apply for such demanding treatment.

It might. But, this study by itself shouldn’t persuade us to change our work dramatically. (Unless we work in a similar academic setting.)

Second: this report’s authors define “soft” and “hard” in a very specific way (see their page 3).

Your school might use these terms quite differently, so their claims might not apply directly to your terminology.

Equally important, the strategies they use to distinguish between “tough-minded” and “easy-going” professors require lots of intricate parsing.

I myself don’t have the stats skills to interrogate their process; I can imagine a more expert reader asking sharp questions about their methods.

Conclusion

In many parts of life, short-term challenges lead to long-term benefits.

We might not like exercise, but it helps us as we get older.

We might like bacon and ice cream, but leeks and salmon keep us fitter.

This research report suggests that we help our students in the long run by maintaining tough-minded high standards right now.

Doing so might not make us popular. Our administrative leaders don’t always recognize our wisdom. But if our students learn more, their strong “soft-skills” foundation really does help them thrive.

I’m Not Excited, YOU’RE Excited (OK: I’m Excited)
Andrew Watson

I’ve been going to Learning and the Brain conferences since 2008, so it takes a lot for a roster of speakers to WOW me. But this week I’m officially WOWed. Next weekend’s conference looks remarkable.

In some cases, I’m happy to see family favorites return to the LatB microphone:

Dan Willingham wrote the book that launched a thousand MBE careers. If you haven’t read Why Don’t Students Like School?, do so. If you HAVE read it, good news: the second edition is coming out soon.

Barbara Oakley has created some of the most popular online courses EVAH. Her topic: “learning how to learn.” Every time I hear her, I’m reminded why so many people rely on her wisdom and experience.

John Almarode and Doug Fisher both manage to apply the learning sciences to their own daily work in inspiring and unexpected ways. They make you think that good teaching really is possible: a reminder we all need these days.

I could go on. And on.

New Voices

However excited I am to hear these speakers again, I might be even more verklempt at the new speakers — or, more precisely, speakers new to Learning and the Brain.

Paul Kirschner is a real giant in this field. He reminds us constantly that teaching ideas don’t just need to sound good; they need to benefit students. His article Why Minimal Guidance During Instruction Does Not Work, written with John Sweller and Richard Clark, remains a frequently cited manifesto for teaching methods that really help students learn.

Daisy Christodoulou has written several field-defining books, beginning with Seven Myths about Education. (I once described this book as having the highest mic-drop/page ratio I know of.) You can see our review of her latest book — Teachers vs. Tech: The Case for an Ed Tech Revolution — here.

Kenneth Wesson brings a neuroscience perspective to fields that have traditionally been the focus of psychology: for instance, reading instruction, or, the importance of play for learning. I’m deeply curious to hear how his work on the brain can inform our understanding of the mind.

Dylan Wiliam (yes, that’s the correct spelling) helped launch the idea of assessment for learning, and he hasn’t stopped there. His reminder that — in the world of educational innovation —  “everything works somewhere, but nothing works everywhere” keeps us humble and grounded.

Again, I could list many more.

In short, if you haven’t signed up yet, I truly recommend you do so.

Does MOVEMENT Help LEARNING?
Andrew Watson

In the exaggerated stereotype of an obsessively traditional classroom, students sit perfectly silent and perfectly still. They listen, and watch, and do nothing else.

Few classrooms truly function that way.

But, how far should we go in the other direction? Can teachers — and should teachers — encourage noise and movement to help students learn?

In recent years, the field of embodied cognition has explored the ways that we think with our bodies.

That is: movement itself might help students learn.

Of course, this general observation needs to be explored and understood in very specific ways. Otherwise, we might get carried away. (About a year ago, for instance, one teacher inspired a Twitter explosion by having his students read while pedaling exercycles. I’ve spent some time looking at research on this topic, and concluded … we just don’t know if this strategy will help or not.)

So, let’s get specific.

Moving Triangles

An Australian research team worked with 60 ten- and eleven-year-olds learning about triangles. (These students studied in the intermediate math track; they attended a private high school, with higher-than-usual SES. These “boundary conditions” might matter.)

Students learned about isosceles triangles, and the relationships between side-lengths and angles, and so forth.

20 of the students studied in a “traditional way”: reading from the book.

20 studied by watching a teacher use software to manipulate angles and lengths of sides.

And, 20 studied by using that software themselves. That is: they moved their own hands.

Researchers wanted to know:

Did these groups differ when tested on similar (nearly identical) triangle problems?

Did they differ when tested on somewhat different problems?

And, did they rate their mental effort differently?

In other words: did seeing movement help students learn better? Did performing the movement themselves help?

The Envelope, Please

The software clearly helped. The actual movement sort-of helped.

Students who interacted with the software themselves, and those who watched the teachers do so, did better on all the triangle problems. (Compared — that is — to students who learned the traditional way.)

And, they said it took less mental effort to answer the questions.

HOWEVER:

Students who used the software themselves did no better than the students who watched the teachers use it. (Well: they did better on the nearly identical problems, but not the newer problems that we care more about.)

In other words: movement helped these students learn this material — but it didn’t really matter if they moved themselves, or if they watched someone else move.

The Bigger Picture

Honestly: research into embodied cognition could someday make a big difference in schools.

Once we’ve done enough of these studies — it might be dozens, it might be hundreds — we’ll have a clearer picture explaining which movements help which students learn what material.

For the time being, we should watch this space. And — fingers crossed — within the next 5 years we’ll have an Embodied Cognition conference at Learning and the Brain.

Until then: be wise and cautious, and use your instincts. Yes, sometimes movement might help. But don’t get carried away by dramatic promises. We need more facts before we draw strong conclusions.


Bokosmaty, S., Mavilidi, M. F., & Paas, F. (2017). Making versus observing manipulations of geometric properties of triangles to learn geometry using dynamic geometry software. Computers & Education, 113, 313-326.

“Students Switch Their Screens Every 19 Seconds.” Sort Of…
Andrew Watson

I recently read an arresting claim: when students have web browsers open, they switch their screens — on average — every 19 seconds.

Such data boggle the mind with horror.

As teachers, we want our students to devote sustained thought to complex topics. Nineteen seconds just isn’t going to cut it.

As an English teacher, I think nineteen minutes won’t reveal the full joys of a Shakespearean sonnet. In nineteen seconds, my students won’t have time to find the main verb. The Horror.
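For a sense of scale, here is my own back-of-the-envelope arithmetic; the 50-minute class period is my assumption, not a figure from the study.

```python
# Back-of-the-envelope: how many switches does "every 19 seconds"
# imply over one (assumed) 50-minute class period?
CLASS_MINUTES = 50
SECONDS_PER_SWITCH = 19

switches = CLASS_MINUTES * 60 / SECONDS_PER_SWITCH
print(f"~{switches:.0f} screen switches per class")  # ~158
```

Roughly 158 context switches in a single class. Whatever the exact number, that is not sustained thought.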

Although it took a moment’s sleuthing to track down this claim (see Coda below), this statement does in fact have research support. Back in 2014, Leo Yeykelis and his intrepid team undertook quite a complex study — including screen monitoring and measures of skin conductance! — to arrive at this conclusion.

Clearly, it’s time to panic.

Don’t Panic Just Yet

Whenever we hear shocking “research based” conclusions  — ESPECIALLY conclusions that confirm our prior beliefs — we should look hard at that underlying research.

We need to know exactly what the researchers did before we jump to big conclusions about their work.

If you click the link above, you can read Yeykelis’s study. When I did, two points jumped out at me very quickly:

First: this study draws its conclusions based on research into 10 college undergraduates. TEN.

Now, researchers have very good reasons to run studies with only ten people in them. Research is pricey and time consuming, so it makes sense to do a small-ish study before ramping up to study hundreds or thousands of people.

In other words, they didn’t do anything wrong.

However, for the same reason you wouldn’t take a medicine that had been tested on only ten people, you shouldn’t make dramatic changes to your classroom based on that research.

Second: because this study has so few participants, we want to know what other studies have found on this topic.

Regular readers know: my go-to sites to answer that question are Scite.ai and ConnectedPapers.com.

When I plugged Yeykelis’s study into those search engines, I found some support — but not lots-n-lots. In other words: this field simply hasn’t been studied very much.

For instance: the ConnectedPapers grid doesn’t highlight any related research after 2017. (That finding doesn’t mean that no research has been done after that date, but that it hasn’t reached prominence yet.)

Don’t Panic; Do Respond

For all these reasons, I don’t think we should get too focused on “19 seconds.”

At the same time, the other studies highlighted by Scite.ai and ConnectedPapers.com do point consistently in the same direction: screen switching really is a thing.

A bad thing.

We don’t know exactly how prevalent the problem is, but it’s not isolated; it’s not trivial.

For that reason, I think we should consider our own teacherly responsibilities here.

ESPECIALLY during online classes, we can remind students to turn off other technologies. “Remember our class rules: please turn your cell phones off. And, close all the other tabs in your browser except this one, and your homework.”

We can explicitly teach students to purge tech from their study areas and habits. “When you start your work, have your book open, your pencil on your paper, and your computer off.”

We can include those reminders in homework assignments. “This paragraph is due Wednesday before class. Remember: have your web browser closed before you start writing.”

We can look in the mirror. How often do we switch screens? What prompts us to do so? For instance, as I write, I’ve got 7 browser tabs open. I am — REALLY — using them all to write this post. Based on this experience, what realistic guidance can I give my students about multi-screening?

Of course, you’ll translate such suggestions to fit your own teaching circumstances. In every case, we’re making reasonable and measured changes.

The precise number “19” doesn’t really matter. The big picture — learning requires sustained attention, so we should foster it — really does.

Coda

I wrote last week, improbably, about the benefits of Twitter. Yes, it can be mean-spirited and dreadful, but it can also provide helpful insight and useful resources.

The day after I wrote that blog post, my own Twitter experience highlighted that lesson.

I first read about the “19 Seconds” research in Doug Lemov’s excellent new book Teaching in the Online Classroom. (Lemov’s Twitter handle: @Doug_Lemov).

Curious, I tweeted at him asking for the citation.

Before Lemov could answer, another scholar — Christian Bokhove (@cbokhove) — BOTH provided me with Yeykelis’s 2014 study AND linked to a follow-up study with even more dire results.

And then — because Twitter can be great — Lemov himself followed up to be sure I’d gotten the research I needed.

I don’t mean to downplay its flaws. But I’ve found Twitter mighty useful in learning about teaching, and about research; and in finding a kind of community.

It’s ironic, but: I’m encouraging you to OPEN another web browser and check out Twitter.

EduTwitter Can Be Great. No, Really…
Andrew Watson

Twitter has a terrible reputation, and EduTwitter isn’t an exception.

The misinformation.

The name-calling.

The “team” rivalries: all heat and little light.

Did I mention the misinformation?

You might wonder: why bother? Honestly, I wouldn’t blame you if you didn’t. I myself was hesitant to sign up.

Despite all these flaws — none of which is exaggerated, by the way — I do find lots of benefits. This experience recently got my attention.

The Setup

On my personal Twitter account, I posted a link to research that had me puzzled. According to a small study, the motor cortex does not “remap” to represent prosthetic limbs.

Given all the research we have into neuroplasticity, I was genuinely shocked by that finding.

In fact, I’m currently reading Barbara Tversky’s book Mind in Motion, which talks about brains remapping in response to TOOL USE.

If brains remap because of tools, but not because of prosthetics — which are, from one perspective, tools that have been attached to the body — well: that’s very strange.

But, people on Twitter know things I don’t. I thought: maybe someone knows more about this research pool than I…

Rising Action

Soon after I posted that link, my Twitter friend Rob McEntarffer (@rmcenta) retweeted it, sharing my curiosity. (By the way: “Twitter friends” are really a thing. I know LOTS of people — too many to name here — whom I have come to respect and like entirely by “meeting” them on Twitter. I would NOT have predicted that.)

One of his Twitter followers — someone I have never met and don’t know — retweeted Rob’s retweet, with a question to her professor.

So, we’re now at 3 or 4 degrees of separation. What happens next?

The Payoff

Turns out: this professor — whom I also don’t know — has lots of expertise in this research field. He briskly explained why the study couldn’t draw strong conclusions. (If I understand him correctly, its measurement methodology doesn’t allow it to make those claims.)

In other words: within a few hours, I went from…

being ASTONISHED because a research finding dramatically contradicted my (fairly basic) understanding of neural remapping,

to…

having a SUCCINCT AND CLEAR EXPLANATION why that research shouldn’t concern me,

and…

feeling RELIEVED that my understanding of neuroplasticity wasn’t so wrongheaded.

And, what made those changes possible — or, at least, a whole lot easier? Twitter.

Caveats

To be clear, Twitter really does include (and produce) foul, cruel nonsense. If you look for that, you’ll find it. (Tom Lehrer says: “Life is like a sewer. What you get out of it depends [at least in part] on what you put into it.”)

At the same time, I routinely come across generous teachers & researchers. They freely share perspectives and resources and contacts and information.

If you can stand the background noise, you might give it a look.

One place to start: @LearningAndTheB. Perhaps I’ll see you there.