Ask a Simple Question, Get an Oversimplified Answer
Andrew Watson

If learners were widgets, then educational research would be simple. The same teaching technique would work (or not work) equally well for all students.

It would also help if all teachers were widgets. And, if we all taught the same topic the same way.

We could ask simple research questions, get uncomplicated answers, and be ENTIRELY CERTAIN we were doing it right.

A Sample Case: Handwritten Notes

For example, if all students were identical, then we could know for sure the best way to take notes in class.

(It would help if teachers all taught the same way too.)

Are handwritten notes better than laptop notes? Vice versa? The study design couldn’t be simpler.

Mueller and Oppenheimer famously argue that “the pen is mightier than the keyboard.” (I’ve argued strenuously that their research does not support this claim, and probably contradicts it.)

But what if the question just can’t be answered that simply?

What if students do different things with their notes?

What if the classes in which they take notes are different?

Really, what then?

Mixing It Up

Linlin Luo and colleagues explore these questions in a recent study.

Happily, they start from the assumption that students use notes in different ways. And, that professors’ lectures differ in important ways.

For example: some students take notes, but don’t review them. (They probably should…but, there are LOTS of things that students probably should do. For instance, attend lectures.)

Other students do review the notes they take.

Some lectures include lots of visuals. Others don’t include many.

Once we start asking more complicated questions … that is, more realistic questions … we start getting more interesting answers.

More Interesting Answers

What did Luo and colleagues find? Unsurprisingly, they found a complex series of answers.

First: students who didn’t review their notes before a quiz did better using a laptop.

Second: students who did review their notes did better taking handwritten notes.

Third: in both cases, the differences weren’t statistically significant. That’s a fancy way of saying: we can’t say for sure that the laptop/handwriting distinction really mattered.

Fourth: unsurprisingly, students who took handwritten notes did better recording visuals than did laptop users. (Students who took laptop notes basically didn’t bother with visuals.)

Advice to Teachers and Students

What advice can we infer from this study? (And: from its analysis of previous studies?)

A: teachers can give students plausible guidance. “If you really will study these notes later, then you should take them by hand. But, if you really won’t, then use a laptop.”

B: teachers who present a lot of visuals should encourage handwritten notes. Or, make copies of those visuals available.

C: given that the differences weren’t statistically significant, we might encourage students to use the medium in which they’re more comfortable. If they (like me) have dreadful handwriting, then maybe they should use a laptop no matter what.

D: I continue to think — based on the Mueller and Oppenheimer study — that we should train students to take notes in a particular way. If they both use laptops AND reword the teacher’s ideas (rather than copying them verbatim), that combination should yield the most learning.

Most importantly, we should let this study remind us: simple answers are oversimplified answers.

If you’d like to meet two of the researchers who worked on this study, check out this video:

https://www.youtube.com/watch?v=BfCZ0K0HoJE

Evaluating the Best Classroom Practices for Teaching Math
Andrew Watson

What strategies work best for math teaching?

And, crucially, how do we know?

To answer this question, we might rely on our teacherly instincts. Perhaps we might rely on various educational and scientific theories. Or, we might turn to data. Even big data.

Researchers in Sweden wondered if they could use the TIMSS test to answer this question.

(“TIMSS” stands for “Trends in International Mathematics and Science Study,” a test given every four years. In 2015, 57 countries and 580,000 students participated. That’s A LOT of students, and a lot of data.)

3 Math Teaching Strategies

When students take these tests, they answer questions about their classroom experience.

In particular, they answer questions about 3 math teaching strategies. They are asked how often they…

Listen to the teacher give a lecture-style presentation.

Relate what they are learning in mathematics to their daily lives.

Memorize formulas and procedures.

Researchers want to know: do any of these teaching practices correlate with higher or lower TIMSS scores? In other words, can all these data help us evaluate the effectiveness of specific teaching practices?

2 Math Teaching Theories

Helpfully, the researchers outline theories explaining why each of these practices might be good or bad.

As they summarize recent decades of math-teaching debate, they explain that “researchers with their roots in psychology and cognitive science” champion

formal mathematical notions,

explicit instruction where teachers show students how to solve math problems,

practicing and memorizing rules and worked examples.

On the other hand, “researchers with their roots in the reform movement” champion

connecting math to students’ daily lives,

a problem-solving approach,

understanding ideas and connections, rather than memorization.

Doubtless you’ve heard many heated debates championing both positions.

Predictions and Outcomes

These theories lead to clear predictions about TIMSS questions.

A cognitive science perspective predicts that “lecture-style presentations” and “memorizing formulas” should lead to higher TIMSS scores.

A reform-movement perspective predicts that “relating math to daily life” should lead to higher scores.

What did the data analysis show?

In fact, the cognitive science predictions came true, and the reform predictions did not.

In other words: students who listened to presentations of math information and who memorized formulas did better on the test.

Meanwhile, students who applied math learning to daily life did worse on the test.

An Essential Caveat

As these researchers repeatedly caution, their data show CORRELATION not causation.

It’s possible, for instance, that teachers whose students struggle with math resort to “daily life” strategies. Or that both variables are caused by a third.

Potential Explanations

“Connecting new math learning to real life situations” seems like such a plausible suggestion. Why doesn’t it help students learn?

These researchers offer two suggestions.

First, every math teaching strategy takes time. If direct instruction is highly effective, then anything that subtracts time from it will be less effective. In other words: perhaps this strategy isn’t harmful; it’s just less effective than the others.

Second, perhaps thinking about real-life examples limits transfer. If I use a formula to calculate the area of a table, I might initially think of it as a formula about tables. This fixed notion might make it harder for me to transfer my new knowledge to — say — rugby fields or floor plans.

At present, we can’t know for sure.

A final point. Although this research suggests that direct instruction helps students learn math, we should remember that bad direct instruction is still bad.

Lectures can be helpful, or they can be deadly tedious.

Students can memorize pertinent and useful information. Or, they can memorize absurd loads of information.

(A student recently told me she’d been required to memorize information about 60 chemical elements. Every science teacher I’ve spoken with since has told me that’s ridiculous.)

And so: if this research persuades you to adopt a direct-instruction approach, don’t stop there. We need to pick the right pedagogical strategy. And, we need to execute it well.

Cognitive science can help us do so.

Can Quiet Cognitive Breaks Help You Learn?
Andrew Watson

We write a lot on the blog about “desirable difficulties” (for example, here and here). Extra cognitive work during early learning makes memories more robust.

Retrieval practice takes more brain power than simple review — that is, it’s harder. But, it helps students remember much more.

Wouldn’t it be great if some easy things helped too?

How about: doing nothing at all?

Cognitive Breaks: The Theory

When a memory begins to form, several thousand neurons begin connecting together. The synapses linking them get stronger.

Everything we do to help strengthen those synapses, by definition, helps us remember.

We know that sleep really helps in this process. In fact, researchers can see various brain regions working together during sleep. It seems that they’re “rehearsing” those memories.

If sleep allows the brain to rehearse, then perhaps a short cognitive break would produce the same result.

Cognitive Breaks: The Research

Michaela Dewar and colleagues have been looking into this question.

They had study participants listen to two stories. After one story, participants had to do a distracting mental task. (They compared pictures for subtle differences.)

After the other, they “rest[ed] quietly with their eyes closed in the darkened testing room for ten minutes.”

Sure enough, a week later, the quiet rest led to better memory. As a rough calculation, participants remembered 10% more than without the quiet rest.

10% more learning with essentially 0% extra cognitive effort: that’s an impressive accomplishment!

Classroom Questions

A finding like this raises LOTS of practical questions.

Dewar’s study didn’t focus on K-12 learners. (In fact, in this study, the average age was over 70.) Do these findings apply to our students?

Does this technique work for information other than stories? For instance: mathematical procedures? Dance steps? Vocabulary definitions?

Does this finding explain the benefits of mindfulness? That is: perhaps students can get these memory benefits without specific mindfulness techniques. (To be clear: some mindfulness researchers claim benefits above and beyond memory formation.)

Can this finding work as a classroom technique? Can we really stop in the middle of class, turn out the lights, tell students to “rest quietly for 10 minutes,” and have them remember more?

Would they instead remember more if we tried a fun fill-in-the-blank review exercise?

I’ll be looking into this research pool, and getting back to you with the answers I find.

Cognitive Breaks: The Neuroscience

If you’d like to understand the brain details of this research even further, check out the video at this website. (Scroll down just a bit.) [Edit 11/4/19: This link no longer works; alas, I can’t find the video.]

The researchers explain a lot of science very quickly, so you’ll want to get settled before you watch. But: it covers this exact question with precision and clarity.

(By the way: you’ll hear the researchers talk about “consolidation.” That’s the process of a memory getting stronger.)

If you do watch the video, you might consider resting quietly after you do. No need to strain yourself: just let your mind wander…

hat tip: Michael Wirtz

How to Stop Cheating: An Awkward Debate
Andrew Watson

We would, of course, LOVE to prevent cheating.

It does moral damage to the cheater. It undermines classroom trust. And: it makes it hard for us to know how much our students are actually learning.

So: what techniques might help us do so?

How To Prevent Cheating: “Moral Reminders”

For some time now, Dan Ariely has made this his field. (Check out his book: The (Honest) Truth about Dishonesty: How We Lie to Everyone — Especially Ourselves.)

Over the years, he developed a clever research paradigm to see how much people cheat. With that in place, he tested various strategies to prevent cheating.

(He can also promote cheating, but that’s not exactly what we’re looking for.)

One strategy that has gotten a lot of attention over the years: moral reminders.

Ariely asked some students to write down ten books they had read in high school. He asked the others to write down the Ten Commandments.

That is: he made them think about foundational moral standards in our culture.

Sure enough, once reminded about moral standards, students cheated less. (The Cohen’s d was 0.48, which is an impressive effect size for such an easy intervention.)
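
(A quick aside on how an effect size like that is computed: Cohen’s d is just the difference between two group means, divided by their pooled standard deviation. Here’s a minimal sketch in Python; the numbers below are invented for illustration, while Ariely’s 0.48 comes from his actual data.)

```python
import math

def cohens_d(group_a, group_b):
    """Effect size: difference between group means, divided by the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / n_a, sum(group_b) / n_b
    # Bessel-corrected sample variances
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical honesty-task scores -- NOT Ariely's data
books_group = [6, 7, 5, 8, 6, 7, 9, 5]         # wrote down ten books
commandments_group = [4, 5, 3, 6, 4, 5, 4, 3]  # wrote down the Ten Commandments
print(f"Cohen's d = {cohens_d(books_group, commandments_group):.2f}")
```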

Then Again, Maybe Not

In a study published just a month ago, Bruno Verschuere (and many others) retested Ariely’s hypothesis. Whereas the original study included 209 students, this meta-analysis included almost 4700. That is … [checks math] … more than 20 times as many students.

Studying much more data, they found that “moral reminders” made no difference.

(In fact, they found that students who recalled the Ten Commandments were just a smidge likelier to cheat; but the difference was tiny, not even approaching statistical significance.)

As we’ve seen in other cases of the “replication crisis,” seemingly settled results are back in question.

What’s a Teacher to Do?

Of course, Ariely had other suggestions as well. Signing pledges not to cheat reduces cheating. And teachers who supervise students closely reduce their opportunities to cheat.

As far as I know, these strategies have not been retested (although the second one seems too obvious to need much retesting).

For the time being, sadly, we should rely less on indirect moral reminders, and more on direct pledges — and direct supervision.

Using and Misusing Averages: The Benefits of Music?
Andrew Watson

The “10 Minute Rule” tells us that people can’t pay attention to something for longer than ten minutes.

As teachers, therefore, we shouldn’t do any one thing for longer than ten minutes. We need to mix it up a bit.

There’s an obvious problem here. The “rule” assumes that all people think alike — that one number is correct for all students in all situations.

That’s a bizarre assumption. It’s also wildly untrue.

(In fact, the “rule” itself has a weird history.)

The Bigger Picture: When teachers convert averages into absolutes — like, say, the 10 minute rule — we’re likely to miss out on the distinct needs of our particular students.

Today’s Example

Should students listen to music when they study or read?

If we go by averages, the answer is: no! We’ve got data to prove it. We’ve even got meta-analyses.

And yet, as Daniel Willingham argues, we should be aware of the variety in the data:

While [the] mean of the grand distribution may show a small hit to comprehension when background music plays, it’s NOT the case that every child reads a little worse with background music on.

He’s got a specific example in mind:

Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying.

In other words: psychology research can be immensely helpful. It can produce useful — even inspiring — guidance.

At the same time: when we work with our own students, we should always keep their individual circumstances in mind.

If this student right here needs music to stay focused and relaxed, then data on “the average student” just isn’t the right guide.
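
(Here’s a toy simulation in Python of the point Willingham is making. The numbers are invented for illustration, not drawn from any reading study: the average shows a small hit, and yet roughly a third of the simulated students do better with music on.)

```python
import random

random.seed(42)  # reproducible toy example

# Invented model: each student's change in reading-comprehension score
# (in points) when background music is playing. Not real study data.
changes = [random.gauss(-2, 5) for _ in range(1000)]

mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Average change with music: {mean_change:+.2f} points")     # a small hit, on average
print(f"Students who read BETTER with music: {improved} of 1000")  # yet many improve
```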

Does Hands-On Learning Benefit Science Students?
Andrew Watson

Phrases like “inquiry learning” or “project-based learning” inspire both enthusiasm and skepticism.

In part, the difference of opinion results from a very basic problem: it’s hard to define either term precisely. What, exactly, are the essential elements of inquiry learning?

If we can’t even answer that question, it will be jolly hard for researchers to know if the method “really works.”

Questions without Answers; Hands-On Learning

A study published earlier this year focuses on two key elements of inquiry learning.

First: teachers should let students investigate a scientific phenomenon without telling them what they’ll find. It’s called inquiry learning because teachers withhold the correct answers.

Second: teachers should encourage hands-on learning. As much as possible, students should do the work themselves, not watch the teacher do it.

If you approach education with a constructivist lens, you’re likely to favor both approaches. Students who make sense of ideas on their own — with their own thoughts and their own hands, without too much teacher guidance — are likeliest to think deeply about concepts.

If instead you start with cognitive load theory, you’re likely to worry about these practices. Students have relatively little working memory with which to process new ideas. The absence of teacher guidance, and the need to manipulate physical objects might well overwhelm precious cognitive resources.

What They Did; What They Found

Researchers, led by Zhang, taught 4th and 5th graders about converting potential energy to kinetic energy. They used balls rolling down ramps of different heights to illustrate these concepts.

In one case, a teacher told the students what to expect: the higher the ramp, the farther the ball will roll. The students then watched the teacher do the experiment. (That is: “direct instruction.”)

In another, the teacher told students what to expect, but let them roll balls down the ramps.

In the third case, the teacher didn’t tell students what to expect, and let them do the experiment. (That is: “inquiry learning.”)

So: which combination of inquiry techniques yielded the most learning?

Direct instruction did. By a fair margin. (Cohen’s d was 0.59: not huge, but certainly respectable.)

In fact, in this paradigm, “inquiry learning” was the least effective at helping students take these concepts on board.

(To be complete: direct instruction helped students (a) remember what they learned and (b) reason with that new knowledge. On a third measure, applying this new knowledge to real-world situations, both approaches worked equally well.)

At least in this one research paradigm, working memory limitations made constructivist pedagogy too difficult.

On The Other Hand…

When I first planned this post, I was excited to contrast Zhang’s study with a dramatic report from Washington State.

According to this report — here’s a one-page summary — 9th- and 10th-grade students who followed a constructivist inquiry curriculum (including hands-on learning) learned four extra months of science over two years.

That’s a simply staggering result.

I was hoping to argue that we should expect contradictory studies, and learn from the tensions between them.

In particular, the difference between a 1-shot study and a 2-year-long study should really get our attention.

Alas, I can’t make that argument here.

Compared to What?

In the ramp-and-ball study, Zhang’s three student groups learned under three equally plausible conditions. That is: she compared something to something else.

The Washington study, however, compares something to nothing.

That is: teachers at some schools got a shiny new curriculum and lots of dedicated professional development. Teachers at comparison schools got bupkis.

So, it’s entirely possible that the inquiry curriculum caused the extra learning.

It’s also possible that simply doing something new and exciting enlivened the teachers at the inquiry schools.

They might have been equally enlivened by some other kind of curriculum. Who knows: they might have found a well-designed direct-instruction curriculum inspiring.

Unless your control group is doing something, you can’t conclude that your intervention created the change. “Business as usual” — that’s what the researchers really called the control group! — doesn’t count as “doing something.”

An Invitation

Do you have a well-designed inquiry learning study that you love? Please send it to me: [email protected]. I’d love to write about it here…

Andrew Watson

Over at the Cult of Pedagogy, Jennifer Gonzalez has a FANTASTIC post summarizing lots of research on note-taking.

Some headlines:

Note-taking is a skill we should teach.

Visuals improve notes.

Pauses for revision and reflection help a lot.

I should note: Gonzalez cites the well-known Mueller & Oppenheimer study showing that handwritten notes help learning more than laptop notes do. Long-time readers know that I don’t think this study supports that conclusion.

In fact: I think it suggests that the opposite is true. My argument is here.

Despite our disagreement on this one point, there’s so much to savor in this summary that I recommend it highly.

Enjoy!

What’s the Best Timing for Collaborative Learning?
Andrew Watson

Learning can be a lonely business.

Does collaborative learning help students? If yes, what guidelines should teachers follow?

Collaborative Learning: Benefits and Detriments

Overall, we’ve got lots of research suggesting that collaboration helps students learn. And, happily, it doesn’t cost lots of extra dollars.

More specifically: the average score for students who learn in groups exceeds that of those who learn individually.

Unsurprisingly, students who struggle to learn benefit from practice with peers who understand better than they do.

At the same time, the highest scores tend to be lower in groups than among individual learners.

Working in groups, it seems, reduces the mental exploration necessary to find the best answers.

Given this background, we arrive at a really interesting question:

Can we get the benefits of group learning (higher average scores) AND the benefits of individual learning (the highest scores)?

It’s All in the Timing

Researchers at several Boston universities wondered if timing mattered. What would happen if students worked in groups at times and alone at other times?

The research team invited college students to work on a spatial puzzle. (It’s called the “Euclidean travelling salesperson problem.” I myself doubt that many of Euclid’s peers were travelling salespeople.)

Some of the students could always see their peers’ solutions. Some could never see those solutions. And some got to see every third solution.

Which groups progressed faster?

As they had hoped, the team found that the third group yielded both the highest average and the highest score.

In brief: teamwork helps most when team members also spend time working by themselves.

Classroom Implications for Collaborative Learning

This study offers a helpful suggestion. Teachers who use group work might ensure that group members work together at some times and solo at others.

At the same time, we should note some important caveats before we follow this guidance too strictly.

First: this study worked with college students. Its findings might apply to younger students. But, then again, they might not.

Second: this research is most easily described as “collaboration,” but that’s not exactly what the research team was studying. Notice: the participants never worked together on the travelling salesperson problem. Instead, they solved the problem on their own and then could (or could not) look at other students’ solutions.

That’s not typically how collaborative learning happens in schools.

More often, “collaborative learning” means that students work together on the project or problem. This study didn’t explore that approach.

(To be precise: the researchers focus on “collective intelligence,” not “collaborative learning.”)

Final Words

I myself think this research offers a helpful suggestion: occasional teamwork might lead to better results than constant (or absent) teamwork.

However, we should keep a sharp eye out for the actual results in our own classrooms. Unless you teach college students by having them look at each other’s correct answers, this study doesn’t explore your methodology precisely.

User mileage will vary.

Improve Your Syllabus & Lesson Plan With “Prior Knowledge”
Andrew Watson

When I talk with my English students about The Glass Menagerie, we always identify the protagonist and the antagonist. This discussion helps them understand useful literary terms. It also clarifies their understanding of the play.

Of course, as they consider this question, I want them to recall a similar conversation we had about Macbeth. In that play as well, we can struggle to determine who the antagonist might be.

In psychology terminology, I want my students to “activate prior knowledge.” Their discussion of The Glass Menagerie will improve if they think about their prior knowledge of Macbeth.

Here’s the simplest teaching strategy in the world. If I want them to think about Macbeth‘s protagonist before they discuss TGM, I can start our class discussion with Shakespeare.

Rather than hope my students draw on their prior Macbeth knowledge, I can ensure that they do so.

This remarkably simple strategy has gotten recent research support. In this study, Dutch psychologists simply told students to recall prior learning before they undertook new learning. Those simple instructions boosted students’ scores.

Prior Knowledge: From Lesson Plan to Syllabus

This research advice might seem quite simple — even too simple. At the same time, I think it helps us understand less intuitive teaching advice.

You have probably heard about “the spacing effect.” When students spread practice out over time, they learn more than if they do all their practice at once.

To illustrate this idea, let’s look at a year-long plan in a blog by Mr. Benney:

[Image: Mr. Benney’s year plan, showing topic 1 introduced in September and revisited in October, December, and April]

As you can see, Mr. Benney teaches his first science topic in September. He then includes topic-1 problems in his students’ October homework (“lag homework”). He reintroduces the subject in December. And returns to it one final time in April.

Clearly, he has spaced out his students’ interactions with this topic.

But, notice what happens when he does this with all eight topics:

[Image: the same year plan, with all eight topics spaced and interleaved]

For many teachers, May looks quite scary indeed. Students are learning topic 8. They’re doing lag homework on topic 7. They’re being reintroduced to topics 5 and 6. And they’re being re-re-introduced to topics 2 and 3.

Six topics all at the same time?

And yet, spacing requires interleaving. If Mr. Benney spreads out topic 1, then it will automatically interleave with the topics he’s teaching in October, December, and April. You can’t do one without the other.

Believe it or not, we have research showing that “interleaving,” like “spacing,” improves student learning.

Why would this be? After all, May’s syllabus looks really complicated.

Perhaps recent research on “prior knowledge” explains this result. If students are thinking about several topics at the same time, then their prior knowledge from previous months remains active.

Macbeth isn’t something we talked about 3 months ago. We have talked about it several times, including just last week.

Here’s the equation. Spacing automatically leads to interleaving. And, interleaving in turn keeps prior knowledge active. These three teaching strategies combine in multiple ways to help our students learn.
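
(To see that equation in action, here’s a rough Python sketch of a Benney-style year plan. The lags and labels are simplifications of the syllabus above, illustrative rather than prescriptive.)

```python
# Simplified Benney-style schedule: each topic is introduced in one month,
# then revisited three more times at growing lags. (Illustrative only.)
MONTHS = ["Sep", "Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr", "May", "Jun"]
REVISITS = [(1, "lag homework"), (3, "reintroduce"), (7, "final review")]

schedule = {month: [] for month in MONTHS}
for topic in range(1, 9):           # eight topics, one new topic per month
    intro = topic - 1               # topic 1 starts in Sep, topic 2 in Oct, ...
    schedule[MONTHS[intro]].append(f"topic {topic}: new")
    for lag, label in REVISITS:
        if intro + lag < len(MONTHS):
            schedule[MONTHS[intro + lag]].append(f"topic {topic}: {label}")

# Spacing each topic automatically interleaves it with the others:
for month, work in schedule.items():
    print(f"{month}: {'; '.join(work)}")
```

Notice that nothing in the code asks for interleaving; spacing each topic produces it automatically, which is why May ends up juggling several topics at once.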

Don’t Just Do This Thing; Think This Way
Andrew Watson

Teachers love hearing about brain research because it offers us specific and persuasive guidance.

The researcher says: when I DID THIS THING, students learned more than when I DID THAT THING.

As a thoughtful teacher, I draw the obvious conclusion. I too should DO THIS THING.

And yet, you might reach a different conclusion. If you’re interested in using research well, you might even reach a better conclusion.

Using Research Well: Finding the Right Font?

Here’s a specific example.

Back in 2011, Connor Diemand-Yauman published a study about unusual fonts. (You read that right. Font. As in: typeface.)

He had students learn some information in an easy-to-read font (Arial, 100% black). They learned other information in a harder-to-read font (for example, Bodoni MT, 60% black).

When retested, they remembered more of the information presented in the hard-to-read font.

Being a thorough researcher, Diemand-Yauman tried this hypothesis out in a high school. He had teachers use the Arial font in one of their sections, and the Bodoni MT in another.

Sure enough, the hard-to-read fonts (called “disfluent”) led to greater learning.

We teachers might take this study as an instruction to DO THIS THING. Given Diemand-Yauman’s results, that is, we should start using unusual fonts.

Using Research Well: Finding the Right Difficulty

Instead of DOING THIS THING, however, I think Diemand-Yauman’s research should inspire us to THINK THIS WAY.

Specifically, we should think about finding the right level of difficulty.

When students take on relatively simple material, we can help them learn it better by adding a bit of challenge.

We might — for example — print that information in a disfluent font.

We might space out practice further than usual.

Or, we might interleave this topic with other, similar kinds of information.

But: when students learn complex material, we don’t want to make it any more difficult. In this case, the font should be as fluent as possible. We would space practice out, but not so far. We would interleave, but not so much.

In other words: Diemand-Yauman’s research doesn’t tell us to use quirky fonts (“do this thing”).

Instead, it gives us another option for creating desirable difficulty (“think this way”).

But Wait: Does That Font Thing Really Work?

A just-published meta-analysis says: not so much. In the authors’ words:

“there is not enough evidence to show that it [using disfluent fonts] either stimulates analytic processing or increases extraneous cognitive load.”

In other words: hard-to-read fonts aren’t a desirable difficulty. And, they don’t stress working memory too much.

Although I haven’t looked at the nitty-gritty of this study (it’s behind a paywall), I have an alternate interpretation.

Perhaps in some cases disfluent fonts are a desirable difficulty. And, in other cases they stress working memory. In this case, those two findings would offset each other in a meta-analysis. The result would be — as this study finds — no consistent effect.

Who Decides?

If I’m right, a disfluent font might improve learning. Or, it might hinder learning.

So: who decides when to use one?

The answer is clear: THE TEACHER DECIDES. Only you know if the material is already hard enough (in which case, use a fluent font). Only you know if it needs some extra cognitive challenge to make it stick (in which case, think about a disfluent font).

No researcher can answer that question, because no researcher knows your curriculum, your school, and your students.

Rather than ask researchers to tell you what to do, let them guide you in thinking about teaching problems in more effective ways.

If you’re especially interested in desirable difficulties, here’s an article about a potentially desirable difficulty that turns out to be…not.