How to Stop Cheating: An Awkward Debate
Andrew Watson

We would, of course, LOVE to prevent cheating.

It does moral damage to the cheater. It undermines classroom trust. And: it makes it hard for us to know how much our students are actually learning.

So: what techniques might help us do so?

How To Prevent Cheating: “Moral Reminders”

For some time now, Dan Ariely has made this his field. (Check out his book: The (Honest) Truth about Dishonesty: How We Lie to Everyone — Especially Ourselves.)

Over the years, he developed a clever research paradigm to see how much people cheat. With that in place, he tested various strategies to prevent cheating.

(He can also promote cheating, but that’s not exactly what we’re looking for.)

One strategy that has gotten a lot of attention over the years: moral reminders.

Ariely asked some students to write down ten books they had read in high school. He asked the others to write down the Ten Commandments.

That is: he made them think about foundational moral standards in our culture.

Sure enough, once reminded about moral standards, students cheated less. (The Cohen’s d was 0.48, which is an impressive effect for such an easy intervention.)
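If you're curious how a Cohen's d gets calculated, here's a quick sketch in Python. The formula itself is standard (difference of group means divided by the pooled standard deviation), but the "cheating scores" below are numbers I invented purely to show the arithmetic; they are not Ariely's data.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a) +
                  (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return mean_diff / pooled_var ** 0.5

# Invented scores (say, answers claimed beyond what was actually solved):
books_group = [4, 6, 5, 7, 5, 6]         # recalled ten books
commandments_group = [3, 4, 3, 5, 4, 4]  # recalled the Ten Commandments
print(round(cohens_d(books_group, commandments_group), 2))
```

Roughly speaking, a d of 0.48 means the two groups' averages sit about half a standard deviation apart.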

Then Again, Maybe Not

In a study published just a month ago, Bruno Verschuere (and many others) retested Ariely’s hypothesis. Whereas the original study included 209 students, this multi-lab replication included almost 4700. That is … [checks math] … more than 20 times as many students.

Studying much more data, they found that “moral reminders” made no difference.

(In fact, they found that students who recalled the Ten Commandments were just a smidge likelier to cheat; but the difference was tiny — not even approaching statistical significance.)

As we’ve seen in other cases of the “replication crisis,” seemingly settled results are back in question.

What’s a Teacher to Do?

Of course, Ariely had other suggestions as well. Signing pledges not to cheat reduces cheating. And, of course, teachers who supervise students closely reduce their opportunities to cheat.

As far as I know, these strategies have not been retested (although the second one seems too obvious to need much retesting).

For the time being, sadly, we should rely less on indirect moral reminders, and more on direct pledges — and direct supervision.

Using and Misusing Averages: The Benefits of Music?
Andrew Watson

The “10 Minute Rule” tells us that people can’t pay attention to something for longer than ten minutes.

As teachers, therefore, we shouldn’t do any one thing for longer than ten minutes. We need to mix it up a bit.

There’s an obvious problem here. The “rule” assumes that all people think alike — that one number is correct for all students in all situations.

That’s a bizarre assumption. It’s also wildly untrue.

(In fact, the “rule” itself has a weird history.)

The Bigger Picture: When teachers convert averages into absolutes — like, say, the 10 minute rule — we’re likely to miss out on the distinct needs of our particular students.

Today’s Example

Should students listen to music when they study or read?

If we go by averages, the answer is: no! We’ve got data to prove it. We’ve even got meta-analyses.

And yet, as Daniel Willingham argues, we should be aware of the variety in the data:

While the mean of the grand distribution may show a small hit to comprehension when background music plays, it’s NOT the case that every child reads a little worse with background music on.

He’s got a specific example in mind:

Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying.

In other words: psychology research can be immensely helpful. It can produce useful — even inspiring — guidance.

At the same time: when we work with our own students, we should always keep their individual circumstances in mind.

If this student right here needs music to stay focused and relaxed, then data on “the average student” just isn’t the right guide.

 

Does Hands-On Learning Benefit Science Students?
Andrew Watson

Phrases like “inquiry learning” or “project-based learning” inspire both enthusiasm and skepticism.


In part, the difference of opinion results from a very basic problem: it’s hard to define either term precisely. What, exactly, are the essential elements of inquiry learning?

If we can’t even answer that question, it will be jolly hard for researchers to know if the method “really works.”

Questions without Answers; Hands-On Learning

A study published earlier this year focuses on two key elements of inquiry learning.

First: teachers should let students investigate a scientific phenomenon without telling them what they’ll find. It’s called inquiry learning because teachers withhold the correct answers.

Second: teachers should encourage hands-on learning. As much as possible, students should do the work themselves, not watch the teacher do it.

If you approach education with a constructivist lens, you’re likely to favor both approaches. Students who make sense of ideas on their own — with their own thoughts and their own hands, without too much teacher guidance — are likeliest to think deeply about concepts.

If instead you start with cognitive load theory, you’re likely to worry about these practices. Students have relatively little working memory with which to process new ideas. The absence of teacher guidance, and the need to manipulate physical objects might well overwhelm precious cognitive resources.

What They Did; What They Found

Researchers, led by Zhang, taught 4th and 5th graders about converting potential energy to kinetic energy. They used balls rolling down ramps of different heights to illustrate these concepts.

In one case, a teacher told the students what to expect: the higher the ramp, the farther the ball will roll. The students then watched the teacher do the experiment. (That is: “direct instruction.”)

In another, the teacher told students what to expect, but let them roll balls down the ramps.

In the third case, the teacher didn’t tell students what to expect, and let them do the experiment. (That is: “inquiry learning.”)

So: which combination of inquiry techniques yielded the most learning?

Direct instruction did. By a fair margin. (Cohen’s d was 0.59: not huge, but certainly respectable.)

In fact, in this paradigm, “inquiry learning” was the least effective at helping students take these concepts on board.

(To be complete: direct instruction helped students a) remember what they learned and b) reason with that new knowledge. On a third measure, applying this new knowledge to real-world situations, both approaches worked equally well.)

At least in this one research paradigm, working memory limitations made constructivist pedagogy too difficult.

On The Other Hand…

When I first planned this post, I was excited to contrast Zhang’s study with a dramatic report from Washington State.

According to this report — here’s a one-page summary — 9th- and 10th-grade students who followed a constructivist inquiry curriculum (including hands-on learning) learned four extra months of science over two years.

That’s a simply staggering result.

I was hoping to argue that we should expect contradictory studies, and learn from the tensions between them.

In particular, the difference between a 1-shot study and a 2-year-long study should really get our attention.

Alas, I can’t make that argument here.

Compared to What?

In the ramp-and-ball study, Zhang’s three student groups learned under three equally plausible conditions. That is: she compared something to something else.

The Washington study, however, compares something to nothing.

That is: teachers at some schools got a shiny new curriculum and lots of dedicated professional development. Teachers at comparison schools got bupkis.

So, it’s entirely possible that the inquiry curriculum caused the extra learning.

It’s also possible that simply doing something new and exciting enlivened the teachers at the inquiry schools.

They might have been equally enlivened by some other kind of curriculum. Who knows: they might have found a well-designed direct-instruction curriculum inspiring.

Unless your control group is doing something, you can’t conclude that your intervention created the change. “Business as usual” — that’s what the researchers really called the control group! — doesn’t count as “doing something.”

An Invitation

Do you have a well-designed inquiry learning study that you love? Please send it to me: [email protected]. I’d love to write about it here…

 

Andrew Watson

Over at the Cult of Pedagogy, Jennifer Gonzalez has a FANTASTIC post summarizing lots of research on note-taking.

Some headlines:

Note-taking is a skill we should teach.

Visuals improve notes.

Pauses for revision and reflection help a lot.

I should note: Gonzalez cites the well-known Mueller & Oppenheimer study showing that handwritten notes help learning more than laptop notes do. Long-time readers know that I don’t think this study supports that conclusion.

In fact: I think it suggests that the opposite is true. My argument is here.

Despite our disagreement on this one point, there’s so much to savor in this summary that I recommend it highly.

Enjoy!

What’s the Best Timing for Collaborative Learning?
Andrew Watson

Learning can be a lonely business.

Does collaborative learning help students? If yes, what guidelines should teachers follow?

Collaborative Learning: Benefits and Detriments


Overall, we’ve got lots of research suggesting that collaboration helps students learn. And, happily, it doesn’t cost lots of extra dollars.

More specifically: the average score for students who learn in groups exceeds that of those who learn individually.

Unsurprisingly, students who struggle to learn benefit from practice with peers who understand better than they do.

At the same time, the highest scores tend to be lower in groups than among individual learners.

Working in groups, it seems, reduces the mental exploration necessary to find the best answers.

Given this background, we arrive at a really interesting question:

Can we get the benefits of group learning (higher average) AND the benefits of individual learning (highest scores)?

It’s All in the Timing

Researchers at several Boston universities wondered if timing mattered. What would happen if students worked in groups at times and alone at other times?

The research team invited college students to work on a spatial puzzle. (It’s called the “Euclidean travelling salesperson problem.” I myself doubt that many of Euclid’s peers were travelling salespeople.)

Some of the students could always see their peers’ solutions. Some could never see those solutions. And some got to see every third solution.

Which groups progressed faster?

As they had hoped, the team found that the third group yielded both the highest average and the highest score.

In brief: teamwork helps most when team members also spend time working by themselves.

Classroom Implications for Collaborative Learning

This study offers a helpful suggestion. Teachers who use group work might ensure that group members work together at some times and solo at others.

At the same time, we should note some important caveats before we follow this guidance too strictly.

First: this study worked with college students. Its findings might apply to younger students. But, then again, they might not.

Second: this research is most easily described as “collaboration,” but that’s not exactly what the research team was studying. Notice: the participants never worked together on the travelling salesperson problem. Instead, they solved the problem on their own and then could (or could not) look at other students’ solutions.

That’s not typically how collaborative learning happens in schools.

More often, “collaborative learning” means that students work together on the project or problem. This study didn’t explore that approach.

(To be precise: the researchers focus on “collective intelligence,” not “collaborative learning.”)

Final Words

I myself think this research offers a helpful suggestion: occasional teamwork might lead to better results than constant (or absent) teamwork.

However, we should keep a sharp eye out for the actual results in our own classrooms. Unless you teach college students by having them look at each other’s correct answers, this study doesn’t explore your methodology precisely.

User mileage will vary.

Improve Your Syllabus & Lesson Plan With “Prior Knowledge”
Andrew Watson

When I talk with my English students about The Glass Menagerie, we always identify the protagonist and the antagonist. This discussion helps them understand useful literary terms. It also clarifies their understanding of the play.


Of course, as they consider this question, I want them to recall a similar conversation we had about Macbeth. In that play as well, we can struggle to determine who the antagonist might be.

In psychology terminology, I want my students to “activate prior knowledge.” Their discussion of The Glass Menagerie will improve if they think about their prior knowledge of Macbeth.

Here’s the simplest teaching strategy in the world. If I want them to think about Macbeth’s antagonist before they discuss TGM, I can start our class discussion with Shakespeare.

Rather than hope my students draw on their prior Macbeth knowledge, I can ensure that they do so.

This remarkably simple strategy has gotten recent research support. In this study, Dutch psychologists simply told students to recall prior learning before they undertook new learning. Those simple instructions boosted students’ scores.

Prior Knowledge: From Lesson Plan to Syllabus

This research advice might seem quite simple — even too simple. At the same time, I think it helps us understand less intuitive teaching advice.

You have probably heard about “the spacing effect.” When students spread practice out over time, they learn more than if they do all their practice at once.

To illustrate this idea, let’s look at a year-long plan in a blog by Mr. Benney:

Benney Syllabus 1

As you can see, Mr. Benney teaches his first science topic in September. He then includes topic-1 problems in his students’ October homework (“lag homework”). He reintroduces the subject in December. And returns to it one final time in April.

Clearly, he has spaced out his students’ interactions with this topic.

But, notice what happens when he does this with all eight topics:

Benney Syllabus 2

For many teachers, May looks quite scary indeed. Students are learning topic 8. They’re doing lag homework on topic 7. They’re being reintroduced to topics 5 and 6. And they’re being re-re-introduced to topics 2 and 3.

Six topics all at the same time?

And yet, spacing requires interleaving. If Mr. Benney spreads out topic 1, then it will automatically interleave with the topics he’s teaching in October, December, and April. You can’t do one without the other.

Believe it or not, we have research that “interleaving,” like “spacing,” improves student learning.

Why would this be? After all, May’s syllabus looks really complicated.

Perhaps recent research on “prior knowledge” explains this result. If students are thinking about several topics at the same time, then their prior knowledge from previous months remains active.

Macbeth isn’t just something we talked about three months ago. We have talked about it several times, including just last week.

Here’s the equation. Spacing automatically leads to interleaving. And, interleaving in turn keeps prior knowledge active. These three teaching strategies combine in multiple ways to help our students learn.
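To make that pattern concrete, here's a small Python sketch of a spaced (and therefore interleaved) schedule. The revisit offsets (teach a topic, assign lag homework one month later, reintroduce it after three months, return to it after seven) are my simplified reading of the example above, not Mr. Benney's actual syllabus.

```python
# Hypothetical revisit pattern: teach, lag homework, reintroduce, final return.
MONTHS = ["Sep", "Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr", "May", "Jun"]
REVISIT_OFFSETS = [0, 1, 3, 7]

def month_plan(num_topics):
    """Map each month to the topics a student touches in that month."""
    plan = {month: [] for month in MONTHS}
    for topic in range(1, num_topics + 1):
        start = topic - 1  # topic 1 starts in Sep, topic 2 in Oct, ...
        for offset in REVISIT_OFFSETS:
            if start + offset < len(MONTHS):
                plan[MONTHS[start + offset]].append(topic)
    return plan

plan = month_plan(8)
print(plan["May"])  # later months automatically pile up several topics at once
```

Even this toy version shows the key point: once every topic gets spaced revisits, later months interleave several topics whether we planned that or not.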

Don’t Just Do This Thing; Think This Way
Andrew Watson

Teachers love hearing about brain research because it offers us specific and persuasive guidance.


The researcher says: when I DID THIS THING, students learned more than when I DID THAT THING.

As a thoughtful teacher, I draw the obvious conclusion. I too should DO THIS THING.

And yet, you might reach a different conclusion. If you’re interested in using research well, you might even reach a better conclusion.

Using Research Well: Finding the Right Font?

Here’s a specific example.

Back in 2011, Connor Diemand-Yauman published a study about unusual fonts. (You read that right. Font. As in: typeface.)

He had students learn some information in an easy-to-read font (Arial, 100% black). They learned other information in a harder-to-read font (for example, Bodoni MT, 60% black).

When retested, they remembered more of the information in the hard-to-read font.

Being a thorough researcher, Diemand-Yauman tried this hypothesis out in a high school. He had teachers use the Arial font in one of their sections, and the Bodoni MT in another.

Sure enough, the hard-to-read fonts (called “disfluent”) led to greater learning.

We teachers might take this study as an instruction to DO THIS THING. Given Diemand-Yauman’s results, that is, we should start using unusual fonts.

Using Research Well: Finding the Right Difficulty

Instead of DOING THIS THING, however, I think Diemand-Yauman’s research should inspire us to THINK THIS WAY.

Specifically, we should think about finding the right level of difficulty.

When students take on relatively simple material, we can help them learn it better by adding a bit of challenge.

We might — for example — print that information in a disfluent font.

We might space out practice further than usual.

Or, we might interleave this topic with other, similar kinds of information.

But: when students learn complex material, we don’t want to make it any more difficult. In this case, the font should be as fluent as possible. We would space practice out, but not so far. We would interleave, but not so much.

In other words: Diemand-Yauman’s research doesn’t tell us to use quirky fonts (“do this thing”).

Instead, it gives us another option for creating desirable difficulty (“think this way”).

But Wait: Does That Font Thing Really Work?

A just-published meta-analysis says: not so much. In the authors’ words:

“there is not enough evidence to show that it [using disfluent fonts] either stimulates analytic processing or increases extraneous cognitive load.”

In other words: hard-to-read fonts aren’t a desirable difficulty. And, they don’t stress working memory too much.

Although I haven’t looked at the nitty-gritty of this study (it’s behind a paywall), I have an alternate interpretation.

Perhaps in some cases disfluent fonts are a desirable difficulty. And, in other cases they stress working memory. In this case, those two findings would offset each other in a meta-analysis. The result would be — as this study finds — no consistent effect.
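Here's a toy numeric illustration of that offsetting idea, with invented effect sizes (positive numbers mean the disfluent font helped; negative numbers mean it hurt):

```python
# Invented per-study effect sizes, purely for illustration:
effects_where_it_helped = [0.4, 0.3, 0.5]    # extra difficulty was desirable
effects_where_it_hurt = [-0.4, -0.3, -0.5]   # extra difficulty overloaded working memory
all_effects = effects_where_it_helped + effects_where_it_hurt
mean_effect = sum(all_effects) / len(all_effects)
print(round(mean_effect, 2))  # pooled average is ~0, though every study found an effect
```

If a meta-analysis pools studies like these, the average looks like "no consistent effect" even though the font mattered in every single study.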

Who Decides?

If I’m right, a disfluent font might improve learning. Or, it might hinder learning.

So: who decides when to use one?

The answer is clear: THE TEACHER DECIDES. Only you know if the material is already hard enough (in which case, use a fluent font). Only you know if it needs some extra cognitive challenge to make it stick (in which case, think about a disfluent font).

No researcher can answer that question, because no researcher knows your curriculum, your school, and your students.

Rather than ask researchers to tell you what to do, let them guide you in thinking about teaching problems in more effective ways.

 

If you’re especially interested in desirable difficulties, here’s an article about a potentially desirable difficulty that turns out to be…not.

Just Not a Useful Debate: Learning Styles Theory [Updated]
Andrew Watson

At one of the first Learning and the Brain conferences I attended, a speaker briefly mentioned that learning styles theory doesn’t have much good evidence to support it.


That comment turned into a heated debate. Several attendees asked vexed, unhappy questions. The speaker held her ground.

When I got in the elevator at the end of that session, I heard one attendee curtly dismiss the speaker’s objection: “well, it’s all just statistics.”

It’s All Just Statistics

Well, it IS all statistics.

In the worlds of neuroscience and psychology, researchers rely on statistical methods to ensure their recommendations aren’t simply hunches.

Anyone can stand behind a microphone and have an opinion. But: if you’re going to do scientific research, your numbers have to add up.

And, as researchers look at valid statistical models, they just don’t find good support for the idea that — for instance — some people are visual learners and others are auditory learners.

The numbers just don’t add up. Or, in this case: if you teach “visual learners” “visually,” they don’t learn any more than if you had taught them “auditorily” or “kinesthetically.”

Multiple Entry Points

Instead, the content itself often offers guidance on the best way to teach. If you’re teaching a French or Spanish or Japanese accent, that content is — by its nature — auditory.

If you’re teaching geography, that content is visual.

Free throws? Kinesthetic.

Most content, however, can be taught in multiple ways.

For example: I’m thinking of an actress. She’s Australian. She played Virginia Woolf in that movie. And, she was married to Tom Cruise.

If you’re shouting NICOLE KIDMAN, you’re right. Notice that I gave you three entry points to the neural network that encodes this memory: her country of origin, a role she played, and her marriage.

So: “teaching to learning styles” helps because you probably teach your content in different ways — auditorily, visually, and kinesthetically. Those three different approaches give distinct connections to the memory you want your students to form.

This approach to teaching helps not because of a student’s learning style, but because all your students now have multiple ways to access that memory.

In other words, the theory helps students learn — but not for the reason it claims to.

“Learning Styles”: Today’s News

Daniel Willingham — one of the early debunkers of learning styles myths — has recently posted his current thoughts on learning styles. The short version:

Nope. Learning Styles still don’t exist. Really.

Learners should “tune their thinking to the task.” That is: learn about geography visually — even if you think you’re not a “visual learner.”

More than many researchers, Willingham gets teachers and teaching. So: if you’re still a learning-styles believer, I encourage you to check out his article.

 

In related news: Greg Ashman argues that, no, rejecting learning styles theory isn’t sexist. After all, LOTS of thoughtful female researchers reject the theory.

And: the Learning Scientists have a great take on this debate. We shouldn’t focus simply on rejecting learning styles theory. Instead, we should replace it with a better theory. They have thoughts on how to do so.

[Update, 6/25/18]

Finally, Scientific American has a recent article showing that most students don’t use the learning styles that they believe would benefit them. And, when they do, those strategies don’t help them learn.

Andrew Watson

Unless you’ve been napping under a LatB rock, you’ve heard about the importance of research-based study habits.


In particular, you know that students should spread practice out over time rather than bunching practice all together. (The benefits are called the spacing effect.)

And, you know that students should not simply look over what they already know. Instead, they should quiz themselves to see what they can actively retrieve from memory. (That’s called retrieval practice.)

Here’s a little secret you might not know: most of the research about the spacing effect and retrieval practice takes place in psychology labs.

What happens in the real world? Do students who use these techniques actually learn more than those who don’t?

Asking Students about their Study Habits

In a recent study, Fernando Rodriguez and colleagues surveyed students about their study practices.

Do these students space practice over time? Do they do all of their studying all in one session?

Perhaps they quiz themselves on what they know? Or, perhaps they reread the textbook?

Rodriguez & Co. then compared these answers to the students’ grade in the class. By this method, they could tease out the effects of spacing and retrieval practice on actual learning.

So: did these research-endorsed study habits translate into classroom learning?

No. And, Yes.

Rodriguez found mixed results.

Study habits that spaced practice out didn’t make any difference. Students who crammed and students who studied material in several brief sessions got the same final grade.

(I’ll propose an explanation for this finding below.)

However, retrieval practice made a clearly measurable difference. Students who reviewed material averaged a B-. Those who self-tested averaged a B.

Given that both study techniques take the same amount of time, it obviously makes sense to self-test. Students who do so learn more. Retrieval practice just works.

Spacing Doesn’t Help? Or, Spacing Already Helped?

If we’ve got so much research showing the benefits of spacing, why didn’t it help students in this class?

We don’t know for sure, but one answer stands out as very probable: the professor already did the spacing for the students.

That is: the syllabus included frequent review sessions. It had several cumulative tests. The class structure itself required students to think about the material several times over the semester.

Even if students wanted to cram, they couldn’t wait until the last moment to review. The test schedule alone required them to review multiple times.

So: the students’ own additional spacing study habits didn’t help.

However, in a class where the professor hadn’t required spacing, it most likely would have done so.

The Bigger Picture

This possibility, in my view, underlines a bigger point about spacing and retrieval practice:

For the most part, students have primary responsibility for retrieval practice, whereas teachers have primary responsibility for spacing.

That is: students — especially older students — should learn to review by using retrieval practice strategies. (Of course, especially with younger students, teachers should teach RP strategies. And, offer frequent reminders.)

Teachers — in our turn — should design our courses to space practice out. (Of course, students should do what they can to space practice as well.)

In other words: retrieval practice is largely a study habit. Spacing is largely a teaching habit.

Students will get the most benefit from this research when we divide up responsibility this way.

The Best Way to Take Notes: More Feisty Debate
Andrew Watson

Over at The Learning Scientists, Carolina Kuepper-Tetzel asks: is it better to take longhand notes? Or to annotate slides provided by the speaker? Or, perhaps, simply to listen attentively?


(Notice, by the way, that she’s not exploring the vexed question of longhand notes vs. laptop notes.)

Before we get to her answer, it’s helpful to ask a framing question: how do brain scientists approach that topic in the first place? What lenses might they use to examine it?

Lens #1: The Right Level of Difficulty

Cognitive scientists often focus on desirable difficulties.

Students might want their learning to be as easy as possible. But, we’ve got lots of research to show that easy learning doesn’t stick.

For instance: reviewing notes makes students feel good about their learning, because they recognize a great deal of what they wrote down. “I remember that! I must have learned it!”

However, that easy recognition doesn’t improve learning. Instead, self-testing is MUCH more helpful. (Check out retrievalpractice.org for a survey of this research, and lots of helpful strategies.)

Of course, we need to find the right level of difficulty. Like Goldilocks, we seek out a teaching strategy that’s neither too tough nor too easy.

In the world of note-taking, the desirable-difficulty lens offers some hypotheses.

On the one hand, taking longhand notes might require just the right level of difficulty. Students struggle — a bit, but not too much — to distinguish the key ideas from the supporting examples. They worry — but not a lot — about defining all the key terms just right.

In this case, handwritten notes will benefit learning.

On the other hand, taking longhand notes might tax students’ cognitive capacities too much. They might not be able to sort ideas from examples, or to recall definitions long enough to write them down.

In this case, handing out the slides to annotate will reduce undesirable levels of difficulty.

Lens #2: Working Memory Overload

Academic learning requires students to

focus on particular bits of information,

hold them in mind,

reorganize and combine them into some new mental pattern.

We’ve got a particular cognitive capacity that allows us to do that. It’s called working memory. (Here’s a recent post about WM, if you’d like a refresher.)

Alas, people need WM to learn in schools, but we don’t have very much of it. All too frequently, working memory overload prevents students from learning.

Here’s a key problem with taking longhand notes: to do so, I use my working memory to

focus on the speaker

understand her ideas

decide which ones merit writing down

reword those ideas into simpler form (because I can’t write as fast as she speaks)

write

(at the same time that I’m deciding, rewording, and writing) continue understanding the ideas in the lecture

(at the same time that I’m rewording, writing, and continuing) continue deciding what’s worth writing down.

That’s a HUGE working memory load.

Clearly, longhand notes keep a high WM load. Providing slides to annotate reduces that load.

Drum Roll, Please…

What does recent research tell us about longhand notes vs. slide annotation? Kuepper-Tetzel, summarizing a recent conference presentation, writes:

participants performed best … when they took longhand notes during the lecture compared to [annotating slides or passively listening].

More intriguing, the group who just passively viewed the lecture performed as well as the group who were given the slides and made annotations.

Whether the lecture was slow- or fast-paced did not change this result.

Longhand notetaking was always more beneficial for long-term retention of knowledge than both annotated slides and passive viewing.

By the way: in the second half of the study, researchers tested students eight weeks later. They found that longhand note-takers did as well as annotators even though they studied less.

It seems that the desirable difficulty of handwriting notes yielded stronger neural networks. Those networks required less reactivation — that is, less study time — to produce equally good test results.

Keep In Mind…

Note that Kuepper-Tetzel is summarizing as-yet-unpublished research. The peer-review process certainly has its flaws, but it can also provide some degree of confidence. So far, this research hasn’t cleared that bar.

Also note: this research used lectures with a particular level of working memory demand. Some of our students, however, fall below the average in our particular teaching context. They might need more WM support.

We might also be covering especially complicated material on a particular day. That is: the WM challenges in our classes vary from day to day. On the more challenging days, all students might need more WM support.

In these cases, slides to annotate — not longhand notes — might provide the best level of desirable difficulty.

As is always the case, use your best professional judgment as you apply psychology research in your classroom.