Concrete + Abstract = Math Learning
Andrew Watson

Early math instruction includes daunting complexities.

We need our students to understand several sophisticated concepts. And, we need them to learn a symbolic language with which to represent those concepts.

Take, for example, the concept of equivalence. As adults, you and I can readily solve this problem: 3 + 4 = 4 + __

Early math learners, however, can easily stumble. Often, they take the equals sign to mean “add up all the numbers,” and believe the correct answer to that question is “11.”

How can we help them through this stage of understanding?

Strategy #1: Switch from Abstract to Concrete

The first answer to the question seems quite straightforward. If the abstract, symbolic language of math (“3+4=___”) confuses students, let’s switch to a more concrete language.

For instance: “If my frog puppet has three oranges, and your monkey puppet has four oranges, how many oranges do they have together?”

It just seems logical: the switch from abstract to concrete ought to help.

Alas, those concrete examples have a hidden downside.

As Dan Willingham argues in Why Don’t Students Like School?, humans naturally focus on surface features of learning.

When children see monkeys and frogs and oranges, they associate the lesson with those specific entities–not with the underlying mathematical properties we want them to learn.

In edu-lingo, concrete examples can inhibit transfer. Students struggle to transfer a lesson about oranges and puppets to anything else.

Strategy #2: “Fade” from Concrete to Abstract

Taking their cue from Jerome Bruner, psychology researchers wondered if they could start with concrete examples and then, over time, switch to more abstract examples.

For instance, students might start learning about mathematical equivalence by using a balance. When they put an equal number of tokens on both sides, the balance is level.

In the second step, they do practice problems with pictures of a balance and tokens.

And, in the final step, they see abstract representations: 2 + 5 = 5 + __.

They describe this technique as “concreteness fading.”

And, sure enough, it worked. In this case, “worked” meant that students who learned equivalence through a concreteness fading method transferred their knowledge to different–and more difficult–problems.

They did so better than students who learned in a purely abstract way. And, better than students who learned in a purely concrete way. (And even, as a control condition, better than students who started with an abstract representation, and then switched to concrete.)

By the way: these researchers tested their hypothesis both with students who had a relatively low level of knowledge in this area, and those who had a high level of knowledge. They got (basically) the same results both times.

An Essential Detail

When we teachers try to incorporate psychology research into our teaching, we can sometimes find that it conflicts with actual experience.

In this case, we might find that our young math learners just “get it” faster when we use frog puppets. Given that experience, we might hesitate to fade over to abstract teaching.

This research shows an intriguing pattern.

Sure enough, students who began with concrete examples made fewer mistakes on early practice problems. And, that finding was true for both the “concrete only” group and the “concreteness fading” groups.

In other words, the “abstract only” group did worse on the early practice problems than did those groups.

But…and this is a CRUCIAL “but”…the “concrete only” group didn’t do very well on the transfer test. Their raw scores were the lowest of the bunch.

However, the “concreteness fading” group did well on the early problems AND on the transfer test.

It seems that, as the researchers feared, too much concrete instruction reduced transfer.

 

In sum: “concreteness fading” gives young math learners both a helpfully clear introduction to math concepts and the abstract understanding that allows transfer.


Fyfe, E. R., McNeil, N. M., & Borjas, S. (2015). Benefits of “concreteness fading” for children’s mathematics understanding. Learning and Instruction, 35, 104–120.

When Good Classroom Assignments Go Bad
Andrew Watson

As an English teacher, I rather love this assignment for 9th graders reading Romeo and Juliet:

Choose a character from the play.

Write a short monologue–20 lines or so–exploring that character’s feelings about a particular moment, or another character.

Be sure to write in iambic pentameter.

This assignment lets my students explore a character’s point of view in thoughtful detail. It encourages empathy and imagination. And, it allows them to play with a poetic meter that’s been at the rhythmic heart of English literature since we had English literature.

So, again, as an English teacher I love it.

But as someone who knows from cognitive science, I fear it’s simply not going to work (for most 9th graders on the planet).

Good Intentions Meet Cognitive Limitations

Regular readers know that students use their working memory all the time to grok their classroom work.

Working memory is vital to all classroom learning. And, alas, we just don’t have very much of it.

And, this assignment (almost certainly) places far too great a demand on my students’ WM.

Students must use their WM to…

…choose among the characters of the play. (Yes: choices take up WM resources.)

…choose among the dramatic events their chosen character experiences.

…create a wisely empathetic response to a dramatic event. (Yes: creativity requires working memory.)

And, on top of that, to…

…express richly Shakespearean logic and emotion within a tightly structured, largely unpracticed poetic meter. (If you doubt that writing in iambic pentameter takes working memory, try rewriting this sentence in iambic pentameter. Your prefrontal cortex will be aching in no time.)

So much cognitive load will overwhelm all but the most inventive of students.

Solving the Problem

Given that this assignment could be so powerful, how might we adapt it to fit within working memory limitations?

Two strategies come quickly to mind.

First, redistribute the working memory demands. That is: don’t have students do all the WM work at the same time.

In this case, that suggestion can be easily implemented.

First night’s homework: choose the character, and describe or outline the dramatic moment.

Second night’s homework: write the monologue in modern English.

This approach spreads out the working memory demands over time. All the choosing, and some of the creativity, happens on the first night. The rest of the creativity happens on night #2.

Second, reduce the working memory demands. Unless your students have practiced with iambic pentameter A LOT more than my students have, they’re likely to struggle to compose 20 fresh lines.

My own teacherly instincts would be to have them experiment with existing poetry. For instance, a fun sonnet might serve as a scaffold for early, tentative work.

In sonnet 130, Shakespeare famously laments the use of extravagant metaphors to hyper-praise women:

My mistress’ eyes are nothing like the sun.

Coral is far more red than her lips’ red.

And yet, by heav’n, I think my love as rare

As any she belied with false compare.

Can my students devise their own version of these sentiments? And, can they preserve the meter?

My boyfriend’s eyes are not as blue as sky.

For reals, his abs just aren’t what you’d call “shredded.”

And yet, by heav’n, I think my guy as hott

As any bae that Beyoncé has got.

Of course, scaffolding is called “scaffolding” because we can take it down. So, once students can manage iambic pentameter with this level of support, we can prompt them to devise more and more free-form iambic creations.

With enough practice, they might–some day–be able to compose 20 fresh lines of their own.

Can Multiple-Choice Tests Really Help Students?
Andrew Watson

Multiple-choice tests have a bad reputation. They’re easy to grade, but otherwise seem…well…hard to defend.

After all, the answer is RIGHT THERE. How could the student possibly get it wrong?

Given that undeniable objection, could multiple-choice tests possibly be good for learning?

The Benefits of Distraction

A multiple-choice test includes one correct answer, and other incorrect answers called “distractors.” Perhaps the effectiveness of a multiple-choice question depends on the plausibility of the distractors.

So, a multiple-choice question might go like this:

“Who was George Washington’s Vice President?”

a) John Adams

b) Mickey Mouse

c) Tom Brady

d) Harriet Tubman

In this case, none of the distractors could possibly be true. However, I could ask the same question a different way:

“Who was George Washington’s Vice President?”

a) John Adams

b) Thomas Jefferson

c) Alexander Hamilton

d) James Madison

In THIS case, each of the distractors could reasonably have held that role. In fact, all three worked closely with–and deeply admired–Washington. Two of the three did serve as vice presidents. (And the other was killed by a VP.)

Why would the plausibility of the distractor matter?

We know from the study of retrieval practice that pulling information out of my brain benefits memory more than repeatedly putting information into it.

So, we might hypothesize this way:

If the distractors are implausible, a student doesn’t have to think much to figure out the correct answer. No retrieval required.

But, if the distractors are plausible, then the student has to think about each one to get the answer right. That’s lots of retrieval right there.

In other words: plausible distractors encourage retrieval practice, and thereby might enhance learning.

Better and Better

This line of reasoning leads to an even more delicious possibility.

To answer that question about Washington’s VP, the student had to think about four people: Adams, Jefferson, Hamilton, Madison.

Presumably she’ll learn the information about Adams–who was the correct answer to the question.

Will she also learn more about the other three choices? That is: will she be likelier to answer a question about Alexander Hamilton correctly? (“Who created the first US National Bank as Washington’s Secretary of the Treasury?”)

If the answer to that question is YES, then one multiple-choice question can help students consolidate learning about several different facts or concepts.

And, according to recent research, the answer is indeed YES.

The research paradigm used to explore this question requires lots of complex details, and goes beyond the scope of a blog post. If you’re interested, check out the link above.

Classroom Implications

If this research holds up, we might well have found a surprisingly powerful tool to help students acquire lots of factual knowledge.

A well-designed multiple-choice question–that is: one whose plausible distractors require lots of careful thought–helps students learn four distinct facts or concepts.

In other words:

“Multiple-choice questions…

a) are easy to grade

b) help students learn the correct answer

c) help students learn information about the incorrect answers

or

d) all of the above.”

Me: I’m thinking d) sounds increasingly likely…

Aware: The Science and Practice of Presence–The Groundbreaking Meditation Practice by Daniel J. Siegel, MD
Rebecca Gotlieb

Aware: The Science and Practice of Presence guides readers through a meditative practice based on focused attention, open awareness, and kind intentions to strengthen the mind and improve mental and physical well-being. Daniel J. Siegel, the author, is a NYT bestselling writer, clinical professor of psychiatry at the University of California, Los Angeles School of Medicine, founder of the Mindful Awareness Research Center, and the executive director of the Mindsight Institute. Aware will be of interest to individuals seeking to promote well-being and build resilient minds by understanding consciousness and training their mind.

Siegel begins with the stories of five people at different life stages and in different and challenging circumstances. These individuals’ lives were greatly improved by committing to Siegel’s “Wheel of Awareness” practice. The practice is premised on the idea that, “where attention goes, neural firing flows, and neural connection grows” (p. 19)—i.e., that what our mind does changes how our brain behaves and this can have enduring effects on how we act and who we are. He argues that human experience is shaped by interactions among our bodies, brains, minds, and social relationships. Each of these forces contributes to our continually emerging sense of self (i.e., self as a verb rather than a noun).

Siegel offers tips for how to prepare one’s mind to meditate and how to focus on one’s breath. He then explains that the wheel practice involves guided shifts in attention. The first step is to attend to one’s breath, then to each of the five senses, then to internal bodily signals (e.g., signals from the heart).  Next, the practice involves attending to one’s active thoughts, feelings and memories, and generally to the content of one’s awareness. The final steps involve opening oneself to connections with others, and focusing on wishes of happiness, health, safety, and flourishing for others.

We are often led to believe that we are each alone. Siegel argues that this not only causes suffering, but also is inaccurate. We are inherently social creatures and are deeply connected to one another. Our compassionate connections with others powerfully shape our mind and identity. When we share ourselves with others we all benefit. Laughter among friends, for example, helps us be in the present moment, be open to learning, and mitigates suffering.

Although many people believe the brain gives rise to the mind, Siegel offers compelling neuroscientific evidence that the body also contributes meaningfully to the construction of the mind. Further, the mind can change the body and brain. For example, experiences of trauma, especially in early life, can shape how people behave and the ways in which regions of their brain communicate.  Working to heal the effects of trauma and finding meaning in life gives the individual renewed personal strength and also can move the brain to become more integrated.

Drawing parallels from quantum physics theories about energy flow, probability, and the malleability of space and time, Siegel offers intriguing novel suggestions about the mind, consciousness, and the way we experience reality. He argues that mental illness or anguish is often characterized by rigid or chaotic thinking. Releasing the brain from its typical conscious experiences, thinking more freely, and striving for integration within ourselves and with other people can be therapeutic and helpful for making sense of an unpredictable world.

Aware and the related materials freely available on Siegel’s website offer readers an accessible, scientifically-informed meditative practice that can relieve suffering, increase mental strength, and improve health.

 

Siegel, D. (2018). Aware: The Science and Practice of Presence–The Groundbreaking Meditation Practice. New York, NY: Penguin Publishing Group.

 

Andrew Watson

Earlier this month, I wrote about the distinction between autobiographical memory and semantic memory.

Both kinds help us live meaningful lives.

But, schools focus on semantic memory: we want our students to know facts and skills over the long term.

We don’t really need them to remember the class or the exercise (or even the teacher) who taught them those facts and skills. That’s autobiographical memory.

That blog post was inspired by Clare Sealy’s recent essay ironically entitled “Memorable Experiences Are the Best Way to Help Children Remember Things.”

Happily, Sealy is the guest on a recent EdNext podcast: you can hear her in-depth explanation.

Equally happily, that podcast includes Sealy’s essay itself.

To understand Sealy’s argument, and its full implications, you can both have a look and have a listen.

Does Music Training Help Us Pay Attention?
Andrew Watson

Schools help students learn specific skills and facts: long division, and the preamble to the US Constitution, and glorious mysteries of the sonnet.

Wouldn’t it be great if schools could improve general cognitive capabilities?

For instance, it would be AWESOME if we could artificially increase working memory capacity. (Alas, we can’t. Really.)

It would be great if we could teach general critical thinking skills. (Alas: although we can teach those skills in discrete disciplinary topics, we probably can’t teach critical thinking generally.)

It would be super helpful if we could improve our students’ ability to pay attention…wait a minute: maybe we can.

We know that musicians must concentrate intensely to accomplish their marvelous work. To focus on the sheet music, ignore myriad distractions, accomplish nimble finger skills—all these require impressive degrees of attention.

Does all that attending help musicians both play music better and pay attention better? In other words: can they use those attention skills in other parts of their life?

Defining Attention

To answer that question, we have to start by defining the concept of “attention.”

Surprisingly, psychologists and neuroscientists don’t see attention as one unified thing. Instead, they see it as a behavior that takes place when three other things are happening.

First, they measure alertness. That’s a basic biological readiness: are the students awake enough? Or, so wildly overstimulated that they can’t focus? Those questions examine alertness. (Notice: they don’t directly examine attention—alertness is one small part of that bigger picture.)

Second, they measure orienting. When we ask about orienting, we consider the stimuli that the student is consciously perceiving.

So, for instance, at this moment I’m orienting to the letters on the screen as I type, to the mug of tea to my right, and to my cat Pippin who keeps nudging my arm. I’m not orienting to—say—the comfy chair in the corner, or the color of paint on the ceiling, or the gentle thump of the laundry machine downstairs.

I know all that stuff is there, but I’m not consciously processing it. (Well, I suppose, now that I’m writing about it, I must be processing it. But, I wasn’t orienting to it until I tried to identify stimuli that I wasn’t orienting to…)

Finally, to define the third part of attention, we consider executive attention. That segment takes much more time to describe and define, and overlaps a lot with working memory. It also includes our ability to ignore unimportant stimuli. We deliberately decide to focus on this topic here, not that one there.

So, when we ask the question “does music training improve attention,” we’re really asking three questions:

“Does music training improve alertness?”

“Does music training improve orienting?”

“Does music training improve executive attention?”

With these three questions in mind, we know what to do next.

Musician Inhibition

To test attention, researchers often use the Attention Network Test (ANT) to measure all three sub-segments of our attentional processes.

In this study, scholars in Chile worked with about 40 adults. Half were “professional pianists,” with an average of more than 12 years of music training. The other half had never taken music lessons, and couldn’t read sheet music.

Did the musicians outperform the non-musicians on the ANT?

No, no, and yes.

That is: musicians and non-musicians did equally well at the first two parts of attention: alertness and orienting.

But, musicians scored higher on the executive attention part of the test than the non-musicians did.

Basically, they ignored irrelevant stimuli better than their age-matched peers.

What Does This Research Mean in the Classroom?

You can probably anticipate all the reasons we shouldn’t over-react to this study.

It’s quite small: fewer than 40 people participated.

It doesn’t necessarily show cause and effect. It’s entirely possible that people who start with better executive attention are more likely to become professional musicians than people with lower executive attention.

The professional musicians had YEARS of musical experience: more than twelve, on average. So: even if music training does improve executive attention, it’s not a quick fix.

At the same time, this study does suggest something important: at least in this one case, we might be able to train a general cognitive capability.

That is: we can’t speed up our students’ working memory development. We can’t train a general critical thinking skill. We can’t improve processing speed.

But, maybe, we can find ways to strengthen executive attention.

Given how important attention is in the classroom, that’s potentially great news indeed.

Getting the Timing Right: Critical Thinking Online
Andrew Watson

If we want students to remember what we teach–and, what teacher doesn’t?–we’ve got a vital strategy: spread practice out over time.

We’ve got scads of research showing that the same number of practice problems results in a lot more learning if those problems are spread out over days and weeks, compared to being done all at once.

We call this the spacing effect, and it’s as solid a finding as we’ve got in the field of educational psychology.

As teachers interested in psychology research, we should always be asking: “yes, but does that work in my specific context?”

For instance: if research shows that college students learn stoichiometry better in a flipped-classroom model, that doesn’t necessarily mean that my 3rd graders will learn spelling better that way.

In the language of psychology research, we’re looking for “boundary conditions.” What are the limits of any particular technique?

The Spacing Effect Meets Critical Thinking

Researchers in Canada wanted to know: does the spacing effect apply to the teaching of critical thinking?

Of course, we want our students to be effective critical thinkers. But, there’s heated debate about the best way to teach this skill.

Lots of people doubt that critical thinking can be taught as a free-standing skill. Instead, they believe it should be nested in a specific curriculum.

That is: we can be critical thinkers about sonnets, or about football play-calling strategy, or about the design of bridges. But, we can’t learn to think critically in an abstract way.

The Canadian researchers start with that perspective, and so they teach critical thinking about a specific topic: the reliability of websites. And, they go further to ask: will the spacing effect help students be better critical thinkers?

In other words: if we spread out practice in critical thinking, will students ultimately practice their critical craft more effectively?

The Research; The Results

To answer this question, researchers used a 3-lesson curriculum exploring the credibility of websites. This curriculum asked 17 questions within 4 categories: the authority of the website’s authors, the quality of the content, the professionalism of the design, and so forth.

Half of the 4th-6th graders in this study learned this curriculum over 3 days. The other half learned it over 3 weeks.

Did this spacing matter? Were those who spread their practice out more proficient critical website thinkers than those who bunched their practice together?

In a word: yup.

When tested a month later, students who spread practice out were much likelier to use all four categories when analyzing websites’ reliability. And, they used more of the 17 questions to explore those four categories.

To Sum Up

This research leads us to two encouraging, and practical, conclusions.

First: we can help our students be better critical thinkers when they analyze websites. (Heaven knows that will be a useful skill throughout their lives.)

Second: we can improve their ability by relying on the spacing effect. As with so many kinds of learning, we get better at critical thinking when we practice over relatively long periods of time.

Can a Neuromyth Result in a Truce?
Andrew Watson

We teachers feel passionately about our work, and so–no surprise–our debates and disagreements get heated.

Few debates rage as fiercely as that between champions of direct instruction (with or without capital “D” and “I”), and champions of constructivism (in its many forms: project-based learning, student-centered learning, etc.).

In a recent essay, writer and blogger Tom Sherrington would like to soothe this ferocity by declaring the whole debate a myth.

As his title declares: it’s a myth that “teacher-led instruction and student-centred learning are opposites.” (Sherrington is British, so we can overlook the missing “e” from “centred.”)

In brief, he argues: no matter how passionately we disagree about pieces of this debate, almost everyone agrees on a sensible core of ideas. We’re arguing at the margins, but could just as easily refocus on our agreements at the center.

Passionate Debates

One well-known meta-analysis sports this dramatic title: “Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.”

Not much grey area there.

But, as Sherrington notes in his essay (I’ve tweaked the punctuation to make it blog-friendly):

[The authors] present their case most strongly for novice and intermediate learners but they appear to concede that for students approaching a more expert position, the different approaches are at least ‘equally effective.’

This means the debate is more about sequencing approaches appropriately in the learning journey.

Students will reach a point where these approaches represent a genuine choice.

And, critics of that meta-analysis also find a middle ground (again with the punctuation tweaking):

The more important questions to ask are: under what circumstances do these guided inquiry approaches work? What are the kinds of outcomes for which they are effective? What kinds of valued practices do they promote?

In other words: even the champions of the strongest claims concede that they see both approaches being appropriate at different times.

Specifically: novices need (relatively more) direct instruction. Experts benefit from (relatively more) open-ended, project-y methods.

Beyond Knowledge

Sherrington argues for a truce between direct instruction and PBL, first, because even strong advocates admit that the “other side’s” methods have a place under certain circumstances.

Teaching novices? Try direct instruction. Working with relative experts? Bring on the projects.

Second, he argues that schools exist both to help students acquire knowledge and to help them acquire social habits and practices we value.

As Sherrington writes: “there are many aspects of student activity and teacher-student engagement that are desirable simply because we value them as social constructs.”

So, for example: our society–heck, our very form of government–requires that people be able to work together effectively. For that reason, we benefit our students when we help them learn how to do so.

When we coach students along with group work, that teaches them skills that our society values–above and apart from the knowledge they gain while doing that work.

Of course, Sherrington’s essay includes many other thoughtful points beyond these two: it’s worth reading in full.

A Recent Example

Sherrington’s first argument struck me because I’ve been trying to make it for some time now.

Just ten days ago on this blog, I wrote about a huge study from South America purporting to show that collaborative, inquiry-based learning produced substantial advantages.

And yet, as I found when I read its methods, the study didn’t contrast student-centered teaching with teacher-centered teaching.

Instead, it contrasted good teaching (combining both explicit instruction and projects) with really bad teaching (“copy down the names of the 206 bones of the human body”). Unsurprisingly, bad teaching produces bad results.

In other words: I’d like to spread the word of Sherrington’s truce. I hope you’ll join me!


Sherrington’s essay appears in The researchED guide to education myths: An evidence-informed guide for teachers, published by John Catt.

I wrote about Clare Sealy’s essay in this collection last week as well, so you can tell I think it’s got lots of quality work.

I don’t agree with everything I read in this guide, but neither does its editor (Craig Barton) or the series editor (Tom Bennett). They want to foster the debate, and this volume does that admirably.

Welcome to Boston! (Almost)
Andrew Watson

This is me. (I’m the one on the left.)

I mostly stay out of the way on this blog: the research and the teachers are the stars.

But, I always enjoy the e-conversations I get to have with people from across the globe. (Just yesterday, an email from Australia!)  I’ve learned so much, even (especially?) when we disagree.

I’m in New Jersey and Philadelphia right now, talking about self-control and adolescence; attention and working memory; the benefits of optimistic skepticism.

And, I’m more excited day by day to catch up with my Learning and the Brain peeps in just a few days.

So, I hope you’ll come introduce yourselves to me at our November conference: Learning How to Learn. It will be wonderful to put names to faces!

If you’re interested, I’ll be talking about the science of motivation on Friday morning. I hope to see you there.

Fostering Curiosity in the Classroom: “What Percentage of Animals are Insects?”
Andrew Watson

As teachers, we know that learning works better when students are curious about the subject they’re studying.

Obviously.

So, what can we do to encourage curiosity?

We could choose a topic that (most) students find intrinsically interesting. Dinosaurs, anyone?

But, we can’t always work on that macro level. After all, many of us work within a set curriculum.

What strategies work on a smaller, more day-to-day level? In other words: is there anything we can do in the moment to ramp up students’ curiosity?

Before you read on, pause a moment to ask yourself that question. What do you predict might work?

Predictions, Please

According to a recent study, the very fact that I asked you to make a prediction increases your curiosity about the answer.

Here’s the story.

Researchers in Germany asked college students to look at a question, such as “X out of 10 animals are insects.”

Sometimes the students made a prediction: “4 out of 10 are insects.”

Sometimes they thought about an example of an insect: “mosquitoes.”

Sure enough, students rated their curiosity higher after they made a prediction than after they provided an example.

And…drum roll please…they also remembered those facts better when their curiosity levels were elevated.

Don’t Take My Word For It

By the way: how did the researchers know how curious the students were to find the answer?

First, they asked them to rate their curiosity levels. That’s a fairly standard procedure in a study like this.

But, they also went a step further. They also measured the dilation of the students’ pupils. (You may know that our pupils dilate when we’re curious or surprised.)

And, indeed, by both measures, making predictions led to curiosity. And, curiosity led to better memory of these facts.

What To Do Next?

On the one hand, this study included relatively few students: 33, to be precise.

On the other hand, we’ve got LOTS of research pointing this direction. Some studies show that pretesting helps students learn better, even if the students can’t possibly know the answer to the question on the test.

So, some kind of early attempt to answer a question (like, say, making a prediction) does seem to help learning.

At the same time, I think it would be quite easy to overuse this technique. If students always take a pretest, they’ll quickly learn that they aren’t expected to know the answers and (reasonably enough) won’t bother to try.

If students always make predictions, I suspect they’ll quickly pick up on this trick and their curiosity will wear down.

As teachers, therefore, we should know that this approach can help from time to time. If you’ve got a list of important facts you want students to learn, you might build predictions into your lesson plan.

I myself wouldn’t do it every time. But, I think it can be a useful tool–especially if you need to know how many animals are insects. (In case you’re wondering: the answer is, “7 out of 10.” Amazing!)