Do *Goals* Motivate Students? How about *Feedback*?
Andrew Watson

Motivation has been a HOT TOPIC this year in all the schools I’ve visited. Everywhere I go, I get questions about students’ apathy and indifference, and teachers’ frustration.

So, what can schools and teachers do?

Well, Self-Determination Theory offers a framework to answer that pressing question.

In this post, I’m going to introduce the theory — with a focus on its key concepts.

And then I’ll describe a study which helpfully reveals the complexity of enacting the theory wisely.

And, yes, as this post’s title suggests, that helpful study focuses on goals and feedback as motivational strategies.

Let’s see what researchers have discovered about the motivational benefits of goals and feedback.

Introducing Self-Determination Theory

Like many theories, self-determination theory (SDT) can be easily caricatured. Here’s the caricature:

  • Extrinsic motivation BAD!
  • Intrinsic motivation GOOD!!

These six words fall short in lots of ways, starting with this startling observation: SDT doesn’t contrast intrinsic and extrinsic motivation.

Instead, it defines six (yes, SIX) different motivational states — including four (yes, FOUR) different kinds of extrinsic motivation.

Here’s the kicker:

  • Unsurprisingly, intrinsic motivation is GOOD for learning.
  • Surprisingly, two flavors of extrinsic motivation are ALSO good for learning. (The other two flavors: not so much.)

The scholars who created the theory — Deci and Ryan — have a name for “good flavors of extrinsic motivation”; they call them “autonomous extrinsic motivation.”

At the top of this blog post, I asked: what can teachers do about apathetic students? Deci and Ryan answer: “foster the good kinds of motivation.”

Let’s Get Fostering!

Okay, if “the good kinds of motivation” can help, how do we teachers conjure them?

If I’m understanding SDT correctly, it includes bad news and good news.

  • Bad news: we really can’t create intrinsic motivation (as Deci and Ryan define it).
  • Good news: we really CAN create autonomous extrinsic motivation, which — as you recall — benefits learning.

We foster this good extrinsic motivation by focusing on three internal experiences: autonomy, relatedness, and competence.

That is: the more that my students feel in control (“autonomous”), close to one another (“related”), and effective at dealing with their environment (“competent”), the more autonomous extrinsic motivation they will experience. And: the more they will learn.

The obvious implication of this theory, then: let’s focus on enhancing our students’ autonomy, relatedness, and competence.

Plausible Start

When I talk with teachers about this theory, they can easily start to brainstorm suggestions for creating autonomy, relatedness, and competence — and, presumably, the good kind of extrinsic motivation.

As a thought experiment, we can easily imagine that clear goals will have those results. And, while we’re at it, we might predict that process feedback will do likewise.


But let’s go beyond a thought experiment. Let’s have an experiment experiment — with students and data and calculations and all that good stuff.

What happens?

Happily, a research team in the Netherlands wanted to know. They ran a survey study with almost 600 students — aged 11 to 18 — in PE classes.

They asked two sets of questions.

First: did the teachers clarify the goals during class? That is, did they…

  • … tell the students what they were going to learn, or
  • … tell them how they would be evaluated?

Likewise, did they offer process feedback? That is, did they …

  • … encourage reflection on how to improve, or
  • … discuss how to use the students’ strengths?

And so forth.

Second: they asked if the students experienced greater autonomy, relatedness, and/or competence.

To be thorough, they also asked if the students experienced LESS autonomy, relatedness, and/or competence.

Once they crunched all the numbers, what did this research team find?

Not Surprising, or Surprising?

From one perspective, this study seems to be asking rather obvious questions. I mean: OF COURSE students will feel more autonomous if we tell them what the goals are, or more related if we give them feedback.

What other result would we expect?

Here’s the thing: in the world of research, we don’t just assume; we measure. And, sure enough, those measurements gave us the results we (probably) expected.

Yes: clear goals enhance autonomy, relatedness, and competence.

And yes: process feedback does too.

At the same time, the number crunching also provided surprising results.

In some cases, process feedback reduced two of those classroom experiences: “relatedness” and “competence.”

While this result might seem surprising at first, I think it’s easy to understand the chain of emotional events here.

If I give my students lots of feedback, they might feel like I’m hovering or pestering or interfering.

Of course, “hovering, pestering, and interfering” could quite easily reduce the quality of the teacher/student relationship. And they might also reduce my students’ feelings of competence.

In other words: all that feedback could suggest the students are not doing very well. And that feeling of incompetence could — in turn — reduce the quality of their relationship with the teacher.

Solving the Conundrum

So, which is it? Should teachers give students process feedback because it enhances autonomy, relatedness, and competence? Or, should we limit process feedback, because it reduces relatedness and competence?

As is so often the case, I think we answer that question by rethinking the relationship between research and classroom practice.

Research can almost never tell teachers what to do. Instead, research is awesome at helping us think about what we do.

In this case, our thought process might sound something like this:

  • I want to create autonomous extrinsic motivation, so I should enhance my students’ sense of competence.
  • [Thinking]
  • I wonder if I can promote competence by giving them lots of feedback during today’s class.
  • [more thinking]
  • Now that I think about it, my feedback could enhance their sense of competence. But if I give too much feedback — or unwanted feedback — students could infer that I don’t have confidence in them.
  • [even more thinking]
  • So, I’ll put a note in my lesson plan to make time for feedback. But first, I need to think about the cues my students give me when the feedback is just too much…

Of course, those cues will look different depending on context.

  • 2nd graders will give different cues than 7th graders.
  • I suspect that — for cultural reasons — students in Japan signal frustration differently than those in New Zealand.
  • Students react differently to the cool, with-it teacher than they do to me. (It’s been a minute since I was the cool, with-it teacher.)

And so forth.

But if I consider self-determination theory as a THOUGHT PROCESS, not a TO-DO LIST, I’m much likelier to get the results I want.

In this case: my feedback is likelier to enhance than reduce competence. It’s therefore likelier to promote autonomous extrinsic motivation.

And my students are likelier to learn.


Krijgsman, C., Mainhard, T., van Tartwijk, J., Borghouts, L., Vansteenkiste, M., Aelterman, N., & Haerens, L. (2019). Where to go and how to get there: Goal clarification, process feedback and students’ need satisfaction and frustration from lesson to lesson. Learning and Instruction, 61, 1-11.

Just Tell Them: The Power of Explanations and Explicit Teaching by Zach Groshell
Erik Jahner, PhD

The sage-on-the-stage is not the enemy. For years, educators have been told that the best teaching happens when students discover knowledge for themselves. Zach Groshell, PhD, turns that assumption on its head. In Just Tell Them: The Power of Explanations and Explicit Teaching, he makes a bold case for something refreshingly straightforward—teachers should teach. Clear, explicit explanations aren’t just helpful; they’re essential. Backed by cognitive science and decades of research, Groshell dismantles the myth that “less teacher talk” means “more learning” and offers a compelling argument for direct instruction done right. His message? Good teaching isn’t about withholding information; it’s about equipping students with the knowledge they need to think critically, problem-solve, and truly understand what they’re learning.


I’ve seen many teachers, myself included, wrestle with the tension between explicit teaching and discovery learning. The belief that students learn best when they “figure it out” on their own is pervasive, but sometimes we may be asking them to construct knowledge without giving them the raw materials. Groshell’s book is a refreshing reality check, and I found myself nodding along as he unraveled the myth.

The book is structured around key principles of effective explanation, each grounded in research and practical application. Groshell starts with an overview of human cognitive architecture—how working memory and long-term memory shape learning—to explain why clear explanations matter. Students aren’t blank slates; they need structured guidance to process new material without overload.

One of the book’s greatest strengths is its focus on the worked-example effect, a well-documented phenomenon demonstrating that students learn more effectively when they see step-by-step demonstrations before being asked to apply their knowledge. Groshell explores ways to maximize clarity—eliminating vagueness, using visuals effectively, and reinforcing understanding through interaction. His candid reflections on his early teaching missteps make even the more technical discussions feel relatable and engaging.

Beyond simply telling, Groshell lays out a structured approach to explanation, covering interactive techniques like choral response and student self-explanations, alongside the power of visuals, strategic questioning, analogies, and storytelling to make concepts more memorable. His discussion of erroneous examples, where students learn by identifying and correcting mistakes, is particularly compelling.

A particularly valuable section details the Explain and Release model, which follows the ‘I do, We do, You do’ approach—gradually shifting responsibility from the teacher to the student as they gain expertise. This aligns with cognitive load theory, emphasizing that novices require structured support, while experts benefit from increasing independence. Groshell references the expertise reversal effect, illustrating how instructional methods should evolve as students grow more proficient—moving from explicit guidance to independent problem-solving.

Groshell’s writing is refreshingly candid, filled with humor and engaging insights. He reflects on his early preference for student-led discovery and how he came to embrace explicit teaching as a necessity. As I read, I couldn’t help but think of the countless times I’ve watched students breathe a sigh of relief when a difficult concept was finally explained clearly.

Another key focus of the book is creating the right conditions for explanation. Groshell discusses managing student attention by minimizing distractions, reducing classroom clutter, and banning cell phones to improve focus. He argues that classroom seating arrangements and behavior management directly impact how well students absorb explanations.

For educators who have been told to minimize their role as the sage on the stage, this book offers a persuasive counterpoint. It reaffirms the value of direct instruction while advocating for its thoughtful application—explanations should be clear, concise, interactive, and strategically designed to maximize learning. Groshell’s insights are invaluable for teachers, instructional coaches, and education professionals looking to refine their approach.

Ultimately, Just Tell Them is a must-read for educators seeking to optimize their instructional practices through cognitive science. If students could absorb complex concepts without explicit guidance, would we even need teachers? Groshell doesn’t just advocate for explanations—he makes them impossible to ignore. This is a practical, research-driven, and accessible guide that dismantles myths about teacher talk while empowering educators. After reading this book, you’ll never see explanation the same way again.

Attention Must Be Paid
Guest Blogger

This guest review of Blake Harvard’s Do I Have Your Attention is written by Justin Cerenzia.


Having followed Blake Harvard’s “The Effortful Educator” blog from its very beginning, it feels especially fitting that his new book – Do I Have Your Attention? Understanding Memory Constraints and Maximizing Learning – poses a question many of us have enthusiastically answered “yes” to for nearly a decade.

Yet this book represents more than an extension of Harvard’s blog—it marks the culmination of his long-standing influence as a leading educator: one who connects cognitive science with classroom practice. Thoughtfully structured into two complementary sections, the book skillfully integrates theoretical perspectives on how memory functions with actionable classroom strategies, offering educators practical tools to foster meaningful and lasting learning.

Book Cover of "Do I Have Your Attention" by Blake Harvard

Harvard deftly navigates the complexities often inherent in cognitive science research. His writing style is both approachable and authoritative, resonating equally with newcomers and seasoned readers alike.

Much of Part I leverages Professor Stephen Chew’s An Advance Organizer for Student Learning: Choke Points and Pitfalls in Studying. Harvard uses this foundational framework to clarify key concepts and common misunderstandings about memory and learning. Crucially, Harvard’s position as a classroom teacher lends him credibility and authenticity, grounding his insights firmly in practical experience rather than mere theory.

It’s as though we’re invited into Blake’s classroom, watching him expertly guide us through Chew’s graphic.

And this is precisely how he frames the opening of Part II, writing:

“It can be quite overwhelming to know just what is the best bet for optimizing working memory without overloading it while also making the most of moving the content to long-term memory. Compound that with the fact we are tasked with educating, not one brain, but a classroom full of them. That’s a job that only a teacher can understand and appreciate” (65).

Harvard then succinctly-yet-thoroughly guides readers through seven carefully considered strategies to maximize learning. In each case, he showcases a diverse array of tactics that enrich any skilled teacher’s toolkit—all with the ultimate goal of positively influencing student outcomes.

Throughout, he pulls back the curtain even further, transparently revealing how specific shifts in his own teaching practice improved student learning. Clearly, each change has been guided by careful investigation and thoughtful application of research.

That Harvard’s insights—long influential in the educational blogosphere—are now available in book form represents a win for educators everywhere. Rich in research yet highly accessible, this text serves as both an inviting entry point and a resource for deeper exploration.

So too does it underscore the essential role teachers can and should play alongside the research community, brokering knowledge and further bridging the unnecessary divide that sometimes impedes meaningful change. In an era rife with educational theory, Harvard’s concrete examples of classroom success help ensure that even hesitant educators find meaningful, practical guidance.

If Blake Harvard didn’t already have your attention, you’d do well to give it to him now.


If you’d like to learn more, Blake’s webinar on attention and memory will be May 4.


Justin Cerenzia is the Buckley Executive Director of Episcopal Academy’s Center for Teaching and Learning. A Philadelphia area native, Justin is a veteran of three independent schools over the last two decades, dedicating his career to advancing educational excellence and innovation. A history teacher by trade, Justin nonetheless considers the future of education to be a central focus of his work. At Episcopal Academy, he leads initiatives that blend cognitive science, human connection, and an experimenter’s mindset to enhance teaching and learning. With a passion for fostering curious enthusiasm and pragmatic optimism, Justin strives to make the Center a beacon of learning for educators both within and beyond the school.

Enjoyment or Skill? The Case of Reading
Andrew Watson

Do we want our students to ENJOY math, or to BE SKILLED AT math?

At first, this question sounds like a false choice. Obviously, we want BOTH.

As an English teacher, I want my students to have fun analyzing the books we read…and I want their analyses to have heft, merit, and substance.

I suspect that most teachers, no matter the subject — Math, English, Chemistry, Religion, Pickleball — want our students to revel in core ideas and arrive at correct answers.

At times, alas, we probably need to prioritize one or the other. Especially at the beginning of a unit, should I focus on …

… ensuring that my students like this stuff (even if they don’t immediately understand it), or on

… ensuring they understand the stuff (even if they don’t immediately like it)?

In teaching as in life: if I try to accomplish both goals simultaneously, I’m likely to accomplish neither.

Reading Research

I’m not surprised to discover in a recent study that students’ enjoyment of reading correlates with their skill at reading.

That is: students who get high scores on various reading tests report enjoying reading more than their low-test-scoring peers.

Of course, correlation (say it with me) isn’t causation.

Does the enjoyment lead to the skill? The skill lead to the enjoyment?

Both?

Neither?

To answer these questions, Elsje van Bergen’s research team looked at twins in Finland — more than 3500 of them.

In theory, if we ask all the right questions, gather the right data, and run the right calculations, we should glean insight into the correlation/causation question.

So: what did Team van Bergen find?

But First…

Before you read the answers to that question, you might pause to make a commitment. Try to decide NOW if you’re inclined to trust this methodology.

That is:

a) you think well-done twin studies are likely to be a good way to answer this question. For that reason, you will be inclined to accept this answer even if you initially disagree with it.

or

b) you think twin studies can’t answer questions about skill and enjoyment. Thus, you will not cite this study to support your beliefs even if it aligns with those beliefs.

If we’re going to use research to make decisions about education, we should be scrupulous about doing so even when research contradicts the conclusions we had initially held.

Answers, and Questions

Now, back to this post’s main narrative…

Unlike many studies, this one can be summarized in a few pithy sentences.


Based on the twin data they analyzed, van Bergen’s team concludes that:

  • reading skill increases reading enjoyment,
  • reading enjoyment has no effect on reading skill,
  • genetics influences both positively.

Unsurprisingly, the stats get all stats-y. But the above-the-fold headlines are that simple.

Because I don’t teach reading, I’ve always hesitated to be too opinionated on the topic. Now that this study is in the wild, I do think it adds a useful perspective while the reading wars rage on.

For instance: teachers whom I like and respect have told me that older methods might not have science behind them, but they’re excellent at “making students feel like readers.”

This claim has always puzzled me. How can a student feel like a reader if s/he can’t read?

Van Bergen’s study, I think, gives me permission to address that point directly: “this study suggests that skill at reading will be the more important place to start in reading instruction.”

Zooming the Camera Back

While this study and this post have focused on reading instruction, I do think there’s a broader message here as well.

We frequently hear about the importance of intrinsic motivation; that is, a motivation that springs from students’ natural interests, not from external encouragement (or pressure).

This study, to the contrary, finds that the work teachers do to improve students’ skill simultaneously enhances their motivation. That motivation might be — in effect — extrinsic; but it’s working. (Working = students read better, and want to read more.)

Overall, I believe we need a substantial rethink of the (false) intrinsic/extrinsic dichotomy, and the (unhelpful) criticism of motivational strategies that many teachers currently find themselves using.

If you want to join me for just such a rethink, I’m giving a webinar for Learning and the Brain on April 5th. We’ll be talking about several research-informed approaches to intrinsic motivation, and brainstorming strategies to make those ideas fit in our classrooms.

I hope I’ll persuade you that we have better ways to talk about motivation than “intrinsic/extrinsic,” and that those better ways give us useful teacherly guidance.

I hope you’ll join us!


van Bergen, E., Hart, S. A., Latvala, A., Vuoksimaa, E., Tolvanen, A., & Torppa, M. (2023). Literacy skills seem to fuel literacy enjoyment, rather than vice versa. Developmental Science, 26(3), e13325.

Still Doubting My Doubts: The Case of PBL
Andrew Watson

Last week, I described my enduring concerns about “embodied cognition.” I’m not sure I understand the concept clearly: what exactly counts as “embodied cognition” — mindfulness? Direct instruction? (No, seriously, a well-known book on the subject says it does!)

And the “best research” supporting some of the claims doesn’t feel persuasive to me.

Could using gestures help learning? SURE. Have I found enough research for me to advocate for this strategy? Not yet…

This week, I wanted to test my doubts about project-based learning (universally acronymed as PBL). SURPRISE: I end up feeling kinda persuaded — at least in this one case.

Here’s the story…

Another Steelman

If I’m going to critique a teaching method, I want to be sure to avoid straw men. Neither you nor I learn anything if I point out the flaws in an obviously foolish study or approach. I’m going to learn something if and only if I take on the very best case.

Some thoughtful soul — I’m embarrassed to say, I can’t remember who — recommended this PBL study to me.

Given the strength of that recommendation, I thought it worth a read — despite my PBL concerns.

What are those PBL concerns?

As is so often the case for me, I worry about working memory overload. If I ask my students to

  • Film a scene from Hamlet, but re-imagine it in a new setting, or
  • Build a model city that enacts 3 core principles of ecological design, or
  • Write a new law that prevents a problem in our school’s community

I’m certainly giving them a rich cognitive task.

However, they almost certainly don’t have enough foundational knowledge to manage any of those tasks. Heck, graduate students in those fields struggle with such problems.

So, while I find the questing adventurousness of such tasks intriguing, my knowledge of working memory limitations tells me: ain’t gonna happen.

I should also confess: my experience assigning project-y work hasn’t gone well.

In brief: although “constructivist” approaches often sound appealing, my focus on basic cognitive capacities makes me extra skeptical.

(Important note: “constructivism” is an ENORMOUSLY broad category, and it’s inaccurate/unfair to lump so many pedagogies together into one ill-defined word.)

The Goals; The Problems

When I look at research, I’ve got a few desiderata:

One: The study should — as much as possible — isolate the variable. I can’t say that (to take a comic example) “chewing gum improves learning” if the participants both chewed gum and tap-danced.

Another one: the study should have a plausible control group. The question isn’t “did X improve learning?” but “did X improve learning compared to the plausible alternative Y?”

Yet another one: the researchers should try hard to measure what they claim. If I say “PBL helps students learn stuff,” I should have some reliable measurement of what they learned. If researchers make up their own test…well…I worry that they’re (subconsciously) putting a thumb on the scale.

Because I’m a PBL doubter, I read this study with a keen eye on those topics. I’m used to finding such problems. For instance:

Isolate the variable: the study about “using gestures” actually used gestures AND cool tech stuff. I don’t believe claims about X if the students did both X and Y.

Plausible control group: again, the “using gestures” study compared teachers who got something (extra PD; extra curricular materials) with teachers who got nothing (no extra anything).

Measuring the claim: a study claiming that “handwriting helps students learn” didn’t measure learning. (I still can’t get over how many people are citing this study despite this extraordinary flaw.)

So, would this PBL study fall short of these standards?

To be clear — and fair — no study is perfect. Psychology is complicated; teaching is complicated; PEOPLE are complicated. So, I’m not asking that everything be perfect.

But I am asking that the study make a good-faith effort on most of those things.

Envelopes, Please

As a skeptic, I was pleasantly surprised by what I read. Two points stood out in particular:

First: unlike the “gesture” study, the PBL study made an impressive effort to treat teachers in both groups equally.

  • Both groups — not just the PBL group — got extra PD time.
  • Both groups — not just the PBL group — were told that classroom visits were a part of the program.

This kind of care is, in my experience, unusual. I was pleasantly surprised.

Second: the “measurement” sounds (largely) plausible. The researchers did NOT simply make up their own test of the science learning.

Instead, they used the Michigan State standardized test for both the PBL group and the control group. For time reasons, they didn’t use all the questions from that test — so they did have a chance to put that thumb on the scale. But they had less of a chance than if they’d simply created their own test.

Now, don’t get me wrong. I do have some concerns. For instance:

  • Although the teachers in both groups got special treatment, the students didn’t. That is: both groups of teachers got extra PD, but the students in the control group got “same old, same old.” The study would be more persuasive if they too got a new teaching approach.
  • The teachers in both groups got extra stuff, but the teachers in the PBL group got MORE extra stuff. They got more (and more frequent) PD, and more curricular support, and class visits. (For scheduling reasons, the promised class visits for the control group largely didn’t happen.)
  • As noted above, the research team didn’t exactly use someone else’s measurement — although it seems they made a good-faith effort to do so.

In brief, I can quibble with the study — but I don’t think its flaws merit easy disqualification.

Final Verdict

The research team measured LOTS of variables, and scrupulously tallied scores for MANY important sub-groups and special circumstances.


If we take the headline number, they found an effect size of 0.277 (technically, “small”) for the amount of additional science knowledge that the students in the PBL group learned compared to the control group.

That is: PBL produced more learning, but not lots-n-lots. We can’t rule out the possibility that all that extra learning resulted from the “shiny new thing,” not from the PBL.
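For readers who want a feel for what a number like 0.277 means, here is a minimal sketch of how a standardized mean difference (Cohen’s d) is computed: the gap between two group means, divided by their pooled standard deviation. The scores below are invented for illustration; the study’s own estimate comes from a more elaborate statistical model, so this is just a way to see the scale of the statistic.

```python
# Illustrative only: invented test scores for two hypothetical groups.
# Cohen's d = (difference in means) / (pooled standard deviation).
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    # statistics.variance() is the sample variance (n - 1 denominator).
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

pbl_scores = [72, 75, 78, 80, 74, 77]      # hypothetical treatment group
control_scores = [70, 73, 74, 76, 71, 75]  # hypothetical control group
print(round(cohens_d(pbl_scores, control_scores), 3))
```

By convention, values around 0.2 are called “small,” around 0.5 “medium,” and around 0.8 “large” — which is why the study’s 0.277 counts as small but real.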

At the same time, my concerns about working memory overload were — at least in this one example — calmed. If this PBL program had overwhelmed WM for these 3rd graders, they wouldn’t have learned much at all; instead, they learned a bit more.

I still have lots of questions and concerns. But I’m heartened to see that — done right — this PBL program offers a potential pathway for further exploration.


Krajcik, J., Schneider, B., Miller, E. A., Chen, I. C., Bradford, L., Baker, Q., … & Peek-Brown, D. (2023). Assessing the effect of project-based learning on science learning in elementary schools. American Educational Research Journal, 60(1), 70-102.

Doubting My Doubts; The Case of Gesture and Embodied Cognition
Andrew Watson

The more time I spend hearing “research-informed educational advice,” the more I worry about the enticing words “research-informed.”

Many MANY people toss around the phrase “research says…”; all too often, even a brief investigation suggests that research really doesn’t say that.


For this reason, I find myself slower to get excited about new “research-based” teaching ideas than many of my colleagues…even colleagues whom I admire, respect, and generally trust.

For instance: lots of scholars are investigating the field of embodied cognition and — more specifically — of using gestures to promote learning.

I’m certainly open to the idea that combining gestures with words and visuals will improve learning. And: I want to know A LOT more about the specifics of this idea:

  • Who is making these gestures? Teachers? Students? Actors in videos?
  • What kind of gestures are they? “Deictic” or “iconic”? Rehearsed or improvised?
  • Does the strategy work well in all disciplines/grades/cultures?

And so forth.

I’d also love to see some straightforwardly convincing research to support the answers to those questions.

So, for instance, I wrote a post about students using gestures to learn about Brownian motion. While the outline of the study made sense to me, it…

… didn’t have a control group,

… chose a topic easily translated into gestures, and

… measured “learning” 2 days later. (Does 2 days count as learning?)

While I’m glad I read the study, and appreciate some of its nuances, I don’t think it’s a slam dunk.

At the same time, I should turn some of my skeptical energy towards myself.

In other words: given all of my doubts, I should also be ready to doubt my own doubts. Maybe the wisdom of the crowd should outweigh my own habitual caution here. Maybe I’m so invested in my skeptic’s persona that I’m subconsciously unwilling to be persuaded…

Enter the Steelman

Because I doubt my doubts, I’m always on the lookout for EXCELLENT research contradicting my outlier point of view. I genuinely WANT to have my errors pointed out to me.

For that reason, I was delighted to find a highly touted study about teaching physics with embodied cognition.

My source here — published by the Education Endowment Foundation — looks for the very best evidence supporting all sorts of cognitive science-based teaching advice: interleaving, retrieval practice, schemas, and so forth.

Of the 26 studies they found looking at embodied cognition, one stood out for its excellence. (In their rating system, it’s the only one they rated “high priority.”) If the EEF, and all the wise scholars behind this report, find this study persuasive, it’s likely to be among the best research I can find.

In other words: I’m not analyzing a straw man here. This study is the “steelman.”

Playground Physics

The idea behind this study sounds both sensible and fun. Many of the abstract concepts studied in physics class are acted out quite concretely — that is, they are EMBODIED — when our children get to the playground.

If we could connect abstract classroom physics with embodied playground physics, that approach could be really helpful.

This study begins with a good idea…and an ENORMOUS sample size. Over 3400 (!) students were in the initial sample; after (unusually high) attrition, that number dropped to about 1300 — roughly 800 in the “playground physics” group, and 500 in the control group.

The researchers wanted to see if the students in the playground group would a) learn more physics, b) feel more engaged, and c) feel more motivated — all compared to the control group.

The special “playground physics” program begins with a training session for the teachers, and includes curricular materials.

Crucially, playground physics also includes a phone app that students use to analyze their own motion:

“Using the app, users record videos of themselves and their friends engaging in physical play, and the app generates graphs of distance traveled, speed, direction, and kinetic and potential energy. As users watch the video, they see graphs of their movement unfolding. Users can pause to examine where they are moving fastest or slowest, where a force is pushing or pulling, and where their kinetic and potential energies are at their highest and lowest points. This is intended to support conversations grounded in the children’s physical experience.”

Honestly, the whole experience sounds really interesting!

Persistent Doubts

Although I tried to find a Steelman Study to support the case for Team Embodied Cognition, I’m still not persuaded.

I have two substantial concerns:

First:

This study does not measure the benefits of embodied cognition for learning physics.

Instead, it measures the benefits of embodied cognition PLUS cool tech gadgetry for learning physics. In fact, the study is published in a journal that focuses on technology in education.

Yes, the students learned more — but the extra learning could have come from the app (so much fun with video!) or from the embodied cognition (moving is so cool!) or both. We just don’t know.

I am not the only person pointing out this concern. The study’s authors say several times that they don’t know what the “mechanism” is that created additional learning. In other words: they do not claim that the embodiment mattered more than the tech — or that it mattered at all. They don’t know.

To be persuaded by research into the use of gestures, I want to see a study that singles out the gestures; it should — in the lingo of research — “isolate the variable.” This one doesn’t.

Second:

When we compare two groups, we want them to be close enough to each other to be good proxies for each other. I’m not sure we can say that for this study.

A) The teachers of Playground Physics received extra PD; the teachers in the control group didn’t. Did the PD itself make the difference? We don’t know.

B) The study used a “business-as-usual control group.” That is: control group teachers just did what they always did. Teachers and students in the Playground Physics group got a Shiny New Thing. Was it the novelty that made the difference? We don’t know.

C) The Playground Physics group spent 15.5 hours studying physics; the control group spent 13.2 hours. The study’s authors write that this difference isn’t “statistically significant.” But — as a classroom teacher — I’m thinking more than two hours of additional practice would be significant, even if it isn’t “significant.” *
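That last point is worth a quick numerical sketch. The class-level hours below are made up for illustration — they are NOT data from the study — but they show how a mean gap of about 2.3 hours can fail a t-test when samples are small and noisy, even though any teacher would call the extra time meaningful:

```python
import math
from statistics import mean, variance

# Hypothetical per-class instructional hours -- illustrative only,
# NOT data from the Margolin et al. study.
playground = [18.0, 12.5, 17.0, 13.0, 19.5, 13.0]  # mean = 15.5
control    = [10.0, 16.5, 11.0, 15.0,  9.5, 17.2]  # mean = 13.2

diff = mean(playground) - mean(control)  # about 2.3 hours

# Welch's t statistic: difference in means over its standard error
se = math.sqrt(variance(playground) / len(playground)
               + variance(control) / len(control))
t = diff / se

print(f"mean difference: {diff:.1f} hours, t = {t:.2f}")
# With |t| well under the usual ~2 cutoff, the gap is not
# "statistically significant" -- yet 2.3 extra hours of physics
# instruction is plainly meaningful in a classroom.
```

“Not statistically significant” means the test couldn’t rule out chance with these noisy numbers; it does not mean the difference is too small to matter.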

Because the study doesn’t isolate the variable (that’s the first concern) and the two groups don’t sufficiently resemble each other (that’s the second concern), I’m still stuck thinking: “this study doesn’t persuade me that embodied cognition is a thing.”

And — as you recall — I looked at this study because a respected group said it’s the best one they found.

TL;DR

I’m still looking for the study that makes the Embodied Cognition approach to teaching persuasive enough for me to recommend it to others.

I haven’t found it yet…but I haven’t given up hope.

By the way: if you know of such a study, please send it my way!


* I spoke with a stats-whisperer friend, who agrees with me that this simply isn’t a reasonable claim.


Margolin, J., Ba, H., Friedman, L. B., Swanlund, A., Dhillon, S., & Liu, F. (2021). Examining the impact of a play-based middle school physics program. Journal of Research on Technology in Education, 53(2), 125-139.