Learn Like a Pro: Science-Based Tools to Become Better at Anything by Barbara Oakley and Olav Schewe
Rebecca Gotlieb

With the school year starting in just a couple of weeks, Learn Like a Pro: Science-Based Tools to Become Better at Anything by Barbara Oakley and Olav Schewe is an excellent resource to help students begin the year with strong study habits. Using a fun, accessible tone and helpful graphics, this book instructs readers in how to manage procrastination, exert self-discipline, stay motivated, study actively, think deeply, memorize new content, take better notes, read more efficiently, and ace the next test. Oakley is a professor of engineering at Oakland University and is known for her widely popular massive open online course. Schewe is the founder and CEO of an EdTech start-up, Educas.

Part of effective learning and studying involves developing persistence and motivation to stick with one’s studies. One tool Oakley and Schewe recommend to beat procrastination is the Pomodoro technique, which involves removing all distractions, setting a timer for 25 minutes during which one works intently on a single task, and then rewarding oneself with a relaxing 5-minute break (i.e., not a break that involves one’s smartphone). Meditation, yoga, and taking time to relax can also help build attention and focus. Removing temptations can make it easier to stick with a goal. Setting specific, measurable, ambitious, realistic, and time-limited short- and long-term goals can help increase motivation. Working with others (e.g., in a study group) and finding value in one’s work can also increase motivation. Metacognitive awareness of one’s progress is also helpful. Finally, a healthy lifestyle, which involves physical exercise, high-quality and sufficient sleep, and a balanced diet, is key for effective learning.

Oakley and Schewe review good study habits. Active studying (e.g., by using flashcards, explaining concepts and their relations to one another, and brainstorming possible test questions) rather than passive studying (e.g., re-reading notes) is likely to yield results. Studying in frequent, small chunks and reviewing, previewing, and mixing content during those chunks of study time is helpful. Sometimes studying involves memorizing ideas so that a student has mental power available to solve advanced problems with simpler ideas already clearly in mind. Using acronyms, metaphors and other memory tricks can help make ideas stick. Working through practice problems is a great way to check for understanding while studying.

Being a good test taker involves some different skills than being a good student or studier. Oakley and Schewe suggest reading through test instructions and questions carefully, checking the time while taking the test, starting the test by previewing the hardest questions so one can passively think about them while answering other questions, and reviewing answers at the end.

Oakley and Schewe conclude the book with a checklist of ways to become an effective learner. To learn more about these and other helpful study suggestions you may be interested in Learn Like a Pro, as well as other works by Oakley, including Learning How to Learn.

Oakley, B., & Schewe, O. (2021). Learn Like a Pro: Science-Based Tools to Become Better at Anything. St. Martin’s Publishing Group.

“Once Upon a Time”: Do Stories Help Learning?
Andrew Watson

When Daniel Willingham wrote Why Don’t Students Like School, he accomplished a mini-miracle: he made abstract psychology research…

…easy to understand, and

… obviously helpful to classroom teachers.

Its invaluable pages include emphatically practical teaching advice: “memory is the residue of thought”; novices and experts think differently. (Little wonder its third edition was just published.)

In his third chapter, Willingham included one important strategy for helping students understand and remember: use stories.

We understand and remember stories for many reasons:

They follow a familiar cause/effect structure.

They focus on people and conflicts.

We (most of us) grew up hearing stories.

Stories evoke emotions.

Expository writing — essays, textbooks — has its own advantages, but it probably can’t compete with the advantages of narrative.

Today’s News

Willingham first published Why Don’t Students Like School in 2009. What have we learned about narratives vs. exposition since then?

After all, research conclusions change over time. Does this advice still hold?

Earlier this year, Raymond Mar and colleagues published a meta-analysis of research on this topic. They wanted to know:

Does narrative presentation of information improve memory, compared to expository texts?

Does it improve comprehension?

Are there boundary conditions?

They identified 37 studies (with 78 data sets and more than 33,000 participants!) that matched their criteria.

So, what did they find?

The Envelope Please…

Sure enough, narratives help students understand. And they help students remember. And — this news is surprising to me — those benefits don’t have quirky exceptions. (Most research findings do: e.g., “This technique works in these circumstances but not those.”)

For the stats-minded, they calculated a Hedges’s g of 0.55. In my experience, that’s a surprisingly big effect for measurements across such a big field. (Hedges’s g is a version of Cohen’s d — it’s more appropriate for meta-analyses.)
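For readers curious how those two statistics relate, here is a minimal sketch (not from Mar’s meta-analysis; all numbers are invented for illustration): Hedges’s g is just Cohen’s d multiplied by a small-sample correction factor.

```python
# A sketch of the relationship between Cohen's d and Hedges's g.
# All numbers below are illustrative, NOT data from Mar's meta-analysis.

import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Apply the small-sample bias correction factor J to Cohen's d."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Hypothetical comparison: a "narrative" group vs. an "expository" group
d = cohens_d(82.0, 74.0, 14.0, 15.0, 30, 30)
g = hedges_g(d, 30, 30)  # g is slightly smaller than d for small samples
```

The correction matters most for small studies; with large samples, g and d are nearly identical.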

One of my favorite examples of this strategy wasn’t (I believe) included in this study. McNamara and Scott asked students to remember a list of words. They coached one group to turn those words into a story.

Given the words “foot, cow, shirt, hut,” students created sentences like “my foot was stepped on by a cow that wore a shirt and lived in a hut.”

McNamara and Scott coached the other students to remember their words by “thinking out loud.”

Sure enough, the group that composed stories remembered a lot more words.

Getting the Story Just Right

Although Mar’s meta-analysis did not find boundary conditions, I do want to add a note of caution. This advice — like so much of cognitive science — can be easily misconstrued.

The idea makes sense, but its classroom application can be tricky.

Imagine that, as a science teacher, I want my students to understand Boyle’s Law. I’m tempted to tell my students that P1V1=P2V2, to define the various terms, and to run some sample problems.
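One of those sample problems might look like the sketch below (the numbers are hypothetical, chosen only to illustrate the formula):

```python
# Illustrative Boyle's Law sample problem: P1 * V1 = P2 * V2
# (pressure times volume is constant at a fixed temperature).
# The values below are made up for the example.

def boyle_final_pressure(p1, v1, v2):
    """Solve P1 * V1 = P2 * V2 for the final pressure P2."""
    return p1 * v1 / v2

# A gas at 1.0 atm in a 2.0 L container is compressed to 1.0 L:
p2 = boyle_final_pressure(1.0, 2.0, 1.0)
# Halving the volume doubles the pressure.
```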

If I hear the advice that stories improve comprehension, I’m probably tempted to tell some interesting stories about Boyle’s life. For instance, he made a wish list of 24 inventions (a machine that could fly, drugs that could “exalt the imagination”). Way back in the 1600s! So cool!!

That story is interesting and memorable, but it doesn’t have anything to do with the pressure or volume of gas.

I shouldn’t, in other words, tell stories about the general subject; we’ve got lots of research about the dangers of “seductive details.”

I should instead tell stories tailored specifically to the meaning and importance of the topic.

Why does the relationship between volume and pressure matter?

Who else was trying to find out?

What changed as a result of Boyle’s discovery?

If I can fashion those questions into a story, now I’ve got all the characteristics of a memorable narrative: cause and effect, personal conflict, familiarity, and emotion.

And, all those benefits attach to the content I want my students to learn.

In Brief

Do narratives help students understand and remember?

Yes, the right stories do.

Conflicting Advice: What to Do When Cognitive Science Strategies Clash?
Andrew Watson

Teachers like research-informed guidance because it offers a measure of certainty.

“Why do you run your classes that way?”

“Because RESEARCH SAYS SO!”

Alas, we occasionally find that research encourages AND DISCOURAGES the same strategy simultaneously.

What to do when expert advice differs?

In fact, I got this question on Thursday during a Learning and the Brain Summer Institute. Here’s the setup.

“Difficult” Can be Good

Regular readers know that desirable difficulties help students learn. As explained by Bob Bjork and Elizabeth Ligon Bjork — and researched by countless scholars — some degree of cognitive challenge enhances long-term memory formation.

In brief: “easy learning doesn’t stick.”

And so: why do spacing and interleaving help students learn? Because they ramp up desirable difficulty.

Why is retrieval practice better than simple review? Because (among other reasons) review isn’t difficult enough. Retrieval practice, done correctly, adds just the right amount of challenge.

And so, if you attend Learning and the Brain conferences (like this one on “Teaching Thinking Brains”), or if you read any of the great books about long-term memory formation, you’ll hear a lot about desirable difficulty.

Memory at Work

Cognitive scientists who don’t focus on long-term memory might instead focus on a distinct mental capacity: working memory. 

Working memory allows us to gather information — facts, procedures, etc. — into a mental holding space, and then to reorganize and combine them into new patterns and ideas.

In other words: it’s absolutely vital for thinking and learning. If students are learning academic information, they are using their working memory.

Alas, all this good news comes with some bad news: we don’t have much working memory. And, our students probably have less than we do. (For evidence, try this mental exercise: try alphabetizing the workdays of the week. No problem alphabetizing 5 words? Now try alphabetizing the twelve months of the year. OUCH.)

For this reason, effective teachers pay scrupulous attention to working memory load. Every time we go beyond working memory constraints, we make learning MUCH HARDER.

In fact, I think working memory is so important that I wrote a lengthy series of blog posts on the topic. I’m kind of obsessed. (Heck: I even wrote a book on the topic, called Learning Begins.)

Trouble in CogSci Paradise

Because both topics — desirable difficulties and working memory — provide teachers with important and powerful insights, I devoted much of last week’s workshop to them. Almost every day, in fact, we talked about both.

On Thursday, one participant asked this wise and provocative question:

Wait a minute. You’ve told us that desirable difficulties help learning. And you’ve told us that working memory overload hinders learning.

But: isn’t desirable difficulty a potential working memory overload? Don’t those two pieces of advice conflict with each other? Won’t “spacing” and “interleaving” vex working memory?

Yes, reader, they certainly might.

So, what’s a research-focused teacher to do? Team Desirable Difficulty tells us to space and interleave practice. Team Working Memory tells us to beware overload. How can we make sense of this conflicting advice?

This (entirely reasonable) question has two answers: one specific, one general.

A Specific Answer

When we consider the tension between “working memory” and “desirable difficulty,” we can focus for a moment on the adjective “desirable.”

In almost every case, working memory overload is UNdesirable.

So, if our teaching strategy — spacing, interleaving, retrieval practice, metacognition — results in overload, we shouldn’t do it: it’s not desirably difficult. We should, instead, back off on the difficulty until students can manage that cognitive load.

How do we get that balance just right?

We use our teacherly experience and insight. If I create a homework assignment with lots of interleaved practice AND ALL MY STUDENTS DO TERRIBLY, then interleaving wasn’t desirably difficult. (Or, perhaps, I taught the concepts ineffectively.)

In this case, I know the next night’s assignment should be working-memory-friendlier.

No research can tell us exactly what the best balance will be. Our expertise as teachers will guide us.

The General Answer

Researchers and teachers have different goals, and follow different practices. In brief: researchers isolate variables; teachers combine variables.

We think about stress and about working memory and about alertness and about technology and about spacing and

That list goes on almost infinitely.

For that reason, I chant my mantra: when adopting cognitive science approaches to teaching, “don’t just do this thing; instead, think this way.”

That is: don’t just DO “spacing and interleaving” because research tells us they’re good ideas. Instead, we have to THINK about the ideas that guide spacing and interleaving, and be sure they make sense at this particular moment.

Should we have students meditate at the beginning of each class? It depends on our students, our school, our schedule, our culture, our curriculum, our goals, and … too many other variables to list here.

Should we ban laptops from classrooms? Ditto.

Should high schools start later? Ditto.

Should 3rd graders learn by doing projects? Ditto.

Should students read on exercycles? Ditto.

One isolated piece of research advice can’t effectively guide teaching and school-keeping decisions. We have to combine the variables, and think about them in our specific context.

Simply put: we can’t just “do what the research says.” It’s not possible; different research pools almost certainly conflict.

Instead, we’re doing something more challenging, more interesting, and more fun.

Let the adventure begin!

Does Online Learning Work? Framing the Debate to Come…
Andrew Watson

I first published this blog post back in January. I’ve been seeing more and more discussion of this question on social media, so I thought it might be helpful to offer this perspective once again.


With news that several very effective vaccines will be increasingly available over the upcoming months, we teachers can now start thinking about “a return to normal”: that is — in person teaching as we (mostly) worked before February of 2020.

One question will inevitably be debated: did online learning work?

I suspect that the “debate” will go something like this. One voice will stake an emphatic opinion: ONLINE CLASSES WERE AN UNEXPECTED TRIUMPH! Some data will be offered up, perhaps accompanied by a few stories.

An equally emphatic voice will respond: ONLINE CLASSES FAILED STUDENTS, TEACHERS, AND PARENTS! More data. More stories.

This heated exchange will reverberate, perhaps improved by all of Twitter’s nuance and common sense.

A Better Way?

Rather than launch and participate in a BATTLE OF EXTREMES, I hope we can look for a more level-headed approach. As is so often the case when research meets teaching, a key question should be boundary conditions.

Whenever we look for a research finding (e.g.: drawing helps students learn!), we should ask: under what precise conditions is this true?

Does drawing help older students and younger ones? In math and in phonics? Autistic students, dyslexic students, aphantasic students, and neurotypical students?

We’re always looking for boundaries, because every research finding has boundaries. As Dylan Wiliam (who will be speaking at our February Conference) famously says: “When it comes to educational interventions, everything works somewhere. Nothing works everywhere.”

If we ask about boundary conditions for the strengths and weaknesses of online learning, we can have a much more productive discussion.

Places to Start

Age: I suspect we’ll find that — on average — older students did better with online classes than younger ones. My friends who teach college/high school don’t love online teaching, but they don’t seem quite as overwhelmed/defeated as those who teach younger grades.

Additional Technology: Is it better to have a simple Zoom-like platform with occasional breakout sessions? Does it help to use additional, elaborate programs to supplement online learning?

Discipline: Perhaps online teaching worked better with one kind of class (science?) than another (physical education?).

Personality: Although most high school students I know emphatically prefer in-person classes, I do know two who greatly prefer the online version. Both really struggle negotiating adolescent social networks; they’ve been frankly grateful to escape from those pressures and frustrations.

Teachers’ personalities could matter as well. Some of us comfortably roll with the punches. Some of us feel set in our ways.

Administration: Did some school leaders find more effective ways to manage transitions and support teachers and students? The question “does online learning work” might get different answers depending on the managerial skill supervising the whole process. (In my work, I find teachers appreciated decisiveness and clear communication above all else. Even when they didn’t like the decision itself, they liked knowing that a decision had been made.)

SES: No doubt the socio-economic status (SES) of school districts made a big difference. It’s hard to run online classes in schools and communities that don’t have money for technology, or infrastructure to support its use.

Pedagogy: Do some styles of teaching work better online? Or — a slightly different version of this question — do teachers and schools with experience “flipping the classroom” have greater success with an online model?

Teacher Experience: Perhaps well-seasoned teachers had more experience to draw on as they weathered the muddle? Or, perhaps younger teachers — comfortable with tech, not yet set in their ways — could handle all the transitions more freely?

Country/Culture: Do some countries or cultures manage this kind of unexpected social transition more effectively than others?

Two Final Points

First: We should, I think, expect complex and layered answers to our perfectly appropriate question.

In other words: online learning (a la Covid) probably worked well for these students studying this topic in this country using this technology. It was probably so-so for other students in other circumstances. No doubt it was quite terrible for still other students and disciplines and pedagogies.

Second: I myself have long been skeptical of the idea that “online learning is the future of education (and everything else)!”

And yet, I don’t think we can fairly judge the validity of that claim based on this last year’s experience.

After all: most teachers and schools and students didn’t get well-designed and deliberately-chosen online education. They got what-can-we-throw-together-with-grit-and-hope online education.

Of course that didn’t work as well as our old ways (for most students). Nothing worked well: restaurants struggled to adjust. The travel industry struggled. Retail struggled.

Yes: I think that — for almost everybody learning almost everything — in-person learning is likely to be more effective. But I myself won’t judge the whole question based on this year’s schooling.

We all benefit from forgiveness for our lapses and muddles during Covid times.

Let’s learn what we reasonably can about online education, and use that experience to improve in-person and remote learning in the future.

Putting It All Together: “4C/ID”
Andrew Watson

We’ve got good news and bad news.

Good news: we’ve got SO MUCH research about learning that can guide and inform our teaching!

Bad news: we’ve got SO MUCH research about learning that…well, it can honestly overwhelm us.

I mean: should we focus on retrieval practice or stress or working memory limitations or handshakes at the door? How do we put all these research findings together?

Many scholars have created thoughtful systems to assemble those pieces into a conceptual whole. (For example: here, and here, and here, and here.)

Recently, I’ve come across a system called 4C/ID — a catchy acronym for “four component instructional design.” (It might also be R2D2’s distant cousin.)

First proposed by van Merriënboer, and more recently detailed by van Merriënboer and Kirschner, 4C/ID initially strikes me as compelling for two reasons.

Reason #1: Meta-analysis

Here at Learning and the Brain, we always look at research to inform our decisions. Often we look at one study — or a handful of studies — for interesting findings and patterns.

Scholars often take another approach, called “meta-analysis.” When undertaking a meta-analysis, researchers gather ALL the studies that fit certain criteria, and aggregate their findings. For this reason, some folks think of meta-analytic conclusions as very meaningful. *

A recent meta-analysis looked at studies of 4C/ID, and found … well … found that it REALLY HELPS. In stats language, it found a Cohen’s d of 0.79.

For any one intervention, that’s a remarkably high number. For a curriculum and instruction planning system, that’s HUGE. I can’t think of any other instructional design program with such a substantial effect.

In fact, it was this meta-analysis, and that Cohen’s d, that prompted me to investigate 4C/ID.

Reason #2: Experience

Any substantial instructional planning concept resists easy summary. So, I’m still making my way through the descriptions and diagrams and examples.

As I do so, I think: well, it all just makes a lot of sense.

As you can see from this graphic, the details get complex quickly. But (I think) the headlines are:

[Figure: A graphical view on the four components: (a) learning tasks, (b) supportive information, (c) procedural information, and (d) part-task practice. Author: Jeroen J. G. van Merriënboer]

… ensure students know relevant procedures fluently before beginning instruction

… organize problems from simple to complex, aiming to finish with “real-life” tasks

… create varied practice

… insist on repetition

And many others. (Some summaries encapsulate 4C/ID in 10 steps.)

None of that guidance sounds shocking or novel. But, if van Merriënboer and Kirschner have put it together into a coherent program — one that works across grades and disciplines and even cultures — that could be a mighty enhancement to our practice.

In fact, as I review the curriculum planning I’m doing for the upcoming school year, I think: “I’m trying to do something like this, but without an explicit structure to guide me.”

In brief: I’m intrigued.

The Water’s Warm

Have you had experience with 4C/ID? Has it proved effective, easy to implement, and clear? The opposite?

I hope you’ll let me know in the comments.


* Others, however, remain deeply skeptical of meta-analysis. The short version of the argument: “garbage in, garbage out.” In this well-known post, for instance, Robert Slavin has his say about meta-analysis.