neuroscience – Page 2 – Education & Teacher Conferences
Understanding Adolescents: Emotion, Reason, and the Brain
Andrew Watson

Kurt Fischer — who helped create Learning and the Brain, and the entire field of Mind, Brain, and Education — used to say: “when it comes to the brain, we’re all still in kindergarten.”

He meant: the brain is so FANTASTICALLY complicated that we barely know how little we know.

Yes, we can name brain regions. We can partially describe neural networks. Astonishing new technologies let us pry into all sorts of secrets.

And yet, by the time he left the program he founded at Harvard, Dr. Fischer was saying: “when it comes to the brain, we’re now just in 1st grade.”

The brain is really that complicated.

Fascinating Questions

Adolescents — with their marvelous and exasperating behavior — raise all sorts of fascinating questions.

In particular, we recognize a real change in their ability to think abstractly.

Unlike their younger selves, teens can often “infer…system-level implications…and lessons that transcend the immediate situation.”

We can say in a general way that, well, teens improve at this cognitive ability. But: can we explain how?

More specifically, can we look at their brains and offer a reasonable explanation? Something like: “because [this part of the brain] changes [this way], teens improve at abstract thinking.”

A research team at the University of Southern California wanted answers.

Networks in the Brain

These researchers showed 65 teens brief, compelling videos about “living, non-famous adolescents from around the world.” They discussed those videos with the teens, and recorded their reactions.

And then they replayed key moments while the teens lay in an fMRI scanner.

In this way, they could (probably) see which brain networks were most active when the teens had specific or abstract reactions.

For example, the teen might say something specific and individual about the teen in the video, or about themselves: “I just feel so bad for her.”

Or, she might say something about an abstract “truth, lesson, or value”: e.g., “We have to inspire people who have the potential to improve society.”

If some brain networks correlated with specific/individual statements, and other networks with abstract/general statements, that correlation might start to answer this question.

As usual, this research team started with predictions.

They suspected that abstract statements would correlate with activity in the default mode network.

And, they predicted that concrete statements would correlate with activity in the executive control network.

What did they find?

Results and Conclusions

Sure enough, the results aligned with their predictions. The orange blobs show the teens’ heightened neural activity when they made abstract statements.

And: those blobs clearly overlap with well-established regions associated with the Default Mode Network.

Figure caption: Neural correlates of abstract construals. Results from a whole-brain analysis reveal regions whose activity while responding to documentary-style stories positively correlates with abstract construal scores from the interview (N = 64). The image is subjected to a cluster-forming threshold of P < 0.001, and cluster extent thresholded at k = 177 voxels (for illustrative purposes). The inset image depicted in purple corresponds to 6 mm spherical ROIs located in the DMN. The inset scatterplot depicts participants’ average parameter estimates (β) from all voxels within the identified ROIs relative to abstract construal scores. Each dot represents one participant. Inf-post PMC = inferior/posterior posteromedial cortices; DMPFC = dorsomedial prefrontal cortex; VMPFC = ventromedial prefrontal cortex. © The Author(s) 2021. Published by Oxford University Press. Open Access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/).

The study includes a second (even more intricate!) picture of the executive control network — and its functional overlap with concrete statements.

The headline: we can see a (likely) brain basis for concrete and abstract thought in teens.

Equally important, a separate element of the study looks at the role of emotion in adolescent cognition. (One of the study’s authors, Dr. Mary Helen Immordino-Yang, has worked on this topic for years.)

In brief, emotions don’t necessarily limit thinking. They can focus and motivate thinking:

“Rather than interfering with complex cognition, emotion in the context of abstract thinking may drive adolescents’ thinking forward.”

The much-discussed emotionality of teenage years might not be a bug, but a feature.

A Final Note

I’m especially happy to share this research because its lead author — Dr. Rebecca Gotlieb — has long been the book reviewer for this blog.

If you’ve ever wondered how she knows so much about the books she reviews, well, now you know.

Because of the work that she, and so many other researchers, are doing, Dr. Fischer could now say that we’re entering 2nd grade in our understanding of the brain…


A Final Final Note

Neuroscience studies always include more details than can be clearly summarized in a blog post. For those of you who REALLY want to dig into the specifics, I’ll add three more interesting points.

First: knowing that scientific research focuses too much on one narrow social stratum, the researchers made a point to work with students who aren’t typically included in such studies.

In this case, they worked with students with a lower “socio-economic status” (SES), as measured by — among other things — whether or not they received free or reduced-price lunch. Researchers often overlook low-SES students, so it’s exciting this team made a point to widen their horizons.

Second: researchers found that IQ didn’t affect their results. In other words, IQ doesn’t capture “abstract social reasoning” — which suggests IQ might be less important than some claim it to be.

Third: teachers typically think of “executive function” as a good thing. In this study, LOWER activity in the executive control network ended up helping abstract social thought.

Exactly what to make of this result — and how to use it in the classroom — is far from clear. But it underlines the dangers of oversimplification of such studies. Executive functions are good — obviously! But they’re not always beneficial for everything.


Rebecca Gotlieb, Xiao-Fei Yang, Mary Helen Immordino-Yang, Default and executive networks’ roles in diverse adolescents’ emotionally engaged construals of complex social issues, Social Cognitive and Affective Neuroscience, 2021, nsab108, https://doi.org/10.1093/scan/nsab108

Two New Ways of Thinking About Memory
Andrew Watson

In our classroom work, we teachers focus on learning; in their research, psychologists and neuroscientists often focus on memory. We have, in other words, different frameworks for talking about the same topic.


When I find one review article that provides TWO fresh ways to understand memory and learning, well, that’s worth sharing.

Humans have MANY memory systems with many daunting (and overlapping) names: working memory, declarative memory, semantic memory, and so forth.

In our day-to-day lives, we often focus on episodic memory. As the name suggests, this memory strand acquires vividly detailed pictures of specific events:

My birthday party (I can describe the cinnamon in the chocolate cake, and why my brother was looking so grouchy).

The time you found a stranger’s wallet (You can remember the chilly, opaque puddle from which you plucked the wallet, and the stranger’s shocked gratitude when you sleuthed down his phone number to return it).

The day the principal literally dropped the mic (Students still talk about the hollow bang and the agonizing reverb when the mic hit the stage floor in the gym).

Episodic memories fill our scrapbooks and dinner-table stories.

Over time, episodic memories gradually turn into semantic memories: general knowledge of abstract facts.

For instance:

At one point, probably in school, you learned that Abraham Lincoln had been assassinated. That evening, you had an episodic memory of learning that truth. You could say which teacher told you; you might wince at the sound of squeaking chalk as s/he wrote on the board.

Over time, however, that detailed episodic memory has become semantic memory. You know the abstract fact (Booth killed Lincoln), but not the rich details of when you learned it.

You no longer remember — episodically — when you learned that fact, but you remember the fact itself — semantically.

In psychology language, your brain semanticized this episodic memory.

Too Much of a Good Thing

Truthfully, we want our students to semanticize most of their learning.

For instance: I don’t want them to know that 3 + 4 = 7 at this specific moment.

I want them to abstract a general, semantic rule: three of something, combined with four more of the somethings, add up to seven somethings. (Unless those somethings are rumors, in which case they add up to a billion.)

I want them to know that the pen is mightier than the sword is an example of metonymy. But I don’t want them to limit their knowledge to that one example.

Instead, I want them to recognize other metonyms — which they can do if they semanticize that example.

At times, however, students can abstract too far.

If they conclude that a roundish number (like 3) plus a pointy number (like 4) add up to seven, then they might conclude that a roundish 8 plus a sharpish 1 add up to seven. In this case, they over-generalized: that is, over-semanticized.

If they conclude that the words pen and sword are always metonyms — that they never mean literally “pen” and “sword” — then they have gone too far.

When learning to speak, children pick up the abstract rule that “-ed” makes words past tense in English. But, they over-semanticize when they say “goed” instead of “went.”

As teachers, we want students to get the balance just right. We want them to translate individual examples into abstract rules.

But: we don’t want them to over-apply those abstract rules to inappropriate situations.
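The “-ed” overgeneralization above can be sketched as a toy rule-plus-exceptions model. (This is my own illustration of the idea, not anything from the review; the verbs and functions are invented for the example.)

```python
# A child's over-semanticized rule: always add "-ed" for past tense.
def naive_past(verb):
    return verb + "ed"

# The balanced version: the general rule, plus memorized exceptions.
IRREGULARS = {"go": "went", "eat": "ate", "run": "ran"}

def learned_past(verb):
    return IRREGULARS.get(verb, verb + "ed")

print(naive_past("go"))      # "goed"  <- the over-applied rule
print(learned_past("go"))    # "went"
print(learned_past("walk"))  # "walked"
```

Getting the balance right means keeping the abstract rule while learning where it doesn’t apply.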

Teaching Implications?

At this point, you might worry: gosh, ANOTHER set of teaching techniques I have to master.

This research team has good news for you: the techniques you’ve heard of at Learning and the Brain conferences help students get this balance right.

That is: retrieval practice helps students get the episodic/semantic balance right.

So do spacing and interleaving.

So does sleep, and (probably) mindfulness and mindful rest.

This episodic/semantic balance is a new way of thinking about old teaching techniques, not a call for new teaching techniques.

Second “New Way”

Authors van Kesteren and Meeter also offer a neuroscientific account of long-term memory formation.

The (very) brief summary goes like this.

We know that both the prefrontal cortex (PFC) and the hippocampus participate in new learning.

In their framework, the PFC helps connect new information with pre-existing mental models (often called schema). And the hippocampus helps organize new information that doesn’t align with a pre-existing mental model/schema.

That is: the brain’s response to new information depends on our current knowledge of it.

If we know something, the PFC does lots of the memory work.

If we know very little, the hippocampus does lots of the memory work.

As always, this neurobiological account wildly oversimplifies a hugely complicated series of events. (This is a blog, after all.)

And, it doesn’t provide new teaching strategies. We don’t “teach the PFC this way, and teach the hippocampus this other way.” (If you hear someone say that, be SUPER skeptical.)

However, it does offer a fascinating theory about the brain activity underlying our amazing mental abilities.

Putting It Together

This post’s title offered “two new ways to think about memory.”

First, teachers can think about converting episodic memories into semantic memories (without going too far).

Second, we can think about the PFC’s role in adding to existing schema, and the hippocampus’s role in developing new schema.

Neither new framework changes your teaching — assuming you’re already using the strategies that you hear about at LatB conferences so frequently. But, both offer us new ways to view our teaching from new perspectives — that is, to use both our PFCs and our hippocampi at the same time.


For earlier thoughts on episodic (also called “autobiographical”) memory vs. semantic memory, click here. And here for Clare Sealy’s discussion of the topic.

Have I Been Spectacularly Wrong for Years? New Research on Handwriting and Learning
Andrew Watson

Long-time readers know my weakness.

I’m usually an easy-going guy. But if you want to see me frantic with frustration, tell me about the superiority of handwriting for taking notes.

Here’s the story.

Back in 2014, two Princeton researchers did a study which concluded that handwritten notes lead to better learning than notes taken on laptops.

That’s a helpful question to have answered, and so I read their study with a mixture of curiosity and gratitude.

Imagine my surprise when I found that their conclusion rests on the assumption that students can’t learn to do new things. (That’s a VERY weird belief for a teacher to have.)

If you believe a student CAN learn to do new things, then the researchers’ data strongly suggest that laptop notes will be better.

Oh, and, by the way, their study does not replicate.

Despite these glaring flaws, people still cite this study — and look at me with pity (contempt?) when I try to convince them otherwise. “But research says so,” they say wearily. I seethe, but try to do so politely.

Today’s Exciting News

When I try to explain my argument, my interlocutor often says something like “handwriting engages more neural processing through kinesthetic yada yada,” and therefore boosts learning.

In the first place, that’s NOT the argument that the Princeton researchers make. It might be true, but that’s changing the subject — never a good way to prove a point.

In the second place, where is the evidence of that claim? I’d love to review it.

To date, no one has taken me up on that offer.

But — [sound of trumpets blaring] — I recently found a post at Neuroscience News with this splendid headline: “Why Writing by Hand Makes Kids Smarter.”

Here’s the first sentence of the article:

Children learn more and remember better when writing by hand, a new study reports. The brains of children are more active when handwriting than typing on a computer keyboard.

“Learn more.” “Remember better.” That’s impressive. At last: the research I’ve been asking for all these years!

Believe it or not, I rather enjoy finding research that encourages me to change my mind. That process reminds me of the power of the scientific method. I believe one thing until I see better evidence on the other side of the argument. Then I believe the other thing.

So, AT LAST, I got to read the research showing that handwriting helps students learn more and remember better.

Want to know what I found?

The Study

The researchers did not test anyone’s learning or memory.

You read that right. This article claims that handwriting improves learning and memory, but they didn’t test those claims.

This research team asked 24 participants — twelve adults and twelve 12-year-olds — to write by hand, or write on a laptop. They then observed the neural regions involved in those tasks.

Based on what they saw, they inferred that handwriting ought to result in better learning.

But they did not test that hypothesis.

So, based on a tiny sample size and a huge leap of neuro-faith, they have concluded that handwriting is better. (And, astonishingly, some big names in the field have echoed this claim.)

The Bigger Picture

Believe it or not, I’m entirely open to the possibility that handwritten notes enhance learning more than laptop notes do.

I’m even open to the possibility that kinesthetic yada yada is the reason.

To take one example, Jeffrey Wammes has done some splendid research showing that — in specific circumstances — drawing pictures helps students remember words and concepts.

If drawing boosts learning, maybe handwriting does too. That’s plausible.

But here’s the thing: before Wammes made his claim, he tested the actual claim he made.

He did not — as the Princeton researchers did — start from the assumption that students can’t learn to do new things.

He did not — as this current research does — extrapolate from neural patterns (of 24 people!) to predict how much learning might happen later on.

Wammes designed a plausible study to measure his hypothesis. In fact, he worked hard to disprove his interpretation of the data. Only when he couldn’t did he admit that — indeed — drawing can boost learning.

Before I believe in the superiority of either handwritten notes or laptop notes, I want to see the study that works hard to disprove its own claims. At present, the best known research on the topic conspicuously fails to meet that test.

Do you know of research that meets this standard? If yes, please let me know!

The Neuroscience of Retrieval Practice
Andrew Watson

What’s the best way for students to practice? Should they review information or procedures? Or, should they try to remember or enact them?

We’ve got scads of research showing that retrieval practice helps brains learn.

That is: if I want to learn the definition of a word I’ve studied, I should try to recall it before I look it up again. (For a handy review, check out RetrievalPractice.org.)

So, we know that retrieval practice works. But: why? What’s happening in the brain that makes it work?

Two Possibilities

We’ve got several possible answers, but let’s focus conceptually on two of them.

Increased neural connections

Reduced neural connections

That is: when I engage in retrieval practice, I push myself to remember X. But it takes me a while to get to X. I might start with S, and then wonder about Y. Perhaps I’ll take a detour to gamma. Eventually, I figure out X.

During this mental work, I both remember X and connect X to all those other (rejected) possibilities: S and Y and gamma. By increasing connections among all these topics, I make it easier to remember X later on. If I accidentally think about S, I can quickly get to X.

Or, maybe the opposite process happens.

The first time I try to remember X, I waste mental time with S and gamma. But, the next time, I’ve gotten better at remembering X, and so I take less time to get there. I can “prune away” extraneous mental connections and thereby simplify the remembering process.

In this account, by reducing the steps involved in remembering X, I see the benefits of retrieval practice.

We Have a Winner (?)

A research team in Europe took on this question, and looked at several studies in this field.

Whenever you start looking at neuroscience research, you should brace yourself for complexity. And, this research is no exception. It’s REALLY complicated.

The short version goes like this. Van den Broek and colleagues identify several brain regions associated with memory formation and retrieval. You might have heard of the angular gyrus. You might not have heard of the inferior parietal lobe. Anyway, they’ve got a list of plausible areas to study.

They then asked: did retrieval practice produce more activity in those regions (compared to review)? If yes, that finding would support the “increased connection” hypothesis.

Or, did retrieval practice result in less activity in those regions? That finding would support the “reduced connection” hypothesis.

The answer? Less activity. At least in the studies van den Broek’s team analyzed, the “reduced connection” hypothesis makes better predictions than the “increased connection” hypothesis.

To be clear: I’ve left out a few other explanations they consider. And: I’ve simplified this answer a bit. If you’re intrigued, I encourage you to look at the underlying review: it’s FASCINATING.

To Sum Up

We have at least a tentative idea about why retrieval practice works.

And: we have SUPER PERSUASIVE evidence that retrieval practice works.

Even though we’re not 100% sure about the why, we should — as teachers — give our students as many opportunities as we can to retrieve.

Good Morning, I Love You: Mindfulness and Self-Compassion Practices to Rewire Your Brain for Calm, Clarity, and Joy by Shauna Shapiro
Rebecca Gotlieb

Shauna Shapiro, an expert in mindfulness and compassion, recently authored Good Morning, I Love You: Mindfulness and Self-Compassion Practices to Rewire Your Brain for Calm, Clarity, and Joy. In this book she draws on both scientific evidence and ancient wisdom to offer practices and thoughts to support readers’ well-being. Mindfulness is a way of living that allows us to pause and observe situations calmly. It has been associated with a host of psychological, physical, and cognitive benefits. Shapiro reports that only one-third of Americans are truly happy. As such, this book is relevant to a wide audience seeking to increase its happiness and well-being. Shapiro ends each chapter with mindfulness practices readers can try and with pearls of wisdom that inspire. Daniel Siegel, author of Aware, contributed the foreword to this book.

One of the most inspiring insights from neuroscience, according to Shapiro, is that our brains change throughout life. By engaging in mindful practice, we can increase our psychological resources and change our brains. She emphasizes that change occurs in small increments, and continual practice matters most. Even just twelve minutes of daily mindfulness practice has been linked to improved outcomes. Specifically, mindfulness has been shown to increase or improve empathy, compassion, social relations, ethical decision-making, happiness, attention, memory, creativity, immune function, sleep, and cardiovascular functioning. It also reduces depression, anxiety, stress, pain, and mind wandering.

Shapiro contends that intention, attention, and attitude are the three pillars of mindfulness. Intention involves building a connection to and being guided by one’s aspirations and motivation. What we attend to becomes the basis of our mental life. People experience tremendous temptation to multitask. Doing so, however, decreases productivity and happiness. Shapiro emphasizes that we should have a kind and curious attitude about that to which we attend. For example, when we consider our own painful emotions with kindness and curiosity; when we understand that pain, but not suffering, is inevitable; and when we label our emotions and appreciate that they serve a purpose, we can then develop self-compassion, learn from our failures, and engage in better behaviors for our physical health and the health of our relationships. Too many people today feel lost and lonely. Meditation can help us appreciate that we all belong to one another and that everything and everyone is connected.

Shapiro suggests a host of practices for meditating and living mindfully. These include: bringing attention to one’s breath, writing compassionate letters to oneself, forgiving oneself and others, smiling more, writing letters of gratitude, doing daily random acts of kindness, looking for the good in others, celebrating others’ happiness, and experiencing awe and wonder.  Because mindfulness is a way of living and not just a set of practices or a type of meditation, Shapiro describes how to introduce mindfulness into sex, eating, decision-making, the workplace, and parenting. Doing so can help us savor experiences, connect to our bodily intuitions, and move through life with less urgency and fear.

Shapiro concludes with the story of an especially important mindfulness practice for her. Amid a painful divorce, she began starting each day by saying “Good Morning, Shauna” and eventually “Good Morning, Shauna. I love you.” Shapiro spoke about this practice in a TEDx talk. She has seen in her own experience healing from her divorce and, with many other individuals whom she has supported, how this simple practice can transform lives. Good Morning, I Love You can help anyone begin a personal mindfulness journey to improved well-being.

Shapiro, S. (2020). Good Morning, I Love You: Mindfulness and Self-compassion Practices to Rewire Your Brain for Calm, Clarity, and Joy.

“How We Learn”: Wise Teaching Guidance from a Really Brainy Guy
Andrew Watson

Imagine that you ask a neuro-expert: “What’s the most important brain information for teachers to know?”

The answer you get will depend on the expertise of the person you ask.

If you ask Stanislas Dehaene, well, you’ll get LOTS of answers — because he has so many areas of brain expertise.

He is, for example, a professor of experimental cognitive psychology at the Collège de France, and Director of the NeuroSpin Center, where they’re building the largest MRI gizmo in the world. (Yup, you read that right. IN THE WORLD.)

He has in fact written several books on neuroscience: neuroscience and reading, neuroscience and math, even neuroscience and human consciousness.

He’s also President of a newly established council to ensure that teacher education in all of France has scientific backing: the Scientific Council for Education. (If the United States had such a committee, we could expunge Learning Styles myths from teacher training overnight.)

If that’s not enough, Dehaene is interested in artificial intelligence. And statistics. And evolution.

So, when he writes a book called How We Learn: Why Brains Learn Better than Any Machine…for Now, you know you’re going to get all sorts of wise advice.

Practical Teaching Advice

Dehaene wants teachers to think about “four pillars” central to the learning process.

Pillar 1: Attention

Pillar 2: Active engagement

Pillar 3: Error feedback

Pillar 4: Consolidation

As you can see, this blueprint offers practical and flexible guidance for our work. If we know how to help students pay attention (#1), how to help them engage substantively with the ideas under discussion (#2), how to offer the right kind of feedback at the right time (#3), and how to shape practice that fosters consolidation (#4), we’ll have masterful classrooms indeed.

Learning, of course, begins with Attention: we can’t learn about things we don’t pay attention to. Following Michael Posner’s framework, Dehaene sees attention not as one cognitive process, but as a combination of three distinct cognitive processes.

Helpfully, he simplifies these processes into three intuitive steps. Students have to know:

when to pay attention

what to pay attention to, and

how to pay attention.

Once teachers start thinking about attention this way, we can see all sorts of new possibilities for our craft. Happily, he has suggestions.

Like other writers, Dehaene wants teachers to focus on active engagement (pillar #2). More than other writers, he emphasizes that “active” doesn’t necessarily mean moving. In other words, active engagement requires not physical engagement but cognitive engagement.

This misunderstanding has led to many needlessly chaotic classroom strategies, all in the name of “active learning.” So, Dehaene’s emphasis here is particularly helpful and important.

What’s the best way to create cognitive (not physical) engagement?

“There is no single miraculous method, but rather a whole range of approaches that force students to think for themselves, such as: practical activities, discussions in which everyone takes part, small group work, or teachers who interrupt their class to ask a difficult question.”

Error Feedback (pillar #3) and Consolidation (#4) both get equally measured and helpful chapters. As with the first two, Dehaene works to dispel myths that have muddled our approaches to teaching, and to offer practical suggestions to guide our classroom practice.

Underneath the “Four Pillars”

These four groups of suggestions all rest on a sophisticated understanding of what used to be called the “nature/nurture” debate.

Dehaene digs deeply into both sides of the question to help teachers understand both the brain’s adaptability (“nurture”) and the limits of that adaptability (“nature”).

To take but one example: research with babies makes it quite clear that brains are not “blank slates.” We come with pre-wired modules for processing language, numbers, faces, and all sorts of other things.

One example in particular surprised me: probability. Imagine that you put ten red marbles and ten green marbles in a bag. As you start drawing marbles back out of that bag, a 6-month-old will be surprised — and increasingly surprised — if you draw out green marble after green marble after green marble.

That is: the baby understands probability. They know it’s increasingly likely you’ll draw a red marble, and increasingly surprising that you don’t. Don’t believe me? Check out chapter 3: “Babies’ Invisible Knowledge.”
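To see why each additional green marble gets more and more surprising, we can compute the probability of an all-green run directly. This is my own back-of-envelope arithmetic (not from the study), assuming ten red and ten green marbles drawn without replacement:

```python
from fractions import Fraction

def p_all_green(k, greens=10, reds=10):
    """Probability of drawing k green marbles in a row, without replacement."""
    p = Fraction(1)
    total = greens + reds
    for i in range(k):
        p *= Fraction(greens - i, total - i)  # one fewer green, one fewer marble
    return p

for k in range(1, 6):
    print(f"{k} green(s) in a row: {float(p_all_green(k)):.3f}")
# 1 green(s) in a row: 0.500
# 2 green(s) in a row: 0.237
# 3 green(s) in a row: 0.105
# 4 green(s) in a row: 0.043
# 5 green(s) in a row: 0.016
```

Each successive green roughly halves the odds, which tracks the babies’ mounting surprise.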

Of course, Dehaene has fascinating stories to tell about the brain’s plasticity as well. He describes several experiments — unknown to me — where traumatized rats were reconditioned to prefer the room where the traumatizing shock initially took place.

He also tells the amazing story of “neuronal recycling.” That is: the neural real-estate we train to read initially housed other (evolutionarily essential) cognitive functions.

Human Brains and Machine Learning

Dehaene opens his book by contemplating definitions of learning — and by contrasting humans and machines in their ability to do so.

By one set of measures, computers have us beat.

For instance, one computer was programmed with the rules of the game Go, and then trained to play against itself. In three hours, it became better at the game than the human Go champion. And, it got better from there.

However, Dehaene still thinks humans are the better learners. Unlike humans, machines can’t generalize their learning. In other words: that Go computer can’t play any other games. In fact, if you changed the size of the Go board even slightly, it would be utterly stumped.

And, unlike humans, it can’t explain its learning to anyone else.

And, humans need relatively little data to start learning. Machines do better than us when they can crank millions of calculations. But, when they calculate as slowly as we do, they don’t learn nearly as much as we do.

As his subtitle reassures us, brains learn better than any machine. (And, based on my conversation with him, it’s clear that “…for now” means “for the long foreseeable future.”)

Final Thoughts

At this point, you see what I mean when I wrote that Dehaene has an impressive list of brain interests, and therefore offers an impressive catalog of brain guidance.

You might, however, wonder if this much technical information ends up being a little dry.

The answer is: absolutely not.

Dehaene’s fascination with all things brain is indeed palpable in this book. And, his library of amazing studies and compelling anecdotes keeps the book fresh and easy-to-read. I simply lost track of the number of times I wrote “WOW” in the margin.

This has been a great year for brain books. Whether you’re new to the field, or looking to deepen your understanding, I recommend How We Learn enthusiastically.

https://www.youtube.com/watch?time_continue=62&v=23KWKoD8xW8&feature=emb_logo

What Students Want to Know about Brains and Learning, Part II
Andrew Watson

I recently wrote about a conversation I had with high school students in Japan. What questions did they have about brains and learning? What answers did they have?

In that last post, I wrote about the more school-centric questions that students asked: “what is the optimal amount of time to study?” I also noted their welcome skepticism: “does studying the night before a test really do nothing?”

Of course, lots of their questions pushed the boundaries of our knowledge. Several, in fact, pushed the boundaries of the plausible.

What Can We Know? What Can We Do?

How many signals does the brain send throughout the body in a second?

Well, there’s a question you don’t hear every day.

The answer is: I have no idea. In fact, I can’t imagine how we’d start answering that question. We’ve got BILLIONS of neurons (say, 85 billion for a round number guess). Each one makes something like 10,000 connections with other neurons.

How many signals do they send to each other? How many signals do they send to the body? The mind delights in (and boggles at) the calculations.
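Just for fun, the boggling starts with a single multiplication. Here’s a back-of-the-envelope sketch using the round numbers above (both figures are rough estimates, not precise counts):

```python
# Back-of-the-envelope estimate, using the rough figures above
neurons = 85_000_000_000        # ~85 billion neurons
synapses_per_neuron = 10_000    # ~10,000 connections each

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:,}")    # 850,000,000,000,000
```

Even before we count a single signal, the number of wires alone lands in the hundreds of trillions.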

How much of your brain can you lose and survive?

Hard to say, precisely.

Here’s an article that will blow your mind: a man whose ventricles were so enlarged that he barely had a brain left. And, he more-or-less did just fine.

If you don’t believe me, check out the images. I mean: WOW.

Does your gut health/what you eat have a significant effect on your brain?

Every day I see more research on this topic. In brief: YES.

You’ll often hear the gut called “the second brain.” I suspect we’ll see lots more news in this field over the next decade.

Can I transfer my brain into a computer?

I have yet to meet the neuroscientist who thinks so. Given that the brain is roughly as complex as the universe, I suspect it will take a while for us to do so.

Unless, of course, students keep asking high-quality questions like these. In which case, we’ll have the job done by next Thursday.

“How You Got to Be So Smart”: The Evolution of our Brains
Andrew Watson

When did learning first begin?

For me, individually, you might say it began when I first attended preschool. But, truthfully, learning began well before then.

I learned how to walk and speak, and to do (a very few of) the things my parents told me to do.

In the womb, I even learned to recognize sounds – like my mother’s voice.

But, let’s go much further back.

When did our species start learning? Or, before then, great apes? Or, even earlier, mammals?

Did dinosaurs learn?

How about those little one-celled organisms that developed when life began, over 3.5 billion years ago? Did they do anything we could meaningfully call “learning”?

Paul Howard-Jones answers that question with a resounding yes. And, most intriguingly, the biological mechanisms that allowed them to learn still help us to do so…all these billions of years later.

As Howard-Jones writes, learning “changes not just our mental world but also our biological form.” The basic biological and chemical mechanisms necessary for the earliest kinds of learning still help us learn today.

The Story Begins

Let’s start with E. coli. This single-celled organism has a bad rep, but we’ve got lots of very useful E. coli in our guts. And, they can – in a manner of speaking – learn.

In order to eat, E. coli have to move. And, they have two options for movement. If they’re successfully getting nutrition as they move, they want to keep going straight. If they’re not, they want to move randomly about – until they stumble into a better path to follow. Once they do, they start going straight again.

To accomplish this goal, E. coli need to “remember” how much nutrition they were getting a few seconds ago, and compare that level to the current intake. Remembering, of course, is a kind of learning.

Howard-Jones helpfully describes the cellular mechanism that allows this memory comparison to happen. It’s a little complicated: think “methyl groups” and “receptors.” But, this clever and efficient system allows cells to remember, and thereby to eat and flourish. (Check out pages 24-5 for a full version of this story.)
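To make the comparison rule concrete, here’s a toy sketch in Python. (This illustrates only the logic of the behavior – the real mechanism runs on methyl groups and receptors, not code – and the function name and values are my own invention.)

```python
def choose_movement(nutrition_then, nutrition_now):
    """Toy model of E. coli chemotaxis: compare nutrient levels
    a few 'seconds' apart and pick a movement mode."""
    if nutrition_now > nutrition_then:
        return "run"      # things are improving: keep swimming straight
    else:
        return "tumble"   # no improvement: reorient in a random direction

# The 'memory' is just the previous reading, carried forward in time.
print(choose_movement(0.2, 0.5))   # run – the gradient is getting richer
print(choose_movement(0.5, 0.2))   # tumble – time to try a new direction
```

The whole “memory” here is one stored number from a moment ago – which is roughly what the cell’s chemistry accomplishes.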

Learning gets even cooler from there.

As evolution brought single-celled organisms together into multicellular life – from which sprang reptiles and amphibians and mammals and you – it produced ever-more-intricate systems for learning.

For instance, neurons evolved to ensure that multi-cellular organisms could coordinate their movements. (If each cell did its own thing, then we’d get no benefits from having all those cells.)

And, of course, neurons now form the biological basis of learning that happens in our brains.

Vertebrates and Primates

As evolution led to the development of more-and-more complex organisms, so too it produced increasingly complex kinds of learning: the ability to organize information by association, for example, or to recall something that happened yesterday.

The Evolution of the Learning Brain devotes considerable time to primate development. In particular, it asks this question: since most evolutionary developments favor specialization, why did our species prove so successful? After all, our brains allow for great cognitive flexibility – the ability to be generalists, not specialists.

Howard-Jones answers this question by looking at the extraordinary climatic and geological upheaval at the time of our evolution.

Primates developed cognitive complexity – probably – in order to keep track of larger and larger social networks.

For instance, female vervet monkeys recognize their own offsprings’ cries. When they hear their children cry, unsurprisingly, they look at the child. When they hear someone else’s child cry, amazingly, they look at that child’s mother.

The story gets even more complicated when we look at chimpanzee dominance networks.

At the same time, later primates developed basic “theory of mind”: the ability to think about what others are thinking.

In one astonishing study, chimpanzees preferred to steal back food when researchers weren’t present – or when the container from which they stole the food was opaque. That is, chimps can think about what others can see, and behave accordingly.

All this complexity – social intelligence, theory of mind – proved especially important during the opening of the Great Rift in Africa: geological changes that led to rapidly changing climate and terrain. In this unusual set of circumstances, a species (like, say, Homo sapiens) with extra cognitive complexity was in a better position to manage upheavals.

As Howard-Jones writes:

The unique geology of the Rift Valley …is thought to have produced extreme climate variability with cycles lasting 400,000 or 800,000 years. […]

This inconsistent environment provided a novel genetic testing ground in which different hominin species were pursuing different approaches to survival, including generalizing vs. specializing. […]

Rather than evolving to fit one change, [Homo sapiens] evolved greater ability to respond to change itself.

Wow.

Classroom Implications

How should this understanding of evolution and learning shape our classroom practice?

Howard-Jones remains helpfully modest in answering this question. As he writes:

Evolution cannot tell us how to teach and learn, but it can help us frame and understand this research.

In his closing chapters, therefore, Howard-Jones encourages us to think about teaching with this perspective.

He suggests several insights about a) engagement, b) building of knowledge, and c) consolidation of learning that have evolutionary and neuro-biological grounding.

For instance: engagement. How can we help students pay attention?

Teachers have long known that novelty helps students focus. (Evolution helps explain why. Anything new could be a threat. Or, it could be food!)

Howard-Jones points out that shared attention is itself motivating:

Our strong motivation to share attention is a uniquely human characteristic that may have played a key role in our ancient cultural accumulation of knowledge, as it does today. When self-initiated, this capturing of shared attention also leads to reward-related brain activation.

In other words: schooling works because we invite our students to look with us, and to look with each other.

Another practical application: embodied cognition. Howard-Jones details several studies where a particular kind of movement helps students learn particular content.

He also explains why numbers and reading – more cultural practices than evolved cognitive capabilities – prove an enduring challenge to our students.

In Sum

Howard-Jones brings together many disciplines and a few billion years of history to tell this story.

Some readers might wish for more immediate, concrete teaching strategies. Some specialists, no doubt, disagree with his interpretation of the evidence.

I recommend this book so highly not because it tells us to do particular things, but because it helps us think in new and fresh ways about the work we have to do.

If we understand the evolutionary and neuro-biological sources of our difficulties and our enormous potential, we can think more realistically about avenues of success in schools.

In the words of Howard-Jones’s subtitle, we’ll understand how we got to be so smart. We might even understand how to get smarter still.

Constructivism: In The Brain, In The Classroom
Andrew Watson

In this helpfully provocative post, Mike Hobbiss argues that we often misapply the theory of constructivism.

For Hobbiss, the theory makes perfect sense when describing learning. However, he worries that constructivism is unlikely to be helpful as a theory of pedagogy.

As he argues, drawing on extensive neuroscientific research, we can help students construct their own understandings by creating multiple, partial, and overlapping mental schemas.

That kind of “constructivism as learning” might not be best fostered by “constructivism as teaching.”

Hobbiss offers this potentially controversial argument in measured and thoughtful tones. Even if you disagree with him — perhaps especially if you disagree with him — his ideas merit a careful read.

Right Brained Language Learning (And Other Reasons to Ignore Brain Myths)
Andrew Watson

As you know from reading this blog, cognitive psychology offers teachers dozens of helpful ideas.

We’re all better teachers when we enhance executive function and foster attention and manage working memory load.

Alas, over the years, many brain myths have gathered to clutter our thinking.

No, we don’t use only 10% of our brains.

No, the “learning pyramid” doesn’t tell you anything useful. (It doesn’t even make sense.)

No, learning styles aren’t a thing.

“Left-Brained” Skepticism

You might think I’m using my “rational, left-brained thinking skills” to offer these skeptical opinions.

Alas, the whole left brain/right brain distinction is itself another myth.

In some cases, brain functions happen more on one side of the brain than the other. But, even when we’re undertaking that particular function, we’re using brain regions from all over to get the mental job done.

A case in point…

“Lateralized” Language. Or, not.

Dedicated left-brain/right-brain advocates often point to language function to make their case.

For instance, Broca’s area — which helps us produce coherent speech — is in the left hemisphere. (So is Wernicke’s area, which helps us understand speech.)

Given these truths, they argue that speech is a “lateralized” brain function. In other words, it takes place in one hemisphere of the brain, not the other.

This claim, however, suffers from several flaws.

In the first place, Broca’s area is in the left hemisphere for 95% of right-handed people. But, that’s not 100%. And, that percentage falls to 50% for left-handed people.

Not so left-lateralized after all.

A second problem: language learning requires lots of right-hemisphere participation.

In a recent study, activity in the right hemisphere predicted participants’ later success in learning Mandarin. In fact, “enhanced cross-hemispheric resting-state connectivity [was] found in successful learners.”

Phrases like “cross-hemispheric resting-state connectivity” might cause your eyes to glaze over. But, this key point jumps out: we can’t meaningfully ascribe language function to one hemisphere or another.

All complex mental activities require activation across the brain.

Teaching Implications

If you get teaching advice that you should do XYZ because a particular mental function takes place in a particular hemisphere: STOP.

Almost certainly, this claim

a) isn’t meaningfully accurate, and

b) comes from sources who don’t know as much about brains as they think they do.

Instead, ask yourself: does this guidance make sense even without claims about lateralization?

If yes, go ahead! If no, don’t bother.

In other words: use your whole brain and be skeptical.