From Fashion to Coding: Computational Thinking as a Cognitive Tool


The months of September and October are known for their succession of fashion weeks around the world. Every year, I browse through the multitude of pictures of models sporting extravagant looks that I will most likely never buy or wear. Part of it is that I enjoy staying up-to-date with the fashion world, but really, I just like to daydream about designs that I wish I could create. This year, what caught my eye on the screen wasn’t a wild hairdo, or a unique accessory, but a sparkling message below my Google search bar that was inviting me to start creating my own designs.

The message redirected me to the Made with Code website, a Google initiative that showcases projects connecting fashion, music, art, film, and other student interests with concepts of computer science. The featured project on the website laid out the relevance of programming in the fashion world and described how, this year, students collaborated to create code-animated LED dresses that fashion designer Zac Posen included in his show during fashion week. I was aware that “coding” had become a buzzword lately, but how did it make it into the fashion world?

And more importantly, why?

It is expected that the technology sector will keep growing in the near future. Computer Science (CS) jobs may be among the highest paying over the next decade1, and with CS playing a role in everything from research to fashion, reports suggest that technical competencies will be in increasingly high demand2.

If a primary goal of education is to prepare students to be successful adults, then this changing professional landscape should be reflected in the ways (and what) we teach. Coding is a practice linked to “computational thinking” – a skill that has been advocated as a way to promote 21st century problem solving across disciplines3. This type of thinking is already spreading beyond the fashion world into a variety of compelling educational projects (e.g., the National Science Foundation, the Next Generation Science Standards, the International Society for Technology in Education, the Computer Science Teachers Association, and the College Board).

So what is computational thinking? And why is it catching on?

Problem solving has always been an integral part of our personal and professional lives, and the rise of computers has given us new ways to reflect on, structure, and optimize the ways we solve problems. Computational thinking is the set of cognitive skills that lend themselves to breaking a problem down and expressing its solution in a way that computers can understand. It is a way of translating how we think into how computers think–a challenge that both stretches our own perspective and provides great collaborative power between humans and machines. It involves the application of skills that we already use in our daily lives without knowing it, like algorithmic thinking (e.g., following steps in a recipe), pattern recognition (e.g., adapting to traffic on your daily commute), abstraction (e.g. zooming from an outlined map of the world to specific details of a city), and problem decomposition (tackling a complicated problem piece by piece)4.
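To make these skills concrete, here is a minimal, purely illustrative Python sketch of the kind of problem the code-animated LED dresses play with: animating a row of lights. The function names and the tiny three-LED “dress” are hypothetical and are not taken from Made with Code or any project mentioned here; the point is only to show decomposition, abstraction, and algorithmic thinking in a few lines.

```python
# Illustrative sketch only: decomposing "animate a row of LEDs" into small steps.

def blink_state(step, period=4):
    # Pattern recognition / abstraction: one simple rule describes every blink cycle.
    return step % period < period // 2

def build_frame(step, num_leds=3):
    # Problem decomposition: a single frame is just the blink rule applied to each LED.
    return [blink_state(step + led) for led in range(num_leds)]

def animate(steps=8, num_leds=3):
    # Algorithmic thinking: repeat the same small steps in order, like following a recipe.
    return [build_frame(step, num_leds) for step in range(steps)]

if __name__ == "__main__":
    for frame in animate():
        print("".join("*" if on else "." for on in frame))
```

Even a toy example like this mirrors the skills above: break the problem into pieces, find the repeating pattern, and express the solution as steps a computer can follow.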

Computational thinking is therefore a collection of mental tools from the field of computer science that can help students solve problems and design systems that are responsive to human needs, regardless of subject area.

What is computational thinking’s full potential for learning?

Any time computers are used in problem solving situations, computational thinking is needed. However, research suggests that mastering this type of thinking may also be related to more general capacities. Researchers have looked at the link between computational thinking, coding, and cognitive skills like problem solving to help us understand that interaction. For instance, Ambrosio et al. (2014) administered a set of four tests to 12 introductory programming students to understand which skills needed to be developed for students to employ computational thinking.5 Their tests targeted four cognitive processes involved in programming: spatial reasoning, deductive reasoning, arithmetic reasoning, and attention to detail. The authors crossed students’ cognitive abilities (based on their scores from these four tests) with their performance in an introductory programming course. While arithmetic reasoning and attention to detail did not correlate with programming performance, more general abilities, namely deductive reasoning and spatial reasoning, emerged as crucial dimensions for introductory programming.

In other words, the students’ computational skills seemed to link with their overall deductive reasoning skills and their spatial organization of information.

This underlines the argument that computational thinking, regardless of specific skills, can be thought of as a mental process that students can act upon. More importantly, what computers bring to that equation is what Pea & Kurland (1984) have described as self-consciousness about the process of solving problems. In other words, when students discuss the process of solving problems in programming, they can learn to recognize connections between different problems that are structured in the same way. This heightened awareness of their personal problem-solving processes helps students go beyond learning the language of computers: it promotes metacognitive practices (or thinking about thinking), which can transfer across domains or subject areas6.

Finally, Voskoglou & Buckley (2012) argued that because students will pursue a career that will most likely be influenced by computing in the future, computational thinking should be practiced early in schools to expose students to solving problems using computers7. They also suggested that coding is a great way to learn computational thinking explicitly through active and creative experiences with computers.

How do we actually apply it to the classroom?

While the research makes a case for integrating computational thinking and coding skills into our educational curricula, what would it actually look like in the classroom?

To make the jump from research to practice possible, various initiatives and resources are available to teachers. For me, Google CS First provided step-by-step lessons about fashion design that I could follow. And here are a few other examples to introduce coding and computational thinking in class:

● Introduce computational thinking with this lesson on solving real-world problems through digital representations.
● Look at “Made with Code,” to see what projects students could get involved with, especially in the fashion world.
● Try the “Hour of Code,” an initiative to promote computer science and computational thinking during Computer Science Week (December 7-13).
● Explore Scratch, a coding platform aiming to introduce students to the basics of computational thinking and coding.

These resources can be used as tools to practice computational thinking in digital or physical environments (check out CSunplugged for computational thinking activities without computers).

Let’s look closer at one of these examples: Scratch. It is a programming language for beginners that promotes digital fluency through the creation of videos, stories, music, simulations, interactive art, or games8. Programs are built from blocks that snap together like puzzle pieces, reinforcing programming concepts and computational thinking. Users can also share their creations online through the Scratch community9.

Current Scratch Online Community Manager, Eric Schilling, has watched Scratch users (i.e. “Scratchers”) master much more than coding. He recently described his experiences to the Learning & the Brain Blog, specifically related to the skill he calls “debugging”:

“[It’s about] identifying and breaking down problems into smaller pieces. It’s easy to become discouraged when faced with a big, thorny problem. ‘Why the heck isn’t this working!?’ Scratchers embrace problems as learning opportunities. ‘Failing forward’ as many of them like to call it. Let’s find the root of the issue, come up with a solution, and make this cat superhero save the world (see image above)”.

It turns out there may be benefits to having students explore their imagination and learn real skills at the same time. In a study on digital storytelling, Shelby-Caffey et al. (2014) demonstrated that tools like Scratch could improve students’ analytical as well as technological skills by providing an engaging platform where students could express themselves creatively10. Armoni et al. (2015) also suggested that participating in computing activities early on (like using Scratch) could help increase students’ levels of motivation and self-efficacy to pursue coding, sometimes even facilitating the transition to more professional text-based programming languages like C# or Java, or to computer science studies at the secondary level11. To make it easier to implement, Scratch provides instructional support for educators through their ScratchEd online community.

As an educator, I recognize that the world is changing, and these skills are becoming increasingly necessary for students to become college-, career-, and citizenship-ready in the 21st century. Additionally, though they may seem narrowly focused at first, research continues to suggest that these skills have the potential to transfer to other domains. By incorporating coding into fields like fashion or projects with “cat superheroes”, we can rebrand this type of thinking to appeal to a wider range of students, who can all reap the potential professional and cognitive benefits. Girls Who Code, along with Made with Code and the other sites mentioned above, is part of a growing list of useful resources. Teachers can use them to connect what their students are truly passionate about with skills that will enrich their future.

These initiatives promote the set of cognitive skills needed for coding and computational thinking, and deliver them by engaging students in activities related to their interests. Whether said interest is fashion, music, video games, or problem solving, prioritizing computational thinking in the classroom promises benefits well beyond the computer screen.

References & Further Reading

  1. National Association of Colleges and Employers (2013). Salary Survey. [Survey]
  2. (2015). 2015 technology industry outlook. [Report]
  3. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community?. ACM Inroads, 2(1), 48-54. [Article]
  4. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. [Article]
  5. Ambrosio, A. P., Almeida, L. S., Macedo, J., & Franco, A. H. R. (2014). Exploring core cognitive skills of Computational Thinking. In Psychology of Programming Interest Group Annual Conference 2014 (PPIG 2014) (pp. 25-34). [Paper]
  6. Pea, R. D., & Kurland, D. M. (1984). On the cognitive effects of learning computer programming. New ideas in psychology, 2(2), 137-168. [Paper]
  7. Voskoglou, M. G., & Buckley, S. (2012). Problem solving and computational thinking in a learning environment. arXiv preprint arXiv:1212.0750. [Paper]
  8. Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., & Kafai, Y. (2009). Scratch: programming for all. Communications of the ACM, 52(11), 60-67. [Web Article]
  9. Lee Jr, J. (2009). Scratch Programming for Teens. Boston: Cengage. [Book]
  10. Shelby-Caffey, C., Úbéda, E., & Jenkins, B. (2014). Digital Storytelling Revisited. The Reading Teacher, 68(3), 191-199. [Paper]
  11. Armoni, M., Meerbaum-Salant, O., & Ben-Ari, M. (2015). From scratch to “real” programming. ACM Transactions on Computing Education (TOCE), 14(4), 25. [Paper]

Language Nutrition & the Developing Brain


We’re told that a picture is worth a thousand words, but this adage robs words of much-deserved credit. When you’re an infant with a rapidly developing brain, words are one of the most valuable things you can receive. They’re so valuable that a new initiative in Georgia called “Talk With Me Baby” promotes the importance of “language nutrition”. When it comes to language, infants are sponges: essentially every baby growing up in a normal environment masters the complex language system he or she is exposed to. It helps that the adults around them hold up objects and emphatically enunciate their names, saying something like “ba-na-na” while waving the fruit in the child’s face, but that’s not the only way babies learn. Infants are constantly immersed in linguistic environments that are full of people expressing real and complicated thoughts through varied sentence structures. This provides the rich experience that children need to rapidly become fluent speakers. If you’ve ever tried to learn a new language in your teens or later, you know that this sponge-like capacity doesn’t last forever. Talk With Me Baby makes no bones about why it wants to increase the amount of language that babies are exposed to: hearing more words in infancy promotes stronger language skills, which in turn form a foundation for academic and other successes throughout life.

Why are words so crucial during infancy?

The idea that there’s a sacred window of time in which language can be learned – referred to as a critical period – was first articulated in 19591, but it’s still a widely researched and debated topic. There isn’t yet a consensus on whether attempting to learn (a first) language after the critical period is futile or just more difficult than learning it earlier, and if there is a critical period, researchers still debate about exactly when that period is. Since intentionally raising a child without linguistic input (exposure to language) would be unethical, much of the support for the critical period hypothesis comes from tragic cases of children who grew up in abnormal environments that lacked language. Genie is a classic example of a girl who spent her entire childhood locked in a room without any stimulation or proper nourishment until she was discovered at 13 years old. At that time, researchers tried to provide her with therapy for her physical and cognitive abnormalities. Although she seemed able to learn a limited vocabulary, most scholars claim that Genie never truly learned language: she could not use grammar to put words together in a meaningful way. Although her case suggests the importance of receiving linguistic input during the critical period, it’s unclear whether Genie was disadvantaged from the start – her father claimed that he locked her up because she was cognitively disabled – and there are many other aspects of Genie’s deprived childhood that could have contributed to her inability to learn language at 13.

There are a few characteristics of the developing brain that speak to why we might be better at absorbing language as babies than as adults. For one, a critical period is not unique to language. Other biological processes also have their own critical periods2. Some of these periods have been demonstrated most clearly in animals deprived of specific sensory stimuli. For example, Hubel and Wiesel studied a cat whose eye was sewn closed as a kitten. When they removed the stitches, the cat was still unable to see out of the previously deprived eye. During the deprivation period, the visual cortex became dominated by the normal, unobstructed eye, which hijacked the brain space typically devoted to the second eye.

The cat’s visual cortex demonstrates a crucial feature of the brain: its plasticity. Neuroplasticity refers to the brain’s ability to reorganize itself based on the inputs it receives. Our brains are constantly reorganizing themselves (that is how we learn anything), but infants’ brains are especially plastic3. Developing brains are highly sensitive to incoming information and experiences, allowing them to learn massive amounts of information rapidly.

Perhaps counterintuitively, another explanation for why immature brains are ripe for language learning is that their prefrontal cortex (PFC) – the area of the brain most associated with higher-level and rational thinking – is undeveloped4. A paper by Sharon Thompson-Schill, Michael Ramscar, and Evangelia Chrysikou gives an example of watching a football game to highlight how adults’ and toddlers’ pattern-learning strategies differ. In the example game, you notice that the team passes the ball 75% of the time and runs with it the other 25%. Your task is to predict what the team will do in subsequent plays. As an adult, you’re likely to match probabilities: 75% of the time you’ll guess that the team will pass, and the other 25% you’ll guess that it’ll run. You’ve taken the less frequent event (the run plays) into account. However, since you don’t know when those rare events will occur, the optimal strategy would actually be to always guess that the team will pass. That’s precisely what a toddler would do. Toddlers ignore irregularities and grasp conventions quickly, at least partially thanks to their undeveloped PFCs. Thompson-Schill and colleagues argue that toddlers’ tendency to ignore inconsistencies might be ideal for learning the foundations of language, especially the syntactic patterns that govern our grammar. Toddlers eventually discover and master their language’s irregularities, moving from forms like “drinked” to “drank” as their PFCs develop and help them filter exceptions to rules.
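To see why the toddler’s strategy wins on average, here is a rough simulation of the football example. This is a sketch assuming only the 75/25 split described above; the number of plays and the random seed are arbitrary choices for illustration.

```python
import random

# Rough simulation of the football example: 75% pass plays, 25% run plays.
# "Probability matching" guesses pass 75% of the time; "maximizing" always guesses pass.
random.seed(0)
P_PASS = 0.75
N_PLAYS = 100_000

plays = ["pass" if random.random() < P_PASS else "run" for _ in range(N_PLAYS)]
matching_guesses = ["pass" if random.random() < P_PASS else "run" for _ in range(N_PLAYS)]
maximizing_guesses = ["pass"] * N_PLAYS

matching_acc = sum(g == p for g, p in zip(matching_guesses, plays)) / N_PLAYS
maximizing_acc = sum(g == p for g, p in zip(maximizing_guesses, plays)) / N_PLAYS

print(f"Probability matching (adult-like):  {matching_acc:.3f}")    # roughly 0.625
print(f"Always guess 'pass' (toddler-like): {maximizing_acc:.3f}")  # roughly 0.750
```

Matching the probabilities is right only about 62.5% of the time (0.75 × 0.75 + 0.25 × 0.25), while always guessing the frequent play is right about 75% of the time, which is the toddler’s advantage described above.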

Infants’ and toddlers’ brains are ready and waiting for linguistic input. This input allows their brains to develop new neural pathways in response to the language conventions they’re exposed to. As they get older and continue to use language more (whether they’re listening, speaking, reading, or writing), these pathways continue to strengthen. Talk With Me Baby asserts that “early language exposure is the single strongest predictor of third grade reading proficiency,” and that third grade reading proficiency, in turn, predicts further academic and economic successes. This is because third grade is when most kids transition from learning to read to reading to learn. In this way, linguistic exposure as an infant has cascade effects that last long after infancy. Just as proper nutrition promotes physical growth and is crucial for babies’ future health, proper linguistic nutrition promotes the mental growth necessary for future success.

The 30 Million Word Gap

It’s almost impossible for a baby to grow up without any exposure to language, but many children still grow up in environments that lack sufficient language exposure. In one seminal study, researchers found that the number of words addressed to children differed dramatically across families of different socioeconomic statuses (SES)5. SES is a measure that combines income, occupation, and education to reflect a family’s economic and social position in society. Children from families in the highest SES category heard an average of 2,153 words per hour, while those in the lowest SES group heard only 616 words per hour. From these numbers, the researchers calculated that by 4 years old, the average child from a higher-income family hears a total of about 45 million words, while the average child from a low-income family hears a measly 13 million words. The authors referred to this disparity as the 30 Million Word Gap. The gap may result, at least in part, from the fact that parents who are struggling financially are often unable to devote the same amount of focused time to their children that parents with fewer financial struggles can6. Reduced linguistic input is one consequence of the quality-time deficit that lower-SES kids often experience.
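For a rough sense of where those headline numbers come from, here is a quick back-of-the-envelope sketch. The per-hour rates are the figures reported above; the 14 waking hours per day is an assumption added here purely for illustration, so the totals are approximate.

```python
# Back-of-the-envelope check of the word-gap arithmetic described above.
WORDS_PER_HOUR = {"higher-SES": 2_153, "lower-SES": 616}  # rates reported in the study
WAKING_HOURS_PER_DAY = 14   # assumed value, for illustration only
DAYS_PER_YEAR = 365
YEARS = 4

totals = {
    group: rate * WAKING_HOURS_PER_DAY * DAYS_PER_YEAR * YEARS
    for group, rate in WORDS_PER_HOUR.items()
}

for group, total in totals.items():
    print(f"{group}: about {total / 1e6:.0f} million words by age {YEARS}")

gap = totals["higher-SES"] - totals["lower-SES"]
print(f"gap: about {gap / 1e6:.0f} million words")  # close to the '30 million word gap'
```

Under these assumptions the totals land near 44 million and 13 million words, a gap of roughly 31 million, which is where the “30 Million Word Gap” shorthand comes from.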

If a child from a low-income family enters school at age 4 after hearing 30,000,000 fewer words than his or her classmates have, this child will immediately have an immense disadvantage. Because learning is sequential, in the sense that many concepts build on each other, the child on the disadvantaged side of the word gap will have difficulty learning new information that requires an understanding of language. As a result of missing out on valuable linguistic input as a baby, this student may never catch up.

Talk With Me Baby

The state of Georgia has launched an effort to close the 30 Million Word Gap7, acknowledging that “a lack of early language exposure has lifelong consequences,” like dropping out of high school, incarceration, becoming a teen parent, involvement in violence, unemployment, and poverty8. Their initiative, Talk With Me Baby, is being implemented mainly by spreading awareness. Because the word gap can have future physical health consequences and because almost all babies are seen in hospitals, nurses in particular are helping spread the message that babies are listening, even before they’re born. They’re absorbing what they hear, so they should hear as much language as possible. The website for Talk With Me Baby also advertises an app that parents will soon be able to download with features like topics to talk about, milestones to look for, reminders to talk, and resources.

Raising a child is complicated. It can be hard to know what to feed your child, when to do it, and even how to afford the ideal nutrition. Luckily, providing babies with proper linguistic nutrition is fairly straightforward and accessible to all. What words should you feed your child? As many as you can! When should you feed your child his or her words? Whenever you can! Ideally, babies should hear not only as many different words as possible, but they should also hear as many different sentence structures as possible. Long sentences are the linguistic equivalent of milk: consuming them helps children’s cognitive foundations get strong enough to support all of the lessons and skills that they’ll learn in school. Perhaps best of all, words are free and we can all make them, which means closing the 30 Million Word Gap is within our reach.

References & Further Reading

  1. Penfield, W., & Roberts, L. (1959). Speech and brain-mechanisms. Princeton, N.J: Princeton Univ. Press. [Book]
  2. Sengpiel, F. (2007). The critical period. Current Biology, 17(17), R742-R743. [Paper]
  3. Mundkur, N. (2005). Neuroplasticity in children. Indian Journal of Pediatrics, 72(10), 855-857. [Paper]
  4. Thompson-Schill, S., Ramscar, M., & Chrysikou, E. (2009). Cognition without control. Current Directions in Psychological Science, 18(5), 259-263. [Paper]
  5. Hart, B. & Risley, T. (2003). The early catastrophe: 30 Million word gap by age 3. American Educator, Spring 2003, 4-9. [Paper]
  6. National Journal. (2015). 30-million word gap divides rich and poor kids. [Web Article]
  7. Deruy, E. (2015). Why boosting poor children’s vocabulary is important for public health. Atlantic Magazine. [Web Article]
  8. Talk with Me Baby [Educational Initiative]

3 Things Neuroscience Teaches Us About the Changing “Teenage Brain”


Adolescence is the period between childhood and adulthood. And though it can stretch into our early twenties, we spend many of these years in high school. This stage of life is marked by increased cognitive abilities, social sensitivity, and agency (or increasing independence). These changes make this time particularly perplexing to some adults, as they struggle to make sense of stereotypical adolescent behaviors such as risk taking and increased allegiance to peers.

At the end of the 20th century, it was common to discuss adolescent behavior as being influenced by “raging hormones.” Today, it is becoming increasingly common to discuss adolescent behavior in terms of the “teenage brain.” But what makes the teenage brain different from the child or adult brain? And do these differences have implications for education and learning? This post will discuss the latest research in adolescent brain development and how the current evidence might inform education during the teenage years, outlining three of the most interesting things neuroscience has taught us about the physical changes that take place in the brain during adolescence.

1. The brain continues to change throughout adolescence.

Perhaps the most important consideration to keep in mind regarding the brain during adolescence is that it is continuing to change. There is evidence for this from multiple lines of research, including cellular work on post-mortem human brain tissue1, as well as longitudinal magnetic resonance imaging (MRI) studies of brain structure and function.

What do we mean by “physically change”?
With MRI, we have the ability to see how the living human brain changes from birth to old age by taking different kinds of pictures. One kind of picture we can take is of the structure – or anatomy – of the human brain, and we can use this picture to look specifically at two components of the brain’s structure: grey matter, which is largely made up of brain cell bodies and their connections, and white matter, which is primarily the long connecting fibers that carry signals between brain regions. What gives white matter its color is “myelin”, a fatty substance that wraps around connecting fibers to make communication more efficient.

There have been a few studies now where hundreds of participants had their brains scanned multiple times across development, and we know from these studies that the amount of grey matter is greatest during childhood, but decreases during adolescence before roughly stabilizing in the mid- to late- twenties2. We also know that the amount of white matter increases almost linearly across adolescence3. These are two major changes happening in the structure of our brain during adolescence.

2. The brain doesn’t all change at once.

Structural changes are not occurring at the same time across the whole brain. Actually, areas of the brain that are involved in basic sensory processing or movement develop earlier than areas of the brain involved in more complex processes such as inhibiting inappropriate behavior, planning for the future, and understanding other people. These and other complex processes rely on areas in the prefrontal, temporal and parietal cortices, which are continuing to change in structure across the second decade of life4.

How do these changes happen?
We still do not know the specific cellular mechanisms that underlie developmental changes in measures of grey or white matter. It is often thought that these decreases in grey matter reflect, at least in part, changes in connectivity between brain cells. These changes include decreases in dendritic spine density (essentially a proxy for how interconnected cells are in the grey matter) and other cellular processes involved in synaptic pruning (the process by which connections in the brain are eliminated). Histological work, which involves studying cells under the microscope, has given us a better understanding of the cellular changes occurring in the human brain across the lifespan.

In one specific study, researchers at the Croatian Institute for Brain Research counted the number of dendritic spines in an area of the prefrontal cortex5. They found that the number of spines continued to decrease across the second and third decades of life. So, this finding gives some cellular evidence for the continued structural development of the human brain across adolescence, at least in a section of the prefrontal cortex.

Is this a bad thing?
Not necessarily. The continued reduction in synapses seen in the prefrontal cortex means that the brain is still undergoing changes in organization during adolescence. As humans, we have an excess amount of brain connections when we are children, and almost half of these connections can be lost in adolescence. We know that experience influences what connections are kept and subsequently strengthened. Thus we can think of adolescence as a time of transition rather than a time of loss in certain areas of the brain.

3. The brain is changing in more ways than one.

MRI can also be used to see how blood flows in the brain, which allows researchers to get a sense of how the brain is working. So if MRI alone reveals brain structure, you can think of fMRI (or “functional MRI”) as revealing brain function. Many fMRI studies have also shown changes in brain functionality across adolescence. For example, how we use areas of the brain involved in understanding other people changes between adolescence and adulthood6.

This is especially true for “the social brain”.
There are a number of cognitive processes involved in interacting with and understanding other people, and we can use functional MRI to see what areas of the brain are active when we engage in important social tasks, like understanding the intentions or emotions behind facial expressions or understanding social emotions like guilt or embarrassment. Tasks like these consistently recruit a number of brain regions in the prefrontal and temporal cortices, a network sometimes referred to as the “social brain.”

Although adolescents and adults use the same areas of the brain during a number of social tasks, like understanding intentions and social emotions, these tasks all show a similar decrease in activity across age in the medial prefrontal cortex, a part of the brain often related to social processing. In other words, adolescents seem to use this part of the prefrontal cortex more than adults when doing certain social tasks7.

So what does it all mean?
What is the point in highlighting these biological changes if we cannot connect them to real world behavior? In this post, I discussed how the brain is changing in both its structure and function during adolescence, highlighting in particular the changes involved in areas of the brain used when we attempt to understand the thoughts, intentions and feelings of other people. These changes are relevant because of the developmental tasks that adolescents must accomplish. One of the major developmental tasks of adolescence is to learn how to successfully navigate our highly social world. Having a malleable brain during adolescence is arguably adaptive for this sort of task, as new social skills and higher level cultural rules can be acquired with greater ease. Thinking about how these changes may impact the way students interact with educational environments is also important – considering these environments are often just as social as they are learning-oriented. In the next post, I’ll discuss how the adolescent brain is not just primed to learn from the social environment, but also how it is particularly sensitive to complex social signals.

References & Further Reading

  1. Petanjek, Z., Judaš, M., Šimic, G., Rasin, M. R., Uylings, H. B. M., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 108(32), 13281–13286. [Paper]
  2. Huttenlocher, P. R., & Dabholkar, A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. The Journal of Comparative Neurology, 387(2), 167–178. [Paper]
  3. Mills, K. L., & Tamnes, C. K. (2014). Methods and considerations for longitudinal structural brain imaging analysis across development. Developmental Cognitive Neuroscience, 9, 172–190. [Paper]
  4. Lebel, C., & Beaulieu, C. (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(30), 10937–10947. [Paper]
  5. Tamnes, C. K., Walhovd, K. B., Dale, A. M., Østby, Y., Grydeland, H., Richardson, G., … Fjell, A. M. (2013). Brain development and aging: Overlapping and unique patterns of change. NeuroImage, 68C, 63–74. [Paper]
  6. Blakemore, S.-J., & Mills, K. L. (2014). Is Adolescence a Sensitive Period for Sociocultural Processing? Annual Review of Psychology, 65(1), 187–207. [Paper]
  7. Blakemore, S.-J. (2008). The social brain in adolescence. Nature Reviews. Neuroscience, 9(4), 267–277. [Paper]

The Dangers of Remembering What You Learned


When teachers say we want our students to learn, we might also say we want them to remember; after all, if I’ve learned something, I can remember it later on. Sadly and surprisingly, there’s a curious danger to remembering: remembering can cause you to forget.

Yes, you read that right. The wrong kind of remembering causes forgetting.

Imagine the following mental exercise—a mental exercise that resembles many research studies1:

To start, you study a list of words in four different groups—say, Animals (dog, cat), Instruments (guitar, violin), Foods (pizza, steak), and Furniture (sofa, table). After a while, you recall half of the words in two of the groups. For example, in the Animal group, you recall the word “dog” (but not “cat”), and in the Foods group, you recall the word “pizza” (but not “steak”). And you don’t recall any words in the Instrument or Furniture groups.

When I test you on all these words several hours later, there are three logical categories.

First, there are the two groups of words you didn’t recall at all: Instruments and Furniture. You’re likely to remember—perhaps—50% of those words.

Second, there are the words and groups you did recall: the word “dog” in the Animal group, or “pizza” in the Food group. Because you recalled these words, you’re likelier to remember them, so your score will be higher—say, 75%.

Third, there are words that you didn’t recall (“cat,” “steak”) even though you recalled other words in the Animal and Food groups.

Take a moment to ask yourself: what percentage of words in this 3rd group are you likely to remember?
Perhaps—because you practiced their groups—you’ll remember them at the 75% level. Or perhaps—because you didn’t practice these specific words—you’ll remember them at the 50% level.

It turns out both answers are wrong. You’ll remember even fewer of those words: say, 40%.

Why? Because practicing some of the words in the Animal and Food categories makes it less likely you’ll remember the un-practiced words. In other words, recalling some of the words prompts you to forget the words you didn’t recall.

The wrong kind of remembering caused you to forget.
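To keep the three categories straight, here is the whole thought experiment condensed into a few lines. The percentages are the illustrative figures used above, not results from any particular study.

```python
# The three categories from the thought experiment above, with illustrative recall rates.
recall_rates = {
    "groups you never recalled from (Instruments, Furniture)": 0.50,
    "words you recalled earlier ('dog', 'pizza')": 0.75,
    "unrecalled words in recalled-from groups ('cat', 'steak')": 0.40,  # the RIF effect
}

for category, rate in recall_rates.items():
    print(f"{category}: about {rate:.0%} remembered on the later test")
```

The surprise is in the last line: the un-recalled words from practiced categories end up below even the baseline groups, which is the retrieval-induced forgetting effect discussed next.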

In the neuroscience community, there is an active debate about the mechanisms that cause “retrieval-induced forgetting.”2,3 And while that debate is fascinating, it doesn’t really help teachers answer our constant question: “what should teachers do in the classroom with this scientific information?”

I haven’t read any research that addresses this question directly. (More precisely: I don’t remember having read any research that answers it; perhaps I read it, and forgot the source.) But I think the potential dangers of retrieval-induced forgetting (often abbreviated RIF) should shape our practice in very specific ways—in particular, the way we review.

Here’s an example. In yesterday’s class, my students discussed the five ways that the French and Indian War laid the foundation for the American Revolutionary War. To begin today’s class, naturally, I ask my students what conclusions we reached. One student calls out: “The French and Indian War cost a lot of money, and the British government decided to tax the colonies to pay for it. Those taxes helped spark the revolution.” Exactly so. Another student adds to the list: “George Washington gained essential military training and a cross-colony reputation for bravery.” Because we’ve gone over these two key points from yesterday, I assume my students will be prompted to remember the other three. Confident in this assumption, I move on to today’s new topic…

But there’s a problem here. Yesterday, my students got a list of five key points; today, we began class by reviewing two of them. I hoped—in fact, assumed—that my two-item review would help them remember the other three points. However, if the RIF research holds, then my two-item review will in fact make it less likely that the students will remember the other three items. Because they practiced two of the examples in this group (“ways that one war set the stage for the next”), they are less likely to remember the un-practiced examples in that group.

When I first read this research, and started thinking about my own teaching practice, I realized with increasing alarm how often I review this way. If we studied ten vocabulary words yesterday, I’ll prompt students to recall two or three. If we looked at eight subject-verb agreement rules, I’ll ask them to jot down two and discuss them with a partner. Of course, teachers must help their students review the material they learn, but if the first review is incomplete, we may very well be reducing—not increasing—the long-term likelihood that our students remember all the information.

In my own teaching, the RIF research has led to this guideline: the first two or three times I go over a topic, I make sure to cover all of the material that is a) conceptually related and b) equally important:

  • “Conceptually related”: RIF results from partial review of conceptually related information only; it influences Animal and Food words, not Instrument and Furniture words.1 For this reason, I don’t need to review an entire lesson—just the logically connected pieces of it. When I go over five essentials for a strong topic sentence, I don’t also need to review the highlights of “Young Goodman Brown.” We discussed both topics on the same day, but our discussion of the short story was conceptually distinct from our discussion of effective writing.
  • “Equally important”: when we go over all five ways that the French and Indian War led to the Revolutionary War, I don’t need to go through the detailed specifics; they’re not as important as the main concept. If I think of my lesson plan in an outline, I should cover all (or none) of the points on the same level of that outline.

One final danger to consider: student-directed review might be especially prone to RIF. If students come up with their own list of key terms to remember, for example, their incomplete list might prompt them to forget the examples they didn’t include. As teachers, we need to find mechanisms to ensure that student-generated review covers all equally important information.

Of course, research into RIF continues, and we don’t yet completely understand how and why it happens. For teachers, the key point to keep in mind is this: whenever we prompt our students to review, we must be sure that RIF doesn’t cause them to forget what we want them to remember.

References & Further Reading

  1. Jonker, T. R., Seli, P., MacLeod, C.M. (2012). Less we forget: Retrieval cues and release from retrieval-induced forgetting. Memory & cognition 40(8), 1236-1245. [Paper]
  2. Dobler, I.M. & Bäuml, K.T. (2013). Retrieval-induced forgetting: dynamic effects between retrieval and restudy trials when practice is mixed. Memory & cognition 41(4), 547-557. [Paper]
  3. Mall, J.T. & Morey, C.C. (2013). High working memory capacity predicts less retrieval induced forgetting. PLOS ONE, 8(9), e52806. [Paper]
  • Johansson, M. et al. (2007). When remembering causes forgetting: Electrophysiological correlates of retrieval-induced forgetting. Cerebral Cortex 17(6), 1335-1341. [Paper]

Love/Hate: The Long, Complicated Relationship between Research & Education


Anyone who has ever stood in front of a classroom silently praying that their curriculum is engaging, their students are comfortable, and their jokes don’t skip a generation can tell you: Teaching isn’t easy. It’s some secret blend of intuition, strategy, and deep breaths. Great teachers aren’t measured by how much they know about the brain any more than great artists are measured by how much they know about the reflective properties of light: knowing how to use it trumps knowing how it works.

So why do we think it’s so important to use research in the classroom? Why am I spending all my time hanging around the places where education and neuroscience overlap?

Well, put simply: I think it can help.

And I’m not the first. For generations, teachers have been drawn to learning more about the engine that runs the minds that they’re shaping. And researchers have believed in the power of knowledge to improve the way we teach. We have the instinct that the more we know about how things work, the better we’ll be able to control or optimize them. And for generations this has led to a somewhat rocky relationship between the researchers who can describe a student, and the teachers who can inspire one. Sometimes toes get stepped on, sometimes lines are crossed or miscommunications abound… and sometimes, it works. So I decided to take a look back at a few of the ways that research has influenced education, and what that can teach us about getting this important relationship on the right track.

(350 BC) Aristotle and the meeting of science and education.

Aristotle was the original evidence junkie, and arguably, one of the first people to view education through what can be thought of as an early iteration of a scientific lens. He was a pioneer of carefully evaluating claims through observation and reasoning and — by refusing to settle for assumptions — laid the groundwork for some of the greatest scientific discoveries. He even compared how constellations appear in the sky depending on your distance from the equator, providing physical evidence to corroborate Pythagoras’s claim that the Earth was, in fact, round (sorry, Columbus).

At the same time, he was a dedicated educator, and he applied his love of the measurable to his pedagogical beliefs. He started a school, which was built on his view that nature is best understood through structured evidence-seeking and reason. He was an early advocate for ideas that have since transformed into a slew of modern buzzwords; he believed experiential learning, educational equality, lifelong learning, and public access to education were essential components of an ethical society. His fact-forward approach to inquiry even snuck into his moral teachings through the concept of “phronesis”: a type of practical knowledge that maintains that being a good person requires taking a bit of a motivated, scientific approach to moral decision-making.

Aristotle believed in fundamental ways of knowing that informed both his investigation of nature and his approach to teaching. As a foreshadowing of professional-learning-communities-to-come, he believed that teaching itself ought to be informed by the collected knowledge of those who have taught. Basically, he’s not only one of the first scientists, but also one of the first advocates for the ways that what we know should directly influence what and how we teach.

Admittedly, Aristotle was a far cry from applying neuroscience to the classroom, but his proclivity for fusing analysis, evidence, action, and learning continues to shape the way we think about education today.

(19th-20th century) The empirically-informed standardization of learning.

We’ve all heard critics of the modern educational system summarize its shortcomings as an outdated “factory model of education”. They’re usually referencing things like standardized assessment, IQ testing, age grouping, and the one-size-fits-all approach. Most often, the blame is placed on the late 19th century adoption of the Prussian-Industrial model in the US, claiming it prioritized conformity and efficiency above all else. Of course, it’s not really that simple.

The 19th century was a time of great flux for public education. Competing models were emerging to solve the most pervasive problems of the day: inconsistent content, unclear standards, and a lack of equal access. By the mid-19th century, the inaugural Secretary of the Massachusetts State Board of Education, Horace Mann, made it his mission to empirically evaluate existing domestic and international models and to lobby for the full implementation of the one that best suited the nation’s needs. Enter: the Prussian-Industrial Model.

This model was first crafted by King Frederick William I as a state-mandated program, arguably designed to cultivate an obedient and submissive public. Teachers were stripped of autonomy and held to strict standards uncommon to the “sage” or “mentor” models of the past. Sure, it was rigid, impersonal, and reeked of indoctrination, but it also ticked the boxes that mattered most to Mann. It was cost-effective, scalable, inclusive, and consistent, and it prioritized teacher training: features he argued were necessary for public education to thrive. If he leveraged this powerful system for good, he believed, society as a whole would benefit.

Of course, once this ball started rolling, it seems, Mann’s best intentions couldn’t temper the inertial appeal of standardization or the emergent needs of the new system. As the Department of Education and similar bodies came into existence, the ambition to continue improving our approach and evaluating progress intensified. But the new model catalyzed major increases in student-to-teacher ratios; how do you properly evaluate groups of students that large? Policymakers wanted data. They wanted to know what was working, what wasn’t, and what different students knew. By the early 20th century, research on human development, learning, and memory aligned with the goals of evaluators. The complicated entangling of research and education was well on its way as psychologists and educational strategists were recruited to design blanket assessments for growing classrooms.

One of these tests, the Binet-Simon test, was created by French psychologist Alfred Binet to determine the mental capacity of students so that those with severe difficulties could be properly accommodated. That’s it. This whole mess of an IQ debate started with the earnest goal of capturing a snapshot of a particular child’s abilities and responding accordingly. Binet was clear: intelligence is diverse, complicated, and unlikely to remain static over the lifespan. Unfortunately, not everyone was listening.

Henry Herbert Goddard, a US psychologist, caught wind of the Binet-Simon test and translated it into English. He went on to promote its use as an intelligence assessment tool, going so far as to encourage the sterilization of those deemed “feeble-minded” by its measure. Stanford psychologist Lewis Terman, who adapted the test to create the Stanford-Binet version (now in its fifth edition), also believed that intelligence was an inherited and fixed trait. The result was a national effort to rank students, citizens, and immigrants against each other, with sometimes dire consequences.

By 1936, standardized testing had become such a popular way to quickly and consistently assess large groups of people that the first automatic test scanner was developed to make doing so even easier. Basically, in less than one hundred years, the goal of systematizing public education led to a series of (sometimes) reasonable next steps that eventually landed us with Scantrons.

(Present Day) Brain-Based Learning.

Selfies and Netflix consumption aside, it’s safe to assume that people haven’t fundamentally changed much since the days of Aristotle. We’re still susceptible to the same biases, assumptions, and miscommunications that we were in the 19th century. Of course, we have the added benefit of learning from everything that’s come before us. So the question is, how do we make sure that research is used wisely?

Well, for starters, we have to talk to each other. If someone had asked Binet before implementing his test, he would have likely clarified how to use it reasonably. If someone had asked experienced teachers before they assumed a single standard for quality, they would have likely clarified the value of adapting to your students. The problem is that information doesn’t exist in a vacuum and expert does not mean right. We’re all constantly interpreting research to match our own goals, experiences, and understanding of the world. We are much better at hearing what we want to hear than we are at listening to each other.

Which leads me to a question that a teacher asked me in one of my workshops: Exactly what type of learning is not brain-based?

Her point was that the premise is flawed. The way this research is being shared is often flawed. If we present neuroscientific research as a solution, or as information that lays the foundation for a “type” of learning, we miss the point — and the opportunity. Great teachers have navigated the inner workings of the brain for centuries without ever needing to know what was going on inside. It is unlikely that, now that we have MRI machines and EEG, we will all of a sudden better understand how to teach a brain to learn. Like any good, long-term relationship, it all comes down to goals, expectations, and respect. Researchers and educators have to consider the lens and goal of each other, and adjust their expectations accordingly.

And when it comes to neuroscience, there seems to be a bit of a communication breakdown. Some people are adamant that there’s no place for neuroscience in education; it’s too premature or the questions are just too different. Others believe that it’s the answer we’ve all been waiting for; the pixie dust that’s going to fix whatever we believe to be broken. Still others see a business opportunity; if we can package up the appeal of neuroscientific answers and cater them to educators’ needs in bite size chunks, we can make some serious dough and no one will be the wiser.

I find that reality is usually somewhere in-between.

Neuroscience is unlikely to create great teachers, great tests, great classrooms, or great curricula – that’s not its goal and that’s not something I expect anyone to bottle up any time soon – but it can inform the way we think about students and the nested communities that they’re a part of. It can teach us more about ourselves, how we interact with information, and how we interact with each other. It can be one of many tools we use to get this right. And frankly, that’s all we should ask of it.

When thinking about history, we first have to consider whether the actions that look foolish in retrospect were actually reasonable reactions to the problems of the day. Sometimes those reactions were successful, and sometimes they were not. By putting ourselves in their shoes, we can empathize with their mistakes, and more easily imagine ourselves making them in similar circumstances. Education and science are similar in that both can be a reflection of the society that supports them. Every solution has flaws, and often, the solution to one problem ends up causing a whole slew of new ones. The power and perceived credibility of research were welcomed by a system that felt haphazard and disorganized. The problem is that the same research, viewed through different lenses, can have drastically different consequences. In the case of standardization, history reminds us to embrace the insights research may offer while also being wary of the agendas that may be shaping its use.

Similarly, neuroscience is valuable to education, so long as we understand its limits and the biases of those who are disseminating it. If we adopt it blindly, then we run the risk of misallocating resources or creating more problems than we solve. Recent studies suggest that the majority of educators continue to believe neuromyths, and the problem is, they didn’t come up with those themselves. Someone else told them that students are left-brained or right-brained. Someone else told them that boys and girls are born with totally different brains, that we only use 10% of our brains, or that Mozart will make you smarter. The list goes on and on, but the point is, as history has shown us, society (and in this case mass media and capitalism) will often shape the message. It’s up to us to find ways to make sure that the darker side of history doesn’t repeat itself.

It’s a pursuit that we’ll never really finish. Research is always in progress and education is always looking for ways to adapt to the needs of the day. The goal is to work towards the best ways to keep up, so that we can collectively take the next chapter of history into our own hands.

References + Further Reading:


  • Aristotle (384–322 B.C.) – Education for a Common End. [Blog]
  • Back, S. (2002). The Aristotelian challenge to teacher education. History of Intellectual Culture, 2(1), 1-5. [Paper]
  • Curren, R.R. (2000) Aristotle on the necessity of public education. Rowman and Littlefield Publishers. [Book]
  • Korthagen, F.A.J., in cooperation with Kessels, P.A.M., Koster, B., Lagerwerf, B., & Wubbels, T. (2001). Linking practice and theory: The pedagogy of realistic teacher education. Mahwah, N.J.: Lawrence Erlbaum Associates. [Book]
  • Popova, M. (n.d.) The art of practical wisdom: The psychology of how we use frames, categories, and storytelling to make sense of the world. [Blog]
  • Wanjek, C. (2011). Top 5 misconceptions about Columbus. LiveScience [Blog]

19th-20th Century Learning

  • Binet, A. (1905). New methods for the diagnosis of the intellectual level of subnormals. L’Année Psychologique, 12, 191-244. [Paper]
  • EdX. (2015). Saving schools: History and politics of U.S. Education. Harvard University [MOOC]
  • Fletcher, D. (2009). Brief history: Standardized testing. TIME [Article]
  • Meshchaninov, Y. (2012). The Prussian-Industrial history of public schooling. The New American Academy. [Report]
  • Noer, M., Khan, S. (2012). The history of education. Forbes. [Video]
  • Watters, A. (2015). The invented history of ‘The Factory Model of Education’. Hack Education [Blog]
  • Zenderland, L. (2001). Measuring minds: Henry Herbert Goddard and the origins of American intelligence testing. Cambridge University Press. [Book]

Brain-Based Learning

  • BrainFacts. (n.d.). Neuromyths. [Resource]
  • Howard-Jones, P.A. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15(12), 1-8. [Paper]
  • Sukel, K. (2015). When the myth is the message: Neuromyths and education. Dana Foundation. [Briefing]
  • Sylvan, L.J. & Christodoulou, J.A. (2010). Understanding the role of neuroscience in brain based products: A guide for educators and consumers. Mind, Brain, and Education, 4(1), 1-7. [Paper]
  • Tardif, E., Doudin, P., Meylan, N. (2015). Neuromyths among teachers and student teachers. Mind, Brain, and Education, 9(1), 50-59. [Paper]
  • Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. [Paper]

Building Resilience in Children and Teens: Giving Kids Roots and Wings

Resilience—the ability to recover from a setback—is one of the most important traits and mindsets to instill in children so that they may thrive in adulthood. This is the theme of Building Resilience in Children and Teens: Giving Kids Roots and Wings, Third Edition, by Dr. Kenneth R. Ginsburg. Ginsburg is a pediatrician at the University of Pennsylvania Perelman School of Medicine, a counselor and researcher in child development, and a father of two adolescent girls. The “seven crucial Cs” – competence, confidence, connections, character, contributions, coping, and control – comprise the skills parents should foster in their children to promote resilience. Although Ginsburg humbly states that much of the book is “commonsense parenting,” this guide, aligned with recommendations from the American Academy of Pediatrics, is helpful to all parents and youth services providers because of the practical tips and tricks he provides for reflecting on and improving one’s parenting practices.

Children and adolescents experience stress from parents, friends, school, demanding extracurricular activities, and the media. While stress had an evolutionarily productive role (i.e., spurring us to escape predators), and while it can still be harnessed to increase productivity today, chronic stress may lead to poor health and risky decision-making. Given that many of the behaviors parents hope their children will avoid arise as stress-coping mechanisms, it is important to address children’s stress directly. Ginsburg offers numerous strategies to help children and adolescents (and their caregivers) manage stress and build resilience. He suggests physical exercise, meditation and reflection, proper nutrition and sleep, engaging in creative activities, volunteering in the community, having multiple friend groups and older mentors, and learning to ask for help. He discusses some of his clinical techniques for redirecting stress-related behaviors, such as making a decision tree. More than any of these tactics, the most critical parenting practice for building resilience and managing stress is ensuring that children know they are loved unconditionally, that their parents will always be a source of stability, and that home can be a safe haven.

Another crucial parenting practice is setting high expectations for children. Children will fulfill the expectations set for them, whether those expectations are low or high. One of the greatest challenges of parenting is knowing how much to protect a child. Loosening the protective reins to allow children to work on their emerging abilities and build on their strengths gives them an opportunity to gain competence and confidence. When children or adolescents meet expectations, praise should be realistic and based on effort. When they fail to meet reasonable expectations that parents have set, parents need to avoid lecturing. Criticism should be focused, and punishments should be clearly related to the offense committed. Parents should not equate discipline with punishment; rather, they should think of discipline as a way of teaching and scaffolding behavior. To understand why a child has not met an expectation, and to connect more generally, parents need to learn to listen. Doing so means creating opportunities for discussion, listening intently without interruption, and being non-judgmental.

Parents need to model the behaviors that they hope to cultivate in their offspring. The actions that children observe will impact their behavior much more than the messages they are told. As such, parents should embody the values they hope to pass on, such as giving to charity, avoiding prejudice, delaying gratification, communicating emotions effectively, and devoting oneself to important relationships. Ginsburg argues that the aim of parenting is to raise children who will grow into people who will be successful at ages 35, 45, 55 and beyond. At each of these ages parents need to care for themselves and model a full life for their children by engaging in their own interests and maintaining their own social relations outside of their children.

These parenting practices encompass an authoritative parenting style in which parents set clear expectations, offer an overabundance of love, and urge their children to develop their own independence. Authoritative parents offer their children lots of time, attention, and opportunities for emotional vulnerability, but they do not spoil their children by indulging each material desire.

While nearly the entire book is relevant to any parent, Ginsburg offers a few specialized tips for populations that face unique challenges, such as military families or adolescents with depression. Beyond the recommendations in this book, Ginsburg also refers his readers to online resources with a wealth of information about promoting grit and resilience and reducing stress. Ultimately, he suggests that first and foremost a parent should trust her own instinct about what is best for her child, as this is the most important ingredient for “giving kids roots and wings.”


Ginsburg, K.R. & Jablow, M.M. (2015). Building Resilience in Children and Teens: Giving Kids Roots and Wings (3rd ed.). Elk Grove Village, IL: American Academy of Pediatrics.

Age of Opportunity: Lessons from the New Science of Adolescence by Laurence Steinberg

Laurence Steinberg, professor of psychology at Temple University, provides a compelling call to action grounded in psychological and neuroscientific research in Age of Opportunity: Lessons from the New Science of Adolescence. Adolescence (roughly defined as ages 10-25) now lasts longer than it did in previous generations. Comparisons of today’s U.S. adolescents to adolescents of previous generations suggest that they are doing no better on critical social, health, and societal measures, and comparisons to peer nations suggest that U.S. adolescents are doing worse. Thus, Steinberg urges parents, educators, policy makers, and other actors to rethink how they raise and interact with adolescents. Recent research on the adolescent brain suggests that adolescence is a unique time for developing the skills needed to flourish, but this opportunity must be balanced against adolescents’ proclivity for risk-taking and poor self-control. Incisive and comprehensible, Age of Opportunity is a worthwhile read for educators and parents of adolescents as well as anyone interested in understanding the causes and implications of shifting demographic trends among young people.

Steinberg argues that society needs to invest in the period of adolescence because the brain will never again be as plastic. Research from the last fifteen years illuminates the unique features of the adolescent brain. Neural connections among various regions of the brain, which mature at different rates, are reorganized during adolescence, and the more we use particular skills, the better connected the regions of the brain that facilitate those skills become. The brain systems that undergo the greatest change during adolescence are those that control reward-seeking, relationships, and regulatory behaviors. Memory is also heightened during adolescence. The adolescent brain continues to develop toward an adult-like form well into the twenties, which parallels societal changes in the protraction of adolescence into the twenties.

In the mid-1800s, adolescence—bookended by menarche (first period) and marriage—lasted about five years. In 2010, it was about fifteen years, and Steinberg suggests that by 2020 it may be as long as twenty years. Obesity, low birth weight, and exposure to light, stress, and certain endocrine-disrupting chemicals have all hastened the arrival of adolescence. Steinberg presents evidence that maturing too early can increase the risk of problems like teenage pregnancy, contraction of STDs, psychological disorders, school disengagement, and even cancer. However, he argues that, contrary to the media’s messages about lazy and self-indulgent 20-somethings who are unwilling to commit to a career or marriage, extending adolescence on the older end can actually improve social and cognitive development if the time is used productively to experience novelty.

One downside of prolonged adolescence is that it expands the time in which people are prone to take unreasonable risks. Connections between the brain’s limbic system, which plays a role in emotion regulation, and the prefrontal cortex, which is responsible for inhibition, are slow to develop. Also, adults make decisions about riskiness by relying on parts of the brain that control the gut (i.e., they have a “gut reaction” to risky events), whereas adolescents rely more on areas associated with deliberative decision-making. Adolescents’ brains also respond more strongly to the pleasure of rewards than adults’ brains do. Together, this helps explain why, across the globe, we see a pattern of adolescents taking more risks, seeking rewards, being less deterred by losses, behaving impulsively, and acting more violently than people at any other developmental stage. This developmental change was once evolutionarily adaptive for tasks such as finding mates outside of one’s family. Nowadays, to reduce adverse adolescent behaviors, Steinberg argues that we ought to create environments with more adult supervision to help adolescents regulate their behavior so that they are not harmed by their own risk taking.

In particular, adolescents from economically disadvantaged backgrounds need these supportive structures. Low-SES adolescents typically have less “psychological and neurobiological capital.” Steinberg defines psychological capital as noncognitive skills (e.g., self-regulation) important for success and neurobiological capital as advantages procured from a protracted period of brain plasticity. Self-control, a skill critical for success, can be developed with training in mindfulness, consistent aerobic exercise, and interventions aimed at boosting working memory.

Parents can support productive adolescent development by adopting an authoritative style in which they show their child equal parts warmth (e.g., tender touches, emotional understanding), firmness (e.g., clear expectations, fair punishments), and support (e.g., praising effort). Educators can support authoritative parenting, make school more challenging and academically engaging, and teach self-regulation. Policy makers should continue their work to reconceptualize adolescent health education, driving restrictions, and criminal punishment. The last several decades have brought legislation geared towards helping people survive adolescence. Taking into account new research about the adolescent brain and the changing cultural construction of adolescence can usher in new policies and practices geared towards helping students thrive.

Steinberg, L. (2014). Age of opportunity: Lessons from the new science of adolescence. Houghton Mifflin Harcourt.

Neurobiology and the Development of Human Morality: Evolution, Culture, and Wisdom by Darcia Narvaez, PhD

In her 2014 book, Neurobiology and the Development of Human Morality: Evolution, Culture, and Wisdom, Darcia Narvaez aims to increase virtuous morality, empathy, and cooperativeness among adults. Anyone interested in understanding the evolutionary, biological, and social bases of morality or seeking to improve ethical behavior can learn a great deal from this book.

Narvaez argues that our sense of morality—or how we function as social beings sharing with one another life’s victories and challenges—is shaped by the integration of our physical, mental, and social experiences as well as our evolutionary history. Modern times have drawn us away from some of our intuitive wisdom about how to create communal societies and have led to a shrinkage of our moral and emotional capacities. This trend can be reversed, she argues. Both our culture and our interpretation of experiences can be altered to increase empathic concern and communal orientations, which will help create societies in which all life (humans and other natural creatures) flourishes.

Narvaez begins by reporting on the decline in social interaction, health, and ethical decision-making in the U.S. She believes that humans are dynamic, but early exposure to suboptimal learning environments affects our physiology, brain development, and epigenetics. Changes at these levels affect a person’s moral reasoning and ability to act consistently in a virtuous way. Citing Darwin, Narvaez argues that we evolved to be cooperative, connected, moral beings. Only recently have we abandoned the wisdom our small-band hunter-gatherer (SBHG) forebears had about acting communally. In fact, when people (and even other animal species) are raised in supportive conditions, they tend to cooperate with one another. There are cultural differences in the emphasis placed on competition and cooperation, but cooperation is generally more adaptive.

In our early years, Narvaez asserts, we develop an “empathic core” that impacts both our understanding of ourselves as moral beings and our socio-emotional imaginative abilities. Only with responsive parenting do these systems develop properly such that we have an emotional and cognitive understanding of social dynamics. When we are raised by inattentive parents or in a dangerous environment, our social capacities may underdevelop, and our stress response may become hyperactive. For example, we all have a “safety ethic” that helps us navigate relational stress, but among individuals who experienced early social trauma, the safety ethic may lead people to make decisions based on their own preservation rather than on maximizing group success.

The influence of early social experiences is observable not only in behavior but also in people’s brains and physiology. Narvaez states that the right hemisphere, more than the left, is associated with emotional and moral processes like affective empathy, interpretation of social relations, emotional modulation, and stress regulation. The frontal lobe as a whole, and especially the orbital frontal cortex, which is the primary projection of the emotional limbic system, is also critical, aiding with processing affect, making moral decisions, and imagining moral paths. When these systems are well-formed, they keep us regulated and integrate cognition with the feelings that guide our morality. When these structures are underdeveloped because of early stress, or damaged because of injury, people may display a lack of compassion, love, and respect as well as signs of sociopathy or other forms of maladaptive social coping.

Narvaez reviews the traditional moral wisdom of Ancient Greek, Abrahamic, Buddhist, and Native American traditions as well as the “primal wisdom” of SBHG societies. Primal wisdom can teach us to view the self, and not just society, as communal, expansive, and integrative. We are each equal partners with the rest of the world around us. A return to this view of oneness and reciprocity might help arrest some of the moral decline in the western world that framed Narvaez’s investigation of modern human morality. A more holistic moral orientation would also help us raise wiser children.

Narvaez observes that across religions and cultures the most important virtues are humility, love and authenticity. However, even if an individual is not born into a society or culture that facilitates development of these qualities, that person can (and should) still change herself to have a fuller moral imagination that seeks to facilitate communal thriving and a reverence for nature.


Narváez, D. (2014). Neurobiology and the Development of Human Morality: Evolution, Culture, and Wisdom. New York: W.W. Norton & Company.

The Marshmallow Test by Walter Mischel, PhD

“I think, therefore I can change what I am.” Walter Mischel, a Columbia University psychology professor renowned for his research on self-control, concludes his 2014 book, The Marshmallow Test: Mastering Self-Control, with this modification to Descartes’ famous proposition. Mischel, creator of the “marshmallow test,” argues that self-control and the ability to delay gratification are critical for long-term health and for social and professional success. These skills are detectable at an early age, responsive to training, and able to help us shape who we are. His admission of his personal self-control shortcomings (e.g., at one point he smoked more than three packs of cigarettes a day while aware of the adverse health effects) and the strategies he used to exercise self-control illuminate his presentation of the field of self-control research.

The marshmallow test (formally known as “the preschool self-imposed delay of immediate gratification for the sake of delayed but more valued rewards paradigm”) exists in many iterations, but the basic set-up begins by having a researcher ask a preschool child (age 3 or 4) to select a tasty treat. The researcher leads the child into a room with a one-way mirror in which there are no toys or colorful distractors; there is only a chair and a desk with the tasty treat and a bell atop it. The researcher explains to the child that he can ring the bell at any time to bring the researcher back into the room so that he can eat the treat, or, if he waits until the researcher returns, he can have two of the tasty treats. The difficulty of the task arises from the tension between our “hot system,” which acts quickly and reflexively, and our “cool system,” which acts slowly and reflectively.

Mischel and his colleagues found, across several cultural settings, individual differences in children’s likelihood of delaying. Decades later, brain imaging of those who had delayed immediate gratification as children, compared to those who had not, revealed greater activity for the delayers in the brain’s prefrontal cortex—an area associated with impulse control. Mischel is careful to frame these results by noting that categorizing people as high- or low-delayers, as though self-control were a stable and universal quality, is inaccurate. First, self-control is context-specific. For example, politicians (e.g., Bill Clinton) famously exert extreme self-control to be disciplined decision makers in their professional lives, and yet show an enormous lack of self-control in their private lives. Second, Mischel emphasizes that self-control abilities are changeable. Both nature and nurture play a role in determining self-control ability.

Several specific strategies can promote self-control. Mischel and his colleagues found that children were more or less likely to eat the marshmallow depending on how and what they thought about during the waiting period. For example, children encouraged to think about how delicious the marshmallow would taste waited a shorter time than children who were encouraged to think about the marshmallow abstractly or to imagine it as something else, like a cloud. By about age 5 or 6, children realize that obscuring the reward from view may help them delay. One trick that helped Mischel quit smoking was to associate cigarettes with the prospect of developing cancer and a haunting encounter he had had with a man about to undergo radiation treatment.

Mischel encourages the use of “if-then” plans—plans in which people recognize that if they are confronted by a trigger of the behavior they are trying to control, they will engage in a specific, more constructive behavior instead. To help people self-regulate when recalling emotionally charged events (like the end of a romantic relationship), Mischel says that if they recall the situation from an objective, fly-on-the-wall perspective, rather than recalling themselves as an actor, they are likely to be more level-headed. Mischel also reports on research suggesting that individuals who view their current selves as closely related to their future selves save more for retirement.

To promote self-control in children, parents should try to minimize the stress their kids experience, teach them that choices have consequences, encourage autonomy rather than controlling decision-making processes, and (perhaps most critically) model the type of self-control they would like their kids to exert. Executive function—the cognitive skill that allows us to exert self-control over our thoughts, actions, and emotions—is critical for students’ success. Mischel argues that there is no ambiguity about the need to promote executive function skills in school. He offers KIPP charter schools, which emphasize character development and college-going, as a model for how schools can help students (including economically disadvantaged students) learn these skills. With practice and the techniques that Mischel describes, we can resist the marshmallow and work towards becoming better versions of ourselves.


Mischel, W. (2014). The Marshmallow Test: Mastering Self-Control. New York: Little, Brown and Company.




   May 4, 2015
Contact: Kristin Dunay, (781) 449-4010 x104


WHAT: This week, a distinguished group of neuroscientists, psychologists, and educators will explore the cognitive skills students will need to succeed in today’s global, diverse world and the ways schools need to reform to meet those needs before 1,300 educators at the Learning & the Brain® Conference in New York, NY. In a rapidly changing world, cognitive skills such as global-cultural competence, critical and scientific thinking, and world collaborations are required more than ever for career success. This conference will focus on how the learning sciences (including cognitive, social, and cultural neuroscience), along with new global school models, can provide ways to promote “world-class” skills and schools to improve academic performance. Discover the latest in how education can be changed to meet the needs of 21st century students.
SPONSORS AND FACULTY: The program is co-sponsored by several organizations, including the Neuroscience and Education Program at Teachers College, Columbia University; the Mind, Brain & Education Program at the Harvard Graduate School of Education; the Comer School Development Program at the Yale University School of Medicine; the Dana Alliance for Brain Initiatives; the Learning & the Brain Foundation; and both national associations of elementary and secondary school principals. It is produced by Public Information Resources, Inc.

Steven Pinker, PhD, Harvard College Professor and Johnstone Family Professor in the Department of Psychology at Harvard University, is one of the featured speakers at the conference. Dr. Pinker is an award-winning researcher on language and cognition and has been recognized as one of the world’s top global thinkers. He is a prolific author whose books include The Stuff of Thought: Language as a Window into Human Nature (2007), The Blank Slate: The Modern Denial of Human Nature (2002), Words and Rules: The Ingredients of Language (1999), How the Mind Works (1997), and The Language Instinct (1994). Dr. Pinker will address the conference on the topic of “Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century,” which will cover some of his work from his most recent book of the same title.

In addition to Dr. Pinker, the program features some of the nation’s other leading experts on cognitive and global learning, including:

▪   David N. Perkins, PhD, Principal Investigator, Founding Member, Harvard Project Zero; Carl H. Pforzheimer, Jr., Research Professor of Teaching and Learning, Harvard Graduate School of Education; Author, Future Wise: Educating Our Children for a Changing World (2014) and Making Learning Whole: How Seven Principles of Teaching Can Transform Education (2009)

▪   Heidi Hayes Jacobs, EdD, Creator, Curriculum21; Founder and President, Curriculum Designers, Inc.; Adjunct Associate Professor, Department of Curriculum and Teaching, Teachers College, Columbia University; Author, Curriculum 21: Essential Education for a Changing World (updated 2014), Mastering Digital Literacy (2014), Mastering Global Literacy (2013) and Leading the New Literacies (2013)

▪   Pasi Sahlberg, PhD, Visiting Professor, Harvard Graduate School of Education; Adjunct Faculty of Behavioral Science, University of Helsinki; Former Director General, Ministry of Education and Culture in Helsinki, Finland; Former Senior Education Specialist, World Bank; Author, “Global Educational Reform Movement and its Impact on Schooling” (2014, The Handbook of Global Policy-making in Education) and Finnish Lessons: What Can the World Learn from Educational Change in Finland? (2011)

▪   Yong Zhao, PhD, Presidential Chair; Associate Dean for Global Education; Director, Center for Advanced Technology in Education, College of Education, University of Oregon; Author, Who’s Afraid of the Big Bad Dragon?: Why China Has the Best (and Worst) Education System in the World (2014), World-Class Learners (2012) and Catching Up or Leading the Way (2009)

▪   Mary Helen Immordino-Yang, EdD, Associate Professor of Education, Rossier School of Education; Associate Professor of Psychology, Brain and Creativity Institute, University of Southern California; Co-Author, “Modularity and the Cultural Mind: Contributions of Cultural Neuroscience to Cognitive Theory” (2013, Perspectives on Psychological Science)


WHEN: Thursday, May 7-Saturday, May 9. The conference begins at 12:45 PM. General registration is $609. Contact Kristin Dunay at (781) 449-4010 x104 for media passes.
WHERE: Sheraton New York Times Square Hotel, New York, NY
Learning & the Brain® is a series of educational conferences that brings the latest research in neuroscience and psychology, and its potential applications to education, to the wider educational community. Since its inception in 1999, more than 40,000 people in Boston, San Francisco, Washington, D.C., New York, and Chicago have attended this series.

For more information about the conference, visit