Ashle Bailey-Gilreath

Story Time in a Classroom

Science constantly challenges our intuitive understanding of the world. Even as an adult, I am regularly confronted with new scientific advancements and discoveries that don’t always line up with my preconceived notions. These ideas, be they from physics or biology, can be considered counterintuitive in that they often present themselves in ways that run counter to our intuitive notions.1

One of the most challenging and powerful of these concepts, the theory of evolution by natural selection, also happens to be one of the most rewarding; its ability to explain the complexity of life on earth, and even (to a certain extent) human nature, is unprecedented. Unfortunately, it is also one of the most controversial, especially in the United States. Reports suggest that only 60% of Americans believe in evolution, and even some of those who claim to believe in it don’t seem to fully grasp its implications.2

Why is the concept of evolution so hard to understand and accept?

Recent research led by Deborah Kelemen3,4 and Will Gervais5 is helping to explain this. Previous research has shown that belief in evolution can be predicted by many demographic and cultural factors, such as religious ideology6, political affiliation7, and even what country you live in8. However, research within the fields of psychology and the cognitive science of religion is beginning to uncover the cognitive mechanisms that underlie this phenomenon. This new research also hints at an important strategy: we should begin teaching children how to grasp concepts like evolution while they are young, rather than waiting until they are teenagers.

Regardless of their religious beliefs, adults, and especially children, are inclined to see design and purpose everywhere9,10. This tendency may be one of the main reasons why individuals who favor intelligent design or creationism are reluctant to incorporate evidence for evolution into their worldview. Kelemen has documented this way of thinking, termed “promiscuous teleology”, in children as young as preschool age, though it is an inclination we all share throughout the lifespan. In previous research, she found that when children were asked what the purpose of a sharp rock was, they responded with purposeful statements like “Rocks are jagged so animals can scratch themselves”11. By elementary school (ages 6-10), kids begin to develop their own “folk biology” theories about the world around them (that is, ways of classifying and reasoning about the organic world), giving explanations for biological facts in terms of intention and design.

This can be seen in children’s design-driven descriptions of the purpose of a giraffe’s long neck – so they can reach the leaves at the top of the trees. This suggests that believing in creationism may be a very natural tendency, and that introducing evolutionary frameworks in childhood may help lay the groundwork for balancing promiscuous teleology with analytical thinking.

To see whether young children could understand the mechanism of natural selection before the alternative intentional-design ideas had fully set in, Dr. Kelemen and colleagues presented 5- to 8-year-olds with a 10-page picture book that illustrated an example of natural selection with fictional animals (the “pilosas”). In the book, the pilosas are described as insect-eating mammals, some with thick trunks and some with thin ones. The children are then told about a sudden shift in climate that drives all of the insects into narrow underground tunnels. Because of this, the thin-trunked pilosas were the only ones able to reach the insects, causing those with thick trunks to die off. As a result, the next generation of pilosas all had thin trunks.

Before they heard this story, the children were asked to explain why a different group of fictional animals had a particular trait. Most of them, consistent with previous research, gave explanations based on intentional design. However, after they heard the “pilosas” story, the answers they gave were very different. They began to understand the basic tenets of the theory of evolution by natural selection. Even three months later, their understanding and analytical explanations persisted.

While Dr. Kelemen’s research sheds light on our natural tendencies to think of evolution as a counterintuitive concept, there are still questions as to how differences between individuals (such as religiosity, political orientation, or other demographic factors) produce different beliefs about evolution, and how these individual differences interact with culture and environment.

New research by Will Gervais has found an association between cognitive style and beliefs about evolution. Cognitive style refers to two distinct mental systems that everyone uses for processing information: one system provides quick and effortless intuitive responses, whereas the other relies on more effortful and analytical processing.

In an experiment with hundreds of Kentucky undergraduates, Gervais presented participants with a common task that measures the extent to which people engage in immediate, intuitive judgments or in more explicit, analytical deliberations (which can sometimes override the initial intuitive response). He found a significant relationship between the degree to which individuals engaged in more analytical styles of thinking and their endorsement of evolution. These results remained significant even after controlling for religious beliefs and political conservatism.

Gervais’ research presents three possibilities: (1) the more an individual engages in reflective, analytical thinking, the more likely it is that they will essentially ‘override’ their natural intuitive responses when presented with evidence, thus making concepts like evolution easier to grasp; (2) some individuals may naturally have stronger intuitive responses than others, which, though beneficial in some situations, may make it particularly challenging to successfully override these teleological thoughts; and (3) an individual’s cognitive style (analytical or intuitive) may be affected by cultural input. Within this third possibility, for individuals who grow up in an environment where intelligent design and creationism are more widely accepted, overriding these natural intuitions isn’t just about engaging in more analytical, reflective thinking; it also involves overriding the norms of one’s community and upbringing.

This research helps to explain why counterintuitive concepts like evolution aren’t just controversial for social or scientific reasons, but also for cognitive ones. It also helps us understand the most recent Gallup poll results, which found that nearly half of the US population rejects evolution, with belief in creationism remaining stable for the past 30 years12*.

There seems to be a constant struggle over teaching evolution in U.S. schools13, which makes it even harder for educators in states with anti-evolution policies to take action. However, the research above suggests that educators and parents should start to introduce these ideas to children when they are young, rather than waiting until high school, and organizations like the National Center for Science Education are working to support communities in this endeavor.

Deborah Kelemen has shown that children as young as 5 can grasp these concepts (and retain the information); they just need to be taught in innovative ways, such as through storytelling. Over the past few years some excellent evolutionary children’s books have come onto the market, such as Great Adaptations, Grandmother Fish, and Our Family Tree, to name a few. These can be excellent tools for teaching these concepts, second only to applying some imagination and having children create their own species and animals like Dr. Kelemen’s “pilosas”. These practices should be written into the curriculum for each grade, allowing the concepts to be reinforced each year.

Counterintuitive concepts like evolution can be challenging to grasp for anyone. By taking a deeper look at the underlying cognitive reasons for this, we can improve our future approaches to science education and policy, and work towards better understanding how our social and cultural environments affect our minds — and more importantly, our children’s minds.

*It is important to note that science deals with evidence and makes no claims on the existence of God, and while many people believe evolution to be consistent with their religious beliefs12, it is still essential for public schools to focus on and implement only those theories and concepts that are supported by evidence and analytical thinking structures. Personal beliefs such as religion can then be handled and discussed outside of the classroom.

References & Further Reading

  1. Champagne, A. B., Gunstone, R. F., & Klopfer, L. E. (1985). Instructional consequences of students’ knowledge about physical phenomena. In L. H. T. West & A. L. Pines (Eds.), Cognitive structure and conceptual change (pp. 61-90). New York: Academic Press. [Book]
  2. Pew Research Center. (2013). Public Views on Evolution. [Survey Report]
  3. Kelemen, D., et al. (2014). Young Children Can Be Taught Basic Natural Selection Using a Picture Storybook Intervention. Psychological Science, 1-10. [Paper]
  4. Kelemen, D. (2012). Teleological minds: How natural intuitions about agency and purpose influence learning about evolution. In K. S. Rosengren, Brem, Evans & Sinatra (Eds.), Evolution challenges: Integrating research and practice in teaching and learning about evolution. Oxford: Oxford University Press. [Book Chapter]
  5. Gervais, W. (2015). Override the controversy: Analytic thinking predicts endorsement of evolution. Cognition, 142, 312-321. [Paper]
  6. Pew Research Center. (2009). Religious Differences on the Question of Evolution. [Survey Report]
  7. Pew Research Center. (2013). Public Views on Evolution. [Survey Report]
  8. Miller, J.D., Scott, E.C., & Okamoto, S., (2006) Public Acceptance of Evolution, Science, 313 (5788), 765-766. [Paper]
  9. Kelemen, D. & Rosset, E. (2009). The human function compunction: Teleological explanation in adults. Cognition, 111(1), 138–143. [Paper]
  10. Kelemen, D. (2004). Are children ‘intuitive theists’? Reasoning about purpose and design in nature. Psychological Science, 15(5), 295–301. [Paper]
  11. Kelemen, D. (1999). Why are rocks pointy? children’s preference for teleological explanations of the natural world. Developmental Psychology, 35(6), 1440-1452. [Paper]
  12. Gallup. (2014). Evolution, Creationism, Intelligent Design. [Report]
  13. Kopplin, Z. (2014). Bill Nye the Science Guy is trying to reason with America’s creationists. The Guardian. [Web Article]

Myra Laldin

Lollipop

I went to a school in the foothills of the Himalayas in Pakistan. The school consisted mostly of western children of aid workers, which meant that for the majority of my school years my family members were the only students of color. The school followed a U.S. school system, with bits of the British system interspersed throughout. Although I was not fully aware of it at the time, I look back at my early years in school and realize how often I was confused and slower than my peers at grasping what was going on in the lesson.

I remember sitting in math class trying to figure out a word problem about chipmunks. We were supposed to be counting their acorns but I found myself trying to figure out what the heck a chipmunk was to begin with. In English class I would listen to a British story about “A Day at School” with some strange “lollipop lady*” who would stand on the road and hold up a giant lollipop sign. Forget the point of the story, what is a lollipop lady? Try reading “The Magic School Bus” to a young girl from rural Pakistan. I was mesmerized. While all the other kids moved on to the amazing adventure of riding down the tongue and inside the human body, I was stuck on the cool yellow bus. Years later, when I first came to the U.S., I took a picture of a yellow school bus and sent it to my siblings with the caption, “the magic school buses!”

As I observe students in our beautiful, multicultural classrooms here in the U.S., these memories of life in an international school come back to me. When I see students struggling because they can’t quite grasp the cultural nuances, I’m reminded of the out-of-place chipmunks, lollipops, and big yellow buses of my childhood. In many ways those are the only things I remember about the lessons at my little school beneath the Himalayas. It wasn’t until I began studying educational neuroscience that I was finally able to put words to what was happening. How did those cross-cultural experiences affect my learning in those early years? How many things was my working memory juggling at once? I realize now, not only was I carrying the same “cognitive load” as other students; I was carrying a “cultural load” as well. Unpacking these important ideas will help us all become better learners and better educators.

How does working memory work?

Working memory is what we use to hold on to information in the short term while we use and manipulate that information. For example, if I give you a problem:

6 x 2 =?

You can keep the two numbers in your head easily and at the same time figure out the answer.

Simple, right?

The main idea of cognitive load is that there is a limited amount of information our brains can take in and successfully process at a given time. The more data we send to our brains, the more “processing capacity” is used.

But what if I give you this problem instead:

2 x 3 x 5 x 7 x 9 x 2 = ?

This problem is a real test of your working memory. You most likely will not be able to hold these numbers in your head like you could with the 6 and the 2. Instead, you have to keep going back and looking at the problem. This is because, for the average person, this problem requires greater processing capacity. Simply put, cognitive load is the amount of mental effort being used in working memory.
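
To make the cost concrete, here is a minimal sketch (our own illustration, not taken from any study) of working through the longer problem one step at a time. Each intermediate product is one more item that has to be held in working memory, or written down, before the next step can happen.

    factors = [2, 3, 5, 7, 9, 2]
    running_product = 1
    for factor in factors:
        running_product *= factor
        # Each intermediate result (2, 6, 30, 210, 1890, 3780) is one more
        # item that working memory has to carry into the next step.
        print(running_product)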

Of course, like most things in life, people differ in their processing capacities. This is probably most obvious in the difference between experts and novices. Experts have greater knowledge and familiarity in their area of expertise, so completing a task within that domain imposes less cognitive load on them than it does on a novice doing the same task. The expert doesn’t have to spend as much time getting to know the problem. You can also see this in children, who naturally have fewer points of reference (knowledge) than adults for how things work. They have fewer areas in their minds to peg ideas to or build concepts on. Therefore, children carry a greater cognitive load than adults when trying to perform an activity or understand a new concept. There’s a lot more work for children to do to reach the same result.

It ultimately comes down to the ability to have a point of reference that is related to the new information that is being taught. Studies show that being able to relate to new information is important for all students. This suggests that lesson plans designed to connect to students’ “real worlds” are more effective than abstract lessons. When students don’t have any background knowledge needed to perform a task or comprehend a concept, they experience greater cognitive load. The good news is this is normal. All of us experience varying degrees of cognitive load when learning. If we didn’t, we would never learn! The better question to ask is when does this become a problem in our classrooms?

What happens when you overload working memory before you get to the point?

As an elementary school student, my inability to visualize a chipmunk made it more difficult for me to spend energy on the math aspect of the word problem. It added unnecessarily to the cognitive load. And lacking a point of reference for the “lollipop lady” also taxed my cognitive load as I tried to understand the story. My working memory was trying to hold on to the words “lollipop lady” as well as the rest of the words to make sense of the story. Somewhere, my brain was also trying to understand what a lollipop lady was in the first place, not to mention why she was standing in the middle of the road! When this extra cognitive load relates to foreign or cultural references, we call this cultural load.

Cultural load is the amount of culture-specific knowledge required to understand or perform a task (like figuring out a math problem or understanding a story). The concept of cultural load has become evident in a range of studies that evaluate the role of cultural “frame[s] of reference” in student performance. Growing understanding of cultural load has inspired calls for less culturally biased tests.
Studies suggest that using cultural knowledge and experiences that directly relate to our students can avoid some of this negative cognitive and cultural overload. Researchers have also found that academic success increases when students can take ownership of their learning. Regrettably, the concept of cultural load is often overlooked in classrooms.

What happens when we don’t consider the role of cultural load in the classroom?

One 1998 report showed that a disproportionate number of students from multicultural backgrounds may be inappropriately placed in special education classes. Some case studies describe children who were labeled mentally disabled even though, in many cases, testing revealed they were functioning at a normal intellectual level. Sadly, it wasn’t until years later that schools began realizing these mistaken labels. This research reinforces how susceptible tests are to cultural and social bias, and raises the question: if it’s happening in our tests and we don’t know it, what might be happening in our classrooms? Of course, there is an array of factors not discussed here that contribute to culturally and linguistically diverse populations having a higher percentage of students in special education. That being said, the research aligns with the experiences of many students just like me: we must acknowledge the need for more culturally aware and accepting classrooms.

*Lollipop Lady – noun. British informal. A woman who is employed to help children cross the road safely near a school by holding up a circular sign on a pole to stop the traffic. Not a woman with a giant lollipop.

If you didn’t know that, you may have experienced some “cultural load” first-hand with that cultural reference 😉

References & Further Reading

  1. Artiles, A. J., & Zamora-Duran, G. (1997). Reducing disproportionate representation of culturally and linguistically diverse students in special and gifted education. Reston, VA: The Council for Exceptional Children. [Book]
  2. Benson, E. (2003). Intelligence across cultures: Research in Africa, Asia, and Latin America is showing how culture and intelligence interact. American Psychological Association, 34(2), 56. [Paper]
  3. Campbell, T., Dollaghan, C., Needleman, H. & Janosky, J.  (1997) Reducing bias in language assessment: Processing-dependent measures. Journal of Speech and Hearing Research, 40, 519-525. [Paper]
  4. Feger, M. (2006). “I want to read”: How culturally relevant texts increase student engagement in reading. Multicultural Education, 13(3), 18. [Paper]
  5. Jordan, C. (1985). Translating Culture: From ethnographic information to educational program. Anthropology & Education Quarterly, 16(2), 105–123. [Paper]
  6. McClafferty, K., Torres, C. & Mitchell, T. (Eds.) (2000). Challenges of urban education: sociological perspectives for the next century. Albany, NY: SUNY Press. [Book]
  7. Meyer, L. (2000). Barriers to meaningful instruction for English learners. Theory into Practice. Accessed through Wilson Web on-line database on Sept 23, 2015. [Article]

Rose Hendricks

Language Nutrition

We’re told that a picture is worth a thousand words, but this adage robs words of much-deserved credit. When you’re an infant with a rapidly developing brain, words are one of the most valuable things you can receive. They’re so valuable that a new initiative in Georgia called “Talk With Me Baby” promotes the importance of “language nutrition”. When it comes to language, infants are sponges: essentially every baby growing up in a normal environment masters the complex language system he or she is exposed to. It helps that the adults around them hold up objects and emphatically enunciate their names, saying something like “ba-na-na” while waving the fruit in the child’s face, but that’s not the only way babies learn. Infants are constantly immersed in linguistic environments that are full of people expressing real and complicated thoughts through varied sentence structures. This provides the rich experience that children need to rapidly become fluent speakers. If you’ve ever tried to learn a new language in your teens or later, you know that this sponge-like capacity doesn’t last forever. Talk With Me Baby makes no bones about why it wants to increase the amount of language that babies are exposed to: hearing more words in infancy promotes stronger language skills, which in turn form a foundation for academic and other successes throughout life.

Why are words so crucial during infancy?

The idea that there’s a sacred window of time in which language can be learned – referred to as a critical period – was first articulated in 19591, but it’s still a widely researched and debated topic. There isn’t yet a consensus on whether attempting to learn (a first) language after the critical period is futile or just more difficult than learning it earlier, and if there is a critical period, researchers still debate about exactly when that period is. Since intentionally raising a child without linguistic input (exposure to language) would be unethical, much of the support for the critical period hypothesis comes from tragic cases of children who grew up in abnormal environments that lacked language. Genie is a classic example of a girl who spent her entire childhood locked in a room without any stimulation or proper nourishment until she was discovered at 13 years old. At that time, researchers tried to provide her with therapy for her physical and cognitive abnormalities. Although she seemed able to learn a limited vocabulary, most scholars claim that Genie never truly learned language: she could not use grammar to put words together in a meaningful way. Although her case suggests the importance of receiving linguistic input during the critical period, it’s unclear whether Genie was disadvantaged from the start – her father claimed that he locked her up because she was cognitively disabled – and there are many other aspects of Genie’s deprived childhood that could have contributed to her inability to learn language at 13.

There are a few characteristics of the developing brain that speak to why we might be better at absorbing language as babies than as adults. For one, a critical period is not unique to language. Other biological processes also have their own critical periods2. Some of these periods have been demonstrated most clearly in animals deprived of specific sensory stimuli. For example, Hubel and Wiesel studied a cat whose eye was sewn closed as a kitten. When they removed the stitches, the cat was still unable to see out of the previously deprived eye. During the deprivation period, the visual cortex became dominated by the normal, unobstructed eye, which hijacked the brain space typically devoted to the second eye.

The cat’s visual cortex demonstrates a crucial feature of the brain: its plasticity. Neuroplasticity refers to the brain’s ability to reorganize itself based on the inputs it receives. Our brains are constantly reorganizing themselves (that is how we learn anything), but infants’ brains are especially plastic3. Developing brains are highly sensitive to incoming information and experiences, allowing them to learn massive amounts of information rapidly.

Perhaps counterintuitively, another explanation for why immature brains are ripe for language learning is that their prefrontal cortex (PFC) – the area of the brain most associated with higher-level and rational thinking – is undeveloped4. A paper by Sharon Thompson-Schill, Michael Ramscar, and Evangelia Chrysikou gives the example of watching a football game to highlight how adults’ and toddlers’ pattern-learning strategies differ. In the example game, you notice that the team passes the ball 75% of the time and runs with it the other 25%. Your task is to predict what the team will do in subsequent plays. As an adult, you’re likely to match probabilities: 75% of the time you’ll guess that the team will pass, and the other 25% you’ll guess that it’ll run. You’ve taken the less frequent event (the run plays) into account. However, since you don’t know when those rare events will occur, the optimal strategy would actually be to always guess that the team will pass. That’s precisely what a toddler would do. Toddlers ignore irregularities and grasp conventions quickly, at least partially thanks to their undeveloped PFCs. Thompson-Schill and colleagues argue that toddlers’ tendency to ignore inconsistencies might be ideal for learning the foundations of language, especially the syntactic patterns that govern our grammar. Toddlers eventually discover and master their language’s irregularities, moving from forms like “drinked” to “drank” as their PFCs develop and help them filter exceptions to rules.
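
A quick simulation shows why always guessing “pass” comes out ahead. This is only an illustrative sketch of the two strategies described above, assuming independent plays and a fixed 75% pass rate; it is not code from the study itself.

    import random

    random.seed(0)
    plays = ["pass" if random.random() < 0.75 else "run" for _ in range(100_000)]

    # "Adult" strategy: probability matching, i.e. guess "pass" on 75% of plays.
    matching_hits = sum(
        play == ("pass" if random.random() < 0.75 else "run") for play in plays
    )

    # "Toddler" strategy: always guess the most frequent play.
    maximizing_hits = sum(play == "pass" for play in plays)

    print(matching_hits / len(plays))    # about 0.625 (0.75*0.75 + 0.25*0.25)
    print(maximizing_hits / len(plays))  # about 0.75

Matching probabilities feels more sophisticated, but over many plays the blunt always-guess-pass strategy is right more often.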

Infants’ and toddlers’ brains are ready and waiting for linguistic input. This input allows their brains to develop new neural pathways in response to the language conventions they’re exposed to. As they get older and continue to use language more (whether they’re listening, speaking, reading, or writing), these pathways continue to strengthen. Talk With Me Baby asserts that “early language exposure is the single strongest predictor of third grade reading proficiency,” and that third grade reading proficiency, in turn, predicts further academic and economic successes. This is because third grade is when most kids transition from learning to read to reading to learn. In this way, linguistic exposure as an infant has cascading effects that last long after infancy. Just as proper nutrition promotes physical growth and is crucial for babies’ future health, proper linguistic nutrition promotes the mental growth necessary for future success.

The 30 Million Word Gap

It’s almost impossible for a baby to grow up without any exposure to language, but many children still grow up in environments that lack sufficient language exposure. In one seminal study, researchers found that the number of words addressed to children differed dramatically across families of different socioeconomic statuses (SES)5. SES is a measure that combines income, occupation, and education to reflect a family’s economic and social position in society. Children from families in the highest SES category heard an average of 2,153 words per hour, while those in the lowest SES group heard only 613 words per hour. From these numbers, the researchers calculated that by 4 years old, the average child from a higher-income family has heard a total of about 45 million words, while the average child from a low-income family has heard a measly 13 million words. The authors referred to this disparity as the 30 Million Word Gap. The gap may result, at least in part, from the fact that parents who are struggling financially are often unable to devote the same amount of focused time to their children that parents with fewer financial struggles can6. Reduced linguistic input is one consequence of the quality-time deficit that lower-SES kids often experience.
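
A rough back-of-the-envelope calculation shows how those hourly figures scale up to the study’s totals. The figure of 14 waking hours per day used below is our own illustrative assumption, not a number reported by the researchers.

    WORDS_PER_HOUR_HIGH_SES = 2153
    WORDS_PER_HOUR_LOW_SES = 613
    WAKING_HOURS_PER_DAY = 14  # illustrative assumption
    YEARS = 4

    def words_heard_by_age_four(words_per_hour):
        return words_per_hour * WAKING_HOURS_PER_DAY * 365 * YEARS

    high = words_heard_by_age_four(WORDS_PER_HOUR_HIGH_SES)  # roughly 44 million
    low = words_heard_by_age_four(WORDS_PER_HOUR_LOW_SES)    # roughly 12.5 million
    print(f"gap: about {(high - low) / 1_000_000:.0f} million words")  # about 31 million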

If a child from a low-income family enters school at age 4 after hearing 30,000,000 fewer words than his or her classmates have, this child will immediately be at an immense disadvantage. Because learning is sequential, in the sense that many concepts build on each other, the child on the disadvantaged side of the word gap will have difficulty learning new information that requires an understanding of language. As a result of missing out on valuable linguistic input as a baby, this student may never catch up.

Talk With Me Baby

The state of Georgia has launched an effort to close the 30 Million Word Gap7, acknowledging that “a lack of early language exposure has lifelong consequences,” like dropping out of high school, incarceration, becoming a teen parent, involvement in violence, unemployment, and poverty8. Their initiative, Talk With Me Baby, is being implemented mainly by spreading awareness. Because the word gap can have future physical health consequences and because almost all babies are seen in hospitals, nurses in particular are helping spread the message that babies are listening, even before they’re born. They’re absorbing what they hear, so they should hear as much language as possible. The website for Talk With Me Baby also advertises an app that parents will soon be able to download with features like topics to talk about, milestones to look for, reminders to talk, and resources.

Raising a child is complicated. It can be hard to know what to feed your child, when to do it, and even how to afford the ideal nutrition. Luckily, providing babies with proper linguistic nutrition is fairly straightforward and accessible to all. What words should you feed your child? As many as you can! When should you feed your child his or her words? Whenever you can! Ideally, babies should hear not only as many different words as possible, but they should also hear as many different sentence structures as possible. Long sentences are the linguistic equivalent of milk: consuming them helps children’s cognitive foundations get strong enough to support all of the lessons and skills that they’ll learn in school. Perhaps best of all, words are free and we can all make them, which means closing the 30 Million Word Gap is within our reach.

References & Further Reading

  1. Penfield, W., & Roberts, L. (1959). Speech and brain-mechanisms. Princeton, NJ: Princeton University Press. [Book]
  2. Sengpiel, F. (2007). The critical period. Current Biology, 17(17), R742-R743. [Paper]
  3. Mundkur, N. (2005). Neuroplasticity in children. Indian Journal of Pediatrics, 72(10), 855-857. [Paper]
  4. Thompson-Schill, S., Ramscar, M., & Chrysikou, E. (2009). Cognition without control. Current Directions in Psychological Science, 18(5), 259-263. [Paper]
  5. Hart, B. & Risley, T. (2003). The early catastrophe: 30 Million word gap by age 3. American Educator, Spring 2003, 4-9. [Paper]
  6. National Journal. (2015). 30-million word gap divides rich and poor kids. [Web Article]
  7. Deruy, E. (2015). Why boosting poor children’s vocabulary is important for public health. Atlantic Magazine. [Web Article]
  8. Talk with Me Baby [Educational Initiative]

Kathryn Mills

Teenage Brain

Adolescence is the period between childhood and adulthood. And though it can stretch into our early twenties, we spend many of these years in high school. This stage of life is marked by increased cognitive abilities, social sensitivity, and agency (or increasing independence). These changes make this time particularly perplexing to some adults, as they struggle to make sense of stereotypical adolescent behaviors such as risk taking and increased allegiance to peers.

At the end of the 20th century, it was common to discuss adolescent behavior as being influenced by “raging hormones.” Today, it is becoming increasingly common to discuss adolescent behavior in terms of the “teenage brain.” But what makes the teenage brain different from the child or adult brain? And do these differences have implications for education and learning? This blog will discuss the latest research in adolescent brain development and how the current evidence might inform education during the teenage years. This post outlines three of the most interesting things neuroscience has taught us about the physical changes that take place in the brain during adolescence.

1. The brain continues to change throughout adolescence.

Perhaps the most important consideration to keep in mind regarding the brain during adolescence is that it is continuing to change. There is evidence for this from multiple lines of research, including cellular work on post-mortem human brain tissue1, as well as longitudinal magnetic resonance imaging (MRI) studies of brain structure and function.

What do we mean by “physically change”?
With MRI, we have the ability to see how the living human brain changes from birth to old age by taking different kinds of pictures. One kind of picture we can take is of the structure – or anatomy – of the human brain, and we can use this picture to look specifically at two components of the brain’s structure. One component is grey matter, which is largely made up of brain cell bodies and their connections; the other is white matter, which is primarily the long connecting fibers that carry signals between brain regions. What gives white matter its color is “myelin”, a fat that wraps around connecting fibers to make communication more efficient.

There have now been several studies in which hundreds of participants had their brains scanned multiple times across development, and we know from these studies that the amount of grey matter is greatest during childhood, then decreases during adolescence before roughly stabilizing in the mid- to late twenties2. We also know that the amount of white matter increases almost linearly across adolescence3. These are two major changes happening in the structure of the brain during adolescence.

2. The brain doesn’t all change at once.

Structural changes are not occurring at the same time across the whole brain. Actually, areas of the brain that are involved in basic sensory processing or movement develop earlier than areas of the brain involved in more complex processes such as inhibiting inappropriate behavior, planning for the future, and understanding other people. These and other complex processes rely on areas in the prefrontal, temporal and parietal cortices, which are continuing to change in structure across the second decade of life4.

How do these changes happen?
We still do not know the specific cellular mechanisms that underlie developmental changes in measures of grey or white matter. It is often thought that these decreases in grey matter reflect, at least in part, changes in connectivity between brain cells. These changes include decreases in dendritic spine density (which is basically a proxy for how interconnected cell bodies are in the grey matter) and other cellular processes involved in synaptic pruning (which is the way that connections in the brain are broken). Histological work, which involves studying the cells using microscopes, has given us a better understanding of the cellular changes occurring in the human brain across the lifespan.

In one specific study, researchers at the Croatian Institute for Brain Research counted the number of dendritic spines in an area of the prefrontal cortex5. They found that the number of spines continued to decrease across the second and third decades of life. So, this finding gives some cellular evidence for the continued structural development of the human brain across adolescence, at least in a section of the prefrontal cortex.

Is this a bad thing?
Not necessarily. The continued reduction in synapses seen in the prefrontal cortex means that the brain is still undergoing changes in organization during adolescence. As humans, we have an excess amount of brain connections when we are children, and almost half of these connections can be lost in adolescence. We know that experience influences what connections are kept and subsequently strengthened. Thus we can think of adolescence as a time of transition rather than a time of loss in certain areas of the brain.

3. The brain is changing in more ways than one.

MRI can also be used to see how blood flows in the brain, which allows researchers to get a sense of how the brain is working. So if MRI alone reveals brain structure, you can think of fMRI (or “functional MRI”) as revealing brain function. Many fMRI studies have also shown changes in brain functionality across adolescence. For example, how we use areas of the brain involved in understanding other people changes between adolescence and adulthood6.

This is especially true for “the social brain”.
There are a number of cognitive processes involved in interacting with and understanding other people, and we can use functional MRI to see what areas of the brain are active when we engage in important social tasks, like understanding the intentions or emotions behind facial expressions or understanding social emotions like guilt or embarrassment. Tasks like these consistently recruit a set of brain regions in the prefrontal and temporal cortices that is sometimes referred to as the “social brain.”

Although adolescents and adults use the same areas of the brain during a number of social tasks like understanding intentions and social emotions, these tasks all show a similar decrease in activity across age in the medial prefrontal cortex, a part of the brain often related to social processing. Adolescents seem to use this part of the prefrontal cortex more than adults when doing certain social tasks7.

So what does it all mean?
What is the point in highlighting these biological changes if we cannot connect them to real world behavior? In this post, I discussed how the brain is changing in both its structure and function during adolescence, highlighting in particular the changes involved in areas of the brain used when we attempt to understand the thoughts, intentions and feelings of other people. These changes are relevant because of the developmental tasks that adolescents must accomplish. One of the major developmental tasks of adolescence is to learn how to successfully navigate our highly social world. Having a malleable brain during adolescence is arguably adaptive for this sort of task, as new social skills and higher level cultural rules can be acquired with greater ease. Thinking about how these changes may impact the way students interact with educational environments is also important – considering these environments are often just as social as they are learning-oriented. In the next post, I’ll discuss how the adolescent brain is not just primed to learn from the social environment, but also how it is particularly sensitive to complex social signals.

References & Further Reading

  1. Petanjek, Z., Judaš, M., Šimic, G., Rasin, M. R., Uylings, H. B. M., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 108(32), 13281–13286. [Paper]
  2. Huttenlocher, P. R., & Dabholkar, A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. The Journal of Comparative Neurology, 387(2), 167–178. [Paper]
  3. Mills, K. L., & Tamnes, C. K. (2014). Methods and considerations for longitudinal structural brain imaging analysis across development. Developmental Cognitive Neuroscience, 9, 172–190. [Paper]
  4. Lebel, C., & Beaulieu, C. (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(30), 10937–10947. [Paper]
  5. Tamnes, C. K., Walhovd, K. B., Dale, A. M., Østby, Y., Grydeland, H., Richardson, G., … Fjell, A. M. (2013). Brain development and aging: Overlapping and unique patterns of change. NeuroImage, 68C, 63–74. [Paper]
  6. Blakemore, S.-J., & Mills, K. L. (2014). Is Adolescence a Sensitive Period for Sociocultural Processing? Annual Review of Psychology, 65(1), 187–207. [Paper]
  7. Blakemore, S.-J. (2008). The social brain in adolescence. Nature Reviews. Neuroscience, 9(4), 267–277. [Paper]

Andrew Watson

Remember Kid

When teachers say we want our students to learn, we might also say we want them to remember; after all, if I’ve learned something, I can remember it later on. Sadly and surprisingly, there’s a curious danger to remembering: remembering can cause you to forget.

Yes, you read that right. The wrong kind of remembering causes forgetting.

Imagine the following mental exercise—a mental exercise that resembles many research studies1:

To start, you study a list of words in four different groups—say, Animals (dog, cat), Instruments (guitar, violin), Foods (pizza, steak), and Furniture (sofa, table). After a while, you recall half of the words in two of the groups. For example, in the Animal group, you recall the word “dog” (but not “cat”), and in the Foods group, you recall the word “pizza” (but not “steak”). And you don’t recall any words in the Instrument or Furniture groups.

When I test you on all these words several hours later, there are three logical categories.

First, there are the two groups of words you didn’t recall at all: Instruments and Furniture. You’re likely to remember—perhaps—50% of those words.

Second, there are the words and groups you did recall: the word “dog” in the Animal group, or “pizza” in the Food group. Because you recalled these words, you’re likelier to remember them, so your score will be higher—say, 75%.

Third, there are words that you didn’t recall (“cat,” “steak”) even though you recalled other words in the Animal and Food groups.

Take a moment to ask yourself: what percentage of words in this 3rd group are you likely to remember?
Perhaps—because you practiced their groups—you’ll remember them at the 75% level. Or perhaps—because you didn’t practice these specific words—you’ll remember them at the 50% level.

It turns out both answers are wrong. You’ll remember even fewer of those words: say, 40%.

Why? Because practicing some of the words in the Animal and Food categories makes it less likely you’ll remember the un-practiced words. In other words, recalling some of the words prompts you to forget the words you didn’t recall.

The wrong kind of remembering caused you to forget.
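
To keep the three categories straight, here is the whole thought experiment in one place. The percentages are the illustrative figures used above, not measurements from a particular study.

    recall_rates = {
        "unpracticed categories (Instruments, Furniture)": 0.50,
        "practiced words in practiced categories (dog, pizza)": 0.75,
        "unpracticed words in practiced categories (cat, steak)": 0.40,  # RIF at work
    }
    for group, rate in recall_rates.items():
        print(f"{group}: {rate:.0%}")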

In the neuroscience community, there is an active debate about the mechanisms that cause “retrieval-induced forgetting.”2,3 And while that debate is fascinating, it doesn’t really help teachers answer our constant question: “what should teachers do in the classroom with this scientific information?”

I haven’t read any research that addresses this question directly. (More precisely: I don’t remember having read any research that answers it; perhaps I read it, and forgot the source.) But I think the potential dangers of retrieval-induced forgetting (often abbreviated RIF) should shape our practice in very specific ways—in particular, the way we review.

Here’s an example. In yesterday’s class, my students discussed the five ways that the French and Indian War laid the foundation for the American Revolutionary War. To begin today’s class, naturally, I ask my students what conclusions we reached. One student calls out: “The French and Indian War cost a lot of money, and the British government decided to tax the colonies to pay for it. Those taxes helped spark the revolution.” Exactly so. Another student adds to the list: “George Washington gained essential military training and a cross-colony reputation for bravery.” Because we’ve gone over these two key points from yesterday, I assume my students will be prompted to remember the other three. Confident in this assumption, I move on to today’s new topic…

But there’s a problem here. Yesterday, my students got a list of five key points; today, we began class by reviewing two of them. I hoped—in fact, assumed—that my two-item review would help them remember the other three points. However, if the RIF research is right, then my two-item review will in fact make it less likely that the students will remember the other three items. Because they practiced two of the examples in this group (“ways that one war set the stage for the next”), they are less likely to remember the un-practiced examples in that group.

When I first read this research, and started thinking about my own teaching practice, I realized with increasing alarm how often I review this way. If we studied ten vocabulary words yesterday, I’ll prompt students to recall two or three. If we looked at eight subject-verb agreement rules, I’ll ask them to jot down two, and discuss them with a partner. Of course, teachers must help their students review the material they learn, but if the first review is incomplete, we may very well be reducing—not increasing—the long-term likelihood that our students remember all the information.

In my own teaching, the RIF research has led to this guideline: the first two or three times I go over a topic, I make sure to cover all of the material that is a) conceptually related and b) equally important:

  • “Conceptually related”: RIF results from partial review of conceptually related information only; it influences Animal and Food words, not Instrument and Furniture words.1 For this reason, I don’t need to review an entire lesson—just the logically connected pieces of it. When I go over five essentials for a strong topic sentence, I don’t also need to review the highlights of “Young Goodman Brown.” We discussed both topics on the same day, but our discussion of the short story was conceptually distinct from our discussion of effective writing.
  • “Equally important”: when we go over all five ways that the French and Indian War led to the Revolutionary War, I don’t need to go through the detailed specifics; they’re not as important as the main concept. If I think of my lesson plan in an outline, I should cover all (or none) of the points on the same level of that outline.

One final danger to consider: student-directed review might be especially prone to RIF. If students come up with their own list of key terms to remember, for example, their incomplete list might prompt them to forget the examples they didn’t include. As teachers, we need to find mechanisms to ensure that student-generated review covers all equally important information.

Of course, research into RIF continues, and we don’t yet completely understand how and why it happens. For teachers, the key point to keep in mind is this: whenever we prompt our students to review, we must be sure that RIF doesn’t cause them to forget what we want them to remember.

References & Further Reading

  1. Jonker, T. R., Seli, P., MacLeod, C.M. (2012). Less we forget: Retrieval cues and release from retrieval-induced forgetting. Memory & cognition 40(8), 1236-1245. [Paper]
  2. Dobler, I.M. & Bäuml, K.T. (2013). Retrieval-induced forgetting: dynamic effects between retrieval and restudy trials when practice is mixed. Memory & cognition 41(4), 547-557. [Paper]
  3. Mall, J.T. & Morey, C.C. (2013). High working memory capacity predicts less retrieval induced forgetting. PLOS ONE, 8(9), e52806. [Paper]
  • Johansson, M. et al. (2007). When remembering causes forgetting: Electrophysiological correlates of retrieval-induced forgetting. Cerebral Cortex 17(6), 1335-1341. [Paper]

Stephanie Fine Sasse

Research and Education

Anyone who has ever stood in front of a classroom silently praying that their curriculum is engaging, their students are comfortable, and their jokes don’t skip a generation can tell you: Teaching isn’t easy. It’s some secret blend of intuition, strategy, and deep breaths. Great teachers aren’t measured by how much they know about the brain any more than great artists are measured by how much they know about the reflective properties of light: knowing how to use it trumps knowing how it works.

So why do we think it’s so important to use research in the classroom? Why am I spending all my time hanging around the places where education and neuroscience overlap?

Well, put simply: I think it can help.

And I’m not the first. For generations, teachers have been drawn to learning more about the engine that runs the minds that they’re shaping. And researchers have believed in the power of knowledge to improve the way we teach. We have the instinct that the more we know about how things work, the better we’ll be able to control or optimize them. And for generations this has led to a somewhat rocky relationship between the researchers who can describe a student, and the teachers who can inspire one. Sometimes toes get stepped on, sometimes lines are crossed or miscommunications abound… and sometimes, it works. So I decided to take a look back at a few of the ways that research has influenced education, and what that can teach us about getting this important relationship on the right track.

(350 BC) Aristotle and the meeting of science and education.

Aristotle was the original evidence junkie, and arguably, one of the first people to view education through what can be thought of as an early iteration of a scientific lens. He was a pioneer of carefully evaluating claims through observation and reasoning and — by refusing to settle for assumptions — laid the groundwork for some of the greatest scientific discoveries. He even compared how constellations appear in the sky depending on your distance from the equator, providing physical evidence to corroborate Pythagoras’s claim that the Earth was, in fact, round (sorry, Columbus).

At the same time, he was a dedicated educator, and he applied his love of the measurable to his pedagogical beliefs. He started a school, which was built on his view that nature is best understood through structured evidence-seeking and reason. He was an early advocate for ideas that have since transformed into a slew of modern buzzwords; he believed experiential learning, educational equality, lifelong learning, and public access to education were essential components of an ethical society. His fact-forward approach to inquiry even snuck into his moral teachings through the concept of “phronesis”: a type of practical knowledge that maintains that being a good person requires taking a bit of a motivated, scientific approach to moral decision-making.

Aristotle believed in fundamental ways of knowing that informed both his investigation of nature and his approach to teaching. As a foreshadowing of professional-learning-communities-to-come, he believed that teaching itself ought to be informed by the collected knowledge of those who have taught. Basically, he’s not only one of the first scientists, but also one of the first advocates for the ways that what we know should directly influence what and how we teach.

Admittedly, Aristotle was a far cry from applying neuroscience to the classroom, but his proclivity for fusing analysis, evidence, action, and learning continues to shape the way we think about education today.

(19th-20th century) The empirically-informed standardization of learning.

We’ve all heard critics of the modern educational system summarize its shortcomings as an outdated “factory model of education”. They’re usually referencing things like standardized assessment, IQ testing, age grouping, and the one-size-fits-all approach. Most often, the blame is placed on the late 19th century adoption of the Prussian-Industrial model in the US, claiming it prioritized conformity and efficiency above all else. Of course, it’s not really that simple.

The 19th century was a time of great flux for public education. Competing models were emerging to solve the most pervasive problems of the day: inconsistent content, unclear standards, and a lack of equal access. By the mid-19th century, the inaugural Secretary of the Massachusetts State Board of Education, Horace Mann, made it his mission to empirically evaluate existing domestic and international models and to lobby for the full implementation of the one that best suited the nation’s needs. Enter: the Prussian-Industrial Model.

This model was first crafted by King Frederick William I as a state-mandated program, arguably designed to cultivate an obedient and submissive public. Teachers were stripped of autonomy and held to strict standards uncommon to the “sage” or “mentor” models of the past. Sure, it was rigid, impersonal, and reeked of indoctrination, but it also ticked the boxes that mattered most to Mann. It was cost-effective, scalable, inclusive, and consistent, and it prioritized teacher training: features he argued were necessary for public education to thrive. If he leveraged this powerful system for good, he believed, society as a whole would benefit.

Of course, once this ball started rolling, it seems, Mann’s best intentions couldn’t hold back the inertial appeal of standardization or keep up with the emergent needs of the new system. As the Department of Education and similar bodies came into existence, the ambition to keep improving our approach and evaluating progress intensified. But the new model catalyzed major increases in student-to-teacher ratios; how do you properly evaluate groups of students that large? Policymakers wanted data. They wanted to know what was working, what wasn’t, and what different students knew. By the early 20th century, research on human development, learning, and memory aligned with the goals of evaluators. The complicated entangling of research and education was well on its way as psychologists and educational strategists were recruited to design blanket assessments for growing classrooms.

One of these tests, the Binet-Simon test, was created by French psychologist Alfred Binet to determine the mental capacity of students so that those with severe difficulties could be properly accommodated. That’s it. This whole mess of an IQ debate started with the earnest goal of capturing a snapshot of a particular child’s abilities and responding accordingly. Binet was clear: intelligence is diverse, complicated, and unlikely to remain static over the lifespan. Unfortunately, not everyone was listening.

Henry Herbert Goddard, a US psychologist, caught wind of the Binet-Simon test and translated it into English. He went on to promote its use as an intelligence assessment tool, going so far as to encourage the sterilization of those deemed “feeble-minded” by its measure. Stanford psychologist Lewis Terman, who adapted the test to create the Stanford-Binet version (now in its fifth edition), also believed that intelligence was an inherited and fixed trait. The result was a national effort to rank students, citizens, and immigrants against each other, with sometimes dire consequences.

By 1936, standardized testing had become such a popular way to quickly and consistently assess large groups of people that the first automatic test scanner was developed to make doing so even easier. Basically, in less than one hundred years, the goal of systematizing public education led to a series of (sometimes) reasonable next steps that eventually landed us with Scantrons.

(Present Day) Brain-Based Learning.

Selfies and Netflix consumption aside, it’s safe to assume that people haven’t fundamentally changed much since the days of Aristotle. We’re still susceptible to the same biases, assumptions, and miscommunications that we were in the 19th century. Of course, we have the added benefit of learning from everything that’s come before us. So the question is, how do we make sure that research is used wisely?

Well, for starters, we have to talk to each other. If someone had asked Binet before implementing his test, he would have likely clarified how to use it reasonably. If someone had asked experienced teachers before they assumed a single standard for quality, they would have likely clarified the value of adapting to your students. The problem is that information doesn’t exist in a vacuum and expert does not mean right. We’re all constantly interpreting research to match our own goals, experiences, and understanding of the world. We are much better at hearing what we want to hear than we are at listening to each other.

Which leads me to a question that a teacher asked me in one of my workshops: Exactly what type of learning is not brain-based?

Her point was that the premise is flawed. The way this research is being shared is often flawed. If we present neuroscientific research as a solution, or as information that lays the foundation for a “type” of learning, we miss the point — and the opportunity. Great teachers have navigated the inner workings of the brain for centuries without ever needing to know what was going on inside. It is highly unlikely that, now that we have MRI machines and EEG, we will suddenly understand much better how to teach a brain to learn. Like any good, long-term relationship, it all comes down to goals, expectations, and respect. Researchers and educators have to consider each other’s lenses and goals, and adjust their expectations accordingly.

And when it comes to neuroscience, there seems to be a bit of a communication breakdown. Some people are adamant that there’s no place for neuroscience in education; it’s premature, or the questions are just too different. Others believe that it’s the answer we’ve all been waiting for, the pixie dust that’s going to fix whatever we believe to be broken. Still others see a business opportunity: if we can package up the appeal of neuroscientific answers and cater them to educators’ needs in bite-sized chunks, we can make some serious dough and no one will be the wiser.

I find that reality is usually somewhere in-between.

Neuroscience is unlikely to create great teachers, great tests, great classrooms, or great curricula – that’s not its goal and that’s not something I expect anyone to bottle up any time soon – but it can inform the way we think about students and the nested communities that they’re a part of. It can teach us more about ourselves, how we interact with information, and how we interact with each other. It can be one of many tools we use to get this right. And frankly, that’s all we should ask of it.

When thinking about history, we first have to consider whether the actions that look foolish in retrospect were actually reasonable reactions to the problems of the day. Sometimes those reactions succeeded, and sometimes they didn’t. By putting ourselves in their shoes, we can empathize with their mistakes and more easily imagine ourselves making them in similar circumstances. Education and science are similar in that both can be a reflection of the society that supports them. Every solution has flaws, and often the solution to one problem ends up causing a whole slew of new ones. The power and perceived credibility of research were well received by a system that felt haphazard and disorganized. The problem is that the same research, viewed through different lenses, can have drastically different consequences. In the case of standardization, history reminds us to embrace the insights research may offer while also being wary of the agendas that may be shaping its use.

Similarly, neuroscience is valuable to education, so long as we understand its limits and the biases of those who disseminate it. If we adopt it blindly, we run the risk of misallocating resources or creating more problems than we solve. Recent studies suggest that the majority of educators continue to believe neuromyths, and the problem is, they didn’t come up with those ideas themselves. Someone else told them that students are left-brained or right-brained. Someone else told them that boys and girls are born with totally different brains, that we use only 10% of our brains, or that Mozart will make you smarter. The list goes on and on, but the point is, as history has shown us, society (and in this case mass media and capitalism) will often shape the message. It’s up to us to find ways to make sure that the darker side of history doesn’t repeat itself.

It’s a pursuit that we’ll never really finish. Research is always in progress and education is always looking for ways to adapt to the needs of the day. The goal is to work towards the best ways to keep up, so that we can collectively take the next chapter of history into our own hands.

References + Further Reading:

Aristotle

  • “Aristotle (384–322 B.C.) – Education for a Common End.” StateUniversity.com. [Blog]
  • Back, S. (2002). The Aristotelian challenge to teacher education. History of Intellectual Culture, 2(1), 1-5. [Paper]
  • Curren, R.R. (2000) Aristotle on the necessity of public education. Rowman and Littlefield Publishers. [Book]
  • Korthagen, F.A.J., in cooperation with Kessels, P.A.M., Koster, B., Lagerwerf, B., & Wubbels, T. (2001). Linking practice and theory: The pedagogy of realistic teacher education. Mahwah, N.J.: Lawrence Erlbaum Associates. [Book]
  • Popova, M. (n.d.) The art of practical wisdom: The psychology of how we use frames, categories, and storytelling to make sense of the world. BrainPickings.com [Blog]
  • Wanjek, C. (2011). Top 5 misconceptions about Columbus. LiveScience [Blog]

19th-20th Century Learning

  • Binet, A. (1905). New methods for the diagnosis of the intellectual level of subnormals. L’Année Psychologique, 12, 191-244. [Paper]
  • EdX. (2015). Saving schools: History and politics of U.S. Education. Harvard University [MOOC]
  • Fletcher, D. (2009). Brief history: Standardized testing. TIME [Article]
  • Meshchaninov, Y. (2012). The Prussian-Industrial history of public schooling. The New American Academy. [Report]
  • Noer, M., Khan, S. (2012). The history of education. Forbes. [Video]
  • Watters, A. (2015). The invented history of ‘The Factory Model of Education’. Hack Education [Blog]
  • Zenderland, L. (2001). Measuring minds: Henry Herbert Goddard and the origins of American intelligence testing. Cambridge University Press. [Book]

Brain-Based Learning

  • BrainFacts. (n.d.). Neuromyths. BrainFacts.org. [Resource]
  • Howard-Jones, P.A. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15(12), 1-8. [Paper]
  • Sukel, K. (2015). When the myth is the message: Neuromyths and education. Dana Foundation. [Briefing]
  • Sylvan, L.J. & Christodoulou, J.A. (2010). Understanding the role of neuroscience in brain based products: A guide for educators and consumers. Mind, Brain, and Education, 4(1), 1-7. [Paper]
  • Tardif, E., Doudin, P., Meylan, N. (2015). Neuromyths among teachers and student teachers. Mind, Brain, and Education, 9(1), 50-59. [Paper]
  • Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. [Paper]

Rebecca Gotlieb

Resilience, the ability to recover from a setback, is one of the most important traits and mindsets to instill in children so that they may thrive in adulthood. This is the theme of Building Resilience in Children and Teens: Giving Kids Roots and Wings, Third Edition, by Dr. Kenneth R. Ginsburg. Ginsburg is a pediatrician at the University of Pennsylvania Perelman School of Medicine, a counselor and researcher in child development, and a father of two adolescent girls. The “seven crucial Cs” (competence, confidence, connections, character, contributions, coping, and control) comprise the skills parents should foster in their children to promote resilience. Although Ginsburg humbly states that much of the book is “commonsense parenting,” this guide, aligned with recommendations from the American Academy of Pediatrics, is helpful to all parents and youth-services providers because of the practical tips and tricks he provides for reflecting on and improving one’s parenting practices.

Children and adolescents experience stress from parents, friends, school, demanding extracurricular activities, and the media. While stress once had an evolutionarily productive role (i.e., spurring us to escape predators), and while it can still be harnessed to increase productivity today, chronic stress may lead to poor health and risky decision-making. Given that many of the behaviors parents hope their children will avoid arise as stress-coping mechanisms, it is important to address children’s stress directly. Ginsburg offers numerous strategies to help children and adolescents (and their caregivers) manage stress and build resilience. He suggests physical exercise, meditation and reflection, proper nutrition and sleep, engaging in creative activities, volunteering in the community, having multiple friend groups and older mentors, and learning to ask for help. He discusses some of his clinical techniques for redirecting stress-related behaviors, such as making a decision tree. More than any of these tactics, the most critical parenting practice for building resilience and managing stress is ensuring that children know they are loved unconditionally, that their parents will always be a source of stability, and that home can be a safe haven.

Another crucial parenting practice is setting high expectations for children. Children will fulfill the expectations set for them, whether those expectations are low or high. One of the greatest challenges of parenting is knowing how much to protect a child. Loosening protective reins to allow children to work on their emerging abilities and build on their strengths gives them an opportunity to gain competence and confidence. When children or adolescents meet expectations, praise should be realistic and based on effort. When they fail to meet reasonable expectations that parents set, parents need to avoid lecturing. Criticism should be focused, and punishments should be clearly related to the offense committed. Parents should not equate discipline with punishment; rather, they should think of discipline as a way of teaching and scaffolding behavior. To understand why a child has not met an expectation, and to connect more generally, parents need to learn to listen. Doing so means creating opportunities for discussion, listening intently without interrupting, and being non-judgmental.

Parents need to model the behaviors that they hope to cultivate in their offspring. The actions that children observe will impact their behavior much more than the messages they are told. As such, parents should embody the values they hope to pass on, such as giving to charity, avoiding prejudice, delaying gratification, communicating emotions effectively, and devoting oneself to important relationships. Ginsburg argues that the aim of parenting is to raise children who will grow into people who will be successful at ages 35, 45, 55 and beyond. At each of these ages parents need to care for themselves and model a full life for their children by engaging in their own interests and maintaining their own social relations outside of their children.

These parenting practices encompass an authoritative parenting style in which parents set clear expectations, offer an abundance of love, and urge their children to develop their own independence. Authoritative parents offer their children lots of time, attention, and opportunities for emotional vulnerability, but they do not spoil their children by indulging every material desire.

While nearly the entire book is relevant to any parent, Ginsburg offers a few specialized tips for populations that face unique challenges, such as military families or adolescents with depression. Beyond the recommendations in this book, Ginsburg also refers his readers to online resources with a wealth of information about promoting grit and resilience and reducing stress. Ultimately, he suggests that first and foremost a parent should trust her own instinct about what is best for her child, as this is the most important ingredient for “giving kids roots and wings.”

 

Ginsburg, K.R., & Jablow, M.M. (2015). Building Resilience in Children and Teens: Giving Kids Roots and Wings (3rd ed.). Elk Grove Village, IL: American Academy of Pediatrics.

Rebecca Gotlieb

Laurence Steinberg, professor of psychology at Temple University, provides a compelling call to action grounded in psychological and neuroscientific research in Age of Opportunity: Lessons from the New Science of Adolescence. Adolescence (roughly defined as ages 10-25) lasts longer than it did in previous generations. Comparisons of today’s U.S. adolescents to those of previous generations suggest that they are doing no better on critical social, health, and societal measures; comparisons to our peer nations suggest that U.S. adolescents are doing worse. Thus, Steinberg urges parents, educators, policy makers, and other actors to rethink how they raise and interact with adolescents. Recent research about the adolescent brain suggests that adolescence is a unique time for developing skills to flourish, but this must be balanced against adolescents’ proclivity for risk-taking and poor self-control. Incisive and comprehensible, Age of Opportunity is a worthwhile read for educators and parents of adolescents, as well as anyone interested in understanding the causes and implications of shifting demographic trends among young people.

Steinberg argues that society needs to invest in the period of adolescence because the brain will never again be as plastic. Research from the last fifteen years illuminates the unique features of the adolescent brain. Neural connections among various regions of the brain, which mature at different rates, are reorganized during adolescence. The more we use particular skills, the better connected the brain regions that facilitate those skills become. The brain systems that undergo the greatest change during adolescence are those that control reward-seeking, relationships, and regulatory behaviors. Memory is also heightened during adolescence. The adolescent brain continues to develop towards an adult-like form well into the twenties, which parallels societal changes in the protraction of adolescence into the twenties.

In the mid-1800s, adolescence—bookended by menarche (first period) and marriage—lasted about five years. In 2010, it was about fifteen years and by 2020 Steinberg suggests that it may be as many as twenty years. Obesity, low birth weights, and exposure to light, stress, and certain endocrine-disrupting chemicals have all hastened the arrival of adolescence. Steinberg presents evidence that maturing too early can increase the risk of problems like teenage pregnancy, contraction of STDs, psychological disorders, school disengagement, and even cancer. However, Steinberg argues that contrary to the media’s messages about lazy and self-indulgent 20-somethings who are unwilling to commit to a career or marriage, extending adolescence on the older end can actually improve social and cognitive development if the time is used productively to experience novelty.

One downside of prolonged adolescence is that it expands the time in which people are prone to take unreasonable risks. Connections between the brain’s limbic system, which plays a role in emotion regulation, and the prefrontal cortex, which is responsible for inhibition, are slow to develop. Also, adults make decisions about riskiness by relying on parts of the brain that control the gut (i.e., they have a “gut reaction” to risky events), whereas adolescents rely more on areas associated with deliberative decision-making. Adolescents’ brains also respond more strongly to the pleasure of rewards than adults’ brains do. Together this helps explain why, across the globe, we see a pattern of adolescents taking more risks, seeking rewards, being less deterred by losses, behaving impulsively, and acting more violently than people at any other developmental stage. This developmental pattern was once evolutionarily adaptive for tasks such as finding mates outside of one’s family. Nowadays, to reduce adverse adolescent behaviors, Steinberg argues that we ought to create an environment with more adult supervision to help adolescents regulate their behavior so as not to be harmed by their own risk-taking.

In particular, adolescents from economically disadvantaged backgrounds need these supportive structures. Low-SES adolescents typically have less “psychological and neurobiological capital.” Steinberg defines psychological capital as noncognitive skills (e.g., self-regulation) important for success and neurobiological capital as advantages procured from a protracted period of brain plasticity. Self-control, a skill critical for success, can be developed with training in mindfulness, consistent aerobic exercise, and interventions aimed at boosting working memory.

Parents can support productive adolescent development by adopting an authoritative style in which they show their child equal parts warmth (e.g., tender touches, emotional understanding), firmness (e.g., clear expectations, fair punishments), and support (e.g., praising effort). Educators can support authoritative parenting, make school more challenging and academically engaging, and teach self-regulation. Policy makers should continue their work to reconceptualize adolescent health education, driving restrictions, and criminal punishment. The last several decades have brought legislation geared towards helping people survive adolescence. Taking into account new research about the adolescent brain and the changing cultural construction of adolescence can usher in new policies and practices geared towards helping students thrive.

Steinberg, L. (2014). Age of Opportunity: Lessons from the New Science of Adolescence. Houghton Mifflin Harcourt.

Rebecca Gotlieb

In her 2014 book, Neurobiology and the Development of Human Morality: Evolution, Culture, and Wisdom, Darcia Narvaez aims to increase virtuous morality, empathy, and cooperativeness among adults. Anyone interested in understanding the evolutionary, biological, and social bases of morality or seeking to improve ethical behavior can learn a great deal from this book.

Narvaez argues that our sense of morality, or how we function as social beings sharing with one another life’s victories and challenges, is shaped by the integration of our physical, mental, and social experiences as well as by our evolutionary history. Modern times have drawn us away from some of our intuitive wisdom about how to create communal societies and have led to a shrinkage of our moral and emotional capacities. This trend can be reversed, she argues. Both our culture and our interpretation of experiences can be altered to increase empathic concern and communal orientations, which will help create societies in which all life (humans and other natural creatures) flourishes.

Narvaez begins by reporting on the decline in social interaction, health, and ethical decision-making in the U.S. She believes that humans are dynamic, but early exposure to suboptimal learning environments affects our physiology, brain development, and epigenetics. Changes at these levels affect a person’s moral reasoning and ability to act consistently in a virtuous way. Citing Darwin, Narvaez argues that we evolved to be cooperative, connected, moral beings. Only recently have we abandoned the wisdom our small-band hunter-gatherer (SBHG) forebears had about acting communally. In fact, when people (and even other animal species) are raised in supportive conditions, they tend to cooperate with one another. There are cultural differences in the emphasis placed on competition and cooperation, but cooperation is generally more adaptive.

In our early years, Narvaez asserts, we develop an “empathic core” that impacts both our understanding of ourselves as moral beings and our socio-emotional imaginative abilities. Only with responsive parenting do these systems develop properly such that we have emotional and cognitive understanding of social dynamics. When raised by inattentive parents or in a dangerous environment, our social capacities may underdevelop, and our stress response may become hyperactive. For example, we all have a “safety ethic” that helps us navigate relational stress, but among individuals who experienced early social trauma, the safety ethic may lead people to make decisions based on their own preservation rather than on maximizing group success.

The influence of early social experiences is observable not only behaviorally but also in people’s brains and physiology. Narvaez states that the right hemisphere, more than the left, is associated with emotional and moral processes like affective empathy, interpretation of social relations, emotional modulation, and stress regulation. The frontal lobe as a whole, and especially the orbitofrontal cortex (the primary projection of the emotional limbic system), are also critical. They aid with processing affect, making moral decisions, and imagining moral paths. When these systems are well-formed, they keep us regulated and integrate cognition with the feelings that guide our morality. When these structures are underdeveloped because of early stress, or damaged because of injury, people may display a lack of compassion, love, and respect, as well as signs of sociopathy or other forms of maladaptive social coping.

Narvaez reviews the traditional moral wisdom in Ancient Greek, Abrahamic, Buddhist, and Native American traditions as well as “primal wisdom” from SBHG societies. Primal wisdom can teach us to view the self, and not just society, as communal, expansive and integrative. We are each equal partners with the rest of the world around us. A return to this view of oneness and reciprocity might help arrest some of the moral decline in the western world that framed Narvaez’s investigation of modern human morality. A more holistic moral orientation would help us also to raise wiser children.

Narvaez observes that across religions and cultures the most important virtues are humility, love and authenticity. However, even if an individual is not born into a society or culture that facilitates development of these qualities, that person can (and should) still change herself to have a fuller moral imagination that seeks to facilitate communal thriving and a reverence for nature.

 

Narváez, D. (2014). Neurobiology and the Development of Human Morality: Evolution, Culture, and Wisdom. New York: W.W. Norton & Company.

Rebecca Gotlieb

“I think, therefore I can change what I am.” Walter Mischel, a Columbia University psychology professor renowned for his research on self-control, concludes his 2014 book, The Marshmallow Test: Mastering Self-Control, with this modification of Descartes’ famous proposition. Mischel, the creator of the “marshmallow test,” argues that self-control and the ability to delay gratification are critical for long-term health and for social and professional success. These skills are detectable at an early age, responsive to training, and able to help us shape who we are. His admission of his personal self-control shortcomings (e.g., at one point he smoked more than three packs of cigarettes a day while aware of the adverse health effects) and the strategies he used to exercise self-control illuminate his presentation of the field of self-control research.

The marshmallow test (formally known as “the preschool self-imposed delay of immediate gratification for the sake of delayed but more valued rewards paradigm”) exists in many iterations, but the basic set-up begins with a researcher asking a preschool child (age 3 or 4) to select a tasty treat. The researcher leads the child into a room with a one-way mirror in which there are no toys or colorful distractions; there is only a chair and a desk with the tasty treat and a bell atop it. The researcher explains that the child can ring the bell at any time to bring the researcher back into the room and eat the treat, or, if the child waits until the researcher returns on her own, the child can have two of the tasty treats. The difficulty of the task arises from the tension between our “hot system,” which acts quickly and reflexively, and our “cool system,” which acts slowly and reflectively.

Mischel and his colleagues found, across several cultural settings, individual differences in children’s likelihood of delaying. Decades later, brain imaging of those who had delayed immediate gratification as children, compared to those who had not, revealed greater activity for the delayers in the brain’s prefrontal cortex, an area associated with impulse control. Mischel is careful to frame these results by noting that categorizing people as high or low delayers, as though self-control were a stable and universal quality, is inaccurate. First, self-control is context-specific. For example, politicians (e.g., Bill Clinton) famously exert extreme self-control to be disciplined decision makers in their professional lives, yet show an enormous lack of self-control in their private lives. Second, Mischel emphasizes that self-control abilities are changeable. Both nature and nurture play a role in determining self-control ability.

Several specific strategies can promote self-control. Mischel and his colleagues found that children were more or less likely to eat the marshmallow depending on how and what they thought about during the waiting period. For example, children encouraged to think about how delicious the marshmallow would taste waited a shorter time than children encouraged to think about the marshmallow abstractly or to imagine it as something else, like a cloud. By about age 5 or 6, children realize that obscuring the reward from view may help them delay. One trick that helped Mischel quit smoking was to associate cigarettes with the prospect of developing cancer and with a haunting encounter he had with a man about to undergo radiation treatment.

Mischel encourages the use of “if-then” plans, in which people recognize that if they are confronted by a trigger of the behavior they are trying to control, they will engage in a specific, more constructive behavior instead. To help people self-regulate when recalling emotionally charged events (like the end of a romantic relationship), Mischel says that if they recall the situation from an objective, fly-on-the-wall perspective, rather than recalling themselves as an actor, they are likely to be more level-headed. Mischel also reports on research suggesting that individuals who view their current selves as closely related to their future selves save more for retirement.

To promote self-control in children, parents should try to minimize the stress their kids experience, teach them that choices have consequences, encourage autonomy rather than controlling decision-making processes, and (perhaps most critically) model the type of self-control they would like their kids to exert. Executive function (the cognitive skill that allows us to assert self-control over our thoughts, actions, and emotions) is critical for students’ success. Mischel argues that there is no ambiguity about the need to promote executive function skills in school. He offers KIPP charter schools, which emphasize character development and college-going, as a model for how schools can help students (including economically disadvantaged students) learn these skills. With practice and the techniques that Mischel describes, we can resist the marshmallow and work towards becoming a better version of ourselves.

 

Mischel, W. (2014). The Marshmallow Test: Mastering Self-Control. New York: Little, Brown and Company.