Myra Laldin

Lollipop

I went to a school in the foothills of the Himalayas in Pakistan. The school consisted mostly of Western children of aid workers, which meant that for the majority of my school years my family members were the only students of color. The school followed a U.S. curriculum, with bits of the British system interspersed throughout. Although I was not fully aware of it at the time, I look back at my early years in school and realize how often I was confused, and how much slower I was than my peers at grasping what was going on in a lesson.

I remember sitting in math class trying to figure out a word problem about chipmunks. We were supposed to be counting their acorns, but I found myself trying to figure out what the heck a chipmunk was to begin with. In English class I would listen to a British story about “A Day at School” with some strange “lollipop lady*” who would stand on the road and hold up a giant lollipop sign. Forget the point of the story: what is a lollipop lady? Try reading “The Magic School Bus” to a young girl from rural Pakistan. I was mesmerized. While all the other kids moved on to the amazing adventure of riding down the tongue and inside the human body, I was stuck on the cool yellow bus. Years later, when I first came to the U.S., I took a picture of a yellow school bus and sent it to my siblings with the caption, “the magic school buses!”

As I observe students in our beautiful, multicultural classrooms here in the U.S., these memories of life in an international school come back to me. When I see students struggling because they can’t quite grasp the cultural nuances, I’m reminded of the out-of-place chipmunks, lollipops, and big yellow buses of my childhood. In many ways those are the only things I remember about the lessons at my little school beneath the Himalayas. It wasn’t until I began studying educational neuroscience that I was finally able to put words to what was happening. How did those cross-cultural experiences affect my learning in those early years? How many things was my working memory juggling at once? I realize now, not only was I carrying the same “cognitive load” as other students; I was carrying a “cultural load” as well. Unpacking these important ideas will help us all become better learners and better educators.

How does working memory work?

Working memory is what we use to hold on to information in the short term while retaining the ability to use and manipulate it. For example, if I give you a problem:

6 x 2 = ?

You can keep the two numbers in your head easily and at the same time figure out the answer.

Simple, right?

Cognitive load theory centers on the idea that there is a limited amount of information our brains can take in and successfully process at a given time. The more data we send to our brains, the more “processing capacity” is used.

But what if I give you this problem instead:

2 x 3 x 5 x 7 x 9 x 2 = ?

This problem, by contrast, is a real test of your working memory. You most likely will not be able to hold these numbers in your head like you could with the 6 and 2. Instead, you have to keep going back and looking at the problem. For the average person, this problem requires greater processing capacity to figure out. Simply put, cognitive load is the amount of mental effort being used in working memory.
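
To see where that extra effort comes from, it helps to write out the intermediate results you have to hold in mind. Here is a minimal sketch in Python (the code is mine, purely illustrative; the numbers are the ones from the problem above):

    # Working through 2 x 3 x 5 x 7 x 9 x 2 one step at a time.
    # Each step produces a new intermediate product that working memory
    # must hold while also tracking which factors remain.
    factors = [2, 3, 5, 7, 9, 2]

    product = 1
    for f in factors:
        product *= f
        print(f"x {f} -> running total: {product}")

    # Running totals: 2, 6, 30, 210, 1890, 3780.
    # Five intermediate values to juggle, versus a single step for 6 x 2.

Each of those running totals is one more item competing for space in working memory, which is why the longer problem feels so much heavier than 6 x 2.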

Of course, like most things in life, people differ in their processing capacities. This is probably most obvious in the contrast between experts and novices. Experts have greater knowledge of and familiarity with their area of expertise, so a task in that domain imposes less of a cognitive load on them than on a novice attempting the same task. The expert doesn’t have to spend as much time getting to know the problem. You can also see this in children, who naturally have fewer points of reference (knowledge) than adults for how things work. They have fewer pegs in their minds to hang ideas on or build concepts from. Therefore, children experience a greater cognitive load than adults when trying to perform an activity or understand a new concept. There’s a lot more work for children to do to reach the same result.

It ultimately comes down to having a point of reference related to the new information being taught. Studies show that being able to relate to new information matters for all students, which suggests that lesson plans designed to connect to students’ “real worlds” are more effective than abstract lessons. When students lack the background knowledge needed to perform a task or comprehend a concept, they experience greater cognitive load. The good news is that this is normal. All of us experience varying degrees of cognitive load when learning; if we didn’t, we would never learn! The better question to ask is: when does this become a problem in our classrooms?

What happens when you overload working memory before you get to the point?

When I was an elementary school student, my inability to visualize a chipmunk made it more difficult for me to spend energy on the math aspect of the word problem. It added unnecessarily to the cognitive load. Lacking a point of reference for the “lollipop lady” taxed my cognitive load the same way as I tried to understand the story. My working memory was trying to hold on to the words “lollipop lady” as well as the rest of the words in order to make sense of the story. Somewhere, my brain was also trying to understand what a lollipop lady was in the first place, not to mention why she was standing in the middle of the road! When this extra cognitive load comes from foreign or cultural references, we call it cultural load.

Cultural load is the amount of culture-specific knowledge required to understand or perform a task (like figuring out a math problem or understanding a story). The concept of cultural load has become evident in a range of studies that evaluate the role of cultural “frame[s] of reference” in student performance. Growing understanding of cultural load has inspired calls for less culturally biased tests.

Studies suggest that using cultural knowledge and experiences that directly relate to our students can avoid some of this negative cognitive and cultural overload. Researchers have also found that academic success increases when students can take ownership of their learning. Regrettably, the concept of cultural load is often overlooked in classrooms.

What happens when we don’t consider the role of cultural load in the classroom?

One 1998 report showed that a disproportionate number of students from multicultural backgrounds may be inappropriately placed in special education classes. Some case studies describe children labeled mentally disabled even though, in many cases, testing revealed they were functioning at a normal intellectual level. Sadly, it wasn’t until years later that schools began to realize their mistaken labels. This research reinforces how susceptible tests are to cultural and social bias, and raises the question: If it’s happening in our tests and we don’t know it, what might be happening in our classrooms? Of course, an array of factors not discussed here also plays into why culturally and linguistically diverse populations have a higher percentage of students in special education. That being said, the research aligns with the experiences of many students just like me: we must acknowledge the need for more culturally aware and accepting classrooms.

*Lollipop Lady – noun. British informal. A woman who is employed to help children cross the road safely near a school by holding up a circular sign on a pole to stop the traffic. Not a woman with a giant lollipop.

If you didn’t know that, you may have experienced some “cultural load” first-hand with that cultural reference 😉

References & Further Reading

  1. Artiles, A. J., & Zamora-Duran, G. (1997). Reducing disproportionate representation of culturally and linguistically diverse students in special and gifted education. Reston, VA: The Council for Exceptional Children. [Book]
  2. Benson, E. (2003). Intelligence across cultures: Research in Africa, Asia, and Latin America is showing how culture and intelligence interact. Monitor on Psychology, 34(2), 56. [Paper]
  3. Campbell, T., Dollaghan, C., Needleman, H., & Janosky, J. (1997). Reducing bias in language assessment: Processing-dependent measures. Journal of Speech and Hearing Research, 40, 519-525. [Paper]
  4. Feger, M. (2006). “I want to read”: How culturally relevant texts increase student engagement in reading. Multicultural Education, 13(3), 18. [Paper]
  5. Jordan, C. (1985). Translating Culture: From ethnographic information to educational program. Anthropology & Education Quarterly, 16(2), 105–123. [Paper]
  6. McClafferty, K., Torres, C. & Mitchell, T. (Eds.) (2000). Challenges of urban education: sociological perspectives for the next century. Albany, NY: SUNY Press. [Book]
  7. Meyer, L. (2000). Barriers to meaningful instruction for English learners. Theory into Practice. Accessed through Wilson Web on-line database on Sept 23, 2015. [Article]

Rose Hendricks

Language Nutrition

We’re told that a picture is worth a thousand words, but this adage robs words of much-deserved credit. When you’re an infant with a rapidly developing brain, words are one of the most valuable things you can receive. They’re so valuable that a new initiative in Georgia called “Talk With Me Baby” promotes the importance of “language nutrition”. When it comes to language, infants are sponges: essentially every baby growing up in a normal environment masters the complex language system he or she’s exposed to. It helps that the adults around them hold up objects and emphatically enunciate their names, saying something like “ba-na-na” while waving the fruit in the child’s face, but that’s not the only way babies learn. Infants are constantly immersed in linguistic environments that are full of people expressing real and complicated thoughts through varied sentence structures. This provides the rich experience that children need to rapidly become fluent speakers. If you’ve ever tried to learn a new language in your teens or later, you know that this sponge-like capacity doesn’t last forever. Talk With Me Baby makes no bones about why it wants to increase the amount of language that babies are exposed to: hearing more words in infancy promotes stronger language skills, which in turn form a foundation for academic and other successes throughout life.

Why are words so crucial during infancy?

The idea that there’s a sacred window of time in which language can be learned – referred to as a critical period – was first articulated in 1959 by Penfield and Roberts1, but it’s still a widely researched and debated topic. There isn’t yet a consensus on whether attempting to learn (a first) language after the critical period is futile or just more difficult than learning it earlier, and if there is a critical period, researchers still debate exactly when that period falls. Since intentionally raising a child without linguistic input (exposure to language) would be unethical, much of the support for the critical period hypothesis comes from tragic cases of children who grew up in abnormal environments that lacked language. Genie is a classic example: a girl who spent her entire childhood locked in a room without any stimulation or proper nourishment until she was discovered at 13 years old. At that time, researchers tried to provide her with therapy for her physical and cognitive abnormalities. Although she seemed able to learn a limited vocabulary, most scholars claim that Genie never truly learned language: she could not use grammar to put words together in a meaningful way. Although her case suggests the importance of receiving linguistic input during the critical period, it’s unclear whether Genie was disadvantaged from the start – her father claimed that he locked her up because she was cognitively disabled – and there are many other aspects of Genie’s deprived childhood that could have contributed to her inability to learn language at 13.

There are a few characteristics of the developing brain that speak to why we might be better at absorbing language as babies than as adults. For one, a critical period is not unique to language. Other biological processes also have their own critical periods2. Some of these periods have been demonstrated most clearly in animals deprived of specific sensory stimuli. For example, Hubel and Wiesel studied a cat whose eye was sewn closed as a kitten. When they removed the stitches, the cat was still unable to see out of the previously deprived eye. During the deprivation period, the visual cortex became dominated by the normal, unobstructed eye, which hijacked the brain space typically devoted to the second eye.

The cat’s visual cortex demonstrates a crucial feature of the brain: its plasticity. Neuroplasticity refers to the brain’s ability to reorganize itself based on the inputs it receives. Our brains are constantly reorganizing themselves (that is how we learn anything), but infants’ brains are especially plastic3. Developing brains are highly sensitive to incoming information and experiences, allowing them to learn massive amounts of information rapidly.

Perhaps counterintuitively, another explanation for why immature brains are ripe for language learning is that their prefrontal cortex (PFC) – the area of the brain most associated with higher-level and rational thinking – is undeveloped4. A paper by Sharon Thompson-Schill, Michael Ramscar, and Evangelina Chrysikou gives an example of watching a football game to highlight how adults’ and toddlers’ pattern-learning strategies differ. In the example game, you notice that the team passes the ball 75% of the time and runs with it the other 25%. Your task is to predict what the team will do in subsequent plays. As an adult, you’re likely to match probabilities: 75% of the time you’ll guess that the team will pass, and the other 25% you’ll guess that it’ll run. You’ve taken the less frequent event (the run plays) into account. However, since you don’t know when those rare events will occur, the optimal strategy would actually be to always guess that the team will pass. That’s precisely what a toddler would do. Toddlers ignore irregularities and grasp conventions quickly, at least partially thanks to their undeveloped PFCs. Thompson-Schill and colleagues argue that toddlers’ tendency to ignore inconsistencies might be ideal for learning the foundations of language, especially the syntactic patterns that govern our grammar. Toddlers eventually discover and master their language’s irregularities, moving from forms like “drinked” to “drank” as their PFCs develop and help them filter exceptions to rules.
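
A quick simulation makes the gap between the two strategies concrete. This is a minimal sketch (the Python is mine; the 75/25 split comes from the example above, and the seed and play count are arbitrary):

    import random

    random.seed(42)  # fixed seed so the demo is reproducible

    P_PASS = 0.75    # the team passes 75% of the time, runs 25%
    N_PLAYS = 10_000

    plays = ["pass" if random.random() < P_PASS else "run"
             for _ in range(N_PLAYS)]

    # Adult strategy: probability matching -- guess "pass" 75% of the time.
    matching = sum(("pass" if random.random() < P_PASS else "run") == play
                   for play in plays)

    # Toddler strategy: maximizing -- always guess the more frequent play.
    maximizing = sum(play == "pass" for play in plays)

    print(f"probability matching: {matching / N_PLAYS:.3f} correct")   # ~0.625
    print(f"always guess pass:    {maximizing / N_PLAYS:.3f} correct")  # ~0.750

Probability matching is right about 0.75 x 0.75 + 0.25 x 0.25 = 62.5% of the time, while the toddler’s simpler always-pass rule is right about 75% of the time: ignoring the irregularities really is the better strategy here.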

Infants’ and toddlers’ brains are ready and waiting for linguistic input. This input allows their brains to develop new neural pathways in response to the language conventions they’re exposed to. As they get older and continue to use language more (whether they’re listening, speaking, reading, or writing), these pathways continue to strengthen. Talk With Me Baby asserts that “early language exposure is the single strongest predictor of third grade reading proficiency,” and that third grade reading proficiency, in turn, predicts further academic and economic successes. This is because third grade is when most kids transition from learning to read to reading to learn. In this way, linguistic exposure as an infant has cascading effects that last long after infancy. Just as proper nutrition promotes physical growth and is crucial for babies’ future health, proper linguistic nutrition promotes the mental growth necessary for future success.

The 30 Million Word Gap

It’s almost impossible for a baby to grow up without any exposure to language, but many children still grow up in environments that lack sufficient language exposure. In one seminal study, researchers found that the number of words addressed to children differed dramatically across families of different socioeconomic statuses (SES)5. SES is a measure that combines income, occupation, and education to reflect a family’s economic and social position in society. Children from families in the highest SES category heard an average of 2,153 words per hour, while those in the lowest SES group heard only 613 words per hour. From these numbers, the researchers calculated that by 4 years old, the average child from a higher-income family hears a total of about 45 million words, while the average child from a low-income family hears a measly 13 million words. The authors referred to this disparity as the 30 Million Word Gap. The gap may result, at least in part, from the fact that parents who are struggling financially are often unable to devote the same amount of focused time to their children that parents with fewer financial struggles can6. Reduced linguistic input is one consequence of the quality-time deficit that lower-SES kids often experience.
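
The totals follow from straightforward arithmetic. Here is a back-of-the-envelope sketch (the words-per-hour figures are from the study; the assumption of roughly 14 waking hours a day is mine, purely for illustration):

    # Rough reconstruction of the word-gap arithmetic.
    WAKING_HOURS_PER_DAY = 14   # illustrative assumption
    DAYS_PER_YEAR = 365
    YEARS = 4

    for label, words_per_hour in [("higher SES", 2153), ("lower SES", 613)]:
        total = words_per_hour * WAKING_HOURS_PER_DAY * DAYS_PER_YEAR * YEARS
        print(f"{label}: ~{total / 1e6:.0f} million words by age {YEARS}")

    # higher SES: ~44 million words by age 4
    # lower SES:  ~13 million words by age 4

Under those assumptions the two trajectories land close to the study’s 45-million and 13-million figures, and the difference between them is the roughly 30 million words of the gap’s name.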

If a child from a low-income family enters school at age 4 after hearing 30,000,000 fewer words than his or her classmates have, this child will immediately be at an immense disadvantage. Because learning is sequential, in the sense that many concepts build on each other, the child on the disadvantaged side of the word gap will have difficulty learning new information that requires an understanding of language. As a result of missing out on valuable linguistic input as a baby, this student may never catch up.

Talk With Me Baby

The state of Georgia has launched an effort to close the 30 Million Word Gap7, acknowledging that “a lack of early language exposure has lifelong consequences,” like dropping out of high school, incarceration, becoming a teen parent, involvement in violence, unemployment, and poverty8. Their initiative, Talk With Me Baby, is being implemented mainly by spreading awareness. Because the word gap can have future physical health consequences and because almost all babies are seen in hospitals, nurses in particular are helping spread the message that babies are listening, even before they’re born. They’re absorbing what they hear, so they should hear as much language as possible. The website for Talk With Me Baby also advertises an app that parents will soon be able to download with features like topics to talk about, milestones to look for, reminders to talk, and resources.

Raising a child is complicated. It can be hard to know what to feed your child, when to do it, and even how to afford the ideal nutrition. Luckily, providing babies with proper linguistic nutrition is fairly straightforward and accessible to all. What words should you feed your child? As many as you can! When should you feed your child his or her words? Whenever you can! Ideally, babies should hear not only as many different words as possible, but they should also hear as many different sentence structures as possible. Long sentences are the linguistic equivalent of milk: consuming them helps children’s cognitive foundations get strong enough to support all of the lessons and skills that they’ll learn in school. Perhaps best of all, words are free and we can all make them, which means closing the 30 Million Word Gap is within our reach.

References & Further Reading

  1. Penfield, W., & Roberts, L. (1959). Speech and brain-mechanisms. Princeton, N.J: Princeton Univ. Press. [Book]
  2. Sengpiel, F. (2007). The critical period. Current Biology, 17(17), R742-R743. [Paper]
  3. Mundkur, N. (2005). Neuroplasticity in children. Indian Journal of Pediatrics, 72(10), 855-857. [Paper]
  4. Thompson-Schill, S., Ramscar, M., & Chrysikou, E. (2009). Cognition without control. Current Directions in Psychological Science, 18(5), 259-263. [Paper]
  5. Hart, B. & Risley, T. (2003). The early catastrophe: 30 Million word gap by age 3. American Educator, Spring 2003, 4-9. [Paper]
  6. National Journal. (2015). 30-million word gap divides rich and poor kids. [Web Article]
  7. Deruy, E. (2015). Why boosting poor children’s vocabulary is important for public health. The Atlantic. [Web Article]
  8. Talk with Me Baby [Educational Initiative]

Kathryn Mills

Teenage Brain

Adolescence is the period between childhood and adulthood. And though it can stretch into our early twenties, we spend many of these years in high school. This stage of life is marked by increased cognitive abilities, social sensitivity, and agency (or increasing independence). These changes make this time particularly perplexing to some adults, as they struggle to make sense of stereotypical adolescent behaviors such as risk taking and increased allegiance to peers.

At the end of the 20th century, it was common to discuss adolescent behavior as being influenced by “raging hormones.” Today, it is becoming increasingly common to discuss adolescent behavior in terms of the “teenage brain.” But what makes the teenage brain different from the child or adult brain? And do these differences have implications for education and learning? This post discusses the latest research in adolescent brain development and how the current evidence might inform education during the teenage years, outlining three of the most interesting things neuroscience has taught us about the physical changes that take place in the brain during adolescence.

1. The brain continues to change throughout adolescence.

Perhaps the most important consideration to keep in mind regarding the brain during adolescence is that it is continuing to change. There is evidence for this from multiple lines of research, including cellular work on post-mortem human brain tissue1, as well as longitudinal magnetic resonance imaging (MRI) studies of brain structure and function.

What do we mean by “physically change”?
With MRI, we have the ability to see how the living human brain changes from birth to old age by taking different kinds of pictures. One kind of picture captures the structure – or anatomy – of the brain, and we can use it to look at two components of the brain’s structure: grey matter, which is largely made up of brain cell bodies and their connections, and white matter, which consists primarily of the long connecting fibers that carry signals between brain regions. White matter gets its color from myelin, a fatty substance that wraps around the connecting fibers to make communication more efficient.

There have now been several studies in which hundreds of participants had their brains scanned multiple times across development, and we know from these studies that the amount of grey matter is greatest during childhood, but decreases during adolescence before roughly stabilizing in the mid- to late twenties2. We also know that the amount of white matter increases almost linearly across adolescence3. These are two major changes happening in the structure of our brain during adolescence.

2. The brain doesn’t all change at once.

Structural changes are not occurring at the same time across the whole brain. In fact, areas of the brain that are involved in basic sensory processing or movement develop earlier than areas of the brain involved in more complex processes such as inhibiting inappropriate behavior, planning for the future, and understanding other people. These and other complex processes rely on areas in the prefrontal, temporal, and parietal cortices, which are continuing to change in structure across the second decade of life4.

How do these changes happen?
We still do not know the specific cellular mechanisms that underlie developmental changes in measures of grey or white matter. It is often thought that these decreases in grey matter reflect, at least in part, changes in connectivity between brain cells. These changes include decreases in dendritic spine density (basically a proxy for how interconnected cell bodies in the grey matter are) and other cellular processes involved in synaptic pruning (the process by which connections in the brain are eliminated). Histological work, which involves studying cells under the microscope, has given us a better understanding of the cellular changes occurring in the human brain across the lifespan.

In one specific study, researchers at the Croatian Institute for Brain Research counted the number of dendritic spines in an area of the prefrontal cortex1. They found that the number of spines continued to decrease across the second and third decades of life. This finding gives some cellular evidence for the continued structural development of the human brain across adolescence, at least in one section of the prefrontal cortex.

Is this a bad thing?
Not necessarily. The continued reduction in synapses seen in the prefrontal cortex means that the brain is still undergoing changes in organization during adolescence. As humans, we have an excess amount of brain connections when we are children, and almost half of these connections can be lost in adolescence. We know that experience influences what connections are kept and subsequently strengthened. Thus we can think of adolescence as a time of transition rather than a time of loss in certain areas of the brain.

3. The brain is changing in more ways than one.

MRI can also be used to see how blood flows in the brain, which allows researchers to get a sense of how the brain is working. So if MRI alone reveals brain structure, you can think of fMRI (or “functional MRI”) as revealing brain function. Many fMRI studies have also shown changes in brain function across adolescence. For example, how we use areas of the brain involved in understanding other people changes between adolescence and adulthood6.

This is especially true for “the social brain”.
There are a number of cognitive processes that are involved in interacting with and understanding other people, and we can use functional MRI to see what areas of the brain are active when we engage in important social tasks like understanding the intentions or emotions behind facial expressions or understanding social emotions like guilt or embarrassment. Tasks like these consistently recruit a number of brain regions in the prefrontal and temporal cortex, which is sometimes referred to as the “social brain.”

Although adolescents and adults use the same areas of the brain during a number of social tasks like understanding intentions and social emotions, these tasks all show a similar decrease in activity across age in the medial prefrontal cortex, a part of the brain often related to social processing. Adolescents seem to use this part of the prefrontal cortex more than adults when doing certain social tasks7.

So what does it all mean?
What is the point in highlighting these biological changes if we cannot connect them to real world behavior? In this post, I discussed how the brain is changing in both its structure and function during adolescence, highlighting in particular the changes involved in areas of the brain used when we attempt to understand the thoughts, intentions and feelings of other people. These changes are relevant because of the developmental tasks that adolescents must accomplish. One of the major developmental tasks of adolescence is to learn how to successfully navigate our highly social world. Having a malleable brain during adolescence is arguably adaptive for this sort of task, as new social skills and higher level cultural rules can be acquired with greater ease. Thinking about how these changes may impact the way students interact with educational environments is also important – considering these environments are often just as social as they are learning-oriented. In the next post, I’ll discuss how the adolescent brain is not just primed to learn from the social environment, but also how it is particularly sensitive to complex social signals.

References & Further Reading

  1. Petanjek, Z., Judaš, M., Šimic, G., Rasin, M. R., Uylings, H. B. M., Rakic, P., & Kostovic, I. (2011). Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 108(32), 13281–13286. [Paper]
  2. Huttenlocher, P. R., & Dabholkar, A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. The Journal of Comparative Neurology, 387(2), 167–178. [Paper]
  3. Mills, K. L., & Tamnes, C. K. (2014). Methods and considerations for longitudinal structural brain imaging analysis across development. Developmental Cognitive Neuroscience, 9, 172–190. [Paper]
  4. Lebel, C., & Beaulieu, C. (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(30), 10937–10947. [Paper]
  5. Tamnes, C. K., Walhovd, K. B., Dale, A. M., Østby, Y., Grydeland, H., Richardson, G., … Fjell, A. M. (2013). Brain development and aging: Overlapping and unique patterns of change. NeuroImage, 68C, 63–74. [Paper]
  6. Blakemore, S.-J., & Mills, K. L. (2014). Is Adolescence a Sensitive Period for Sociocultural Processing? Annual Review of Psychology, 65(1), 187–207. [Paper]
  7. Blakemore, S.-J. (2008). The social brain in adolescence. Nature Reviews. Neuroscience, 9(4), 267–277. [Paper]

Andrew Watson

Remember Kid

When teachers say we want our students to learn, we might also say we want them to remember; after all, if I’ve learned something, I can remember it later on. Sadly and surprisingly, there’s a curious danger to remembering: remembering can cause you to forget.

Yes, you read that right. The wrong kind of remembering causes forgetting.

Imagine the following mental exercise—a mental exercise that resembles many research studies1:

To start, you study a list of words in four different groups—say, Animals (dog, cat), Instruments (guitar, violin), Foods (pizza, steak), and Furniture (sofa, table). After a while, you recall half of the words in two of the groups. For example, in the Animal group, you recall the word “dog” (but not “cat”), and in the Foods group, you recall the word “pizza” (but not “steak”). And you don’t recall any words in the Instrument or Furniture groups.

When I test you on all these words several hours later, there are three logical categories.

First, there are the two groups of words you didn’t recall at all: Instruments and Furniture. You’re likely to remember—perhaps—50% of those words.

Second, there are the words and groups you did recall: the word “dog” in the Animal group, or “pizza” in the Food group. Because you recalled these words, you’re likelier to remember them, so your score will be higher—say, 75%.

Third, there are words that you didn’t recall (“cat,” “steak”) even though you recalled other words in the Animal and Food groups.

Take a moment to ask yourself: what percentage of words in this third group are you likely to remember?
Perhaps—because you practiced their groups—you’ll remember them at the 75% level. Or perhaps—because you didn’t practice these specific words—you’ll remember them at the 50% level.

It turns out both answers are wrong. You’ll remember even fewer of those words: say, 40%.

Why? Because practicing some of the words in the Animal and Food categories makes it less likely you’ll remember the un-practiced words. In other words, recalling some of the words prompts you to forget the words you didn’t recall.

The wrong kind of remembering caused you to forget.
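
For readers who like to see the structure laid out explicitly, here is a minimal sketch of the three categories (the percentages are the illustrative ones from above; the Rp+/Rp-/Nrp labels are the shorthand commonly used in the retrieval-induced forgetting literature):

    # The three categories from the thought experiment above.
    # Rp+ : practiced words from practiced groups ("dog", "pizza")
    # Rp- : unpracticed words from practiced groups ("cat", "steak")
    # Nrp : words from groups that were never practiced (the baseline)
    expected_recall = {"Rp+": 0.75, "Nrp": 0.50, "Rp-": 0.40}

    study_list = {
        "Animals":     {"dog": "Rp+", "cat": "Rp-"},
        "Foods":       {"pizza": "Rp+", "steak": "Rp-"},
        "Instruments": {"guitar": "Nrp", "violin": "Nrp"},
        "Furniture":   {"sofa": "Nrp", "table": "Nrp"},
    }

    for group, words in study_list.items():
        for word, category in words.items():
            print(f"{group:11s} {word:7s} expected recall: {expected_recall[category]:.0%}")

The surprising row is Rp-: those unpracticed words from practiced groups fall below the 50% baseline rather than landing between the baseline and the practiced words.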

In the neuroscience community, there is an active debate about the mechanisms that cause “retrieval-induced forgetting.”2,3 And while that debate is fascinating, it doesn’t really help teachers answer our constant question: “what should teachers do in the classroom with this scientific information?”

I haven’t read any research that addresses this question directly. (More precisely: I don’t remember having read any research that answers it; perhaps I read it, and forgot the source.) But I think the potential dangers of retrieval-induced forgetting (often abbreviated RIF) should shape our practice in very specific ways—in particular, the way we review.

Here’s an example. In yesterday’s class, my students discussed the five ways that the French and Indian War laid the foundation for the American Revolutionary War. To begin today’s class, naturally, I ask my students what conclusions we reached. One student calls out: “The French and Indian War cost a lot of money, and the British government decided to tax the colonies to pay for it. Those taxes helped spark the revolution.” Exactly so. Another student adds to the list: “George Washington gained essential military training and a cross-colony reputation for bravery.” Because we’ve gone over these two key points from yesterday, I assume my students will be prompted to remember the other three. Confident in this assumption, I move on to today’s new topic…

But there’s a problem here. Yesterday, my students got a list of five key points; today, we began class by reviewing two of them. I hoped—in fact, assumed—that my two-item review would help them remember the other three points. However, if the RIF research is true, then my two-item review will in fact make it less likely that the students will remember the other three items. Because they practiced two of the examples in this group (“ways that one war set the stage for the next”), they are less likely to remember the un-practiced examples in that group.

When I first read this research, and started thinking about my own teaching practice, I realized with increasing alarm how often I review this way. If we studied ten vocabulary words yesterday, I’ll prompt students to recall two or three. If we looked at eight subject-verb agreement rules, I’ll ask them to jot down two and discuss them with a partner. Of course, teachers must help their students review the material they learn, but if the first review is incomplete, we may very well be reducing—not increasing—the long-term likelihood that our students remember all the information.

In my own teaching, the RIF research has led to this guideline: the first two or three times I go over a topic, I make sure to cover all of the material that is a) conceptually related and b) equally important:

  • “Conceptually related”: RIF results from partial review of conceptually related information only; it influences Animal and Food words, not Instrument and Furniture words.1 For this reason, I don’t need to review an entire lesson—just the logically connected pieces of it. When I go over five essentials for a strong topic sentence, I don’t also need to review the highlights of “Young Goodman Brown.” We discussed both topics on the same day, but our discussion of the short story was conceptually distinct from our discussion of effective writing.
  • “Equally important”: when we go over all five ways that the French and Indian War led to the Revolutionary War, I don’t need to go through the detailed specifics; they’re not as important as the main concept. If I think of my lesson plan in an outline, I should cover all (or none) of the points on the same level of that outline.

One final danger to consider: student-directed review might be especially prone to RIF. If students come up with their own list of key terms to remember, for example, their incomplete list might prompt them to forget the examples they didn’t include. As teachers, we need to find mechanisms to ensure that student-generated review covers all equally important information.

Of course, research into RIF continues, and we don’t yet completely understand how and why it happens. For teachers, the key point to keep in mind is this: whenever we prompt our students to review, we must be sure that RIF doesn’t cause them to forget what we want them to remember.

References & Further Reading

  1. Jonker, T. R., Seli, P., & MacLeod, C. M. (2012). Less we forget: Retrieval cues and release from retrieval-induced forgetting. Memory & Cognition, 40(8), 1236-1245. [Paper]
  2. Dobler, I. M., & Bäuml, K.-H. T. (2013). Retrieval-induced forgetting: Dynamic effects between retrieval and restudy trials when practice is mixed. Memory & Cognition, 41(4), 547-557. [Paper]
  3. Mall, J. T., & Morey, C. C. (2013). High working memory capacity predicts less retrieval induced forgetting. PLOS ONE, 8(9), e52806. [Paper]
  • Johansson, M. et al. (2007). When remembering causes forgetting: Electrophysiological correlates of retrieval-induced forgetting. Cerebral Cortex, 17(6), 1335-1341. [Paper]

Stephanie Fine Sasse

Research and Education

Anyone who has ever stood in front of a classroom silently praying that their curriculum is engaging, their students are comfortable, and their jokes don’t skip a generation can tell you: Teaching isn’t easy. It’s some secret blend of intuition, strategy, and deep breaths. Great teachers aren’t measured by how much they know about the brain any more than great artists are measured by how much they know about the reflective properties of light: knowing how to use it trumps knowing how it works.

So why do we think it’s so important to use research in the classroom? Why am I spending all my time hanging around the places where education and neuroscience overlap?

Well, put simply: I think it can help.

And I’m not the first. For generations, teachers have been drawn to learning more about the engine that runs the minds that they’re shaping. And researchers have believed in the power of knowledge to improve the way we teach. We have the instinct that the more we know about how things work, the better we’ll be able to control or optimize them. And for generations this has led to a somewhat rocky relationship between the researchers who can describe a student, and the teachers who can inspire one. Sometimes toes get stepped on, sometimes lines are crossed or miscommunications abound… and sometimes, it works. So I decided to take a look back at a few of the ways that research has influenced education, and what that can teach us about getting this important relationship on the right track.

(350 BC) Aristotle and the meeting of science and education.

Aristotle was the original evidence junkie, and arguably, one of the first people to view education through what can be thought of as an early iteration of a scientific lens. He was a pioneer of carefully evaluating claims through observation and reasoning and — by refusing to settle for assumptions — laid the groundwork for some of the greatest scientific discoveries. He even compared how constellations appear in the sky depending on your distance from the equator, providing physical evidence to corroborate Pythagoras’s claim that the Earth was, in fact, round (sorry, Columbus).

At the same time, he was a dedicated educator, and he applied his love of the measurable to his pedagogical beliefs. He started a school, which was built on his view that nature is best understood through structured evidence-seeking and reason. He was an early advocate for ideas that have since transformed into a slew of modern buzzwords; he believed experiential learning, educational equality, lifelong learning, and public access to education were essential components of an ethical society. His fact-forward approach to inquiry even snuck into his moral teachings through the concept of “phronesis”: a type of practical knowledge that maintains that being a good person requires taking a bit of a motivated, scientific approach to moral decision-making.

Aristotle believed in fundamental ways of knowing that informed both his investigation of nature and his approach to teaching. As a foreshadowing of professional-learning-communities-to-come, he believed that teaching itself ought to be informed by the collected knowledge of those who have taught. Basically, he’s not only one of the first scientists, but also one of the first advocates for the ways that what we know should directly influence what and how we teach.

Admittedly, Aristotle was a far cry from applying neuroscience to the classroom, but his proclivity for fusing analysis, evidence, action, and learning continues to shape the way we think about education today.

(19th-20th century) The empirically-informed standardization of learning.

We’ve all heard critics of the modern educational system summarize its shortcomings as an outdated “factory model of education”. They’re usually referencing things like standardized assessment, IQ testing, age grouping, and the one-size-fits-all approach. Most often, the blame is placed on the late-19th-century adoption of the Prussian-Industrial model in the US, with the claim that it prioritized conformity and efficiency above all else. Of course, it’s not really that simple.

The 19th century was a time of great flux for public education. Competing models were emerging to solve the most pervasive problems of the day: inconsistent content, unclear standards, and a lack of equal access. By the mid-19th century, the inaugural Secretary of the Massachusetts State Board of Education, Horace Mann, made it his mission to empirically evaluate existing domestic and international models and to lobby for the full implementation of the one that best suited the nation’s needs. Enter: the Prussian-Industrial Model.

This model was first crafted by King Frederick William I as a state-mandated program, arguably designed to cultivate an obedient and submissive public. Teachers were stripped of autonomy and held to strict standards uncommon to the “sage” or “mentor” models of the past. Sure, it was rigid, impersonal, and reeked of indoctrination, but it also ticked the boxes that mattered most to Mann. It was cost-effective, scalable, inclusive, and consistent, and it prioritized teacher training: features he argued were necessary for public education to thrive. If he leveraged this powerful system for good, he believed, society as a whole would benefit.

Of course, once this ball started rolling, it seems, Mann’s best intentions couldn’t restrain the inertial appeal of standardization or the emergent needs of the new system. As the Department of Education and similar bodies came into existence, the ambition to keep improving our approach and evaluating progress intensified. But the new model catalyzed major increases in student-to-teacher ratios; how do you properly evaluate groups of students that large? Policymakers wanted data. They wanted to know what was working, what wasn’t, and what different students knew. By the early 20th century, research on human development, learning, and memory aligned with the goals of evaluators. The complicated entangling of research and education was well on its way as psychologists and educational strategists were recruited to design blanket assessments for growing classrooms.

One of these tests, the Binet-Simon test, was created by French psychologist Alfred Binet to determine the mental capacity of students so that those with severe difficulties could be properly accommodated. That’s it. This whole mess of an IQ debate started with the earnest goal of capturing a snapshot of a particular child’s abilities and responding accordingly. Binet was clear: intelligence is diverse, complicated, and unlikely to remain static over the lifespan. Unfortunately, not everyone was listening.

Henry Herbert Goddard, a US psychologist, caught wind of the Binet-Simon test and translated it into English. He went on to promote its use as an intelligence assessment tool, going so far as to encourage the sterilization of those deemed “feeble-minded” by its measure. Stanford psychologist Lewis Terman, who adapted the test to create the Stanford-Binet version (now in its fifth edition), also believed that intelligence was an inherited and fixed trait. The result was a national effort to rank students, citizens, and immigrants against each other, with sometimes dire consequences.

By 1936, standardized testing had become such a popular way to quickly and consistently assess large groups of people that the first automatic test scanner was developed to make doing so even easier. Basically, in less than one hundred years, the goal of systematizing public education led to a series of (sometimes) reasonable next steps that eventually landed us with Scantrons.

(Present Day) Brain-Based Learning.

Selfies and Netflix consumption aside, it’s safe to assume that people haven’t fundamentally changed much since the days of Aristotle. We’re still susceptible to the same biases, assumptions, and miscommunications that we were in the 19th century. Of course, we have the added benefit of learning from everything that’s come before us. So the question is, how do we make sure that research is used wisely?

Well, for starters, we have to talk to each other. If someone had asked Binet before implementing his test, he would have likely clarified how to use it reasonably. If someone had asked experienced teachers before they assumed a single standard for quality, they would have likely clarified the value of adapting to your students. The problem is that information doesn’t exist in a vacuum and expert does not mean right. We’re all constantly interpreting research to match our own goals, experiences, and understanding of the world. We are much better at hearing what we want to hear than we are at listening to each other.

Which leads me to a question that a teacher asked me in one of my workshops: Exactly what type of learning is not brain-based?

Her point was that the premise is flawed, and the way this research is being shared is often flawed too. If we present neuroscientific research as a solution, or as information that lays the foundation for a “type” of learning, we miss the point — and the opportunity. Great teachers have navigated the inner workings of the brain for centuries without ever needing to know what was going on inside. It is highly unlikely that, just because we now have MRI machines and EEG, we will all of a sudden better understand how to teach a brain to learn. Like any good, long-term relationship, it all comes down to goals, expectations, and respect. Researchers and educators have to consider each other’s lenses and goals, and adjust their expectations accordingly.

And when it comes to neuroscience, there seems to be a bit of a communication breakdown. Some people are adamant that there’s no place for neuroscience in education; it’s too premature or the questions are just too different. Others believe that it’s the answer we’ve all been waiting for; the pixie dust that’s going to fix whatever we believe to be broken. Still others see a business opportunity; if we can package up the appeal of neuroscientific answers and cater them to educators’ needs in bite size chunks, we can make some serious dough and no one will be the wiser.

I find that reality is usually somewhere in-between.

Neuroscience is unlikely to create great teachers, great tests, great classrooms, or great curricula – that’s not its goal and that’s not something I expect anyone to bottle up any time soon – but it can inform the way we think about students and the nested communities that they’re a part of. It can teach us more about ourselves, how we interact with information, and how we interact with each other. It can be one of many tools we use to get this right. And frankly, that’s all we should ask of it.

When thinking about history, we first have to consider whether the actions that look foolish in retrospect were actually reasonable reactions to the problems of the day. Sometimes those reactions were successful, and sometimes they were not. By putting ourselves in their shoes, we can empathize with their mistakes, and more easily imagine ourselves making them in similar circumstances. Education and science are similar in that both can be a reflection of the society that supports them. Every solution has flaws, and often the solution to one problem ends up causing a whole slew of new ones. The power and perceived credibility of research were welcomed by a system that felt haphazard and disorganized. The problem is that the same research, viewed through different lenses, can have drastically different consequences. In the case of standardization, history reminds us to embrace the insights research may offer while also being wary of the agendas that may be shaping its use.

Similarly, neuroscience is valuable to education, so long as we understand its limits and the biases of those who are disseminating it. If we adopt it blindly, we run the risk of misallocating resources or creating more problems than we solve. Recent studies suggest that the majority of educators continue to believe neuromyths, and the problem is, they didn’t come up with those themselves. Someone else told them that students are left-brained or right-brained. Someone else told them that boys and girls are born with totally different brains, or that we use only 10% of our brains, or that Mozart will make you smarter. The list goes on and on, but the point is, as history has shown us, society (and in this case mass media and capitalism) will often shape the message. It’s up to us to find ways to make sure that the darker side of history doesn’t repeat itself.

It’s a pursuit that we’ll never really finish. Research is always in progress and education is always looking for ways to adapt to the needs of the day. The goal is to work towards the best ways to keep up, so that we can collectively take the next chapter of history into our own hands.

References & Further Reading

Aristotle

  • “Aristotle B.C. (384-322) – Education for a Common End”. StateUniversity.com. [Blog]
  • Back, S. (2002). The Aristotelian challenge to teacher education. History of Intellectual Culture, 2(1), 1-5. [Paper]
  • Curren, R.R. (2000) Aristotle on the necessity of public education. Rowman and Littlefield Publishers. [Book]
  • Korthagen, F. A. J., in cooperation with Kessels, J. P. A. M., Koster, B., Lagerwerf, B., & Wubbels, T. (2001). Linking practice and theory: The pedagogy of realistic teacher education. Mahwah, NJ: Lawrence Erlbaum Associates. [Book]
  • Popova, M. (n.d.) The art of practical wisdom: The psychology of how we use frames, categories, and storytelling to make sense of the world. BrainPickings.com [Blog]
  • Wanjek, C. (2011). Top 5 misconceptions about Columbus. LiveScience [Blog]

19th-20th Century Learning

  • Binet, A. (1905). New methods for the diagnosis of the intellectual level of subnormals. L’Année Psychologique, 12, 191-244. [Paper]
  • EdX. (2015). Saving schools: History and politics of U.S. Education. Harvard University [MOOC]
  • Fletcher, D. (2009). Brief history: Standardized testing. TIME [Article]
  • Meshchaninov, Y. (2012). The Prussian-Industrial history of public schooling. The New American Academy. [Report]
  • Noer, M., Khan, S. (2012). The history of education. Forbes. [Video]
  • Watters, A. (2015). The invented history of ‘The Factory Model of Education’. Hack Education [Blog]
  • Zenderland, L. (2001). Measuring minds: Henry Herbert Goddard and the origins of American intelligence testing. Cambridge University Press. [Book]

Brain-Based Learning

  • BrainFacts. (n.d.). Neuromyths. BrainFacts.org. [Resource]
  • Howard-Jones, P.A. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15(12), 1-8. [Paper]
  • Sukel, K. (2015). When the myth is the message: Neuromyths and education. Dana Foundation. [Briefing]
  • Sylvan, L.J. & Christodoulou, J.A. (2010). Understanding the role of neuroscience in brain based products: A guide for educators and consumers. Mind, Brain, and Education, 4(1), 1-7. [Paper]
  • Tardif, E., Doudin, P., Meylan, N. (2015). Neuromyths among teachers and student teachers. Mind, Brain, and Education, 9(1), 50-59. [Paper]
  • Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. [Paper]

landb

In Smart Thinking: Three Essential Keys to Solve Problems, Innovate, and Get Things Done, Art Markman draws on psychological and cognitive scientific principles to provide a general audience with techniques for changing mental habits, improving memory formation, and refining decision-making skills. Smart thinking, he argues, is based upon wise use of the information one possesses to pursue a goal; it is not raw intelligence or test-taking ability.

Effective habits are key to smart thinking. Repetition, environmental cues, and distinctive actions facilitate habit formation. Eliminating bad habits by relying on willpower is extremely taxing; rather, one should replace bad habits with good behaviors through changes in the environment. A “habit diary” can help a person track her progress toward habit change.

A person cannot process—let alone remember—all the information to which he is exposed, but he can use a few techniques to be strategic about what he will remember. For example, whether we are preparing ourselves to remember written or oral information or preparing others to remember information we will present, we can aid memory by providing a preview, sticking to three main points, and reviewing key information. Being mentally present and resisting the cultural habit of multitasking are also important for remembering. Markman asserts that we are more likely to remember information if it is meaningful and related to concepts we already know. It can be recalled most easily when we are in a state similar to the one we were in when we learned it originally. If, upon initially learning new information, we experience some “desirable difficulty,” we are more likely to retain that information, since we had to work to understand it.

We can bolster our ability to learn, remember, and innovate by asking the question “why” and answering this question when teaching others. It is important to ask oneself “why” questions, given that people overestimate the extent to which they understand a concept. In the spirit of learning, and with a friendly and non-accusatory disposition, people should ask others “why” when a speaker explains a new concept or uses new, unique terminology.

Effective decision-making is the third key component of smart thinking. Markman suggests his readers familiarize themselves with their decision-making style, or their “need for closure,” in deciding among options. Swift decision makers may need to take time to fully consider potential creative solutions and cool off before committing to a course of action; painstakingly deliberate decision makers should learn to commit to a solution and recognize the futility of generating endless options. Decision makers should ensure that they clearly understand the situation about which they need to make a decision, which may require recasting the problem in different terms. People should elicit help from others in identifying issues they may have overlooked. Analogies are a powerful way to structure people’s beliefs and projections about situations. Proverbs (and stories and jokes) are a pithy and effective way of drawing an analogy. Markman even suggests his readers study lists of proverbs to improve their understanding of the key relations in a situation. Diagrams and gestures can be a more effective way of expressing a problem or the steps to a solution than words alone.

Finally, in the interdependent culture in which most people will find themselves (including in the corporate world), an organization’s “smart thinking” is critical. People tend to adopt the goals and actions exhibited by those around them. Accordingly, organizations should help their members reflect on how they think, stretch them to learn, be encouraging of new ideas and questions, probe for deep explanations, discourage multitasking, and encourage an attitude of “we” not “I.”

In addition to improving habits, memory, and decision making, Markman scatters throughout the book “instantly smarter” tips that one can implement immediately to improve thinking. Among his suggestions are: get a good night’s sleep; listen to your emotional reactions when making decisions; if you do not know something important, then identify the people who would possess that information; and if you struggle to remember something, stop thinking about it and the solution may come to you.

With a clear structure and relatable examples, Markman provides easily digestible tips to improve our habits of mind and to execute Smart Thinking.