Interrupting Skilled Students
Andrew Watson

Here’s a sentence that won’t surprise you: practice typically makes us more skilled at the activity we’re practicing.

Here’s a sentence that might surprise you: practice makes us more vulnerable to mistakes after an interruption.

So, for example, if my students have just learned how to solve for three variables with three equations, then an interruption will have some effect on them when they get back to work.

If, however, they have spent some time getting familiar with the process of solving for three variables with three equations, then an interruption will distract them even more.

Said a different way: an interruption may distract your relatively advanced students more than your less advanced students.

Counter-intuitive?

My first response to this research finding was straightforward puzzlement. Why are experienced students more distractible than neophytes?

As I’ve thought more about this study, I’ve had an idea. If I’m experienced at a step-by-step activity, then I’m probably not paying full attention to each step as I go through the process. After all, my experience lets me work almost by rote. In this case, an interruption is quite a problem, because I wasn’t really focused on my place in the list of steps.

However, if I’m a newbie, I’m likely to be focusing quite keenly on each step, and so–after a distraction–am likelier to remember where I left off.

Teaching Implications

In the first place, this study by Altmann and Hambrick is the only one I know of that reaches this conclusion. Until their results are replicated, we ought to be interested in, but not obsessed by, their findings.

Second, we should note that relative expertise does have occasional disadvantages. We shouldn’t assume that our accomplished students won’t be fuddled by a classroom interruption–in fact, they might be more so than their still-struggling peers.

Third, I for one will be on the lookout for this pattern in my own work. In theory at least, I’m the expert in my classroom, and so I might be more discombobulated than my students by a distraction during a rote task.

Given this research, I now know to turn to my least confident students for a reminder of where we were.

A Tale of Two Analyses
Lindsay Clements

For researchers and research-readers alike, the data analysis portion of a study is many things: complex, exciting, frustrating, intriguing, and sometimes even befuddling.

And, analytics are always on the move. With each new study, researchers are developing increasingly intricate and elegant ways to make meaning of their data. At the same time, powerful statistical software, like SPSS or Stata, is continuously expanding its capability to process such sophisticated research designs.

Certainly, many long hours go into choosing a study’s analytic approach. Researchers must develop and refine their hypotheses; organize their data in such a way that statistical software can read it; and choose a statistical method (i.e., a mathematical approach) to test their research questions.

That last part about choosing a statistical method is where things can get tricky. In general, different statistical methods are not simply multiple ways of doing the same thing. Whereas something like a division problem may use different techniques (e.g., long division, trial-and-error) to get the same result, different statistical methods can analyze the same data yet produce differing, and even contradictory, results.

Differences in design: A little goes a long way

 Just as French philosopher Jean-Paul Sartre liked to say “we are our choices,” in many ways our research results are our choices, too.

A study conducted by Burchinal & Clarke-Stewart illustrates this well. [1] These authors noticed that two different research teams had analyzed the same longitudinal data set, yet found (and published) substantially different results.

These two research teams analyzed data from the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care: a large, national study that followed the development of 1,364 children from six months of age. Both teams were also interested in the same question: what is the impact of mothers’ employment, and children’s subsequent nonmaternal child care, on children’s early cognitive growth?

The NICHD Early Child Care Researchers (Team 1) were first in line to test this question. After a series of analyses, this team concluded that the age at which children entered nonmaternal care, and the amount of time spent in such care, showed no relation to children’s cognitive performance up to age three. [2]

Next, Team 2 (Brooks-Gunn, Han, & Waldfogel, 2002) tested this same question. However, in contrast to Team 1, they concluded that mothers’ full-time employment during children’s first nine months was indeed associated with impaired cognitive functioning when the children were three years of age. [3]

Speaking different analytic languages

 The contradictory findings between these two research teams were not only curious, but also important to reconcile. After all, the difference between advising mothers of young children to work or not work is a big one. And, such a recommendation has implications for state and federal programs, such as Temporary Assistance for Needy Families, that assist young mothers in finding employment.

Burchinal & Clarke-Stewart therefore conducted a new, third study investigating how each team’s analytic design may have engendered the contradictory results.

Two approaches

First, Team 1 used a conservative, top-down analytic approach. This approach:

  • uses all available information, such as multiple outcome variables and data from all participants
  • begins with a general test of significant relations between variables and works its way down to more specific comparisons
  • helps researchers avoid exaggerating the significance of associations found when working with large data sets

Team 2, on the other hand, used an a priori comparison approach. This technique:

  • examines hypotheses and variable relations chosen by researchers before (a priori) data exploration
  • utilizes a small subset of participants and/or variables in order to conduct a small set of comparisons between explicitly chosen participants and/or variables
  • is helpful when theory or previous research strongly implies a relation between specific variables or constructs

Thus, it seemed likely that investigating a smaller group of participants, and analyzing a smaller set of outcome data, contributed to Team 2’s finding of a relation between maternal employment and children’s cognitive growth. On the other hand, utilizing the full set of study participants, and analyzing all possible child outcome data, seemed to result in Team 1’s lack of such a finding.

To confirm this hypothesis, Burchinal & Clarke-Stewart analyzed the same participants and variables that Team 2 did; but, they used the top-down approach this time. The result of these new analyses? No significant findings.
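
To make the two approaches concrete, here is a minimal, purely illustrative simulation in Python. It uses made-up data with no true effect at all (it is not the NICHD data set, and not either team’s actual models); it simply shows why one overall test on the full sample and a handful of small, targeted contrasts can tell different stories about the very same numbers.

```python
# Illustrative only: simulated data in which care hours have NO true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 1200
care_hours = rng.integers(0, 50, size=n)       # hours/week of nonmaternal care
cognition = rng.normal(100, 15, size=n)        # cognitive score, unrelated to care

# "Top-down" style: one overall test using every participant.
r, p_overall = stats.pearsonr(care_hours, cognition)
print(f"Full-sample test: r = {r:+.3f}, p = {p_overall:.3f}")

# Targeted-contrast style: many small comparisons that could each have been
# chosen ahead of time (different subgroups, different cutoffs). With enough
# candidate contrasts, some will dip below p < .05 by chance alone.
p_values = []
for start in range(0, n, 100):                 # twelve subgroups of 100 children
    idx = slice(start, start + 100)
    high = cognition[idx][care_hours[idx] >= 30]
    low = cognition[idx][care_hours[idx] < 30]
    p_values.append(stats.ttest_ind(high, low).pvalue)

print(f"Smallest subgroup p-value: {min(p_values):.3f}; "
      f"{sum(p < 0.05 for p in p_values)} of {len(p_values)} contrasts below .05")
```

To be clear, a priori comparisons are a legitimate tool when theory singles out a contrast in advance; the sketch only illustrates why the top-down approach, by testing the whole sample first, is less likely to be fooled by this kind of chance variation.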

The authors therefore reported Team 1’s findings—that is: it doesn’t hurt young children for their mothers to get a job—as being a more reliable take-away.

A cautionary tale

 It is important to note that both the top-down approach and the a priori comparison approach are well-respected and well-established analytic techniques. And, as with all analytic techniques, each has strengths, weaknesses, and research questions for which its use is optimal.

But a study such as the one conducted by Burchinal & Clarke-Stewart provides an important cautionary tale. That is, when we, as consumers of research findings, draw conclusions from empirical work, it is important to remain attentive to the type of analyses that were used to engender such claims.

Of course, we probably won’t all end up being experts in all areas of analytic approach. But perhaps a good rule of thumb is that when we see a small amount of data being used to make big claims, it’s best to take a second look, get a second opinion, or see if the study has been replicated a second time.

References

[1] Burchinal, M.R. & Clarke-Stewart, K.A. (2007). Maternal employment and child cognitive outcomes: The importance of analytic approach. Developmental Psychology, 43, 1140-1155.

[2] National Institute of Child Health and Human Development Early Child Care Research Network. (2000). The relation of child care to cognitive and language development. Child Development, 71, 960–980.

[3] Brooks-Gunn, J., Han, W.J., & Waldfogel, J. (2002). Maternal employment and child cognitive outcomes in the first three years of life: The NICHD Study of Early Child Care. Child Development, 73, 1052–1072.

Dangerous Authenticity?
Andrew Watson

Here’s an odd question: is it ever a bad idea for teachers to be authentic?

In a recent study, Johnson and LaBelle surveyed students to discover the teacher behaviors that struck them as “authentic.”

By closely analyzing the students’ descriptions of “authentic” teachers, they came up with four broad categories. According to their findings, authentic teachers are…

Approachable (for example: they tell appropriate personal stories and jokes)

Passionate (they’re excited about their subject matter)

Attentive (they know their students’ names)

Capable (they stay on top of assignments)

Unsurprisingly, “inauthentic” teachers do the opposite (and, are Disrespectful to boot).

Johnson and LaBelle acknowledge that this research includes some methodological quirks.

In particular, paradoxically, the fact that students describe these behaviors as “authentic” doesn’t mean that they are authentically “authentic” for all teachers.

For example: “authentic” teachers are approachable, and approachable teachers tell jokes. But, what if you’re not a joker? Maybe your sense of humor is quieter than that. Or maybe, while you appreciate a good joke told by others, you’re just not comfortable telling them yourself.

Should you adopt “authentic” teacher behaviors even if they’re not authentic to you?

Zooming Out

This question–which Johnson and LaBelle raise but don’t answer–hovers over much of the research you’ll hear about at Learning and the Brain Conferences.

Let’s imagine that you come to the November LatB conference, which will focus on the intersection of teaching and technology. You might attend a session that warns about the distractions technology creates and describes the attentional benefits that movement can provide.

On the one hand, this argument might make good sense to you. You know of several computer programs that might help your students, and you’re happy to know that they’ll be less distracted by technology if they’ve had a chance to move about recently.

On the other hand, as you listen to the speaker’s list of movement strategies (Have them move into small groups! Require students to change their seats every 20 minutes! Ask 5 students to write their answers on the board!), you might feel a growing dread.

Those strategies might seem like a good fit for the speaker. And, based on the fact that everyone around you is nodding energetically, you conclude they’re eager to give them a go.

But here’s the thing: that’s just not you. You simply can’t imagine directing your students about in some elaborate traffic-control exercise. You’re feeling a little embarrassed just thinking about it.

We’ve got good research showing the benefits of this particular teaching behavior. And, alas, that beneficial teaching behavior just doesn’t mesh with the persona you bring to the classroom.

So, what should you do?

Hard Questions, Tentative Answers

For starters, I think you should be suspicious of anyone who thinks this is an easy question.

On the one hand, research has powerful answers to lots of questions about good and bad teaching. On the other hand, research mostly looks at AVERAGES.

And here’s the thing: you are not average. Your students aren’t average either. Your school isn’t average.

You are an agglomeration of unique particulars, and some research-established average might not apply to you.

That hard truth goes double when the teaching practice under discussion runs counter to something deep in your personality.

Here’s the best answer I’ve got. In my view, you can decline particular teaching practices, but you shouldn’t ignore the broader topic within which those practices reside.

To go back to my “attention and movement” example: you can decide that you won’t rely on movement to focus your students. After all, that’s just not you.

But, you can’t overlook the topic of attention itself. There are MANY other teaching strategies you can use to foster attention, and–especially if you’re going to set this one strategy aside–you’ll need to be even more attentive and thoughtful about the other strategies that you have at hand.

Imagine a Venn diagram. One circle represents all the teaching practices that have research support. A second represents those that students find “authentic.” A third represents those that are, in fact, authentic to you.

Find the teaching practices that fit in all three of those circles–you’ve found the best place to be.
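
If it helps to see that diagram in code rather than circles, here is a tiny sketch of the same idea as a set intersection; the practice names are invented placeholders, not recommendations from the study.

```python
# Hypothetical practice names, used only to illustrate the three-circle idea.
research_supported = {"retrieval practice", "spaced review", "movement breaks"}
students_call_authentic = {"retrieval practice", "spaced review", "personal stories"}
authentic_to_me = {"retrieval practice", "personal stories", "quiet humor"}

# The sweet spot: practices that sit inside all three circles at once.
sweet_spot = research_supported & students_call_authentic & authentic_to_me
print(sweet_spot)    # {'retrieval practice'}
```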

Regions or Networks, Take 2
Andrew Watson

Just yesterday, I posted some thoughts about “thinking both-ily”; that is, understanding that brain processing happens in both regions and networks.

Today, I found a Beebs video showing a remarkably detailed version of the neurons that make up brain networks.

You’re probably used to seeing images of the brain like the one above — one that emphasizes regions over networks. This video provides a useful counter-example — a way to visualize networks over regions.

If you can picture both together, you can get even better at thinking both-ily.

Brain Regions or Brain Networks?
Andrew Watson

Here’s an odd brain theory to start off your day:

Let’s assume that particular regions of your brain produce particular mental abilities or habits. For instance, let’s say that this part of your brain right here is the generosity center of the brain.

So, if I feel a bump on your head above the “generosity region” of the brain, I can assume that this part of your brain is unusually big, and that you are therefore unusually generous.

However bizarre this theory might sound, phrenology was quite popular in the early 1800s. Most often attributed to Franz Joseph Gall–and, of course, popularized and spread by others–this theory produced a mini-industry of diagnosing people’s characters by feeling the bumps on their heads.

The Larger Question

Phrenology is but the most extreme position in an ongoing debate.

On one side of this debate–the phrenology & co. side–we can think about mental activities taking place in very specific parts of the brain. We can say that the amygdala is the “fear center” of the brain, or that the hippocampus is the “memory center,” or the anterior cingulate cortex the “error-detection center.”

The other side of the debate argues that all brain functions take place in wide networks that spread across many parts of the brain. Memory isn’t just in the hippocampus: it includes the prefrontal cortex, and sensory regions of the neocortex, and the striatum, and the cerebellum…it happens all across the brain.

So, here’s the question: which side of the debate has it right?

A Very Specific Answer

I thought about this debate when I read a recent article about language areas in the brain. Here’s a one-paragraph summary of that article, written by the authors. (Don’t worry too much about the “BA” numbers; focus on the broader argument):

The interest in understanding how language is “localized” in the brain has existed for centuries. Departing from seven meta-analytic studies of functional magnetic resonance imaging activity during the performance of different language activities, it is proposed here that there are two different language networks in the brain: first, a language reception/understanding system, including a “core Wernicke’s area” involved in word recognition (BA21, BA22, BA41, and BA42), and a fringe or peripheral area (“extended Wernicke’s area:” BA20, BA37, BA38, BA39, and BA40) involved in language associations (associating words with other information); second, a language production system (“Broca’s complex:” BA44, BA45, and also BA46, BA47, partially BA6-mainly its mesial supplementary motor area-and extending toward the basal ganglia and the thalamus). This paper additionally proposes that the insula (BA13) plays a certain coordinating role in interconnecting these two brain language systems.

Got that? [I put those words in bold, btw.]

In brief, researchers argue that language requires two sets of neural networks. One network, “Wernicke’s area,” has both a core and a periphery. The other, “Broca’s,” is itself a complex. And, these two networks are coordinated by the insula.

(If you want to reread that paragraph now that you’ve seen the summary, it might make more sense.)

“Both-ily”

As I read that summary, I think the authors are saying that both theories about brain structure are partly true.

Understanding language takes place in Wernicke’s area–which is itself a pair of networks. Producing language takes place in Broca’s area–which is a complex. And those networks and complexes communicate through the insula.

In other words: specific mental functions take place in specific places, but those “places” are best thought of as interconnected networks.

In grad school, a discussion group I was in once debated the theories I outlined above. Our question was: “when we study brains, should we think locally or network-ily?” After an hour of heated discussion, we reached a firm conclusion: we should always think “both-ily.”

One More Famous Example

You probably know the story of Henry Molaison: a patient whose hippocampi were removed to cure his epilepsy.

The good news: his epilepsy was (largely) cured.

The tragic news: he could no longer form long-term declarative memories.

From H.M.’s example, we learned to think about long-term memory locally: clearly, the hippocampus is essential for creating new long-term declarative memories. After all, if you don’t have one, you can’t learn new things.

(This hypothesis was confirmed with a few other patients since H.M.)

But, from H.M.’s example, we also learned to think about long-term memory in networks. He didn’t learn things when he was told them, but he could still learn new things.

For example: when asked how to get to the kitchen in his new house, he couldn’t answer. He just didn’t “know.” (That is: he didn’t know in a way that would allow him to explain the answer.)

But, when he wanted a cup of tea, he went to the kitchen and made one for himself. Clearly, he did “know.” (That is: he knew in a way that would allow him to get to the kitchen–as long as he didn’t have to explain how he got there.)

When thinking about H.M., should we think “locally” or “network-ily”? I say: think both-ily.

The Bigger Message

When you hear from self-proclaimed brain experts who tell you that “the wrinkle-bop is the juggling center of the brain,” beware.

If those “experts” go on to explain that this sentence is a crude shorthand for “the wrinkle-bop is a very important part of a complex network of areas involved in juggling,” then you’re okay.

But if those “experts” just stop there–in other words, if they really think “locally,” not “both-ily”–then you should be suspicious.

You might conclude that their teaching advice is valid, and decide to give it a try. But, don’t rely on their neuroscience expertise. They are, in effect, just reading the bumps on your skull…

Video Games and Empathy
Andrew Watson

Do violent video games reduce empathy?

If people spend lots of time pretending to beat up and shoot pretend people, will this experience reduce their empathy for human suffering? Will it make them more likely to really beat up and shoot real people?

We might get different answers to this question depending on the variables we decide to measure, and the tools we use to measure them.

In this study, researchers found 15 people who often played violent video games–typically “first person shooter” games involving automatic weapons–and 15 who had never played them.

These participants looked at sketches: some showed people by themselves while others depicted people in pairs. Half of the pictures showed mundane activities–two men carrying a cabinet–while the other half showed violent activities–one man forcibly holding another man’s head underwater.

As participants looked at these pictures, researchers used functional magnetic resonance imaging to measure neural responses.

The researchers reasoned as follows: if violent video games impair players’ empathy, these scans should reveal differences in brain networks associated with empathy. That is: gamers and non-gamers would respond similarly to the men carrying the cabinet, but the gamers would not respond with as much empathy as the non-gamers to the sight of human suffering. After all, in this hypothesis, the gamers would have been desensitized to human pain, and so would not have as strong an empathetic response.

How much difference did they find?

One Conclusion, and One More

No difference. Gamers and non-gamers were equally empathetic–and non-empathetic–when they looked at these images.

So: when these researchers answer this version of this question using these tools, they get this answer.

However: when these researchers answer this version of the question using meta-analysis, they get a radically different answer:

The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior.

The Takeaway

I hope this entry does NOT persuade you that video games do, or don’t, reduce empathy.

I hope, instead, to persuade you that it’s hard to answer that question once and for all. We have many ways to ask, and many tools with which to answer, such a question. Only by asking (and asking and asking), and then by looking for converging answers, can we start to move towards a conclusion.

This is Your Chess on Ritalin
Andrew Watson

In movies and on television, chess skill symbolizes “pure intelligence.” Characters who can outwit others on the chessboard are–obviously–just smarter than everyone else. (On The West Wing, President Bartlet routinely schools his staff on the nuances of the game.)

By implication, people who get better at chess seem to be getting smarter. So, if I can give you a drug that improves your chess score, you might conclude that this drug is making you more intelligent.

This approach, of course, has a controversial history. We have developed drugs (such as methylphenidate and modafinil) that benefit people who struggle during cognitive tasks. Will those same drugs benefit those who don’t typically struggle? If they do, is that benefit somehow unfair?

The Study: Setup

German researchers worked with 40 mid-level chess players. Following a remarkably detailed and precise research regimen, these players spent 4 days playing games against a chess program that had been matched to play at their level.

On each day, these chess players took either methylphenidate (Ritalin/Concerta), modafinil (Provigil), caffeine (yum), or a placebo. The schedule of these 4 drugs was varied among the group, to be sure that the order didn’t matter.

The Study: Results

How did they do? It’s a bit complicated…

Compared to the games when they took a placebo, the players slowed down on all three drugs. On average, they added nearly 2 minutes to the time they took (9:13 vs 7:17 per game); that’s a slowdown of more than 25%.
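
As a quick sanity check on those figures, the arithmetic follows directly from the two average game times reported above:

```python
# Rough arithmetic check of the reported slowdown (illustrative only).
placebo = 7 * 60 + 17        # 7:17 per game, in seconds
on_drug = 9 * 60 + 13        # 9:13 per game, in seconds

extra = on_drug - placebo
print(f"Extra time per game: {extra // 60}:{extra % 60:02d}")   # 1:56
print(f"Relative slowdown: {extra / placebo:.1%}")              # about 26.5%
```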

When they took more time, these players often ran up against the time limit that had been set for each game. As a result, they lost lots of games by running out of time.

But, what happens when we look at the games when they didn’t run out of time?

They got better. It’s a little tricky to describe improvement in chess terms. You might say they had a 5% increased chance of winning. Or, you might say–as the lead researcher said:

If we correct for the slowest players, then the effect would be the equivalent of moving a player from say, number 5000 in the world ranking, to number 3500 in the world ranking. In a single game, the effect is the equivalent of having the white pieces, every time.

That’s quite the improvement.

The Study: Implications

So, what do we do with this information? Should we all rush right out and add some methylphenidate to our daily vitamins?

In my view, not yet.

First, this study looked at people playing chess. Although we associate chess with “intelligence in general,” we can’t be sure–based on this study alone–that the effects of these drugs will generalize to other cognitive activities.

Second, the study worked with an unusual subgroup of the population: the average IQ among the players was 125. (Of course, IQ isn’t the only–or necessarily the best–way to measure human cognitive capacity. But, it’s not meaningless.)

An IQ of 125 is more than 1 standard deviation above average. This is, in other words, a select–even atypical–group of thinkers.

For these reasons, I wouldn’t do anything differently just yet.

And third: I stumbled across this study after I had completed this blog entry. The headline is that non-prescription use of Ritalin can muddle the dopamine system–at least in rats.

When I say “muddle,” I’m summarizing the following passage:

These changes in brain chemistry were associated with serious concerns such as risk-taking behaviors, disruptions in the sleep/wake cycle and problematic weight loss, as well as resulting in increased activity and anti-anxiety and antidepressive effects.

In other words, if these effects are true for humans as well as rats, that’s some serious muddling right there.

At the same time, I must tell you that this chess study gives me pause. In grad school, the orthodoxy about these drugs was that “they help people who struggle think more like typical learners, but they don’t help typical learners think like more extraordinary learners.”

(You might think of them as a mental knee brace. The brace helps you if you’re injured, but isn’t particularly beneficial if you’re not.)

This study, however, suggests that–for this atypical group of people doing this atypical thing–such drugs do provide a cognitive benefit.

An alternate explanation

I’m intrigued by the fact that chess players taking methylphenidate, modafinil, and caffeine slowed down.

Perhaps the reason they played better is not that the drugs helped them think better, but that they gave the players more time to think.

Could we get the same benefit by deliberately forcing ourselves to take more time with our thoughts? This study doesn’t answer that question. But, the possibility seems worth exploring.

______________________________________________________________

A final note, unrelated to the content of this study. In looking over the specifics of the research paradigm, I note that the team began work on this study in July of 2011, and that it was published only in 2017. That’s right: they’ve been working on this for over 6 years.

Wow.

Montessori: The New Science behind the Century-Old Methodology (part II)
Austin Matte

“Freedom in intellectual work is found to be the basis of internal discipline” (*Montessori, 1917/1965).

This quote highlights a notion that goes completely against our conventional wisdom, which holds that internal discipline must be instilled externally. Students develop discipline through following top-down instruction, no? They are told what to do and they do it.

Much of what Maria Montessori intuited about child development and education seems to fly in the face of the systems we have in place to educate our youngest citizens. There is no doubt that the insistence on obedience can create disciplined individuals; however, in regard to intellectual pursuits, if we desire individuals who are self-driven, allowing them a certain freedom in what they pursue enables them to develop this drive themselves.

Maria Montessori developed her approach to education, which contains eight principles, primarily through observation. In her book, Lillard (2005) outlines these eight principles and the scientific studies carried out since, which show that these principles are in fact in line with the way we learn and develop. These principles are:

  1. movement and cognition are intertwined
  2. students should have a sense of control
  3. interest improves learning
  4. extrinsic rewards hinder intrinsic motivation
  5. learning from and with peers
  6. learning should be contextualized
  7. optimal adult-child interactions
  8. order in the environment

In this post, we delve into principles five through eight, continuing from a previous post. While there are many examples of each of these principles in the Montessori classroom, I bring up one or two examples and just some of the science backing them up.

  5. Learning from and with peers

In a primary Montessori classroom (children ages 3 to 5), younger children may often work side by side, though not collaborating. As children get to be 4 and 5 years old, they begin to work together. At this age, and especially at the elementary level, collaboration is suggested and encouraged by the teacher.

Further, the mixed-age classrooms, which are an essential part of the Montessori classroom, provide opportunities for younger students to learn from older students. As importantly, it also creates the opportunity for older students to teach younger students — teaching becomes an opportunity for the older students to learn and grow.

Developmental psychologist Jean Piaget, who was 26 years younger than Maria Montessori and was involved in and presumably inspired by her work, also notes the importance of learning from peers. He believed that peer interaction, and the presentation of different ideas that comes along with it, creates states of cognitive disequilibrium where children must then “accommodate” these new concepts, advancing their understanding.

Lev Vygotsky, another developmental psychologist, believed that children learn in what he called the “Zone of Proximal Development.” This “zone” consists of tasks that one cannot accomplish, or conceptual understanding that one cannot attain, alone. Older peers allow their younger counterparts to accomplish tasks and reach higher levels of understanding.

Research has confirmed the benefits of social learning. Young children may benefit the most from observation, rather than interaction (*Azmitia, 1988). Yet, as children grow older, and especially in the elementary years, they are able to make gains through collaboration (*Tomasello, et al., 1993). Once children are able to engage in a dialogue that contains explanations, inferences, strategies, etc., collaboration has been found to improve their cognitive performance, compared to children who work alongside peers but are discouraged from talking, much like in traditional schools (*Teasley, 1995).

  6. Learning should be contextualized

As with the learning that takes place in traditional schools, “people are prepared for life by exclusion from it” (Montessori, 1967a). Investigating a topic, devoid of its context, prevents the learner from engaging with it as deeply as she otherwise could.

In the Primary Montessori Classroom, context is given via hands-on materials that are connected with the outside world. For example:

  • activities where children must button, tie, or zip fabric together, much the same way their clothes function
  • pouring (water, beads, etc.) from one container to another, just as they must pour water into their glass to drink
  • scooping items (small stones, beans, etc.) with a spoon from one container into another, just as they must spoon their own food into their mouths

In the elementary classroom and beyond, learning is also made contextual by physically going out to explore the place where the subject at hand is occurring. For a theme on entrepreneurship, for example, students may be encouraged to go out and speak with local business owners.

It is understood that learning is improved when new knowledge is connected with what we already know. Placing learning in a specific context is one way to facilitate that.

In a simple study, 10-year-old children were taught the basics of the LOGO graphics programming language. One group was taught in an abstract context, where children had to move the cursor between five circles, touching each one. Other groups were taught in a meaningful context: they had to carry out the same task, though the circles were described, for example, as islands with treasure that had to be collected at each one.
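
To give a feel for the task, here is a rough modern analogue of the abstract version in Python’s turtle module (which borrows LOGO’s turtle-graphics style); the circle positions are invented for illustration, not taken from the study materials.

```python
# A loose analogue of the "abstract" task: steer the cursor so that it
# touches each of five circles in turn. Positions are invented for illustration.
import turtle

circles = [(-200, 0), (-100, 80), (0, -60), (100, 90), (200, 0)]

pen = turtle.Turtle()
for x, y in circles:
    pen.goto(x, y)     # travel to the next circle...
    pen.dot(30)        # ...and mark it as "touched"

turtle.done()
```

In the meaningful version, the task itself was unchanged; only its framing differed, with the same targets re-described as islands holding treasure to collect.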

Children who learned the language in a meaningful context, as opposed to an abstract one, learned the programming language better, and also reported enjoying the learning process more. These children also showed better mastery of following a series of steps toward the end of executing a plan, a skill set important to computer programming. Even further, and perhaps most importantly, two weeks later these children performed better on a geometry test of the underlying concepts and skills (*Papert, 1980; *Parker & Lepper, 1992).

As highlighted in this study, placing learning in a specific context can allow an individual to enjoy learning the material, better learn the material, better retain the material learned, and better learn the meta-skills required to learn similar material. Is there anything else?

  7. Optimal adult-child interactions

Through observation, Montessori described optimal adult-child interactions before there was a body of research on the topic. In few words, she outlined that adults are to set clear limits but allow children to operate freely within those limits, and to respond sensitively to children’s needs while maintaining high expectations.

One study found that children of authoritative/democratic parents (i.e. parents who are warm, make suggestions rather than demands, and expect maturity) were more well-liked by their peers, and were judged as more prosocial by their teachers. On the other hand, children of  authoritarian/restrictive parents (i.e., parents who often prohibit, assert themselves, and provide directives) were viewed as less helpful by teachers and peers, and were often more disliked by their peers (*Dekovic & Jensen, 1992).

Another study also found that for two-year-olds, parents’ directiveness was positively associated with measures of cognition. As children grew older than two, however, parents’ directiveness was found to associate with lower levels of cognitive functioning (*Landry, et al., 2000). These findings suggest that adults must be sensitive to the degree to which children need direction and guidance. While it may be necessary, too much can hamper cognition.

  8. Order in the environment

When I first learned of the degree to which the Montessori classroom was ordered and structured, admittedly, it struck me as strange. Learning must be messy, I thought. Now, having spent plenty of time in a Montessori classroom, I see that the meticulous organization of the materials and shelves, and the purposefully structured progression of the materials, help children learn.

The research currently available agrees with this notion. Studies have found, for example, that items to be memorized are better retained when presented in an organized and structured way, rather than at random (*Bower, Clark, Lesgold, & Winzenz, 1969). As a (perhaps overly) simple example, it is much easier for us, as adults, to memorize “A B C D E F G H I J K” than it is to memorize “D F J C I A H E G K B,” even though the contents of both lists are the same.

In the same way, when new information is presented to children in an organized way, they are better able to make sense of it. The manner in which information and materials are presented matters.

Studies have also found that orderly home environments are associated with better cognitive functioning, while less organized homes are associated with lower levels of cognition, language use, more difficult temperaments, and lower motivation to solve problems or master tasks (*Wachs, 2000). Bear in mind that these are associations, not causations.

The Montessori Approach in Real Life

As I have seen through my own experience, and as Montessori set out to do with her method over a century ago, this approach to education has the potential to develop self-motivated, well-regulated, prosocial learners who take ownership of their own education.

Old two- and young three-year-olds have entered our classroom at the beginning of the year with little to no focus, just bouncing around the room, going from one thing to the next. One parent even described his young child to me as a “threenager.”

By the end of the year, this very same child is able to choose, out of a room full of activities, the one that she wants to do. She very carefully picks it up and, walking around other people and their work, she carries it to an open desk. She uses the work for as long as she desires, and with great focus. When she’s done, she cleans up her work, stands up, pushes in her chair, and carries the work back to the shelf where she got it; the spot where she did her work and the work itself are both ready for the next person to use.

This work cycle enables the child to build focus, order, concentration, coordination, and independence. Beyond these higher-order cognitive functions, these activities also encourage the development of skills necessary for everyday living, like fine motor skills and hand strength.

By the time this child turns four, a tremendous foundation has been laid for learning. Not only are children ready for more challenging work, they ask for it. One of my four-year-olds, after working very hard on learning letter sounds and writing them out over this past year, recently began reading consonant-vowel-consonant words (cat, hop, etc.). She wrote a list of the first words that she read, and upon reading me this list, she danced around with excitement. This is a child who has ownership of and loves learning.

By the time many children finish their third year at the primary level in our classroom, they are reading books. As someone who didn’t begin reading until second grade, I can unabashedly say that I am jealous. Bear in mind that this did not come at some great expense either; the students aren’t burnt out–just the opposite is true.

While above I mostly mention the literacy curriculum, I’m also envious of my students’ ability to add, subtract, (and sometimes) multiply and divide. They also walk away with other pieces of explicit knowledge, like, for example, the names of the continents of the world, the names of plant and animal parts, and the names of many 2D and 3D shapes (do you know the difference between an ellipsoid and an ovoid?!).

Even further, these children have a strong set of social/emotional skills. They, for example, are encouraged to consider the needs of others, and also to express their own needs. Students are also made aware of our school community’s collective needs, and most are eager to contribute via preparing food, rearranging the room for different activities, organizing the materials on the shelves, cleaning up after a meal, etc.

Conclusion

Montessori’s methods highlight basic truths about human learning. Through observation, she realized that early childhood presents a time of unique cognitive malleability, and that this can be leveraged through interactions the child has with the environment. Scientific support for her methods continues to come to light even in present day, over a century later.

Through quality implementation of the Montessori method, children become engaged and excited about learning. They develop into independent learners who seek out challenges. Even further, this approach can develop students who are as engaged in their learning as they are in their community, and it’s a beautiful thing.

References

Lillard, A. S. (2005). Montessori: The science behind the genius. New York, NY: Oxford University Press.

Montessori, M. (1967a). The absorbent mind. New York, NY: Henry Holt.

*References marked with an asterisk are cited in Lillard, 2005.

Out with the Old…
Andrew Watson

Articles about learning styles theory–including my own–typically focus on debunking the theory.

This article, over at The Learning Scientists, takes a different approach: it chooses specific parts of learning styles theory, and shows how each small part derives from another–more useful–theory about learning.

The goal of this article, in other words, is not that you stop believing a false theory, but that you replace false beliefs with correct ones.

In my view, that’s a GREAT approach, and one that I plan to borrow.