Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.”
Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."
A recent article in Nature magazine wisely captures the complexities of the Great Cell Phone Debate.
Will they transform human potential?
Will they destroy our children’s self-confidence, not to mention their ability to hold a simple conversation? Their ability to pay attention in class?
(For earlier articles on these topics, see here and here.)
Researcher Candice Odgers offers a simple formula to answer those questions:
“In general, the adolescents who encounter more adversity in their offline lives seem most likely to experience the negative effects of using smartphones and other digital devices.”
That is, the cell phone itself isn’t causing the problems. For children who already struggle, however, it can make those problems worse.
Teens and Cell Phones: The Good
You might be surprised to read Odgers’s list of digital benefits. Several studies show that teen texting can foster healthy relationships.
Children aged 6 to 12 with solid social relationships are likelier to keep in digital communication with peers as they get older.
Virtual conversations can even help teens “bounce back after social rejection.”
Clearly, cell phones aren’t destroying our children’s ability to create healthy relationships. (Of course, the form those relationships take looks quite different from those of our youths. Or, at least, my youth.)
Teens and Cell Phones: The Bad
As Odgers sees the research, socio-economic status might be a key variable.
The “digital divide” used to mean that rich people had technology that others didn’t. Today, it’s more likely to mean that affluent parents can supervise their children’s digital lives more consistently than less-affluent parents.
“What we’re seeing now is a new kind of digital divide, in which differences in online experiences are amplifying risks among already-vulnerable populations.”
So: in low-income families, online fights more often spill over into real-world fights. The same goes for bullying.
A Final Point
Complaints about teens and cell phones often miss a crucial point: they get those cell phones from us.
Odgers’s own research shows that 48% of 11-year-olds in North Carolina have cell phones. I’m guessing that relatively few of those 11-year-olds bought those phones — and the data plans — with their own money.
Also: adolescents did not invent the cell phone. They aren’t running companies that make huge profits from their sale.
Odgers’s article suggests that we should focus our concerns not on teens overall, but on those who are already struggling in their daily, non-virtual lives. I suggest, in addition, that we should focus on adult participation in this digital culture.
We are, after all, the ones who make their digital lives possible.
Regular readers of this blog know that I like technology, but I’m not easily wowed about its educational uses. From my perspective, many “you just have to try this” technologies fail to produce nearly as much learning as they promise.
(Some of my concerns show up here and here. But: I’m a champion of laptop notes here.)
At an evolutionary level, our species evolved interacting with other real, live people. Our basic perceptual and emotional systems often work best when we’re learning with and from them.
All that being said, I’m REALLY interested in the educational possibilities that this new technology might offer.
As you’ll see in the video below, combining virtual reality (VR) with advanced haptic feedback produces remarkably persuasive visual and physical experiences.
The video’s host — a professed VR skeptic — is obviously giddy by the end of his trial.
Potential VR+haptics pedagogy
Several kinds of learning might well be much more persuasive (and interesting) with this VR/haptics combination. Physics problems with mass and momentum and magnetism, for example, lend themselves to this kind of exploration.
(As you’ll see in the video, our host can feel the weight of the virtual rock he lifts.)
Another possibility: As our research into embodied cognition gets better, we might be able to translate those strategies into VR/haptics pedagogy. (For an introduction to embodied cognition, see Sian Beilock’s book How the Body Knows Its Mind.)
For example, Susan Goldin-Meadow has done considerable research showing that different hand motions improve mathematics learning. These gloves just might make such problems more physically — and therefore cognitively — persuasive.
Just watch the video; you’ll see what I mean. (By the way: I’m not endorsing any of the products advertised here. They’re an unavoidable part of the video.)
Few theories have gotten more teacherly attention than Carol Dweck’s work on Mindset.
As you no doubt know, she has found that a “fixed mindset” (the belief that ability and intelligence can’t really change) demotivates people. On the other hand, a “growth mindset” (the belief that the right kind of hard work enhances ability) promotes intrinsic motivation.
(We’ve posted about Mindset several times, including here and here.)
Because it’s so well known, Dweck’s theory is a popular target. You’ll often read that this or that study disproves her argument. For years now, this mindset controversy has raged on.
The Mindset Controversy: This Week’s Big News
Scholars at Case Western Reserve University looked at over 300 Mindset studies, and found…not much. By looking at all the relevant research, rather than just the well-known or successful studies, they got a comprehensive view.
That view showed only very modest effects.
Here’s lead author Brooke Macnamara (by the way, the word “significant” here means “statistically significant,” not “deeply meaningful”):
“We found a significant but weak relationship between growth mindsets and academic achievement, and a significant, but small effect of growth mindset interventions on academic achievement.” (source)
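If the gap between “statistically significant” and “deeply meaningful” feels abstract, here is a minimal sketch in Python (using simulated data, not Macnamara’s, and leaning on numpy and scipy purely for convenience) of how a weak relationship can still clear the significance bar once the sample gets large:

```python
# Illustrative simulation only -- not the meta-analysis data.
# With a large sample, even a weak correlation is "statistically
# significant," although it explains almost none of the variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                                         # hypothetical sample size
mindset = rng.normal(size=n)                       # simulated mindset scores
achievement = 0.1 * mindset + rng.normal(size=n)   # true r is roughly 0.1

r, p = stats.pearsonr(mindset, achievement)
print(f"r = {r:.2f}, p = {p:.2e}, variance explained = {r**2:.1%}")
```

With ten thousand simulated students, a correlation of roughly 0.1 is unambiguously “significant,” yet it accounts for only about one percent of the variation in achievement. That is the distinction Macnamara is drawing.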
Predictably, this meta-analysis has produced lots of strong responses.
Nick Soderstrom, a researcher whose work I admire, mused on Twitter that Mindset is “the new learning styles.” That is: a theory which lots of people believe, but which doesn’t have empirical support.
[Editor’s note, added 3/23/18: Dr. Soderstrom has responded to this post, and his comment includes this important point: “After seeing that you referenced one of my tweets, I feel compelled to mention that none of my tweets comparing growth mindset to learning styles have been assertive in nature. That is, I have never said that mindset IS the new learning styles. Indeed, such an assertion would be unfair and irresponsible at this point. Rather, I’ve simply asked the question and expressed my concern that it might be heading in that direction. I just don’t want your readers to assume that I’ve made up my mind on the utility of mindset interventions because I certainly haven’t. More evidence, or the lack thereof, is needed for that to happen.” My thanks for this clarification. You can see his full comment below.]
If, in fact, Mindset interventions just don’t do very much, should we stop?
Mindset Controversy: Don’t Give Up The Ship
I myself am still on board with Mindset, and for several reasons.
First: other people have looked at large populations and found impressive effects.
For instance, this report found that for some groups of students, a growth mindset basically added an extra month’s worth of learning to the school year. Mind you, these authors looked at data for 125,000+ students to reach this conclusion.
Other thoughtful scholars and wise skeptics have written sympathetically about Mindset. Here, for example, is a recent article by John Hattie, hardly one to accept a theory simply because it’s popular.
Second: we should ask not simply “do Mindset interventions work?” but “do they work compared to something else?”
Mindset seeks to influence students’ motivation, and motivation is notoriously hard to influence. So, I’m not surprised it doesn’t produce dramatic changes. To get my attention in the world of motivation, even a small boost will do.
Third: Dweck is a famously careful scholar. When others criticize her work, she doesn’t ignore them; she doesn’t rant; she doesn’t change the subject. Instead, she accepts fair critiques and updates her thinking.
For example: many of Dweck’s early studies focused on the importance of hard work. You have to work hard to learn most anything, and students need to accept that.
Teachers and scholars offered a reasonable rejoinder. Some students do work hard and yet don’t learn, because they’re doing the wrong kind of work. We need a more precise phrase.
Accepting this criticism, Dweck now speaks of the right kind of hard work. She listened, and refined her theory appropriately.
Next Steps
A: I’ll be curious to hear what Dweck has to say once she’s digested this new information.
B: we should keep our eyes out for new theories of motivation that provide genuine assistance to teachers and students.
C: we should, of course, not overhype Mindset interventions. Until we get a better theory, however, we can call on these strategies at the right moments to help deepen our students’ motivation.
Few educational innovations have gotten more hype than online learning, and few have a more checkered track record.
For every uplifting story we hear about a Khan Academy success, we get at least one story about massive drop-out rates for MOOCs.
You’d be forgiven for thinking that people are just better at learning face to face than from a computer.
And so a recent study deserves our attention. And — like all other research — it also merits our respectful skepticism.
Three Headlines, and a Warning
First Headline: middle schoolers learned A LOT more science from an interactive online curriculum than from a more traditional one.
Second Headline: learning gains seem impressive for students with learning disabilities, and for English language learners.
and
Third Headline: the program was tested with more than 2300 students and 71 teachers in 13 schools in two different states: Oregon and Georgia. That is: this study isn’t about 30 college students in a psychology lab. It’s about real students in real classrooms at opposite ends of the country.
And now, the Warnings: I’m going to give you reasons to be skeptical about all of this. (Well, at least the first two headlines.)
Interactive Online Science Teaching
Researchers developed four units aligned with national science standards. These units, each roughly 10-14 weeks long, covered such topics as Knowing My Body and Our Place in the Universe.
We should note that researchers brought several theoretical perspectives to these science units. They took care to make them culturally relevant to the learners. They designed each unit with project-based learning principles in mind.
And, they made sure that students could interact with the material in different ways. Middle schoolers could control the pace of the lesson, move images around on the screen, get feedback, and navigate among several sources of information.
Even from this brief description, you can see how this interactive online science lesson could be especially appealing — and potentially especially effective — for 7th and 8th graders.
(You can learn more about the curriculum at this website.)
Results: By The Numbers…
First, the online units helped students learn.
In the control group, where teachers taught as they typically had done, students improved 5.7% from pretest to posttest. In the treatment group, where teachers used the units described above, students improved 16.7%.
When the researchers looked at students with learning disabilities and at English language learners, the raw data also showed promising improvements.
For ELL students, for example, the control group improved 4.9%, whereas the treatment group improved 15.0%.
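If you want the comparison in one place, here is a quick back-of-the-envelope sketch in Python using only the percentage gains reported above. (The raw pre- and post-test scores aren’t reproduced here, so this compares percentage points, not effect sizes.)

```python
# Gains as reported in the study write-up, in percentage points.
gains = {
    "all students": {"control": 5.7, "treatment": 16.7},
    "ELL students": {"control": 4.9, "treatment": 15.0},
}

for group, g in gains.items():
    advantage = g["treatment"] - g["control"]
    print(f"{group}: treatment advantage = {advantage:.1f} percentage points")
# all students: 11.0 percentage points; ELL students: 10.1 percentage points
```

In both cases the treatment group’s advantage lands at roughly ten to eleven percentage points.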
Reason for Concern #1: “Active Controls”
Researchers compared students who used their online curriculum to others who did not. This comparison to a “control group” allows them to say that their curriculum is better than the old way.
However, it’s quite possible that the students in the treatment group responded well simply because they did something different. In other words: perhaps the online curriculum didn’t help students learn, but the change of daily routine did.
The only way to rule out this possibility is to have the control group do something new as well. If the new online curriculum produces more learning than some other new curriculum, then we can reasonably conclude that the curriculum itself made the difference.
After all, in this hypothetical case, we can’t say that just changing something made a difference; both groups changed something.
However, because the control group in this study wasn’t “active” — that is, they didn’t do anything different or special — we can’t be certain why the researchers found the results they did. Maybe it was the interactive online science curriculum. But maybe it was just the change of pace.
The researchers’ own video — linked to above — highlights this concern. One of the teachers who used the online units says that she liked them because teachers simply hadn’t had an organized curriculum in the past. Each teacher had put something together on her own.
If that’s true for the schools in the control group, then this study compares schools that had no consistent, professionally developed curriculum at all with schools that had a curriculum developed by an Oregon university’s school of education.
In other words: the benefit could come from having any well-developed curriculum, not this curriculum specifically, and not necessarily an online one.
We simply can’t know.
Reason for Concern #2: Goodies
There’s no neutral way to say this. Teachers assigned to the online curriculum got a big gift bag. Teachers in the control group got dinky gift bags.
More specifically: teachers in the control group got a $300 bump. (Admittedly, that’s a nice perk for not having to do anything special.)
Teachers who used the online curriculum got $1500. (That’s a REALLY nice perk.) It’s five times as much.
And, they got some cool technology. And, they got extra PD and instructional resources. They even got 2 days of paid subs so they could attend that PD.
For all these reasons, it’s possible that the students did better with the online science curriculum because it helped them learn. It’s also possible that they did better because their teachers were really excited by all the loot they got.
Again, because of the study design, we simply can’t know.
What Should Teachers Do?
In my view, the jury is still out on online curricula.
On the one hand, they seem like a really good idea. On the other hand, the evidence in their favor is, at best, equivocal.
I wouldn’t be surprised if, 20 years from now, we all clapped our hands to our foreheads and asked, “What were we thinking?”
In other words: we can’t yet look to research to be certain of an answer. If you have an opportunity that appeals to your teacherly instincts, give it a try.
If it works, let me know. If not, don’t worry that you might be doing something wrong. After all: humans evolved learning face to face. It’s just possible that the e-version of that experience will never be as good.
Back in October, I published one of the blog’s most popular articles: a summary of a study showing that moderate drinking benefits memory.
In brief, that study showed that drinking before learning muddled memories. However, moderate alcohol after learning produced a modest but clear benefit.
You can understand why this research proved such a hit among teachers.
Bottoms up!
More About Alcohol and Learning
Back in December, Olga Khazan published an article in the Atlantic summarizing several studies about the effects of alcohol on memory. Alas, her take on the literature sees more bad news than good.
Most tellingly, she focuses on a study in Britain that tracks participants’ health over time. The short version of the findings: more alcohol meant a smaller hippocampus. And, generally speaking, a healthy hippocampus helps us form declarative memories.
This study also looked at participants’ ability to generate words. (It’s a test called “lexical fluency.”) Here again, even moderate alcohol intake meant that — over time — people had a harder time with this particular test.
Alcohol and Learning: Not All Bad News
Any complex study produces complex results, and this one is no exception.
First, although “lexical fluency” declined over time as a result of alcohol, there was no correlation between alcohol consumption and cognitive ability as measured by multiple tests at the time of the study.
In other words: that one particular mental ability declined, but that didn’t mean all of them did.
Also: the decline in lexical fluency was significant for men, but not women. (I suspect that 51% of you are happy to read that fact.)
Putting It All Together, with a Cozy Glass of Wine
We would, of course, love to have a clear understanding of alcohol’s relationship to learning. And, to brain health. And, to health overall.
Unfortunately, there are too many variables, and too many ways to measure them, for a simple answer.
I myself take medical advice from my doctor, not the interwebs. And, although I don’t drink wine because it might help me learn more, I do enjoy a nice Napa Cab on a rainy Boston evening.
For many decades, neuroscientists believed that adult brains don’t generate new neurons. Once childhood is over, the neurons you have are all the neurons you’ll get.
Then, in the 1960s, we started seeing evidence that adult brains DO INDEED create more neurons. That evidence got even stronger in the 1980s, thanks (believe it or not) to studies of songbirds.
When you go to Learning and the Brain conferences, you doubtless hear about adult neurogenesis. It is, we thought until this morning, one of the reasons you can learn new things.
Today’s Headline: No Adult Neurogenesis?
This article has been cropping up all over my newsfeed. Its headline: “Birth of New Neurons in the Human Hippocampus Ends in Childhood.”
The article is easy to read, and I encourage you to give it a look. It offers a helpful historical context, and digs into the implications of these findings.
The findings are so new that I haven’t yet seen much response to them. I’ll post updates as scholars start to grapple with this research.
In the meantime, you can take some reassurance from this research that scientific skepticism never flags. Even so “well-established” a finding as adult neurogenesis can be overturned when we get better data.
As Arturo Alvarez-Buylla, one of the researchers, puts it:
“I always try to work against my assumptions in lab,” he said. “We’ve been working on adult neurogenesis so long, it is hard to see that it may not happen in humans, but we follow where the data leads us.”
In 2012, researchers in Britain found that Omega 3 fish oil benefited students who struggled in school. In fact, it helped students both concentrate better and learn more.
That was exciting news, because we can provide those dietary supplements relatively easily. It sounded like an easy fix for a real problem.
However, other studies didn’t confirm this result. For that reason, the original lab decided to try a replication study. In other words: they repeated what they had originally done to see if they got the same results.
Omega 3 Fish Oil: The Bad News
Nope, they didn’t help.
You can review the study here. Most impressive — and most discouraging: chart after chart and graph after graph showing no meaningful difference between the students who got Omega 3 supplements and those who didn’t.
(By the way: nobody knew who got the supplements until after the study. It was, as they say, “blind.”)
In the muted language of research, the authors conclude:
In summary, this study did not replicate the original findings of significant, positive effects of omega-3 DHA on either learning or behavior. No systematic adverse effects from the supplementation were observed. As such the study does not provide supporting evidence for the benefits of this safe nutritional intervention.
Alas, this easy solution simply doesn’t pan out.
The Good News
The system worked.
When researchers come across a positive finding, they should both spread the news and double check their work.
That is, they should let us know that omega 3 fish oil might be beneficial, and run the study again to be sure.
Of course, replicating a study is expensive and time consuming; it’s easy to decide that other research priorities are more important.
In this case, however, the researchers did what they ought to have done. As a result, we know more than we did before. And, we’re not wasting time and money stuffing our children with needless dietary supplements.
We should all tip our hats to this research team for doing the right thing. I don’t doubt they’re disappointed, but they’ve shown themselves to be a real model for research probity.
(For another example of researchers sharing conflicting results, see this story from last October.)
__________________
PS: After I finished writing this post, I came across another article about fish. It might not help with working memory, but it just might help prevent MS.
In the debates between “progressive” and “traditional” educational theories, few arguments rage hotter than the battle between project based learning and direct instruction.
PBL’s proponents take a constructivist perspective. They argue that people learn by building their own meaning from discrete units of information.
In this view, teachers can’t simply download conclusions into students’ brains. We can’t, that is, just tell students the right answer.
Instead, we should let them wrestle with complexities and come to their own enduring understanding of the material they’re learning.
An Alternative Perspective: The Benefits of Direct Instruction
In a recent meta-analysis, Jean Stockard’s team argues that direct instruction clearly works.
Looking at 300+ studies from over 50 years, they conclude that DI benefits students in every grade, in a variety of racial and ethnic groups, with a variety of learning differences, from every socio-economic background.
Of course, this research conclusion challenges some often-repeated assurances that direct instruction simply can’t help students learn.
(The recent meta-analysis is, unfortunately, behind a paywall. You can, however, see some impressive graphs in an earlier white paper by Stockard.)
Another Alternative Perspective: Reinterpreting “Constructivism”
Interestingly, Stockard doesn’t disagree with a constructivist understanding of learning. Instead, she sees direct instruction as a kind of constructivism.
“DI shares with constructivism the important basic understanding that students interpret and make sense of information with which they are presented. The difference lies in the nature of the information given to students, with DI theorists stressing the importance of very carefully choosing and structuring examples so they are as clear and unambiguous as possible.”
(This quotation comes from a brief pre-publication excerpt of the meta-analysis, which you can find here.)
In other words: in Stockard’s view, the difference between PBL and DI isn’t that one is constructivist and the other isn’t.
Instead, these theories disagree about the kind of information that allows students to learn most effectively.
Simply put: PBL theorists think that relatively more, relatively unstructured information helps students in their mental building projects. DI theorists think that relatively less, relatively tightly structured information benefits students.
Stockard makes her own views quite plain:
“It is clear that students make sense of and interpret the information that they are given–but that their learning is enhanced only when the information presented is explicit, logically organized, and clearly sequenced. To do anything less shirks the responsibility of effective instruction.”
You might mentally add a “mic drop” at the end of that passage.
Other Sources
Of course, lots of people write on this topic.
John Hattie’s meta-meta-analyses have shown DI to be quite effective. This Hattie website, for example, shows an effect size of 0.60. (For Problem-based learning, it’s 0.12; for Inquiry-based teaching, it’s 0.35.)
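For readers who haven’t met “effect size” before, here is a minimal sketch of Cohen’s d, the kind of standardized mean difference these figures summarize. The scores below are hypothetical and have nothing to do with Hattie’s underlying data; the point is simply to show what a number like 0.60 measures.

```python
# Hypothetical test scores -- NOT Hattie's data. A toy example of Cohen's d.
import statistics

direct_instruction = [78, 82, 85, 74, 90, 81, 77, 88]
comparison         = [75, 79, 82, 71, 86, 77, 74, 83]

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

print(f"d = {cohens_d(direct_instruction, comparison):.2f}")
# for these made-up scores, d comes out around 0.66
```

Roughly speaking, an effect size of 0.60 means the average student in the DI condition scores about 0.6 standard deviations above the average comparison student.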
If you like a feisty blogger on this topic, Greg Ashman consistently champions direct instruction.
And, I’ve written about the difficulties of measuring PBL’s success here.
This glum question has a glum answer: yes, pollution harms working memory.
Researchers in Barcelona focused on children walking to school. Working with over 1200 students, 7-10 years old, they reached a grim conclusion. Children whose walk was more polluted experienced slower development of working memory.
(The same research project had already concluded that pollution in school slows working memory development as well.)
Why teachers care
If you’ve been to a Learning & the Brain conference, you know that working memory is essential for all classroom learning. It allows students to combine pieces of information into new, meaningful ideas.
The less working memory students have, the slower they are to read, acquire math skills, compare historical figures, and learn new oboe melodies.
In other words, damaging working memory is one of the worst things we can do in schools.
What teachers should do
Of course, pollution is too big a problem for teachers and schools to solve right away. We’ll need lots of social effort–and lots of political will–to make meaningful changes.
In the short term, the study’s authors warn against one seeming solution. We might reason that walking to school exposes children to pollution, so we should encourage them to ride in cars or buses. However, the health benefits of walking are obvious and important; we should encourage–not discourage–physical activity.
In the short term, the best we can do is encourage students to walk less polluted routes: away from major highways, closer to parks and forests.
Of course, such a solution isn’t available to all students. We’ll need bigger fixes over the long term.
For the time being, knowledge of the danger is the power that we have.