A handy video from TED-Ed gives some pointers on spotting misleading graphs. Pay close attention to its warnings about meddling with the y-axis. Believe it or not, this sort of thing happens frequently in the world of science publishing.
(If you’re interested in visual representation of data, I encourage you to look up the work of Edward Tufte. He’s written some amazing books, and is a fun and provocative thinker.)
One note about the TED-Ed video: it has clear political leanings–so clear, in fact, that I’ve hesitated to link to it. I hope that you will watch it, because its suggestions are both important and useful. Rest assured: my goal is not to sway your political views, but to give you a helpful tool for analyzing scientific information.
If you attend Learning and the Brain conferences, or read this blog regularly, you know all about the well-researched benefits of retrieval practice. (You can read earlier articles on this subject here and here.)
The short version of the story: if we ask students to recall ideas or processes that they have learned, they are likelier to learn those ideas/processes deeply than if we simply go over them again.
But, does retrieval practice always work?
The question answers itself: almost nothing always works. (The exception: in my experience, donuts always work.)
Over at The Learning Scientists, Cindy Wooldridge writes about her attempt to use retrieval practice in her class–and the dismaying results.
From her attempt, Wooldridge reaches several wise conclusions. Here are two of them:
Another very important take-away is that learning science is not one size fits all. Just because we say retrieval practice works, doesn’t mean it works in all scenarios and under all circumstances.
This is why it’s so important to be skeptical. Use objective measures to assess whether and how a teaching strategy is working for your students and take time to do some reflection on how and why it worked (or didn’t). This is another great example of a time when my intuition said that this absolutely should work, but we should follow the evidence and not just intuition.
To learn more about her effort and her conclusions, click here.
What’s not to love? The photo shows a mug of cocoa, with an already-nibbled chocolate bar in the background. Even better, the headline alerts us that both the cocoa and the chocolate “enhance cognitive abilities and memory.”
For once, this headline is not overselling the scientific article. In the abstract, the authors really do say
Although still at a preliminary stage, research investigating the relations between cocoa and cognition shows dose-dependent improvements in general cognition, attention, processing speed, and working memory.
WOW.
The authors even use the word “nutraceutical”–new to me–to emphasize that chocolate is both nutritious and pharmaceutically beneficial.
News that sounds this good can’t be true. Can it?
Maybe the News Really Is That Good
For their review, Valentina Socci’s team assembles a solid list of articles touting the physical benefits of cocoa flavanols: compared to control groups, those who have chocolate or cocoa over several days/weeks show improvements in blood pressure, insulin resistance, and blood flow to the brain.
They also show exciting changes in various kinds of brain activity. One study, looking at a particular measure of brain activity (SSVEP), showed
changes in SSVEP average amplitude and phase across several posterior parietal and centro-frontal sites that indicated an increased neural efficiency in response to the working memory task.
Increased neural efficiency on a working memory task! Now you’ve got my attention…
Then Again, Maybe Not…
All that chocolate may have changed SSVEP average amplitude and phase. However, as teachers, we don’t really care about that: we care about learning. Did this “increase in neural efficiency” actually improve working memory?
Nope.
Similarly, another study showed that chocolate improved neural activity “in various brain regions in response to an attention switching task.”
But, that improved neural activity didn’t make them any better at switching attention.
In fact, of the six studies that focus specifically on one-time doses (not weeks-long doses), two showed no meaningful cognitive differences for those who had chocolate/cocoa, and the others showed differences in some measures or some participants–but not in all.
In other words, the research is suggestive and interesting, but hardly persuasive.
Who Is Learning?
I suspect that most of the people reading this blog are in the world of PK-12 education. How many of the people being studied were PK-12 students?
None.
For the studies looking at one-time doses of cocoa, most of the participants were college students.
For the studies looking at daily shots, many (most?) of the participants were older than 55.
In fact, many of these studies focused on people with some kind of cognitive impairment: typically dementia.
Reasonable Conclusions
Based on the data gathered here, I think we can reasonably say that for older people–especially those with some cognitive problems–cocoa flavanols might have some physiological benefits (blood pressure, insulin levels), and might even offer some cognitive boosts as well.
That’s exciting and helpful if you teach people in that group, and especially if you are taking care of someone in it. (If you’re looking after someone with dementia, by the way, don’t rely on a blog for medical advice: talk with a doctor.)
However, we have no good reason to think that chocolate offers cognitive benefits for PK-12 students. Perhaps it does–but this article simply doesn’t present direct evidence to support that conclusion.
At the same time, I am entirely willing to hypothesize that chocolate offers substantial emotional benefits. For this reason, S’mores will be served at the upcoming Learning and the Brain Conference…
For researchers and research-readers alike, the data analysis portion of a study is many things: complex, exciting, frustrating, intriguing, and sometimes even befuddling.
And, analytics are always on the move. With each new study, researchers are developing increasingly intricate and elegant ways to make meaning of their data. At the same time, powerful statistical software, like SPSS or Stata, is continuously expanding its capability to process such sophisticated research designs.
Certainly, many long hours go into choosing a study’s analytic approach. Researchers must develop and refine their hypotheses; organize their data in such a way that statistical software can read it; and choose a statistical method (i.e., a mathematical approach) to test their research questions.
That last part about choosing a statistical method is where things can get tricky. In general, different statistical methods are not simply multiple ways of doing the same thing. Whereas something like a division problem may use different techniques (e.g., long division, trial-and-error) to get the same result, different statistical methods can analyze the same data yet produce differing, and even contradictory, results.
Differences in design: A little goes a long way
Just as French philosopher Jean-Paul Sartre liked to say “we are our choices,” in many ways our research results are our choices, too.
A study conducted by Burchinal & Clarke-Stewart illustrates this well. [1] These authors noticed that two different research teams had analyzed the same longitudinal data set, yet found (and published) substantially different results.
These two research teams analyzed data from the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care: a large, national study that followed the development of 1,364 children from six months of age. Both teams were also interested in the same question: what is the impact of mothers’ employment, and children’s subsequent nonmaternal child care, on children’s early cognitive growth?
The NICHD Early Child Care Researchers (Team 1) were first in line to test this question. After a series of analyses, this team concluded that the age at which children entered nonmaternal care, and the amount of time spent in such care, showed no relation to children’s cognitive performance up to age three. [2]
Next, Team 2 (Brooks-Gunn, Han, & Waldfogel, 2002) tested this same question. However, in contrast to Team 1, they concluded that mothers’ full-time employment during children’s first nine months was indeed associated with impaired cognitive functioning when the children were three years of age. [3]
Speaking different analytic languages
The contradictory findings between these two research teams were not only curious, but also important to reconcile. After all, the difference between advising mothers of young children to work or not work is a big one. And, such a recommendation has implications for state and federal programs, such as Temporary Assistance for Needy Families, that assist young mothers in finding employment.
Burchinal & Clarke-Stewart therefore conducted a new, third study investigating how each team’s analytic design may have engendered the contradictory results.
Two approaches
First, Team 1 used a conservative, top-down analytic approach. This approach:
uses all available information, such as multiple outcome variables and data from all participants
begins with a general test of significant relations between variables and works its way down to more specific comparisons
helps researchers avoid exaggerating the significance of associations found when working with large data sets
Team 2, on the other hand, used an a priori comparison approach. This technique:
examines hypotheses and variable relations chosen by researchers before (a priori) data exploration
utilizes a small subset of participants and/or variables in order to conduct a small set of comparisons between explicitly chosen participants and/or variables
is helpful when theory or previous research strongly implies a relation between specific variables or constructs
Thus, it seemed likely that investigating a smaller group of participants, and analyzing a smaller set of outcome data, contributed to Team 2’s finding of a relation between maternal employment and children’s cognitive growth. On the other hand, utilizing the full set of study participants, and analyzing all possible child outcome data, seemed to result in Team 1’s lack of such a finding.
To confirm this hypothesis, Burchinal & Clarke-Stewart analyzed the same participants and variables that Team 2 did; but, they used the top-down approach this time. The result of these new analyses? No significant findings.
The authors therefore reported Team 1’s findings—that is: it doesn’t hurt young children for their mothers to get a job—as the more reliable take-away.
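For readers who like to see the mechanics, here is a minimal simulated sketch of the contrast between these two approaches. The numbers and group labels below are invented for illustration; they are not the NICHD variables, and the tests shown are simpler stand-ins for either team’s actual models.

```python
# A minimal simulated sketch (invented numbers and labels, not the NICHD data)
# of how two defensible analytic choices can tell different stories about the
# same data set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical cognitive scores for children in three made-up care groups.
n = 40
groups = {
    "full_time_care": rng.normal(100, 15, n),
    "part_time_care": rng.normal(103, 15, n),
    "parental_care":  rng.normal(106, 15, n),
}

# Top-down flavor: begin with an omnibus test across all groups before
# drilling down to any specific comparison.
f_stat, omnibus_p = stats.f_oneway(*groups.values())
print(f"Omnibus ANOVA across all groups: p = {omnibus_p:.3f}")

# A priori flavor: go straight to one pre-chosen contrast between two groups.
t_stat, contrast_p = stats.ttest_ind(
    groups["full_time_care"], groups["parental_care"]
)
print(f"Pre-chosen two-group contrast: p = {contrast_p:.3f}")

# Depending on the draw, the single pre-chosen contrast can cross p < .05
# while the omnibus test does not: same data, two approaches, two headlines.
```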
A cautionary tale
It is important to note that both the top-down approach and the a priori comparison approach are well-respected and well-established analytic techniques. And, as with all analytic techniques, each has strengths, weaknesses, and research questions for which its use is optimal.
But a study such as the one conducted by Burchinal & Clarke-Stewart provides an important cautionary tale. That is, when we, as consumers of research findings, draw conclusions from empirical work, it is important to remain attentive to the type of analyses that were used to engender such claims.
Of course, we probably won’t all end up being experts in every area of analytic approach. But perhaps a good rule of thumb is this: when we see a small amount of data being used to make big claims, it’s best to take a second look, get a second opinion, or see whether the study has been replicated.
References
[1] Burchinal, M.R. & Clarke-Stewart, K.A. (2007). Maternal employment and child cognitive outcomes: The importance of analytic approach. Developmental Psychology, 43, 1140-1155.
[2] National Institute of Child Health and Human Development Early Child Care Research Network. (2000). The relation of child care to cognitive and language development. Child Development, 71, 960–980.
[3] Brooks-Gunn, J., Han, W.J., & Waldfogel, J. (2002). Maternal employment and child cognitive outcomes in the first three years of life: The NICHD Study of Early Child Care. Child Development, 73, 1052–1072.
Over at The Anova, Freddie deBoer has a knack for writing about statistical questions and making them not just readable but interesting.
Case in point: he recently explored the New York Times feature about school choice.
Although careful to praise the Times authors for their genuine concern and dedication, he thoughtfully explicates the numerous ways in which their article gets important questions wrong because it doesn’t think its way through statistics carefully enough.
For example: when we say we want students to do better, does that mean we want individual students to rise above the average, or that we want to raise the average for students overall?
As deBoer sees the field, we typically say we want the latter, but focus on (and tell stories about) the former.
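To make that distinction concrete, here is a toy numerical sketch. The scores below are my own invented numbers, not anything from deBoer’s piece.

```python
# A toy illustration of the distinction above: moving one student above the
# average is not the same thing as raising the average.
import statistics

scores = [70, 75, 80, 85, 90]
print(statistics.mean(scores))          # 80.0

# One student gains 10 points while another slips by 10 (a pure reshuffle):
reshuffled = [70, 85, 80, 75, 90]
print(statistics.mean(reshuffled))      # still 80.0: individuals moved, the average didn't

# Raising the average requires gains that aren't offset elsewhere:
raised = [s + 5 for s in scores]
print(statistics.mean(raised))          # 85.0
```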
DeBoer’s article doesn’t express an opinion about school choice (I’m sure he has one, but he doesn’t tip his hand here). But, it’s an excellent reminder that statistics can help us only so long as we are clear-minded about what they really measure.
As he glumly says in his final paragraph:
It’s not just that we can’t get what we want. It’s that nobody really knows what they’re trying to accomplish.
Oxytocin is often described as the “love hormone.” Apparently lots of oxytocin is swirling around when mothers interact with their babies, and so its role in maternal affection is much trumpeted.
You may well hear people say that, in schools, we need to be sure that our students have more oxytocin in their lives.
However, folks giving this advice may be unsettled to hear that recent research describes oxytocin as “the relationship crisis hormone.”
Researchers in the US and Norway have found that, in romantic relationships, discrepancies in romantic interest lead to higher levels of oxytocin production.
In my mind, this news underlines an important general conclusion:
a) The study of psychology is complicated.
b) The study of neuroscience is really complicated.
c) The study of hormones is absurdly complicated. I mean, just, you cannot believe how complicated this stuff gets.
As a result, I encourage you to be wary when someone frames teaching advice within a simple hormonal framework. If you read teaching advice saying “your goal is to increase dopamine flow,” it’s highly likely that the person giving that advice doesn’t know enough about dopamine.
(BTW: it’s possible that the author’s teaching advice is sound, and that this teaching advice will result in more dopamine. But dopamine is a result of the teaching practice–and of a thousand other variables–not the goal of it. The goal of the teaching is more learning. Adding the word “dopamine” to the advice doesn’t make it any better.)
In brief: if teaching advice comes to you dressed in the language of hormones, you’ll get a real dopamine rush by walking away…
A friend recently referred me to this online article (at bigthink.com) about this research study: the eye-catching phrase in both headlines being “Teaching Critical Thinking.”
(The online article is even more emphatic: “Study: There Are Instructions for Teaching Critical Thinking.”)
This headline sounds like great news. We can do it! Just follow the instructions!
We should, of course, be delighted to learn that we can teach critical thinking. So often, especially in upper grades, schools emphasize teaching “not what to think, but how to think.”
Every time we say that, we are—in effect—claiming to be teaching critical thinking.
The author of the BigThink article summarizes the societal importance of critical thinking this way:
We live in an age with unprecedented access to information. Whether you are contributing to an entry on Wikipedia or reading a meme that has no sources cited (do they ever?), your ability to comprehend what you are reading and weigh it is a constant and consistent need. That is why it is so imperative that we have sharp critical-thinking skills.
Clearly, students need such skills. Clearly we should teach them.
It Can Be Taught!
The study itself, authored by N. G. Holmes and published in the Proceedings of the National Academy of Sciences, follows students in a college physics course. The course explicitly introduced its students to a process for thinking critically about scientific data; it emphasized the importance of this process by grading students on their early attempts to use it.
For example (this excerpt, although complex, is worth reading closely):
“students were shown weighted χ2 calculations for least squares fitting of data to models and then were given a decision tree for interpreting the outcome. If students obtain a low χ2, they would decide whether it means their data are in good agreement with the model or whether it means they have overestimated their uncertainties.”
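For the curious, here is a rough sketch of the kind of calculation that excerpt describes. The measurements, model, and decision thresholds below are invented for illustration; they are not taken from Holmes’s course materials.

```python
# A rough sketch of a weighted chi-squared check of data against a fitted
# line, followed by a toy version of the decision tree the excerpt describes.
import numpy as np

# Invented measurements y with uncertainties sigma at positions x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
sigma = np.array([0.3, 0.3, 0.4, 0.4, 0.5])

# Weighted least-squares fit of a straight line (numpy's weights multiply the
# residuals, so w = 1/sigma gives each point influence proportional to its precision).
slope, intercept = np.polyfit(x, y, 1, w=1/sigma)
model = slope * x + intercept

# Weighted chi-squared, reduced by degrees of freedom (N points - 2 fitted parameters).
chi2 = np.sum(((y - model) / sigma) ** 2)
chi2_reduced = chi2 / (len(y) - 2)
print(f"reduced chi^2 = {chi2_reduced:.2f}")

# Toy decision rule, in the spirit of the excerpt's decision tree.
if chi2_reduced < 1:
    print("Low chi^2: good agreement with the model -- or uncertainties overestimated.")
elif chi2_reduced > 3:
    print("High chi^2: the model may be inadequate -- or uncertainties underestimated.")
else:
    print("chi^2 near 1: data and model are reasonably consistent.")
```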
Early in the course, the instructors often reminded the students to use this process. By term’s end, however, those instructions had been faded, so the students who continued to use it did so on their own.
The results?
Many students who had been taught this analytical process continued to use it. In fact, many of them continued to use it the following year in another course taught by a different professor.
In other words: they had been taught critical thinking skills, and they learned critical thinking skills.
Success!
It Can Be Taught?
Sadly, this exciting news looks less and less promising the more we consider it.
In the first place, despite the title of his article, Holmes doesn’t even claim to be teaching critical thinking. He claims to be teaching “quantitative critical thinking,” or the ability “to think critically about scientific data and models [my emphasis].”
Doubtless our students need this valuable subset of critical thinking skills. And yet, our students think about many topics that defy easy quantification.
If we want our students to think critically about a Phillis Wheatley poem, or about the development of the Silk Road, or about the use of gerundives, we will quickly recognize they need a meaningfully different set of critical thinking skills.
How, for example, would a student use “weighted χ2 calculations for least squares fitting of data” to compare the Articles of Confederation with the Constitution of the United States?
To return to the examples offered in BigThink’s enthusiastic paragraph: despite this author’s enthusiasm, it’s not at all certain this procedure for analyzing “scientific data and models” will help us update a Wikipedia entry, or critique an unsourced meme.
(It might, but—unless we’re editing a very particular kind of Wikipedia entry, or reading a very statistical meme—it probably won’t.)
In brief: ironically, the headlines implying that we can “teach critical thinking” generally do not stand up to critical thought.
The Bigger Picture
Cognitive scientists, in fact, regularly doubt the possibility of teaching a general set of critical thinking skills. And here’s one big reason why:
Different disciplines require different kinds of critical thought.
Critical thinking in evolutionary biology requires different skills than critical thinking in comparative theology.
The field I’m in uses psychology and neuroscience research to inform teaching; hard experience has taught me that the fields of psychology and neuroscience demand very different critical thinking skills from their practitioners.
Perhaps your own teaching experience reveals the same pattern:
The English department where I taught included some of the sharpest minds I know: people who can parse a sonnet or map a literary genre with giddy dexterity. Their critical thinking skills in the world of English literature can’t be questioned.
And yet, many of these same people have told me quite emphatically that they are hopeless at, say, math. Or, chemistry. Or, doing their taxes. Being good critical thinkers in one discipline has not made them successful at critical thought in others.
Chapter 2 of Daniel Willingham’s Why Don’t Students Like School explores this argument at greater length.
The Smaller Picture
There’s a second reason that it’s hard to teach general critical thinking skills: knowledge of details.
To think critically about any topic, we need to know a very substantial amount of discipline-specific factual information. Finding those facts on the interwebs isn’t enough; we need to know them cold—have them comfortably housed in long-term memory.
For example: to use Holmes’s critical thinking technique, you would need to know what “weighted χ2 calculations for least squares fitting of data” actually are.
Even more: you’d need to know how to calculate them.
If you don’t have that very specific kind of detailed knowledge, you’re just out of luck. You can’t think critically in his world.
Another example. Much chess expertise comes from playing lots and lots of chess. As Chase and Simon’s famous study has shown, chess experts literally see chess boards differently than do chess novices.
You really can’t think like a chess expert (that is, you can’t engage in critical chess thinking) until you can see like a chess expert; and, seeing like a chess expert takes years. You need to accumulate substantial amounts of specific information—the Queen’s Gambit, the Sicilian Defense—to make sense of the chessboard world.
Your own teaching experience almost certainly underlines this conclusion. Let me explain:
How often does it happen that someone learns you’re a teacher, and promptly offers you some heartfelt advice on teaching your students more effectively? (“I saw this AMAZING video on Facebook about the most INSPIRING teacher…”) How often is that advice, in fact, even remotely useful?
And yet, here’s the surprise: the person offering you this well-meaning advice is almost certainly an expert in her field. She’s an accomplished doctor, or financial adviser, or geologist, or jurist. In her field, she could out-critical-think you with most of her prefrontal cortex tied behind her occipital lobe.
Unfortunately, her critical thinking skills in that field don’t transfer to our field, because critical thinking in our field requires a vast amount of very specific teaching knowledge.
(By the way: twice now this post has assumed you’re a teacher. If you’re not, insert the name of your profession or expertise in the place of “teacher.” The point will almost certainly hold.)
Wishing and Thinking, not Wishful Thinking
As so often happens, I feel a bit like a grinch as I write this article. Once again, I find myself reading news I ought to find so very exciting, and instead finding it unsupported by research.
Truthfully, I wish we could teach critical thinking skills in general. If you’ve got a system for doing so, I genuinely hope you’ll let me know. (Inbox me: [email protected])
Even better: if you’ve got research that shows it works, I’ll dance a jig through Somerville.
But the goal of this organization—and the goal of Mind, Brain, and Education—is to improve psychology, neuroscience, and pedagogy by having these disciplines talk with each other deeply and knowledgeably.
And with that deep knowledge—with critical thinking skills honed by scientific research—we know that critical thinking skills must be taught discipline by discipline; and, they must be honed through extensive and specific practice.
This task might sound less grand than “teaching critical thinking skills.” And yet, by focusing not on lofty impossibilities, but on very realistic goals, we can indeed accomplish them—one discipline at a time.
L&tB bloggers frequently write about working memory — and with good reason. This cognitive capacity, which allows students to reorganize and combine pieces of information into some new conceptual structure, is vital to all academic learning.
And: we don’t have very much of it.
For example: our grade school students may know the letters C, A, and T. But, putting letters together to form the word “cat” can be a challenge for new readers. After all, that new combination is a working memory task.
Putting those letters together with another letter to make the word “catch” — well, that cognitive effort can bring the whole mental exercise to a halt. (Psychologists speak of “catastrophic failure,” an apt and vivid phrase.)
When teachers learn about the importance of working memory and the limitations of working memory, we often ask an obvious question: what can we do to make working memory bigger?
How to Embiggen Working Memory
This simple question has a surprisingly complicated set of answers.
The first thing to do: wait. Our students’ working memory is getting bigger as they age. We don’t need to do anything special. (Here is a study by Susan Gathercole showing how working memory increases from ages 4-15.)
The second thing to do: watch researchers argue.
Some scholars believe that working memory training does increase its capacity; some companies sell products that claim to do just that.
For the most part, however, the field is quite skeptical. A recent meta-analysis (here) and several classroom studies (here and here) find that working memory training just doesn’t have the effect we’d like it to. And, of course, that ineffective training takes up valuable time and scarce money.
As I read the field, more scholars are skeptics than believers.
Today’s Headline
All that information is important background for a headline I saw recently: “Buzzing the Brain with Electricity Can Boost Working Memory.” (Link here.)
According to this study, weak electrical stimulation to the middle frontal gyrus and the inferior parietal lobule (not joking) temporarily synchronizes theta waves (obvi), and thereby enhances WM function.
Aha! At last! A solution!
When our students struggle with a working memory task, now we just give them a helpful little ZAP, and they’ll be reading like the Dickens. (Or: solving complex math problems. Or: analyzing Sethe’s motivation. Or: elucidating the parallels between US wars in Korea and Vietnam.)
In other words: all those skeptics can now become believers, as working memory problems become a thing of the past.
Beyond the Headline
Or, maybe not yet a thing of the past.
First, it’s always important to remember that science works incrementally. This study is only one study, offering initial testing of a hypothesis.
Second, it’s quite a small study. We’ll need to test this idea many, many more times with many, MANY more people.
Third–and this is my key point–the authors of the study do not even suggest that this technique has classroom uses. Instead, to quote from the Neuroscience News article, “[t]he hope is that the approach could one day be used to bypass damaged areas of the brain and relay signals in people with traumatic brain injury, stroke or epilepsy.”
In other words: the present hypothesis isn’t about helping students with typical working memory capacity to increase it. Instead, it’s about helping people with damaged working memory capacity to boost it — temporarily.
999 Steps to Go
Teachers can be tempted by flashy headlines–oversimplified as they must be–to pounce on scientific advances as practical classroom solutions.
If we’re going to be responsible, even critical, consumers of psychology and neuroscience, however, we must learn to read this research in the spirit in which it is intended. In these scientific realms, the intended spirit is almost always: “here’s an interesting incremental step. Let’s think about how to take one more.”
Classroom uses may be at the end of this journey of a thousand steps. Until then, we should keep our students’–and our own–working memory limitations clearly in mind.