Academics have a reputation for using overly technical language. Just as any career comes with its own terminology, scientific fields often use highly precise and specialized vocabulary that is not easily comprehensible to anyone else. Unfortunately, in science this poses a unique issue because findings are often interpreted and applied outside of the field.
It’s a problem with a relatively straightforward (though incomplete) solution: explanation in simpler terms.
In addition to traditional science journalism, efforts such as Ten Hundred Words of Science, The People’s Science Forum, and our very own Learning & the Brain blog address this communication barrier in part by explaining and reducing jargon in sharing scientific research.
However, for educators and scientists looking to communicate about the science of learning, there’s another complicated language issue: when academics and educators use familiar words, but with different meanings attached. Subtle differences in how these professional worlds tend to use key terms may, inadvertently or not, overstate the findings of scientific work and lead to miscommunication.
Let’s take a look at three examples.
Example 1: Self-directed learning
How do educators think about the term “self-directed learning”? Here’s how Mindshift, a popular education blog affiliated with National Public Radio (NPR) and the Public Broadcasting Service (PBS), used the term in several 2015 articles:
- An article about Nick Bain, a student who experimented with taking a completely self-taught trimester of his junior year in high school.
- Examples of how teachers in Boise, Idaho, are structuring their classes to release responsibility to students, teaching them to lead and guide their own learning even in a low-income school. This includes implementing Google’s 20% time to allow students to pursue their own interests and learning.
- A two-part series about learning environments that offer the world’s most marginalized children tremendous choice and autonomy in their schooling, from egalitarian school structures to experiments in radical un-schooling.
These articles reflect how educators use and understand the term “self-directed learning”—as a kind of learning in which students take on a high level of personal responsibility and face a broad array of choices.
How do cognitive scientists use the term “self-directed learning”?
A recent review on self-directed learning was published in Perspectives on Psychological Science1 (results of this paper are summarized here, and point to potential pros and cons of self-directed learning). Notice how the same term is used in this context:
- In traditional cognitive science memory tasks, study participants are often presented with flashcards one at a time. As a more self-directed alternative, study participants chose the timing and order of the terms they wanted to study.
- Cognitive scientists are interested in how people learn to identify different categories, like how to tell the difference between a cat and a dog, or a nail and a bolt. Typically, this takes place by presenting people with lots of examples of objects, one at a time. Some scientists studying self-directed learning instead gave study participants the opportunity to select the objects they wanted to learn about.
- Another major topic in cognitive science is causal learning—how do people figure out causal relationships between different things? Some causal learning occurs just by observing when different variables seem to change together. Studies of more self-directed approaches to causal learning allowed participants to change one variable and observe the consequences.
Here, the term “self-directed learning” generally refers to a highly limited set of learning choices. Rather than having almost no choices as an entirely passive learner being presented with material, people in studies of self-directed learning are typically given a small number of simple choices.
While this might seem like an impoverished view of “self-directed learning,” even these simple choices introduce many new variables for scientists to study. For example, when study participants choose which flashcards to use, scientists are faced with many additional considerations—what aspect of how people used the flashcards explained how well different people learned the material? Was it the order, timing, and/or spacing of how people chose to study that made a difference?
Example 2: Executive functions
“Executive functions” is an umbrella term for cognitive processes that regulate thoughts and actions. The usage of this term in educational contexts tends to focus on higher-level processes like planning, judgment, decision-making, and self-regulation.
However, much of the work on executive functions in cognitive neuroscience focuses on more basic processes.2 For example, one commonly studied component of executive functions is called inhibition, or how people suppress simple impulses. One common way of studying inhibition is called the “go/no-go task”. In this task, participants are instructed to press a button in response to some stimuli, and then not to press the button in response to other stimuli (I’ve previously written about a study using this task).
Much research on executive functions does not directly report on some of the higher-level regulatory skills educators might be interested in. Many executive functions, like inhibition, are thought to be building blocks of higher-level tasks, like planning. However, they’re not identical; while these skills are likely related, it doesn’t always make sense to lump them together.
Example 3: Musical ability
The perception of pitch is thought to have a genetic basis. On average, identical twins, who share nearly their entire DNA, perform more similarly on a pitch recognition task than fraternal twins, who share approximately half of their DNA.3 And this tends to be true even when one identical twin has invested a lot more time in musical practice than the other.
Does this mean that musical ability is inherited?
It’s tempting to say so, and some articles reporting on similar findings do take this route.
But in an interview with Carry the One Radio, neuroscientist and professional musician Indre Viskontas says that using these lower-level perceptual skills to judge musical ability is “a little like testing the eyesight of a painter to gauge whether or not they’re a good painter, a good artist…I wouldn’t even say that that gets really even that close to what we’d call musicality.”4
What we consider a musically gifted performance of course relies in part on the artist’s sensitive hearing, but these two “musical abilities” are quite different in their level of complexity.
Tomato, tomahto. So what?
What’s the pattern here? Some of the same terms that represent highly simplified concepts in the cognitive sciences tend to signify, or are mistakenly equated with, far more complex versions of those ideas in the education world.5 Exaggeration occurs when conclusions from research in the cognitive sciences are, inadvertently or not, generalized to this much higher level without an empirical basis.
When cross talk happens, it’s not always clear to what extent people are talking about the same thing. But they’re using the same words—and often nobody clarifies (or knows to)!
While new studies can be incredibly exciting, we should interpret them cautiously. Neuroscience reporting is frequently exaggerated, particularly if the initial press release at all overstates the results.6,7 Even when the reporting is accurate, plenty of published results aren’t replicable, meaning that new researchers repeating the same study don’t find the same results.8
In “Combating Neurohype,” Mo Costandi asks researchers to take responsibility for accurate reporting of their results.9 I’d argue that part of this responsibility is actively taking into account how readers might interpret word choices with varied emphases in different spheres.
For educators and others reading and talking about science, it’s important to develop a healthy skepticism with regard to the headline. Going beyond it usually reveals that the exciting result is a bit more nuanced and perhaps limited, raising critical questions about when and where such research might be applicable (or not). Developing these critical questions and getting them in front of scientists might propel what we know about learning and the brain even further.10
References & Further Reading
- Gureckis, T. M., & Markant, D. B. (2012). Self-Directed Learning: A Cognitive and Computational Perspective. Perspectives on Psychological Science, 7(5), 464–481. [Paper]
- Miyake, A., & Friedman, N. P. (2012). The Nature and Organization of Individual Differences in Executive Functions: Four General Conclusions. Current Directions in Psychological Science, 21(1), 8–14. [Paper]
- Drayna, D., Manichaikul, A., de Lange, M., Snieder, H., & Spector, T. (2001). Genetic Correlates of Musical Pitch Recognition in Humans. Science, 291, 1969–1972. [Paper]
- “The Sound of Music(ality)”. (2015). Carry the One Radio. [Audio Podcast]
- Howard-Jones, P. A. (2014). Neuroscience and education: myths and messages. Nature Reviews. Neuroscience, 15(12), 817–824. [Paper]
- O’Connor, C., Rees, G., & Joffe, H. (2012). Neuroscience in the public sphere. Neuron, 74(2), 220–6. [Paper]
- Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C. A., et al. (2014). The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ, 349, g7015. [Paper]
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). [Paper]
- Costandi, Mo. (2015). Combating Neurohype. The Neuroethics Blog. [Blog]
- Christodoulou, J. A., & Gaab, N. (2009). Using and misusing neuroscience in education-related research. Cortex, 45(4), 555–557. [Paper]
- Center on the Developing Child at Harvard University (2011). Building the Brain’s “Air Traffic Control” System: How Early Experiences Shape the Development of Executive Function: Working Paper No. 11. [Organization]