Diagnosing ADHD with MRI
Andrew Watson

How do we know if a student has ADHD?

Typically, we observe behavior.

To what degree is the student inattentive? (That’s one kind of ADHD.) To what degree is s/he hyperactive? (That’s another type.) Perhaps the student demonstrates both kinds of behavior.

If these behaviors last long enough, and cause enough distress to be “clinically significant,” we can then make a diagnosis.

What would happen, however, if instead of looking at behavior, we could look at the student’s brain? Could a brain scan ever replace a behavioral study to make a diagnosis?

The Future Might Be Now

A just-published study starts us down this road.

By scanning the brains of 33 just-diagnosed/never treated students, and comparing them to the brains of 87 control subjects, researchers identified three brain areas substantially correlated with two subtypes of ADHD: inattentive, and combined inattentive/hyperactive.

(For the neurally curious, those three areas are the left temporal lobe, the bilateral cuneus, and regions around the left central sulcus.)

If the future is now, we might conclude that we can use MRI imaging to diagnose students, without having to observe their behavior.

The Future Might Be a Long Way Off

Despite all this exciting news, we have many reasons not to rush toward neuro-diagnosis of ADHD just yet.

First: the scans correctly distinguished between those who DO and those who DON’T have ADHD 75% of the time. That might sound impressive…unless you’re one of the 25% of cases where they got it wrong.

Second: the scans distinguished between Inattentive-type ADHD and Inattentive/Hyperactive-type ADHD 80% of the time. So, again, 1 in 5 of the participants would have been mis-diagnosed.
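To put those two accuracy figures in perspective, here is a back-of-the-envelope sketch in Python. It assumes, as a simplification for illustration, that the reported 75% and 80% accuracies apply uniformly across the study's 33 ADHD participants and 87 controls:

```python
# Back-of-the-envelope sketch: how many participants the reported
# accuracies imply were misclassified. Assumes the 75% and 80% figures
# apply uniformly across participants -- a simplification for illustration.

adhd, controls = 33, 87          # participants in the study
total = adhd + controls          # 120 scanned participants

# 75% accuracy distinguishing ADHD from non-ADHD:
diagnosis_errors = total * (1 - 0.75)

# 80% accuracy distinguishing the two ADHD subtypes
# (this only applies to the 33 participants who have ADHD):
subtype_errors = adhd * (1 - 0.80)

print(f"~{diagnosis_errors:.0f} of {total} misclassified as having/not having ADHD")
print(f"~{subtype_errors:.0f} of {adhd} assigned the wrong ADHD subtype")
```

Even at accuracies that sound respectable, the absolute number of misdiagnoses in a modest sample is striking.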

Third: the study didn’t include any students with purely Hyperactive-type ADHD. That’s a big gap in the diagnostic ability of MRI. (The authors explain that there is a low prevalence of this subtype in their research pool.)

Fourth: in a switch to cross-cultural perspectives, we must notice that different countries and cultures define “appropriate behavior” differently. Behavior that seems “clinically significantly” hyperactive or inattentive in one culture might be entirely appropriate in another. For this reason, the fact that this research was done in China means we must be very thoughtful about applying its conclusions to students from a non-Chinese cultural context.

(To be very clear on this point: I’m NOT saying that Chinese researchers can’t produce meaningful findings, or that ADHD doesn’t matter in China, or anything like that. I AM saying that cultures define “appropriate behaviors” differently, and so when behavior becomes diagnosable, we must be careful about cross-cultural applications. And we must be especially careful when looking for differences in neural structures that underlie those behaviors.)

Fifth: Chinese psychologists use a somewhat different set of terms in describing ADHD than do American psychologists.  They are, quite possibly, looking for neural correlates of meaningfully different behavior than we would for a Diagnostic and Statistical Manual diagnosis of ADHD.

Sixth: changing perspectives once again, we should note that MRI scans are crashingly expensive. If we’re going to start diagnosing students this way, we need to have thoughtful discussions about the services we’ll stop providing in order to make these funds available.

A Balanced Perspective

With this daunting list of reasons to pause, I don’t mean to dismiss the importance of this research.

Instead, I want to be sure that we look at it with an appropriate balance of enthusiasm and caution.

Enthusiastically, I can say that the future possibility of MRI diagnoses of ADHD could be very helpful.

For one thing, when people recognize that there are consistent and meaningful differences in neural structures, they might be less likely to say “Well, the kid just needs to try harder to pay attention.”

Cautiously, I can say that these helpful possibilities are a long way in the future, and we should not let our enthusiasm prompt us to embrace them before they’re ready for effective, culturally appropriate, and affordable use.

Andrew Watson

Here’s a hypothetical situation:

Let’s say that psychology researchers clearly demonstrate that retrieval practice helps students form long-term memories better than rereading the textbook does.

However, despite this clear evidence, these researchers nonetheless emphatically recommend that students avoid retrieval practice and instead reread the textbook. These researchers have two justifications for their perverse recommendation:

First: students aren’t currently doing retrieval practice, and

Second: they can’t possibly learn how to do so.

Because we are teachers, we are likely to respond this way: “Wait a minute! Students learn how to do new things all the time. If retrieval practice is better, we should teach them how to do it, and then they’ll learn more. This solution is perfectly obvious.”

Of course it is. It’s PERFECTLY OBVIOUS.

Believe It Or Not…

This hypothetical situation is, in fact, all too real.

In 2014, Pam Mueller and Dan Oppenheimer did a blockbuster study comparing the learning advantages of handwritten notes to laptop notes.

Their data clearly suggest that laptop notes ought to be superior to handwritten notes as long as students learn to take notes the correct way.

(The correct way is: students should reword the professor’s lecture, rather than simply copy the words down verbatim.)

However–amazingly–the study concludes

First: students aren’t currently rewording their professor’s lecture, and

Second: they can’t possibly learn how to do so.

Because of these two beliefs, Mueller and Oppenheimer argue that–in the witty title of their article–“The Pen is Mightier than the Laptop.”

But, as we’ve seen in the hypothetical above, this conclusion is PERFECTLY OBVIOUSLY incorrect.

Students can learn how to do new things. They do so all the time. Learning to do new things is the point of school.

If students can learn to reword the professor’s lecture when taking notes on a laptop, then Mueller and Oppenheimer’s own data suggest that they’ll learn more. And yes, I do mean “learn more than people who take handwritten notes.”

(Why? Because laptop note-takers can write more words than handwriters, and in M&O’s research, more words lead to more learning.)

And yet, despite the self-evident logic of this argument, the belief that handwritten notes are superior to laptop notes has won the day.

That argument is commonplace in the field of psychology. (Here’s a recent example.)

Even the New York Times has embraced it.

The Fine Print

I do need to be clear about the limits of my argument:

First: I do NOT argue that a study has been done supporting my specific hypothesis. That is: as far as I know, no one has trained students to take reworded laptop notes, and found a learning benefit over reworded handwritten notes. That conclusion is the logical hypothesis based on Mueller and Oppenheimer’s research, but we have no explicit research support yet.

Second: I do NOT discount the importance of internet distractions. Of course students using laptops might be easily distracted by Twinsta-face-gram-book. (Like everyone else, I cite Faria Sana’s research to emphasize this point.)

However, that’s not the argument that Mueller and Oppenheimer are making. Their research isn’t about internet distractions; it’s about the importance of reworded notes vs. verbatim notes.

Third: I often hear the argument that the physical act of writing helps encode learning more richly than the physical act of typing. When I ask for research supporting that contention, people send me articles about 1st and 2nd graders learning to write.

It is, I suppose, possible that this research about 1st graders applies to college students taking notes. But, that’s a very substantial extrapolation–much grander than my own modest extrapolation of Mueller and Oppenheimer’s research.

And, again, it’s NOT the argument that M&O are making.

To believe that the kinesthetics of handwriting make an essential difference to learning, I want to find a study showing that the physical act of writing helps high school/college students who are taking handwritten notes learn more. Absent that research, this argument is even more hypothetical than my own.

Hopeful Conclusion

The field of Mind, Brain, & Education promises that the whole will be greater than the sum of the parts.

That is: if psychologists and neuroscientists and teachers work together, we can all help each other understand how to do our work better.

Frequently, advice from the world of psychology gives teachers wise guidance. (For example–retrieval practice.)

In this case, we teachers can give psychology wise guidance. The founding assumption of the Mueller and Oppenheimer study–that students can’t learn to do new things–simply isn’t true. No one knows that better than teachers do.

If we can keep this essential truth at the front of psychology and neuroscience research, we can benefit the work that they do, and improve the advice that they give.

Does project-based learning work?
Andrew Watson

The answer to the titular question depends on a) your definition of “project-based learning,” and b) your methodology for measuring success.

In a just-published, comprehensive literature review, MDRC takes 84 pages to say: “we can’t really answer the question, because we don’t have consistent definitions or consistent methodologies.”

For example:

Without a common set of PBL design principles, it is difficult to use the existing body of research to draw conclusions about PBL’s effectiveness. (p. 53)

or

More rigorous evidence is needed to confirm whether PBL is a better approach to prepare students for college and career than traditional teacher-directed methods. (p. 55)

That’s a frustrating answer.

If you love and believe in PBL–and, more than most pedagogical theories, PBL really has true believers–you’d rather have a ringing endorsement.

If you’re a skeptic–check out Kirschner’s emphatic rejection here–you’d like this idea put to bed once and for all.

In this review, however, the authors make clear that until we agree what PBL really is (and, what it isn’t), we can’t coherently measure its effectiveness.

What Should Teachers Do?

In the absence of a clear research answer to this question, I have two suggestions.

First: teacher experience matters. If you and your colleagues have experience teaching both PBL and direct-instruction curricula, and you’ve had good success with one or the other, then draw on that experience. As long as you’re being honest with yourselves, and keeping good records, then your experience is–for now–at least as good as any other information we’ve got.

Second: rely on useful principles from cognitive science. Does PBL help your students pay attention? If yes, that’s good. Does PBL decrease their motivation? If yes, that’s bad.

Quite often, for instance, I find that PBL curricula overwhelm students’ working memory limits. If so, then it doesn’t matter that the curriculum ought to work, or was designed by experts, because it’s overwhelming working memory.

In other words: if the curriculum sounds upliftingly progressive, but it violates basic principles of cognition, then put the rubric down and step away from the authentic question.

Every curriculum must fit with the way that students’ brains work–including a PBL curriculum.

 

 

(In case you’re wondering, “MDRC” stands for “Manpower Demonstration Research Corporation.” It was created by the Ford Foundation; its lumpy name was simplified to MDRC in 2003. You can read its history here.)

5 Praises a Day
Andrew Watson

Back in May, a brief flurry of articles rose up (here, here, and here) around the “Five Praises a Day Campaign,” which encourages parents of 2- to 4-year-olds to praise their children more often.

(The authors don’t claim that the number five is magic; they picked it to align with the well-known advice about “five fruits and vegetables a day.” They’re more interested in being sure that there’s enough praise; and “enough” will vary from child to child.)

I’m frankly surprised to read this advice, given all the recent concern about the self-esteem movement.

As you know, especially in the 1970s, researchers noticed a correlation between self-esteem and academic success (and lots of other good things). They concluded that we can help students learn by helping them feel good about themselves.

Voila: the Self-Esteem Movement.

Sadly, this advice confused correlation with causation. It turns out that academic success raises self-esteem (obvi), but high self-esteem doesn’t prompt academic success.

(Check out Baumeister and Tierney’s book Willpower — especially Chapter 9, “Raising Strong Children: Self-Esteem versus Self-Control” — for the history and the research.)

While Baumeister argues that too much praise saps self-control, Carol Dweck has shown that the wrong kind of praise fosters a fixed mindset and imperils a growth mindset.

For instance, Mueller and Dweck’s 1998 study shows that praising students’ ability or intelligence leads to all sorts of unfortunate consequences. It even encourages them to lie to demonstrate their success!

Rejoinders, and Re-Rejoinders

While championing the 5 Praises campaign, Carole Sutton does acknowledge these concerns. First:

Dweck (2007) has highlighted the pitfalls of allowing children to expect unwavering approval, especially when this is directed towards their intelligence rather than their effort. She is right: these pitfalls exist. However, we are concerned here with very young children, those below the age of five and primarily with their behaviour, rather than their intelligence or physical attributes.

And second:

Other critics, such as Baumeister, Hutton and Cairns (1990), have demonstrated that giving praise to skilled practitioners has the effect of undermining those skills, not enhancing them. However, we are concerned here with very unskilled practitioners indeed, namely, toddlers learning to walk, to feed themselves, to toilet themselves, to dress themselves and to develop a sense of competence and self worth.

My first concern with these explanations is that they’re actually quite hard to find. Neither the Time article nor the ScienceDaily.com post — which I linked to above — nor even the press release touting a 5 Praises lecture, mentions them.

I found them on the last page of a document that’s downloadable at the very end of a university web page.

My second concern is that they’re not very persuasive.

Sutton, for example, says that the 5 Praises advice focuses on behavior — not intellect or ability — for young children. However, Dweck’s research makes clear that fixed and growth mindsets influence all ages, and a great many human attributes.

For example, I might say to a 3-year-old: “That was very good–you remembered to say “excuse me” before you asked a question!”

Or, I might say: “That was very good–you’re such a polite boy!”

Both of those compliments focus on behavior. The first compliment, however, fosters a growth mindset by emphasizing what the child is doing; the second promotes a fixed mindset by emphasizing what kind of person the child is.

To Sum Up

To be clear: I’m in favor of praise. At the same time, we’ve got lots of research showing that the kind of praise and the reasons for praise matter a lot–more than simply the amount of praise. Praising children more won’t necessarily lead to good results, even if they eat all five of their fruits and vegetables.

Action Video Games Harm the Hippocampus, Right?
Andrew Watson

Here’s a headline to get your attention: Action video games decrease gray matter, study finds.

The article opens with this alarming sentence:

“A new study suggests that playing action video games can be detrimental to the brain, reducing the amount of gray matter in the hippocampus.” [emphasis mine, ACW]

We have a number of reasons to be curious about this claim.

Primarily, researchers have debated one another with vehemence–and occasional vitriol–on the benefits and detriments of action video games–such as Call of Duty. This article seems to be an interesting addition to that debate.

The article itself is behind a paywall, but you can read the abstract here. Let me quote the first and last sentence of the abstract:

“The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. [… ]

These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game.” [emphasis mine, ACW]

So, does this research show that video games can be detrimental to the hippocampus, as the article’s first sentence claims? Yes, it does.

But, as my highlighting makes clear, it also shows that video games can be beneficial to the hippocampal system.

In other words: the article’s scary headline — and several of its subsequent statements —  mischaracterize the underlying article.

After all, if I wrote an article claiming that Leonardo DiCaprio is the best and the worst actor of his generation, and you summarized my article with the headline “Watson calls DiCaprio This Generation’s Worst Actor,” you’d be technically correct, but substantively misleading.

You can’t just leave out half of the argument.

To be fair: the study itself is quite complex. It distinguishes, first, between action video games — like Call of Duty — and 3D video games — like SuperMario. It further distinguishes between two strategies that players use to navigate those games.

SuperMario-like games are beneficial to hippocampal gray matter whichever navigation strategy players use. For Call-of-Duty-like games, the benefit or detriment depends on the navigational strategy.

The Lesson for Teachers to Learn

I believe that we, as teachers, must increasingly inform our classroom practice with research from neuroscience and psychology. We should know, for instance, whether or not action video games do bad things to the brain.

(When I spoke with parents at a school in New York just two weeks ago, I got that very question.)

If we’re going to rely on scientific research, however, we need to hone our scientific skepticism skills.

For me, here’s rule number one: ALWAYS READ THE ABSTRACT.

If a book or a speaker or an article makes a research-based claim, get the primary source and read the abstract–that’s the first paragraph that summarizes the key points of the study.

(It’s usually very easy to find the abstract: use Google Scholar.)

When you read the abstract, you can see right away whether or not the speaker, article, or book summarized the research correctly–or at least plausibly.

In this case, you can easily see that the article mischaracterized half of the researchers’ conclusions. So, as a newly-minted skeptic, you know what to do: look elsewhere. This source isn’t strong enough to use as a resource for making school decisions.

(BTW: I have reached out to the website that published this summary. As of today–October 4–they’re sticking to their claims. If they make changes, I’ll update this post.)

Next Steps

If you’d like to hone your skepticism skills, you might check out the TILT curriculum at The People’s Science–developed by Stephanie Sasse (former editor of this blog) and Maya Bialik (former writer for this blog; speaker at the upcoming LatB Conference).

 

We Need a Bigger Boat
Andrew Watson

Because working memory is so important for learning, and because human working memory capacity isn’t as large as we wish it were, we would LOVE to be able to increase it.

If we could make working memory bigger, then all sorts of complex cognitive tasks–comparing historical figures, multiplying multi-digit numbers, parsing complex sentences, coding useful programs–would just be easier.

Various researchers and companies have touted exercises and games to embiggen working memory. However, many scholars are quite skeptical about such activities.

Except in unusual circumstances (say, a particular kind of brain injury, or a very long gap in schooling), we simply haven’t had much luck in artificially boosting WM.

Recently I’ve been reading more and more about an alternate approach to cognitive enhancement: transcranial direct-current stimulation. That’s a fancy way of saying: applying electricity to the brain through the skull. (Safely.)

Although the idea sounds really cool in a sci-fi kind of way, this recent study dampens the hype. The details of the study, and the statistical analysis, are quite complex.

The short version is: don’t sign up to get zapped any time soon.

Given that working memory training programs tend to be VERY expensive and VERY time consuming, I advise skeptical caution before going that route.

When he first sees Jaws, Roy Scheider tells Robert Shaw that they’ll need a bigger boat. If we want to enhance working memory, we’re going to need a better technology.

For the time being, the best working-memory enhancer is what it’s always been: school.

Criticizing Critical Thinking
Andrew Watson

Over at Newsweek, Alexander Nazaryan wants to vex you. Here’s a sample:

Only someone who has uncritically mastered the intricacies of Shakespeare’s verse, the social subtexts of Elizabethan society and the historical background of Hamlet is going to have any original or even interesting thoughts about the play. Everything else is just uninformed opinion lacking intellectual valence.

If you’d like a more nuanced version of this argument, check out Daniel Willingham’s Why Don’t Students Like School?

In particular, you might read…

Chapter 2: “Factual knowledge must precede skill”

Chapter 4:  “We understand things in the context of what we already know, and most of what we know is concrete”

Chapter 5: “It is virtually impossible to become proficient at a mental task without extended practice”

and Chapter 6: “Cognition early in training is different from cognition late in training”

From another vantage point: my own book Learning Begins discusses the dangers of working memory overload lurking in efforts to teach critical thinking.

Whether you prefer Nazaryan’s emphatic declamations, or Willingham’s and my more research-focused commentary, take some time to think critically about all the cognitive legwork that must precede real critical thought.

How Best to Take Notes: A Public Service Announcement
Andrew Watson

The school year is beginning, and so you’re certainly seeing many (MANY) articles about the debate over laptop notes vs. handwritten notes.

If your research stream is anything like mine, most of the articles you see assert that handwriting is superior to laptops for note-taking.

And, most of those articles cite Mueller and Oppenheimer’s blockbuster study, arguing–as its witty title avers–“the pen is mightier than the keyboard.”

Here’s my advice: don’t believe it.

More substantively: it’s possible that the pen is mightier than the keyboard. However, Mueller and Oppenheimer’s study supports that conclusion only if you believe that students can’t learn new things.

(Of course, that would be a very odd belief for a teacher to have.)

If you believe that students can learn new things, then this widely cited study suggests that laptop notes ought to lead to more learning than handwritten notes.

After all, a student who has practiced correct laptop note-taking can a) write more words than a student who takes notes by hand, and b) take notes in her own words just as well as a student who takes notes by hand.

Mueller and Oppenheimer’s research clearly suggests that a) + b) ought to lead to more learning.

The details of this argument get tricky; I lay them out in this post.

TWO CAVEATS

FIRST: I am not saying that I know laptop notes to be superior to handwritten notes.

I am saying that the study most often used to champion handwritten notes simply does not support its own conclusion. If you believe students can learn new things, then Mueller and Oppenheimer’s research suggests that laptop notes ought to lead to more learning.

A study testing my hypothesis has not–as far as I know–been done.

SECOND: you might reasonably say that students taking notes on laptops will be distracted by the interwebs. For that reason, handwritten notes will be superior.

I very much share this concern. (In fact, Faria Sana’s research shows that laptop multitasking distracts not only the multitasker, but also the person sitting behind the multitasker–a serious problem in lecture halls.)

However, multitasking is a separate question–not one addressed by Mueller and Oppenheimer.

The narrow question is: do non-multitasking laptop note-takers learn more than non-multitasking handwritten note-takers?

If the answer to that question is “yes,” then we should train laptop note-takers a) to reword the teacher’s lecture–not simply to write it down verbatim, and b) to unplug from the interwebs.

This combination will certainly be difficult to achieve. But, it might be the very best combination for learning.

A FINAL POINT

The laptops-vs.-handwriting debate stirs up a remarkable degree of fervor–more than I would expect from a fairly narrow and technical question.

I suspect that this debate is in fact a proxy war between those who think we should use more technology in schools (who favor laptop notes) and those who think we already use too much technology in schools (who favor handwriting).  That is: we’re not so much concerned with note-taking specifically as we are with technology in general.

That’s an important conversation to have. In fact, it’s central to the November Learning and the Brain Conference.

At the same time, let’s be sure that our general views on technology don’t obscure the answer to a precise, researchable question. If students learn more by taking notes on laptops, let’s find that out with well-designed research studies and then guide them well.

 

Online K-12 Schools
Andrew Watson

The upcoming Learning and the Brain Conference (Boston, November) will focus on “Merging Minds and Technology.”

Given that I blog so much about the importance of skepticism, it seems only appropriate to offer up at least some voices that are highly doubtful about the benefits of technology–in particular, virtual classrooms.

Freddie deBoer has strong opinions, and he supports them with data. You’ll want to check out the graph he includes: one of the axes shows the equivalent of “days of learning lost.” That number–especially when it comes to math learning–will astonish you.

Promoting Motivation?
Andrew Watson

Over at 3 Star Learning Experiences, Kirschner and Neelen are skeptical about research into academic motivation.

In essence, they argue that defining motivation can be quite a trick, and measuring it even more so. If we struggle to define and measure something, it’s hard to be scientifically thoughtful (and accurate) about it.

As a result, we tend to discuss vague things like “student engagement”: it sounds good (who could be opposed to “student engagement”?), but it’s hard to know if behavior that looks like “engagement” reliably promotes learning.

I share much of their concern about this part of our field. In fact, I find Dweck’s work on Mindset, and Steele’s work on Stereotype Threat, so interesting because they have found motivational topics that can be both defined and measured.

Like Kirschner and Neelen, I’ll be more motivated to explore this field when more of it can cross these essential thresholds.