Online K-12 Schools
Andrew Watson


The upcoming Learning and the Brain Conference (Boston, November) will focus on “Merging Minds and Technology.”

Given that I blog so much about the importance of skepticism, it seems only appropriate to offer up at least some voices that are highly doubtful about the benefits of technology–in particular, virtual classrooms.

Freddie deBoer has strong opinions, and he supports them with data. You’ll want to check out the graph he includes: one of the axes shows the equivalent of “days of learning lost.” That number–especially when it comes to math learning–will astonish you.

Promoting Motivation?
Andrew Watson


Over at 3 Star Learning Experiences, Kirschner and Neelen are skeptical about research into academic motivation.

In essence, they argue that defining motivation can be quite a trick, and measuring it even more so. If we struggle to define and measure something, it’s hard to be scientifically thoughtful (and accurate) about it.

As a result, we tend to discuss vague things like “student engagement”: it sounds good (who could be opposed to “student engagement”?), but it’s hard to know if behavior that looks like “engagement” reliably promotes learning.

I share much of their concern about this part of our field. In fact, I find Dweck’s work on Mindset, and Steele’s work on Stereotype Threat, so interesting because they have found motivational topics that can be both defined and measured.

Like Kirschner and Neelen, I’ll be more motivated to explore this field when more of it can cross these essential thresholds.

 

Andrew Watson

Two articles jumped out at me today because of the illustrative way they clash with each other.

Writing on Twitter, and providing helpful links to several sources, Adam Grant argues that “Differences between Men and Women are Vastly Exaggerated.”

Whereas Neuroscience News published a summary of a recent research study (by Daniel Amen) with the headline “Women Have More Active Brains Than Men.”

So, which is it? Are differences between the sexes exaggerated? Or do male and female brains operate very differently?

Let’s use three lenses to look at that question.

The First Lens: Discipline

Oversimplifying for the sake of clarity, we can say that neuroscience studies brains–that is, physical objects. It looks at neurons and blood flow and neurotransmitters and electrical energy. Things.

Psychology studies the behavior of brains–that is, what people do with those physical objects. It looks at a student’s ability to remember, or an athlete’s ability to concentrate, or an adult’s ability to learn a new language. Behaviors.

Obviously, both neuroscience and psychology are fascinating. But, which discipline is more useful?

Of course, the answer to that question depends on your definition of “useful.”

I myself think that teachers benefit from learning about the behavior of brains (that is, psychology) more than we do from learning about brains as objects (that is, neuroscience).

For example, if I tell you how brains change physically when long-term memories form, that information is interesting. (In fact, I often share this information when I talk with teachers.)

But, if I tell you what kind of teaching behavior makes long-term memory formation more likely, that information is really useful.

For this reason, I think Grant’s summary–which focuses on psychology–is likely to be more useful than the Amen study–which focuses on neuroscience.

For example: Grant’s summary looks at anti-stereotype-threat strategies that combat gender differences in college majors or professions. Teachers can do something with this information.

The Amen study, on the other hand, tells us about different levels of brain activity as measured by Single Photon Emission Computed Tomography (SPECT). I don’t know exactly what SPECT is, and I certainly don’t know how I would teach differently given this information.

So for me, again, neuroscience is fascinating, and psychology is useful.

(To be clear, I have several colleagues–whose judgment I highly respect–who disagree with me strongly on this point; that is, they think the neuroscience is just as important for teachers as the psychology. So, if you think I’m wrong, you’re not the only one.)

The Second Lens: The Population Being Studied

Whenever you use brain research to help your teaching, you should focus on the participants in the study. The more the participants resemble your own students, the likelier it is that the research findings will benefit your students.

So, if you find a study that says three repetitions of a practice exercise benefits long-term memory, that study might be very helpful. But: if the participants in the study were college students at an elite university, and you teach 1st graders who are already struggling with formal education, the study might not mean much to you.

After all, your students differ from those in the study so substantially that there’s no way to be sure the conclusions apply to your teaching context.

Grant’s research summary cites several very large analyses. When he looks at (very small) gender differences in math scores, for example, his source draws on almost 4,000 studies. It seems likely that such broadly supported research will apply to my students too.

Amen’s study looks at a very large population–almost 27,000 people. However, and this is a big however, all but 119 of those people were suffering from “a variety of psychiatric conditions such as brain trauma, bipolar disorders, mood disorders, schizophrenia/psychotic disorders, and attention deficit hyperactivity disorder (ADHD).”

For obvious reasons, it’s hard to draw conclusions about neurotypical brains by studying brains that aren’t neurotypical.

So, again, because the Grant summary includes students like mine, and the Amen study doesn’t, I’m likelier to benefit from Grant’s conclusions.

(By the way, it’s entirely possible that your students seem more like Amen’s participants than those included in Grant’s summary–in which case, you may be more swayed by Amen’s findings.)

The Third Lens: Biases

In the world of science, “bias” isn’t necessarily a bad thing. All analysis–including yours, including mine–includes bias. Our goal should not be to eliminate bias (we can’t), but to recognize it in ourselves and others, and to do the best we can to look for countervailing biases.

So, let me be up front with you: my bias is, I’m usually skeptical about strong claims of gender difference in education. This skepticism has many sources–but, no matter how good those sources are, you should know that I’m not an impartial author delivering truth from on high.

I am, instead, someone who rarely finds evidence of gender difference in education persuasive…and (surprise!) my post has twice concluded that the “gender makes little difference in education” article is more useful and persuasive than the “there are big gender differences in brains” article.

Now that you know my bias, you should a) look for people with the opposite bias, and see if you find their arguments more persuasive than these, and b) recognize your own biases, and do your best to counterbalance them.

After all, one thing is certainly true about male and female brains: we’re all faster to believe ideas that support our own prior conclusions.

——————————————

Two final notes:

First, my thanks to Stephanie Sasse (prior editor of this blog) and Maya Bialik (former writer for this blog) for their idea of “lenses” as a way to analyze brain research.

Second, brain research generally hasn’t come to grips with people who fall outside a male/female gender dichotomy. Our understanding of gender and learning will be stronger and more useful when it does.

Cell Phones in the Classroom: Expected (and Unexpected) Effects
Andrew Watson


Quick! Where’s your cell phone?

Now that I’ve got your attention: what effect does the location of your cell phone have on your attention?

Researchers have recently found some predictable answers to that question–as well as some rather surprising ones. And, their answers may help us think about cell phones in classrooms.

The Study

Adrian Ward and Co. wanted to learn more about the “mere presence” of students’ cell phones.

That is: they weren’t asking if talking on the phone distracts drivers (it does), or if a ringing phone distracts that phone’s owner (it does), or even if a text-message buzz distracts the textee (it does).

Instead, they were asking if your phone lying silently on the desk in front of you distracts you–even if it’s not ringing or buzzing.

Even if you’re NOT CONSCIOUSLY THINKING ABOUT IT.

So they gathered several hundred college students and had them complete tests that measure various cognitive functions.

The first group of students left all their stuff–including phones–in another room. (That’s standard procedure during such research.)

The second group brought their phones with them “for use later in the study.” After silencing the phones (no ringing, no buzzing), they were told to put them wherever they usually keep them. Roughly half kept them in a pocket; the other half kept them in a nearby bag.

The third group brought phones along, and were instructed to put them in a marked place on the desk in front of them. (These phones were also silenced.)

Did the proximity of the phone matter?

The Expected Results

As is so often true, the answer to that question depends on the measurement we use.

When the researchers measured the students’ working memory capacity, they found that a cell phone on the desk reduced this essential cognitive function.

Specifically, students who left phones in their bags in another room averaged about a 33 on an OSpan test. (It measures working memory–the specifics aren’t important here.) Those who had cell phones on their desks scored roughly 28.5. (For the stats pros here, the p value was .007.)

If you attend Learning and the Brain conferences, or read this blog regularly, you know that working memory is ESSENTIAL for academic learning. It allows us to hold on to bits of information and recombine them into new patterns; of course, that’s what learning is.

So, if the “mere presence” of a cell phone is reducing working memory, it’s doing real harm to our students.

By the way, the students who had their phones on their desks said that they weren’t thinking about them (any more than the other students), and didn’t predict that their phones would distract them (any more than the other students).

So, our students might TELL US that their phones don’t interfere with their cognition. They might not even be conscious of this effect. But, that interference is happening all the same.

The UNEXPECTED Results

Few teachers, I imagine, are surprised to learn that a nearby cell phone makes it hard to think.

What effect does that phone have on the ability to pay attention?

To answer this question, researchers used a “go/no-go” test. Students watched a computer screen that flashed numbers on it. Whenever they saw a 6, they pressed the letter J on the keyboard. They ignored all the other numbers.

(The researchers didn’t go into specifics here, so I’ve described a typical kind of “go/no-go” task. Their version might have been a bit different.)

To do well on this task, you have to focus carefully: that is, you have to pay attention. Researchers can tell how good you are at paying attention by measuring the number of mistakes you make, and your reaction time. Presumably, the slower you are to react, the less attention you’re paying.

So, how much difference did the cell phone on the desk make? How much slower were the students who had the phone on the desk, compared to those whose phones were in the other room?

Nope. Sorry. No difference.

Or, to be precise, the students who had the phone on the desk reacted in 0.366 seconds, whereas those whose phones were elsewhere reacted in 0.363 seconds. As you can imagine, a difference of 0.003 seconds just isn’t enough to worry about. (Stats team: the p value was >.35.)

Explaining the Unexpected

Ward’s results here are, I think, quite counter-intuitive. We would expect that the mere presence of the phone would interfere with working memory because it distracted the students: that is, because it interfered with their attention.

These results paint a more complicated picture.

The explanation can get technical quickly, but two key insights help us understand these results.

Key Insight #1: Attention isn’t just one thing. It has different parts to it.

One part of my attentional system brings information into my brain. I am, at this moment, focusing on my computer screen, Ward’s article, my keyboard, and my own thoughts. Sensory information from these parts of my world is entering my conscious mind.

Another part of my attentional system screens information out of my brain. I am, at this moment, trying not to notice the bubble-and-hum of my cats’ water gizmo, or my cat’s adorable grooming (why is his leg stuck up in the air like that?)–or, really, anything about my cats. Sensory information from those parts of my world is not (I hope) entering my conscious thought.

The attention test that Ward & Co. used measured the first part of attention. That is, the go/no-go task checks to see if the right information is getting in. And, in this case, the right information was getting in, even when a cell phone was nearby.

Key Insight #2: Working Memory INCLUDES the second kind of attention.

In other words, we use working memory to keep out adorable cat behavior–and other things we don’t want to distract our conscious minds. Other things such as–for example–cell phones.

The nearby phone doesn’t interfere with the first part of attention, and so the correct information gets into student brains. For this reason, students do just fine on Ward’s “attention” test.

However, the nearby phone does make it hard to filter information out. It’s bothering the second part of attention–which is a part of working memory. For this reason, students do badly on Ward’s “working memory” test.

Classroom Implications

Ward’s research, I think, gives us some clear pointers about cell phones in classrooms: the farther away the better.

Specifically, it contradicts some teaching advice I got a few years ago: have students silence their phones and put them on the desk in front of them. The goal of this strategy: teachers can be sure that students aren’t subtly checking their phones under their desks.

While, clearly, it’s beneficial to silence phones, we now know that their “mere presence” on the desk interferes with working memory.

In brief, we need another solution.

(Sadly, Ward’s study doesn’t tell us what that solution is. But it does warn us away from this phone-on-the-desk strategy.)

Implications for Brain Science in the Classroom

Teachers LOVE learning about psychology and neuroscience research because it can offer such helpful and clarifying guidance for good teaching.

(I should know: I’ve spent the last ten years using such research to be a better teacher.)

At the same time, we teachers occasionally stumble into studies like this one where psychology gives us results that seem strange–even impossible.

After all: how can you tell me that cell phones don’t interfere with our students’ attention? And, if they don’t interfere with attention, how can they possibly interfere with something like working memory?

The answer–as described above–is that psychologists think of attention as having multiple parts, and one of those parts overlaps with working memory. Because psychologists define the word “attention” one way and we teachers use it a different way, research like this is potentially very puzzling.

(You can imagine our students reading this study and crowing: “See! Cell phones have NO EFFECT on attention!”)

For this reason, we need to be especially careful when we enter into the world of brain science. Definitions of basic words (“attention,” “transfer,” “significant”) might trip us up.

And so, you’re wise to be attending Learning and the Brain conferences, and to be consulting with experts who know how to read such studies and make sense of them.

In brief: teachers should be modest when we try to interpret primary research in neuroscience and psychology. These fields are so complicated that we just might misunderstand even basic terms.

By the way, the same point holds in reverse. Neuroscientists and psychologists should be modest when telling us how to teach. Our work is so complicated that they just might misunderstand even basic classroom work.

This mutual modesty is–I believe–the basis of our field. We all come together to learn from and collaborate with each other. Our students will benefit from this complex and essential collaboration.

Emotional Intelligence: Teachers’ Perspectives
Andrew Watson


The National Network of State Teachers of the Year has released a report on teaching emotional intelligence.

Overall, they find research in this field persuasive. That is, these award-winning teachers think it likely that social/emotional intelligence can be taught, and does benefit students in a number of ways.

At the same time–and for obvious reasons–they think more professional development and more funding are important. And, they worry about including these measures in teacher evaluations.

If your school is considering these questions, this report might be a helpful place to start.

Andrew Watson

I’ll be on vacation during the month of August; in fact, I’ll be out of the country and away from the interwebs for much of that time.

And so, posting will be light while I’m away: perhaps an article a week or so.

I hope you’re enjoying the perspectives we bring you, and I assure you–there’s lots more to come in the fall.

-Andrew

Helping Students Study Well: The Missing Plank in the Bridge?
Andrew Watson


Ok: you’ve taught your students a particular topic, and you’ve provided them with lots of ways to review and practice for the upcoming test. But, will they do so?

How can you ensure that they prepare most effectively?

Patricia Chen’s research team studied a surprisingly simple answer to this question. You might help your students study by asking them to think about the approaches they will use–and to make specific plans.

Chen & Co. asked students to follow a four-step process:

Step 1: students wrote about the kind of questions they expected on the test.

Step 2: they then chose the resources they wanted to use to prepare for those questions. The checklist from which they chose included 15 options, such as “go over practice exam questions,” “go to professor’s office hours,” and “work with a peer study group.”

Step 3: they wrote why and how they thought each of the resources they selected might be helpful.

Step 4: they made specific and realistic plans about where and when they would use those resources.

Compared to a control group–who were simply reminded that they should study for the upcoming exam–students in this group averaged 1/3 of a letter grade higher.

For example, students in the control group had an average class grade of 79.23. Those who went through these 4 steps had an average grade of 83.44.

That’s a lot of extra learning from asking four basic questions.

What Should We Do?

Chen’s research team worked with college students studying statistics. Do their conclusions apply to–say–5th graders studying history? Or, 10th graders learning chemistry?

As is so often the case, individual teachers will make this judgment call on their own. Now that you’ve got a good study suggesting that this method might work, you can think over your own teaching world–your students, your curriculum, your approach to teaching–and see if this technique fits.

In case you decide to do so, I will offer three additional suggestions.

First: check out Gollwitzer’s work on “implementation intentions.” His idea overlaps with Chen’s work, and would pair with it nicely.

Second: I’m a little concerned that Chen’s list of proposed study strategies included two options we know don’t help–reviewing notes and rereading the text. (If my skepticism about those two methods surprises you, check out Ian Kelleher’s post here.) Your list of study strategies should NOT include those suggestions.

Third: as always, keep working memory limitations in mind. The kind of meta-cognition that Chen outlines can clearly benefit students, but it also might overwhelm their ability to keep many ideas in mind at the same time.

However, if we can prevent working memory overload, this strategy just might help bridge the gap between “I taught it” and “they learned it.” As is so often the case, a key plank in that bridge is: asking students to think just a little bit more.

How to Lie with Graphs
Andrew Watson


A handy video from TED-Ed gives some pointers on spotting misleading graphs. Pay close attention to its warnings about meddling with the y-axis. Believe it or not, this sort of thing happens frequently in the world of science publishing.
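In case a miniature example helps, here’s a quick sketch of the y-axis trick in Python with matplotlib. The numbers are made up (they aren’t from the video); the point is simply that the same two values look nearly identical when the axis starts at zero, and wildly different when the axis is truncated.

```python
# A minimal illustration of y-axis truncation, with made-up numbers.
import matplotlib.pyplot as plt

groups = ["Group A", "Group B"]
scores = [83, 85]  # hypothetical values; the real gap is two points

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Left panel: the y-axis starts at zero, so the bars look (correctly) similar.
honest.bar(groups, scores)
honest.set_ylim(0, 100)
honest.set_title("Y-axis starts at 0")

# Right panel: the y-axis is truncated, so the same two-point gap looks enormous.
misleading.bar(groups, scores)
misleading.set_ylim(82, 85.5)
misleading.set_title("Y-axis truncated")

plt.tight_layout()
plt.show()
```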

(If you’re interested in visual representation of data, I encourage you to look up the work of Edward Tufte. He’s written some amazing books, and is a fun and provocative thinker.)

One note about the TED-Ed video: it has clear political leanings–so clear, in fact, that I hesitated to link to it. I hope that you will watch it, because its suggestions are both important and useful. Rest assured: my goal is not to sway your political views, but to give you a helpful tool for analyzing scientific information.

“One Size Fits All” Rarely Fits
Andrew Watson


If you attend Learning and the Brain conferences, or read this blog regularly, you know all about the well-researched benefits of retrieval practice. (You can read earlier articles on this subject here and here.)

The short version of the story: if we ask students to recall ideas or processes that they have learned, they are likelier to learn those ideas/processes deeply than if we simply go over them again.

But, does retrieval practice always work?

The question answers itself: almost nothing always works. (The exception: in my experience, donuts always work.)

Over at The Learning Scientists, Cindy Wooldridge writes about her attempt to use retrieval practice in her class–and the dismaying results.

From her attempt, Wooldridge reaches several wise conclusions. Here are two of them:

Another very important take-away is that learning science is not one size fits all. Just because we say retrieval practice works, doesn’t mean it works in all scenarios and under all circumstances.

This is why it’s so important to be skeptical. Use objective measures to assess whether and how a teaching strategy is working for your students and take time to do some reflection on how and why it worked (or didn’t). This is another great example of a time when my intuition said that this absolutely should work, but we should follow the evidence and not just intuition.

To learn more about her effort and her conclusions, click here.

Rates of ADHD Diagnosis: Age, Gender, and Race
Andrew Watson


Dr. David Rabiner offers a helpful summary of trends in ADHD diagnoses.

The short version: rates of diagnosis continue to increase.

The longer version: depending on how you analyze the categories, you get very different results. For children younger than 5, the rates are, in fact, falling. For adults over 65, however, the rate rose 348% from 2008-9 to 2012-13.

(That is not a typo: 348%).

One important point as you review these data: percentages are interesting, but so too are the absolute numbers. Diagnoses among those over 65 can increase so much as a percentage because the absolute numbers are relatively low.
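To make that distinction concrete, here’s a toy calculation. The counts are purely hypothetical (they are not Rabiner’s figures); they simply show how a 348% jump on a small base can add fewer cases than a modest jump on a large one.

```python
# Purely hypothetical counts -- not from Rabiner's summary -- showing why
# percentage change and absolute change can tell different stories.
def pct_change(before, after):
    """Percentage change relative to the baseline count."""
    return (after - before) / before * 100

# Small group: a 348% increase, but only 348 additional diagnoses.
print(pct_change(100, 448), 448 - 100)              # 348.0  348

# Large group: a 12% increase, yet 1,200 additional diagnoses.
print(pct_change(10_000, 11_200), 11_200 - 10_000)  # 12.0   1200
```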

By the way: analysis by gender shows an interesting pattern. Among adults, both diagnosis and medication are increasing faster for men than women. Among children, however, that pattern is reversed.