May 2024 – Education & Teacher Conferences
Change My Mind, Please!
Andrew Watson

When was the last time you changed your mind about something important?

Perhaps you rethought a core political belief: gun control, or abortion, or a stance on international relations.

Maybe you gave up on one teaching practice to adopt a once-unthinkable alternative.

Just possibly, after a lifetime aversion to vegetables, you realized they’re delicious. (Wrap pencil-thin asparagus in prosciutto, give it a glug of olive oil and a sprinkle of parmesan, pop it on the grill…heaven.)

Now think back: what led you to change your mind? How did you stop believing one thing and start believing a contradictory position? What made the magic happen?

This broad question has specific importance here at Learning and the Brain.

After all: if we’re doing our job right, teachers will often leave our conferences and workshops thinking:

“Wow, I should really try this new way. Now that I know more about psychology and neuroscience research, I’ve got LOTS of ideas for improving my work, my classroom, and my school!”

To say the same thing in reverse: if everyone leaves a LatB event thinking EXACTLY what they believed before the event … well … why have the conference? What good did it serve?

Understandable Passion

To be fair, we at LatB have it relatively easy.

Most people come to our conferences wanting to get new ideas. Attendees are — for the most part — hoping that they’ll have new ways of thinking about teaching and learning. Mind-changing is a feature, not a bug.


In many (most?) educational spheres, however, our profession often prefers sturdy conviction over openness to new ideas.

Whether we’re debating the best approach for reading instruction, or high- vs. low-structure pedagogy, or the right way to use technology in schools, we have strong opinions.

And, because we have strong opinions, we want others to share those opinions.

This desire for others to share my opinions simply makes sense. For example:

If I know for certain that my approach to reading instruction is better than my colleague Lloyd’s approach, I want Lloyd to change his approach! Because mine is better!!

Everyone — from the individual student to society at large — suffers because of Lloyd’s wrongness. He MUST BE STOPPED. Even better, he must be CONVERTED.

You can understand my passion here…

The Problem with Passion

Although this passion is understandable, it also creates problems of its own.

Specifically, my passion might prompt me to say:

“Lloyd, listen up. You’re obviously wrong. I’m obviously right. So, do it my way. Pronto.”

Now, in my defense, it’s entirely possible that I am right and Lloyd is wrong. So, this statement could be factually accurate.

However, whether or not the statement is true, it creates real problems.

At the beginning of this blog post, I asked you to recall a time when you did in fact change your mind.

Did you do so because someone said: “I’m right, you’re wrong, so do it my way?”

I’m guessing your answer is “no.”

It might be “heck no.”

The answer, I suspect, is rarely “heck yes.”

In other words: although “I’m right, you’re wrong” might be a TRUE statement, it is almost certainly a WILDLY INEFFECTIVE statement…because it doesn’t accomplish the goal: changing someone’s mind.

If I passionately want to persuade Lloyd to change his mind, and I knowingly adopt a strategy that almost certainly won’t change his mind…well, my passion has in fact defeated my purpose.

Beyond Hypotheticals

I’m writing this blog post because of a poster I’ve seen several times recently (where else?) on eX-Twitter.

It says, basically:

Because we’re talking about education, beliefs don’t matter; evidence matters.

Now, on the one hand, I typically agree with the research-based conclusions of the people who highlight this poster.

That is: to the degree I’m on a team in the world of cognitive science, my team-mates are likeliest to make this statement.

On the other hand, I wince every time I see it. I do so because I think this statement makes it conspicuously less likely that my team will convince anyone to do anything.

To me, the poster sounds — more or less — like it’s saying this:

Your beliefs don’t matter to me; my evidence must matter to you.

And yet, who will listen to me after I’ve said “your beliefs don’t matter to me”? Who should listen to me after I’ve said such a thing? Why would someone who experiences my open contempt listen to and trust me?

I just don’t think people work that way.

Alternative Strategy, Take 1

If I’m going to object to the “you’re wrong; I’m right” strategy, I should probably propose an alternative. I will, in fact, propose two.

When I go to do PD work at a school or conference, I usually begin by saying something like this:

“I’m not here to tell you what to do. I don’t know enough about teaching 6th grade history — or 1st grade math, or soccer coaching — to be able to do that.

Honestly, I don’t think cognitive science can tell teachers what to do.

Instead, I’m here with some ideas about how to think about what you do.

Once you hear these new cognitive-science ways of thinking, you will have wise discussions about how to apply them to your work. That is: you will figure out what to do.

Cognitive science is, I think, really good at helping this way.”

This introduction has, I believe, at least two benefits.

First: it’s true. No cognitive scientist knows enough to tell me how to teach grammar well. But, many can help me think about teaching it well. (E.g.: “having too many choices might overwhelm a student’s working memory, so design practice exercises to align with their prior knowledge!”)

Second: this introduction sets a tone. I am, in effect, saying:

“We all bring different kinds of expertise to this day.

I know a lot about, say, Posner and Rothbart’s theory of attention.

You know your curriculum, and your students, and your school culture, and your society’s culture.

When we pool all that expertise, we’re likelier to get to good answers. Let’s work together.”

Rather than tell people to abandon their beliefs in favor of my evidence, I invite them to find the best ways to combine both.

I’m not certain that they’ll do exactly what I think they should do.

But I think they’re MUCH LIKELIER to take small steps in a good direction than if I start by dismissing everything they’ve believed up until now.

Alternative Strategy, Take 2

An alternative to my strategy is: I could be completely wrong.

No, really.

Although I doubt that telling people “my evidence trumps your beliefs” does much good, others might have found this strategy to be highly effective.

Heck: you might know of research showing that insisting on evidence (and dismissing prior belief) does cause teachers to change their minds and adopt better teaching practices.

If you have that experience — or know of such research — please share it with me! I would LOVE to be able to try this approach with confidence.

After all, as XKCD has told us, people are often wrong. I’m hoping to be better at changing minds…including my own.


 

Sadly, we’ve been having lots of trouble with the “comments” feature on this blog. We get dozens (hundreds) of spammy comments advertising all sorts of sketchy products. And the comment filter just might prevent you from responding to this post.

So: if you have reason to advocate for the “my evidence > your belief” strategy, please reach out to me at my first name (Andrew) and then the “at” sign and then the name of my own company (TranslateTheBrain) and then “.com”

Because, I really do hope you will change my mind. Please!

Getting Bossy about Jigsaws; “Don’t Fence Us In”
Andrew Watson

Back in February, I wrote about the “Jigsaw method” of teaching. In this strategy, teachers break a large cognitive topic (say, “the digestive system”) down into small pieces, and assign each piece to a student group.


Those groups become experts in their pieces — the stomach, the pancreas, the large intestine — and then teach other students in the class about their pieces. When each individual student assembles those pieces into a whole, they have completed the jigsaw; that is, they have understood the full topic.

As I wrote at the time, it’s easy to see potential pitfalls and potential benefits to this method.

When I looked for research on the topic, I found…not much clear guidance either way.

A friend recommended a meta-analysis boasting a HUGE effect size (Cohen’s d = 1.20). This meta-analysis, however…

… didn’t include many studies (5, plus 6 student dissertations),

… didn’t appear in a journal that focuses on psychology or education, and

… wasn’t available online.

I’m pretty stubborn, so I kept looking.

The best study I could find, from 2022, found that the jigsaw method provided no benefit — and also caused no harm.

So my conclusion was: “we don’t have conclusive research pointing either way, so I don’t have a strong opinion.”

Since I wrote that post, I’ve gotten some pushback from colleagues I respect — colleagues who, for a number of reasons, think quite highly of the jigsaw method.

Prompted by their concerns, I’ve gone back over this question and made quite a discovery: the pro-jigsaw meta-analysis IS available online. You can find it here.

So, perhaps it’s time to rethink my opinion from February.

But First, a Side Plot

Before I explain my new thoughts about this meta-analysis, I want to explain a few core perspectives that I bring to my consulting work, and to this blog.

In the first place, I’m a very independent person.

I happily seek out new perspectives and new ideas; at the same time, I want to make my own decisions on what to do with those new teaching ideas.

In brief: don’t fence me in.

For this reason, in the second place, I’m also very respectful of other people’s independence.

That is: I don’t want some rando on the internet telling me what to do; and, I don’t want to BE a rando on the internet telling YOU what to do.

Most of my posts include caveats about boundary conditions: “this might have worked in particular classroom circumstances, but it might not work in yours.”

I hope readers find my perspective worth crediting, but you know your curriculum and your students and your school and your culture better than I do. I’m just not going to get bossy without a very persuasive research pool to draw on.

In brief, I don’t want to say “you should do this thing” (or, “you shouldn’t do this thing”) unless I’ve checked out lots of research and found it very persuasive.

I’m not here to fence you in, either.

Back To Our Narrative

So, now that you know my standards, I can explain why finding the meta-analysis has not changed my mind: I can’t read it.

And — this will not surprise you, I hope — I’m not willing to tell you what to do based on a meta-analysis I can’t read.

Here’s the reason for my failure: the meta-analysis is in Turkish. Check out that link above.

Now my point here is easy to misunderstand, so I want to be clear:

I am NOT saying that journals shouldn’t be published in Turkish.

I am NOT saying that research published in Turkish journals doesn’t merit attention.

I AM saying: I can’t read it. And I’m not going to boss you around based on a meta-analysis that I can’t read.

If I could read it, I would have very specific questions: for instance, how on earth did they come up with a Cohen’s d of 1.2? A number that high is almost unheard of.

In fact, a stats-wise friend tells me that, for most psychology topics, a d-value greater than 1.00 means either a) very small studies, b) bad inclusion criteria, or c) correlation (not causation).
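Since Cohen’s d comes up so often in these debates, here is a minimal sketch of what the number actually measures and why my friend worries about tiny studies. (This is my own illustration in Python; the simulated “true effect” and sample sizes are assumptions for demonstration, not anything drawn from the meta-analysis itself.)

```python
# A rough illustration (not from Batdı's meta-analysis) of how Cohen's d works,
# and why very small studies can produce inflated values just by chance.
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled standard deviation."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
true_d = 0.4  # assume a modest "true" effect of the teaching method

# With tiny samples (8 students per group), the estimate bounces around wildly...
small = [cohens_d(rng.normal(true_d, 1, 8), rng.normal(0, 1, 8)) for _ in range(1000)]
# ...while larger samples (100 per group) stay close to the true value.
large = [cohens_d(rng.normal(true_d, 1, 100), rng.normal(0, 1, 100)) for _ in range(1000)]

print(f"n=8 per group:   d > 1.2 in {np.mean(np.array(small) > 1.2):.0%} of simulated studies")
print(f"n=100 per group: d > 1.2 in {np.mean(np.array(large) > 1.2):.0%} of simulated studies")
```

The exact percentages will wobble from run to run, but the pattern holds: with only a handful of small studies, sampling luck alone can push an estimate well past 1.0. That is exactly the kind of thing I would want to check in the inclusion criteria.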

Of course, I don’t know that this meta-analysis includes such concerns. And I don’t know that it doesn’t. My only strong opinion about this study is: people who don’t read Turkish (that’s me!) shouldn’t base opinions on it.

If there’s a reliable English translation floating around, I might revise my thoughts again…

TL;DR

I don’t think we have a clear enough research picture to advocate for or against the jigsaw method.

I suspect it takes an enormous amount of work to get right: so many opportunities for working memory overload! so many chances for distraction!

But, if it’s working for you in your context, the absence of research support should not get in your way. No fences on this horizon…


Postscript

After I wrote the blog post above, a JUST PUBLISHED study appeared in my news feed. I haven’t reviewed it carefully yet (the full text isn’t available), but here are the authors’ three highlights:

“The Jigsaw method has no effect on students’ autonomous motivation trajectories.

The Jigsaw method does not impact students’ self-regulation over two years.

Collaborative methods are less favorable for students with lower prior achievement.”

This study took place in very specific circumstances — French vocational high schools — and focuses on motivation and self-regulation more than, say, learning. For these reasons, not everyone will find it on-point or persuasive. At the same time, it does include almost 4700 students!

I’m still not going to get bossy (although I confess my doubts and concerns are heightened). But I think this study, along with the 2022 study I’ve already summarized, makes it hard to insist that teachers really must jigsaw right now.


 

Batdı, V. (2014). Jigsaw tekniğinin öğrencilerin akademik başarilarina etkisinin meta-analiz yöntemiyle incelenmesi. EKEV Akademi Dergisi, (58), 699-714.

Riant, M., de Place, A. L., Bressoux, P., Batruch, A., Bouet, M., Bressan, M., … & Pansu, P. (2024). Does the Jigsaw method improve motivation and self-regulation in vocational high schools? Contemporary Educational Psychology, 102278.

Anxious Generation by Jonathan Haidt
Erik Jahner, PhD

From the author of The Coddling of the American Mind, The Righteous Mind, and The Happiness Hypothesis comes another compelling social commentary that helps us better understand and take part in our social evolution. Jonathan Haidt, a social psychologist at the Stern School of Business (NYU), once again asks what kind of society we want to create and empowers us with the knowledge to become agents of change.

In The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness, Haidt identifies a critical period between 2010 and 2015 when our phones and computers became more than tools for communication and work; they became “platforms upon which companies competed to see who could hold on to eyeballs the longest.” (p. 115) Not coincidentally, it was during this same industrial change that the Western world saw a rapid increase in anxiety and depression among teenagers, revealing a society unprepared for the technological upheaval it faced.

Haidt contends that the “virtual world” is disembodied, limiting communication to language without the physical contact and expressive synchronous communication that our brains evolved to master. It’s a world with little real physical risk, offering bursts of addictive dopamine as we scroll from post to post. Individuals can join many communities online but often do so without the social investment and learning necessary in face-to-face interactions. It’s a new world, one which has capitalized on our biology, but one we have not yet biologically or socially evolved to handle in a healthy way.

But this is not an anti-technology book; it’s a book about how two “experience blockers” disrupt the natural trajectory of development, making us lonelier and more anxious. The disembodied virtual world described above is the first blocker; changes in parenting practices are the second. Our efforts to keep teens safe have shifted over the decades: we’ve overprotected and overscheduled them, denying them the risk-taking and discovery opportunities essential for brain development during adolescence. We have denied them even the basic joys of unstructured play. Yet our protections have been unbalanced: linked to his first point, we’ve failed to protect teens in the digital world, where there’s no consensus on rites of passage or developmentally appropriate use of technology.

Haidt refers to these combined issues as the “Great Rewiring.” The book provides an excellent historical overview of changes in parenting and adolescent behavior over generations. You’ll see reflections of your parents, grandparents, and children, noticing the significant differences in practices and in the physical environment for development. Effective images and graphs drive home his main arguments without overwhelming the reader with data points. The data largely speak for themselves, but lest you have doubts, Haidt deftly addresses the counterarguments he has encountered and pondered over the years. Allow him to open your eyes to the data that led him to write this book.

While directed at parents, this book is important for socially responsible technologists, scientists, legislators, and educators. Throughout, Haidt offers a social scientist’s and parent’s perspective on guidelines for teens’ interaction with technology and social experiences. Supporting his thesis with extensive but easily accessible research, Haidt explains the skills we need to rekindle, and the new skills we need to develop, to overcome the mental health damage inflicted on a generation by changing parenting practices and social media. He provides specific developmental timetables and strategies, backed by research and parenting experience, explaining how and when certain types of technology should be introduced. While you may not agree with every perspective or suggestion, Haidt’s chapters provide essential talking points and critical issues that must be addressed in our changing world.

Haidt assertively demonstrates that a laissez-faire approach to technology has led to an era of psychological problems that can only be combated with collective change, of which we are all individually a part. As technological change shows no signs of slowing, his insights are more crucial than ever. We need to invest, individually and as communities, in the real-world interactions that will prepare us for the future.

Updating the Great Cold-Call Debate: Does Gender Matter?
Andrew Watson

Edu-Twitter predictably cycles through a number of debates; in recent weeks, the Great Cold-Call Debate has reheated. (You see what I did there.)

Team A argues that cold calling — that is, calling on students who haven’t raised their hands — is a vital strategy to increase student participation and learning. (Additional benefit: it allows teachers to check for understanding with strategic rapidity and flexibility.)

Team B argues that cold calling raises students’ stress levels, and thereby hampers their learning. (Additional detriment: it especially raises stress for students who face a variety of classroom difficulties–from trauma to special educational needs.)


This “debate” mostly involves making strong claims — “it’s vital!”; “no, it’s dreadful!” — but rarely draws on research to explore its key contentions.

In fact, the debate doesn’t often turn to research because we don’t have much research. But given the energy of recent arguments, I thought I’d check to see if any recent studies can help us out…

Picking Up Where They Left Off

A few years ago, I wrote about a 2013 study done by Dr. Elise Dallimore and Co. This research team — working with college sophomores — found that cold calling increased voluntary class participation and decreased class discomfort.

That is: compared to students in low cold-calling classes, those in high cold-calling classes spoke up more on their own, and expressed greater levels of comfort in class.

That sounds like a win-win.

Of course, all studies include limitations — no one study can explore everything. Team Dallimore spotted an obvious concern with their first study: it didn’t consider the effect of gender on class participation.

We have LOTS of research showing that women feel less comfortable participating in class discussions, and — unsurprisingly — speak up less often than men.

So, picking up where they left off, Dallimore and Co. wanted to see if cold calling reduced or increased this gender split.

In other words: if cold calling benefits students overall (the 2013 study), does it have a different effect on men and women?

Important note:

Dallimore’s first study more-or-less supported Team A as described above: “cold calling encourages class participation.”

Her second study starts to address the concerns of Team B. We might reasonably worry that women — who (on average) go into many classes feeling stressed about participation — will feel EXTRA stress if that participation becomes mandatory.

This second study explores that plausible concern.

Take II

Like her first study, Dallimore’s second study looks at class participation in several college Accounting classes.

The researchers divided those classes into two groups: “low” cold-calling (fewer than 25% of the questions were framed as cold calls), and “high” cold-calling (more than 33% — and as high as 84%!!).

According to survey data, male and female students went into these classes with roughly the same perceptions of class participation.

So Dallimore’s questions were:

First: Did students’ behavior change based on high- vs. low-cold-calling? And,

Second: Did gender matter for any changes?

In answer to the first question: over time, students volunteered more in the high-cold-calling classes than the low-cold-calling classes.

Whether you’re counting the percentage of students who participated or the number of questions that students asked, those numbers went up.

So, cold calling INCREASED voluntary participation.

Better and Better

Of course, we’re happy to see that cold calling increased participation. However, that finding simply replicates the 2013 study. What about the second question: did gender matter?

Well, both men and women voluntarily participated more in high-cold-calling classes. And, women’s participation increased more than men’s participation.

Specifically: 57% of men voluntarily participated in the low-cold-calling classes, whereas 73% did in the high-cold-calling classes. That’s a difference of 16 percentage points.

For women: 52% voluntarily participated in the low-cold-calling classes, whereas 82% did in the high-cold-calling classes. That’s a difference of 30 percentage points.

We get the same result if we look at the number of questions asked. Men asked more questions in high-cold-calling classes than in low-cold-calling classes; the average number went from 1.78 to 2.13.

Women asked LOTS more questions: the average went from 1.33 to 2.6.
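To make that gender comparison concrete, here is a quick back-of-the-envelope tally. (Nothing new here; it simply recomputes the gains from the numbers Dallimore and Co. report, as summarized above.)

```python
# Re-tallying the gains reported in Dallimore et al. (2019), as summarized above.
rates = {  # percent of students who voluntarily participated
    "men":   {"low_cold_call": 57, "high_cold_call": 73},
    "women": {"low_cold_call": 52, "high_cold_call": 82},
}
questions = {  # average number of questions asked per student
    "men":   {"low_cold_call": 1.78, "high_cold_call": 2.13},
    "women": {"low_cold_call": 1.33, "high_cold_call": 2.60},
}

for group in ("men", "women"):
    rate_gain = rates[group]["high_cold_call"] - rates[group]["low_cold_call"]
    q_gain = questions[group]["high_cold_call"] - questions[group]["low_cold_call"]
    print(f"{group}: participation up {rate_gain} percentage points; "
          f"questions up {q_gain:.2f} per student")

# Output:
# men: participation up 16 percentage points; questions up 0.35 per student
# women: participation up 30 percentage points; questions up 1.27 per student
```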

In brief: high-cold-calling classes increased participation for everyone — especially women.

Not So Fast

So far, Dallimore’s 2019 study seems like a slam dunk for Team A. It says, basically: “cold calling does help and doesn’t hurt.”

At the same time, I don’t think we can now rush to conclude “all teachers must cold call all the time.”

I have three reasons to hesitate:

First: both of Dallimore’s studies were done with college students. As I’ve written elsewhere, I don’t think that college students make great proxies for K-12 students. On average:

College students know more than K-12 students.

They have higher levels of academic and personal maturity.

They probably have higher levels of academic motivation — they’re in college!

So, these findings might apply to K-12 students…but we don’t have research (that I know of) to demonstrate that conclusion.

Second: as I wrote in a blog post last fall, bad cold calling does exist. As the research study described there explains, we need to refine our question.

Instead of asking: “is cold-calling a good idea?”

We should ask: “how can we hone our cold-calling technique to get its benefits without its potential harms?”

Let’s get some really good answers to that second question before we insist on spreading the practice.

Third: At least so far, research suggests that Team B’s concern — “the stress that results from cold calling hampers learning” — doesn’t hold true for most students.

At the same time, our goal is not that most students learn, but that all of them do.

We should accept the almost certainly true statement that cold calling will stress out a few students to the detriment of their learning. Part of “honing our technique” — described in my second point above — will be identifying and working with those students.

To Sum Up

Despite all the heated debate about cold calling, I think we have the beginnings of a persuasive research pool. So far — at least — it seems to encourage class participation (which should, in turn, increase learning).

Yes: we need to be good at this technique for it to work. Yes: we should respect important boundary conditions.

And, based on the research I’ve seen so far, I plan to keep using cold calling myself.

Coda

After I wrote this blog post, I discovered that LOTS of people have been adding to this debate.

Here’s Bradley Busch.

Here’s Tom Sherrington.

No doubt others have got wise ideas!


Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2013). Impact of cold-calling on student voluntary participation. Journal of Management Education, 37(3), 305-341.

Dallimore, E. J., Hertenstein, J. H., & Platt, M. B. (2019). Leveling the playing field: How cold-calling affects class discussion gender equity. Journal of Education and Learning, 8(2), 14-24.

Can students “catch” attention? Introducing “Attention Contagion”
Andrew Watson

Every teacher knows: students won’t learn much if they don’t pay attention. How can we help them do so? (In my experience, shouting “pay attention!” over and over doesn’t work very well…)

So, what else can we do?


As is so often the case, I think “what should we do?” isn’t exactly the right question.

Instead, we teachers should ask: “how should we THINK ABOUT what we do?”

When we have good answers to the “how-do-we-think?” question, we can apply those thought processes to our own classrooms and schools.

So, how should we think about attention?

Let me introduce “attention contagion”…

Invisible Peer Pressure

A research team in Canada wanted to know: can students “catch” attention from one another? How about inattention?

That is: if Student A pays attention, will that attentiveness cause Student B to pay more attention as well?

Or, if Student A seems inattentive, what happens with Student B?

To study this question, a research team led by Dr. Noah Forrin had two students — A and B — watch a 50-minute video in the same small classroom.

In this case, “Student A” was a “confederate”: that is, s/he had been trained…

to “pay attention”: that is, focus on the video and take frequent notes, or

NOT to “pay attention”: that is, slouch, take infrequent notes, glance at the clock.

Student A sat diagonally in front of Student B, visible but off to the side.

What effect did A’s behavior have on B?

Well, when A paid attention, B

… reported focusing more,

… focused more, got less drowsy, and fidgeted less,

… took more notes, and

… remembered slightly more on a subsequent multiple-choice quiz.

These results seem all the more striking because the inattentive confederate had been trained NOT to be conspicuously distracting. NO yawning. NO fidgeting. NO pen tapping.

The confederates, in other words, didn’t focus on the video, but didn’t try to draw focus themselves. That simple lack of focus — even without conspicuous, noisy distraction — sapped Student B’s attention.

Things Get Weird

So far, this study (probably) confirms teacherly intuition. I’m not terribly surprised that one student’s lack of focus has an effect on other students. (Forrin’s research team wasn’t surprised either. They had predicted all these results, and have three different theories to explain them.)

But: what happens if Student A sits diagonally BEHIND Student B, instead of diagonally in front?

Sure enough, Forrin’s team found the same results.

Student B caught Student A’s inattention, even if s/he couldn’t see it.

I have to say: that result seems quite arresting.

Forrin and Co. suggest that Student B could hear Student A taking notes — or not taking notes. And this auditory cue served as a proxy for attentiveness more broadly.

But whatever the reason, “attention contagion” happens whether or not students can see each other. (Remember: the confederates had been trained not to be audibly distracting — no sighs, no taps, no restless jostling about.)

Classroom Implications

I wrote at the top that teachers can use research to guide our thinking. So, what should we DO when we THINK about attention contagion?

To me, this idea shifts the focus somewhat from individual students to classroom norms.

That is: in the old days, I wanted that-student-right-there to pay attention. To do so, I talked to that-there-student. (“Eyes on the board, please, Bjorn.”)

If attention contagion is a thing, I can help that-student-right-there pay attention by ensuring ALL students are paying attention.

If almost ALL of my students focus, that-student-right-there might “catch” their attentiveness and focus as well.

Doug Lemov — who initially drew my attention to this study — rightly points to Peps Mccrea’s work.

Mccrea has written substantively about the importance of classroom norms. When teachers establish focus as a classroom norm right from the beginning, this extra effort will pay off down the road.

The best strategy to do so will vary from grade to grade, culture to culture, teacher to teacher. But this way of thinking can guide us in deciding what to do in our specific classroom context.

Yes, Yes: Caveats

I should point out that the concept of “attention contagion” is quite new — and its newness means we don’t have much research at all on the topic.

Forrin’s team has replicated the study with online classrooms (here) — but these are the only two studies on the topic that I know of.

And: two studies is a VERY SMALL number.

Note, too, that the research was done (for very good reasons) in a highly artificial context.

So, we have good reason to be curious about pursuing this possibility. But we should not take “attention contagion” to be a settled conclusion in educational psychology research.

TL;DR

To help our students pay attention, we can work with individual students on their behavior and focus.

And, we can emphasize classroom norms of focus — norms that might help students “catch” attention from one another.

Especially if more classroom research reinforces this practice, we can rethink attention with “contagion” in mind — and thus help our students learn.


Forrin, N. D., Huynh, A. C., Smith, A. C., Cyr, E. N., McLean, D. B., Siklos-Whillans, J., … & MacLeod, C. M. (2021). Attention spreads between students in a learning environment. Journal of Experimental Psychology: Applied, 27(2), 276.