Because scientists work by inquiring, it makes rough-and-ready sense that we should teach science through the process of inquiry. Indeed “inquiry-based learning,” like “problem-based” and “project-based” learning, emphasizes students’ construction of their own understanding.
According to a well-known definition, this pedagogy focuses on students…
… asking questions,
… planning and conducting investigations,
… using appropriate tools and techniques to gather data,
… thinking critically and logically about relationships between evidence and explanations,
… constructing and analyzing alternative explanations,
And so forth.
Of course, we should also inquire: does inquiry-based learning in fact help students learn? This question leads to lots of controversy…
We can explore that question in several ways.
We might, for instance, have one group of students learn a topic through inquiry learning, and a control group learn it through direct instruction. When we test them later, we’ll get a good sense of who learned the material better.
That method — if we do everything right — gives us a clear answer.
But: it focuses on a small group of people learning only one thing. Who knows if that clear answer applies in other circumstances?
Or, we might look at large groups of people who studied many things. If we can find out what method their teachers used, and measure how well they learned, we’ve got another useful strategy for answering our question.
Of course, we’ll be less certain about the quality of the teaching than in the highly controlled environment. Who knows if the inquiry-based teaching was, in fact, well done?
Following this second approach, researchers in the UK looked at PISA data (PISA = Programme for International Student Assessment), and aligned it with high-stakes testing scores in England: the GCSE (General Certificate of Secondary Education).
The PISA data help here because students report how much time they spent in various inquiry-learning practices: “every lesson, most lessons, some lessons, never.” For instance, students rate how often they are “allowed to design their own experiments.”
So: by linking PISA data about teaching practices with GCSE scores, those researchers can draw some conclusions about the effectiveness of inquiry learning.
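To make the linking idea concrete, here is a minimal sketch of the general approach: encode the PISA frequency responses on an ordinal scale, then compare attainment across those levels. Everything here is invented for illustration (the toy scores, the variable names); the actual study uses national pupil records and far more careful statistical modeling.

```python
# Hypothetical sketch: map PISA-style frequency responses to an ordinal
# scale, then compute mean attainment at each reported frequency level.
# The student records below are invented toy data, not study results.
from collections import defaultdict

FREQUENCY = {"never": 0, "some lessons": 1, "most lessons": 2, "every lesson": 3}

# Each record: (reported inquiry frequency, GCSE-style attainment score).
students = [
    ("some lessons", 6.1),
    ("most lessons", 6.4),
    ("never", 5.8),
    ("every lesson", 5.9),
]

# Group scores by ordinal frequency level.
by_level = defaultdict(list)
for response, score in students:
    by_level[FREQUENCY[response]].append(score)

# Mean attainment per level of reported inquiry frequency.
means = {level: sum(scores) / len(scores) for level, scores in by_level.items()}
for level in sorted(means):
    print(level, round(means[level], 2))
```

With real data, each frequency level would hold thousands of students, and the analysis would control for background variables before comparing attainment across levels.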
What did they find?
Negotiating Our Own Biases
Before I answer that question, let’s acknowledge a problem.
Many teachers already have opinions — strong opinions — about inquiry learning.
Those opinions bias our responses to new information.
If (for example), I don’t think inquiry learning works, and this research shows that it does, I’m inclined to dismiss the study.
“Well, look at all of these methodological problems!”
Yet (the contrary example) if I’m an inquiry-learning champion, research showing its effectiveness automatically seems wise and well-designed.
“Gosh: it’s hard to think of a fairer way to answer this question! Now we KNOW it works…”
So, here’s my suggestion: decide right now — before you know what the researchers concluded — whether or not you’re confident in this study design.
All study designs have some flaws. This one, for instance, relies on student self-report. And, as noted above, it doesn’t have any way to control for the quality of inquiry learning practices.
You might reasonably conclude those flaws are too great. In that case, you don’t get to cite this study even if it ends up confirming your beliefs.
Or, you might reasonably conclude that — on average — errors in self-report and quality control will balance themselves out; and this research method gives a big-enough data pool to draw meaningful conclusions. In that case, you have to pay attention to the study even if it contradicts your beliefs.
So: push yourself to decide now…
The Envelope Please
Now that you’ve decided to give credence to this methodology (or not to do so), here’s what they found.
“In summary, neither high inquiry with low guidance, nor high guidance with low inquiry are related to improved science attainment.” (emphasis added)
That is: pure inquiry learning doesn’t result in more learning than plain-old explanation. And (as Professor Christian Bokhove notes) it doesn’t result in less learning either.
“There is, however, some tentative evidence that moderate levels of inquiry delivered in conjunction with high guidance may have a small positive impact upon science achievement.” (emphasis added)
That is: lots of plain-old guidance PLUS a moderate amount of inquiry gives a little boost.
By the way: what do the researchers count as “guidance”? Here, they’re looking at PISA questions about teachers’ suggestions for improving performance, meeting learning goals, extra help, and so forth.
As I noted above, the “direct instruction vs. inquiry” debate generates A LOT of passion.
I think this study invites us to step back and get some perspective. It seems that — at least in this research paradigm — a healthy dose of teacher explanation and support helps students learn. And, some degree of independent inquiry enhances that result. (“Small positive impact.”)
But: inquiry learning neither yields a whole new level of scientific understanding, nor prevents students from learning much at all.
I suspect that, ultimately, we need to ask more granular questions.
Does inquiry give a greater lift in later grades than earlier ones? Perhaps it helps when scientific findings confirm our prior knowledge, but not when they contradict it? Does the teacher’s experience level matter?
Research into these questions might give us specific and practical classroom guidance.