In his influential book Visible Learning, John Hattie presents a synthesis of over 800 meta-analyses of influences on student achievement. On a number of occasions, teachers and teacher-librarians have told me that when they have advocated for inquiry learning approaches at their school, their senior administrators have not been supportive, citing Hattie’s research as showing that inquiry learning is ineffective.
As someone who sees inquiry learning as powerful, higher-order, authentic learning, I was dismayed at this news, so I decided to examine Hattie’s research. The main purpose of my investigation was to critique the inquiry learning section of Hattie’s book in order to understand the assumptions inherent in his approach to this topic. However, it is also worth summarising the range of critiques of Hattie’s work to provide background on the trustworthiness and validity of his research.
Hattie’s research has mainly been criticised on the basis of his methodology and the subsequent political and ideological misuse of his findings to justify educational policy and practice (see Terhart 2011 for an excellent analysis).
There are a number of methodological issues with the research, reported by a range of commentators. I should point out that I don’t have any expertise in statistics and quantitative research, so I am relying on the expertise of others:
- Inappropriate use and incorrect calculation of effect sizes, including an incorrect calculation of the CLE (common language effect size), which has apparently been admitted by Hattie (see bloggers’ commentaries here, here, here, here, here, here & here plus scholarly articles behind a paywall here & here)
- Problems with a lack of rigour in bundling together a range of studies that vary in focus, validity and quality (see bloggers’ commentaries here plus scholarly articles behind a paywall here & here)
- The exclusion of qualitative research, thus omitting a large body of relevant research (see here)
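For readers who, like me, lack a statistics background, the CLE issue can be made concrete. The sketch below is my own illustration (not code from Hattie or any of the critiques), using the standard definition of the common language effect size from McGraw & Wong (1992): the probability that a randomly chosen student from the treatment group outscores a randomly chosen student from the control group, assuming normally distributed scores with equal variance.

```python
from statistics import NormalDist

def cohens_d(treatment_mean, control_mean, pooled_sd):
    """Standardised mean difference (Cohen's d) between two groups."""
    return (treatment_mean - control_mean) / pooled_sd

def cle(d):
    """Common language effect size (McGraw & Wong, 1992): the probability
    that a randomly chosen treatment-group score exceeds a randomly chosen
    control-group score, assuming normal distributions of equal variance."""
    return NormalDist().cdf(d / 2 ** 0.5)

print(cle(0.0))   # no effect: a 50% chance, i.e. a coin flip
print(cle(0.4))   # Hattie's 'hinge point': roughly a 61% chance
```

Whatever the value of d, cle(d) is a probability and so stays strictly between 0 and 1, which is why reported CLE figures outside the 0–100% range signal a calculation error rather than a very large effect.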
The critiques make entertaining reading, and it’s particularly worth reading the blog comments, some of which have been contributed by Hattie himself. You may also like to read this defence of Hattie.
Hattie’s analysis of ‘inquiry-based teaching’
In the rest of this post I examine the section on inquiry learning in Hattie’s book (chapter 10: The contributions from teaching approaches part II).
Hattie uses the phrase ‘inquiry-based teaching’ rather than the more common term ‘inquiry-based learning’. He says that:
inquiry-based teaching is the art of developing challenging situations in which students are asked to observe and question phenomena; pose explanations of what they observe; devise and conduct experiments in which data are collected to support or contradict their theories; analyse data; draw conclusions from experimental data; design and build models; or any combination of these. Such learning situations are meant to be open-ended in that they do not aim to achieve a single “right” answer for a particular question being addressed, but rather involve students more in the process of observing, posing questions, engaging in experimentation or exploration, and learning to analyse and reason. (p. 208-209)
First, it is notable that Hattie privileges teaching over learning in using the term ‘inquiry-based teaching’ rather than ‘inquiry-based learning’. This is not surprising, as the chapter is aimed at identifying what teachers can do to enhance learning; however, it gives an indication of the limits of his research. In particular, inquiry-based teaching might be expected to be more teacher-centred, while inquiry-based learning might be expected to be more student-centred. The degree of teacher- or student-centredness in a particular inquiry approach may have some impact on students’ learning outcomes, so this distinction could be important.
One of the problems with meta-analyses noted by the commentators above is the issue of comparing apples with oranges. Inquiry-based learning/teaching may be defined and interpreted differently in each case. For instance, I’ve seen many examples of teachers implementing ‘inquiry-based learning’ which would arguably be better described as fact-finding. As I explain here, the level of teacher guidance and control is a strong factor in the design of inquiry curricula, and as such it is likely to impact on student learning outcomes.
Analysis of individual studies
Hattie says that he has drawn on 4 meta-analyses which cover 205 studies in total. In his commentary (p. 209) he cites 6 papers (including one EdD thesis). This confused me at first, but it seems that 2 of the 6 were based on the same data set (Bredderman 1983, 1985), while another paper he mentions in his commentary (Sweitzer & Anderson 1983) investigates the efficacy of teacher education programs, so its inclusion in the commentary seems something of a red herring.
I have been able to obtain 5 of the 6 papers. Below I’ve made notes about each of the studies. I haven’t analysed the statistical techniques or methodology as I lack the expertise to do so. Rather, I have examined other aspects such as the age of the original set of studies and the way that they portray inquiry approaches.
The first thing that strikes me about the papers is that they are all about science education. Inquiry-based learning has traditionally been dominant in science since the 1960s, but is now widespread in social studies subjects such as history and geography, as well as in literature and maths. In the Australian Curriculum, inquiry learning is used in Science, History, Geography, Civics and Citizenship, and Economics and Business.
Bredderman, T. (1983). Effects of activity-based elementary science on student outcomes: A quantitative synthesis. Review of Educational Research, 53(4), 499-518.
Bredderman, T. (1985). Laboratory programs for elementary school science: A meta-analysis of effects on learning. Science Education, 69(4), 577-591.
The Bredderman papers seem to report analyses based on the same set of 57 studies published between 1965 and 1981. The studies examine the impacts of the implementation of three ‘activity-based’ laboratory programs for elementary (primary) school science which were innovative for their time. The programs were slightly different, but all had elements of inquiry in that they emphasised direct experimentation, observation and process rather than learning facts from textbooks. My main impression was that the original set of studies was very old, and I wondered whether it is possible to make judgements about the efficacy of inquiry learning in the classrooms of today based on the classrooms of the mid 1960s!
Shymansky, J., Hedges, L., & Woodworth, G. (1990). A reassessment of the effects of inquiry-based science curricula of the 60’s [sic] on student performance. Journal of Research in Science Teaching, 27(2), 127-144.
The pedant in me was immediately annoyed at the incorrect use of an apostrophe in the title of this paper, and it made me wonder about the rigour of the journal. Interestingly, the 1985 Bredderman paper had the same error in the introduction section.
Like the Bredderman papers, this paper is also based on the science programs of the 1960s (a set of 81 studies). It is a re-analysis, using more contemporary statistical techniques, of an earlier synthesis of 105 studies published in 1983. The reasons and techniques for the re-analysis are of interest, as they acknowledge that techniques change over time, and that the methodology and findings of a study in one time period cannot necessarily be directly compared with those of studies in a later era. This is one of the criticisms of meta-analyses in general. In this case, the authors say ‘the application of refined statistical procedures in the re-synthesis yielded results of greater precision than those generated in the original study’ (p. 127).
In this paper, ‘inquiry-orientated’ curricula were taken to be those which incorporated laboratory work, were process-based and which ‘emphasized higher order cognitive skills and an appreciation of science’ (p. 131). Interestingly, the programs were the same ones analysed in the set of studies used by Bredderman, although Bredderman’s set of studies extended to the early 1980s. This means that three of the papers in Hattie’s set are skewed towards the same underlying studies, reported in different papers using different statistical methods.
Sweitzer, G., & Anderson, R. (1983). A meta-analysis of research on science teacher education practices associated with inquiry strategy. Journal of Research in Science Teaching, 20(5), 453-466.
As mentioned above, this paper investigates the efficacy of inquiry teacher education programs and thus is not directly relevant to Hattie’s conclusions about the impact of inquiry programs on student learning. It draws on a set of 68 studies from 1965-1980 and thus reflects a different era of educational practice which may not be relevant to the classrooms of today.
Smith, D. (1996). A meta-analysis of student outcomes attributable to the teaching of science as inquiry as compared to traditional methodology. Doctor of Education thesis, Temple University.
This EdD thesis examined 79 studies published between 1965 and 1993. This analysis also suffers from age, as only two studies were from the 1990s. Smith’s description of inquiry approaches is sound, so I have reasonable confidence that the studies she has included in her set do address inquiry curricula as it is understood in science.
What conclusions can we draw?
As I mentioned above, the sets of studies are old and are not likely to reflect contemporary classroom practice. Also, they all report on science curricula; in particular, most of the studies compare traditional fact-based, textbook-based teaching with experiential, experimental laboratory approaches. There is no indication that these results are relevant to any other subject area such as history, geography or mathematics, or indeed to a contemporary science classroom.
It’s important to note that all of the meta-analyses found positive impacts of inquiry approaches. Hattie found that inquiry-based teaching had an effect size of 0.31. Critiques of Hattie’s work have noted the problem with his arbitrary cut-off of 0.4, which implies that any approach with an effect size below 0.4 is not worth pursuing.
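To see what an effect size of 0.31 means in practical terms, here is a back-of-the-envelope illustration (my own, not a calculation from Hattie or his critics). Assuming normally distributed scores with equal variance, it converts d into the percentile an average control-group student would occupy in the treatment-group distribution.

```python
from statistics import NormalDist

def percentile_shift(d):
    """Percentile that an average (50th-percentile) control-group student
    would reach under the treatment, assuming normal score distributions
    with equal variance."""
    return 100 * NormalDist().cdf(d)

print(round(percentile_shift(0.31)))  # ~62: hardly a negligible gain
print(round(percentile_shift(0.40)))  # ~66: Hattie's cut-off
```

On this reading, a d of 0.31 moves an average student from the 50th to roughly the 62nd percentile, which makes the blanket dismissal of anything under the 0.4 cut-off look rather less self-evident.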
There are a number of other issues that are not possible to examine here due to the extensive amount of time it would take to look at each study individually. The problem with conducting such a huge study is that the sheer number of papers to be read, evaluated and analysed means that there are greater margins for error. For instance, Richard Olsen points out the problem with validity in terms of measuring the impact of inquiry learning ‘using test scores to measure things that may not be able to be measured meaningfully with test scores.’ For example, a standardised, multiple-choice test may not be a valid measurement of the impact of inquiry teaching/learning.
It’s hard to see that any useful conclusions to improve teaching and learning can be drawn from Hattie’s analysis. And it’s alarming that school administrators are using Hattie’s research to dismiss inquiry approaches out of hand.