A Year's Progress?

Since all of Hattie's CLE calculations were shown to be incorrect, he has shifted focus to promoting an effect size of d = 0.40 as the 'hinge point' for identifying what is and what is not effective, claiming it is equivalent to advancing a child's achievement by one year (VL 2012 summary, p3).
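
For reference, the d in this claim is the conventional standardized mean difference. A minimal sketch of the simple gain-score form is below; the function name and numbers are purely illustrative, and the individual meta-analyses behind VL computed their effect sizes in a variety of ways:

    # Minimal sketch: standardized gain ("effect size") for a cohort over a year.
    # Assumes the simple pre/post form of d; studies in VL used a mix of methods.
    def effect_size_gain(mean_post, mean_pre, sd):
        """How many standard deviations the cohort's average score moved."""
        return (mean_post - mean_pre) / sd

    # Illustrative (made-up) numbers: a cohort averaging 50 at the start of the
    # year and 56 at the end, with a score spread (SD) of 15, gives d = 0.40 --
    # exactly the value Hattie equates with one year's progress.
    print(effect_size_gain(56, 50, 15))  # 0.4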

Hattie says,

'I would go further and claim that those students who do not achieve at least a 0.40 improvement in a year are going backwards...' (p250).

In terms of teacher assessment, he takes this a step further by declaring that teachers who do not attain an effect size of 0.40 are 'below average' (Hattie, 2010, p87).

This interpretation is a major concern for a number of reasons, not the least of which is Hattie's financial interest in a teacher-assessment software program, e-asTTle, and in performance pay.

He did, however, backtrack in his VL 2012 summary,

'I did not say that we use this hinge point for making decisions, but rather we used it to start discussions' (p14).

THE NEED FOR BENCHMARK EFFECT SIZES:


Bloom et al (2007, abstract) argue that,

'there is no universal guideline or rule of thumb for judging the practical importance or substantive significance of a standardized effect size estimate for an intervention. Instead one must develop empirical benchmarks of comparison that reflect the nature of the intervention being evaluated, its target population, and the outcome measure or measures being used. We apply this approach to the assessment of effect size measures for educational interventions designed to improve student academic achievement.' (See the table of US benchmarks below.)

Lipsey et al (2012, p12) state,

'Cohen’s broad categories of small, medium and large are clearly not tailored to the effects of intervention studies in education, much less any specific domain of education interventions, outcomes, and samples. Using those categories to characterize effect sizes from education studies, therefore, can be quite misleading. It is rather like characterizing a child’s height as small, medium, or large, not by reference to the distribution of values for children of similar age and gender, but by reference to a distribution for all vertebrate mammals.'

The United States Department of Education commissioned a more detailed study of effect-size benchmarks for K-12, using national testing data from across the USA (table of results below, from Lipsey et al (2012, p28)):

[Table: empirical benchmark effect sizes for annual achievement gains, by grade level, from Lipsey et al (2012, p28)]
Hattie acknowledges these results in his VL 2012 summary but uses them to justify his 'hinge point' of d = 0.40, saying, "the effects for each year were greater in younger and lower in older grades ... we may need to expect more from younger grades (d > 0.60) than for older grades (d > 0.30)" (p14).

So Hattie's minor adjustment misses the HUGE variation from younger to older students. He also does not address the many meta-analyses that use much older college-level students and practising professionals, such as doctors and nurses, e.g., 'self-report grades', 'problem-based learning', 'worked examples', etc.

This HUGE variation is a major confounding variable in Hattie's method of comparing effect sizes: the difference between two influences could simply be due to the age of the students being measured (see the rough calculation below).
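
The annual-gain values used in this rough calculation are approximate, illustrative figures in the spirit of the Lipsey et al benchmarks cited above, not exact numbers from their table:

    # Why grade level confounds effect-size comparisons: the same observed d
    # represents very different amounts of "typical yearly progress" at
    # different ages. Annual-gain values below are illustrative approximations.
    typical_annual_gain = {"grade 1": 1.5, "grade 5": 0.4, "grade 10": 0.2}

    observed_d = 0.40  # Hattie's hinge point
    for grade, gain in typical_annual_gain.items():
        print(f"{grade}: d = {observed_d} is about {observed_d / gain:.1f} years of typical progress")

    # grade 1:  about 0.3 of a year (well under a year's growth)
    # grade 5:  about 1.0 year
    # grade 10: about 2.0 years (far more than a year's growth)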

Further, Professor Dylan Wiliam has also identified that meta-analyses need to control for the time period over which each study is conducted. Hattie does NOT do this.

Hattie finally admits this in his 2015 article, in which he responds to some of the critiques:

'Yes, the time over which any intervention is conducted can matter (we find that calculations over less than 10-12 weeks can be unstable, the time is too short to engender change, and you end up doing too much assessment relative to teaching). These are critical moderators of the overall effect-sizes and any use of hinge=.4 should, of course, take these into account.'

Yet Hattie DOES NOT take this into account: there has been no attempt to detail and report the time over which the studies ran, nor the age group of the students in question.
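
A toy calculation shows why the missing durations matter. The linear pro-rating below is purely illustrative (as Hattie's own quote concedes, very short studies are unstable, so real comparisons need reported durations, not extrapolations):

    # Toy illustration: the same observed effect means very different things as
    # a per-year rate depending on how long the study ran. Linear pro-rating is
    # used only to make the point; it is not a defensible correction.
    def per_year_equivalent(observed_d, study_weeks, school_year_weeks=40):
        # school_year_weeks = 40 is an assumed, illustrative figure
        return observed_d * (school_year_weeks / study_weeks)

    print(per_year_equivalent(0.30, study_weeks=10))  # 1.2 per school year
    print(per_year_equivalent(0.30, study_weeks=40))  # 0.3 per school year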

Also, the landmark US study goes on to state:
"The usefulness of these empirical benchmarks depends on the degree to which they are drawn from high-quality studies and the degree to which they summarise effect sizes with regard to similar types of interventions, target populations, and outcome measures." 

The study also defined the criteria for accepting a research study (i.e., the quality required):
  • Search for published and unpublished research dated 1995 or later.
  • Specialised groups, such as special education students, were not included.
  • Also, to ensure that the effect sizes extracted from these reports were relatively good indications of actual intervention effects, studies were restricted to those using random-assignment designs (that is, method 1 as explained in the effect sizes section; a sketch of this calculation follows the list) with practice-as-usual control groups and attrition rates no higher than 20% (p33).
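
For concreteness, a minimal sketch of that treatment-versus-control calculation (method 1) is below; the function name and example scores are illustrative only:

    # Minimal sketch of the treatment-vs-control effect size ("method 1"):
    # difference of group means divided by the pooled standard deviation.
    from statistics import mean, stdev

    def cohens_d(treatment_scores, control_scores):
        """Standardized mean difference between treatment and control groups."""
        n_t, n_c = len(treatment_scores), len(control_scores)
        var_t, var_c = stdev(treatment_scores) ** 2, stdev(control_scores) ** 2
        pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
        return (mean(treatment_scores) - mean(control_scores)) / pooled_sd

    # Illustrative scores only:
    print(cohens_d([68, 72, 75, 80], [60, 65, 70, 73]))  # roughly 1.25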

NOTE: using these criteria, virtually NONE of the 800+ meta-analyses in VL would pass the quality test!


Wecker et al (2016, p33) also question the notion that an effect size of 0.4 corresponds to a year's progress:

"An observed effect size of, for example, 0.3 would obviously hardly correspond to the magnitude of the increase in competence over the course of a school year".