By Francis Green
There is a plausible argument that the opening of a free school in an area might spur other schools to improve in the long run: it is a basic tenet of market competition that producers – in this case schools – will respond to such pressure. But many other factors lie behind good management of schools – including working with the community – so it is far from clear how much effect the opening of a new local school would have in practice, and whether the fact that it was a free school would make a difference.
Most commentators have realised that it will take time to evaluate the impact of free schools, not only on their own pupils but also on others. However, Policy Exchange have recently taken a creative early look in their report “A Rising Tide: The Competitive Benefits of Free Schools”. The title sums up their findings: free schools do indeed make a difference. But do they? How convincing is the analysis? Ideological opponents have panned the report; what should an impartial reader make of it?
It is far too early to quantify how children at free schools are doing, because it will be a few years until their first cohorts sit their GCSEs or Key Stage 2 tests. The creative angle that Policy Exchange take is to propose that, nevertheless, the competitive threat of a free school should already be raising the GCSE performance and KS2 test scores of older pupils at nearby schools. Others have commented on the implausibility of claiming such an immediate effect. Yet Policy Exchange claim that the data prove their case. Calling it a “systemic” effect, they state: “Free schools are helping to raise standards not just for the pupils who attend them but for other pupils across the local community – especially for those in lower performing schools”.
They compare the change in performance (up to 2014) of the three schools closest to each newly opened free school with the change in performance of all other schools over the same period, taking into account the different times at which the free schools opened.
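As I read it, the core of the comparison is simply a difference in average change scores between the two groups. A minimal sketch of that calculation in Python – the table, column names and numbers here are all my own invention for illustration, not Policy Exchange's data:

```python
import pandas as pd

# Hypothetical data: one row per school, with performance measured in the
# year before the nearby free school opened and again in 2014.
schools = pd.DataFrame({
    "school_id":    [1, 2, 3, 4, 5, 6],
    "near_free":    [True, True, True, False, False, False],  # among the 3 closest?
    "score_before": [55.0, 60.0, 48.0, 52.0, 63.0, 58.0],     # % reaching benchmark
    "score_2014":   [58.0, 61.0, 55.0, 54.0, 62.0, 59.0],
})

# Change in performance for each school over the period.
schools["change"] = schools["score_2014"] - schools["score_before"]

# Average improvement for schools closest to a free school versus all others.
print(schools.groupby("near_free")["change"].mean())
```

The report's version is more elaborate, since it aligns each comparison to the free school's opening year, but the quantity of interest is this difference in average changes.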
They find that, overall, there is little difference between the two groups’ performances in GCSEs or KS2 tests. This is not the result that they headline.
However, when they divide schools into quartiles according to their initial performance, they find that the initially lower-performing schools improve more on average than the initially better-performing schools. Moreover, this difference is greater for the group of closest schools than for all other schools. Thus, to take one example from their crucial Table 2.4 (for primary schools): for the initially lowest quartile they report a 12 percentage point improvement among the schools closest to free schools, compared with a 4 percentage point rise among all schools nationally. This is the story that they headline.
But their evidence is not convincing, and is almost certainly the consequence of a classic statistical phenomenon: “regression towards the mean”. Put briefly, when something is subject to variation – like a school’s performance – then if you first observe it in a year when performance happens to be on the high side, the chances are that it will be lower the next year; and vice versa: if you first observe it when it is on the low side, it will on average be higher next time around. First noted by the Victorian polymath Francis Galton, this phenomenon has been plaguing researchers ever since.
Thus, when performance changes between two periods following an intervention (in this case, the opening of a free school), it is important to consider whether the change results from the intervention or from regression towards the mean. One way to check is to ask: what would you expect to happen to the better-performing close schools? If they too are subject to the competition of the free school, they might also improve, though perhaps by less if there is less room for improvement. Certainly, there is no reason to think they would get worse (and if they did, would that not be worrying?). On the other hand, if regression towards the mean is at work, the performance of the initially high-performing schools would decline on average.
So what did in fact happen? Answer: the initially better-performing schools did indeed decline in average performance, and declined faster among the schools close to free schools than among all other schools.
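To see how easily pure noise generates exactly this pattern, here is a minimal simulation – entirely my own illustration, with made-up parameters, not the report's data. Each “school” has a fixed underlying level; observed scores merely add independent year-to-year noise; and schools are sorted into quartiles by their first-year score, as the report does:

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools = 10_000

# Each school has a stable "true" level; observed scores are the true level
# plus independent year-to-year noise. Nothing real changes between years.
true_level = rng.normal(60, 8, n_schools)
year1 = true_level + rng.normal(0, 6, n_schools)
year2 = true_level + rng.normal(0, 6, n_schools)

# Sort schools into quartiles by their year-1 score.
quartile = np.digitize(year1, np.quantile(year1, [0.25, 0.5, 0.75]))

for q in range(4):
    change = (year2 - year1)[quartile == q].mean()
    print(f"quartile {q + 1}: average change = {change:+.1f} points")
```

Run this and the bottom quartile shows a sizeable average improvement while the top quartile shows a decline, even though no school’s underlying level changed at all.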
The report does not discuss this. It headlines only the finding about the bottom quartile, where the schools close to free schools do well. But it is not warranted to conclude, as the report does, that this is because free schools really did raise the performance of poorly performing local schools.
I have said that the changes are probably due to regression towards the mean. The group of closest schools is, naturally, smaller and more variable than the group of all other schools, which might well account for the differences between them. However, it is hard to be sure about the differences stated in the report: it gives few figures for sample sizes and makes no mention of statistical tests or confidence intervals, so one cannot tell whether differences of a few percentage points are statistically significant.
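A rough back-of-the-envelope check illustrates the point. Taking the 12-point versus 4-point gap from Table 2.4, but with invented sample sizes and spreads (the report publishes neither), a standard confidence interval for the difference in mean changes can easily straddle zero:

```python
import math

# Hypothetical figures, purely to show the scale of the uncertainty:
# a small "closest schools" quartile cell versus a large national one.
n_close, mean_close, sd_close = 20, 12.0, 20.0
n_all,   mean_all,   sd_all   = 2000, 4.0, 20.0

diff = mean_close - mean_all
se = math.sqrt(sd_close**2 / n_close + sd_all**2 / n_all)

# Approximate 95% confidence interval for the difference in mean changes.
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.1f} points, 95% CI = ({lo:.1f}, {hi:.1f})")
```

With these invented figures the interval comfortably spans zero; with other plausible figures it would not – which is exactly why the report needed to publish the numbers.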
One is also struck by the disjuncture between their welcome caution that “correlation should not be mistaken for causation” and the strength of the headline claim.
Policy Exchange were prominent among the thinktanks promoting the idea of free schools while the Labour Government was still in power. They have stated their commitment to evidence-based policy, and it is good that they are now interested in assessing how things are turning out. But they need to be rather more careful if they want to convince impartial readers, and prepared to accept less favourable evidence for their cause if that is what emerges.