I’ve spent a lot of time over the last few years working with school leaders, civil servants and others on the Progress 8 measure, hoping to improve everyone’s understanding of how it works, what we can and cannot infer from it, and what its pitfalls are.
No single performance measure is ever going to be perfect, but Progress 8 has a lot more going for it than its predecessor, five or more GCSEs at grades A*-C including English and maths. Its inclusive nature, with every pupil’s grades contributing something to the school’s performance, and the fact that all grades count are a much better reflection of the moral purpose behind school leadership. Schools with different levels of prior attainment also now have a chance to do well; it has been great to see that schools of all types can achieve very high scores under the new measure.
Those of us who have been involved with performance measures for some time anticipated some of the problems Progress 8 would bring. When contextual value added (CVA) was the headline measure, schools found that pupils with very low scores had a disproportionate effect on overall results. The same has proved true with Progress 8, and in its most recent Statement of Intent the DfE has committed to working with the profession to address it.
Another problem was the long delay between the publication of exam results in August and schools finding out what those results meant for their performance. Because progress measures are relative, depending on the performance of all other schools, schools have to wait until the DfE tells them how well they have done.
Or do they?
Unlike CVA, Progress 8 has no complex statistical modelling underpinning it; it is based on the simple average of the results of students with the same Key Stage 2 test score. All you need is each child’s results matched to prior attainment. Crucially, while the national averages are not known until the DfE publishes them, you can estimate them quite accurately if you have enough results, and this was the driving force behind the collaboration between ASCL, SISRA and well over a thousand schools this year. With around 180,000 students’ results, the accuracy of the collaboration’s estimates compared with the figures the DfE published several weeks later was very impressive, and incredibly helpful to schools – just read Graeme Smith’s blog on this to see why.
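To make the simplicity of the calculation concrete, here is a minimal sketch of how pooled results could be used to estimate Progress 8. It assumes the published DfE method: each pupil’s score is their Attainment 8 total minus the average Attainment 8 of pupils with the same Key Stage 2 prior-attainment group, divided by 10, and a school’s score is the mean of its pupils’ scores. The field names (`school`, `ks2`, `a8`) are illustrative, not from any real system, and the benchmarks here come from the pooled sample rather than the national cohort – exactly the approximation the collaboration relies on.

```python
from collections import defaultdict
from statistics import mean

def progress8_estimates(pupils):
    """Estimate Progress 8 from a pooled list of pupil results.

    Each pupil is a dict with illustrative keys:
      'school' - school identifier
      'ks2'    - Key Stage 2 prior-attainment group used for benchmarking
      'a8'     - Attainment 8 score (total points across the slots)

    Returns (benchmarks, school_p8): estimated average Attainment 8 per
    KS2 group, and each school's estimated Progress 8 score.
    """
    # 1. Benchmark: average Attainment 8 of all pooled pupils sharing the
    #    same KS2 prior-attainment group. With enough pupils this should
    #    approximate the national averages the DfE publishes later.
    by_ks2 = defaultdict(list)
    for p in pupils:
        by_ks2[p['ks2']].append(p['a8'])
    benchmarks = {ks2: mean(scores) for ks2, scores in by_ks2.items()}

    # 2. Pupil-level Progress 8: difference from the benchmark for the
    #    pupil's KS2 group, divided by 10 to express it per slot.
    by_school = defaultdict(list)
    for p in pupils:
        by_school[p['school']].append((p['a8'] - benchmarks[p['ks2']]) / 10)

    # 3. School-level Progress 8: the simple mean of pupil scores.
    school_p8 = {s: mean(scores) for s, scores in by_school.items()}
    return benchmarks, school_p8
```

The accuracy of the estimate then rests entirely on step 1: the more pooled pupils in each prior-attainment group, the closer the sample benchmarks sit to the eventual national figures.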
But why stop there? As powerful as it was, the collaboration between schools in 2017 could be just the tip of the iceberg. We can (and I hope we will) do the same again in 2018, but we should not feel limited to the calculation of early estimates of DfE measures. If that is all we do we will be missing a golden opportunity.
For me, the really important and deeply reassuring lesson from this collaboration is that schools and their leaders will gladly work together when they see purpose and value in doing so. We now have a chance to reclaim accountability so that it is properly focussed on what it needs to be: our students. Let’s work together to do just that.
What are your views?
As 2018 approaches, and so that we don’t miss this golden opportunity, it would greatly help our discussions to gather your views on what summary data would be most useful to you. Please email us at email@example.com with your ideas; we look forward to hearing from you.