How to monitor the academic performance of a school (1): a word of caution
Posted by Magali von Blottnitz on 09 March 2019 11:50 AM SAST
Arguably, everything we do in the education sector is ultimately for the learners – so that they can learn better. So we need to be able to ascertain if our interventions have the desired effects on learning outcomes. Monitoring this, however, is a highly complex task. Data on learner performance is far from straightforward.
In this first blog post, we explain the challenge of academic performance monitoring. Look out for the next blog post, which focuses on the kinds of data sources that can be used, or download the full Word file on this topic here.
Understandably, in PfP partnerships we want our efforts to ultimately impact the quality of the learning that is taking place at the school – and the learners’ ability to perform. In order to know whether this is the case, we need data to track how the learners at the school are performing — over time, and relative to other schools. And because educational outcomes depend in large part on the socio-economic profile of the learners, we need data that will control for this factor.
Comparative benchmarking of learning outcomes is challenging, though.
· In South Africa, since the discontinuation of the ANA tests (Annual National Assessment), there are no independently moderated school performance assessments that would allow year-on-year comparisons and comparisons with peers.
· Educational outcomes in South Africa are dualistic — in effect, two school systems in one — so average outcomes disguise huge within-system variation, making it difficult to make judgements about quality at different points along the socio-economic spectrum.
· Tests of learning outcomes are often unreliable, with very large standard errors of estimate, even for the same test. Further, changes in test design can undermine year-on-year comparability, even when the changes were intended as modest tweaks.
· Finally, in order to show positive outcomes, education systems have come up with many ways to ‘game’ tests—from ‘teaching to the test’, to constraining who actually gets to write the test.
NB: In PfP, we operate from the premise that
· schools (and school principals in particular) are often subjected to undue pressure over their learners’ academic performance, sometimes creating perverse incentives. We believe in “data-informed” rather than “data-driven” decision-making – and in using performance data to support rather than to judge.
· most of the time, the interventions that partnerships implement at schools will only impact academic performance over a prolonged period. Short-term results are sometimes interesting, but we are more interested in seeing whether improvements can be sustained over time – including after the formal PfP year has ended.
· a school’s academic performance is shaped by multiple internal and external factors, so it is almost impossible to attribute a change in performance to any one intervention.