The Learning Impact Fund’s remit is to identify, fund and evaluate programs that will raise the academic achievement of children in Australia, especially those from economically disadvantaged backgrounds. The fund aims to support the growth of those programs which have been shown to work best at raising achievement. A Learning Impact Fund grant intends to achieve two outputs:
- A well-delivered intervention that has the potential to improve the academic achievement of children;
- A robust independent evaluation of the intervention, which includes an estimate of its impact on achievement, an estimate of its cost per student, and a rating of the strength of the evaluation.
Our approach to evaluation
Our approach builds on the successful approach of the Education Endowment Foundation, which has been funding programs and rigorous evaluations in England since 2011.
A central aim of Evidence for Learning is to improve knowledge and extend the evidence-base on what works and why to raise the achievement of Australian students. To achieve this, all Learning Impact Fund projects will be rigorously evaluated by independent experts in educational research. These evaluations will be funded by the Learning Impact Fund.
The impact of projects on achievement will be evaluated, where possible, using randomised controlled trials, with a linked process evaluation to understand the elements of successful delivery. Evaluations will be conducted by one of Evidence for Learning's independent panel of evaluators.
Evidence for Learning takes a cumulative approach to evaluation. The size of an evaluation, and therefore the number of schools or projects we would require grantees to work with, will be determined by what we already know: whether there is a need to pilot a new approach, or to demonstrate that an intervention can work at scale.
[Diagram: Program scale in schools]
The diagram above shows where in a program’s lifecycle the Learning Impact Fund intends to work. Two main features of a program will determine its place along this continuum:
- The degree to which it has been well-defined and codified; and
- The strength of the evidence that it is effective.
The Learning Impact Fund does not support early-stage programs that have not yet been well-defined or delivered outside of a single school.
Types of trial
- Pilot trials: Test the effect of programs and support their codification prior to large-scale research trials through an independent developmental evaluation.
- Efficacy trials: Evaluate the effect of well-codified, promising programs to test whether they can deliver on their promise under ideal and ‘controlled’ conditions, and support their codification prior to scaling across more schools.
- Effectiveness trials: Evaluate the effect of well-codified, promising programs to test whether they can deliver full-scale as intended, across more schools under ‘real world’ conditions.
Evidence for Learning believes that programs that have demonstrated impact in the effectiveness stage may be good candidates for system investment and support for further scale. However, the Learning Impact Fund does not intend to support programs in this scale-up phase.
We want to share our research about what works to raise student achievement. As projects progress, we will work with our partners to integrate the results of all evaluations into the summary of evidence for practitioners in the Teaching & Learning Toolkit.
As projects are completed, we will feature evaluation reports and examples of approaches that work, along with notes about how they work. Our approach to evaluation is rigorous and transparent. All the research we commission will be published on this website, regardless of the outcome.
Effect sizes and months' impact
Evidence for Learning evaluations translate effect sizes into months' impact. Months' impact is estimated in terms of the additional months' progress you can expect students to make as a result of an approach being used in school, taking average student progress over a year as a benchmark. This approach provides a simple way to translate effect sizes into a measure that is meaningful for school leaders, teachers and policy makers.
These impact estimations are based on ‘effect sizes’ reported in comparative data (see Table 1 below). Effect sizes are quantitative measures of the impact of different interventions on learning. The Teaching & Learning Toolkit prioritises effect sizes derived from systematic reviews of research and quantitative syntheses of data, such as meta-analyses of experimental studies. To be included in the analysis, an approach needs to have some quantifiable evidence base for comparison. The Learning Impact Fund evaluation effect sizes are based on the Toolkit’s impact estimates but also take lower effect sizes into account.
Table 1: The Learning Impact Fund effect size translation to months’ progress.
| Months' impact | Effect size from ... | ... to | Description |
|---|---|---|---|
| 0 | -0.04 | 0.04 | Very low to no effect |
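The translation above can be sketched in code. This is a minimal illustration, not Evidence for Learning's actual method: it computes a standardised mean difference (Cohen's d, one common definition of effect size) from treatment and control scores, then maps it to months' impact using only the single band reproduced in Table 1. The function names and the error for out-of-band values are illustrative assumptions.

```python
import math
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference between two groups (Cohen's d,
    using a pooled standard deviation). One common effect-size measure;
    the Toolkit draws on meta-analyses that aggregate such values."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

def months_impact(d):
    """Translate an effect size to months' impact.
    Only the first band of Table 1 is reproduced in this document,
    so only that band is encoded here."""
    if -0.04 <= d <= 0.04:
        return 0  # 'Very low to no effect'
    raise ValueError("effect size outside the band reproduced in Table 1")
```

For example, two groups with identical score distributions yield an effect size of 0, which falls in the 0-months band.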
See supporting document on Evidence for Learning’s reporting and interpretation of statistical significance.