Evidence for Learning: Climbing the pyramid to get a better view

Evidence for Learning’s Learning Impact Fund: a new way to fund and manage research trials.

A little over three years ago, Evidence for Learning launched our Learning Impact Fund: a model, new to Australia, that funds and manages research trials pairing a promising program with an independent, expert evaluator to conduct a mixed-method, randomised controlled[1] trial during a school year. Basically, this means we search for causation, testing whether the program causes more learning to occur for one group than for a similar comparison (or control) group.

At the end of the trial, Evidence for Learning works with the evaluator to produce a plain English report and commentary. This includes simple ratings of the program’s impact (expressed as additional months of learning) and of the security of the evidence behind that finding.

Climbing the pyramid

So far we have commissioned three trials covering professional learning, literacy and numeracy programs. We are now in the final stages of preparing our ‘practitioner-friendly’ reports, which will be available free and open access on the Evidence for Learning website from September.

In this blog I wish to applaud the three program developers who put their programs forward for evaluation and to explain why their contribution to Australian education extends far beyond the results of their individual trials.

Each program developer has worked for many years on their initiative. They identified a specific achievement challenge for learners, they toiled to create a solution using their best knowledge and expertise, they worked with schools to implement their program, they gathered data and conducted their own assessment of impact to refine and improve the offering. All of this is admirable, but their next action is truly brave – they agreed to a wholly independent evaluation to assess the impact of their program on student achievement in the most rigorous way possible and to have these findings made freely available to the profession and the public.

This is brave for two reasons:

  1. We know that impact on student achievement takes time and may be incremental. More rigorous studies commonly yield smaller effect sizes than program developers have seen in their own earlier evaluations.
  2. They risk unfair comparisons of the benefit of their program to others that have not been subject to the same level of scrutiny.

On the first point, the experience of our UK partner, the Education Endowment Foundation, is instructive. Having run more than 100 rigorous trials, they have found that:

Developers will often be confident in their programs because they have prior research or their own data showing improvements in student learning. What they often haven’t done before is test how much more their program delivers than the natural learning growth that would have happened anyway, or that might have occurred with another approach. Even when a developer conducts their own evaluation, they might understandably give it the ideal conditions to succeed, or recalibrate the program or the assessment model. These options are not available to busy teachers or schools, who have to implement the program in the conditions and with the resources they have before them.

The second reason highlights the wider contribution these brave program developers are making to Australia’s and the world’s education evidence base. Programs and practices are vigorously debated in education, with competing views and opinions. Sometimes advocates for an approach are unbalanced in their use of data and findings; they use evidence as lawyers do, to make their case or win the argument, rather than as scientists do, gathering evidence to test, support or disprove a hypothesis.

A less courageous program developer might be more inclined to stay in the safe haven of anecdote and case study, but this is not the path that will advance our collective understanding or make the greatest difference to the students we seek to help.

There is a helpful pyramid of evidence that lets us more critically compare claims or statements about impact. The pyramid in Figure 1 shows there is much more knowledge at the anecdote and opinion level but also that it is less reliable. The further up the pyramid that research sits, the more confidence and trust we can have in its conclusions, and the rarer that research is.

Figure 1: Hierarchy of evidence (Deeble & Vaughan 2018)

It is this commitment to moving our collective understanding up the pyramid that motivates the program developers in our trials: to test, at the next level of confidence, whether their approach makes the difference that helps students take the next step forward in their learning and achievement.

The program developers who have allowed us to ‘peek under the hood’ and report openly on what we learn should be celebrated for their commitment to improving our collective wisdom. They deserve more than our thanks, however; we owe it to them to do two things with the outputs from the trials:

  1. We should consider the finding of impact (the additional months’ worth of learning) together with the finding of security (how much confidence the evidence gives us in that result). A modest gain of 1 or 2 months is noteworthy if we have a high degree of confidence in that finding, and we should pay more heed to it than to an alternative approach’s claimed gain of 6, 7 or 8 months that does not have the same quality of evidence to support it; and
  2. We should consider more than just the headlines. Within the report are more nuanced findings about whether the program worked better for some students or a specific type of knowledge or skill. It will also report on how the program worked in practice and how the implementation or operation of the program affected the results. We should use these findings to look for the active, beneficial ingredients – either to improve this recipe or to carry over to a new one.

This is for all of our benefit: for students, who have only one childhood in which to build their foundational literacy, numeracy and other knowledge and skills; for teachers and leaders, who strive to create the best opportunities for their learners with limited time and resources; and for the rest of us, who wish to better support learners and educators in their vital endeavour.

Matthew Deeble is the Director of Evidence for Learning. He is responsible for Evidence for Learning’s strategy, fundraising, system engagement and promotion.

References

Deeble, M & Vaughan, T 2018, ‘An evidence broker for Australian schools’, Centre for Strategic Education, Occasional Paper no. 155, pp. 1–20, viewed 24 July 2018, http://www.evidenceforlearning.org.au/index.php/evidence-informed-educators/an-evidence-broker-for-australian-schools/.

[1] This is a research model where a pool of schools or students is recruited and then randomly assigned either to a group that receives the ‘intervention’ or to a control group that continues with ‘business as usual’ practice. Both groups are tested (for literacy or numeracy levels and gains) at the start and at the end of the trial, and the results are compared. We can then see whether the ‘intervention’ actually led to gains in learning, but crucially also whether those gains were better than what would have happened anyway. The ‘mixed method’ means the trial has a quantitative part (measuring academic achievement, the ‘what’) and a qualitative part that assesses the context and human questions about implementing the intervention (understanding the real-world process, the ‘how’).
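
To make the logic of that design concrete, here is a minimal, purely illustrative sketch in Python (not part of the original post or trial materials; the group sizes, scores and assumed program effect are invented) of how random assignment and pre/post testing separate a program’s contribution from the growth that would have happened anyway.

import random
import statistics

# Illustrative only: a toy simulation of the randomised controlled trial design
# described in the footnote above. Students are randomly assigned to an
# 'intervention' group or a 'business as usual' control group, tested before
# and after, and the difference in gains between the groups is compared.

random.seed(1)  # fixed seed so the illustration is repeatable


def simulate_trial(n_students=200, natural_gain=5.0, program_effect=1.5, noise=3.0):
    """Return the difference in mean score gains between the two groups.
    All parameters are made-up illustrative values, not real trial data."""
    students = list(range(n_students))
    random.shuffle(students)                        # random assignment
    intervention = set(students[: n_students // 2])

    gains = {"intervention": [], "control": []}
    for s in students:
        pre = random.gauss(50, 10)                  # baseline score at the start of the year
        post = pre + natural_gain + random.gauss(0, noise)  # growth that happens anyway
        if s in intervention:
            post += program_effect                  # extra growth attributed to the program
        group = "intervention" if s in intervention else "control"
        gains[group].append(post - pre)

    return statistics.mean(gains["intervention"]) - statistics.mean(gains["control"])


# Both groups improve over the year; only the difference in their gains is
# credited to the program, which is the comparison the trial is designed to make.
print(f"Estimated additional gain from the program: {simulate_trial():.2f} points")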