
In the age of evidence-based decision-making, where can education decision makers turn for evidence?

Authors: Fiona Hollands and Venita Holmes


This post was originally published by the Evidence & Policy blog on March 9, 2022.

This article was originally published by the Evidence & Policy blog, and we are grateful to its editorial board for kindly allowing us to re-issue it here.

Original article URL: https://bit.ly/3wco8WL




In 2017–18, a large school district in the U.S. was threatened by the state education agency with the closure of 23 struggling elementary schools unless it could improve students’ performance on state-mandated assessments. The district’s Office of Elementary Curriculum and Development immediately tried to determine which reading resources (reading programmes, assessments, online tools, book collections, and professional development supports) were available at each school and to assess their effectiveness at improving student reading proficiency. To help with this evaluation task, our research-practice team explored various options for quickly providing suitable evidence on the effectiveness of each of the 23 reading resources used at one or more of these schools. We expected to find reasonable consistency across multiple sources of information that we could use to help guide the district’s actions. The results were not quite as expected.



The district needed to act swiftly to avert school closures, so it was not feasible to plan and execute rigorous research studies to evaluate the reading resources. Moreover, each school used between 8 and 18 reading resources, making it hard to isolate the contribution of any one resource. A lack of documentation regarding which students engaged with which resources also precluded the use of value-added models, which would have allowed us to investigate the relative contribution of each resource to student reading performance. With the clock running, we pursued three other strategies to collect evidence on the effectiveness of these resources in a timely manner: searching for existing studies in research repositories, collecting judgements from external reading experts, and eliciting practitioner judgements. We expected to obtain consistent assessments of effectiveness within and across the three sources, which could point the district’s schools towards the best reading resources.
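To make the value-added idea concrete, here is a minimal sketch of the kind of model the missing engagement data would have supported. This is not an analysis from the study; the table, column names, and values below are invented for illustration, and the real modelling choices would differ.

```python
# Hypothetical sketch of a value-added model, assuming per-student engagement
# data (which the district lacked): prior and current reading scores plus
# indicators for which resources each student actually used.
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data only: all columns and values are invented.
df = pd.DataFrame({
    "post_score":  [410, 455, 390, 470, 430, 445],
    "prior_score": [400, 440, 395, 450, 420, 435],
    "resource_a":  [1, 1, 0, 1, 0, 1],  # 1 = student used reading resource A
    "resource_b":  [0, 1, 1, 0, 1, 1],  # 1 = student used reading resource B
})

# Regress current performance on prior performance plus resource indicators;
# each resource coefficient estimates its marginal contribution ("value added")
# after conditioning on where the student started.
model = smf.ols("post_score ~ prior_score + resource_a + resource_b", data=df).fit()
print(model.params)
```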



Of the 23 reading resources used at one or more of the 23 schools, we were able to find evidence on effectiveness in three reputable repositories of research evidence for only five. For the other 18 resources, either no studies met expected standards of rigour or no studies were available at all. Our strategy of collecting expert judgements was more fruitful. Agreement among the experts regarding various aspects of the instructional programmes they rated was mostly fair to good. However, they did not agree well when it came to expected effectiveness for the district’s particular population of students. Consistency between practitioner and expert assessments of the resources was also surprisingly low. In general, practitioners were more optimistic than the experts about the effectiveness of the reading resources.
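As a rough illustration of how rater agreement in the “fair to good” range is typically quantified, here is a minimal sketch using Cohen’s kappa. The ratings below are invented, and the study’s actual data and choice of agreement statistic may differ.

```python
# Hypothetical sketch: quantifying agreement between two raters with
# Cohen's kappa, a chance-corrected agreement statistic.
from sklearn.metrics import cohen_kappa_score

# Two experts rating six resources on a 1-3 effectiveness scale (invented).
expert_1 = [3, 2, 3, 1, 2, 2]
expert_2 = [3, 2, 2, 1, 3, 2]

kappa = cohen_kappa_score(expert_1, expert_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values of roughly 0.40-0.75 are often read as fair to good
```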



In sum, the takeaways from the three sources of evidence for how to guide the struggling schools were not clear-cut. But when considered in combination with practical and contextual factors, they provided a number of useful and actionable insights. For example, district decision makers could identify reading resources that were consistently rated effective across two or three of the sources. Where practically feasible, the district could provide immediate implementation support to help schools serve more students with these effective resources. Resources that received consistently low ratings could be targeted for improved implementation or replacement. And those that were rated inconsistently could be investigated further to determine the reasons for uneven performance.



Our study demonstrates that a substantial disconnect remains between the existing evidence base in education and the needs of education decision makers. To improve the availability of evidence that can inform action in schools and districts, we suggest that funders and education agencies survey schools to identify widely implemented programmes and practices. Subsequently, they should commission studies of the most commonly used programmes and practices in a variety of locales and with different student populations. Effective ones can be targeted for scaling up to serve more students. For those that are found to be ineffective, modifications could be devised and tested. We also recommend that schools and districts collect better data on which students engage in which programmes and practices to allow for the timely production of internal evidence on their effectiveness.

 

Fiona Hollands is a Senior Researcher in the Department of Education Policy and Social Analysis at Teachers College, Columbia University.

Venita Holmes is a Manager in the Department of Research & Accountability at the Houston Independent School District.

 

You can read the original research in Evidence & Policy:

Hollands, F.M., Pan, Y., Kieffer, M.J., Holmes, V.R., Wang, Y., Escueta, M., Head, L. and Muroga, A. (2021) Comparing evidence on the effectiveness of reading resources from expert ratings, practitioner judgements, and research repositories. Evidence & Policy, DOI: 10.1332/174426421X16366418828079. [OPEN ACCESS]

 

Image credit: Wavebreakmedia

 



 
