The “Noble Lie” of Evidence-Based Turnaround Strategies
The Every Student Succeeds Act abandons the prescriptive approach to school improvement embraced by both the No Child Left Behind Act and the Obama Administration’s flexibility waivers. Instead, states are empowered to craft their own “evidence-based” strategies to turn around struggling schools and districts.
But what does an evidence-based turnaround strategy look like? CRPE recently synthesized the evaluation evidence on state-initiated turnarounds, like school and district takeovers, and found that while states have been effective at spurring improvement in some cases, the results of these interventions are inconsistent even when states use similar approaches.
The challenge of scaling effective turnaround strategies is aptly captured by what Richard Elmore called the “noble lie” of policy analysis. Elmore argued that policymakers are unable to directly influence program implementation, and that their efforts to do so by prescribing permissible actions are often ineffective, because educators and administrators possess significant discretion and are constrained by countervailing organizational and political realities.
Researchers have long known that effective programs hinge on implementation and that variability in local contexts can make “what works” difficult to define with any degree of reliability. As Jerome D’Agostino, a professor at Ohio State University who is leading a meta-analysis of the What Works Clearinghouse, told Education Week, “If you look from 10,000 feet at education interventions, you can almost count on your hand the number of interventions that have truly scaled and established.” But the belief that evidence can be used definitively to design effective programs at scale remains deeply ingrained in both policy and research circles.
An evidence-based approach to turnaround isn’t as simple as picking the “right” policy or program; it requires understanding whether initiatives have a chance of success in a particular context, given existing leadership, capacity, and political dynamics, and adapting the approach to deal with on-the-ground realities.
Many states rush to adopt new intervention strategies without considering whether they can replicate the conditions that made the initiative work in the first place or overcome the political controversy that comes with the difficult work of school reform. School Improvement Grants (SIGs), for example, drew upon the large body of research suggesting that school leadership and educator talent are critical to improving schools. In this spirit, the program required participating schools to replace principals as a condition of funding. But many rural schools, already struggling with shortages of talent, were unable to find effective replacements. Likewise, Tennessee’s Achievement School District took inspiration from the positive results coming out of Louisiana’s Recovery School District; but the ASD struggled to replicate those results because of challenges gaining buy-in from local communities and constraints on operator capacity.
Some states are putting implementation at the center of their work to improve local schools. Massachusetts, for example, uses a full complement of turnaround strategies that enables the state to better tailor the types of oversight and support it provides struggling schools and districts. And the state’s School Redesign Grants (a version of the SIG program) offered funding only after vetting districts’ turnaround plans, including an evaluation of leaders’ “capacity and commitment” to implement effective turnaround strategies. (It’s worth noting that evaluation evidence from these programs is strongly positive.)
To be sure, none of this is to say that research isn’t important or that policymakers shouldn’t use evidence to inform their work. Evidence is critical to enlightening the often polarizing debates around issues like charter schools, most recently illustrated by Question 2 in Massachusetts, as well as thornier issues like school and district turnaround.
But science is a cumulative exercise; a single study of a single program in a single school or district hardly “proves” that something will be effective everywhere or that taking programs to scale won’t result in unintended consequences. Rather than concentrate on emulating the latest “evidence-based” interventions, states should focus on integrating evidence generation and use into their day-to-day work. As Richard Murnane and Richard Nelson argued, low-performing schools don’t become high-performing schools by implementing “proven” interventions; they become great by becoming learning organizations. Fostering closer relationships between evidence building and use, as well as between researchers and practitioners, was central to John Easton’s work at the Institute of Education Sciences.
States tasked with developing new strategies to address the challenges of struggling schools and districts could likewise benefit from a learning organization approach. Last year, CRPE dedicated an entire volume to how states can build their capacity to use evidence.
As we found in our study of state-initiated turnarounds, states are more likely to find success in this work when they mitigate common implementation gaps and tailor their approach to the assets and liabilities inherent to different strategies and the unique conditions in localities. This requires a problem-solving approach that leverages evidence but isn’t blind to its limitations.