Measuring the Health of the Charter Movement Is Important to Get Right

Today the National Alliance for Public Charter Schools released a report examining what outcomes charter schools are achieving in different states and what might explain the variation. The report is the first of its kind and offers a starting point for tackling these important questions, but I can't say I have any real confidence that policymakers should act on the results.

The report, The Health of the Public Charter School Movement: A State-by-State Analysis, ranks 26 states on the health of their charter movements. It uses 11 indicators that measure things like how fast charter schools are growing in each state, what proportion of their students have special needs or other disadvantages, and by how much charter schools are under- or outperforming district schools. The overall rankings from these indicators make some sense, with places like New Orleans and D.C. ranking highest and places like Oregon and Nevada ranking lowest. But there are also interesting outliers (Michigan, for example, ranks third).

The report then goes on to assess how those 26 states rank on the Alliance's model-law ratings (how each state's charter law compares to the Alliance's recommended model law). In effect, the Alliance is testing whether what it recommends in state law predicts the outcomes it cares about. There appears to be some correlation between the law rankings and the outcome rankings, but it's weak. The report presents theories about why that may be the case (laws change over time; laws don't affect quality on their own) and speculates a bit about what else might influence quality (authorizing practices, for example).
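For readers who want to see what "weak correlation between two rankings" means concretely, here is a minimal sketch of one standard way to test it. The state ranks below are invented for illustration, and the report does not say what method, if any, the Alliance used; only the technique (Spearman rank correlation) is the point.

```python
# Hypothetical illustration: do model-law ranks track health ranks?
# The numbers are made up; this is NOT the Alliance's actual data or method.
from scipy.stats import spearmanr

# One entry per state: rank on model-law score, rank on health index (1 = best)
law_ranks    = [1, 2, 3, 4, 5, 6, 7, 8]
health_ranks = [2, 5, 1, 7, 3, 8, 4, 6]

rho, p_value = spearmanr(law_ranks, health_ranks)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# A rho near 0 (or a large p-value) would support the post's point:
# the law rankings only weakly predict the outcome rankings.
```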

I commend the Alliance for taking on this task. As an advocacy organization, it has every reason to learn 1) how states stack up on growth, quality, and equity measures, and 2) whether its recommended legal framework bears any relationship to those outcomes. I've noted the need for this kind of analysis in the past, as has DFER.

But there are many weaknesses in the analysis that need to be addressed before it can tell us something meaningful and credible. I have a lot of questions about the underlying data: whether they were student-level or school-level, and whether charter school populations were compared to statewide averages or to nearby schools, for starters. The report doesn't say. What is clear is that the Alliance sometimes had to rely on outdated and subpar data. The results have to be understood in that context.

The indicators themselves also leave plenty of room for debate (as the report acknowledges). For example, the quality measures come from CREDO at Stanford. These are solid measures of performance relative to district schools, but they aren't measures of absolute quality, and we know that the relative bar is a very low one in far too many districts. As long as we're focused on "relative" performance, we won't really know whether the promise of charter schools is truly being met. On accountability, it's not clear why a "small and consistent" closure rate is better; if past authorizing was sloppy, wouldn't a state need a higher closure rate to clean up its act? And on demographics, it's sort of silly to compare state charter demographics to statewide or even district averages, since we know that the vast majority of charter schools are in urban areas.

Maybe more importantly, it's not at all clear what process (and math) the Alliance used to develop and weight those indicators and then roll them up into a set of rankings; a stylized example of that kind of roll-up follows below. There are established methods for developing and using indicators to assess outcomes, and failing to use them can produce meaningless rankings. Why were crude (1-4) scales and weights used for the indicators? Were the weights and cut scores derived empirically and tested? Were any of the indicators correlated with each other? Did the state law rankings correlate with some indicators and not others?
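To make the concern concrete, here is a minimal sketch of the kind of roll-up being questioned, with invented indicators, weights, and scores. Nothing here reflects the Alliance's actual method; it simply shows why arbitrary weights matter and why correlated indicators get double-counted by a simple weighted sum.

```python
# Stylized composite-index roll-up with INVENTED data.
# Each state gets a crude 1-4 score per indicator; fixed weights turn the
# scores into a single number, and that number drives the ranking.
import numpy as np

indicators = ["growth", "quality", "equity"]   # simplified, hypothetical set
weights    = np.array([1.0, 3.0, 2.0])         # arbitrary and untested, which
                                               # is exactly the problem

# rows = states, cols = indicators; crude 1-4 scores
scores = np.array([
    [4, 3, 2],   # state A
    [2, 4, 4],   # state B
    [3, 2, 1],   # state C
])

composite = scores @ weights            # weighted sum per state
rank_order = np.argsort(-composite)     # best state first
print("composite:", composite, "rank order:", rank_order)

# One diagnostic the post asks about: are the indicators themselves
# correlated? Highly correlated columns are effectively counted twice.
print(np.corrcoef(scores, rowvar=False))
```

Change the weights slightly and the rank order can flip, which is why the report's results are hard to act on without knowing how the weights and cut scores were chosen.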

These may seem like small points, but sometimes a little analysis is a dangerous thing. It can lead to misleading results on which people take real action.

I join the Alliance in believing that these are essential questions to answer. We need to understand why states vary so much in outcomes. We need a rigorous and objective look at state charter outcomes on various measures as well as sophisticated analyses of what factors (such as law, authorizing practices, human capital, etc.) contribute most to these outcomes.

I hope the Alliance keeps refining this analysis, as it says it will. But I also hope it will lend its support to having a variety of researchers examine these questions and work toward more complete answers.
