In late May, The Fordham Institute announced a “Charter School Policy Wonk-a-Thon” to help answer the question: “Why do charters in some cities and states perform much better than their traditional district counterparts, while others perform worse?” Several thought leaders provided insightful responses.
This got us thinking about the relationship between public charter rating systems and student performance. So, we decided to dig deeper by comparing state scores on two rating systems (one by the National Alliance for Public Charter Schools (NAPCS) and one by the Center for Education Reform) against student achievement. For the latter, we used CREDO’s National Charter School Study comparing learning gains made by public charter school students to gains made by similar students in traditional district schools.
A few caveats: these analyses are intended for discussion purposes only, and are not meant to imply cause and effect. Nonetheless, we think examining these relationships through scatter plots and Pearson r correlation coefficients (which measure the strength and direction of the linear relationship between the NAPCS and CER rankings and student outcomes; the square of r gives the percentage of variance explained) is useful in furthering the discussion about the factors that affect public charter school performance.
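For readers who want to see the mechanics, the sketch below computes a Pearson r and its t statistic in plain Python. The numbers are made up purely for illustration; the actual analysis used the published NAPCS/CER state ratings and CREDO days-of-learning estimates, which are not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for testing r against zero; compare it to a t
    distribution with n - 2 degrees of freedom to obtain a p-value."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical example: state policy ratings vs. days-of-learning gains.
ratings = [1, 2, 3, 4, 5]
gains = [2, 1, 4, 3, 5]
r = pearson_r(ratings, gains)
print(round(r, 3), round(t_statistic(r, len(ratings)), 3))  # prints 0.8 2.309
```

Squaring r (here, 0.8² = 0.64) gives the share of variance in one variable that the other accounts for, which is the "percentage of variance" figure referenced above.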
Results of Analyses
Neither CER’s nor NAPCS’ rankings have a statistically significant relationship (p ≤ .05, meaning the likelihood that the results are due to chance is less than 5%) with the average days of learning for public charter students compared to students in traditional public schools (see Figure 1).
Four of the states with the highest-performing public charter students had high NAPCS state law ratings: Indiana, Massachusetts, New York and Michigan. Three states with the highest-performing public charter students were ranked by NAPCS in the bottom five of all states: New Jersey, Rhode Island and Tennessee. Conversely, some states with low student learning gains had policies that were rated highly by CER and/or NAPCS: Arizona and Utah (both models), Ohio (CER model) and New Mexico (NAPCS model).
We then tested the relationship between rankings from individual components of the two models with CREDO days of learning results.
We reiterate that observations of relationships do not imply causation. However, some findings are worth noting. Neither the overall NAPCS nor the overall CER ranking of state policy had a statistically significant relationship with CREDO student learning outcomes. None of the individual components of CER’s rating system were related to student outcomes. Only two of the NAPCS components were statistically significant, and both had inverse relationships with student learning outcomes and fairly high R-squared values.
Extracurricular and Interscholastic Activities Eligibility and Access (NAPCS Component 16)
This component, measuring the degree to which state law explicitly allows public charter school students to participate in extracurricular activities and interscholastic leagues, had the strongest relationship to student outcomes. States that scored low on the extracurricular component tended to have more student learning. The policy implications are unclear and suggest a need for further inquiry: does involvement in both district extracurriculars and charter school academics create stress for students and take time away from their studies?
A Variety of Public Charter Schools Allowed (NAPCS Component 2)
This component, described as states allowing new start-ups, public school conversions and virtual schools, had the second-strongest relationship of all variables. The higher a state scored on the component, the lower it scored on overall student learning.
One hypothesis is that in states with high scores on variety, virtual charters are bringing down student performance levels. All states rated 3 or 4 on this component had laws allowing virtual schools in 2011. New Jersey, which scored high on public charter student performance, scored a 4 on the variety component in 2011 but was later downgraded to a score of 2 because applications for new virtual schools were rejected. States that scored low on variety, but high on student learning—Rhode Island, Tennessee, New York, and Massachusetts—did not allow virtual schools at the time of the evaluation.
School choice alone does not guarantee success, but can create the potential for success. Policy and laws do matter, but as they are rated now, they do not clearly explain why some states are having more success with public charter schools than others.
We expected to find a positive correlation between strong laws and strong academic outcomes, but the only statistically significant results were an inverse relationship between two NAPCS rating components and student outcomes. This does not mean that the NAPCS and CER ratings are unimportant; some, like equitable funding and facilities, are a matter of fairness. But important questions remain.
How can we better identify the necessary and sufficient conditions for public charter school success? Some states, such as Rhode Island and Tennessee, have strong student learning results even though their policies are not aligned with CER and NAPCS’ best practices frameworks. Other states, with varying levels of policy ratings, are clearly not managing their public charter sectors well.
The thought leaders responding to Fordham’s “Wonk-A-Thon” challenge made common observations around two themes that may help explain the results here and suggest areas for future research.
- The “No Excuses” talent sandbox: Match Education’s Michael Goldstein argues that the transfer of knowledge between Boston’s interconnected teaching and leadership talent pipelines results in authentic execution of high-performing “No Excuses” schools.
- Andy Smarick noted the presence of Building Excellent Schools, Teach for America and TNTP in high-performing Boston as a possible explanation for the city’s academic success.
- Jed Wallace and Elizabeth Robitaille observe that California’s high-performing public charters tend to be “nonprofit, mission-driven charter organizations serving historically disadvantaged students.”
- Perhaps the best explanation was Robin Lake’s: “You can get a lot done with suboptimal policy if you’ve got great people, and all the policy in the world won’t save you if you don’t.”
The following are considerations for further research:
1. This inquiry should be repeated using more recent policy evaluations and student performance data to see if similar overall and component results are attained.
2. Further student learning research might consider disaggregating results by school type, including virtual schools, and by city. Aggregating results statewide may mask important differences.
3. Consider refining policy evaluation metrics to better capture what’s observed in cities and states where students are showing great learning gains—talent development, a supportive culture, and investment.
Marianne Lombardo is a policy analyst at Democrats for Education Reform (DFER). Before joining DFER, Marianne was the Vice President for Research & Evaluation at the Ohio Alliance for Public Charter Schools, an Education Administrator for Ohio’s statewide Juvenile Correctional System, a Program Evaluator for a welfare-to-work project, and an Adjunct Instructor of Sociology. Read more about Marianne here.