A new CALDER paper by Rouse, Hannaway, Goldhaber and Figlio (pdf) looks at the Florida accountability system to see what effect the school grading system and its ensuing consequences (choice) had on low-performing schools. The researchers used both administrative data from Florida and a survey of principals there. Findings aside, the paper shows the power that good state data systems can have for research. As to the findings, though a lot more work remains to be done, along the dimensions they examined they found positive effects from the accountability system that were not merely transitory and are worth considering (and those effects would seem not exclusive to vouchers, but that's a policy question). The voucher component of the accountability system was declared unconstitutional last year under Florida's constitution, but this work looks at the window preceding that ruling.
This report could also be important because, unfortunately, personalities and politics matter a lot in how this sort of research is consumed, often a lot more than the actual evidence. Earlier this year, for another project, I took a look at the production of school choice research. Since 1990, about two-thirds of the studies on vouchers that were quasi-experimental and used student achievement as the dependent variable were produced by scholars with close ties to Harvard professor Paul Peterson or by Peterson himself (and some of the rest were reanalyses of work Peterson and others had done). Similar research on charter schools, by contrast, came from a much wider array of scholars.
This concentration of work is hardly surprising. More than anyone else, Peterson is probably responsible for forcing the improvements in the methods around school choice research over the past two decades. And his PEPG center at Harvard has undertaken a variety of work on the issue. Peterson and some of his protégés were in the lead in exploiting scarcity around school choice programs to create research designs to evaluate the effects of voucher programs on students. In other words, he and his students were first out of the box to see the natural experiment that waiting lists for choice programs created. At times that research blurred the lines with advocacy, but Peterson does deserve a lot of credit for moving the field on methods. University of Wisconsin professor John Witte, Peterson's antagonist on the Milwaukee voucher program, told me that, "I'm a much better statistician than I used to be" because of the back and forth about methods he had with Peterson.
But a lot of people dismissed all this research out of hand based on personalities and politics rather than sorting through it. In fact, when Witte's research began to show some effects from vouchers, his allies in the debate with Peterson largely abandoned him. Now, with more scholars stepping in who can hardly be considered partisans in this debate (in this case Hannaway, Goldhaber, etc.; Rouse has been doing work on this issue for more than a decade), I'm hopeful it will force a more serious conversation about the good and bad of school choice and what we do and don't know from the evidence.
Overall, the effects from vouchers still seem pretty modest and there are other reasons to be cautious — and the debates are far out of proportion to the effects — but it’s worth learning about because more choice is coming to education in any event and researchers should endeavor to learn as much as possible about its various effects. Put another way, one can be skeptical of vouchers as a large-scale policy remedy but interested in what we can learn from these various initiatives around the country. Despite a lot of research, that’s a conversation that’s been very hard to have for a long time.
2 Replies to “Florida, And Choices In Choice Research?”
From pages 21 and 22:
We find that the estimated effect of the “F” for repeat “F” schools is unambiguously larger, and statistically distinct from the first time “F” effect in the two mathematics specifications (though not in the reading specifications).
That said, the estimated effects of “F” receipt for the first-time “F” schools remain large and are statistically significant at conventional levels. These results provide suggestive evidence that schools facing still greater competitive and/or accountability pressure raise student test scores by more than those facing less pressure. That said, these distinctions are based on a very small number of schools, so for the remainder of the paper we do not distinguish between first-time “F” schools and repeat “F” schools in our analysis.
Don’t forget that Florida combined THREAT with SUPPORT for those F schools. The equation there is choice + help = better performance.