Speaking of AERA, the new study on Teach For America teachers released yesterday is a pretty big deal. Basically, there is not a lot of disagreement about the secondary effects of TFA, namely that it brings a lot of really talented people into the education field, that a little less than half stay in teaching after their two-year commitment, and that many others start or join all manner of successful educational efforts and increasingly serve in appointed and elected positions in the public sector.
But it’s a legitimate question whether TFA teachers are good for students while they are teaching. That’s been a very contentious debate. This study, along with the Mathematica study (pdf), seems to indicate that TFA certainly clears a “first do no harm” threshold in that regard. And, like the Mathematica study and unlike several other TFA evaluations, this one has very solid methods behind it. It is also significant because it looks at secondary schools.
Can’t help but note that many of TFA’s most vocal critics have touted human capital reforms with far less high-quality research behind them.
That’s really good to know. Which I think is the nicest thing one can say about a study.
It’s a bit saddening, too, though; what does it say about the existing teachers in those schools that inexperienced twenty-two-year-olds who frequently have no management skills whatsoever — in other words, who probably aren’t doing all that good a job, on average — still outperform them?
Actually, the paper falls well short of telling us whether TFA teachers are better than others, b/c the comparison groups are not equivalent.
Of course, it’s just as possible that they’re even better than the estimates suggest as that they’re worse.
CBJ, could you explain what you mean by equivalence, and why it matters?
The study compared TFA teachers in NC to non-TFA teachers in the same schools. If the TFAers weren’t there, the kids would have had the other teachers, and learned less.
Big Kippster,
“Which I think is the nicest thing one can say about a study.”
Among economists the nicest thing you can say is that it’s worth attacking. That’s how they roll.
Corey, what are they teaching you at Vanderbilt?
Derek Neal made a compelling argument a few weeks ago that you cannot compare the performance of entirely dissimilar groups. For example, which school did a better job: the inner-city school full of poor minority kids that gained x points on the test, or the wealthy suburban school full of rich white kids that gained y points? In this paper, the TFA teachers are teaching kids who are very different from the kids the non-TFA teachers are teaching. Also, the sample was limited only to districts in which TFA teachers teach, not schools (that’s important). I know they ran very sophisticated analyses with a million different controls, but until they compare apples to apples instead of apples to oranges, I think it’s only fair to remain somewhat skeptical.
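To put that equivalence worry in rough terms (my own notation, not the paper’s actual model): when you estimate something like

test score = b × (has a TFA teacher) + controls + error,

the estimated b only recovers the true TFA effect if, after the controls, the students taught by TFA and non-TFA teachers are comparable on everything else that matters. Roughly,

estimated b ≈ true TFA effect + (average unobserved student/classroom factors for TFA classes − the same average for comparison classes),

and when the groups aren’t equivalent, that second term doesn’t go away no matter how many controls you add. It can push the estimate up or down, which is why this cuts in both directions.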
I’m not going to argue that that specification overestimated the effects of TFA teachers, b/c I have no idea what the outcome would be if they used equivalent comparison groups.
Lastly, in some ways the study probably underestimates the effects of TFA teachers by assuming that the other teachers in the state would be the ones teaching the kids were TFA not in place. It’s more likely that the spots filled by TFA teachers would instead be filled by teachers significantly less qualified and experienced than the average NC teacher.
Corey is confused. Please read the paper, pages 12-16 especially.
Of course I’m confused . . . they went through a remarkable amount of intricate statistical analysis, and they don’t have space to explain it all in depth.
That said, I did read the entire paper before posting the first time — including pages 12-16. Please point out anything that I’ve said that’s factually incorrect. If I made a mistake, I’ll be happy to admit it and correct it. Otherwise, please refrain from belittling me or my intelligence.
The study is rigorous, but it is not the final word on the topic. Besides the issues with the sample that I already discussed, you can also throw in the facts that:
1.) It was only in North Carolina
2.) There was no foolproof way to decide which students had which teachers. Ultimately they decided that they were pretty sure about which teachers 84% of the kids had, and eliminated the rest.
3.) There was no longitudinal data for the tests. Students didn’t take tests in the same subjects more than once, so the authors had to compare each student’s performance on a given test to the same student’s performance on tests in other subjects that year, rather than to tests in the same subject the previous year (i.e., gain scores). A rough sketch of what that comparison amounts to is at the end of this comment.
The authors went pretty far out of their way to address all of these issues, so it’s certainly possible that the results would hold up if perfect information were available. But the fact is that perfect information is not available, and I have a problem with anybody who argues that this study definitively proves anything. It provides compelling evidence in one direction but, like any study, it has flaws, and I don’t accept something as the truth just b/c the authors conducted sophisticated quantitative analyses.
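For what it’s worth, here is a rough sketch of what that third point amounts to, in my own simplified notation rather than the paper’s: each student takes end-of-course tests in several subjects in the same year, so instead of a gain score the comparison looks something like

score of student i in subject s = effect of having a TFA teacher in subject s + student i’s overall level across all their subjects that year + subject/classroom controls + error.

The “student’s overall level” term is what stands in for the missing prior-year test: a TFA teacher looks effective if her students do better in her subject than those same students do in their other subjects that year. It’s a clever workaround, but it leans on the assumption that a student’s other-subject scores are a fair proxy for where that student would have started in hers.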