I mentioned some sloppiness around the recent report about Teach For America (TFA) that Michael Winerip featured in his column a few weeks ago in an effort to make the point that the research on TFA is mixed. Since we seem to be repeating history, now seems a good time to revisit that report and the larger issues it raises. It conveniently highlights two problems: our field’s pathetic and weaponized approach to research, and the problem of “study laundering.”
Pile ‘em up: The two big takeaways of this report from the Great Lakes Center are that retention of TFA teachers is bad and that the program’s results are, at best, mixed. There are substantial problems with both findings.
On the retention issue, the researchers seem to be focusing on whether Teach For America teachers leave their schools after two years, not whether they leave teaching. Unfortunately, this is a common mistake in research on teacher attrition, especially when the goal is to illustrate bigger numbers (for instance, all the research about how the attrition of new teachers is so far out of line with other fields).
The Great Lakes Center report states that “(M)ore than 50 percent of TFA teachers leave after two years, and more than 80 percent leave after three years…” The report reaches these figures by consolidating findings from previous studies that in one way or another conflated leaving a school with leaving teaching.
In fact, a 2008 study by Harvard’s Project on the Next Generation of Teachers, which delineated the leaving issue more carefully, found that 61 percent of Teach For America corps members stay in teaching beyond the two-year commitment. Teach For America surveys its alumni regularly, and the most recent survey found that 65 percent of Teach For America’s 20,000 alumni remain in education, with 32 percent continuing as teachers. And remember, that’s a survey of alums going back almost two decades now, so that one-in-three figure should be viewed in that context as well as the larger context of TFA’s mission.
On the question of aggregate TFA performance, the report also falls short. Research methods aren’t all equal in terms of analytic leverage. All the commentary attempting to present the case of mixed effects for Teach For America teachers succeeds only by piling up all the studies and then saying, huh, two big piles, so the studies are mixed. In fact, if you look at the studies that employ the most rigorous methodology (in other words, apples to apples, enough apples to make a reliable estimate, etc.), it’s pretty unambiguous that, as a group, Teach For America teachers perform as well as or better than other teachers: not only emergency-certified teachers but traditionally trained ones and veterans. Considering that on an annual basis Teach For America is now the largest teacher prep program in the country (excluding multi-campus ventures such as the UC system), that overall level of quality is a big deal.
A 2004 study from Mathematica Policy Research found Teach For America corps members were as good as or better than other teachers, including veteran teachers. It was the only study to earn an A for its methodology in a 2008 Ed Next analysis of research into Teach For America.
A 2009 Urban Institute study found that the impact of having a Teach For America teacher was at least twice that of having a teacher with three or more years of experience.
A 2010 study from the University of North Carolina concluded that students taught by corps members outperformed their peers in high school science, math, and English. At every grade level and in every subject studied, Teach For America corps members’ students performed as well as or better than the students of traditionally prepared UNC graduates. This was a state study intended to help inform policymaking there.
This doesn’t mean that TFA teachers are all outstanding. There is high variance among them, just as there is with other routes into teaching, and TFA teachers struggle in their first year, just as most teachers do. But these results do mean that, in the aggregate, hiring a Teach For America teacher is a pretty safe bet relative to all the other options on the table. This is part of a larger body of research on teacher effectiveness showing that – outside of emergency credentials with no training at all – routes into teaching matter less than the candidates themselves.
TFA critics continue to cite the 2002 David Berliner study on TFA as evidence of TFA’s “mixed results.” Sorry. Here’s a review of that study by Kosuke Imai (pdf), and here’s a more accessible review by UVA’s Paul Freedman (pdf). As both make clear, the Berliner study wants for rigorous methods: before you even get to the statistical sleight of hand, which isn’t that complicated to ferret out, the selection problems undermine its validity. That’s why the 2008 report card gave it a ‘D.’ Punchline: not all research is created equal.
Again, to date no study with what would be considered rigorous methods (meaning adequate controls) has shown that Teach For America teachers depress student achievement. That’s noteworthy but lost in the noise. On some issues (e.g., charter schools) the research is mixed. That’s really not the case with Teach For America right now.
Study laundering: I know it’s impolitic to point this out so forthrightly, but here’s the deal: in terms of mainstream media, only Winerip and, of course, Mikey bit on this study, even though it had been shopped around for some time.
That’s in part because the center’s board is made up of people with a track record of trashing Teach For America and of NEA affiliates fighting to keep TFA out of various states. That’s all fine; I’m a big fan of the five freedoms. But most reporters would (and did) then cast a critical eye on the findings. Perhaps ask some disinterested researchers to have a quick look at the studies being aggregated? Not here. Rather: hook, line, sinker. If Winerip and Mikey covered tobacco research, we’d all still be taking cigarette breaks during the workday.
So what happens is that the Great Lakes Center puts out the study and no one serious bites. But then it ultimately does get picked up, for whatever reason, and – voila! – it’s clean money. In other words, it suddenly seems more legit because it earns the moniker ‘as reported in the Washington Post’ or ‘this work was featured in the New York Times.’ This happens with all kinds of studies, pro- and anti-reform, by the way, and it’s a big problem that confuses rather than clarifies things for the casual observer or the policymaker trying to make heads or tails of an issue. The problem of the easily fooled or the agenda-driven becomes everyone’s problem because it further clouds already complicated issues.
Update: Professor Berliner responds below. He throws up some misdirection (it was peer reviewed!), attacks the reviews, and admits the criticisms have merit, but then unfortunately fails to cite any specifics or say which ones. That’s a problem because the criticisms undermine the premise of the study. To quote from Paul Freedman’s* analysis (pdf), again the more accessible of the two reviews (only a few pages and worth reading), the three issues are:
· problems of selection and inadequate matching fundamentally undermine the validity of the study;
· the authors overstate the substantive importance of their estimates;
· the statistical approach employed is not well suited to the research question.
Rather, Professor Berliner argues, “But the study we did with very careful matching procedures met some of the standards of quality that the profession had for conducting non-causal designs.” Given the growing body of research about Teach For America that meets more than “some” of the standards, and in fact allows for causal inferences, that statement is an excellent summation of the problem here.
By the way, here’s a bit on Freedman, who doesn’t even have a dog in this fight.