New IES analysis out this morning, conducted by Mathematica, on the Teach For America and Teaching Fellows programs (pdf). It looked at secondary (middle and high school) math teachers. Here's the punchline:
TFA teachers were more effective than the teachers with whom they were compared. On average, students assigned to TFA teachers scored 0.07 standard deviations higher on end-of-year math assessments than students assigned to comparison teachers, a statistically significant difference. This impact is equivalent to an additional 2.6 months of school for the average student nationwide.
Teaching Fellows were neither more nor less effective than the teachers with whom they were compared. On average, students of Teaching Fellows and students of comparison teachers had similar scores on end-of-year math assessments.
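The "months of school" translation is just a back-of-envelope conversion of the effect size against typical annual growth. A minimal sketch, assuming (hypothetically, since the report's exact benchmark isn't quoted here) that secondary students gain roughly 0.24 standard deviations in math over a 9-month school year:

```python
# Back-of-envelope conversion of an effect size into "months of school."
# Assumption (not from the quoted study text): secondary students gain
# about 0.24 standard deviations in math over a 9-month school year.
effect_size_sd = 0.07       # TFA impact reported in the study
annual_gain_sd = 0.24       # assumed typical annual gain (hypothetical benchmark)
school_year_months = 9

months_equivalent = effect_size_sd / annual_gain_sd * school_year_months
print(round(months_equivalent, 1))  # roughly 2.6 under this assumption
```

Different growth benchmarks would yield different month equivalents, which is why effect sizes in standard deviations are the more portable number.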
There is much more in the analysis: TFA teachers outperformed comparison groups of teachers, including traditionally credentialed and veteran teachers. A couple of things to keep in mind:
First, this isn’t new evidence. This study confirms what multiple previous analyses (from Mathematica, the Urban Institute, states like TN, LA, and NC, etc.) have shown. Rigorous studies consistently show modest to significant positive effects; and perhaps more importantly, given the context of the advocacy debate, they don’t show harm. It’s only among the advocates whose job it is to go after Teach For America, and in the education media (with its lazy approach to research: methods don’t matter, contrast a good study with a flimsy one for “balance,” say “research is mixed” when you can’t figure it out), that the effectiveness issue is even considered a live one.
Second, it’s reasonable to accept these findings and still have concerns about Teach For America as an educational strategy. One would hope, however, that we could recognize those concerns as normative and discuss them accordingly, rather than rehashing the tiresome debate about whether Teach For America teachers are systematically doing harm or are, on average, less effective.
Third, beware sweeping generalizations and ecological fallacies. Today’s study reaffirms previous findings about Teach For America overall (in this case, math teachers), but it doesn’t mean every Teach For America teacher is spectacular. There is variance within the pool. Likewise, anecdotes about teachers on the low end of that variance should be considered in the context of this overall evidence base. The plural of anecdote is not data.
Finally, I think the juxtaposition of the two primary results of this study is the most interesting thing here: Teaching Fellows are on par with other teachers (which is not a bad result, considering the recruiting issues), but Teach For America outperforms. From my own analysis of Teach For America, I think it’s the screening and selection process – and in particular the screens for non-reportable traits such as tenacity, sense of efficacy, and beliefs about children’s potential – that makes the difference. Today, with 5,000+ people entering the Teach For America corps each year, TFA does not disproportionately pull its teachers from the Ivies (only two Ivies were in the top 10 feeder schools last time I looked), is diverse relative to most programs, and doesn’t fit any of the other pervasive advocacy-driven misconceptions about it. Instead, they’ve figured out how to identify good candidates from a wide range of schools at some scale.
Regardless of what you think of Teach For America, that’s an innovation, and there is some learning there that could benefit hiring in the field overall. It’s quite different from how labor market issues are approached in K-12 education and how most hiring occurs. In fact, it shows how silly and unhinged much of our education debate is that many of the same people attacking TFA simultaneously argue for a greater focus on selection, like [insert country du jour here].