A few people have wondered why WaPo’s Jay Mathews wrote about our Hangover paper on teacher evaluation in early 2015 given that we published it in 2012. Jay’s not behind the times. Rather, a version of it appears in a new Harvard Education Press book on the teacher quality debate (Hess-McShane) that came out this fall. In addition, in this season of predictions, much of what we anticipated and discussed in the 2012 paper is still playing out in 2015, and the entire debate remains a live one, so the book is worth checking out.
More generally, like so much of education policy, The Hangover has become a Rorschach test for a teacher evaluation debate that is mostly impressionistic. We’re not against value-added, but the paper is about its limits as a tool and the broader set of evaluation challenges facing the sector. For my part, what matters most is a culture of performance – and evaluation is a key part of that – and tools are not a substitute for such a culture. That culture does not exist now at any scale. Besides, although you wouldn’t know it from the rhetoric, value-added data is available for less than a third of teachers. So different approaches and methods – which are more common in most lines of professional work – are needed.
So, the short version is that value-added is more robust than you’ve probably heard but also less useful as a long-term solution in a field like education. And as we note in the paper, the evolution the field is going through now is probably unavoidable, but more innovation with genuinely professional approaches to evaluation (and ones that don’t conflict with emerging innovations in K-12 education) is sorely needed. Anyone who tells you they have evaluation figured out isn’t being straight with you. There is a lot to learn, but this is a challenge the sector needs to get right to really improve, and innovation is the only path forward to learning more.
Very, very well said!
Hmm.
1. In the NBA, there are X roster spots. It’s fixed.
Old way to evaluate prospects: scouts, mostly meaningless data on body type (like wingspan), mostly misinterpreted data (like over-weighting total points instead of efficiency).
New way: the old way….plus better tools. Still decidedly imperfect, but better.
The difference with teaching, as you say, is the culture is “Typically everyone is ruled good once hired.”
The value-added proponents PARTIALLY embrace it b/c it’s a FORCED distribution, separate from any value it might have as a tool… inherently it’s not possible that 99% of teachers are “above average” in VAM.
I suspect Arne Duncan, if you told him that some meaningful number of teachers would be rated as ineffective without VAM and pushed out, might pull back on VAM.
But I don’t know. Someone should ask him in 2016.
You nailed it. (even though unions play only a small role in the complex problem of failing to remove bad teachers)
The problem is that value-added evals are such a profoundly bad idea that we will have to fight him with all we’ve got until 2016.
What a waste