First off, thanks, Andy, for giving me a toe in the water into the blogosphere. Opinions here are my own and not those of the NYC Department of Education, where I serve as deputy chancellor.
Let’s start with two truths, one rooted in public policy and the other in social science. The first is that the only measure of success school systems should care about is how well students are actually learning. We can debate the most appropriate ways to evaluate student learning, whether graduation rates, test performance, or something else. But let’s hope we are past the point of evaluating success based on “inputs”: how much we care, whether a particular program or approach appears compelling, how many students in a class feels like the appropriate number, how many degrees or certificates our educators possess, and so on.
The second point, the scientific one, is that far and away the single most significant determinant of student outcomes is a given educator’s actual track record in raising achievement. (Because socio-economic factors are themselves a powerful determinant, the primary focus must be on progress relative to peers.) Life would be a good deal easier if we could predict on the front end which teachers (and principals) would be most successful at elevating student learning. A lot of good work is underway in this area. So far, however, the research is clear that factors such as a teacher’s particular pathway to the profession, SAT scores, certification status, longevity, or post-secondary degree pale in predictive significance relative to his or her actual record with real live students.
In combination, these two points explain why, quite appropriately, “value added analysis” has become the holy grail of the accountability systems urban school districts across the country are rushing to build. Opponents express the reasonable concern that isolating “teacher effect” as an independent variable is immensely complex given the many factors that contribute to achievement trends. But isn’t that the point?
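For readers curious what a value-added estimate looks like in principle, here is a deliberately simplified sketch: predict each student’s current score from his or her prior score, then credit each teacher with the average amount his or her students beat (or miss) that prediction. The teacher names and scores below are invented for illustration; this is a toy model, not the NYC system or any district’s actual methodology, which would add many more controls.

```python
# Toy value-added sketch (illustrative only, not any district's real model).
# Step 1: fit a simple least-squares line predicting current score from
# prior score across all students. Step 2: a teacher's "value added" is the
# mean residual (actual minus predicted) among his or her students.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

def value_added(records):
    """records: list of (teacher, prior_score, current_score) tuples.
    Returns {teacher: mean residual}, i.e. average growth beyond prediction."""
    a, b = fit_line([r[1] for r in records], [r[2] for r in records])
    residuals = {}
    for teacher, prior, current in records:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(v) / len(v) for t, v in residuals.items()}

# Hypothetical data: teacher B's students consistently gain more than A's.
records = [
    ("A", 50, 55), ("A", 60, 65), ("A", 70, 75),
    ("B", 50, 62), ("B", 60, 72), ("B", 70, 82),
]
print(value_added(records))  # → {'A': -3.5, 'B': 3.5}
```

Even this toy version shows why the real debate is over model design: which prior measures to condition on, and how to avoid penalizing teachers for factors outside their control.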
The debate is no longer over whether, but how, evidence of student learning will inform the management of a school. In unsophisticated hands, achievement data can be wielded as far too blunt an instrument to meet basic standards of fairness. Wouldn’t opponents of “value added” serve their interests far better if, instead of rejecting the analysis altogether, they put their shoulders into the challenge of designing data systems that take into account the complexity of each individual classroom, and worked with districts to ensure those systems are used responsibly?
–Guestblogger Chris Cerf