I have a lot of respect for Sol Stern but I was disappointed by his essay on the latest round of test scores in New York. I don’t have any stake in the NY results, and as I said the other day, I can’t know for sure whether the gains are real, but neither can Stern. Unfortunately, that doesn’t stop him from throwing stones, or the usual suspects with an axe to grind in New York from echoing him.
Stern’s right that the federal peer review process for state assessment programs doesn’t guard against gaming, and I was surprised to hear the NY education commissioner offer the process up as counterevidence to critics. But that doesn’t prove anything. Absent a real analysis of the test, the methods and decisions used this year in the state to design it, etc…we can’t know. Fortunately, this is a matter of public record, so some enterprising reporter could, possibly with the aid of the Freedom of Information Act where they get resistance, actually take a hard look at this and tell us something definitive. Otherwise, this is unfair to teachers, public officials, and parents in New York, because it’s becoming one of those things that “everyone knows…” even absent any real evidence. One possible hypothesis here is that the schools are doing better, no?
Stern’s also right that National Assessment of Educational Progress scores in NY haven’t reflected the same gains the state test shows. But again, NAEP is a useful barometer, not the only one, and this is an effect you see in a lot of places; it isn’t always because states are trying to game the system. Sometimes different tests measure different things.*
More likely, Eduwonkette has put her finger on a big part of the issue here: pass rates based on proficiency, or getting over the cut score on a test. Using proficiency levels for accountability rather than relative measures has a lot to recommend it from an equity standpoint, but Eduwonkette’s right that people should be careful about what the results do and don’t mean.
And that again allows me to beat my favorite drum on this issue: Transparency. In building all of these assessment systems, officials have to make a variety of large and small decisions about their assessment scheme. These decisions have real effects on what the results look like, and the process is not as airtight as people sometimes assume or are led to assume. What we really need here is more transparency about the process so we don’t get what’s happening in New York right now, which boils down to insinuation and circumstantial accusations getting tossed around. The federal government shouldn’t get into micromanaging this process in the states, but it could do more to establish very clear parameters for meaningful (meaning non-technical) public disclosure so that everyone knows what they are and are not getting with these systems each year.
*Or, more accurately, say anything! What’s stunning here is the extent to which many people will argue both sides of this depending on what case they want to make at a particular time. For instance, I’ve heard the same people defend National Board Certification by pointing to larger achievement gains on state tests than on national tests, saying it shows that National Board Certified teachers are more effective in teaching the state content and that’s good etc…etc…etc…and then turn around in a different setting and argue that state gains are just a sham because the gains on state tests aren’t reflected on national tests…