At the outset, two points so my colleagues don’t tie me to a chair and throw me in a river to see if I float: (A) the National Assessment of Educational Progress (NAEP) is a pretty good test and a valuable tool for analysts, and (B) a lot of states do play games with their state tests to make themselves look better than they actually are. That’s a product of the ongoing debate in our field between the Achievement Realists and the Public Relationists.*
That said, per the WaPo’s Maryland article today and others like it, the NAEP is not the be-all and end-all of tests, no test is, and the fetish about it is getting a little out of hand. Remember, all these standards are human constructions, and the NAEP operates within a framework, too. There is no stone tablet somewhere that says what a student should know or be able to do in, say, 5th grade math. So healthy skepticism is always in order about what states are up to, but at the same time flashing the NAEP as a yellow card every time a state improves its numbers is counterproductive. There are some legitimate reasons (alignment, time lag, students clustered around proficiency benchmarks, etc.) why those gains may not show up on the NAEP in the short term.
*PS: Today’s politics have the Public Relationists in a bit of a bind. Say schools are doing better and the Bush Administration gets some reflected credit; say they’re not and you undermine the whole Public Relationist strategy.
Regarding the Public Relationists… as one myself, I think the solution is to report the facts: “On [test X], our scores changed by [amount], and on [test Y], by [amount], and we think it’s probably because of [reasons].” Let the audience decide for themselves whether that meets their criteria for “better,” and what to attribute the change to.