Over at The Washington Post, Mikey is at it again re this new KIPP study. She’ll post anything! It’s unclear whether any of the feedback she uncritically relays is based on an actual reading of the study itself (pdf), where the issues raised are either addressed or clearly not germane to the findings, or whether it’s just misunderstandings. Still…
The other issue you’re hearing raised (see also the comments to this post below) is that this study just can’t possibly be trusted, or is at least suspect, because KIPP paid for it.* This sort of casual smearing of Mathematica, one of the most respected research organizations in the country on a variety of issues, is completely outrageous and ignorant of how they operate. For starters, their entire brand and business model depend on absolute integrity. But more to the point, they have a set of procedures to ensure a firewall, and they will take their name off any work where the client interferes with findings or outcomes. (I know whereof I speak, having worked with them in the past; I’m currently advising them on some forthcoming work.) Though it’s counter-intuitive, they are actually more accountable than many producers of research precisely because of the fragility of their brand, and they take that seriously.
*Isn’t the counter-narrative that KIPP should be praised for committing to a publicly disseminated evaluation like this? Outside of some work by the Council of Great City Schools and The New Teacher Project, you don’t see school districts lining up to do that very often, and KIPP is basically a mid-size but geographically non-contiguous district.
I would like to validate Andy’s description of MPR’s integrity as a research organization. In my work with them in the past, they showed the highest integrity. They were sensitive to any attempts to make more of the results of their work than was warranted (for example, reviewing a press release and making sure words were chosen very carefully so as not to misrepresent their view of the findings).
Oh, come on! Even if the Mayo Clinic hired a well-known researcher to tell them how great they are, we would not (and should not) accept the results the same way we’d accept results from an independent source. That does NOT mean the research from someone hired by the Mayo is invalid; it just means “consider the source,” as in critical thinking. Although it is indeed praiseworthy when any institution hires an outside company to evaluate it, this is the kind of information one would use for self-improvement, not the kind to publicize to others. This is just an extended version of “It’s better to let someone else compliment your work than to say it yourself” or “Don’t toot your own horn.”
Oddly enough, the NEA hasn’t been stepping up to fund rigorous research by third-party evaluators. I wonder why that is?
Linda – Evidence from my organization’s work with MPR runs contrary to what you contend. The last study they published had slightly negative results. This didn’t make us happy, and the sample was small, but neither we nor MPR ever considered not publishing the results.
Let’s step back a second. In the first place, other industries already have guidelines for disclosure; why don’t we use them? Americans routinely rely on industry-funded studies to decide what drugs they take. What do we know about those studies? In general, the ones subject to close scrutiny and peer review turn out to be pretty fair.
I have been told Mathematica’s original study doesn’t explicitly support all the claims made on KIPP’s website, so I’m not surprised at this.
The two KIPP schools in Indiana are still scoring below the state average.
http://www.doe.in.gov/istep/2010/index.html
Can’t wait for Andy’s take on what’s wrong with this Mathematica study:
http://www.mathematica-mpr.com/newsroom/releases/2010/Charterschool_6_10.asp
Let’s hope he remembers all the nice things he has said about their methodology in the post above.