Meritorious On Merit Pay, And A Look Back As You Look Forward To 2012

Sam Dillon with a solid look at the pay-for-performance state of play.

And a lot of chatter about the recent competition for regional education lab contracts. In that vein, a reader sends along this 1969 paper about the state of education research (pdf).

3 Replies to “Meritorious On Merit Pay, And A Look Back As You Look Forward To 2012”

  1. Most of the teachers Sam mentions from DCPS do not have their bonuses based on student test scores. Nor do they proportionally represent how bonused teachers are distributed across the Wards.

  2. First, Dillon did an extremely poor job reporting the story. He said nothing about the fact that the research, which is extensive, does not support merit pay; quite the reverse. And then he cites Eric Hanushek, the Hoover Institution conservative who lets his ideology drive his “research” studies, to suggest that merit pay does, in fact, work. Not true. Hanushek, by the way, claims that getting “rid of” the “worst” 10 percent of teachers would automatically make the U.S. internationally competitive. He discounted the Tennessee STAR study on the positive effects of small class size because the researchers didn’t test kids BEFORE they got into school.

    Second, Dillon says absolutely nothing about the structure of the IMPACT system in the DC schools. Jason Kamras and DC officials set up IMPACT in the form of a “normal” distribution, centered at the fiftieth percentile. As one review of IMPACT noted, “no matter how effective the teachers may be, half of them will fall below the median and half will be above.”

    Third, there are weak correlations between classroom observations and student test scores. As an analysis of IMPACT concluded, this is “perhaps not surprising given that tests measure limited competencies, whereas good schools teach a far broader set of skills.”

    Fourth, ratings on IMPACT instructional criteria range from 1 to 4; variations of MORE than 2 points, which on a 4-point scale are huge and constitute the widest possible variation, are allegedly “rare.” Variations of 2 full points are apparently far more common, or simply go unreported. A rating of “1” means a teacher is “ineffective,” while a rating of “3” conveys “effectiveness” and eligibility for a bonus. So that variation matters. A lot.

    Fifth, ratings can vary significantly from one year to the next, and master teacher observers can and do get caught up in “pettiness and inconsistency.” Moreover, teachers “have virtually no input in the evaluation, and appeals of the scores are rarely successful.” Most teachers report that “the biggest problem is the narrowing of the curriculum.”

    Finally, Sam Dillon had an opportunity to help educate the public and take apart the myriad misconceptions about merit pay. He had an opportunity to explain how the District’s IMPACT system is set up and to explore its many flaws. Instead, he wrote a puff piece that amounts to a tacit endorsement of both.

    That this is what now passes for “credible” education reporting in the mainstream media is cause for more than just disappointment.

  3. It’s interesting that phillipmarlowe posted EXACTLY the same comment on this Eduwonk blog that DrDemocracy posted at Bill Turque’s DC Schools WPost blog (re: Sam Dillon):
    http://www.washingtonpost.com/blogs/dc-schools-insider/post/suspicious-dc-cas-erasures-down-in-2011-but-osse-withholds-school-by-school-data/2011/12/31/gIQA9s7pSP_blog.html

    Can I assume DrDemocracy & phillipmarlowe are the same entity?

    Or perhaps they share the same cubicle at union headquarters?
