Education’s Learning Curve Hits A Plateau!

Once again the Center on Education Policy releases a report challenging the CW on an education issue and once again only Ed Week steps into the breach.  Last time it was No Child Left Behind and student test scores, this time it’s the “plateau effect” of test scores and reform.  This is useful work, but at this point if Jack Jennings doused himself in gasoline and set himself ablaze in front of the NEA, would anyone notice?  Well, I suppose it might at least generate a few stories blaming No Child Left Behind…

10 thoughts on “Education’s Learning Curve Hits A Plateau!”

  1. john thompson

    Read the last couple of CEP reports and you have to develop a more subtle appreciation of these issues. But these last two reports address exceptionally arcane issues, not the big realities that must be addressed before data-driven accountability can be salvaged.

    It’s easy for everyone in this polarized environment to seek evidence that supports their positions. The professionalism of the CEP is a great reminder. And if a more nuanced reading of the evidence shows that NCLB may be somewhat more effective (or less ineffective) than we thought, then that is good news.

    Wrestling with Charles Payne’s book also was a good reminder for me. Nobody knows how much of the increase in state scores is real and how much is a bubble. It doesn’t look like student performance has increased in ways that help kids succeed in high school, but it does show that everyone has been working hard – including NEA members.

  2. sandy kress

    One of the most important comments in the new cep report is that it appears to be the continuing pressure of accountability that may be responsible for continuing gains after the tests have been in place for years.

    This might cause us pause in “loosening” 1111 and, even more so, 1116, the accountability provisions of nclb, during re-authorization.

  3. sandy kress

    Of all the telling conclusions in the cep report, one stands out to me.

    It appears that continuing gains in the out years may be due to the stakes from accountability.

    Particularly, given the gains after nclb, this should give us great pause about “loosening” 1111 and 1116, the accountability provisions of nclb.

  4. john thompson

    Sandy’s comments help explain why it is so difficult to have a constructive conversation in the age of NCLB-type accountability. Yes, the careful researchers at the CEP did say that high stakes MAY explain (emphasis mine) continuing increases in the test scores they studied, which were state tests. They made no statements as to whether this was evidence of INCREASED LEARNING (emphasis mine). They wrote that changes in tests MAY have decreased destructive activities like narrow teaching to the test. If that was the case, then that could be evidence of increases in learning, but that was beyond the scope of their study. And, of course, the evidence might prove the opposite.

    One of the states where increases continued was Oklahoma, which just announced it would change its cut scores because the contrast between growing state scores and NAEP and other measures indicates that we aren’t getting a realistic indication of whether Oklahoma students are learning more. I’d argue that the most important measure is 8th grade NAEP Reading because that is the essential skill for high school. While Oklahoma state middle school Reading scores are dramatically growing in a classic “Bubble,” White NAEP scores are stagnant and African-American 8th grade NAEP Reading scores have declined by ten points since 1998.

    Also, notice the careful language of the CEP study that concluded that the plateau effect “should not be assumed.” Will NCLB supporters be equally respectful of CEP results when they challenge their beliefs?

    This exchange should be read in the context of the recent effort to eliminate funding for independent evaluation of District of Columbia gains, and NYC’s attacks on the independent audits of their graduation rates. In that case, the professional auditors with no dog in the fight are circumspect in their extremely careful wording of their conclusions. The NYC DOE, however, empties out the sewer in its response.

  5. sandy kress

    i’m not trying to claim any more than the report says. the cep report is clear, as have been the last several cep reports: student performance is improving, nclb has been a positive factor in that improvement, and improvement has generally not plateaued where the accountability systems are robust.

    is there increased learning? the report doesn’t get into the details of it. but i’d rather have indices of student capacity going up than down. i can only imagine what john would say if the opposite were so.

    i understand and accept the cep conclusion that the pattern is not uniform. but it’s clear that the general trend is positive.

    i won’t get into naep here, though i have at other times on eduwonk and elsewhere. the trend of naep since 1999, when standards based reforms began to flower, is absolutely positive. (i hope we don’t get into the 2000-2003 versus 2003-2007 comparison game since nclb was adopted in the middle of the earlier period. plus, the states were utilizing nclb strategies increasingly beginning in the mid-90s. in any event…)

    john is right, though, that 8th grade reading has been worrisome throughout the history of naep testing. but, again, the overall trends there are good.

    so, i hope that’s “constructive.”

  6. john thompson

    Hopefully also being constructive, I’ll agree that it’s generally good that student performance is increasing, even when it is measured by state scores.

    But, when state scores are increasing so much, while NAEP scores are so stagnant, that is not good. We can debate and discuss the proportion of the increases that are real and the proportion that are not.

    But we shouldn’t discount the harm done by increases in “student performance” (test scores) that aren’t real. They represent opportunity costs, wasted money and energy, and an increase in the dishonesty that poisons classrooms. The greater the Bubble between the two types of test, the greater the likelihood that excessive test prep, curriculum narrowing, etc. are damaging schools. If we want to judge the benefits of NCLB, we need a guesstimate of the real increases, a guesstimate of the unreal increases, and then to subtract the latter from the former. To judge NCLB by increased “student performance” based on state test scores is like a football team that only wants its good plays and points to count, not the fumbles, interceptions, and points scored by the opponent.

  7. sandy kress

    here’s the substance of an essay i posted on education front in the dallas morning news after the long term naep results were released a couple of months ago. it’s just not true to say the naep results are stagnant. in some ways, the improvement is significant and remarkable.

    Scores of Hispanic 9 year olds went up in math from 213 to 234 from 1999 to 2008. This equates to an astonishing improvement of 2 grade levels. It’s the largest gain in history. It represents a closing of the white-Hispanic gap at that level from 26 to 16 points. And, while we have much further to go, Hispanic 9 year olds are now performing about as well in math as whites were in the 90s. This is nothing short of a major civil rights achievement.

    Black 9 year olds made a gain in math from 1999-2008 that matched their largest gain in history, 13 points. This gain was particularly refreshing because black improvement had stalled entirely in the 90s.

    This same turn up in the slope for the most recent decade characterizes math improvement for black and Hispanic 13 year olds.

    As impressive as these results are, consider the reading results.

    Black 9 year olds went from 186 to 204 from 1999 to 2008. The black-white gap closed from 35 points to 24 points, all while white scores went up. But, best of all, this decade’s growth equalled the growth of the 70s, when the fruit of the civil rights era was finally ripening. Truly remarkable.

    The reading scores of black 13 year olds actually went down from 1988 to 1996, from 243 to 234. Yet, from 1999-2008, they’ve come back from 238 to their highest point ever, 247.

    Hispanic 9 year olds were stuck in a range of 183-193 from 1975 to 1999. Their scores are now at an all-time high, 207.

    Bill is right. We have a lot of work to do in our high schools. But we will have little success with that challenge unless and until we recognize the progress we’ve made, when it began, and what we started doing differently in the mid-90s that caused the rapid uptick in the last 10 years in performance in elementary and middle schools.

    This has been the decade of the flowering of standards based reform and accountability.

    Should we fix and continue to improve these reforms? Yes. Should we devote more, better targeted resources to education? Yes. But should we weaken or abandon these reforms? No, never. We could easily fall back into the stagnancy of the late 80s. We must not let that happen.

    Indeed we ought to take the lessons learned and apply them to high schools.

    Who knows? In another 10 years, we might be looking at these studies and see 17 year olds making the gains their younger siblings have made over the last decade.

    Sandy Kress

  8. john thompson

    Sandy,

    From your perspective in Texas it makes sense to see 1999 in the NCLB era. I don’t think it makes sense nationally, and certainly not in Oklahoma. For NAEP scores to be attributed to the era of NCLB-type accountability, the big city reforms would have had to have gained traction in the mid-1990s, and the burden of proof would be on you. The Chicago School Consortium argues that Chicago was making more progress in the 90s, but that was before stakes were attached. When Chicago started attaching stakes, they argue, the rate of improvement slowed. As I recall from my readings, as opposed to direct knowledge or focused study, NYC and Philly reforms were antithetical to NCLB-type approaches. You might call Massachusetts a victory for reform, but I’d call it the anti-NCLB.
    Maybe you can claim some results from N.C. and Florida, but I’ve never heard of a great flowering of data-driven accountability in the suburbs in the mid-1990s.

    What I recall was the belated recovery from the 1991 recession, some distance from the crack and gangs and murder epidemic, and some more distance from the 1982 Reagan recession that was probably the dominant single cause of educational decline.

    That’s why it’s also good to discuss the actual practices that were adopted post-NCLB. Maybe you and I would see the same examples of instruction and you would call it effective teaching while I’d call it awful. I suspect we’d disagree on teaching methods, but I doubt that the differences would be that stark.

    From what I see, NCLB brought in the type of money you in Texas have had, but it also brought in the worst of Texas methods.

    And I guess that brings me back to the things that frustrate me. Surely you all can see that NCLB brought bad as well as good, even if you disagree about how much bad in relation to how much good.

  9. sandy kress

    If one reads nclb for what it says and actually requires, I don’t see the negative at all. Many schools have done extremely good things in the wake of it – I’ve seen it, as have others.

    However, I do agree with you, John, that there are many administrative practices that have been developed and implemented that are very poor and outright bad. While I agree we ought to root such practice out, I would disagree that much of it is actually caused by nclb.

    As to pre-nclb practice, yes, I think Texas, North Carolina, Massachusetts, and Florida were early practitioners of this sort of standards based reform. I have spent extensive time with leaders in all these states, and they tell me that’s so. You are no doubt right, though, that their early reforms were not fully in sync with nclb. We, in Texas, for example, have had the challenge, which I think we’re navigating pretty well, of making it all work together. Jeb Bush, to your point, would argue that there are important disconnects in the details. But, overall, the general approach is much alike, certainly in contrast to the broad operating philosophy before 1990.

    As to Chicago, NYC, Philly, and Oklahoma, I’ve studied their data pretty carefully. I’d love it if you could refer me to more I may have missed. But I didn’t see much gain in the 80s or 90s in those places, except for a pop in the first Vallas years in Chicago. Again maybe I’m wrong. I’m hungry for data, if I am.

    I definitely agree with you as to other societal forces that contributed to the performance, both good and bad. Further, I happen to think that Clinton’s IASA was a positive.

    I guess my main point is not really confined to nclb. It is rather that standards based reform, which began to be a dominant factor in the country in the mid-to-late 90s, has been overall a very positive force, and we should be reluctant to dismantle it.

    Agreeing though with your final challenge, I would close with the thought that some “bad” or “not good enough” needs repair. I’m open to any such ideas so long as “fix” does not cross the line into “weaken.”

  10. Dick Schutz

    Hey, guys. Formal instruction in reading per se starts to wind down toward the end of grade 2 and is over by the end of grade 3. If a child has not learned/been taught to read by that time, it’s highly probable that the kid has encountered so many faulty reading techniques and psychological obstacles that “reading” will be a life-long weakness.

    If you look at the items on NAEP or on any other popular standardized reading test, you’ll see that the differences across age/grade levels have little to do with the text passages. They’re in the carefully contrived item “foils.” You can nudge these distributions, but you can’t make all students “proficient,” which is rightfully the aspiration of NCLB. The test construction and reporting preclude that possibility.
