Whole Lotta News!

Suddenly a big news day.

In New Jersey, state ed chief Bret Schundler has been fired by the governor over the budget issue with the state’s Race to the Top application.  Wow.  Given how Governor Christie has treated Schundler throughout this process, good luck finding someone strong for that position.   And, given that Schundler was a favorite of the school choice crowd, what’s the fallout there?

Sad news from Kentucky: Robert Sexton has died. He headed the Prichard Committee, arguably the prototypical state education advocacy organization, and was instrumental in key education policy battles around standards and finance, among other issues.

A lot of jaws dropped over this story in The Washington Post today.  Legitimate issue, but the Post came down hard one way and didn’t caveat things.  Were they just mimicking The Times and its stories on the gaps there?  In any event, at TNR Jon Chait cuts to the chase. Save yourself some time and read that.

TNTP continues to hit the cover off the ball. And it’s a scandal that the citizens of D.C. don’t have a better public university. The new rankings of dropout factories also illustrate, again, that the for-profit/non-profit distinction is not an especially useful quality delineation in our field right now.

60 Replies to “Whole Lotta News!”

  1. Phillipmarlowe:

    1) Look at my comment above for my response to the data set that you have now posted here THREE SEPARATE TIMES: (https://www.eduwonk.com/2010/08/whole-lotta-news.html/comment-page-1#comment-209523 ). If you check out my reply below to Billy Bob, you’ll see more of a response. Still waiting for yours.

    2) The B-W achievement gaps have *CLOSED* since 2007. Proficiency for black and white subgroups has *INCREASED* since 2007. I know that you’ve a penchant for misrepresenting data, but there are a few reasons that 2007 should be considered the start date for grading DCPS schools: Rhee began pushing reform when she started in 2007, and it adds another year of data to an already small set of time points.

    3) “You made that error on the DCPS data Chris, and unlike you, I gently pointed it out”

    I was mature enough to say thanks and acquire the correct version of the data (which only further suggested positive results), unlike you, who again tried to defend the link while tossing in a passing reference to Lady Gaga, Tijuana, and assholes. Am I arguing with a student at DCPS?

    4) It’s also interesting that you mentioned the Hispanic-White achievement gaps. Given the much smaller sample sizes of Hispanic students in each grade compared to the black students in my previous analysis, we should expect student variation to have a bigger effect on the numbers, and that’s in fact what we see, along with mostly positive growth. Let’s look at the data:

    Grades 7,8,10 Reading 2010
    Hispanic Proficiency: 43.75%
    White Proficiency: 89.02%
    Achievement Gap: 45.27%

    Grades 7,8,10 Reading 2009
    Hispanic Proficiency: 46.01%
    White Proficiency: 89.04%
    Achievement Gap: 43.03%

    Grades 7,8,10 Reading 2008
    Hispanic Proficiency: 41.34%
    White Proficiency: 87.25%
    Achievement Gap: 45.91%

    Grades 7,8,10 Reading 2007
    Hispanic Proficiency: 32.21%
    White Proficiency: 84.92%
    Achievement Gap: 52.71%

    1) The achievement gap in secondary reading has closed in the years “post-Rhee” (-7.44% since 2007).

    2) The proficiency of Hispanic and white secondary students in reading has increased overall (+11.54% for Hispanic students since 2007; +4.1% for white students)

    *****

    Grades 7,8,10 Math 2010
    Hispanic Proficiency: 47.82%
    White Proficiency: 88.53%
    Achievement Gap: 40.71%

    Grades 7,8,10 Math 2009
    Hispanic Proficiency: 52.20%
    White Proficiency: 85.16%
    Achievement Gap: 32.97%

    Grades 7,8,10 Math 2008
    Hispanic Proficiency: 48.29%
    White Proficiency: 86.12%
    Achievement Gap: 37.84%

    Grades 7,8,10 Math 2007
    Hispanic Proficiency: 31.71%
    White Proficiency: 82.86%
    Achievement Gap: 51.15%

    1) The achievement gap has closed overall “post-Rhee” (-10.44% since 2007)

    2) The proficiency of Hispanic and white secondary students in math has increased overall (+16.11% for Hispanic students since 2007; +5.67% for white students)

    *****

    Grades 3-6 Reading 2010
    Hispanic Proficiency: 42.30%
    White Proficiency: 89.37%
    Achievement Gap: 47.07%

    Grades 3-6 Reading 2009
    Hispanic Proficiency: 47.78%
    White Proficiency: 88.25%
    Achievement Gap: 40.47%

    Grades 3-6 Reading 2008
    Hispanic Proficiency: 49.79%
    White Proficiency: 87.95%
    Achievement Gap: 38.15%

    Grades 3-6 Reading 2007
    Hispanic Proficiency: 44.80%
    White Proficiency: 86.94%
    Achievement Gap: 42.14%

    1) The data here is impacted by the same effects of variation seen in the black-white achievement gap. The drop in % proficiency from 2009 to 2010 for Hispanic students in elementary reading can be partially attributed to the poor performance of 3rd graders (37.61% proficiency for Hispanic students) and the absence of the high scores seen for 2009’s 6th graders (52.75% proficiency for Hispanic students). The former group comprises students who had not been tested before and thus were not part of the calculations in previous testing years, so while their data does suggest that additional inquiry and support are needed, it also points to student variation itself as one cause of the dropping numbers this year. The loss of the latter group’s effect on the average contributes similarly.

    2) For the only time in the data presented thus far, the proficiency of Hispanic elementary students in reading has slightly decreased overall, while that of white students has slightly increased (-2.5% for Hispanic students since 2007; +2.43% for white students). As in #1, there is a pronounced effect on the average from this year’s 3rd graders, who scored a full 10.68% lower in proficiency than 2009’s 3rd graders, and similarly lower than 3rd graders at every other time point. The small sample sizes of Hispanic students compound the effect of student variation (see the sketch after #3 below).

    3) The achievement gap overall has opened slightly (+4.93% since 2007), attributable to the slightly rising scores of white students and the slightly falling scores of Hispanic students shown in #2.
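
    To make the composition effect in #1 and #2 concrete, here is a minimal sketch in Python. Only two figures come from the text (37.61% for 2010’s 3rd graders, 52.75% for 2009’s 6th graders); the other grade-level numbers are made up purely for illustration, and the average is unweighted:

        # Composition effect: the grades 3-6 average moves when one grade's
        # cohort changes, even if the other grades are flat.
        # 37.61 and 52.75 come from the text; the rest are hypothetical.
        y2009 = {3: 47.5, 4: 47.0, 5: 46.0, 6: 52.75}
        y2010 = {3: 37.61, 4: 47.0, 5: 46.0, 6: 44.0}

        def unweighted_avg(year):
            return sum(year.values()) / len(year)

        # Swapping a high-scoring 6th-grade cohort out and a low-scoring
        # 3rd-grade cohort in drags the average down on its own.
        print(round(unweighted_avg(y2009), 2))  # -> 48.31
        print(round(unweighted_avg(y2010), 2))  # -> 43.65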

    *****

    Grades 3-6 Math 2010
    Hispanic Proficiency: 45.57%
    White Proficiency: 87.64%
    Achievement Gap: 42.07%

    Grades 3-6 Math 2009
    Hispanic Proficiency: 51.61%
    White Proficiency: 87.18%
    Achievement Gap: 35.58%

    Grades 3-6 Math 2008
    Hispanic Proficiency: 46.18%
    White Proficiency: 85.55%
    Achievement Gap: 39.37%

    Grades 3-6 Math 2007
    Hispanic Proficiency: 38.10%
    White Proficiency: 80.40%
    Achievement Gap: 42.31%

    1) Again, we see an impact on average proficiency due to the poor performance of 3rd graders (36.34% proficiency Hispanic students). 3rd grade averages for math scores in general have trended slightly below the mean for the last 4 years. In 2010, however, they trended much lower than in prior years.

    2) The proficiency of Hispanic and white elementary students in math has nonetheless increased overall (+7.47% for Hispanic students since 2007; +7.24% for white students)

    3) The achievement gap overall has very slightly closed (-0.24% since 2007).
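
    For anyone who wants to check the arithmetic behind the four tables above, here is a minimal sketch in Python, with the numbers transcribed from the grades 3-6 math table (the same two lines of subtraction apply to the other three tables; differences in the last decimal place against the figures above are just rounding in the source data):

        # DC-CAS proficiency percentages transcribed from the tables above
        # (grades 3-6 math, Hispanic and white subgroups).
        data = {
            2007: {"hispanic": 38.10, "white": 80.40},
            2008: {"hispanic": 46.18, "white": 85.55},
            2009: {"hispanic": 51.61, "white": 87.18},
            2010: {"hispanic": 45.57, "white": 87.64},
        }

        # Achievement gap = white proficiency minus Hispanic proficiency.
        gaps = {year: d["white"] - d["hispanic"] for year, d in data.items()}
        for year in sorted(gaps):
            print(year, round(gaps[year], 2))

        # Change since the 2007 baseline (negative = gap closing).
        print("gap change since 2007:", round(gaps[2010] - gaps[2007], 2))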

  2. Billy Bob:

    1) Give an example of where I’ve “respond[ed] like a jackass”, or of an argument that I “cannot refute”. Was it a line of critique focused on arguments or was it ad hominem? Was it in response to trollish claims like, “KIPP leaders are white supremacists”? Please, do enlighten me on the Billy Bob Standards for Online Debate, they sound like fun.

    2) “While looking at the same group of students over time from one grade to the next is the best method”

    And in this case, that will work ONLY if you have a good normalization factor for the student attrition that ALWAYS occurs. If every group of students consistently experiences some level of attrition, or if some trend in student data persists over time, you need to account for that trend when assessing achievement gaps. If one group of students declines in achievement over time, it doesn’t tell you anything about trends in student performance if every other group also experiences a decline REGARDLESS of the year the data was taken. The only way you could begin to account for this would be to average the achievement gaps of each grade level within a given year; over several years of time points, this gives you an averaged impact on the achievement gap over time (I’m very specific about how this works in the bottom half of my comment). Doing this analysis suggests either that student variation has only trended upward (as gaps at each grade level on average have tended to decline over time) OR that there have been improvements in closing the achievement gaps. In the very unlikely former case, the DC-CAS scores should not be used to bolster *ANY* side of this debate, while in the latter case we see progress being made.

    3) “So, the next best method (unless you are a TFA failure in a crappy masters program) is to look at scale scores at the same grade level over time. Which is what people did. I did it with NAEP (despite your idiotic claim that I did not) ”

    If you were to stop snarling and foaming at the mouth with your argumentation, you would be able to read what I actually wrote. What you were trying to suggest with NAEP was incorrect, as I have already explained:

    ***”First, there’s exactly one time point from NAEP that is “post-Rhee”. If one data point is enough to get you grandstanding and making large inferences about the quality of reform that Rhee is touting in DCPS, I can’t wait to read all the equally valid claims in your forthcoming dissertation.

    Second, were we to nonetheless extrapolate from that one data point to make inferences about Rhee, she still comes out alright:

    In total, the 2009 testing year saw total growth of 17 (+/- 1.76) in scale scores for reading and mathematics. The 2007 testing year had total growth of 15 (+/- 1.65). Given the error, the growth seen in 2009 is not significantly different from that seen in 2007, but neither is this definitively a growth trend that had been ongoing before 2007: the growth in 2005 is not as large (10 +/- 1.81), and the 2003 scores vary due to the different time scales measured. You and others need to stop claiming that NAEP scores have been on the same upward trend “pre-Rhee” as they have been “post-Rhee”. The data that we have doesn’t directly suggest that.”***

    (https://www.eduwonk.com/2010/08/whole-lotta-news.html#comment-209153 )
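
    For what it’s worth, here is a rough sketch of the kind of significance check being described, assuming independent standard errors and a two-sided 95% z-test (the exact method used upthread isn’t spelled out, so treat this as one plausible version):

        import math

        def significantly_different(g1, se1, g2, se2, z=1.96):
            """Two growth estimates differ significantly (95% level) if the
            gap between them exceeds z times the combined standard error."""
            combined_se = math.sqrt(se1**2 + se2**2)
            return abs(g1 - g2) > z * combined_se

        # Total scale-score growth figures quoted above: (growth, std. error)
        print(significantly_different(17, 1.76, 15, 1.65))  # 2009 vs 2007 -> False
        print(significantly_different(17, 1.76, 10, 1.81))  # 2009 vs 2005 -> True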

    4) “and Phillip did with DC scores. Gaps are NOT closing and may be even widening.”

    And he was only able to show this through an analysis that doesn’t normalize for student attrition, and that attempts to fault the current state of the district for an ongoing trend in student enrollment that has likely always existed.

    In 2007 (before Rhee was hired), results from DC-CAS showed that, as one looked toward the later grades, student proficiency dropped and achievement gaps increased, while at the same time there were (shockingly!) fewer students enrolled each year. While this is data for a large number of students in DCPS that year, there is an obvious trend here between the ~53% gap in elementary grades and the ~61% gap in secondary grades, and it very likely isn’t due to student variation alone: for that to be the explanation, variation would have had to trend continually upward for 7 years, with progressively younger students reaching progressively higher achievement levels than their 2007 10th-grade counterparts.

    Were we able to parse the data for previous years by grade level, you would more than likely see the exact same thing. This seems rather intuitive, even outside of DCPS: as students advance in grades, some will drop out, while others will steadily lose interest in school (for various school-related and society-related reasons). The achievement gap builds as students get older. There’s a reason that intervention programs are almost always aimed at older students, and that’s to keep them in school and motivate them to achieve.

    So how does this all pan out? As a group of students moves through the grades, each year some of them will become generally more disinterested in schoolwork, and will achieve less. This is obviously a problem, and unfortunately one that has been around longer than Rhee has.

    If you understand the above couple of paragraphs, the following statement should become crystal clear: as ANY group of students moves through a school system, one would expect a trend toward declining proficiency and increasing achievement gaps, caused by any number of factors (schools aren’t supporting the needs of different student subgroups; there’s little support at home; students come to school hungry; etc.). ANY analysis of a single group of students has to account for this trend, and some of it will likely have been caused by non-school factors. That makes this type of analysis very problematic.

    What’s the solution? Well, we could observe different groups of 3rd graders over time, calculate their growth from 3rd to 6th grade, and compare this total growth. The 2007 3rd graders would then have a normalized number describing the achievement gap change as time went on, as would the ’08 3rd graders, and the ’09 3rd graders, and so on, giving more and more examples of the actual longitudinal impact of schools on these students.

    We could take this analysis further to sample students at all grade levels, making similar analyses for the 2007, ’08 and ’09 4th graders, the same for 5th graders, and so on, to get a normalized, longitudinal estimate of achievement gap changes throughout the grades. If, on average, the 2007 sets of students experienced more decay in proficiency over 4 years’ time than did the 2009 sets of students, that would suggest that the district is improving.
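
    In code, the cohort analysis described above would look something like this minimal Python sketch. Every number in the table is hypothetical, since (as noted next) we don’t actually have enough years of data to run it:

        # Hypothetical layout: proficiency[year][grade] for one subgroup.
        # All numbers here are made up; DC-CAS doesn't yet give us enough
        # years to fill in a table like this for real.
        proficiency = {
            2007: {3: 45.0, 4: 44.0, 5: 43.0, 6: 41.0},
            2008: {3: 46.0, 4: 43.5, 5: 42.0, 6: 40.5},
            2009: {3: 47.0, 4: 44.5, 5: 42.5, 6: 41.5},
            2010: {3: 44.0, 4: 45.0, 5: 43.5, 6: 42.0},
        }

        def cohort_change(start_year, start_grade, end_grade):
            """Follow one cohort diagonally through the table and return its
            total change in proficiency from start_grade to end_grade."""
            end_year = start_year + (end_grade - start_grade)
            return (proficiency[end_year][end_grade]
                    - proficiency[start_year][start_grade])

        # The 2007 3rd graders reach 6th grade in 2010: one normalized
        # number per cohort, comparable across starting years.
        print(cohort_change(2007, 3, 6))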

    Unfortunately, we don’t have enough data to do any of that. We’d be able to tell how the 2007 3rd graders have fared by the time they entered 6th grade in 2010, but what happens next? There is not enough data to continue this analysis for the 3rd graders of ’08, as they haven’t yet made it to 6th grade testing.

    Phillip tried to do something along these lines. He tracked the performance of 2008 3rd graders over time, the performance of 2008 4th graders over time, and so on. There were, however, three key issues with his analysis:

    * He made no attempt to calculate the total decay in scores of the 2008 3rd graders by the time they entered 5th grade in 2010, or to do the same for the 2008 4th graders, etc. Instead, he found declining numbers and shouted ‘A HA!’.

    * He implicitly assumed that the dropping achievement levels for 2008 3rd graders could be adequately compared to the dropping achievement levels for the 2008 4th graders. This is not so: there’s nothing to suggest that the dropping achievement numbers each year ought to be exactly the same among different age groups. In other words, if 3rd graders in 2008 dropped 5% proficiency when they reached 4th grade in 2009, that shouldn’t be compared to how the 4th graders in 2008 dropped when they reached 5th grade in 2009.

    * It is categorically impossible for him to do the next step needed to make his analysis complete: compare the 3-year decline of 2007 3rd graders to that of the ’08 and ’09 3rd graders, the decline of 2007 4th graders to that of the ’08 and ’09 4th graders, and so on. As it stands, all he has shown is that, in the 3-year span he is analyzing, students who start at different grade levels lose different levels of proficiency. We already knew that! What we need to know is whether the district is helping stabilize this downward trend over the years, and Phillip can’t reliably answer that with his data.

    There is an alternative to this mess: we could instead average the achievement of different grade levels in a given year and compare these averages longitudinally, year over year. This would still use student achievement data that is complicated by student attrition and the other factors explained above, but the saving grace is that the other grades would be included in the calculation as a way to normalize the numbers. If 10th graders have had 3 more years than 7th graders to become disinterested in school and be affected by their home environments, it makes sense to combine their achievement data with that of 7th and 8th graders, who have experienced this less. This averaged data point can be compared across years (secondary students in 2009 versus 2010), so that no single data point is heavily skewed by the above factors (namely student variation), since all of the data is averaged equally. An additional benefit of this analysis is that we *HAVE* the data to do it: enough for 4 different time points, from 2007 on.
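
    As a minimal sketch of that averaging (all per-grade numbers here are made up; the average is weighted by enrollment, as in my analysis upthread):

        # Hypothetical per-grade gaps (white % proficient minus black %
        # proficient) and enrollments for one year's secondary grades.
        grades = {
            7:  {"gap": 55.0, "n": 900},
            8:  {"gap": 58.0, "n": 850},
            10: {"gap": 61.0, "n": 700},
        }

        # One enrollment-weighted gap for the whole year: folding the grades
        # together damps the year-to-year variation of any single cohort.
        weighted_gap = (sum(g["gap"] * g["n"] for g in grades.values())
                        / sum(g["n"] for g in grades.values()))
        print(round(weighted_gap, 2))

        # Repeat for 2007-2010 and compare the four yearly averages.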

    Using the analysis described above (as I have done upthread) actually suggests *gains* in reading and mathematics, and generally sizable ones, too. What’s more, the difference between elementary and secondary achievement gaps is also narrowing, indicating that some headway is being made against the in-school factors that contribute to declining student achievement over time; for example, 10th graders improved to a level on par with elementary students, something that has *NOT* been seen in the data going back to 2007.

    Add to this analysis the growing NAEP scores, and it suggests that the testing data we have is at least one good (but long) argument for keeping Rhee in DCPS.

  3. Wow–that was really stupid.

    Scores increase over time, not decrease. Your assumptions are tragically flawed. Kids who disappear are the lower-performing students, thus student attrition leads to increased scores, not decreased scores. Unless in DC the students fleeing the schools are the highest performing–which may be the case–but I have seen no evidence on that point. Further, the dropouts would cancel out the movers, thus attrition would likely be a wash in the higher grades.

    I’ve looked at student-level data for more than a decade, and not once has student achievement declined over time as students progress across grade levels (assuming the test stayed the same over that period).

    I love how you critique everyone else’s analyses as flawed, then present your own while not holding it to the same standards that you used to critique the other people’s analyses.

    The only way you could average scores across grade levels and assess the achievement gap is if the scales are comparable across grades and have properties such that a 10-point gain anywhere on the continuum equals a 10-point gain anywhere else on it. I don’t think the DC-CAS has those properties. Plus, you have to ensure no ceiling effect is influencing the results. Further, you would need to weight by the number of students in each grade. You have not satisfied these conditions yet. Maybe they do hold, but we don’t know.
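
    To picture the ceiling-effect worry with toy numbers (nothing here is from the DC-CAS; it’s purely an illustration):

        # Toy ceiling effect: true gains near the top of the scale get
        # compressed because the test can't report above its maximum.
        CEILING = 100

        def observed(true_score):
            return min(true_score, CEILING)

        # Two students both gain 15 "true" points, but the high scorer's
        # observed gain is truncated, distorting averages and gap math.
        print(observed(60 + 15) - observed(60))   # 15
        print(observed(95 + 15) - observed(95))   # 5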

    Our point with NAEP is that Rhee wants to fire teachers using gains from one year to the next. Her NAEP gains and achievement gap results suggest she is not on track. Granted, it would be foolish to fire someone over one data point in time. YET THAT IS EXACTLY WHAT RHEE WANTS TO DO. Typical of TFA people–hold everyone accountable but yourselves. You and Rhee are two peas in a pod.

  4. I love how you critique everyone else’s analyses as flawed, then present your own while not holding it to the same standards that you used to critique the other people’s analyses.

    CORRECTION

    I love how you critique everyone else’s analyses as flawed, then present your own while not holding it to the same standards that you PRETEND you used to critique the other people’s analyses.

  5. Billy Bob:

    1) If dropouts impacted student scores to trend UPWARD, why in the world would we see GROWING achievement gaps as students get older?

    “The picture that emerges from our research suggests that, as in studies with a majority of White students, in a diverse school district achievement gaps do develop, both for Black and Hispanic students. However, when and how the gaps develop varies by racial group. In particular, we find that Black students have significant test score gaps with respect to White students in the first grade, whereas Hispanic students’ gaps become significant in the second grade (especially in math). Moreover, as the gaps widen in later grades for both Black and Hispanic students, Hispanics’ gaps are consistently smaller than Black students’ gaps, often half the size.”

    (http://onlinelibrary.wiley.com.oca.ucsc.edu/doi/10.1111/j.1541-0072.2004.00072.x/abstract )

    “Previous efforts to explain the Black-White test score gap have generally fallen short – a substantial residual remained for Black students, even after controlling for a full set of available covariates. Using a new data set, we demonstrate that among entering kindergartners, the Black-White gap in test scores can be essentially eliminated by controlling for just a small number of observable characteristics of the children and their environment. Once students enter school, the gap between White and Black children grows, even conditional on observable factors. We test a number of possible explanations for why Blacks lose ground. The only hypothesis which receives any support is that Black students attend worse schools on average.”

    (http://www.nber.org.oca.ucsc.edu/papers/w8975 )

    “The size and stability of gender, ethnic and socio-economic differences in students’ educational achievement are examined over a 9 year period. Both absolute differences in cognitive attainment and relative differences in progress are considered. The study, which is part of a follow up of an age cohort originally included in the ‘School Matters’ research, utilises multilevel modelling techniques. Attainment in reading and mathematics is reported at primary school (Year 3 and 5), secondary transfer (Year 6) and in the General Certificate of Secondary Education (GCSE) (Year 11). Whilst differences in achievement related to gender and socio-economic factors remained consistent and generally increased over time, greater change was found in patterns of ethnic differences.”

    (http://www.informaworld.com/smpp/content~db=all~content=a746316026 )

    2) “The only way you could average scores across grade levels and assess the achievement gap is if the scales are comparable across grades and the scale scores have properties that result in a 10 point gain anywhere on the continuum being equal to a 10 point gain anywhere else on the continuum.”

    I already noted that student variation *WILL* affect this analysis, as you point out. It adds a level of error, and I point out above where that error is most likely affecting the numbers. Thing is, this is the *ONLY* adequate way we have to analyze the data correctly.

    3) “Plus, you have to ensure no ceiling effect is influencing the results.”

    If this effect were occurring, it would bias the proficiency numbers I calculated downward. My analysis thus gives, at minimum, a lower-bound estimate of the proficiency gains made by black students. This doesn’t change the positive trends observed.

    4) “Further, you would need to weight by number of students in each grade.”

    Look closer: already did that.

    5) NAEP scores are INCREASING! Going UP! IMPROVING! And at a rate QUICKER than that seen in 2005, and comparable to 2007! Every time you make the argument that NAEP scores show she’s failing students, I want to throw my computer out the window. They show the EXACT OPPOSITE, even if you assume that one time point is enough to measure her impact.

    6) Rhee wants to use IMPACT, which utilizes student data (normalized to student background) for 50% of teacher evaluations. The exact methodology has not been made clear, but it is very, VERY likely that the goal is to utilize longitudinal student data over the span of several years to create this value-added component, and NOT to throw teachers out for one testing year without ANY other evidence being factored into the decision.

    It’s just beyond simple-minded to suggest that Rhee would be fired if she were a teacher with these improving student data.

    7) I’m still waiting for someone to respond to about 50% of my previous arguments.

  6. 5) Asshat–not in a statistically significant manner. We already covered this territory, but your feeble TFA mind forgot already.

    6) No one wants to waste their time because you just move onto something else or repeat something from before when someone shows that you are wrong.

    I guess you have not picked up on the fact that people know you are a complete idiot and a TFA hack who believes everything deformers say regardless of real evidence.

    The only reason I even post here is to see you get your panties in a twist.

  7. Billy Bob,

    Lots of huffing and puffing, yet nothing resembling reasonable comments. I didn’t see that coming.

    Stop pretending that you’ve replied to even half of the counterarguments I’ve presented here; instead, here you’ve chosen again to feign (maybe?) ignorance. The numbers aren’t significant? Are you really trying to push that gem through again?

    ***”I also did a quick error analysis on the scale scores that I totaled using their stated standard error, showing that the scores are or are not significantly different as indicated.”***

    (https://www.eduwonk.com/2010/08/whole-lotta-news.html#comment-209171 )

    The growth in every NAEP category for 2009 is above the limits of error; the total growth seen in 2009 was not significantly different from the total growth seen in 2007, though it was significantly different from the total growth seen in 2005. None of this has changed since the last time you incorrectly argued that NAEP scores are stagnant.

    And what is with your obsession with TFA? This unbridled hatred cannot be healthy, nor do your conspiracy theories do much to showcase your ability to reason. What I find most curious is why a researcher would want to go on researching a topic that made him convulse in spasms of vitriol at the mere thought of it – and why he would compromise whatever reputation for objectivity he may have had by sniveling in an online forum, referring to folks he didn’t like as “TFA hacks”.

    Well? Go on — this is the part where you call me a TFA poopface and refer again to UTeach’s retention data.

    (https://www.eduwonk.com/2010/08/adding-value-2.html#comment-209285 )
