Two stories from last week that will have legs going forward and are worth keeping an eye on:
Cheating in D.C. schools. This story seems likely to have some twists and turns, with more to come. Smoke, fire, all that.
The Gary Miron “study” on KIPP from Western Michigan University (pdf). As a rule, when a “study” is shared with the schools being studied only after it’s already been circulated to reporters, you should be leery. These issues are complicated, so common – and understandable – mistakes, for instance conflating or mixing up capital and operating expenses, are easily addressed through a review process – before a study is released. More generally, as Brian Gill (Mathematica) and Robin Lake (who has herself raised questions about KIPP) have pointed out, a lot of this report rests on “apples to watermelons” comparisons, and KIPP’s own response is a good overview (pdf). That’s less understandable. But the issue to keep an eye on is whether some easily-fooled reporters should have been more skeptical and asked some obvious questions they apparently did not. Stay tuned.
66 Replies to “Still Going…”
I understand your example, and I do agree with you on the fact that a majority of the comments do boil down to what you say. I have one tack here, and I have been working it for a while: there is no explicit evidence of this – mostly because the random assignment you mention would probably be impossible – and the point that KIPP enrolls a subset of families is based entirely on one’s personal assumptions and feelings.
On your point about the baseball team, personally I think there are shades of grey in there that could complicate the example, and I think those are meaningful (e.g., do they “recruit” from every household in the neighborhood or not, and how is the recruiting phrased?). Most of what I have been getting at are these shades of grey – you will see that I have rarely (I did it once, but qualified it a lot) actually made an argument; instead I am desperately trying to point out that we don’t know these things for a fact, no matter how confidently one writes them as fact. That, and people pretend to read research but do not (since when did that become trendy and cool?).
Otherwise, while I think you do a minor little switch-a-roo – simply because KIPP states that everyone must make a choice does not suggest the inverse, that they only take those who make that choice – I do agree with you. But the weight we give this “school of choice” factor, relative to everything else in determining student outcomes, is an issue: some think it is the only factor, and that is what I take issue with.
Rob: Well said. Your baseball team analogy is on-point. I had thought of a similar analogy (to a little league team) myself: As long as students or players must take an affirmative step (registering, attending interest meetings, filling out an application) to join a particular school/team, such students/players will automatically be more likely to ‘buy in’ to the activity than would a randomly selected group from the community.
In addition, the students/players who affirmatively joined the school/team would likely be more swayed by the threat (implicit or explicit) to be kicked off the team (or expelled from school) if they do not follow the rules, than would a student/player who had not affirmatively selected the school/team in the first place.
I’d like to add that the above concept of “buy-in” matches my experience in the real world: students are more scared to be expelled from a private or charter school (or selective program) than from a traditional public school. This tended to translate into appreciably better attendance and behavior patterns among the students.
For example, when I taught SAT prep classes for the Princeton Review, there were almost no absent students, and students uniformly behaved well and paid attention in class. This included, for the most part, students attending the free classes we provided for low-income DC students who wanted to do well on the SAT (primarily to qualify for college athletic scholarships). All these students made the choice to attend these classes (and, for the regular classes, to pay for them). Consequently, they had a certain amount of ‘buy-in’ or they wouldn’t have taken the time to register and show up. Their behavior and attendance reflected that ‘buy-in’, which I would also expect to occur in a school like KIPP.
Hey Attorney DC, you didn’t answer my question yet. This is the fourth or fifth time you didn’t, and I keep asking it:
“So it took me two posts from you to admit there is no research to prove your point, so now I go to the next one, which is the same exact question I asked all the way above:
Have you worked in admissions for KIPP? Have you worked at KIPP and can first-hand prove this?”
SM: I haven’t worked in admissions at KIPP, but I’m not sure how relevant that is to this discussion. Someone working admissions at KIPP would presumably have knowledge about the KIPP parents, but wouldn’t have any research-based method of comparing these parents to the vastly larger group of non-KIPP parents, correct?
Still waiting for a reply from Attorney DC, particularly about how his opinions are different from facts. “Don’t have time to address all your points” he said last week. What about now?
Your analogy fails because:
* It presupposes that there is a sizable group of families/students who do not “at least like [school]” or “have some interest in doing well [academically]”. You acknowledge that we can’t make any assumptions about these families’ initial interest levels, yet you do just that when you suggest there will be many families who “lack buy-in” in traditional classrooms. Given this, it’s not clear if the intrinsic differences in motivation to succeed are large or even apparent between the schools.
* Whereas option 1 will likely yield a group of kids that will play better than those gained from recruitment under option 2, for KIPP this is not the case. Before starting at KIPP, these students are not performing any better academically than their peers.
* The analogy implies that these kids will play baseball only if they are recruited. In reality, every student is participating in a classroom. Thus, if a family chooses to keep their kid in a traditional school instead of being recruited to KIPP, that does not have any implicit bearing on said student’s/family’s motivation to “play ball”, as there may be many other reasons for not wanting to attend KIPP.
KIPP is a school of choice, but I think the critical point here is whether KIPP is effectively “creaming” (intentionally or unintentionally) or whether the possibility for such exists but is not yet shown to be fulfilled. I guess answering this question does depend on how one defines “creaming”, but I hope that what I’ve written above explains why your definition is too weak to be useful here: “Signing up” doesn’t directly translate into a better KIPP/baseball team. That these kids become great after signing up ought to also suggest how important in-school factors still are.
How do you not see its relevance? My point is precisely what you say: someone working at KIPP would have that knowledge about the parents, but you do not work at KIPP, so you do not have that knowledge.
So you haven’t worked in admissions at KIPP– so you don’t have first-hand experience of how they recruit and admit– and there is no explicit research supporting your argument that KIPP enrolls a subset of families.
So if you don’t have first-hand experience and there is no evidence to support your argument, your continually maintained point that KIPP enrolls a subset of families is just a guess with no basis.
Is there a specific report or research or whatever that lays out the relationship or relative weight of in-school vs. out-of-school factors and their weight on student outcome variables?
My area is charters/ vouchers/ teacher preparation… not as much this in-school vs. out-of-school argument (peer effects… don’t know that much).
I don’t have specific papers that I can highlight but scattered along Google/ERIC one can find a large swath of papers suggesting the (apparently) exceedingly bold idea that teacher effectiveness is an important variable in the student outcome equation.
Folks can argue ’til they’re blue about which of these factors are more important, but that’s beside the point. We can do a lot of good by keeping a focus on in-school factors, particularly because we can readily change these regardless of which social revolutions are being plotted on comment boards throughout the blogosphere…
Here’s my latest.
John … You seem to labor under the misconception that students on IEPs are completely the responsibility of a particular school. They aren’t. They are the responsibility of the local education agency. So if kids with IEPs are underrepresented in KIPP schools, it’s a structural problem more than a KIPP problem.
That’s what I said. It’s not KIPP’s fault that 40% of my students are on IEPs and that 2/3rds to 3/4ths of my afternoon classes are on IEPs, ELLs, 504s for mental illness, or parole for extreme felonies. The problem is pretending that they teach the “same kids.” The problem is using KIPP to argue that the 5/6ths of my 237 students who aren’t counted under NCLB don’t need services because KIPP shows it just takes no excuses.
And I return.
Why is “gangbangers” not hyphenated early in the post but then hyphenated later on?
Anyway, I am sure you don’t want my take or care since you vowed to “actively ignore me” and totally adorably “mistyped” my name, but here you go:
1) You set up with this illogical beginning, as if MPR claimed KIPP admitted students mid-year by virtue of its terminology (“late arrival”), and you use that to launch your piece. It’s a silly piece of writing.
2) You make this absurd jump:
“Its research tells us nothing about whether the KIPP system of 99 schools could be scaled up.
To do that, we need studies that act like “race track monitors” to make sure that no interest groups get “an unfair advantage” in promoting their preferred agendas.”
Well, neither MPR, nor its research, nor anyone else mentioned anything about “scaling up.” Also, really? How in the world would “race track monitors” assess the possibility of large-scale replication? You write this merely to work in the Miron study. It doesn’t make sense.
3) In the comments, you write “As I recall, the attrition data comes from 19, not 22, of their 99 schools, and two excluded schools are illustrative. They are studying the success stories. Give the other schools you and I have heard about a decade to implement their vision, they may reduce their attrition rate – or they may not.”
Really? Come on. Just read the darn selection methodology and state the facts.
4) Your last question is absolutely ridiculous: “Why not give our neighborhood schools the same chances to help poor kids that we give to KIPP?” That doesn’t even make sense. What are these “chances?” You write a whole piece on basically how the two entities, KIPP and a public school, are different, but then you end with this question that assumes the same things could be done.
John Thompson, well said: “It’s not KIPP’s fault that 40% of my students are on IEPs and that 2/3rds to 3/4ths of my afternoon classes are on IEPs, ELLs, 504s for mental illness, or parole for extreme felonies. The problem is pretending that they teach the ‘same kids.’”
I agree with you: There’s nothing wrong w/ KIPP (or private schools or gifted pull-out programs or any other specialized programs). The problem arises when charter school advocates proclaim in the media that their schools teach ‘the same students’ as the traditional public schools. In fact, charter schools teach different students and are released from many of the constraints placed on public schools (for example, KIPP doesn’t have to accept transfer students mid-year). Charter school advocates should not pretend that they’re teaching the same students (with IEPs and parole officers), under the same constraints, that the public schools have no choice but to deal with every day.
Really can’t stand these “here’s a link to something else, but no I won’t respond to criticism” tactics. These debate strategies are not befitting a self-proclaimed academic, nor are other commenters giving a good-faith attempt at debate when they similarly do this.
John writes: “I was excited to read that Mathematica had analyzed “late arrivals” who attended KIPP. That seemed to be an inspired methodology for determining whether KIPP “creams” by excluding the most difficult-to-educate kids. I thought that KIPP did not admit students after the year began, but if they had a big enough sample to compare the characteristics of KIPP students who arrive in October, for instance, with students who first enter neighboring schools at that time, we would have real evidence!”
The goal posts are moved yet again. Apparently, it is not considered “real evidence” when Mathematica analyzes characteristics of KIPP-bound vs. non-KIPP-bound students along several different variables and finds little difference between the two.
John writes: “Saying that KIPP’s attrition rate is a little better than some of the nation’s worst middle schools’ rate is not a ringing proclamation of success.”
No, but what it does show is that critics who argue that KIPP is forcing out low-achievers at high rates are incorrect. KIPP’s success becomes apparent, rather, when one looks at its achievement data.
John writes: “Mathematica simply confirms that KIPP succeeds greatly with the kids where it succeeds, while indicating that its failure rate with their more difficult-to-educate students seems comparable to the failure rate of the toughest neighborhood schools. Its research tells us nothing about whether the KIPP system of 99 schools could be scaled up.”
This is misleading. That KIPP does better for the majority of students who stick with the program than those students would do in another local school suggests KIPP *is* succeeding with some of the “difficult-to-educate students,” since its students on average start out achieving at lower levels. In other words, the same percentage of attrition at either school may still leave KIPP with more prior low-achievers. It’s silly to suggest KIPP is failing because some of its students drop out, when many of these low-achievers stay and succeed at high levels.
Also, there’s just a slight difference between 99 schools and ~100,000 schools. KIPP could increase tenfold in number without hitting any of the roadblocks you are implying above. That some students drop out of KIPP does not imply that scaling up KIPP schools would diminish their noted average academic achievement. There are still plenty of kids and families who would likely benefit from such growth.
John writes: “And to my knowledge, it has never been claimed that KIPP has been more successful with IEP students diagnosed with conduct disorders or serious emotional disturbances.”
Sure, but this doesn’t imply that they are serving these populations poorly, either. Since the SPED populations of KIPP versus the control group are not all that different (9% and 12%, respectively, per MPR 2011), it seems foolish to imply that KIPP isn’t helping SPED students.
John writes: “By pretending that KIPP serves our most vulnerable students, society is given an excuse for starving alternative services for our most traumatized kids.”
KIPP does serve some of these “vulnerable students.” That KIPP is successful with a large number of them suggests we ought to consider its methods and/or encourage its scaling up so that more students can succeed.
To be clear, none of what I’ve written above implies that we should ignore efforts to increase other “investments for our most damaged children”, and I think that is a given for most of the folks you disagree with. Until these vague investments are enacted (and concurrently afterward), we ought to continue pushing for school-related reform.
John writes in the comments of his blog: “Deformers don’t play fair.”
You trade playground taunts with those who agree with you but refuse to debate your critics? That’s exactly what we need more of in education.
Is there any evidence that KIPP’s “success” is based on more than the longer school day?
It is not right to compare KIPP to the neighborhood school if the additional instructional time is not figured in.