Chartering Confusion

Pretty good look at the advantages and limitations of various strategies for measuring charter school outcomes, from the WSJ.

19 Replies to “Chartering Confusion”

  1. Let's see if I have this right.

    Education reformers are simultaneously arguing that:

    1. We CAN’T compare charter schools because “the prominent studies on charter schools rely on different methodologies—all of which have flaws.”

    but

    2. We CAN compare, as well as hire and fire, teachers based on test scores, despite the fact that various tests rely on different methodologies, often weren't even designed for that purpose, and all also have flaws?

    Just saying.

  2. Actually, in both instances they're saying that we should use all of the information available to make comparisons.

  3. Let’s look at the Stanford study cited in the article. According to the article:

    “Charter-school skeptics have jumped on these results. But a closer look at the study reveals a potential methodological problem. There could be differences between the two pools of students, such as parental involvement or drug use, not accounted for in the study.”

    The EXACT SAME methodological problem would of course apply to any system of teacher ratings or rankings that are based on test scores. Yet we have NEVER ONCE heard of any teacher evaluation system that takes into account the level of parent involvement of their students, much less their drug use.

    If the Stanford study is invalid because it doesn't take into account the level of parental involvement or student drug use, then why wouldn't comparisons of individual teachers based on test scores also be suspect if they don't take into account parental involvement or student drug use?

  4. Kent:

    "Let's see if I have this right."

    Nope, you got it wrong.

    1. Did you even read the article? Or the blog post? Or the quote you mined? The argument is not that we can’t compare charter schools. It’s that the comparisons that have been made need to factor in the advantages and disadvantages of their respective methodologies.

    2. Bad comparison. Different standardized tests do NOT rely on different methodologies; they are different tests, but they are all similarly aligned to their state's standards, and all are typically administered in the same way and for the same reasons (to quantify student achievement). If you mean to say that methods for assessing teacher effectiveness widely differ, you should elaborate: what exactly are these different methodologies?

    "The EXACT SAME methodological problem would of course apply to any system of teacher ratings or rankings that are based on test scores."

    No, not if the teacher evaluation system controlled for past performance of each individual student. It is implied that the Stanford study did not do this; I couldn’t find the paper referenced to confirm.

    "Yet we have NEVER ONCE heard of any teacher evaluation system that takes into account the level of parent involvement of their students, much less their drug use."

    That’s because we don’t need to: VA methodology, for example, normalizes yearly achievement to past years. If there is a biasing effect from home life on achievement, it can be accounted for.
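    To make that concrete, here is a minimal sketch of the gain-score idea in Python; the data, column names, and one-variable model are hypothetical and far simpler than any production VA model, but they show how a student's own prior score becomes the baseline:

    ```python
    # Toy value-added calculation on made-up data: predict each student's
    # current score from their own prior-year score, then credit (or debit)
    # the teacher with the average residual. Stable background factors are
    # largely absorbed into the prior-score baseline.
    import numpy as np
    import pandas as pd

    scores = pd.DataFrame({
        "teacher":       ["T1", "T1", "T1", "T2", "T2", "T2"],
        "prior_score":   [40.0, 55.0, 70.0, 45.0, 60.0, 75.0],
        "current_score": [48.0, 61.0, 78.0, 46.0, 58.0, 73.0],
    })

    # Degree-1 polyfit returns (slope, intercept).
    slope, intercept = np.polyfit(scores["prior_score"], scores["current_score"], 1)
    scores["residual"] = scores["current_score"] - (intercept + slope * scores["prior_score"])

    # Mean residual per teacher is the toy value-added estimate.
    print(scores.groupby("teacher")["residual"].mean())
    ```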

    "If the Stanford study is invalid because it doesn't take into account the level of parental involvement or student drug use, then why wouldn't comparisons of individual teachers based on test scores also be suspect if they don't take into account parental involvement or student drug use?"

    Stop burning straw men. The article did not call any of the studies “invalid”. The point was to highlight the advantages and disadvantages of each.

  5. Chris:

    To quote the second and third paragraphs of the article:

    The chief explanation for the lack of consensus is that the prominent studies on charter schools rely on different methodologies—all of which have flaws.

    Education researchers face a big challenge: how to separate the results of charter schools’ educational techniques from the quality and motivation of the students themselves. So far, scholars have been only partially successful at making this distinction, other education experts say.

    Comparisons between teachers within a school and comparisons between schools are doing exactly the same thing. They are using standardized student test scores to measure and compare student achievement. In fact, any statistician will tell you that it should be much easier to use test scores to compare schools than to compare the teachers within a school because the sample size will be much larger and it is much easier to control for variables. And one doesn’t have to compare every teacher in every class at every grade level to make valid comparisons between schools.

    Yet even though it should be easier to compare schools using standardized test scores, and even though researchers have apparently spent an extraordinary amount of time and money attempting to do so, we discover that all the methodologies used "have flaws." We also learn that scholars have been only "partially successful" at separating the charter schools' educational techniques from the quality and motivation of the students themselves. And these are studies that are no doubt the result of thousands of hours of research time and rigorous peer review. In fact, the title of the article itself says "Studies that grade charter schools rely on shaky math."

    Yet teachers are being asked to accept the use of those same test scores with even shakier methodologies to be used to determine their merit pay and perhaps whether or not they even retain their jobs.

    How can you claim that there is no methodology involved in using test scores for teacher evaluations? I teach at the HS level, so that is mainly what I am familiar with. Does the VA system you reference control for class size, ethnicity, language, socioeconomic status, drug use, parental involvement, special education and learning disabilities, and a host of other variables that affect test score outcomes? That is where the methodology lies. If it is as easy as you say to "normalize yearly achievement to past years" to wipe away those issues, then why is it so difficult to make comparisons between schools? Why is the math "shaky" and all the methodologies "flawed"? They are doing exactly the same thing: making comparisons based on test scores.

    I was merely pointing out the irony. I’m astonished that you don’t see it.

    I teach in one of the highest achieving school districts in Texas, and my own students pass the state's science TAKS test at close to 100%. In fact, the only students I've had fail the test in the past 2 years have been two recent immigrants from Central America with poor language skills and several special education students who had absolutely no business even taking the test. So it's not like I'm really concerned about being evaluated based on my kids' test scores. Yet I am growing concerned with the increasing emphasis on test scores at all grades, down to even K and pre-K, and the distorting effect it is having on curriculum and instruction. And I sometimes wonder if I shouldn't pull my own kids out of the local schools and send them to private schools just to get them out of that rat race.

  6. Kent:

    “Comparisons between teachers within a school and comparisons between schools are doing exactly the same thing.”

    No, they’re not. You’re incorrectly assuming that the presented methodologies for evaluating charter schools are the same as those for evaluating teachers. I’ve explained why this is incorrect above (and further below).

    “In fact, any statistician will tell you that it should be much easier to use test scores to compare schools than to compare the teachers within a school because the sample size will be much larger and it is much easier to control for variables. ”

    This is avoiding the issue that critics had with the Stanford study: despite there being a larger number of subjects, there seemed to be no methods in place for controlling for individual student backgrounds. Controlling for demographics wouldn't necessarily control for all variables.

    “And one doesn’t have to compare every teacher in every class at every grade level to make valid comparisons between schools.”

    This renders your initial point, about there being a larger number of subjects, moot. How would you know which student/teacher data was most representative of each school? Wouldn't yours be a more problematic approach?

    “we discover that all the methodologies used “have flaws.””

    Your use of quotes here is humorous. Are you implying there are methodologies out there that *don’t* have flaws? You should probably look a bit more into how science works.

    “Yet teachers are being asked to accept the use of those same test scores with even shakier methodologies to be used to determine their merit pay and perhaps whether or not they even retain their jobs.”

    No, they’re being asked to accept that a more objective, quantitative aspect that defines their classroom progress will be included in assessing their effectiveness. Yes, it contains error, as do the current systems of evaluation, as do evaluations in all other job fields, as will any other possible system imaginable.

    “How can you claim that there is no methodology involved in using test scores for teacher evaluations?”

    I didn’t. I asked you to elaborate on your point.

    “Does the VA system you reference control for class size, ethnicity, language, socioeconomic status, drug use, parental involvement, special education and learning disabilities, and a host of other variables that affect test score outcomes? ”

    Yes: by controlling for past performance of each individual student. This is probably the best way to control for all of those factors you’ve listed. It is definitely the only way so far.

    "If it is as easy as you say to "normalize yearly achievement to past years" to wipe away those issues, then why is it so difficult to make comparisons between schools?"

    This should be rather evident. How would you go about controlling for the past achievement of students at a school as a group? You might try to control for past performance of the student body as a whole, but therein lies the problem: what about students who haven't been at that school longer than a year? What about students who leave? Each year of progress you measured would encompass a different sample of students.

    This is different from the situation for teachers, where past performance data for each individual student could be pulled up and analyzed. Doing the same for each and every student of each and every school that was selected for the study? Not going to happen.
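    To sketch the churn problem (with made-up enrollment data; none of this comes from the study itself):

    ```python
    # Hypothetical enrollment records: only students at the same school in
    # consecutive years yield a clean year-over-year gain, so the usable
    # school-level sample shifts (and shrinks) every year.
    import pandas as pd

    enrollment = pd.DataFrame({
        "student":     ["a", "b", "c", "d", "e"],
        "school_2009": ["S1", "S1", "S1", "S1", None],  # "e" enrolled later
        "school_2010": ["S1", "S1", "S2", None, "S1"],  # "c" transferred, "d" left
    })

    stayers = enrollment[
        enrollment["school_2009"].notna()
        & (enrollment["school_2009"] == enrollment["school_2010"])
    ]
    print(stayers["student"].tolist())  # ['a', 'b']: the comparable sample shrinks
    ```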

    “I was merely pointing out the irony. I’m astonished that you don’t see it.”

    There’s a lot more I’m currently astonished by.

    "So it's not like I'm really concerned about being evaluated based on my kids' test scores."

    If your district utilizes VA, you probably should be.

    “Yet I am growing concerned with the increasing emphasis on test scores at all grades down to even K and pre-K and the distorting effect it is having on curriculum and instruction.”

    Your concern is duly noted. Now, do you have arguments you can give for why an “increasing emphasis on test scores” is problematic?

  7. Chris:

    I teach at a high school with about 2100 students. It is fairly easy to evaluate my school's overall performance by using several representative benchmark tests that all students take. 9th grade math, 10th grade English, 11th grade social studies, etc. Not every kid shows up for every test but that would still provide a sample size of about 500 kids per test. Most charter schools are smaller, but one could still use the same benchmark tests and incorporate all the relevant demographic data into the analysis as appropriate. Just as it is possible to drill down to individual student demographic data or past performance data at the individual teacher level as you suggest, it is equally possible to do so at the school level. To the extent that student scores are tagged with any demographic information or past performance information, those data can be accounted for at any level you want to do the analysis (teacher, school, district, city, state, etc.). It is simply false to claim as you do that we can take these factors into account at the individual teacher level but not the school level. The data queries would be exactly the same, just with a larger student group. If you can measure the year-to-year test score gains for a particular teacher then you can just as easily do it for an entire school with the same data query. All the issues you cite related to students who transfer in and out or aren't present during a whole school year show up in the individual teacher data as well.
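    To sketch what I mean (with hypothetical column names, not any real district's database), once each student record carries a year-to-year gain, the only thing that changes between a teacher-level and a school-level analysis is the grouping key:

    ```python
    # Made-up per-student records with precomputed year-to-year gains.
    # The aggregation is identical at either level; only the grouping
    # column changes.
    import pandas as pd

    df = pd.DataFrame({
        "student": ["a", "b", "c", "d"],
        "teacher": ["T1", "T1", "T2", "T2"],
        "school":  ["S1", "S1", "S2", "S2"],
        "gain":    [8.0, 5.0, 1.0, -2.0],  # current score minus prior score
    })

    print(df.groupby("teacher")["gain"].mean())  # teacher-level comparison
    print(df.groupby("school")["gain"].mean())   # same query at the school level
    ```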

    The discussion in the original article suggests that there are a host of additional intangible factors that aren't contained within student test score databases. Factors such as "parental involvement" and "drug use," or the "quality" and "motivation" of the students themselves, to cite four intangible factors mentioned in the article. And these intangible factors that aren't collected as part of student test score data are what make comparing schools a difficult task and such a "big challenge."

    I have no problem with the conclusion of the article, and in fact I agree with it. It is extremely difficult to distinguish between results that are due to educational technique or teachers and results that are due to the quality and motivation of the students themselves. Where you and I appear to differ is that you believe that this problem can be magically wiped away for individual teacher evaluations by "normalizing student achievement to past years." But that simply isn't the case, and in fact it is much more difficult to do at the individual teacher level than at the school level.

    Let's look at my own situation which is probably typical of most high school teachers. I teach 11th grade physics and 12th grade aquatic science (an elective). None of my students have had any physics or aquatic science before walking into my room in August. The 11th grade physics students took chemistry as sophomores and biology as freshmen. They had a unit of physical science as 8th graders if they were in my district 3 years earlier. But that is it. So what past year's data should we use to normalize my data against? The state gives a general science test in 10th grade on which 8 questions out of 55 are about physics, and very basic 8th grade physics at that. Or do we use their grade in chemistry or their grade on the chemistry final? Or maybe their score on the 10th grade math test? The fact is, there is simply no standardized measure of what my students know about physics when they walk into my classroom because for the most part they don't know any physics. At least not any high-school level physics. The same goes for my 12th grade aquatic science students. Do we use their 11th grade physics scores as a starting benchmark? One could invent some sort of past performance metric based on grades and scores in other classes, which would be about as useful as just looking at the student's GPA. But that most certainly isn't going to account for all those intangible factors such as parental involvement, drug use, student motivation, etc. that charter school analysts say are very difficult to analyze.

    We both seem to agree that this sort of rigorous data analysis is very difficult. Where we differ is that you seem to believe that it is more easily done at the individual teacher level than at the school-by-school level, whereas I would argue the opposite.

    As for your last question, about why an increasing emphasis on test scores is problematic: Take my own 2nd grade daughter. Her homework over the past 3 months has consisted almost entirely of math and English TAKS drilling worksheets designed to give her lots and lots of practice on the basic skills that she will face on the standardized tests. They are boring and way beneath her. She has gotten to hate her homework and I don't blame her. There is no higher-level inquiry at all. And it is all driven by her school's obsession with its test scores. I shudder to think how much worse it will get if, in the future, her teacher's pay and job security also depend on her math and English test scores.

  8. Kent:

    “It is fairly easy to evaluate my school’s overall performance by using several representative benchmark tests that all students take. 9th grade math, 10th grade English, 11th grade social studies, etc. Not every kid shows up for every test but that would still provide a sample size of about 500 kids per test.”

    You’re still talking past me. I did not disagree with this sentiment the first time you brought it up. I’m saying that the particular issue raised in the article concerns the validity of comparing schools with different pools of students.

    Taking a sampling of a school's students' progress over a year gives a general indicator of the performance of the school, but there is error inherent in this analysis even when comparing similar schools. The article suggests the error is increased when comparing schools with different student groups, despite being matched by demographics. If the two pools of students are different, as argued, it would be more challenging to find samples from different schools that were not only representative but had comparable backgrounds.

    “Most charter schools are smaller, but one could still use the same benchmark tests and incorporate all the relevant demographic data into the analysis as appropriate. ”

    Not if the demographic data didn’t include the factors that critics were citing.

    “Just as it is possible to drill down to individual student demographic data or past performance data at the individual teacher level as you suggest, it is equally possible to do so at the school level.”

    This is exactly what the article noted as the study's potential methodological pitfall: there wasn't any such school-level past-performance normalization. Your initial arguments incorrectly equated the methodology for comparing schools as described in the article (without past-performance normalization) with the methodologies for evaluating teachers. At least we've now cleared that up.

    “It is simply false to claim as you do that we can take these factors into account at the individual teacher level but not the school level.”

    I suggested it’s unlikely to happen, not that we can’t do it. If you can’t remember what I’m arguing when you write your replies, you should try quoting me before you give your response.

    There are going to be more obstacles associated with pulling past performance data for each and every student of every school that the study included, versus parsing the data of 30-odd kids for a teacher. We can discuss these obstacles further if you’d like.

    The biggest point I’m making is that such a strategy was NOT undertaken for the charter/traditional school comparison study. That was what the article noted. That was what made your initial comparisons above FALSE. We can argue all day about what the study should have done, but the point of the article was to highlight the pros/cons of the methodologies as they were implemented.

    "Where you and I appear to differ is that you believe that this problem can be magically wiped away for individual teacher evaluations by "normalizing student achievement to past years." But that simply isn't the case."

    It's not really magic, first of all. And I'm still waiting for some good arguments for why "that simply isn't the case".

    “Let’s look at my own situation which is probably typical of most high school teachers.”

    Correction: typical of certain high school teachers who happen to teach electives and courses that, in a district like mine, are not currently tested. Put down that broad brush, please.

    “The fact is, there is simply no standardized measure of what my students know about physics when they walk into my classroom because for the most part they don’t know any physics.”

    And there needn’t be. It is absurd to argue that there must have been a standardized pretest given to all students for all subjects that they might take later in school to accurately measure past performance. There will be new ideas/concepts/skills taught in EVERY new class a student takes, and the average student will likely know NONE OF IT beforehand. Does that imply we can’t EVER measure past performance accurately?

    Ironically, this kind of argument also advances the idea of instituting additional standardized tests to include more teachers in the VAM evaluation process and to gather more data, by testing at additional time points, for example. But I don’t think that’s what you were intending.

    “Take my own 2nd grade daughter. Her homework over the past 3 months has consisted almost entirely of math and English TAKS drilling worksheets designed to give her lots and lots of practice on the basic skills that she will face on the standardized tests. They are boring and way beneath her.”

    And it takes some gall to use this experience as supposed evidence of the problems inherent in standardized testing, instead of helping you see what’s actually happening: your daughter likely has a bad teacher.

    Worksheets are boring, but are you against the basic skills that your daughter is supposed to be learning? If your daughter needed additional practice on these skills, wouldn’t focusing on these skills in the form of homework assignments be helpful? If you think the worksheets are beneath your daughter, have you talked with the teacher about giving her more challenging assignments? Do you know if all they do in class are additional worksheets, as well, and if so have you approached the teacher about this?

    Before you frame your arguments around how testing makes teachers drill, drill, drill(!), realize that:

    1) Testing doesn’t make a teacher do anything. He or she is in charge of the learning in his or her classroom.

    2) Practice makes perfect, but excessive, unthinking practice in the form of countless worksheets or reliance on memorization is likely going to be counterproductive, even when measuring student achievement by test scores.

    3) Standardized tests evaluate student understanding of important concepts in the subject. Doing well on them should not itself be an indication of a dumbing down of a curriculum.

  9. “…our results indicate that professors who excel at promoting contemporaneous student achievement [that is, who do well at what Rhee and Kamras would call ‘value-added scores’], on average, harm the subsequent performance of their students in more advanced classes.

    “Academic rank, teaching experience, and terminal degree status of professors are negatively correlated with contemporaneous value-added but positively correlated with follow-on course value-added. Hence, students of less experienced instructors who do not possess a doctorate perform significantly better in the contemporaneous course but perform worse in the follow-on related curriculum.

    “Student evaluations are positively correlated with contemporaneous professor value-added and negatively correlated with follow-on student achievement. That is, students appear to reward higher grades in the introductory course but punish professors who increase deep learning (introductory course professor value-added in follow-on courses). Since many U.S. colleges and universities use student evaluations as a measurement of teaching quality for academic promotion and tenure decisions, this latter finding draws into question the value and accuracy of this practice.

    "Similar to elementary and secondary school teachers, who often have advance knowledge of assessment content in high-stakes testing systems, all professors teaching a given course at USAFA have an advance copy of the exam before it is given. Hence, educators in both settings must choose how much time to allocate to tasks that have great value for raising current scores but may have little value for lasting knowledge.

    “Using our various measures of quality to rank-order professors leads to profoundly different results.”

    “the correlation between introductory calculus professor value-added in the introductory and follow-on courses is negative, r=-0.68. Students appear to reward contemporaneous course value-added, r=+0.36, but punish deep learning, r=-0.31.”

    http://www.journals.uchicago.edu/doi/pdf/10.1086/653808

  10. Tedconsumer:

    Yes, those are a few paragraphs from the paper that you cited. Would you perhaps like to tie an argument to them? Maybe one concerning why a study on college professors and an alternate version of VA (which modeled past performance by such fine variables as the SATs and high school GPA) is being referenced in a thread about charter schools? Or how about one that explains why you are implying that a paper on student achievement at the Air Force Academy has valid takeaway messages for Rhee and DCPS?

  11. "Or how about one that explains why you are implying that a paper on student achievement at the Air Force Academy has valid takeaway messages for Rhee and DCPS?"

    You seem to have an obsession with Michelle Rhee.
    Por que?

  12. "Before you frame your arguments around how testing makes teachers drill, drill, drill(!), realize that:

    1) Testing doesn't make a teacher do anything. He or she is in charge of the learning in his or her classroom.

    2) Practice makes perfect, but excessive, unthinking practice in the form of countless worksheets or reliance on memorization is likely going to be counterproductive, even when measuring student achievement by test scores.

    3) Standardized tests evaluate student understanding of important concepts in the subject. Doing well on them should not itself be an indication of a dumbing down of a curriculum.”

    I love how you and Andy push incentives, incentives, incentives, then turn around and say that the testing and accountability system does not provide teachers an incentive to teach to the test.

    Nice thinking there. I guess being logical was not on the standardized tests that assessed your performance.

    "Taking a sampling of a school's students' progress over a year gives a general indicator of the performance of the school, but there is error inherent in this analysis even when comparing similar schools. The article suggests the error is increased when comparing schools with different student groups, despite being matched by demographics. If the two pools of students are different, as argued, it would be more challenging to find samples from different schools that were not only representative but had comparable backgrounds."

    If this is true, then the same holds for groups of students taught by different teachers. Which would make VAM more prone to error and increase the odds of a teacher being inaccurately labeled good or bad.

    So, which is it, Chris? Seems like you twist the facts to suit your conclusions. Just like the rest of the ed deformers.

  14. “There are going to be more obstacles associated with pulling past performance data for each and every student of every school that the study included, versus parsing the data of 30-odd kids for a teacher. We can discuss these obstacles further if you’d like.”

    Wow, that was a stupid comment. To do value-added right, you have to collect data as far back as possible, which is not an easy task. And to do so for thousands of teachers in hundreds of schools across multiple years of schedule changes, student mobility, etc. is an enormous task. It's fraught with error, but you won't admit this.

    Any data matching at the school level is far, far easier than at the teacher level.

  15. I read the article and Kent is correct. The same “problems” in the CREDO study are simply magnified in a VAM analysis.

    This was the pint of the SchoolFinance101 post on kids who don't give a crap. Yet, Chris couldn't understand the post.

    Some teachers get kids who are highly motivated and others do not. And these kids are not randomly distributed. Without controlling for this, any VAM analysis has error. And the smaller the sample size, the larger the error. In fact, the error for a VAM analysis would be larger than for the school study.
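    To put rough numbers on the sample-size point (an illustration with an assumed spread, not data from any study): the standard error of a mean gain shrinks with the square root of the group size, so a teacher's roughly 30 students give a far noisier estimate than a school's several hundred.

    ```python
    # Standard error of a mean scales as sigma / sqrt(n). With an assumed
    # spread of individual score gains, a teacher-sized group of ~30 is
    # much noisier than a school-sized group of ~500, all else equal.
    import math

    sigma = 15.0  # assumed standard deviation of individual gains
    for n in (30, 500):
        print(n, round(sigma / math.sqrt(n), 2))  # 30 -> 2.74, 500 -> 0.67
    ```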

    But don't burst Chris' little happy bubble. Everything has to support his pre-conceived conclusions.

  16. Tedconsumer Says:
    November 24th, 2010 at 7:41 am
    Or how about one that explains why you are implying that a paper on student achievement at the Air Force Academy has valid takeaway messages for Rhee and DCPS?

    You seem to have an obsession with Michelle Rhee.
    Por que?

    Tedconsumer,
    In your extract, you mention (Jason) Kamras. He was hired by Michelle Rhee to develop and run IMPACT, the DCPS teacher evaluation system. He knows the value of VAM because when he taught at Sousa Middle School in DCPS, the school's math scores dropped, but VAM would have shown that the students he taught increased their performance that year. Since his students started so low, especially compared to the year before, their individual gains, while great, were not enough to raise or keep steady the NCLB scores from the year before.

    So, I took your comment about Kamras as an indirect attack upon Michelle Rhee.
    A reasonable assumption, I believe.

    First off, the above post is not from me. I don't need, want, or appreciate anyone trying to talk for me or pretend to be me.

    *****

    Tedconsumer:

    “You seem to have an obsession with Michelle Rhee.
    Por que?”

    Awesome argument! It's almost like you now have a point to go with citing an irrelevant study! I need to try that sometime.

    *****

    Anti-Chris:

    1) How nice to see you again since the last thread you abandoned!

    https://www.eduwonk.com/2010/11/adding-value-3.html#comment-213548

    2) “I love how you and Andy push incentives, incentives, incentives, then turn around and say that the testing and accountability system does not provide teachers an incentive to teach to the test.”

    The incentives are to promote good teaching strategies; the onus is on the teacher to correctly figure out what is entailed by such strategies. Furthermore, you need to define what you mean by “teaching to the test”:

    If you mean "drilling" like the example given above by Kent, then the teacher is at fault here for assuming that such mundane activities over the course of months will lead to higher test results.

    If you mean “teaching to the standards”, then yes, we’ve encouraged teachers to teach a large breadth of important topics related to their subject, all with a fair amount of depth. The horror.

    3) “If this is true, then the same holds for groups of students taught by different teachers. Which would make VAM more prone to error and increase the odds of a teacher being inaccurately labeled good or bad.”

    You have terrible reading comprehension.

    As I already said, the methodology for grading a school (as given by Kent as an example) is NOT the same as the methodology used for teacher VA. There was no kind of past-performance normalization for individual students in the former; there is in the latter.

    4) "To do value-added right, you have to collect data as far back as possible, which is not an easy task. And to do so for thousands of teachers in hundreds of schools across multiple years of schedule changes, student mobility, etc. is an enormous task. It's fraught with error, but you won't admit this."

    …which is why I was implying this as a reason that the study didn’t use this method…

    5) “Any data matching at the school level is far, far easier than at the teacher level.”

    Which is why VAM between schools contains higher amounts of error, right? Hmm..

    6) “I read the article”

    Took you long enough.

    7) "This was the pint of the SchoolFinance101 post on kids who don't give a crap. Yet, Chris couldn't understand the post."

    I didn’t know they were handing out pints at SchoolFinance101. Was it happy hour?

    And I understood the post, and even had counterarguments I could have made, but you apparently did not realize that I was addressing YOU and YOUR arguments in that thread, not some other unknown person who was not participating in this forum. If you recall, you lost the debate badly in that thread, and (finally) got called out by someone else for a change.

    8) "Some teachers get kids who are highly motivated and others do not. And these kids are not randomly distributed. Without controlling for this, any VAM analysis has error."

    The difference is that VAM models the past performance of each non-randomly assigned student. The study cited in the article DID NOT DO THIS. This is like the tenth time I've had to write this.

  18. You clearly don't understand jack about what "teaching to the test" means and its implications. Go read Koretz's book; I told you to before, but you can't be bothered to educate yourself about the issue. Most state tests don't cover the full curriculum. Teachers know this and teach what is on the test most often, thus reducing the breadth of the curriculum. Teachers also do drill and kill, teach how to get the right answer, etc. All of which makes it nearly impossible to accurately assess what students know and can do. When teachers are given kids many grades below grade level and then administrators and policymakers demand that the kids reach grade level in a year, teachers do whatever they think will achieve success. It's a broken system, yet once again, you blame teachers. You always do. You clearly don't understand how incentives play out in real classrooms in real schools.

    4) "To do value-added right, you have to collect data as far back as possible, which is not an easy task. And to do so for thousands of teachers in hundreds of schools across multiple years of schedule changes, student mobility, etc. is an enormous task. It's fraught with error, but you won't admit this."

    …which is why I was implying this as a reason that the study didn’t use this method…

    Wow, are you stupid. You totally missed the point.

    5) “Any data matching at the school level is far, far easier than at the teacher level.”

    Which is why VAM between schools contains higher amounts of error, right? Hmm..

    No, it doesn't. Again, your stupidity about VAM is showing. The error is greater at the school level. How many times do I have to tell you this??? How stupid are you anyway???

    RE: SchoolFinance101. You never did address that post. If you could read, you would know the blog is written by Bruce Baker, one of the most prominent school finance experts in the nation and someone who has forgotten more about VAM than you have ever known. You clearly never read his post or were simply too frickin' stupid to understand it.

    I lost that debate according to you, but only in your own delusional mind. Clearly you are a delusional idiot living in your parents' basement who tries to increase his sense of worth by coming on this blog, saying stupid crap, and pretending to know more than other people.

    I stopped replying to your posts because there is no point in continuing. You can't read or understand, you don't care to learn anything, you obviously don't know anything about research, and you are an obnoxious ignoramus. And I have RESEARCH to complete, so I don't spend hours a day on a blog like someone without a job who lives off their parents and wanks off to eduwonk.

  19. In the other thread (“Not Wired!”), as has happened before, the Chris Smyr @6:09pm is not I. We’re grasping at straws, folks, if this is the new way to try and marginalize me.

    And while we’re on the topic of sockpuppeting and other adolescent debate tactics:

    You could say I had a hunch, but his latest post solves the mystery: “The Anti-Chris” is a sockpuppet of the user who posted before as “Billy Bob”.

    Compare Anti-Chris’s latest rants to some other memorable screeds we’ve seen from Billy:

    https://www.eduwonk.com/2010/11/good-reading-8.html#comment-212813

    https://www.eduwonk.com/2010/09/rhee-assessing-2.html#comment-210432

    https://www.eduwonk.com/2010/08/whole-lotta-news.html#comment-209142

    Billy Bob:

    You create a sockpuppet with a handle a 4th grader would come up with, accuse me of having no life while posting a long-winded reply on Thanksgiving evening, and continue to vaguely reference your anonymous research on these forums while refusing to give citations, and I'm the pathetic one?

    "Most state tests don't cover the full curriculum. Teachers know this and teach what is on the test most often, thus reducing the breadth of the curriculum."

    Give a citation for the claim that most state tests don't cover the full curriculum. Or don't, as this is a moot point: the tests all focus solely on standards, and even were they to miss some, ALL of the standards tested are still highly important to content understanding.

    “Teachers also do drill and kill, teach how to get the right answer, etc.”

    Teaching how to get the right answers on a test is different from drilling the content, so try to be clearer in your future rants. Teaching how to differentiate right answers from distractors tends to require background understanding of the subject to be effective, and it can often be used to help further teach the content (as daily assessments, for example). Such strategies can certainly be used to clear up misunderstandings about subject content, if done correctly.

    “When teachers are given kids many grades below grade level and then administrators and policymakers demand that the kids reach grade level in a year”

    Who are these policymakers and admin that are specifically demanding this? Give names and citations.

    “teachers do whatever they think will achieve success. ”

    That doesn't imply that whatever they might try is a good strategy, nor do these bad choices of teaching strategy imply that the tests are at fault for teachers choosing them.

    "It's a broken system, yet once again, you blame teachers."

    Only in your wacky worldview is it not a teacher's fault for choosing useless and mundane curricula in the failed hope of raising scores (which they likely won't), but rather the fault of the big, bad reform movement that is apparently making them do all sorts of terrible, terrible things.

    “Which is why VAM between schools contains higher amounts of error, right? Hmm..

    No, it doesn't. Again, your stupidity about VAM is showing. The error is greater at the school level."

    Isn’t that what I just said?

    "RE: SchoolFinance101. You never did address that post."

    I didn’t need to: you linked to a blog entry that was irrelevant to what you were arguing. Any anti-VAM blog entry you find does not magically become relevant to every discussion concerning VA.

    "Bruce Baker, […] someone who has forgotten more about VAM than you have ever known."

    lol

    “I lost that debate according to you, but only in your own delusional mind.”

    No, you lost it because you relied on baseless accusations and linking elsewhere to obscure what was being discussed.

    "I stopped replying to your posts [but will continue to write here about how I've stopped replying!]"

    Alright, Anti-Chris Billy Bob, get it all out. Feel better?
