Guestpost by Anne Hyslop
The ed policy world has finally agreed on something: there is too much testing. Now it may not win me any Twitter followers, but this consensus makes me nervous. Mostly because it makes hasty, extreme solutions to “over-testing” seem tenable, giving them credibility as a logical response because “this is a crisis.” Is it? Teach Plus has shown that, on average, less than 2 percent of class time is spent on mandated testing. While there are outliers, it looks like the excess is coming from the local level, not state tests. And like my colleague Andy Smarick, I see the virtues in our current testing regime, and the consequences of eliminating it without understanding what could be lost.
So I'm glad that large urban districts and chief state school officers are working together to tackle issues of assessment quantity, and quality, while maintaining a commitment to annual testing. Same goes for the Center for American Progress’ work on “better, fewer, and fairer tests.” All are common-sense responses to the over-testing meme. And given the growing number of voices, especially on the political left, calling for grade-span testing (see: teachers unions, members of Congress, former President Clinton), it is welcome to see a defense of annual testing–with support from Arne Duncan, and even President Obama.
But are they really defending annual tests? On second glance, I’m not so sure. Even the staunchest supporters of grade-span testing, like Randi Weingarten and Linda Darling-Hammond, would support giving students tests each year, just with a caveat: local assessments without consequences, not statewide ones. As Darling-Hammond, Gene Wilhoit, and Linda Pittenger describe in a new brief, statewide, grade-span testing merely serves to “validate” the results of the annual local tests–while eviscerating most meaningful accountability systems in the process (not a coincidence).
In other words, the right question to ask is not, “do you support annual testing?” but rather, “do you support annual statewide testing?” And despite outward appearances, CCSSO’s and CAP’s support is more tenuous. That’s because both seem ready to embrace district flexibility on state tests (read: opt-outs), especially in “districts of innovation.” Their new report “Next-Generation Accountability Systems: An Overview of Current State Policies and Practices” includes multiple examples of district opt-out plans, from New York to Kentucky to New Hampshire, and holds them up as models for the future.
“Districts of innovation” is code for districts that are exploring competency-based learning, or project-based learning, or some other (usually) technology-enabled reform to personalize students’ experiences. All good ideas, in theory. But that’s often what they are: just theories. We don’t actually know if they work yet to improve student outcomes. And in order to find out, we must evaluate them. So let’s take the Darling-Hammond approach and use statewide tests as a “validator” of what’s happening at the local level in one of these innovation hotspots.
Located in Danville, Kentucky, Bate Middle School was profiled by NPR’s Anya Kamenetz this year in a piece originally titled “In Kentucky, Students Succeed Without Tests.” Kamenetz paints the picture of an academic renaissance at Bate, which had been slapped with the “needs improvement” label by the state’s accountability system. This renaissance was possible, the story went, because Bate chose to forgo administering state tests and, instead, tapped into students’ interests with project-based learning and performance-based assessments that were evaluated locally. Except, Bate didn’t get a waiver to skip the standardized tests, as was first reported.
And thank goodness for that, because when you examine the latest data from the Kentucky Department of Education, results at Bate are a little more complicated. The school got solid marks–full credit and close to “distinguished”–in the “program review” component, which looks at instruction and curriculum, formative and summative assessments, professional learning, and leadership (all of which should reflect the project-based learning strategy).
But on the statewide, Common Core-aligned assessments, the outlook was decidedly less rosy. Half of students were at least “proficient” in reading, but only 37% of low-income students met that benchmark. In writing? 36% and 27%, respectively. The results in math were most distressing: only one-third of students were proficient, and just one in five low-income students could say the same. Minority students struggle, too. And all of the results were below state averages.
Proficiency rates are imperfect measures, though, often correlated with student characteristics. For that reason, they aren’t especially useful for diagnosing how Bate is contributing to its students’ learning. Further, this is a school in improvement–it’s reasonable to expect that the proficiency data wouldn’t be great. That’s why it’s so important that Kentucky has annual statewide assessments that can measure student growth. Is Bate, with its project-based learning experiment, accelerating learning more than other schools in the state?
The unfortunate answer is “not really.” Growth is only measured in reading and math, because they are the only statewide tests given annually (it’s not feasible to measure growth using grade-span tests, or the local measures–a big reason to keep annual state tests). In reading, 56% of Bate students were found to be making typical annual growth toward college and career readiness. Put another way, nearly half of students, even with this new project-based approach, were not making typical growth, let alone the high levels Bate needs for its most disadvantaged kids to get on track academically. The math results were, again, worse: only 43% of students were making typical growth, while the majority were not.
I’m not saying Bate should give up its project-based experiment. These data are by no means a thorough program evaluation, and there were other benefits to the reforms, based on Kamenetz’s reporting–like increased student engagement, community involvement, and educator morale. That should not be discounted or overlooked, and other results could improve with time.
What I’m saying is that Bate should not give up statewide annual assessments. Not yet. The only way to know whether “districts of innovation,” like Danville, are working is by examining all the evidence, not just the evidence local officials like. And in this case, it’s pretty clear the local approach is not ready to be “validated.”
Maybe the education field will be ready for a system of district “validated” assessments one day that are both high-quality and comparable to state tests. But that day is not today. And the only way that day will come for these reforms, whether it’s project-based learning or competency-based education, is to show that they work. And that’s going to require statewide annual testing for the foreseeable future. Instead of talking out of both sides of their mouths on testing–for annual tests and for district flexibility–CCSSO and CAP should clarify: we need statewide annual assessments now… and to demonstrate that district flexibility is a worthy alternative in the future.
Anne Hyslop is a senior policy analyst at Bellwether Education.
14 Replies to “Guestpost By Anne Hyslop: Reformers, Annual Testing is not Enough, It Must Be Statewide”
That is quite a lot of education mumbo jumbo. Ms. Hyslop has not spent much time on the ground in an urban district. She is a data diva and seems to know little about students. Having taught in LAUSD, Orleans Parish (as an Orleans Parish Teaching Fellow), Sacramento City Unified (under an incompetent Broad Academy graduate) and, now, in a state prison, I know from experience that the two percent testing time is misleading. This testing program includes hours and hours of test prep, dumbing down of the curriculum and elimination of valuable and enjoyable electives. My own son put it very well in the third grade when he said, “I am so tired of filling in those little circles. I hate school.” He used to love school, but no more. I would rather my son be a lifelong learner than a whiz at standardized tests. Plato put it well when he said, “The most effective kind of education is that a child should play amongst lovely things.”
I think some great points are made in this article. Whether or not a statewide end-of-year test is used for high-stakes accountability, it provides critical value for measuring student growth and program effectiveness. As a former teacher, I wanted all the flexibility to do creative, inspiring, project-based learning with my kids. But I also took the time to make sure those projects were well-aligned to the standards so that at the end of the year we could measure whether kids actually learned what was intended, and grew academically, as a result of my curriculum and instruction. I taught in a subject area with statewide standardized assessments and probably spent one week at the end of the year on review and test prep. The test itself took about an hour. We were able to pace things out nicely in NC with the Course Blueprints provided by the NCDPI. I talk more about this topic of “teaching to the test” and narrowing of curriculum in a couple of blogs:
1. From a student’s perspective: http://blogs.sas.com/content/statelocalgov/2014/05/15/top-rated-value-added-school-extreme-test-prep-or-well-rounded-experience-a-students-perspective/
2. From my perspective:
Kentucky’s “program review” is a great concept, but it’s a self-assessment. And 64% of schools scored 100% last year. And the 34% of schools who didn’t score themselves at 100% know what they need to do next year.
DT … Averaged across all kids in all grades in all schools, the 2 percent figure for time on testing could be correct, but there still could be substantial variation for some kids in some grades in some schools. Beyond that, calling Anne names does not make your case stronger.
Art, do you find “Data Diva” offensive? The point is that many ed reformers who aren’t in the classroom on a daily basis with kids don’t know the damage testing is doing to kids. They mostly know data, not students. It really is making many kids hate school. The test itself is only part of it. When I was a teaching fellow in New Orleans, the principal and consultant actually had us teaching from old LEAP tests. The curriculum was to teach them what was on the last test in hopes it would raise scores. Of course it is not supposed to be that way, but the pressure to improve scores is so great that this sort of response is more common than you think.
DT, the attack on the person rather than on the issue cuts both ways. I can as easily argue that you, having most of your experience in the trenches but having little knowledge or understanding of broader objective student data, miss the forest for the trees. All your experiences are at best reflective of you and the handful of your colleagues, say 10 — or even 100 — people out of 5 million teachers. So please convince me why should I trust such a tiny non-representative sample?
Cut it out. Please.
Begging everyone’s pardon here, but a Data Diva is EXACTLY what this woman is:
She is a policy analyst, for heaven’s sake, with a degree in public policy. She hasn’t spent one minute as a teacher. She has absolutely no business whatsoever talking about public education because she has no idea what she’s talking about.
Interesting no one really answered what DT said, but took offense instead to a pretty mild “name.” Sheesh.
If Anne were claiming that she knew the best ways to teach reading, or the best ways to calm classrooms full of restless children, you might question her authority to speak on those things given her background. But she isn’t talking about things like that at all. She’s talking about education policy and the way it plays out at many levels of the system. We need clear-eyed and tough-minded people to inform us of those policy realities just as we need teachers to inform us of day-to-day life in classrooms. It’s “public” education, after all.
Art–potato, pohtato. She’s weighing in on the merits of testing and how often they should occur and she has no experience on which to base her opinion. In fact, she’s talking about lots of things in here she doesn’t understand, precisely BECAUSE she’s a Data Diva and not a teacher. Her “insights” are not insightful.
Just one example I’ll pull out: She dismisses as insignificant the 2% of the year that the state test supposedly consumes, but because she does not work in a school, or anything resembling a school, she doesn’t understand the amount of prep time that goes into the test, which is just about the entire school year. She doesn’t realize the number of hours that go into getting the test ready–before PARCC, somebody (the staff) had to write in all the names, and all the accommodations. The staff has to write in the teachers’ names, the school code, the district code. The tests have to be counted and signed for–every single session. My school last year had 650 students. But the main thing is that it takes a laser-like focus on the test ALL YEAR. 2% indeed!
In education, we need a lot more input from teachers and a lot less input from “experts” who aren’t.
Mary, you could have told us all that it takes to prep for a test in a school without the ad hominem attack on the author, right?
Think about it.
So, I take it you have nothing to add to the conversation, no responses to my points. Not even one. And she’s still not an expert. And she’s still a Data Diva.
Sigh. (Eye roll).
I think YOU need to think about it.
Actually, Mary, I do have something to add, but I refuse to engage with people who don’t understand when they need to wash their mouth out with soap.
Yikes. No point in continuing this.
Have a nice day.
I believe we need a better testing system for a better quality of education.