"Least influential of education's most influential information sources."
-- Education Week Research Center
"full of very lively short items and is always on top of the news...He gets extra points for skewering my high school rating system"
-- Jay Mathews, The Washington Post
"a daily dose of information from the education policy world, blended with a shot of attitude and a dash of humor"
-- Education Week
"unexpectedly entertaining"..."tackle[s] a potentially mindfogging subject with cutting clarity... they're reading those mushy, brain-numbing education stories so you don't have to!"
-- Mickey Kaus
"a very smart blog... this is the site to read"
-- Ryan Lizza
"everyone who's anyone reads Eduwonk"
-- Richard Colvin
"designed to cut through the fog and direct specialists and non-specialists alike to the center of the liveliest and most politically relevant debates on the future of our schools"
-- The New Dem Daily
"peppered with smart and witty comments on the education news of the day"
-- Education Gadfly
"don't hate Eduwonk cuz it's so good"
-- Alexander Russo, This Week In Education
"the morning's first stop for education bomb-throwers everywhere"
-- Mike Antonucci, Intercepts
"…the big dog on the ed policy blog-ck…"
-- Michele McLaughlin
"I check Eduwonk several times a day, especially since I cut back on caffeine"
-- Joe Williams
"...one of the few bloggers who isn't completely nuts"
-- Mike Petrilli, Thomas B. Fordham Foundation
"I have just three 'go to' websites: The Texas Legislature, Texas Longhorn sports, and Eduwonk"
-- Sandy Kress
"penetrating analysis in a lively style on a wide range of issues"
-- Walt Gardner
2 Replies to “Hot Data Action!”
Firstly, I’m glad that the report called data systems “an obtainable goal,” not something already ready for use. Without prejudging whether NCLB-type accountability, value-added evaluations, etc. are a good or bad idea, I noticed a pattern: the approaches to longitudinal data that seek to advance accountability are being implemented faster. Again, without prejudging whether they do more good than harm, they obviously have trade-offs and unintended effects that would need to be addressed.
But what about data systems that simply help students, that are complete win-win solutions? How much have we invested in those? There is no downside to tracking middle school achievement to see whether students are ready for high school, yet only 12 states have adopted such systems. What about high school indicators to help kids prepare for college? Again, a technology with no downside has been adopted by only 10 states.
I’m reminded of the proposal to track and reach out to elementary students with severe absenteeism. Again, isn’t that simply a technology for helping children?
Why can’t we show the same commitment to “win-win” solutions as we do for solutions that may produce good things for students but may also hurt them?
Maybe the devotees of data-driven accountability are right: maybe you have to pick fights, and you can’t make an omelet without breaking eggs. But where are their priorities? Have many of them gotten so caught up in fighting teachers and unions that they have forgotten what are the means and what are the ends?
The Data Quality Campaign report overlooks one important data quality factor: whether the tests states administer cap student progress or whether they are “ceilingless” tests. Too many tests under NCLB measure only whether students have achieved state proficiency standards and the extent to which they fall below them. If a bright or high-achieving child begins the school year at the proficient or even advanced level (if such scores are available), those tests have no way of measuring whether that student has made any progress over the school year. While the school will be happy to report that student’s proficiency score at the end of the year, the score could mask the school’s failure to challenge the student or advance his or her learning at all. By requiring no measures of achievement once a child surpasses the proficient or advanced levels, NCLB simply provides no incentive for schools to focus on ensuring that gifted students are challenged to reach their potential. By relying largely on these flawed diagnostic tools, the DQC factors also do little to improve instruction for gifted students.