Where Do Testing and Accountability Go From Here?

My Bellwether colleagues Alex Spurrier, Jenn Schiess, Andy Rotherham, and I released a set of briefs today looking at the past, present, and future of standards-based reform. Those include:

  1. In The Historical Roots and Theory of Change of Modern School Accountability, we review the history and logic behind standards-based reform to recall the foundational goals and rationale for the main strategic levers reformers were trying to pull.
  2. In The Impact of Standards-Based Accountability, we assess the strengths and weaknesses of the ways in which standards-based reform has been operationalized in policy and practice and begin to identify what should be retained and what should evolve.
  3. In Assessment and Accountability in the Wake of COVID-19, we explore what accountability may mean in a global pandemic, as challenges of equity in our education systems are exacerbated and the need to rapidly assess and address those challenges is urgent.

A forthcoming webinar will further explore these topics.

Join us on Monday, July 20th, for a conversation with Jeb Bush, John B. King, Jr., and Carissa Moffat Miller about how we should measure the impact of education systems on students, particularly students of color and low-income students, even as COVID-19 changes schooling dramatically. Register and learn more here.

–Guest post by Chad Aldeman 

#EduFridayFive: The Collaborative for Student Success on Assessment HQ

Earlier this week the Collaborative for Student Success* released Assessment HQ, a new go-to resource for information on state assessments, including data and results from those assessments over time. To learn more about the project, I posed the #EduFridayFive questions to the Collaborative team. Read their answers below:

How would you describe this project in 200 words or less? 

The Collaborative for Student Success launched Assessment HQ to build greater understanding of the role annual assessments play and how they are being used to advance educational equity and improve student achievement across the country. Don’t get us wrong: no one is saying annual assessments are perfect. However, they are an important tool to help educators and policymakers monitor the progress of all students in gaining the knowledge and skills they need over time. They are particularly important for students who are most vulnerable or who historically have been underserved by education systems. We set out to create one place where individuals can find state-by-state student proficiency data, original commentary, resources, and state and national news, all in an easy-to-navigate format.

What would most people miss about this project if they only read the headline? 

A lot of data is gathered from K-12 assessments, but it is not always easy to access, and it is not always clear what the data says. For the first time, student proficiency data for more than half of the states across the country is publicly available online, in one location, for anyone to view and use. Assessment HQ highlights state-reported student performance results in mathematics and English language arts (ELA) for grades 3-8. The new site also allows users to see trends in student proficiency in individual states and to observe the performance of student groups, like African American and Hispanic students. Only by exploring trend data on these students can we ensure they are making real progress.

What compelled you to do this work? 

Outside of the education policy community, it has largely gone unnoticed that the assessment landscape is constantly changing, with states switching vendors, adopting new proficiency calculations, and debating what accountability should look like. That constant change is often not in the best interest of students or educators. We’re hopeful that this platform will help cut through much of the uncertainty around tests by offering a clean, consolidated look at actual state-reported information. Let’s face it: tests are an easy punching bag. This site can contribute to a more informed dialogue around assessments and help avoid situations where the politically expedient choice comes before what’s best for our students.

What would a smart critic say about it, and how would you respond? 

A smart critic would point out that student proficiency data captures only one moment in time. It cannot – and should not – be used in isolation to judge the work being done to help students succeed. Through original commentary from Dale Chu on the Testing 1-2-3 blog, resources from state and national partners, and news coverage, we strive to provide background and nuance about the factors that contribute to assessment choices and results. We have also assembled state assessment data only from states that have kept the same test in place for four years; three years would have been enough to show a trend, but we set a higher bar for ourselves. With the goal of improving student success for every child, we must make sure that assessments are aligned to high standards, are informative to parents and policymakers, and are comparable, meaningful, and actionable.

Other than this project, what are you most excited about right now?

The Top Gun sequel…it’s been too long since Maverick did his thing!

Seriously though, the Collaborative has always focused on its role as a nonpartisan player dedicated to ensuring that students are held to high standards and have the resources and supports necessary for their success. Funding and resources are a perennial topic in education, and we’re currently very interested in states’ moves to report per-pupil spending at the school level rather than just the district level. This provision of the Every Student Succeeds Act hasn’t received a ton of coverage, but it’s a significant step forward in understanding how education dollars are being spent. We’re also following the development of needs assessments as states craft Career and Technical Education plans in coordination with local and regional business leaders.

–Guest post by Chad Aldeman

*Disclosure: The Collaborative is a client of Bellwether’s, although not on this project. 

Non-China Midweek Bonus Post

A few wonky tidbits until you return to your regularly scheduled USA programming:

  - Stacy Childress and James Atwood on what they’re learning at NSVF. Can you guess: what are they learning about growth mindset, and how does it relate to this?
  - Matt Kraft and David Blazar on teacher coaching. Can you guess: what did they find? (Hint: it pertains to scale.)
  - David James taking a shot at professor coaching. Much needed!
  - Jon Baron is all over that scale/evidence question. Can you guess: what does Jon call the 800-pound gorilla?
  - Tom Vander Ark looks at Bridge (disclosure: worked there, love ‘em) for his book, Better Together. What does he find about scale?
  - Patrick Wolf et al. with Do Test Scores Really Matter. Must-read. Pro tip in the tables: ELA gains and high school grad rates modestly predict college grad rates, but math gains don’t. CMO problem: much better at math gains than ELA gains. Hmm.
  - What’s an under-used outcome measure? Net Promoter Score! It gets at what “customers” really think. Just today, two great organizations shared their NPS: Matt Kramer at Wildflower Montessori micro schools, and Jessica Kiessel at Omidyar Network (grantmaking). More, please!
–Guest post by Mike Goldstein