Be Wary of Edu-Tourism!

Yesterday I noted that Finland is seeing declines in reading, math, and science achievement. If you're unfamiliar with education policy debates in the U.S., you might wonder why I was paying special attention to a Nordic nation with a population about the size of South Carolina's.

But if you do pay attention to American education politics, you probably weren't fazed by the Finland coverage. After all, Finland's education policies have received an inordinate amount of attention since the country scored near the top of international achievement tests about a decade ago. "What would Finland do?" became the prompt for a cottage industry of commentators urging us to copy whatever it was that Finland was doing and, as a result, improve our own results.

But this was sloppy thinking. As Pat Wolf pointed out on Twitter, researchers call this "selection on the dependent variable." That is, you can't just look at what the high performers are doing and try to copy them. Making policy prescriptions that way can easily confuse correlation with causation: you can't tell what actually caused an outcome just by cataloguing the activities that happened to accompany it.
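For readers who want to see why this is a trap, here is a minimal, purely illustrative Python sketch (the numbers, variable names, and "practices" are all hypothetical, not drawn from Finland's actual data): each country's score is driven entirely by an unobserved factor, yet the top performer still exhibits plenty of visible practices that a visitor could mistake for the cause.

```python
import random

random.seed(0)

# Hypothetical illustration of "selection on the dependent variable":
# each country's test score is driven by an unobserved factor (say, demographics),
# while its visible "practices" are assigned at random and have no effect at all.
countries = []
for i in range(50):
    unobserved_driver = random.gauss(0, 1)  # the real cause of the outcome
    practices = {p: random.random() < 0.5   # visible policies, chosen at random
                 for p in ("national_curriculum",
                           "no_standardized_tests",
                           "late_school_start")}
    score = 500 + 30 * unobserved_driver + random.gauss(0, 5)
    countries.append({"score": score, "practices": practices})

# "Edu-tourism": look only at the top scorer and copy whatever it happens to do.
top = max(countries, key=lambda c: c["score"])
print("Practices observed at the top performer:", top["practices"])
# Every one of these practices is causally irrelevant by construction,
# but selecting on the outcome makes them look like a recipe for success.
```

Rerun it with a different seed and the "lessons" from the top performer change, even though none of them ever mattered.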

Put another way, were Finland's results caused by its ethnically homogeneous student body, its low teacher turnover and high bar to entry into the teaching profession, its school choice policies, its high-stakes standardized test for high school seniors, its national curriculum, or something else? We don't know! But that didn't stop advocates from championing these ideas, or from arguing that Finland succeeded without some of the reform ideas commonly espoused here in the States.

I'm afraid we're already starting to see the same "edu-tourism" in the wake of the recent NAEP results. Mississippi and DC stood out as two places that bucked the national trends, but it's hard to say what caused those positive results. Instead of visiting those places and looking backward for the practices that supposedly make them special, we should be consulting research on the specific policies they have tried. For example, we should pay much more attention to the empirical evidence on DCPS' teacher evaluation program than to any policy prescriptions read off the NAEP results.

I don't want to bash Finland, but I do hope Finland's recent decline will serve as a cautionary tale. And no, I don't mean we should now try to diagnose why Finland's scores are declining; that would be the exact same mistake in the opposite direction! I mean that we should stop deriving policy prescriptions by blindly copying high performers.

–Guest post by Chad Aldeman

Our Schools Have Lost Focus on the Lowest-Performers

The new NAEP results are out. Here’s your overall summary: They’re mostly bad, with noticeable declines in reading over the last two years.

What stood out to me is that we've lost our focus on the lowest-performing students. Zooming out to look at the last 10 years by performance level, here are the changes in 4th grade reading scores, with the other grades and subjects following (and a quick tally of the widening gaps after the lists):

10th percentile: -7
25th percentile: -2
50th percentile: +2
75th percentile: +3
90th percentile: +2

And here’s the same thing for 8th grade reading:

10th percentile: -6
25th percentile: -3
50th percentile: -1
75th percentile: +1
90th percentile: +4

Here’s the same trend for 4th grade math:

10th percentile: -3
25th percentile: -1
50th percentile: +1
75th percentile: +2
90th percentile: +5

And for 8th grade math:

10th percentile: -5
25th percentile: -4
50th percentile: -2
75th percentile: +1
90th percentile: +4
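To put a number on that divergence, here is a small Python tally of the figures above (my own arithmetic on the listed changes, not an official NAEP statistic): it computes how much the gap between the 90th and 10th percentiles widened in each grade and subject.

```python
# Ten-year changes in NAEP scale scores by percentile, as listed above.
changes = {
    "4th grade reading": {10: -7, 25: -2, 50: +2, 75: +3, 90: +2},
    "8th grade reading": {10: -6, 25: -3, 50: -1, 75: +1, 90: +4},
    "4th grade math":    {10: -3, 25: -1, 50: +1, 75: +2, 90: +5},
    "8th grade math":    {10: -5, 25: -4, 50: -2, 75: +1, 90: +4},
}

for subject, by_pct in changes.items():
    # A positive number means the top and bottom pulled further apart.
    gap_widening = by_pct[90] - by_pct[10]
    print(f"{subject}: 90th-10th percentile gap widened by {gap_widening} points")
```

In every grade and subject, the spread between the 90th and 10th percentiles grew by 8 to 10 points over the decade.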

This didn't use to be true. In the 1990s and 2000s, we saw some signs of gap-closing, or at least broad and shared gains. The 2010s were the opposite: a pretty flat decade in terms of overall achievement, with higher-performing students making some gains and pulling further away from their peers.

Back to work.

–Guest post by Chad Aldeman