Guest post by Jim Ryan
Thanks to Andy for inviting me to guest blog. I haven’t blogged much at all, so I apologize in advance if I’m lousy at it. I’m a law professor at the University of Virginia, and I write and teach about law and education. I recently finished a book, to be published this week by Oxford University Press, which I may say more about later in the week. For the first couple of days, though, I want to raise some questions that have puzzled me. I’m hoping readers will have answers.
A front page story in the NYT on July 27 described the recent findings of some education economists regarding the impact of good kindergarten teachers on their students over the long haul. The headline of the article says it all: “The Case for $320,000 Kindergarten Teachers.” The researchers estimated that this was the present value of the additional money a full class of kindergarteners taught by a standout teacher would eventually earn over a class taught by a less talented teacher.
The findings have not been peer reviewed, and they may not hold up. But that’s not what’s interesting to me. What I’m wondering is why more social scientists don’t make an effort to translate their findings into terms that resonate with other, non-expert academics (think, say, law professors), policymakers, or the public. The economists who studied the value of kindergarten teachers seem to be following in the footsteps of preschool researchers, who, brilliantly I think, have tried to quantify how much return governments can expect from “investing” in preschool. See, for example, this RAND study about expanding preschool in California.
Yet anyone who regularly reads articles by social scientists will see most findings reported in somewhat arcane and relatively inaccessible terms, like standard deviations or percentile gains over the median, which are difficult for the untutored (including yours truly) to translate into something more meaningful. You know that bigger is better, so a .06 effect is better than a .04 effect, but you (or I, at least) have no real sense of what a .06 effect means in the real world. In another context, I suppose phrases like “statistically significant” or “robust to multiple variations” might be evocative, but in these studies they leave me a little cold. I get writing for other academics, not pandering, maintaining professional standards, being precise, and so on, and I recognize that not all findings can easily be translated into plainer terms. But I bet a lot more could be.
Ultimately, aren’t social scientists who write about education trying to influence public policy? If so, what would be wrong with translating the findings into terms anyone could understand? Instead of talking (just) about a percentile gain over the current median test score, for example, why not talk about gains in terms of months or years of school work? (And, while we’re at it, why not try more often to compare the costs and benefits of different interventions?) Or is a front page story in the NYT a bad thing for the academic credibility of social scientists?