Malcolm Gladwell’s promotional tour for his book David and Goliath is rolling along, but his responses to criticism aren’t getting any more persuasive.
When he appeared on CBS This Morning, he was asked about the contention that he “cherry picks” from published research, and only discusses results that support his points. Here’s his answer:
There are two instances in the book where I rely heavily on the science. On the class size discussion, I use a meta-analysis of 300 studies. That’s not cherry picking. The second case is when I talk about the effect of the three-strikes law in California. I review the entire literature and give all references to it. People use that – I always ask them to give me an example and they never can. A woman wrote an article on the book vis-à-vis my cherry picking in the Columbia Journalism Review, and I emailed her and said, give me an example, and she said, I can’t, I haven’t read your book. I think people just say that as a way of saying, oh, I disagree with what you’re saying.
The study that Gladwell cites in David and Goliath as the “definitive” discussion of class size effects on student performance is Eric Hanushek’s “The Evidence on Class Size”. Here’s the version of the article linked on Hanushek’s website, and here’s another version of the article.
Gladwell is mistaken in characterizing this article as a “meta-analysis”. A meta-analysis is a specific kind of statistical methodology. It combines data or results from multiple sources, usually from published research studies, and re-analyzes the combined information to see if the test results using the entire data set are different from the test results using each set of data on its own. Hanushek’s article looks at the results from 377 tests, as reported in 90 published research articles – but there is no re-analysis of the data used in those tests, or of their results.
Instead, the article categorizes the results of the 377 tests, depending on what type of school the data were collected from, and whether the test results showed a positive, negative, or statistically insignificant relationship between class size and student performance. A more accurate characterization of Hanushek’s study would be as a review or a summary of the test results in the literature – and the difference between that and a meta-analysis is important, because the different methodologies could lead to different results. Someone with Gladwell’s experience in reviewing published research should understand why that’s an important difference.
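To make the distinction concrete, here is an illustrative sketch with entirely made-up numbers (nothing here is from Hanushek's actual data): the same handful of hypothetical study estimates summarized two ways. The tally approach classifies each result as positive, negative, or insignificant, while a fixed-effect meta-analysis pools the estimates themselves and re-tests the combined result — and the two can point in different directions.

```python
# Illustrative sketch (hypothetical numbers, not Hanushek's data).
# Each tuple is (effect estimate, standard error) from one imagined study
# of the effect of smaller classes on test scores.
studies = [(0.20, 0.10), (-0.05, 0.15), (0.10, 0.12), (0.02, 0.20), (0.15, 0.08)]

def vote_count(studies):
    """Tally each result as significantly positive, significantly negative,
    or statistically insignificant (|t| < 1.96) -- a review-style summary."""
    tally = {"positive": 0, "negative": 0, "insignificant": 0}
    for est, se in studies:
        t = est / se
        if t > 1.96:
            tally["positive"] += 1
        elif t < -1.96:
            tally["negative"] += 1
        else:
            tally["insignificant"] += 1
    return tally

def fixed_effect(studies):
    """Fixed-effect meta-analysis: pool the estimates, weighting each by its
    precision (1 / se^2), and compute the standard error of the pooled result."""
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

print(vote_count(studies))                 # 4 of 5 results look insignificant
pooled, se = fixed_effect(studies)
print(f"pooled estimate {pooled:.3f}, se {se:.3f}, t = {pooled / se:.2f}")
```

In this toy example, the tally says four of five results are insignificant, while the pooled estimate is statistically significant — which is exactly why conflating the two methodologies matters.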
And while using a study based on 300+ other studies may not itself be cherry-picking, Gladwell doesn’t acknowledge that Hanushek only chose to compare studies that shared certain characteristics. Hanushek’s methodology is very understandable, because so many variables other than class size could potentially affect student performance. But because of that restriction, the study is not a comprehensive review of every study ever published on class size effects. And it’s also worth noting that Hanushek’s data were collected in the mid-1990s, so the analysis only includes studies published before or in 1994. One could certainly question whether studies conducted that long ago can give a “definitive” answer to whether class size affects student performance in 2013.
I’m not familiar with the literature on the “three strikes” law, so I can’t really address whether Gladwell’s review of it in David and Goliath is truly comprehensive. But reviewing “the entire literature” on any topic is a major undertaking, depending on how the topic is defined and on where you decide to draw the boundaries around that topic. In my experience, even researchers recognized as experts in their field rarely claim that they are familiar with the “entire literature” on their particular topic. They may be familiar with most of the research, but there is always new research being published – even more so now, with online journals as well as print journals – and there is always research that may not be directly related to the topic but which may still give insights into it.
The Columbia Journalism Review article (which Gladwell also mentioned on CBC television a few days later) appears to be this one. This article is not specifically about David and Goliath; it’s a summary of some of the criticisms of Gladwell’s books, including the newest one. Obviously, I don’t know what exactly was said in the email exchange between Gladwell and the article’s author. But from my reading of the CJR article, whether the author had actually read David and Goliath is irrelevant to the article’s credibility or to the author’s credibility (although she does make a rather sloppy error in referring to the “tenants of non-fiction writing” rather than the “tenets”). The article contains four examples of critiques of specific points in Gladwell’s books – critiques by commentators who clearly had read the books they were criticizing. So if Gladwell did indeed ask the article’s author to “give me an example” of cherry-picking, I’m not sure why, since the article already contains four of them.
Gladwell characterizes his books as “provocative” and “conversation starters”. But he seems remarkably reluctant to engage in substantive conversation about challenges to what he says in those books.
RELATED POST: Who’s David, and Who’s Goliath?: Malcolm Gladwell and His Critics
Reblogged this on peakmemory and commented:
The Malcolm Gladwell controversy continues:
I recently watched a long profile of Malcolm Gladwell on the BBC. Once again, when challenged — in particular about that (in)famous article on the New York police fixing the “small things” to fix the bigger ones, which was the basis for The Tipping Point and a notion that was largely debunked a few years later — he said his articles were “conversation starters”. As a longtime New Yorker reader, I do enjoy his articles but can’t say I was ever compelled to buy the books.
Most of the current research I’ve seen coming out on the broken windows hypothesis shows it doesn’t work–if we want to fix large crimes, we should focus on large crimes, not graffiti.
I wonder if he gives the Levitt article that cherry-picks the three-strikes data to avoid showing the pre-existing downward trend in crime the same weight as the criminology article that uncovers Levitt’s malfeasance.
Similarly, there are a lot of differences in methodological quality in the class size literature, though I would assume that Hanushek would do a competent job sorting through that.
The criteria that Hanushek used were that the paper had to have been published in a book or a journal; had to include some measure of students’ family background and at least one measure of resources devoted to schools; and had to describe the “statistical reliability” of the estimate of how resources affected student performance. This paper has a very thoughtful analysis of that methodology, including the observation that Hanushek gave equal weight to all the estimates, which might have an effect on the results:
Click to access 447.pdf
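That equal-weighting observation can be sketched with a toy example (hypothetical numbers, not drawn from any of the studies discussed): suppose one large, precise study finds a small effect while three small, noisy studies find large effects. Averaging the estimates equally and averaging them by precision give very different summaries.

```python
# Toy illustration (made-up numbers) of why weighting estimates matters.
estimates  = [0.05, 0.40, 0.35, 0.45]   # one precise study, three noisy ones
std_errors = [0.02, 0.25, 0.30, 0.28]   # their standard errors

# Equal weighting: every estimate counts the same.
equal_weight = sum(estimates) / len(estimates)

# Precision weighting: each estimate weighted by 1 / se^2, so the
# precise study dominates the summary.
weights = [1 / se**2 for se in std_errors]
precision_weight = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

print(f"equal weighting:     {equal_weight:.3f}")
print(f"precision weighting: {precision_weight:.3f}")
```

The equal-weight average here is driven by the three noisy studies, while the precision-weighted average sits close to the single precise one — so a summary's conclusion can hinge on that choice.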
Oh gee, that Hanushek literature review is ANCIENT. 1997 is ages ago in education research.
The Stock and Watson methods textbook uses class size research as its example throughout the book, building from “wrong” research to “better” research.
AFAIK, we still think the Tennessee STAR experiment is our best estimate on the subject, so we can say something about class size in K-3.