During conference season, when you’re rushing from session to session, peer review is something you often hear about in snatches of conversation as you hurry by. “[Professor X] must have reviewed that paper; otherwise it would have been accepted”. Or “I knew getting in at [Journal Y] was a problem because they don’t like [Theory Z]”.
Peer review can have a really big effect on an academic career, because it determines whether your research ever gets published or presented. In theory, the peer review process in the academic world works like this. An author sends an article about their research to an academic conference or to an academic journal. The conference organizer or the journal editor removes the identifying information (such as the author’s name and institutional affiliation) to reduce the possibility of reviewers being biased in their assessment by knowing who wrote the article. Then the article is sent to a couple of reviewers, who read it and say whether they think the article should be accepted or rejected – and then the conference organizer or journal editor decides whether to accept the article to be published or presented.
In most peer review processes, at least in my field, the author doesn’t know who the reviewers are. The journal editor or the conference organizer does, but he or she is expected to assign the article to reviewers who have some expertise in, or familiarity with, the article’s subject. The reviewers’ expertise helps to identify and weed out articles with major problems, like poorly thought out research questions, bad methodology, or weak reasoning. Using multiple reviewers is also supposed to balance out any bias or limitations that might result from relying on the opinion of a single reviewer.
Well, that’s the way it’s supposed to work.
A few weeks ago, researcher and consultant CV Harquail sent out a Tweet about this research article on the peer review process, with a comment to the effect of “Holy crap, how did I miss this one?” When I looked up the article, I had the same reaction – the article was published in 1982, and I’d never heard of it either, but, boy, does it kick some holes in the idea of peer review being unbiased.
The full text of the article is behind a paywall, so if you are unable to read it, I’ll summarize the main points. The authors of the article, Douglas Peters and Stephen Ceci, chose one published article from each of 12 prominent psychology journals that used peer review to choose which articles to publish. Each of these journals had a record of rejecting more than 80% of the articles submitted to them. The articles that Peters and Ceci chose had at least one author affiliated with a prestigious department of psychology.
Peters and Ceci removed the original authors’ names and institutional affiliations from the articles, and substituted fake names and fake institutional affiliations – so that the articles appeared to come from institutions “without meaning or status in psychology”. They also made slight alterations to each article’s title, its abstract (the summary of the article), and its first few paragraphs, to reduce the chances of the article being recognized as a resubmission. Everything else in the articles was left exactly the same.
Then, Peters and Ceci resubmitted the slightly altered articles to the same journals that published the original articles. So what happened? In their own words:
Nine of the 12 articles were not detected (by editors or publishers) as having been previously published…and eight of the nine articles were rejected…We should add that every editor or associate editor included in this sample indicated that he had examined the manuscript and that he concurred with the reviewers’ recommendations.
So, in other words, research that was found to be more than acceptable when it was associated with a well-known institution suddenly became terribly flawed when it was associated with an unknown institution. The reviewers of the nine rejected manuscripts said that the studies reported in the articles had “methodological defects” and that the articles were poorly written – when these articles had already received positive reviews and been published in the same journal. I’m not sure whether this should make us laugh or make us cry.
Peters and Ceci’s article was published in a journal that has a very interesting format. It presents the article and then follows it with a set of commentaries from other researchers, and then allows the article authors to respond to the commentaries. For this article, the commentators were editors of academic journals from a wide range of subjects – and I would highly recommend this set of commentaries as good reading for anyone who’s interested in the whole idea of peer review. Collectively, they are a fascinating debate about how the peer review process actually works and whether it needs to be changed. Peters and Ceci, in their response, acknowledge that there were some methodological limitations to their research – for example, they could have also submitted “low status” papers as “high status” papers, and seen if those were treated any differently – but, nevertheless, they contend that the results of their experiment indicate some serious difficulties with supposedly anonymous peer review.
The results of Peters and Ceci’s research might be different now, especially with Internet resources such as Google Scholar and electronic databases of journal articles that make it very easy to identify part or all of a previously published piece of research. So I searched a couple of journal databases to see if anyone post-1982 had tried to replicate Peters and Ceci’s experiment – and I have to say that I was quite surprised not to find anything. There are many articles reporting studies of inter-rater reliability in peer review processes – in other words, whether supposedly equally qualified reviewers independently looking at the same article come up with the same assessments – and if there is anything consistent in the results of these studies, it’s that inter-rater reliability is low or nonexistent.
But no one that I could find has tried to do what Peters and Ceci did in 1982, and I can’t help but wonder if that too says something – maybe that researchers are afraid to prove or even to talk about something that I would say is not exactly a secret of the peer review process. There’s no way to make peer review absolutely and completely anonymous, but even when the process is designed to be as anonymous as possible, bias can creep into the process for all sorts of reasons. And that’s really unfortunate, if it means that innovative or important research doesn’t get the attention it deserves.