The practice of pre-publication peer-review of scholarly papers has recently come under attack from a wide variety of sources, ranging from bloggers to The Scientist to The New York Times. Nearly every discussion of peer-review refers to it as a “burden,” and that burden is often described as “overwhelming.”
I’ve always thought of peer-review as a tremendously efficient bargain (review a small number of papers and get back the entire set of literature that’s been filtered and scrutinized at the same level).
How overwhelming is the burden of peer-review, and does the proposed solution of post-publication review offer any relief?
The Scientist quotes UCSF biologist Keith Yamamoto, as just one example of the stated burden:
“The culture of having to publish means the burden of papers is just enormous,” Yamamoto says. And the burden of reviewing this glut of papers goes almost entirely unrewarded.
It’s tempting to immediately dismiss the recent set of “advertorials” in The Scientist, conveniently timed as they are with the announcement of the magazine’s new partnership with the post-publication review service Faculty of 1000. But even taken at face value, little data is offered to support the concept of “overburdening.” Another Scientist article merely states that more articles are submitted each year, but neglects to mention that the population of scientists, and hence potential reviewers, is also growing.
How big of a problem is peer-review for most scientists?
A recent study suggests that the “unpaid non-cash costs of peer review” undertaken by academics work out to £1.9 billion. That seems like a lot of money, but when one amortizes it across the total number of working scientists (the best estimate I can find is around 11.5 million worldwide, sourced here and here) and applies today’s exchange rate, it works out to around $256 per researcher per year. Is that a reasonable amount of effort to contribute?
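The arithmetic behind that per-researcher figure is simple enough to check. A minimal sketch, assuming a USD/GBP exchange rate of roughly 1.55 (an assumption chosen to reproduce the ~$256 figure above; the article itself only says “today’s exchange rate”):

```python
# Back-of-the-envelope check of the per-researcher cost of peer review.
# The cost and headcount figures come from the article; the exchange rate
# is an assumption made to match the article's ~$256 result.

total_cost_gbp = 1.9e9   # estimated unpaid non-cash cost of peer review
researchers = 11.5e6     # rough estimate of working scientists worldwide
usd_per_gbp = 1.55       # assumed exchange rate at time of writing

cost_per_researcher_gbp = total_cost_gbp / researchers
cost_per_researcher_usd = cost_per_researcher_gbp * usd_per_gbp

print(f"£{cost_per_researcher_gbp:.0f} ≈ ${cost_per_researcher_usd:.0f} per researcher per year")
```

Spread across the whole research workforce, in other words, the headline £1.9 billion shrinks to roughly £165 (about $256) per researcher per year.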
The Research Information Network’s data shows that peer-reviewed journal content is valued as more important than any other source, quoting one researcher who stated, “Anything that isn’t peer-reviewed . . . is worthless.”
What is the value in having your most important information source vetted by experts? Is it worth less than $256 annually to you? Isn’t having the literature filtered in this manner — the time saved from having to go through the unacceptable dross — the very “reward” Yamamoto is seeking above?
I wanted to get a feel for how burdensome peer-review is in my field, biology. In a thoroughly non-scientific study, I asked a dozen biology professors about their peer-review burden, trying to get a good cross section of people at different stages of their careers and at different types of institutions. The vast majority told me they review around 1-3 papers each month. Scientists are under enormous work and time pressures these days, but how much of that can be blamed on reviewing a few papers each month?
Some senior researchers review more papers, often because they’re on the editorial boards of journals, and their burden can range as high as 10 to 15 papers per month. That does seem like a sizable workload, but it’s hard to think of it as an unbearable burden when it’s an entirely voluntary one. Can one really call a voluntary activity a “burden”? There’s no stigma attached to turning down an editor’s review request, and every professor I contacted said they had no problem doing so. Some well-known senior professors deliberately limit their reviews to no more than a few per month, and when they’re carrying a heavy workload in other areas, they refuse all review requests. Do other fields differ greatly from biology, or is this a reasonable picture of science as a whole?
Editors and researchers in other fields, please chime in with comments below and let us know how well my admittedly small sample size reflects things in your area.
If these sorts of numbers are accurate, then peer-review does seem to offer a superb bargain in efficiency. A recent study showed that 50% of biologists use academic journal articles every working day (and another 30% use them “most days”). So by agreeing to review 1-3 articles per month, you’re guaranteed that the multiple articles you’re using nearly every day of your career have been scrutinized and filtered at that same level.
Let’s compare this level of efficiency with that seen for the most commonly proposed alternative, post-publication review, the idea of putting everything up on the web and letting “the crowd” filter things out.
If we start with the idea that information overload is a problem — that researchers are buried in a constant avalanche of papers — then imagine the size of that avalanche in a system where no paper is ever rejected, where everything gets published.
Not only will you be reading more papers, but those papers are going to be of lower quality than those you now read. One of the key rewards of our current peer-review system is that the criticisms are used to improve papers before they’re published. Time and attention are incredibly valuable commodities. A system that requires you to spend more time reading more papers that are of lower quality is already looking problematic.
One of the main complaints against peer-review is that it delays the dissemination of research results:
Peer-review is too slow, affecting public health, grants, and credit for ideas. . . . Another common frustration among authors is the lengthy time delay between submission of a manuscript and its publication.
Does post-publication review solve this problem? Peer-reviewed journal articles are considered “very important” information sources by 92% of researchers in the study mentioned above compared to 4% giving that rating to “un-refereed articles.” Given this attitude, are researchers going to be willing to read articles that have not yet been reviewed in any manner at all? Or will they wait, particularly for articles outside of their area of expertise, until a trusted source has posted a review? That introduces a new delay into the system — instead of waiting for an editor-driven review process, we’ll instead be dependent on a stochastic process.
For that stochastic process to work, participants must review every article they read. And that’s where the efficiency of the system collapses. Which is the bigger burden — serving as the peer reviewer for 1-3 articles per month, or serving as the peer reviewer for every article you read?
Beyond efficiency, post-publication peer review suffers from a likely lack of expertise and trust. A highly respected journal with a track record for editorial excellence in selecting qualified reviewers is likely to be trusted more than an anonymous commenter who may or may not be qualified:
Many professors, of course, are wary of turning peer review into an “American Idol”-like competition. They question whether people would be as frank in public, and they worry that comments would be short and episodic, rather than comprehensive and conceptual, and that know-nothings would predominate. After all, the development of peer review was an outgrowth of the professionalization of disciplines from mathematics to history — a way of keeping eager but uninformed amateurs out. “Knowledge is not democratic,” said Michèle Lamont, a Harvard sociologist who analyzes peer review in her 2009 book, “How Professors Think: Inside the Curious World of Academic Judgment.” Evaluating originality and intellectual significance, she said, can be done only by those who are expert in a field.
As Phil Davis recently asked:
Is a system that allows anyone to comment on a paper — anonymous or not — really a form of “peer-review”? Where is the “peer” in “peer review”?
Replacing a flawed system with one that’s even more flawed is not an option.
Most proposals for doing away with pre-publication peer-review suffer from “Highlander Syndrome” (“There can be only one!”), the notion that everything must be a zero-sum game, and that if a new layer is added, a previous layer must be removed. In an age of information overload, we need more filters, not fewer. Yes, peer-review can be improved, and yes, if one could actually generate participation, post-publication review could be tremendously valuable. Wouldn’t it be better if these filters were additive rather than having to choose between them?
Even if one assumes that peer-review is an enormous burden, it’s possible to turn it into an important educational opportunity for students. One thing — and possibly the most valuable thing — I learned in my graduate school lab was how to write a scientific paper. The evaluation of submitted papers provides a hands-on opportunity to hone these sorts of skills.
The head of a major research institution recently told me the following about his peer review practices:
One reason I do accept review responsibilities is that I work with lab members to do some of the reviews. This helps people learn the system and how to evaluate papers. Because reviewer opinions are available, I can sit down with the person from the lab and go over how their opinion relates to the other reviewers. It helps them learn to be a better reviewer, understand what to expect from their own papers and how people respond to reviewers comments.
If done right this is a great training opportunity. If a PI just gives a lab member a paper and turns in that review, it can be problematic. However, if they get the review from the lab member and sit down to go over whether it is fair, balanced, accurately deals with strengths and weaknesses etc., and compare it with their own take on the paper, it can be a great learning experience. When I do this and then show the lab member the other reviewers comments after they are available, it can be very interesting. They are often surprised about what was said, missed, or put in the review and are reassured when others found similar issues. I find they also learn a lot from the author rebuttals and revisions the authors make if they resubmit the paper. It really prepares them for how to deal with comments when their own papers are reviewed.
Ultimately, what matters most is the near-universal demand for peer-review as a necessary filtering system:
Our study indicates that many researchers are discouraged from using new forms of scholarly communications because they do not trust what has not been subject to formal peer review… [R]esearchers seek assurances of quality above all through peer review, and that they do not see citation counts, usage statistics or reader ratings or other ‘wisdom of the crowds’ tools as providing an adequate substitute.
In seeking to replace pre-publication peer-review, one must look at the whole picture, at all of the benefits the current system provides, rather than focusing solely on the limited instances where it is problematic or open for abuse. Can peer review be improved? Of course, but the best bet for the future is adding to peer-review rather than doing away with it altogether.