Everyone has his burden.
Image by Spiritless Visionary via Flickr

Selected Post: The “Burden” of Peer Review

I received a great deal of feedback, nearly all of it positive, for my post on The “Burden” of Peer Review.  To me, the issue highlighted the alarmist, “the sky is falling” tone of some in the publishing community.  Various publishers set up interesting and groundbreaking experiments on article-level metrics and post-publication peer review, and the usual online science voices chimed in, declaring traditional peer review somehow broken and unsustainable.  The story hit traditional media outlets like the New York Times and The Scientist (touting their new affiliation with Faculty of 1000), and the panic was on.  Much of September’s ALPSP meeting was spent hand-wringing over the state of peer review.

The problem is that the old system works pretty well and the proposed replacements are not quite up to snuff.  As noted in the original blog entry, the scientists interviewed (including young rising professors and senior researchers at the top of their fields) didn’t feel overburdened.  Every survey conducted points to the tremendous perceived value of peer review (“Anything that isn’t peer-reviewed . . . is worthless.”).

Is this really a crisis?  Are we on the verge of a paradigm shift to a literature that’s released without vetting, leaving the reader to do all the heavy lifting of the review process?

A few further data points strengthen the argument made in the original blog:

  • The editors of the journal Molecular Ecology published a study finding “little evidence for the common belief that the peer-review system is overburdened by the rising tide of submissions.”
  • The EMBO Journal published a study showing the ultimate fate of every article they reviewed in 2007 (Phil Davis discussed this here).  The results are impressive: a clear demonstration that the traditional peer-review process, as practiced by EMBO, is extremely effective.  Of the papers rejected, only 1.2% were subsequently published in journals with higher impact factors, and the citation rate for rejected papers published elsewhere was around half that of papers accepted by the journal.  The numbers show a system that, while not perfect, does what it is supposed to do very well.
  • Furthermore, PLoS has released another set of data showing their article-level metrics through October 31, 2010.  A quick analysis of the data shows a total of 23,934 articles in the set.  Of these, a mere 1,291 (5.4%) received a “rating” (a user-assigned assessment of the article’s value).  1,042 of those had only one rating and 208 had two, leaving just 41 articles with three or more.  As many science journals send articles out to three reviewers, that means only 0.17% of PLoS articles received the same level of post-publication scrutiny, as far as star ratings go.  The numbers are similar for articles that received comments (2,606 papers with comment threads, 1,803 with only one and 512 with two, leaving 1.2% with three or more) and notes (745 papers with notes, 536 with one and 116 with two, yielding 0.39%).  A short sketch reproducing these calculations follows this list.
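
For anyone who wants to check the arithmetic, here is a minimal Python sketch that reproduces the percentages above.  The counts come straight from the PLoS dataset as quoted in the post; the function and variable names are my own.

    # Counts quoted above, from the PLoS article-level metrics dataset
    # (through October 31, 2010).
    TOTAL_ARTICLES = 23934

    def share_with_three_or_more(total_any, exactly_one, exactly_two):
        """Percent of ALL articles with three or more of an activity type --
        the rough equivalent of scrutiny by three reviewers."""
        three_plus = total_any - exactly_one - exactly_two
        return 100.0 * three_plus / TOTAL_ARTICLES

    # (total with any, with exactly one, with exactly two)
    activity = {
        "ratings":         (1291, 1042, 208),   # -> 0.17%
        "comment threads": (2606, 1803, 512),   # -> 1.22%
        "notes":           (745,  536,  116),   # -> 0.39%
    }

    for name, (total_any, one, two) in activity.items():
        pct = share_with_three_or_more(total_any, one, two)
        print(f"{name}: {pct:.2f}% of all articles")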

Clearly, the research community has yet to embrace post-publication peer review.  If traditional methods are indeed in crisis and this is the immediate solution, then the result is a literature that is almost entirely unreviewed.  I’m not sure that many researchers would see that as an improvement.

The recent uproar surrounding the paper claiming to show a bacterium that grows using arsenic instead of phosphorus shows both the best and worst aspects of post-publication peer review.  The paper was released to great fanfare, including a mysterious and much-hyped press conference from NASA.  Once the paper was released, several scientists found it lacking and offered public critiques via the blogosphere.  It’s a great example of a paper having a post-publication life, one that inspired further analysis and debate.  If you side with the critics, online discussion of science has here caught a mistake in the peer-review system: a paper that should never have been published.

However, it also brings up a few problems with post-publication peer review.  The arsenic paper is a rare example that drew a great deal of attention.  This mirrors the PLoS numbers quoted above: a small number of papers receive post-publication commentary, while the majority are ignored.  How many other papers came out the week of December 2nd?  How many of them were even mentioned online, let alone analyzed to the same degree as this one?  Do we accept the results in those other papers as valid simply because they haven’t been disputed?

The arsenic paper drew criticism because it made extraordinary claims and did so in a grandstanding manner, “science by press release and press conference,” as Jonathan Eisen put it.  If we are heading toward a system where post-publication peer review is important, then we can expect more shenanigans of this sort.  If we judge papers by how many blog mentions or tweets they receive, then hype becomes the norm, and outlandish claims become more important than quality science.

The authors of the arsenic paper have refused to publicly debate its merits and haven’t responded to the blogosphere’s criticisms, stating that “Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated.”  Nature has responded with an editorial calling for scientists to “adjust their mindsets to embrace and respond to these new forums for debate.”

It’s hard to agree with the idea that researchers must invest time in answering every single critic of their work, every yahoo with a blog and an opinion (particularly when that idea comes from the owner of a blogging network).  A look at Nature’s own coverage of controversial subjects shows the journal routinely dismissing opinions because they aren’t peer-reviewed.  Intelligent design is debunked because there is “no demonstrable peer-reviewed research” supporting it (see here and here for further examples).  The anti-vaccine movement is attacked for feeding conspiracy theories circulating on the Internet and in viral e-mails.

Nature is, of course, right in both cases.  But why are we supposed to discount these opinions, yet give credence to the similarly unreviewed criticisms of the arsenic paper?  Does peer review only really matter when the opinion in question is one with which we disagree?  Are researchers obligated to respond to every piece of criticism their work receives?  If not, where do you draw the line?

If we are living in an age of information overload, then we need trustworthy filtering systems to separate the signal from the increasing amounts of noise.  Given the tremendous workload of the modern scientist, abandoning a functional and apparently reliable (though not perfect) system for an inefficient one of wildly variable quality is not an attractive proposal.

Social media provides wonderful opportunities for discussion and exploration of published works, but as the original blog posting suggested, it’s best as an addition to traditional peer review, not as a substitute.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

3 Thoughts on "David's Pick for 2010: Peer Review May Be Old and Imperfect, But It Still Works"

The arsenic paper is a sad story of how misplaced hype can create an uphill fight for a potentially interesting line of inquiry.  Reading the actual paper, it’s clear the finding is far more modest and circumscribed than the headlines from the NASA PR office, and subsequently the press, suggested.  The paper itself is detailed in its findings and measured in its interpretation.

The media megaphone is an unpredictable tool for broadcasting subtle scientific findings.

Last year’s Darwinius paper is another example of a paper with modest claims being wildly overhyped by a PR campaign.

Also, if anyone is interested, here’s an article on how easy it is to game online rating systems, even those at the New York Times, something worth considering when designing systems for, and interpreting data from, article-level metrics.  A small illustration of the problem follows.
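
To make the concern concrete, here is a minimal sketch of my own (not drawn from the linked article) showing how a handful of coordinated votes can swamp a naive star-rating average, and how a Bayesian-style damped average, one common countermeasure, blunts the effect.  The global mean and prior weight are assumed values, not anyone’s production settings.

    # Illustrative only: a few coordinated 5-star votes dominate a naive mean
    # when legitimate votes are scarce; shrinking toward a global mean helps.
    GLOBAL_MEAN = 3.0   # assumed site-wide average rating
    PRIOR_VOTES = 10    # assumed weight given to the prior

    def naive_mean(ratings):
        return sum(ratings) / len(ratings)

    def damped_mean(ratings):
        # Standard Bayesian average: small samples are pulled toward the prior
        return (GLOBAL_MEAN * PRIOR_VOTES + sum(ratings)) / (PRIOR_VOTES + len(ratings))

    honest = [3, 4]              # two genuine ratings
    gamed = honest + [5] * 5     # plus five coordinated 5-star votes

    print(f"naive:  {naive_mean(honest):.2f} -> {naive_mean(gamed):.2f}")    # 3.50 -> 4.57
    print(f"damped: {damped_mean(honest):.2f} -> {damped_mean(gamed):.2f}")  # 3.08 -> 3.65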
