Editor’s Note: A recent Nature blog post by Richard Van Noorden starts with the claim that online comments on newly published research are becoming “widespread.” Van Noorden suggests that the problem is compounded by there being too many places to post, making it difficult for commenters to maximize the impact of their critiques. Is this really the case, or are we just seeing a few highly visible efforts by the research community to chase down high-profile, seemingly important but extraordinary studies (e.g., arsenic life, acid bath stem cells)? With that question in mind, I wanted to revisit Kent Anderson’s 2012 post about peer review, which presents the key concept that what publishers tend to call “post-publication peer review,” researchers call “doing the next experiment and publishing the next paper.” Also of interest is this recent article on the nature of online debate, where, at least in the political sphere, winning the argument is more important than being right.

Back in late 1998, I thought myself clever by launching “P3R,” or “post-publication peer-review” at Pediatrics. It was our attempt at e-letters, and P3R was the marketing spin I decided we’d try. It worked pretty well, actually, with one memorable statistical correction coming from a grocery store clerk, and more (and better) e-letters coming in than we expected. But over the intervening years, I’ve come to realize that marketing exaggeration and hard reality probably don’t meet on this one.

We had online letters. We didn’t really have post-publication peer-review.

Peer-review is a special animal with an erratic history. First used by the Royal Society of Edinburgh in 1731, where individuals “most versed in . . . matters” were consulted before collections of medical articles were published, it fell out of favor in the 1800s as an era of editor-centric letters journals came to the fore. As scientific publishing grew in size and complexity, peer-review was reintroduced to deal with the onslaught of papers, especially after World War II.

Peer-review has such a key role in the social goals of science that it is governed and protected by laws, some federal, some state. In medicine, peer-review within hospitals is also regulated by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). There are several ways peer-review is validly accomplished according to these bodies, with the best approach being double-blind peer-review (neither the author nor the reviewer learns the other’s identity, as opposed to single-blind peer-review, in which the reviewer knows the identity of the author):

  1. The review must be objective and comprehensive. The people doing the review must be in a position to render a fair and unbiased opinion, and must look at all sides of a case undergoing review.
  2. The reviewer must be a true peer. This can be a case-by-case judgment, but the principle applies.
  3. The review must be uniform in nature. That is, if someone is called out because of a potential bias, then all others suffering from the same or similar biases must be called out.
  4. The review is usually confidential, which is why it’s afforded protections under US law.
  5. A peer-review committee or group must be defined in some manner.

Now, not all of these rules are applied in the same manner to health organizations, legal peer-review systems, or scientific journals broadly speaking. Health organizations work differently, legal scholars and bodies work differently, and journals work differently still. These differences only underscore how careful we have to be with terms. Even Wikipedia distinguishes its peer-review process from academic peer-review, noting that articles undergoing academic peer-review should be presumed to have greater authority. What JCAHO considers peer-review is slightly different from what journals consider peer-review.

Between journals, there’s a lot of variability when we talk of peer-review. Most journals strive to have at least two independent reviewers evaluate each manuscript chosen for peer-review, but the level of review required, the disclosures required of reviewers, the depth of review, the grading of reviews and reviewers, and the time given for review can vary between and, in some cases, within journals, depending on the speed required and the type of article.

Yet, key constants remain, especially the desire to select true peers to do the review, to have objective reviews (often leading to some form of blinding), and to have a uniformity of reviews and reviewers, usually through a structured system of training, grading, and ranking.

When we move to the system currently being called post-publication peer-review, we leave many aspects of peer-review behind:

  • The authors of the paper are known to the reviewers.
  • The identities of the reviewers are disclosed to the authors and to subsequent reviewers.
  • The reviewers are not pre-qualified as true peers.
  • The reviewers are not identified as part of any formal committee or peer-review group.
  • There is no uniform ranking or grading system.

There is evidence that peer-review systems devoid of these special characteristics fail. The well-known trial of open peer-review, done by Nature in 2006, studied the use of an open approach at a large, multi-disciplinary, high-status journal. Over the course of four months, 71 articles were posted for open comment; of these, 33 received no comments at all, while the level of commenting on the other articles proved scant and unhelpful. One difficulty was in getting substantive comments from experts in the area (peers). As one paper reviewing the findings put it:

It is simply unrealistic to expect informed, well-argued opinions from those who have not been specifically tasked with the job of supplying them.

With open pre-publication peer-review failing so dramatically, why are we holding out hope for an even more exposed, less incentivized system of post-publication peer-review?

For us to truly enact “post-publication peer-review,” it seems we need to actively build out that exact capability. It’s not impossible, but we can’t lazily pass off comments and letters as the same thing as “post-publication peer-review,” not without devaluing peer-review overall. As I see it, a system that would qualify as post-publication peer-review would have the following features:

  1. Reviewers would apply to become post-publication peer-reviewers, have their applications approved by the editor, and then view the contents of the site stripped of identifying characteristics (author names, institutional and other sponsors, etc.).
  2. The reviewer could request that his or her identity be suppressed on the review; this would be optional, and could depend on the substance or direction of the review.
  3. Post-publication reviews would be clearly distinguished from reader comments or letters to the editor, both of which have a place, but are not technically post-publication peer-review.

Even if we were to create such a system, the issue of incentives raises its big, ugly head — incentives are always present in science and scientific publishing; they’re what make our world go around. What are the incentives for someone to write a post-publication peer-review? As someone noted at a meeting I attended last week, the more effective and incentive-driven response to research you don’t believe to be correct or sufficient is to conduct and publish a better study — you get academic credit for this, which is precisely in keeping with the incentive systems we have in place. But putting your name to a criticism that might tip off another scientist about a great new study to perform? Not a smart move.

It’s interesting to contemplate how our current implementations of what some are calling “post-publication peer-review” can lead to what is known as “sham” peer-review. As defined by Wikipedia, sham peer-review is:

. . . the abuse of a medical peer review process to attack a doctor for personal or other non-medical reasons

Today’s commentators seem to have many axes to grind. Far too often, commentary forums degrade into polemical attacks with win or lose dynamics at their heart. The pursuit of knowledge and science isn’t the goal. Capitulation of one combatant to another is.

Peer-review is at the heart of scientific communication and validation. It’s not perfect, and it has many forms and shadings within those forms. However, attempts to market comments as post-publication peer-review — as something akin to true, single- or double-blind peer-review by true peers — seem doomed to failure. In fact, there is every indication that it is failing.

Perhaps it’s failing because it’s a failure.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on “Stick to Your Ribs: The Problems With Calling Comments ‘Post-Publication Peer-Review’”

“Noorden suggests that the problem is compounded by there being too many places to post, making it difficult for commenters to maximize the impact of their critiques” –

But wait …. scholarly publishing has already solved this problem!

When commenters make their “comments” in the form of published manuscripts they benefit from the sophisticated linking functionality provided by CrossRef. CrossRef is an amazingly effective and proven way to link backwards and forwards between comments across a distributed and Balkanized “commenting” ecosystem.

It’s not that there are “too many places to post”, it’s that there are too many start ups that think they can uniquely benefit from the network effects of being the “one place” for commenting. They, and their investors, miss the point that scholarship needs (and already has) a distributed but linked “commenting” infrastructure.

We tend to think of peer review as a jury giving thumbs up or thumbs down. That happens occasionally, but for most papers peer review is a more nuanced and constructive process. As an editor, I see the before-and-after sides of peer review all the time. Papers usually come out much improved, with clearer explanations and more supportable conclusions. More than one author has been spared embarrassment by a peer reviewer insisting on getting it right.

Also, we sometimes encounter the reviewer-from-hell, someone who resolutely refuses to see any value in what the author writes. In a traditional review system, we can evaluate these comments in context of other reviews. The associate editors and I spend considerable time deciding if these contrarian reviews merit modifying or even rejecting the paper. In a post-publication review system, a reviewer-from-hell would get an unmoderated shot at the author, regardless of the merits of their position.

I know authors may think traditional peer review a pain, but the alternative is not better.

The question that comes to mind is: What is the purpose of post-publication peer review?

A previous kitchen post summarized the functions of a journal well:

“While journals are no longer needed for the initial problems they set out to solve (dissemination and registration), there are 3 additional functions that journals serve that have developed over time. These later functions—comprising validation (or peer review), filtration, and designation—are more difficult to replicate through other means.”
(https://scholarlykitchen.sspnet.org/2010/01/04/why-hasnt-scientific-publishing-been-disrupted-already/)

Some firms (including ResearchGate and Academia.edu, if I recall correctly) see the purpose of post-publication peer review as a way of flinging open the gates to distribution without the annoying pre-publication filter. The argument goes, “The Internet makes distribution free. Let’s post everything on the web and do validation, filtration, and designation using the crowd.” So why bother with pre-publication peer review?

There are some very compelling reasons why we bother. The consumers of research are many. Patients want to track the work that could impact their lives. Advocacy groups want to keep their constituents informed. Science journalists want to find stories to track. Physicians want to stay up to date on their fields. Other researchers in a field want to make meaningful contributions to the literature, and so on. How can post-publication peer review give these constituents trust in what they are reading? Science is already plagued by a surge in retractions (http://www.nature.com/srep/2013/131106/srep03146/full/srep03146.html and many other articles can be found on this). Can we reduce the trust of science output even further by relying solely on post-publication review? How can a researcher further her field if there is no clarity on where the field is?

The better solution would be to standardize and expedite pre-publication peer review to iteratively get closer to the list of 1-5 above. With regard to incentives, I’m not sure whether the right path is paying peer reviewers, formalizing peer review credit on a CV, or some combination of those options. That seems like a great question to answer through entrepreneurial experimentation (a variety of firms can try different approaches to achieving those goals).

I strongly agree that peer-review is one of the most important aspects of academic publishing and that it is essential to distinguish between online comments and a true “post-publication peer-review”. When designing the ScienceOpen publishing platform we decided to offer both ways of giving feedback to fellow scientists. Because our publication model depends solely upon post-publication peer-review, we are committed to carrying this out in a serious and scholarly framework mediated by editors and also open to a qualified scientific public, completely transparent for all parties. Comments are also welcome on ScienceOpen, but have another format and are less mediated.
While the Nature experiment is very interesting, 2006 is not 2014 and several recent publishing ventures are showing that there is an increasing willingness to carry out the evaluation of research results in the open. Too often peer review as it is conducted today is used to wager on which submitted articles will garner the most citations and thus have a positive effect on the journal impact factor, rather than giving feedback on solid methods and correct interpretations. Peer review should not be subverted into a tool for glamor publishing. That is where I see failure in the system.

In a comment to a previous post (http://bit.ly/1hGakaw) I suggested calling this “anonymous mentoring”. In fact, “peer”, although the politically correct term, is conceptually exactly the wrong concept. A junior scientist should not be judged by his peers, which would be other junior scientists, but rather by experts. In fact, the ONLY constant in what I shall hereafter call “expert review” rather than “peer review” is that it MUST be conducted by experts in the field. This is unfortunately pretty much also the only thing that is consistently left out of (at least the take-all-comers version of) so-called “post-publication peer review” — i.e., you can’t tell the level of expertise of the reviewers.

And a few more specific responses.

>The review must be objective and comprehensive. The people doing the review must be in a position to render a fair and unbiased opinion, and must look at all sides of a case undergoing review.

Yes, and it is good at some journals and terrible at others. But I think it is fair to say that our current pre-publication peer review is mostly broken. I love this post from Professor Arjun Raj (he wrote it in reply to my editorial).
http://rajlaboratory.blogspot.com/2014/04/how-to-review-paper.html

>The reviewer must be a true peer. This can be a case-by-case judgment, but the principle applies.

The reviewers on pubpeer.com are good and are true peers. And as Arjun notes above, most papers in our current system are reviewed by junior professors or students/postdocs who do not have the perspective to give a good review.

>The review must be uniform in nature. That is, if someone is called out because of a potential bias, then all others suffering from the same or similar biases must be called out.

Good ideal. But where does it happen in practice? We review competitors all the time. We are all biased creatures. Move the reviews into the open in a post-pub manner, and the reviews will become more uniform. I think the publishers and we as a community simply have not thought hard enough about how to make post-publication peer review constructive. I have many thoughts on this, but can’t type them now.

>The review is usually confidential, which is why it’s afforded protections under US law.

I disagree that the confidentiality is a good thing for science and reviews.

>A peer-review committee or group must be defined in some manner.

This part I don’t grasp. Maybe this is not applicable to basic research in life sciences?

And now, the arguments against post-pub peer review.

>When we move to the system currently being called post-publication peer-review, we leave many aspects of peer-review behind:

>The authors of the paper are known to the reviewers.

This is no different from the peer review I know. I always know the authors when I review, and reviewers know I am an author when I submit.

>The identities of the reviewers are disclosed to the authors and to subsequent reviewers.

No, only if you choose the F1000 model. It is not the case on pubpeer.com, nor for PeerJ. Reviews can be public while still being anonymous. I personally think F1000 gets this wrong, as does PubMed Commons. Open or anonymous should be a choice, and I would try to incentivize openness without requiring it. This is why I am going to be promoting PubPeer – they get this right. But nothing about the idea of post-publication peer review dictates that the identity of the reviewers must be open.

>The reviewers are not pre-qualified as true peers.

Most of the reviewers in post-publication review are the ones following up on the work in question. On PubPeer, those commenting tend to be the true experts, often much better than reviewers I got on my submissions. Pre-pub review by 2-3 people can be hit or miss. Same for post-pub. But with the reviews open, this is blindingly obvious, and bad reviews can be discounted by researchers.

>The reviewers are not identified as part of any formal committee or peer-review group.
I still don’t understand this part.

>There is no uniform ranking or grading system.

Where is the ranking and grading in current pre-pub peer review? Editorial decisions at high-impact journals are close to random and often worse.

The system we have now is so broken, it gives us this: http://anothersb.blogspot.com/2014/04/dear-academia-i-loved-you-but-im.html

>The reviewers on pubpeer.com are good and are true peers. And as Arjun notes above, most papers in our current system are reviewed by junior professors or students/postdocs who do not have the perspective to give a good review.

From PubPeer’s website on who is allowed to review/comment:
https://pubpeer.com/about

Anybody wishing to ask a question about a paper or to make a comment can do so. Comments are anonymous by default and usually made from an account. Opening an account requires a previous publication and an email address at an academic or research institution.

I don’t see how this makes anyone on PubPeer automatically superior to the “junior professors or students/postdocs” you deride above. I had a first-author paper as a fairly junior graduate student and a .edu email address, so I would have qualified for PubPeer long before anyone asked me to peer review for a journal. Heck, I still have an alumni.edu account, and my first-author paper still exists. This now qualifies me to comment on any paper on any subject I choose on PubPeer. My degree is in Genetics, but maybe I feel like commenting on string theory or economics. Am I truly a “peer” to physicists and economists?

Anyone should be able to comment. Most people won’t. PubPeer makes it true peer commenting because it is post-publication and openly visible; the people commenting are the ones following up on the work and that is not the case for pre-pub review. You won’t take the time to comment on economics because your comment will be irrelevant and will be dismissed.

There are already thousands of comments on PubPeer. Take a look at them.
