Editor’s Note: A recent Nature blog posting by Richard Van Noorden starts with the claim that online comments on newly published research are becoming “widespread.” Van Noorden suggests that the problem is compounded by there being too many places to post, making it difficult for commenters to maximize the impact of their critiques. Is this really the case, or are we just seeing a few, highly visible efforts by the research community to chase down high-profile, seemingly important but extraordinary studies (e.g., arsenic life, acid-bath stem cells)? With that question in mind, I wanted to revisit Kent Anderson’s 2012 post about peer review, which presents the key concept that what publishers tend to call “post-publication peer review,” researchers call “doing the next experiment and publishing the next paper.” Also of interest is this recent article on the nature of online debate, where, at least in the political sphere, winning the argument is more important than being right.
Back in late 1998, I thought myself clever by launching “P3R,” or “post-publication peer-review” at Pediatrics. It was our attempt at e-letters, and P3R was the marketing spin I decided we’d try. It worked pretty well, actually, with one memorable statistical correction coming from a grocery store clerk, and more (and better) e-letters coming in than we expected. But over the intervening years, I’ve come to realize that marketing exaggeration and hard reality probably don’t meet on this one.
We had online letters. We didn’t really have post-publication peer-review.
Peer-review is a special animal with an erratic history. First used by the Royal Society of Edinburgh in 1731, where individuals “most versed in . . . matters” were consulted before collections of medical articles were published, it fell out of favor in the 1800s as an era of editor-centric letters journals came to the fore. As scientific publishing grew in size and complexity, peer-review was reintroduced to deal with the onslaught of papers, especially after World War II.
Peer-review has such a key role in the social goals of science that it is governed and protected by laws, some federal, some state. In medicine, peer-review within hospitals is also regulated by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). There are several ways peer-review is validly accomplished according to these bodies, with the best approach being double-blind peer-review (neither the author nor the reviewer learns the other’s identity, as opposed to single-blind peer-review, in which the reviewer knows the identity of the author):
- The review must be objective and comprehensive. The people doing the review must be in a position to render a fair and unbiased opinion, and must look at all sides of a case undergoing review.
- The reviewer must be a true peer. This can be a case-by-case judgment, but the principle applies.
- The review must be uniform in nature. That is, if someone is called out because of a potential bias, then all others suffering from the same or similar biases must be called out.
- The review is usually confidential, which is why it’s afforded protections under US law.
- A peer-review committee or group must be defined in some manner.
Now, not all of these rules are applied in the same manner to health organizations, legal peer-review systems, or scientific journals broadly speaking. Health organizations work differently, legal scholars and bodies work differently, and journals work differently still. These differences only underscore how careful we have to be with terms. Even Wikipedia distinguishes its peer-review process from academic peer-review, noting that articles undergoing academic peer-review should be presumed to have greater authority. What JCAHO considers peer-review is slightly different from what journals consider peer-review.
Between journals, there’s a lot of variability when we talk of peer-review. Most journals strive to have at least two independent reviewers evaluate each manuscript chosen for peer-review, but the level of review required, the disclosures required of reviewers, the depth of review, the grading of reviews and reviewers, and the time given for review can vary between and, in some cases, within journals, depending on the speed required and the type of article.
Yet, key constants remain, especially the desire to select true peers to do the review, to have objective reviews (often leading to some form of blinding), and to have a uniformity of reviews and reviewers, usually through a structured system of training, grading, and ranking.
When we move to the system currently being called post-publication peer-review, we leave many aspects of peer-review behind:
- The authors of the paper are known to the reviewers.
- The identities of the reviewers are disclosed to the authors and to subsequent reviewers.
- The reviewers are not pre-qualified as true peers.
- The reviewers are not identified as part of any formal committee or peer-review group.
- There is no uniform ranking or grading system.
There is evidence that peer-review systems devoid of these special characteristics fail. The well-known trial of open peer-review, conducted by Nature in 2006, studied the use of an open approach at a large, multi-disciplinary, high-status journal. Over the course of four months, 71 articles were posted for open comment; of these, 33 received no comments at all, while the level of commenting on the remaining articles proved scant and unhelpful. One difficulty was in getting substantive comments from experts in the area (peers). As one paper reviewing the findings put it:
It is simply unrealistic to expect informed, well-argued opinions from those who have not been specifically tasked with the job of supplying them.
With open pre-publication peer-review failing so dramatically, why are we holding out hope for an even more exposed, less incentivized system of post-publication peer-review?
For us to truly enact “post-publication peer-review,” it seems we need to actively build out that exact capability. It’s not impossible, but we can’t lazily pass off comments and letters as the same thing as “post-publication peer-review,” not without devaluing peer-review overall. As I see it, a system that would qualify as post-publication peer-review would have the following features:
- A reviewer would apply to be a post-publication peer-reviewer, have his or her application approved by the editor, and then view the contents of the site with identifying characteristics (author names, institutional and other sponsors, etc.) removed.
- The reviewer could request that his or her identity be suppressed on the review, but this would be optional, and dependent on the substance or direction of the review.
- Post-publication reviews would be clearly distinguished from reader comments or letters to the editor, both of which have a place, but are not technically post-publication peer-review.
Even if we were to create such a system, the issue of incentives rears its ugly head. Incentives are always present in science and scientific publishing; they’re what make our world go around. What are the incentives for someone to write a post-publication peer-review? As someone noted at a meeting I attended last week, the more effective and incentive-driven response to research you don’t believe to be correct or sufficient is to conduct and publish a better study. You get academic credit for this, which is precisely in keeping with the incentive systems we have in place. But putting your name to a criticism that might tip off another scientist about a great new study to perform? Not a smart move.
It’s interesting to contemplate how our current implementations of what some are calling “post-publication peer-review” can lead to what is known as “sham” peer-review. As defined by Wikipedia, sham peer-review is:
. . . the abuse of a medical peer review process to attack a doctor for personal or other non-medical reasons
Today’s commentators seem to have many axes to grind. Far too often, commentary forums degrade into polemical attacks with win-or-lose dynamics at their heart. The pursuit of knowledge and science isn’t the goal. The capitulation of one combatant to another is.
Peer-review is at the heart of scientific communication and validation. It’s not perfect, and it has many forms and shadings within those forms. However, attempts to market comments as post-publication peer-review, as something akin to true, single- or double-blind peer-review by true peers, seem doomed to failure. In fact, there is every indication that it is failing.
Perhaps it’s failing because it’s a failure.