The consortium will enable papers, with their accompanying referee reports, to move more easily between publishers.

An old saw, borrowing from Churchill’s quip about democracy, is that peer review is the worst system for validating research except for all the others. Another effort to improve this much-maligned process has been announced. eLife, BioMed Central (BMC), the Public Library of Science (PLoS), and the European Molecular Biology Organization (EMBO) will be forming a new peer review consortium based around the concept of what eLife calls “portable peer review.”

As the Economist recently noted, this is the latest in a series of schemes, along with Rubriq and Peerage of Science, to detach referee reports from publication in a specific journal. (Readers of this article may be interested in the recent interview I conducted with Rubriq’s Keith Collier on this topic). Unlike Rubriq and Peerage of Science, which are attempting to build extra-journal validation, this initiative builds on the existing journal peer review process.

Authors who submit to a participating journal in the consortium, and are not accepted by that journal, will be able to redirect their paper, with the referees’ reports, to any other journal in the consortium. Referees will be given the opportunity to opt out of having their reports forwarded, or to forward them anonymously (in all cases, the referee’s identity will be hidden from the author; referees will choose whether they wish to remain anonymous to the editors of the secondary journal).

“How the referee reports are used is up to the receiving journal,” noted Margaret Winker, Senior Research Editor at PLoS Medicine. “Editors may choose to use them or not. Anonymous reviews may be of interest to the editor but would not be as useful since the editor needs to know the identity of the reviewers to ascertain his or her area of expertise.”

Cascading peer review systems, of which this is a variant, have been in use for some time. Publishers with numerous journals in the same field often provide authors with the opportunity to resubmit a paper to other journals managed by the publisher without needing to go through the entire submission and review process all over again. This saves time for the author who does not need to resubmit from scratch to a new journal. It also saves time for reviewers, at least in aggregate, as there will be fewer total reviews requested in the universe.

In addition to saving time for everyone involved, cascading peer review systems can provide a competitive advantage to publishers that employ them effectively. Top-tier journals can attract more high-caliber papers by offering authors the reassurance of a back-up plan: publication in a second-choice journal from the same publisher. Publishers including Nature Publishing Group, PLoS, BMC, EMBO, the American Medical Association, and others have used cascading peer review successfully for years.

Second- and third-tier journals benefit as well, as they can receive papers, cascaded from their top-tier siblings, that they would be unlikely to receive as direct submissions. The authors, in this latter scenario, may prefer a quick decision letter from a second-tier journal to starting the process all over again with a top-tier journal and risking finding themselves in the same predicament several months down the road.

In many ways, the eLife/BMC/PLoS/EMBO consortium (which needs a pithy acronym) is similar to intra-publisher cascade systems, only on a larger scale. The aims of the participants are also similar.

“We all want to ensure that this peer-reviewed ‘sound science’ is published, incorporating any necessary revisions, as rapidly as possible and without duplicating the peer review effort,” said Matthew Cockerill, Managing Director of BMC. “Duplicated review effort reduces academic productivity and so effectively makes a funder’s investment in research less effective.”

Bernd Pulverer, Chief Editor of the EMBO Journal expressed a similar perspective. “Often, a manuscript may be rejected from a journal, not for reasons of quality, but rather of scope,” said Pulverer. “For example, whereas all these journals will select for conceptual advance, eLife may emphasize the broad interest of a discovery, the EMBO Journal mechanism, and PLoS Medicine human relevance.”

The eLife/BMC/PLoS/EMBO consortium is not, of course, the first multi-publisher consortium. The most notable predecessor is the Neuroscience Peer Review Consortium. While still operational, the Neuroscience Peer Review Consortium has not lived up to expectations with very few papers being referred amongst the participating journals.

I asked Pulverer why he thinks this new consortium will succeed where the Neuroscience Peer Review Consortium has not.

“We aim for the cooperation to be much more tight-knit,” replied Pulverer. “For example, if we receive a paper, we will look at the referee reports and then go back to the authors rapidly with a very clear picture of what our requirements will be. We might be able to tell the author that we will accept the paper with a subset of revisions requested by the referees or we might say that it is in the authors’ interest that we ask for input from one additional referee. In any event, the authors will have a clear picture of how we will proceed and they can choose to transfer based on that.”

Cockerill brings a different perspective. He thinks the main problem with the Neuroscience Peer Review Consortium is that the participating publishers have too much in common. “With the Neuroscience Peer Review Consortium, the strong competition between different journals in the field doesn’t create sufficiently strong incentives for the journals to make the effort to facilitate the free flow of manuscripts and reviews.” The new consortium, by contrast, while broadly centered on the life sciences, is less closely focused on a single specialty.

Time will tell whether this new consortium will be successful. And indeed, numerous challenges remain to be worked out, including:

  • Technology Interoperability. The publishers in the consortium are not all using the same manuscript submission systems, which means that submitted papers, and their accompanying reviews, will need to be manually exported from one system and re-imported to another. This is a cumbersome process that will increase workloads at editorial offices and will need to be streamlined over time.
  • Participation of Referees. The consortium will only be effective enough to be worth the trouble if referees agree to have both their reports and their identities shared amongst participating publishers in the consortium. Reviews by anonymous (to the editor/publisher) referees are of limited value.
  • Editorial Participation. Just about every editor I know believes the nuances of his or her peer review process are superior to all others. Most editors would use another editor’s toothbrush before accepting the output of his or her peer review process. Gaining participation of editors may be the most difficult challenge of all.

If the new consortium can surmount these obstacles, however, it may indeed make life easier for authors and peer reviewers. This would be a much-welcomed outcome. It may also provide a competitive advantage to the participating publishers. It is axiomatic that authors wish to publish, as quickly as possible, in the highest-profile journal in their field that will accept their paper. The ability to start with a high-profile title such as PLoS Biology and quickly cascade to an alternate publication venue across four publishers removes much of the risk of submitting to such a journal. This will raise the selectivity of the flagship titles for each publisher even further while simultaneously increasing the flow of high-quality papers to these same publishers’ other titles.

If this initiative is successful, other publishers will no doubt wish to join the consortium or, alternatively, to imitate it by forming additional consortia. Perhaps we will see the rise of competing consortia, with complex and shifting alliances in the mode of a George R.R. Martin story. This is, perhaps, but the first move in the Game of Papers.

Note: Thanks to B.A. and R.F. for background information related to this article. And thanks to Margaret Winker, Matthew Cockerill, and Bernd Pulverer for their time and interview responses.

Michael Clarke

Michael Clarke is the Managing Partner at Clarke & Esposito, a boutique consulting firm focused on strategic issues related to professional and academic publishing and information services.

Discussion

34 Thoughts on "Game of Papers: eLife, BMC, PLoS and EMBO Announce New Peer Review Consortium"

There is an inherent conflict in such consortia around the need of the editors to know the identity of the reviewer. Reviewers (at least this reviewer) are understandably wary of giving permission in advance to expose their identity to any (and multiple) other journals, given the likelihood that the manuscript might well reach an editor who is a colleague or friend of the author in question. It would be more reasonable for the first journal to approach the reviewer and request permission to provide his/her identity to the second journal on a case by case basis.

Networks like this often work better when there is a clear hierarchy in place. Journal A sits at the top, and its rejected papers can then be passed down the ladder to Journal B, C, etc. I worry that in the case of this network, there’s a cluster of roughly equivalent journals at the top, then a large gap, then a cluster of roughly equivalent journals further down the ladder.

If I’m an editor at eLife, how willing am I going to be to accept papers that were judged not good enough for EMBO J? Is PLOS Medicine really going to consider papers rejected from a lower tier BMC journal? What seems most likely here is that papers rejected from the top tier journals in the set will be offered quick acceptance in the journals with a much lower ranking. For some authors, that speed will matter enough to go for it, but I suspect that for most, attempts will instead be made to get the rejected paper into a journal somewhere in between the two tiers.

The question of articles being out of scope is something of a red herring here. In my experience, those sorts of decisions are generally made at the editorial level, not after a round of peer review. For most authors who have submitted their papers to an inappropriate journal, there will be no reviews to pass along. And as an author, I probably don’t want the editor of Journal X sending my paper to Journal Y with a clear indication that I don’t really have an understanding of the content of the journals in my own field. I’d probably rather send it along myself and start fresh.

But on the positive side, one thing that’s really great about this coalition is its diversity in terms of the participants. You have a funding agency with an OA journal, a private not-for-profit OA publisher, an academic/society not-for-profit publisher that employs the subscription model and a wing of a large for-profit corporation that employs a mix of OA and subscription products. That these groups are working together shows a willingness to embrace diversity in the market in terms of players and business models, a recognition that there’s no single methodology that must be followed in order for the research community to benefit.

In the biomedical sciences, researchers understand tier much better than they do scope, and even when they think their article might not be in scope for a journal, they’ll send it anyways, if they see the journal as being higher tier.

I think this is actually a nice move to help researchers focus less on journal tier and more on scope.

Can you explain further how a system like this helps a researcher focus more on scope?

When the journals I work with receive good papers that are out of scope, the editor rejects them as quickly as possible and sends along an explanatory letter, often suggesting other journals where the article would be more appropriate. Is this collaboration any better or more useful than something like that? Would a system that is open to recommending all journals be better for finding one of the exact right scope than one like this that is limited to a small subset of journals/publishers?

On the surface, and based entirely on this posting, this smells like another attempt to complicate things. We already have the watering down of peer review where it only matters if a paper is technically sound in order to be accepted. eLife claims to be more rigorous but then why would they take a PLOS One reject? I am not as comfortable with cascading review as others as I see it as a way for publishers to “make more” from their authors. I really have mixed feelings about this across different publishers. It actually feels a bit like an anti-trust issue. I know, I may sound paranoid here, but way to cut off the competition. Keep the papers in your small circle and you get to keep the citations too.

I also agree with David and Mike about why a reviewer would go for this and the concerns about how useful random reviews are to a journal editor.

This is an interesting collaboration. Let me know how the ScholarOne platforms could assist with this effort, in serving the author community. We support all publishers.

For better or worse, I always decline to review a paper for a journal where I’ve already reviewed it for another journal. My rationale is that the paper was rejected before (not necessarily because of my review) and that the authors should have the opportunity of other points of view. I think such a policy, in effect, gives the authors the benefit of the doubt. This also assumes that any credible concerns voiced in the first review have been either adequately addressed or were rightly dismissed (reviewers are no more infallible than authors). But I can see the benefit of accruing reviews as proposed by this consortium in order to aggregate the reviews. If there were not differing characteristics/filters/editorial preferences associated with each journal, there would only be, ahem, (PLoS) ONE. While the consortium is hardly going to result in a major change in how the majority of papers are processed, it’s good to see these efforts at innovation.

What we’ve come to realize over the past year is that portable peer review is quite different from journal peer review. It’s very difficult to start with traditional journal peer review and then claim it to be “portable” just because it’s passed off to another journal.

Journal peer review is performed within the context and lens of the journal it’s being reviewed for, and until we find a way to review a manuscript in an independent and standardized way, it will be difficult to eliminate redundant peer review.

I agree with the other challenges noted in the post as well. In addition to a standardized scorecard and independent review, a technology platform to pass on the papers and reviews between journals is necessary for this system to work.

Collusion is never to the benefit of anyone except those who are engaging in the collusion.

So join the collusion and enjoy the benefit. When everyone colludes, I guess it’s to everyone’s advantage.

(Another word for “collude” would be “cooperate”.)

Except it is technologically not feasible and the smaller society journals will be left behind due to the enormous human intervention needed to facilitate. It is not easy being a society publisher. Having to compete with new megajournals and commercial publishers is an uphill battle. The whole basis for societies and their publications is being Walmarted and Amazoned. I fail to see how this will further science or individual professions.

I don’t understand why you think there is any cost to participating in this. Surely accepting incoming peer-reviews means that the handling editor has less work to do, since she may not need to find more reviewers and send the manuscript out to review.

More generally, you seem worryingly close to saying “We want no part in this, and if we’re not involved we don’t want anyone else to be part of it either”. I’m not sure that would be a reasonable position to adopt.

As noted in the post above, there is a potential for expense in either modifying one’s manuscript handling system to do this automatically or at minimum, the time and effort needed to manually send these along (and receive them from) other journals. For smaller, not-for-profit publishers with thin margins, one has to carefully invest (both funds and efforts) where they are most effective.

Right, I get that there is a cost in handling a manuscript with accompanying reviews. My point is that there is also a cost in accepting a manuscript without accompanying reviews. And is that cost greater?

Generally you have a system in place to handle submission of a new manuscript (sans reviews). The costs of building that system are already covered and there’s no manual intervention required. Here you’re either going to have to build something new into the system, or bring them in manually and have someone plug them into your existing system. Since most journals are still going to do a round of peer review on their own, it’s not like you’re saving on that (although presumably, having already been through one round, the papers are in better shape and will require less revision).

And that doesn’t get into the efforts/costs of sending out your own rejected papers to other journals.

OK, that makes sense.

But evidently the consortium members have calculated that the costs will, overall, be down. It still seems to me that this looks like something that will advantageous to authors and to the publishers who participate, and neutral to publishers who choose to stay away. I’m struggling to see a downside for anyone.

It will be interesting to see the criteria by which publishers are admitted, if others ask to join. My guess is that it won’t be too hard to find criteria that include the likes of Elsevier while excluding International Recognition Multidisciplinary Research Journals, Monthly Publish.

It’s likely a question of waiting to see how well things work. The Neuroscience coalition has been around for a while now with hardly any uptake. If this group can make things work and provide positive benefits, I’m sure others will be very interested in joining.

That’s not what I said. It would be technologically, financially, and resource-wise next to impossible to properly facilitate this at some publishers. This puts those publishers at a major disadvantage. That disadvantage is heaped on top of a bunch of others that, in tandem, are a threat to professional societies and publishers.

Is this group open to all comers? If a journal on Beall’s list wanted to join in and receive/transfer manuscripts would that be allowed? What about journals from Elsevier?

The Neuroscience consortium has a list of criteria and a page setup for those looking to join:
http://nprc.incf.org/joining

Is anything similar available here?

I am not aware of any published criteria for joining and do not know if the group is presently open to additional participants. This is a good question – and one that I did not ask. I would speculate that the 4 publishers would want to keep this as “closed beta” for some period of time, working out the process kinks and getting the technology sorted, before adding more journals to the mix. But this is just speculation. Perhaps one of them will comment.

It seems to me that the goal of OA publishers and indeed all publishers who are now databasing their offerings is to feed the database.

In the past we had page limits, and with that a higher rejection rate; that is no longer so.

Thus, the sales feature offered to libraries is we have more of more.

I think about that a lot Harvey. While it has long been true that a paper written is a paper published, there has been an assumption that for the most part, peer review, in whatever form it may take, will assure the users that extensive time has been put in by experts on the topic to improve even the most mediocre of papers. Not so anymore. The big journals with the big impact factors can certainly expect their authors to jump through some hoops. But the middle tier journals may find that authors just don’t care. The publisher with the most content in the end will be the winner.

I think it may be deeper than that. I think once an author sends a paper to a Journal and that journal is part of a larger stable of journals then the publisher is reluctant to let the paper go. It will find a journal within the stable and publish it. The publisher may use all sorts of esoteric terms or programs and present them to the author as a benefit, but the goal is to not let the paper go elsewhere.

And then there is this posted today by Elsevier: http://elsevierconnect.com/new-streamline-peer-review-process-piloted-by-virology/

Per the editor: “The way Streamline Reviews works is simple. If an author has a manuscript that has been reviewed and rejected by a high-impact journal that publishes papers on the basic science of viruses (a journal with an Impact Factor higher than 8, …), they can send us the original reviews, their rebuttal and a revised manuscript. They should include these extra items as part of their cover letter.

We will then consider the manuscript based on the reviews and usually send the manuscript, reviews and response to one additional expert for an opinion. In theory, this should speed up the review process for these manuscripts — authors do not need to start over at the beginning, and it is easier for someone to give an opinion on the paper with reviews already to hand.”

What do you all think of the new reviewer seeing the older reviews? May this sway the new reviewer in one direction or another? Most reviews are happening simultaneously where the reviewers are not privy to each others comments until after a decision is made.

The article also states that in one case, they sent the paper to the same reviewer that reviewed it originally, by coincidence. Do any reviewers out there have issues with authors sending their anonymous reviews elsewhere?

As an author and a reviewer I am all for this. As an author I have done this in the past with certain journals where the editor allowed it on an ad hoc basis, and in both cases it worked out well for us, the papers were published and they have been well-received and cited. As a reviewer, I am always glad to have the chance to see the other referee reports – they provide a ‘reality check’ on my review and put it into context.

It’s an interesting concept. There’s something of a precipitous drop in terms of Impact Factor between the journals whose reviews they’re willing to accept (must be above 8.0) and Virology itself (3.367). There are six journals listed in ISI’s virology category between 8 and 3.367, and likely a variety of other more general journals one might choose as a second landing point for a paper where one initially had very high hopes.

One would think you’d want to get some sort of permission from the reviewer to reuse their reviews in this way, but since the whole thing is done entirely by the author, with no input from the original journal or the reviewers, I’m not sure how you’d police that. Similarly, how does Virology verify the accuracy of what they receive? Let’s say my paper receives a scathing review from Cell. How would Virology know if I’ve altered those reviews to something much more mild when I send them along?

I am the Editor-in-Chief of Virology who posted the ElsevierConnect article about Streamline Review. I think we need a level of trust in our authors that they will give us the original reviews. In the ones we have received thus far, there have been both positive and negative comments from the previous reviews. In any case, we do have an additional expert reviewer on our end, so anything obvious will be picked up.
As for the “scathing review from Cell”, I think that anyone who has submitted to such journals would agree that sometimes a reviewer is wrong. We give authors the opportunity to respond to that scathing review by pointing out misunderstanding or other mistakes, or to show how additional data allows them to address the criticism.

I don’t disagree, it’s definitely an interesting experiment. Though as noted, one that may be difficult to police if someone tries to pull a fast one. You’re right though in that anything really way off base should be caught by the additional reviewer.

At the moment this is simply a trial set up between a small group of publishers. In theory, it could apply to any journal, though we have not had time to evaluate how successful it will be.

Previous cross-publisher agreements along these lines have not worked as positively as they could, because they have focused on an in-principle agreement to share reviews, without any emphasis on finding and smoothing clear transfer paths that address the commonest resubmission scenarios seen by authors.

BioMed Central’s internal transfer process shows us that when transfer is appropriately targeted, and with editorial buy-in on both sides, it can work really well, and provide a valuable author service. So, we started talking to EMBO and eLife about better managing transfers and sharing reviews because we saw a significant number of researchers were publishing in one of the BMC series titles after having initially submitted to a high rejection-rate EMBO title, and it seemed likely that the same would hold true for eLife.

Ruth:

I think the difference between OA and paper is the need for quantity vs. quality. OA lives on quantity. Thus, to capture all that comes in is the goal. PLoS has published over 31,000 papers since its founding in 2006. (www.richardpoynder.co.uk/Eisen_Interview.pdf) This says something about quality. How many papers has PubMed Central published compared to a comparable number of paper journals on the Springer list since its founding?

Thus, it appears to me that these major players in OA are out to capture much of the market.
