The promise of portable peer review took a fatal blow earlier this year when Rubriq, the company that began a radical experiment to disrupt the peer review process, quietly shuttered its author-facing service after years of unremarkable uptake.

When I last reported on Rubriq earlier this year, just 30 manuscripts had been reviewed over the prior three months. According to Damian Pattinson, VP of Publishing Innovation at Research Square (Rubriq's owner), the service has not gone away but has refocused entirely on providing peer review and other editorial services to publishers.

[Image: technology graveyard]

On March 1st, Axios Review, a Vancouver-based company, announced it would close after lackluster uptake. Tim Vines, its founder and chief operating officer, cited several reasons for Axios’s lack of success: price sensitivity, entrenched workflows, and the culture of conducting peer review in-house.

The last surviving portable peer review service in this market is Peerage of Science, based in Finland. Unlike Axios, Peerage is free for authors but charges publishers when a reviewed manuscript is transferred to a journal.

According to Janne Seppänen, Co-Founder and Managing Director of Peerage, submission rates are low but continue to grow. In the past 12 months, Peerage processed 102 manuscripts. Seppänen, who now works full-time at the University of Jyväskylä’s Open Science Center and, like his fellow co-founders, draws no salary from Peerage, offered insight into how the company remains viable:

Financially, Peerage is sustainable, but only because the costs are so low. We want to hire a team and start doing the things I know will quickly accelerate the submission rate, but we need serious money to do that. Finding investors who offer terms I can live with has not happened so far. But the effort to get there continues.

Peerage will soon begin to offer customized submission and peer review solutions for conferences, according to Seppänen, working with Jyväskylä’s Open Science Center and his university’s open access repository.

The commercial future of portable peer review looks less likely today than it did in 2012, when Rubriq announced its new venture. In spite of the rapid growth of the open access Article Processing Charge (APC) model, which shifted fees from the consumer to the producer, there is still little interest in shifting the financial costs of peer review to the author, even if the publisher promises a fast track to publication.

Nevertheless, the versions of portable peer review that persist are not truly portable. With its last pivot, Rubriq has become a service for publishers who wish to outsource some of their peer review and editorial operations. And while Peerage is still standing, it operates like a marketplace within a limited community of reviewers and participating journals.

It may be time to bury the promise of portable peer review.

Phil Davis


Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


14 Thoughts on "Portable Peer Review RIP"

I think that it still will be interesting to see whether new portable peer review options spring up around the big push for preprints. It makes a lot of sense to see preprint servers as manuscript marketplaces. If papers come with open and validated peer assessments – and first indications of interest in the research community – it could save publishers time and money and speed up the process for researchers.

Portable peer-review still thrives within the silos of large publishers and/or commercial submission systems: I think it is fair to assume all mega-journals use a manuscript cascade system for transferring (reviewed) manuscripts from higher-tier journals to the mega-journal, and these systems all use some form of portable reviews. This information is simply too precious to discard.

Sharing these reviews often happens without the involvement of the authors or reviewers (perhaps the reviews are more ‘sticky’ than portable), and portability is limited to journals on the same platform and/or with the same company. Portable peer-review is far from dead, but it happens behind the scenes.

Cascading peer review–the process by which a manuscript with its reviews moves from one journal to another within the same publisher–is a much earlier development in publishing, and it is important not to confuse it with portable peer review. The latter was an attempt to move the responsibility and ownership of the peer review process to the author, who would be the agent in marketing their manuscript + reviews to journals. It was an attempt to separate the peer review process from the publication process and commercialize it.

Cascading peer review appears to have been largely successful, while portable peer review was a flop. In looking at the trajectories of three companies in the portable peer review market (Rubriq, Axios, Peerage), I think we can learn a great deal, which was the goal of my post.

Thank you, Phil. I believe one should not view the trajectories of these companies separately from publishers’ cascading strategies: perhaps there is no commercial niche available for author-driven portable peer-review when cascading peer-reviews already help the author find a new journal after a rejection. Sure, there is ‘vendor lock-in’, but if the publisher’s portfolio is large enough and/or offers suitable alternatives, many authors will not look further.

Janne Seppänen’s efforts show there is some room for peer review separated from journals, but from informal discussions with authors I found that many authors see this as a waste of time unless it gets them into a good journal more quickly.

I set up a cascading strategy at a large international publisher some years ago and during that process I had some difficulty convincing receiving journal editorial boards to accept portable reviews. I heard two major concerns: that authors – when given a choice – would only share positive reviews, and that the boards did not know the previous reviewers personally and therefore would not rely on their judgement. Cascading peer-reviews means the publisher controls both who gets invited as previous reviewer and what exactly is cascaded (i.e. the original, unedited review report): for many of the receiving journals this was more acceptable than using an external portable review service.

Max, Peerage of Science *does* get work published more quickly. But you probably knew this, since journals in your previous employer’s portfolio use it particularly effectively. The peer review itself takes time because it is not hastily done – the main goal of Peerage is to replace the careless scribbles that often pass for peer review even at the “top” journals with carefully written peer review essays – but the overall process is quicker because unnecessary iteration is avoided by concurrent consideration.

For work originally submitted and peer reviewed in Peerage, and later published, here are the publication routes and time delays after Peerage:

32% directly via publishing invitations inside the platform (hence, not “ported” anywhere); 100 days
27% via export to non-participating journals (I imagine these were often re-peer-reviewed); 222 days
42% published presumably without editors having Peerage reviews (I imagine these went through several iterations in various journals); 416 days

We make it possible and easy for journals to reduce delays for peer review itself, but of course have no power over other factors causing workflow friction for publishers.

A positive outlier on the delays is BMC Evolutionary Biology, which regularly publishes articles peer reviewed in Peerage, with a median delay of only 43 days after Peerage review is done.

The central struggle in science publishing is authors trying to find a journal that wants their paper, and journals trying to find papers they want. The ‘submit and hope’ strategy is very inefficient (particularly at high impact journals), and I think everyone privately acknowledges that.

The natural solution to this struggle is the brokerage model, which has been successfully implemented in so many other fields. The three small-scale ventures that tried it out first may not have succeeded, but the logic is inescapable.

I appreciate the effort that went into these experiments. Editors are on the hook for a lot. It’s their name at the top of the masthead and lots of care is put into either selecting reviewers themselves, or bringing on trusted associate editors to do that. I’ve been in many of these conversations and the data shows us that editors still rely very heavily on the people they know to review the content. Portable peer review strips the editors of that control, that comfort factor. I even see reluctance to transferring papers with reviews among our own journals. I know other societies struggle with that as well.

I disagree with Stephanie that reviews on preprint servers will take up the cause. I don’t see any way that random reviews on a preprint server will suddenly win the journal editors over. It’s all about reputation and control (and in many cases, a big contract).

This is not borne out by our experience at Axios: 40% of the papers we sent on to the journal were accepted without further review at the journal, and that included some very staid society publications.

Why? We had a highly qualified editorial board who could find good reviewers. The journal editors largely recognized that the people we’d found came from the same pool that they’d be drawing from themselves, and treated our reviews as their own. Editors can be very pragmatic sometimes…

I think that concentrating on one discipline and one community helped, don’t you? You clearly did some community building. I’m guessing those conversion numbers would be different for Rubriq.

Starting in one community certainly helped, but that effort could be readily replicated across many disciplines given the right resources and people. The key is doing a journal-like review process, so that the journal editors don’t feel the need to repeat it. Rubriq’s process was sufficiently different (paid reviewers, very fast turn-around) that journals likely didn’t quite understand (or trust) what they were getting.

Angela, here too I’d like to draw the difference between the “portable” concept and what Peerage of Science does. In Peerage of Science, editors of participating journals can, with a few easy clicks, send a recommendation to anyone they would like to see review a submission.

Oh yes, Peerage sort of strips two other things from the editor:
1) the *exclusive* privilege to ask people to act as peer reviewers, because everybody has that right in Peerage.
2) the *exclusive* privilege to decide what a peer review is worth, because fellow reviewers also judge, score, and give feedback on a review, and a large number of editors will see the peer review and the judgements. And this social pressure has a tremendous impact on the quality of reviews, even though reviewers are usually anonymous to each other.

Don’t be so quick to bury us Phil 😉

Peerage’s submission rate for the past 30 days is 19 manuscripts (triple the average monthly rate in 2016). Peerage is providing submission and peer review for 1,000+ conference abstracts this winter. And that is from just our first conference customer. And that conference leads to a special issue in the field’s flagship journal, articles in which will be peer reviewed through Peerage.

Or actually, I am OK with burying the idea of “portable” peer review. Peerage’s concept was never that.

Instead, our promise is to have a place where all parties in the publishing endeavour can come together, concurrently, to get peer review done, together, in a way that yields better science. And yes, better business, of course, for both Peerage and its customers.

And maybe investors who share my academic values are considering our equity proposal right now. Maybe.

The desire to improve peer review is surely laudable. However, competing for a researcher’s time means that they need to believe that this time is well spent. The value of in-depth and detailed reviews as compared to simple star or tick ratings is one of the many areas of peer review that bear scrutiny and may even turn out to be complementary.

Attributing the success (or otherwise) of a new idea solely to the potential of the idea itself lacks an understanding of how innovation works. I talked about the many things that can bury innovation in a previous Scholarly Kitchen posting. Sure, the root idea is important, but funding (as Janne has found) is perhaps even more important – because it enables you not only to expedite development (and accelerate market fit) but also to market the idea successfully. Many valuable innovations have withered on the vine not because they weren’t necessary or useful, but simply because the innovators lacked the skills or resources to create awareness, interest, desire, and action – and/or the skills or resources to tap into capital.
