On March 1st, Axios Review, a prominent startup providing external peer review, closed its doors. While submissions continued to grow, volume was insufficient to sustain the business, explained Tim Vines, founder and Managing Editor of Axios Review. The company was founded in 2013 and converted to a non-profit society late in 2016.


I interviewed Vines on what he learned from his experiment in providing independent (also referred to as portable) peer review.

  1. Many editors insisted on conducting their own peer review. According to Vines, about half of the manuscripts referred from Axios went out for another round of peer review. Vines concedes that this was not surprising, as many journal editors send manuscripts out for a second round of peer review after an author submits a revised manuscript. Some authors, however, were puzzled and frustrated with having their paper re-reviewed, and Vines admits that he should have done a better job making it clear that an Axios referral does not necessarily mean that editors are simply left with the decision to accept or reject.
  2. Authors are price sensitive. While submission rates were rising steadily during the early days of Axios, growth stopped when they introduced a $250 USD fee. Vines was surprised by authors’ price sensitivity, considering that many authors in ecology and evolution were willing to pay ten times that fee for publication in an open access journal. An arrangement with BioMed Central journals allowed authors to deduct the $250 Axios fee from their article processing charge (APC), making the total publication cost identical. [I should note that while many funders will cover the cost of APCs, I know of none that cover separate peer review fees.]
  3. Entrenched workflows. Beyond authors’ unwillingness to pay out of pocket, Vines conceded that academic workflows are deeply entrenched. He writes:

Overall, I blame the lack of uptake on a deep inertia in the researcher community in adopting new workflows, particularly one that cost them even a small amount of money. Friends that moved into the business world find our failure bemusing, as their companies always evaluate purchasing a service in terms of how much time/effort they’ll save against how much it costs. The fact that academics won’t pay $250, even if it saves them months of fruitless submitting and resubmitting, strongly implies that they place very little value on their own time, or on the time of their students.

I reached out to two competing independent peer review companies on developments in their own companies since my first review of portable peer review (see: Whither Portable Peer Review).

Janne-Tuomas Seppänen, Founder and Managing Director of Peerage of Science, was cautious about equating the failure of Axios with a failure of independent peer review. First, he noted, their business models are completely different: Axios charged authors for their service while Peerage charges publishers. Second, Axios attempted to outsource traditional editor-managed peer review services, while Peerage gives authors full control of the peer review process. Seppänen was traveling and unable to provide me with current benchmark statistics, but noted that Peerage recently added two titles to its participating journal list.

Damian Pattinson, VP of Publishing Innovation at Research Square (owners of Rubriq), noted that it has been difficult to convince publishers to outsource their review process and editors to accept manuscripts that have already been peer reviewed—even if the quality of review is comparable to what they do internally.

Like most services in academic publishing, independent (portable) peer review needs high volume in order to be profitable. At present, these services are attracting — at best — hundreds of manuscripts per year. For Axios, this was not sufficient to run a company and pay its single employee (Tim Vines). Peerage of Science was started with sponsorship from Finnish institutions and, while the company is now self-supporting, its founders draw no salary or other compensation from Peerage — all of them have jobs in research. Rubriq reviewed 30 manuscripts over the past three months, according to the ‘papers reviewed’ counter on its homepage. The company does, however, offer authors other services, including language translation, editing, and other manuscript preparation and recommendation activities.

In the past, I’ve argued that there was enough room in scholarly publishing for two separate review markets: one, based on a volunteer labor market (the traditional model); the other, based on a commercial service model. Until recently, I believed that the market would bifurcate, with “scientifically sound” open access journals gravitating to a commercial service. This doesn’t seem to be the case.

First, while many editors complain that it is difficult to find volunteer reviewers, the absence of a commercial alternative indicates that they do eventually find them. Research strongly suggests that there is an ample supply of reviewers to meet growing demand.

Second, I may have been too critical of “scientifically sound” manuscripts, arguing that they offer little in return for a reviewer’s time and that it may be more efficient to pay a commercial service to review these manuscripts rather than seek a willing volunteer.

Academics express a lot of goodwill when it comes to peer review. According to an international survey of authors conducted in 2015 by the Publishing Research Consortium and Mark Ware, the overwhelming majority of respondents said they enjoyed being able to improve a paper and felt that the act of reviewing connected them to their academic community (see p.36 “Reasons for Reviewing”).

In sum, the success of independent (or portable) peer review starts with a strong belief that the traditional journal-centered model of external review using voluntary labor is failing and can be rebuilt (at least for part of the market) with a new model that requires authors, editors, and publishers to change their workflow. Apparently, this is quite a business challenge.

The closing of Axios Review coupled with unremarkable growth in independent peer review doesn’t necessarily mean that a successful model won’t emerge; however, the current state of portable peer review appears more wither than whither.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/


13 Thoughts on "Wither Portable Peer Review"

I really admire this piece of work. None of what surprised Vines surprises me, nor you I expect. Anyone who has done or read the research in this area, or who has worked in publishing, will expect the reactions to what he was trying to do. Many years ago a guru in the scholarly communications environment told me that the peer review system was broken — you could not get reviewers. I asked him how he knew. He said he had been speaking to scientists. I rudely probed. They were exclusively Californian molecular biologists. I am not saying that there are not problems with the system, but our research (CIBER) seems to show that they are less than is often suggested by some of the start-ups. Of course some of them are offering useful services, and some of them will find a niche and even a buyer.

Excellent review. I somehow missed the news that Axios closed. This is not surprising to me either (or the lackluster performance of other companies like that). Here at ASCE we asked our editors if they wanted a better system for transferring papers within OUR OWN journals. This transfer would send reviews along with it from one ASCE journal to the other. This was rejected. Even in their own community, the editors wanted control over which reviewers were picked.

The other important piece here is the author-pays model that Axios used. I think it was worth them exploring whether authors would pay for this kind of service, but as of yet, we have seen very little that authors/researchers are willing to pay out of pocket for when it comes to their scholarly output. This is why most start-up services we see in lightning talks at publishing conferences end in a sales pitch. The services are free to authors and paid for by publishers.

All in all, I want to thank Tim Vines for giving this a shot. Experimentation in the market is a good thing and all of us can learn from the lessons of Axios.

If BMC paid Axios a fee for every manuscript accepted (rather than deducting the peer review fee from their author’s APC), then authors would not see the $250 fee come out of their own pocket—it would simply be passed to their grant/funder. I’ve seen publishers bundle page and color fees into their APC as a way of passing them on to institutions, which are generally much less price sensitive than authors.

This doesn’t solve the editorial control issue, however.

We actually found that our fee counted as a valid ‘publication cost’ with every funding agency we came across, so researchers didn’t have to pay out of pocket. However, I suspect that researchers guard their research funds just as jealously as they do their own money.

You’re also wrong to suggest that editors were reluctant to use the Axios round of review. In fact, our statistics were consistent with journals treating Axios papers just like their own revisions or resubmissions: 42% were equivalent to ‘minor revision’ and accepted without further review, 42% were given a second round of review and accepted, and 16% were given a second round of review and rejected. These proportions are very similar to the fate of resubmitted manuscripts at a typical mid-tier journal. Better still, our authors always avoided being among the 30-40% of papers receiving an immediate desk rejection.

So, the problem was more that some Axios authors expected all papers going through Axios to be treated as ‘minor revisions’ by the journal. For about half of our papers, the scope of changes requested by our reviewers made it much more like a ‘reject, encourage resubmission’ that gets a second round of review.

With respect to charging authors or publishers, this body of economic theory suggests this kind of service should always be author-pays: https://hbr.org/2006/10/strategies-for-two-sided-markets

Angela: I think editors will always say that they want to pick their own reviewers. However, when you put a paper under an editor’s nose that has been reviewed by 2-3 experts in their field, they’re much more willing to use those reviews instead. They recognize that they might have picked those reviewers themselves, and moreover most of them see starting peer review from scratch as a gratuitous waste of the previous reviewers’ time.

Fair enough. The decrease in desk rejects for pre-reviewed papers is an important metric. I noticed that CSE has a session this year on “decision-making tools” for editors, which I have also been leery of, as my core belief is that each paper should be judged on its own merits without outside information taken into consideration. That said, you have a data point that shows that some level of prescreening of papers may have value for the editors. The question remains how much authors are willing to pay for that prescreening process.

As a scientist, I have always been concerned about the ethics of this business model. In essence, money buys a greater guarantee of acceptance, which can be perceived as unfair, and unethical, by many researchers and editors. This has nothing to do with the volume of submissions or articles. Consequently, the issue of trust arises: do academics trust systems like Axios Review? I say, “good riddance”, to be honest, because there are sufficient corrupting factors in science publishing. The increasing trend towards turning anything and everything into a “marketable” item is what is rapidly corrupting the academic nature of academics. Tim Vines, how would you respond to my concerns?

Hi Jaime,

In answer to your question – yes. Academics in our field trusted us. We had over 100 editors, many of whom were also editors at other journals. Three left us to become chief or senior editors. We had also built relationships with over 60 journals, many of whom regularly accepted our papers for publication without sending them back out for review. That implies a very high level of trust.

I guess you could claim that our $250 creates a divide between haves and have nots, but I’d contend that if a lab doesn’t have $250 they’re probably not doing much research anyway. Every lab I know would spend that before 11am every day of the week.

Lastly, our fee was not for peer review. Peer review is a quid pro quo service academics provide each other. We did peer review management, just like every other journal editorial office. The latter cover their salaries etc via library subscriptions or open access fees; since we didn’t publish anything we had to charge at ‘point of use’. There’s absolutely nothing unethical about charging a fee for this work. Good peer review costs money to organize.

I agree with Jaime: there was an uncomfortable association of Axios’ model with “purchase an introduction to the editor”, and I think many scientists saw it like that.

Not saying that was Axios’ aim. But had the model worked profitably, and then inevitably evolved into a price-tiered system (if not by Axios itself, then by diversity among copycats), it is easy to fear a rather dystopian world coming out of it.

I agree with Jaime on that one. I think it might not be correct to blame the inertia of the research community for the failure.

I have not used their services, and I am not aware of anybody around me who did or suggested it. You can safely assume that money was not the issue (at least for me it was not). It is just that I would not feel comfortable using this system, just like using the PNAS contributed track — I just don’t want to do that. Period.

I am however more than happy to use new ways to communicate our research and discuss science (preprints / code repositories / PPPR / social media).

I would echo two issues Tim highlights:

1) Changing culture in science is slow and challenging — but that is just part of the operating environment we live in; a feature, not a bug.

2) Too many people, both among authors and editors, bizarrely think the measure of success for independent peer review is the proportion of submissions that get accepted immediately after it. And *that* is unfair. We have no magic wand that turns sh*t to gold, neither does anyone else. If your paper is good, it should get published quickly; if it is not, then not. And peer review is about judging that difference in an accurate, justified, and ideally trustworthy way — it is not about helping authors get published.
