Peer review only works when qualified reviewers are willing to volunteer their time and expertise.  When they won’t volunteer, yet expect others to review their work, the system edges toward collapse.

In an article published in July in the Bulletin of the Ecological Society of America, entitled “Pubcreds: Fixing the Peer Review Process by ‘Privatizing’ the Reviewer Commons,” ecologists Jeremy Fox and Owen Petchey propose a solution to prevent selfish authors from exploiting the system: Privatize it.

[Image: The Marriner S. Eccles Building, home of the U.S. Federal Reserve Board. Source: https://www.federalreserve.gov/aboutthefed/aroundtheboard/history-buildings.htm]

Fox and Petchey propose the creation of a central bank they call the “PubCred Bank.”  This is not a bank where real money is transacted. Instead, it works on a form of symbolic coinage they call “PubCreds.”

Review an article and you are rewarded with one PubCred.  Save at least three PubCreds, and you can spend them on submitting your own article for review. Handling editors receive half a PubCred for each manuscript they manage.

The desired outcome of such a symbolic banking system is to distribute the reviewing workload and to deter freeloaders who exploit the time and resources of others without paying back into the system.

The B.E. Journal of Economic Analysis & Policy, for example, uses a similar banking approach. A submission costs you two timely reviews or US $350. Authors may “borrow” review credits but are fined if the credits are not promptly repaid.

Under the PubCred system, authors are required to pay into the system before they use it, although Fox and Petchey do allow for “overdrafts,” in which authors may temporarily hold negative balances. For authors who are unable to pay, editors may grant waivers. Participating journals, however, would be forbidden from paying more than one credit per review (say, to entice a reluctant but qualified reviewer) or from waiving the fee in order to encourage submissions. The changes in author behavior would be immediate, write Fox and Petchey:

This is effectively a means of privatizing the reviewer commons. Alternatively, our proposal can be thought of as simply making overexploitation of the reviewer commons impossible. Furthermore, individuals who attempt to publish smallest acceptable units (the “salami slicer syndrome”), or who resubmit rejected manuscripts without appropriate revision, would be free to do so, but would pay an appropriate price.
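The bookkeeping behind these rules is simple enough to sketch. Below is a minimal model of a single PubCred account in Python; the class name and the overdraft floor are illustrative assumptions, since Fox and Petchey allow overdrafts without fixing their terms:

```python
# Minimal sketch of the PubCred ledger described above.
# REVIEW_CREDIT, EDITOR_CREDIT, and SUBMISSION_FEE follow the article;
# OVERDRAFT_LIMIT is a hypothetical value, as the proposal leaves it open.

REVIEW_CREDIT = 1.0      # earned per completed review
EDITOR_CREDIT = 0.5      # earned per manuscript handled as editor
SUBMISSION_FEE = 3.0     # spent per manuscript submitted
OVERDRAFT_LIMIT = -3.0   # assumed floor on negative balances

class PubCredAccount:
    def __init__(self):
        self.balance = 0.0

    def complete_review(self):
        self.balance += REVIEW_CREDIT

    def handle_manuscript(self):
        self.balance += EDITOR_CREDIT

    def submit_manuscript(self, waiver=False):
        """Charge the submission fee unless an editor grants a waiver."""
        if waiver:
            return True
        if self.balance - SUBMISSION_FEE < OVERDRAFT_LIMIT:
            return False  # overdraft exhausted; must review before submitting
        self.balance -= SUBMISSION_FEE
        return True

# Three reviews fund exactly one submission:
account = PubCredAccount()
for _ in range(3):
    account.complete_review()
assert account.submit_manuscript()  # balance returns to 0.0
```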

Essentially, Fox and Petchey are creating a central bank in which everyone agrees upon the rules of lending.  This is the antithesis of a laissez-faire free market economy, in which transactions are free from state intervention and authors, editors, and reviewers negotiate their own deals.  Under the central bank model, the decision to submit is dictated solely by the balance in one’s account.

The authors have thought their proposal through carefully, providing specific details of how the banking system would operate and acknowledging many of its limitations. Their proposal is built on theory, on knowledge of how the peer review system actually works, and on an admission that it will not solve all of the problems of the current system. They have also responded to many concerns on their blog. For all of these reasons, the proposal deserves consideration and should not be dismissed over a few technical details.

On the other hand, Fox and Petchey assume a state of “crisis” that must be taken on faith. They provide only anecdotes to support the notion that editors are having more difficulty finding competent reviewers; indeed, they freely admit to lacking any longitudinal data. More importantly, they assume that peer review is a burden to researchers, one that provides no benefits, such as seeing relevant research ahead of colleagues, the power to influence its content (“you forgot to cite me!”), and, ultimately, publication. Many journals publicly thank their reviewers and invite the best to join their editorial boards. As practiced, peer review is not a purely altruistic system.

Creating a monetary system, even a symbolic one, changes the incentives to review, and with them, the behavior of authors.  Just as citation indexes changed the nature of citation behavior, we should assume that a review credit system would have similar effects.  Some authors may resort to gaming the system. Fox and Petchey write:

Individuals in positions of power could coerce others to do their reviewing, in return for authorship. People are likely to find ways to cheat, and the PubCred Banking system may have to be modified in response.

Fox and Petchey acknowledge that such a central banking system requires its own source of funding and administration. However, finding the funds to support such a system may be difficult since the return on investment is unclear. More importantly, journals that currently experience no problem in finding competent reviewers may refuse to join, creating an impoverished bank with little peer review capital.


While some of the technical problems with their proposal can be addressed, the real problem with PubCred is that it attempts to provide a global solution to local problems.

The search for competent reviewers is essentially a search for expertise, and expertise is a limited resource. The fact that experts feel overburdened with requests to review is not a sign of a system in crisis, but of a healthy system that is able to locate those pools of expertise.  A central banking system, in contrast, attempts to maximize the efficiency of the system, spreads the responsibility of peer review around, and trades competency for bodies.  It is a system that turns the meritocracy of science into a bureaucratic barter system.

If we are suffering, as many claim, from an avalanche of mediocre science, lowering the competency level for peer review is only going to make things worse.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

31 Thoughts on "Privatizing Peer Review — The PubCred Proposal"

I really think this idea is half-baked. Jeremy Fox and Owen Petchey clearly have good intentions at heart, but they seem to have created a solution even worse than the problem.

And the suggestion that they can fund the administration for their bank with “online advertising” is frankly absurd, and smacks of digital naivety.

The other point I take issue with is that this solution places too high a value on review, and not enough on the research itself, which for me is by far the more important part.

When asked about this on their blog, the authors responded:

This is a selfish, parasitic point of view. If everyone adopted it, no science at all would be peer reviewed, and scientific progress as we know it would cease.

Which seems to me to be desperate scaremongering. The proposed system would hold back the best research and promote anything written by the more prolific reviewers, regardless of quality.

Peer review can be a burden but, as this blog has pointed out before, the burden is not as big as some might expect.

Jeremy Fox and Owen Petchey clearly have good intentions at heart, but they seem to have created a solution even worse than the problem.

Before even attempting to create a solution, no one is quite sure of the scope of the problem. Indeed, the premise of their piece rests on anecdotal evidence.

I’d be interested if any editors could chime in and provide any evidence (for or against) that finding competent reviewers is a major problem, and the extent of freeloading in the system (authors who publish but never review).

Owen Petchey and I wrote the post you quote in response to comments we had received. The passage you quote was written in response to a comment which did indeed claim that no scientist should feel the least obligation to review anything, because doing so slows the progress of research and “nothing” should slow the progress of research. Far from being desperate scaremongering, our reply merely took this point of view to its logical conclusion. It’s obviously unlikely that the current peer review system will ever break down to the point where no one is willing to review anything, although one can certainly wonder whether serious problems requiring solutions will develop (or perhaps already are developing) well before that point.

In our view, someone who truly values research values not just data collection and ms writing, but rather the entire research enterprise. It’s not a matter of valuing “secondary” activities over “primary” ones, it’s a matter of valuing “primary” activities properly. After all, lots of things that would have a narrowly-defined, short-term positive effect of promoting data collection and ms writing would end up *hurting* data collection and ms writing in the long run. In the short term, scientists could collect a lot more data and write a lot more papers if none of them were obliged to teach, but in the long run data collection and paper writing would cease because there wouldn’t be any trained scientists (a deliberately-silly example, of course).

But if you think our proposal overvalues review and undervalues submission, then please put your cards on the table. Do you think the current system appropriately values submissions and reviews? If not, what system would? If so, what do you propose to do if the current system begins to break down in the face of the obvious incentives individuals face to write more and review less? And if your answer is that the system will never break down, well, I sincerely hope you’re right (seriously, I do). But rather than simply hoping that the system continues to work in the face of incentives that will break it, Owen and I believe it’s wise to think ahead instead of waiting for things to get worse.

Callum’s comment above is spot on: the idea of paying for anything, particularly a huge technological undertaking like this, through online advertising seems almost anachronistic given nearly every publisher’s experience with banner ads. The authors also suggest that some publishers would spontaneously build their own banking system, but offer little evidence of any potential ROI from doing so. It’s important to remember, as Kent has been pointing out lately (here and here), that costs run far beyond the initial build. You’d need a constant flow of funds for maintenance, upgrades, etc., not to mention staffing and administration. No idea where that money is supposed to come from.

Peer review systems such as Editorial Manager already include all the data (and most of the functionality) needed to operate such a system – and at much lower marginal cost than creating an entirely new platform.

The obstacle is that publishers and societies may not wish to share their siloed data.

Richard.

You are likely right in that publishers and societies are very protective of their internal data, mailing lists, etc. and would be unlikely to offer to share it with the competition.

But you’re also assuming that the many varied systems used by the many varied publishers for tracking peer review are compatible with one another. Can Macmillan’s plug into Wiley’s? Who knows? The suggested system also requires a new level of functionality: tracking the awarding, saving, and spending of PubCreds across all those diverse and largely incompatible systems. Adding that alone is a major undertaking.

And that has to be paid for somehow, which is a major obstacle. That, and of course that it’s a flawed system that is unlikely to solve a problem that may not even exist.

Yes, the systems can integrate. However, an open standard for uniquely identifying people is an essential building block, and initiatives such as ORCID and ISNI put this within reach. Exchanging data between disparate peer review systems is entirely feasible once an (XML) data exchange standard is in place. (Incidentally, Aries proposed this almost 10 years ago.)
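For illustration only, a review-credit record under such a standard might look something like the following sketch, built here with Python’s standard XML library; the element names and values are invented, since no such standard yet exists:

```python
# Hypothetical sketch of a review-credit record keyed to a unique
# person identifier such as ORCID. All element names and values are
# invented for illustration; no agreed exchange standard exists yet.
import xml.etree.ElementTree as ET

record = ET.Element("reviewEvent")
ET.SubElement(record, "reviewerID", scheme="ORCID").text = "0000-0002-1825-0097"
ET.SubElement(record, "journal").text = "Journal of Hypothetical Ecology"
ET.SubElement(record, "manuscriptID").text = "JHE-2010-0042"
ET.SubElement(record, "completedDate").text = "2010-09-01"
ET.SubElement(record, "creditsEarned").text = "1.0"

print(ET.tostring(record, encoding="unicode"))
```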

There would be some cost, but it would be a small fraction of the cost of building a centralized mega system (run by the Government?).

Richard

Can you specify which systems used by which publishers are cross-compatible? Are any current systems compatible with ORCID or any other identifier?

I too would be very keen to see some evidence of this, Richard, because I completely disagree.

Large organizations typically have enough trouble integrating their own in-house data, let alone talking to third parties.

It is very easy for laypeople to bandy around words like XML and say it’s easy without really understanding the technical ramifications of the task.

Maybe I should have written: “most leading systems could integrate.” All the systems store similar data elements, but there is currently no agreed standard for exchange. An established person identifier is a necessary but not sufficient attribute of an exchange standard.

Most leading peer review system vendors (including Aries) are participating in the ORCID initiative, but (as far as I know) the ORCID platform itself has not yet been launched. When ORCID is fully available, spin-off applications of this type are likely to follow.
Richard

It all sounds a bit vague. Systems “could integrate” but that will likely mean a massive amount of custom work done on each of the thousands of publishers’ systems out there. As an example, looking at my online submission/review system, I can generate reports on individual reviewers, or on the group as a whole, but no individual identifier information is available in the group reports. I have no mechanism for exporting any of those reports. I have no mechanism for importing any such data from outside my system. I have no mechanism for networking my system with that of other publishers. No way to get or give continuous live updates of data. No way to track something like PubCreds and credit or debit the system. That’s a massive amount of expensive bespoke work on the platform for just one fairly small publishing house. Now multiply that across all journal publishers. The economic model doesn’t make sense. It’s got to be cheaper to build one universal system than to try to shoehorn a variety of systems together. One big build versus thousands of big builds.

And if you assume that the system needs an identifier like ORCID, a spec that’s not yet finalized, a platform which has not yet been launched and thus is not yet supported anywhere, it seems to me that there’s more than one obstacle to implementation here.

The problem I see is that there are many people I would never choose as reviewers for a manuscript I’m handling. I don’t trust their judgment, or their expertise is too limited. Yet I don’t think that they should be banned from publishing their own work merely because they are poor reviewers.

In the current system, these people are admittedly a drain, in that their manuscripts consume reviewers’ time even though (rational) editors never ask them to review anything themselves.

The way I see it, the workload of peer review is a problem worth solving, but hopefully any solution will improve the scientific output. Adding a bunch of marginal reviewers into a barter system might be a net improvement (wisdom of the crowd?), but I sort of doubt it.

Interesting discussion. I briefly participated in some previous online debate about this proposal at Nature Network recently. I can speak a bit about an actual peer-review process, as I am an editor at the journal Nature. Although I no longer actively manage peer review, I am closely involved with it and with the editors who do.

Regarding some of the technical solutions being proposed here, I am aware of two that exist already. One is the Collexis “referee finder” tool, recently acquired by Elsevier; it is now integrated into manuscript tracking systems and scans and matches against quite a big database, though not all disciplines. It is expensive. The other service I know of is JANE, which is free; it is not customised for finding appropriate referees but can be adapted for this purpose. We don’t use either at Nature, but I believe other publishers do.

In terms of whether the peer-review process is broken, I think not. Like any system, it is not perfect, and it does not set out to do what some people incorrectly think it does, e.g., detect fraud. I think the quality of the service to the author depends on how much effort the journal puts into it. Having professional editors who choose two or three independent reviewers and go through two or three rounds of review, so that the reviewers can see and if necessary comment on each other’s reports, is one excellent way to manage a fair process. Another is for the journal to rely on referees for technical advice, but for the editors to use their brains to think about the interest level of the work (for journals that are overwhelmed with submissions, that is).

I do think it is a problem that there are so many journals, and that this is increasingly likely to overwhelm referees. I think that people on the whole are happy to referee papers and, in return, get their own papers refereed to the same standard. This is our experience, in any case.

But I do think that journals could do more to reduce the referee burden overall, for example by thinking creatively about how to make the “reject, resubmit elsewhere, second journal sends ms to the same (or different!) referees” cycle more efficient overall. The Neuroscience consortium is a good example of different publishers working to the benefit of a community.

The “pooled resource” approach of passing along reviews on rejected manuscripts to the next submission site does seem interesting in terms of efficiency. I worry, though, that it does away with an important check and balance the current system provides. If my paper goes to Journal X and they choose a poor reviewer who I think gives me an unfair review, I want a fresh start when I submit it instead to Journal Y. I don’t want my work permanently bogged down by that unfair review; I want it viewed with a clean slate by an unbiased reviewer.

I have a blog post floating around in the back of my head addressing the necessity of redundancy in many areas of science, and this may be a good example to discuss….

In the few cases that I know of where this is possible, the choice is up to the author. If he/she thinks the reviews are fair, he/she submits to the next journal down the food chain at the same publisher (e.g. NPG, Cell Press, now also Science) or consortium (Neuroscience consortium). If the author thinks the reviews are not fair, they may submit to the next journal of their preference without mentioning the previous submission to a related journal.

Thanks for the info. That makes the system seem a lot more tenable. But still, I have to wonder why any author would agree to have negative reviews attached to their resubmission. Is there any advantage to letting the editor of the new journal and the reviewers know that previous editors and reviewers found your paper lacking? Why would you do this?

If authors get clearly negative reviews, obviously they will prefer a clean resubmission elsewhere. On the other hand, if a paper gets positive reviews but not enthusiastic enough for the journal of first choice (e.g. Cell or Nature), the editors in the next journal down (e.g. Molecular Cell or Nature subjournal X) might accept the paper on the basis of those same referee reports without sending it out for further review. The advantage to the authors is that they get a quick decision, with a good chance that it will be positive, rather than starting from scratch in another journal with different reviewers.
For this process to work, the editors concerned have to trust the reviewer choices of the first journal, and they also have to recognize their relative positions on the journal prestige ladder. This delicate balancing of egos is fairly straightforward for journals belonging to the same publisher, such as Cell Press or NPG; and we will find out in due time how well that might work for a subset of journals in the same field, such as those that are collaborating in this way in the neuroscience journals consortium.

I see how this might work well within one publisher’s properties (though it’s been my experience that most individual journals are somewhat siloed, and it’s unclear how the egos of the editors of the lower journals would respond to having decisions made for them by editors at the higher ranked journals). That could streamline the process if the suggested same-publisher journal meets the desired next-landing place for the authors.

But going outside one company is likely to be fraught with peril. As you note, it requires a keen understanding of the ranking of journals, and also that the understanding match perfectly with the understanding held by the editor of the secondary journal. This may not be all that easy to achieve. Editors are human, and we likely over-rate the products of our own efforts.

It seems like most authors understand this, and want their articles presented with a clean slate (the Neuroscience Consortium reports an uptake of only 1-2%).

Belatedly, I certainly agree that the author should control the process, that is, it is up to the author where to submit her/his ms and what information is provided to the journal. These “manuscript transfer” services are just that, services, for the author to choose, or not.

David, I have said on your related post that I don’t think the proposed PubCred system is such a good idea. I’m also with Maxine: I don’t think peer review is broken or that we need a totally new system. My personal view of peer review (as author and sometimes reviewer, not coming from the publishing side) is that it can be improved in several small areas.

ORCID could help improve peer review, as it will make it easier to track authors and reviewers across submission systems. ORCID just launched as a non-profit and has more than 100 participating organizations. A public beta is expected for spring 2011 (Disclaimer: I’m on the ORCID board of directors).

I think ORCID would be massively beneficial in other realms as well. As an editor, and as a reader of the literature, it’s often impossible to track down papers from authors who share the same last name and first initial. Simply being sure I was getting the right material from the right Smith, J. would be a tremendous help.

I do think that reviewers should receive some positive career credit for their work. But I don’t think PubCred is the way to do it. I want a system of positive reinforcement rather than a punitive system. And I also want one that puts the proper level of emphasis on the credit given.

David, finding papers from authors with common names is obviously an important use case for ORCID. It makes it much easier, for example, to figure out whether an author of a submitted manuscript and a potential reviewer have coauthored a paper in the past.

If enough journals use ORCID to track their reviewers, it would be possible to collect that information and display it in a public profile (in 2010, researcher X reviewed 4 papers for journal Y, 1 paper for journal Z, etc.). The next question would be whether we really want to make that information public, and to what extent.
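As a toy illustration of the coauthorship check mentioned above: once authors and reviewers carry unique iDs, the test reduces to a simple set operation. All iDs and paper lists below are invented:

```python
# Toy conflict-of-interest check using ORCID-style identifiers.
# With unique iDs instead of ambiguous "Smith, J." strings, prior
# coauthorship becomes a set lookup. All data here is hypothetical.

submitting_authors = {"0000-0001-0000-0001", "0000-0001-0000-0002"}

# Hypothetical past papers, each represented as a set of author iDs:
past_papers = [
    {"0000-0001-0000-0002", "0000-0003-0000-0009"},
    {"0000-0004-0000-0007", "0000-0005-0000-0005"},
]

def has_coauthored(reviewer_id, authors, papers):
    """True if the reviewer shares any past paper with a submitting author."""
    return any(reviewer_id in paper and authors & paper for paper in papers)

print(has_coauthored("0000-0003-0000-0009", submitting_authors, past_papers))  # True
print(has_coauthored("0000-0005-0000-0005", submitting_authors, past_papers))  # False
```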

Speaking as an author (averaging 4-5 publications per year), a reviewer (dealing with a “burden” of 20-30 manuscripts per year), and an editorial board member of three journals, I think this PubCred idea is simply insane. Peer review is not something one wants farmed out indiscriminately, at least not if one wants quality reviews. On the other hand, I know excellent scientists who publish brilliant papers but are terrible reviewers. The adage “if it ain’t broke, don’t fix it” applies here in spades. Peer review is not perfect, and small steps to improve efficiency and fairness are welcome, but they should be small and careful steps.

First, thank you to Philip for such a balanced and thoughtful post on our PubCreds idea, and to the commenters as well for their feedback. I wanted to briefly reply to Philip’s post and to some of the main points raised in the comments.

It is true that our article is motivated primarily by anecdotal evidence for a serious ‘tragedy of the peer review commons’ (although one could question whether the numerous editorials we cited constitute ‘anecdotal’ evidence, since the EiCs who wrote them presumably have extensive data on what’s going on at their journals). Owen and I actually agree that any effort to solve putative problems in the peer review system ought to be preceded by an effort to quantify those problems. One of our goals in writing the article was to spark a serious effort within our own field (ecology) to collect/compile relevant data. For some thoughts on what sort of data one might want, see http://www.ipetitions.com/petition/fix-peer-review/blog/2983

Many commenters, both here and elsewhere, have questioned whether PubCreds requires a way to uniquely identify authors and reviewers. It would certainly be helpful to have such a system, but Owen and I don’t think it’s essential. In principle, what’s needed is a way to uniquely identify accounts at the ‘PubCred Bank’, much as with the real banking system, in which accounts, but not account holders, are uniquely identified. For details see http://www.ipetitions.com/petition/fix-peer-review/blog/2677 Fortunately, this point may be moot because ORCID seems likely to come online before any PubCred-type system could be implemented.

As Owen and I said in our article, the problem of how to pay for the startup and ongoing operation and improvement of the PubCred Bank is an important one. Our initial suggestions in the article as to how the system might be paid for are probably not workable; they were mainly intended to flag costs as an important issue for our audience of fellow academics (most of whom have been much more concerned with how the PubCred system should work than with how to pay for it). Owen and I explicitly admitted our inability to address this issue, but that does not mean we are under any illusions that it will be easy to address. Our discussions with publishers have so far confirmed Philip’s suspicion that publishers do not currently see PubCreds as an attractive investment relative to the other investments they might make. The key issue for publishers is not so much the absolute size of the investment required as the expected return relative to alternative investments. Owen and I plan to proceed with data collection/compilation efforts while continuing informal discussions with publishers on this.

As to how challenging it would be to share data across different online ms handling systems, we plan to find out: we have been put in touch with folks at Elsevier and Wiley-Blackwell who should know the answer.

Philip notes in passing that reviewing does have its rewards. In our article, Owen and I noted this as well. Indeed, we’re recipients of such rewards ourselves (we sit on journal editorial boards). But some of those rewards can only accrue to relatively few people, and others clearly aren’t highly valued (if they were, the perception of a peer review system in crisis wouldn’t be so widespread).

Philip’s closing point regarding the competency of referees is a fair one. A partial answer is to note that our proposal leaves editors free to choose referees, just as they do now. A second partial answer is that, anecdotally, senior leaders in the field often do little or no reviewing; under PubCreds, the great expertise of these leaders might become more available to the peer review system. A third is that, at least in our field, it is not the unique expertise of “popular” reviewers that editors value so much as their willingness to take the time to perform thoughtful, constructive reviews. The many handling editors we’ve spoken to (we’re also handling editors ourselves) have indicated that, in ecology, willingness to review, and to review well, is much more narrowly distributed than technical expertise. But we freely grant that the same may not be true in all fields.
