The theme of Peer Review Week back in 2016 was "Recognition for Review," "exploring all aspects of how those participating in review activity – in publishing, grant review, conference submissions, promotion and tenure, and more – should be recognized for their contribution." In the subsequent two years, the idea that peer review is important and valuable has largely been spread to, and accepted by, the research community. This year's Peer Review Week seemed to take that as a given, yet everything I read seemed to gloss over any practical implementation of how credit for peer review might work. No progress seems to have been made on questions of how efforts should be measured, who will monitor them, and to whom they will matter. Back in 2015, I wrote a Scholarly Kitchen post, "The Problem(s) With Credit for Peer Review," asking those questions, and I felt it was time to revisit them.


The original post pointed out the psychological and motivational changes that happen when one shifts an activity from volunteerism to a required or commercial realm. It noted that being asked to peer review for a journal is largely outside of a researcher's direct control, and so any rewards system based on such activity would concentrate significant power in journal editorial offices and publishers. But perhaps most importantly, it asked the question, "who cares?" — who are we asking to recognize peer review efforts, and what rewards do we expect from them?

In the intervening years, we've seen Publons grow and be acquired by Clarivate. At the same time, there's been no widespread indication by funders or institutions that the information collected has been used in evaluating funding or job candidates. One hears anecdotes about individual researchers who have listed their peer review work on their CV or applications and received job offers or tenure, but I have yet to see any funder or university explicitly declaring that peer review experience is an important criterion for hiring or funding decisions.

Much of Publons' recent focus seems to have shifted away from the promise of recognition and toward building much-needed tools to help journal editors find peer reviewers. Perhaps this is something of a business pivot from Clarivate. They still need to drive the idea of "credit" to get participants to feed them data, but the real utility of that data may be elsewhere, in particular the enhancement of Clarivate's ScholarOne peer review system in an increasingly competitive market. Elsevier's recent acquisition of Aries Systems (maker of the Editorial Manager submission system) opens up a new front where the two companies seem to be squaring off against one another. Publons can no longer be seen as a neutral third-party service in the data/workflow sphere.

The two most common suggestions one hears from researchers are that either 1) peer review should be explicitly included in the job requirements of researchers, or 2) peer reviewers should be paid for their service. Many researcher contracts include vague requirements for "service" but don't specifically define what that means. It could mean serving on thesis committees or mentoring students, but it could also mean serving the community as a peer reviewer. But let's not make assumptions or force anyone to guess — would it make a difference if performing peer review were clearly stated as an expectation in one's contract with a research institution?

This would codify the argument that is commonly made — peer review is part of the job of being a researcher, and if you're earning a salary, then you're already being paid for these activities. Would this be a welcome opportunity for researchers to demonstrate their contributions to the community, or just another burden added to the already lengthy list of hoops through which researchers must jump? I could envision a tenure or hiring committee giving some small level of favor to a candidate who is seen by leading journal editors as an expert in the field. But does this then harm candidates who aren't as skilled at networking, or those who choose to spend all of their time on their own research? Does it bias evaluation against lower-profile researchers at lesser-known institutions, or those from developing countries who, studies show, are less likely to be invited to peer review?

Regardless, any such credit is only going to be a tiny fraction of what's offered for one's own original research or teaching activities. No sane committee is going to care as much about your beautifully argued critiques of the work of others as about the work you're able to accomplish yourself. And if we struggle to get evaluators to read the actual papers (rather than relying on the Impact Factor), then how likely are they to spend time reading peer reviews?

Direct payment for peer review is an approach with its own drawbacks, mainly adding significant costs to a system that many already consider too expensive. How much should a reviewer be paid? If I recall correctly, the Company of Biologists used to pay reviewers $25, and stopped the practice at the request of those same reviewers — the money offered wasn’t worth the paperwork and hassle it took to receive it. One also has to look beyond the actual payments themselves. New systems would need to be built and maintained to facilitate and track those payments. The same goes for any sort of payment done as credit toward a future open access fee or color charge. How will the journal’s submission system remember what reviewing you’ve done in the past and what you’re owed? Either method would mean a major (and costly) revamp of existing infrastructure.
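To make the bookkeeping burden concrete, here is a minimal sketch of the sort of reviewer-credit ledger a submission system would need to maintain. All names here are hypothetical; no existing submission system is implied.

```python
from dataclasses import dataclass, field

@dataclass
class CreditLedger:
    """Hypothetical per-reviewer ledger a submission system might keep
    to track review credit owed toward a future APC."""
    balances: dict = field(default_factory=dict)  # reviewer_id -> credit balance

    def record_review(self, reviewer_id: str, amount: float) -> None:
        # Credit the reviewer for a completed report.
        self.balances[reviewer_id] = self.balances.get(reviewer_id, 0.0) + amount

    def redeem(self, reviewer_id: str, apc: float) -> float:
        # Apply any accumulated credit against an APC; return what's still owed.
        balance = self.balances.get(reviewer_id, 0.0)
        applied = min(balance, apc)
        self.balances[reviewer_id] = balance - applied
        return apc - applied

ledger = CreditLedger()
ledger.record_review("reviewer-42", 100.0)   # e.g., $100 credited per review
print(ledger.redeem("reviewer-42", 1500.0))  # 1400.0 remaining on a $1,500 APC
```

Even this toy version sidesteps the hard parts (verified reviewer identity, portability across journals and publishers, auditing, and fraud prevention), which is where the real infrastructure costs would lie.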

Paying peer reviewers would mean higher subscription prices or higher article charges for open access journals. It is possible that the largest commercial publishers could absorb those increased costs into their significant profit margins and keep costs flat. But this would not be possible for most not-for-profit and independent publishers who rely on much smaller margins. The unintended consequence here is further consolidation of the market, increasing the power of the largest commercial players.

Neither suggested route seems satisfactory, and so we remain in limbo. We know peer review is important, and we know that some sort of credit should be given for the hard work that goes into it. We’ve seen a host of companies try to monetize peer review from a variety of angles (e.g., here, here, here, here, here, and here), yet all seem to have failed.

We now have systems where that work can be tracked and verified. But we still don’t know what to do with it and we still haven’t answered the most important question: what is it really worth?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

32 Thoughts on "Credit for Peer Review: What is it Worth?"

Yes, thanks David! With both my ORCID and my personal hats on, I believe that enabling recognition for peer review – and the many other forms of contributions that researchers make – is important as part of the effort to move away from the current culture of evaluating researchers largely based on their publications. Many individuals and organizations agree this is desirable, but it's only possible if there are effective ways of measuring those other forms of service, which have been largely missing until now. Organizations like ORCID and Publons are starting to enable this, but culture change is typically a slow process. Maybe it's starting to happen, though – the Royal Society Te Apārangi (New Zealand) is the first ORCID member to use our peer review recognition functionality in a non-publishing (in their case, funding) workflow (see https://orcid.org/blog/2018/09/12/recognizing-reviews-grant-applications-using-orcid-interview-jason-gush-royal).

A review is worth about US$100 at JMIR Publications (https://www.jmir.org/karma/description). Currently “paid” as credit towards APC (and yes, this does mean a higher APC for everybody else), but we are also working on a blockchain project to pay this out as cryptocurrency (R-coin), to be used as discount token at a variety of other scholarly service providers and publishers.

Can you tell us more about how you build the system that delivers and tracks these awards? How do you monitor their use? And thinking about the increased APCs that result from some rewards, it’s another instance where there are economic pressures that favor quantity over quality — the more articles one rejects, the higher the price is going to be on the accepted authors, so it makes sense to lower your standards and accept more papers over time.

If one goes back before journals, researchers shared their efforts with colleagues expecting, among other things, feedback. There was merit in such exchanges, including critiques as well as support on all sides. But such involvement took time. Today that collegiality does not exist, in part, though only in part, because of anonymity.

In STEM fields and other data-heavy research, a thorough "peer" review can, and probably should, involve significant work, as made visible in part by the many studies showing non-reproducibility. The same may hold in the humanities and other heavily theory-laden articles. A thorough, critical review is not only time-consuming but would probably lead to a significantly higher rate of rejection or recommendation for revision, with concomitant ramifications for everyone (the journal, editors, researchers, and reviewers), including significant delays and increased backlogs in the system, with further ramifications for institutions and funding sources that have defaulted to Impact Factors and similar review shortcuts.

But, in the long run, in spite of the anonymity, there would be benefits for both the original researchers and the reviewers, in increased quality and potentially significant value (along with potential issues). But it disturbs the settled status quo of the publishers and others who use parasitic measures of value for promotion/tenure, grant decisions, and related issues.

If one strips away the persiflage of trying to find solutions and patches within the current system, peer review has the same exploitative issues we are seeing in the rise of the "gig" economy, such as TaskRabbit or the Mechanical Turk.

Of course, we might consider using one of the emergent AI systems as a first pass in reviewing submissions in spite of the indignant ire raised by bruised academic egos.

As a professional society, we have at least one way to recognize our peer review volunteers that a commercial publisher does not — awards. In 2007 we created an award that we call A Peer Apart, which recognizes volunteers who have reviewed 100 or more papers for us. We honor the newest members during our annual conference and keep the full list on our website. Other than the public recognition, we give them a special lapel pin, which they seem to love. It is a small token, but one way we can show our appreciation. In our industry, fewer than half of our reviewers are academics, so this recognition allows us to show appreciation for the substantial time that they all put into peer review. Whether any of our academic reviewers has been able to get credit for that recognition within their universities is unknown. But we felt it was important to at least say thanks for the time invested in our journals.

I have recently published an article on the benefits of reviewing papers for scientific journals. In the same article, I also compare the characteristics of junior versus senior reviewers in the peer review process, and what a junior researcher can do to attract editors' attention so as to be invited to review an article. You might wish to have a look here: http://peptiko.gr/show_article.php?article=261&selected=-1

"We now have systems where that work can be tracked and verified. But we still don't know what to do with it and we still haven't answered the most important question: what is it really worth?"

Perhaps you are thinking too globally about answering the question of rewards. Just as institutions value publication, teaching, and community service differently, should we not expect them to value peer review differently?

In other words, building the reward system may be sufficient in and of itself.

At MDPI, we do grant our reviewers discounts on the APC to publish in any of our journals. How? Each editor rates the quality and timeliness of each report, and based on that we grant a discount voucher (with a code). To use them, reviewers just need to insert the code(s) when submitting their paper. Apart from that, we also publish acknowledgments to reviewers for all journals and offer several outstanding reviewer awards.

How are those vouchers generated and monitored? Does someone ensure that they’re only used once and only by the person to whom they were granted?

These vouchers are generated and tracked by our submission system; each code can be used only once (it is then deactivated automatically) and is associated with the reviewer, which means that the reviewer must be among the authors of the submitted paper.

We do use our own system, but it is available here: http://jams.pub/. If you’d like to test it, do not hesitate to contact Martyn Rittman (rittman@mdpi.com), responsible for this project.
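For readers curious about the mechanics being described, here is a minimal sketch of how single-use, reviewer-bound vouchers could be enforced. The names are hypothetical, and this is not MDPI's actual JAMS implementation.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Voucher:
    code: str          # single-use code the reviewer enters at submission
    reviewer_id: str   # the reviewer this voucher is bound to
    discount: float
    active: bool = True

def issue_voucher(reviewer_id: str, discount: float) -> Voucher:
    # An unguessable code, generated when the editor rates the report.
    return Voucher(code=secrets.token_urlsafe(12),
                   reviewer_id=reviewer_id, discount=discount)

def redeem_voucher(voucher: Voucher, entered_code: str, author_ids: list[str]) -> float:
    # Enforce the rules described above: the code must match, must still
    # be active, and the reviewer must be among the submitting authors.
    if not voucher.active or entered_code != voucher.code:
        raise ValueError("invalid or already-used voucher code")
    if voucher.reviewer_id not in author_ids:
        raise ValueError("voucher holder must be among the authors")
    voucher.active = False  # deactivated automatically after one use
    return voucher.discount
```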

Free Input to PubMed Commons

Well, we know for a fact that some 7,000 post-publication commentaries were freely made by qualified people over the five-year period during which PubMed Commons operated. These comments provided feedback to readers, the original authors, the original pre-publication reviewers, and the editors who selected those pre-publication reviewers.

When selecting pre-publication reviewers in the future, editors could have used PubMed Commons to access post-publication assessments of those reviewers' work. It should be possible to design software to make this easier. However, the magnificent work of the post-publication peer reviewers was cast aside when, early in 2018, the NCBI deemed PubMed Commons just an "experiment," and moreover an experiment that had failed.

I'm not sure that's particularly relevant to the discussion here, other than that PubMed Commons suffered from the same issue being discussed — participation was so low because there's no recognition or career reward offered for post-publication peer review. Perhaps the difference for pre-publication peer review (why it remains robust) is that it became established as a cultural norm long before the current era of hyper-competition and the piling of an enormous number of demands onto researchers' time. Post-publication peer review came along as an idea after researchers were already under enormous time pressures, so much so that any activity without direct career rewards is quickly eliminated.

David, pre-publication peer review evolved over centuries. In a mere 5 years the expert, freely given, commentaries provided by the nascent PubMed Commons project were gaining increasing recognition. Many contributors were well-recognized in their fields and had little interest in “career reward.” Some of them (like me) were formally retired and no longer under “enormous pressures.” Simply stated, they merely sought truth. A recent commenter (Harriet Gilman) at NCBI Insights remarks:

“I am saddened to learn that PubMed Commons is being killed off – just as I learn that it exists! As one commenter observed, there’s probably no use in complaining. But to reason the program should stop because it isn’t well-subscribed is ridiculous. Problem is, people don’t know about it and it needs to be easily accessible. Why hasn’t it been advertised or promoted? I suspect the number of people who post there is proportional to the number who know about it.”

See this article for a very different view of the history of pre-publication peer review:
https://scholarlykitchen.sspnet.org/2018/09/26/the-rise-of-peer-review-melinda-baldwin-on-the-history-of-refereeing-at-scientific-journals-and-funding-bodies/

What “increasing recognition” were they achieving? What did that translate to in practical terms? What percentage of the biomedical research community was actively participating?

It is clear that you are upset at the NLM’s discontinuance of the service (you leave a comment about it on many of our posts here, whether relevant to the topic at hand or not). Is an agency like the NIH required to continue a service that they have deemed to be a failure because a small percentage of users like it? If it is so essential, and the problems so easy to fix, why not start your own version and prove them wrong?

David, mine was one of the six comments on Melinda Baldwin's review (Sept 26), which backs my point about the maturation of pre-publication peer review over centuries and the nascent nature of formal post-publication peer review. I suspect a relatively low percentage of the "biomedical research community" actually participates in pre-publication review. Harriet Gilman's commentary suggests unawareness as a reason why so few participated in the post-publication review "experiment." The main point I was trying to make is that you should stop worrying about reviewer remuneration and look elsewhere. Formal post-publication review provides an untapped resource for improving the effectiveness of pre-publication peer review.

Being asked to review for a prestigious journal is an honor, which is cheapened by asking for any kind of reward. It really is that simple. If you’re too busy, say no, but don’t later on wonder why you receive fewer requests. Speaking only for myself, I aspired to be the scholar who published quality works but also appeared on editorial boards. That came by writing thorough reviews whenever being honored with a request. There’s always time to do a review, don’t kid yourself. If it takes more than three hours, you’re doing it wrong. The editor will take note of your quick willingness, especially when others are money-grubbing, and your modest efforts will land you on the editorial boards. Editorial board participation is delicious evidence of your being a true scholar, for merit, promotion, or post-tenure. Lazing about, whining about rewards, is the opposite of same. At one point in my long career, I became a journal editor and tangled with a legacy member of the editorial board who refused to review even one manuscript. It was fun to explain why I could not let him remain on the board. Publish or perish applies to the world of reviewing, too, even if the reviews are anonymous to the world. Editors have long memories.

Perhaps it is an honor for a top journal to ask me for a review, but that comment is irrelevant to the other 90% of reviewing tasks. And even if it is an honor, often the only person who recognizes that honor is the editor who asked. Few others know about the “honor” or even whether the review was done well. As indicated by your final two sentences, you seem to think it is perfectly fine that the currency of reviewing only has value in the private network of established journal editors.

Your statement "If it takes more than three hours, you're doing it wrong" betrays a lack of understanding of the variety of research areas and forms of research. There are many cases where an established expert would need days or weeks to determine whether or not the results in a paper are substantially correct.

David,

Your insights and thoughts are on target, particularly, as I noted, for STEM and other data-heavy research. The possibility of multiple rounds of review looms very large in a system facing increasing submissions and pressure to publish rapidly, along with such issues as the possibility of either inadequate information or non-reproducibility. "Recognition" and similar "rewards" are tape and belts for this system, which is ripe for the type of AI-driven systems now being used by the California tech community, at least for a first pass. Scholarly research is not immune.

David, DF's final two sentences were about the negative consequences of not reviewing, not about the positive value of reviewing. In fact, many journal editors do recognize their reviewers publicly. In my experience as an association publisher, I have seen (1) an annual editorial board reception where top reviewers were presented awards, (2) an annual journals reception where top reviewers *and their deans* were invited and the reviewers presented awards, (3) annual published reviewer lists (not glamorous, I know, but they are comprehensive), and (4) the addition of good reviewers to editorial boards, which can be a step toward becoming an editor. So some journals do reward reviewers through public recognition, and some associations celebrate the value of peer review in the presence of officials from academic institutions. I do think that journal publishers could do a better job of adopting more of these practices.

My department (humanities, at a large Canadian research university) has a point system for merit review, on which salary increases are based, and in which each activity is tallied. Reviews (of manuscripts, fellowships, and conference paper selections) are all counted.

Excellent! Do they explicitly call for review activities? What else counts? How do they track and confirm such activities?

FWIW, in the 3 history departments I've worked in, all included reviewing (mss. reviewing of articles or books, that is, not book reviewing post-publication) as a sign of engagement with, and recognition in, the field. It was generally part of service, sometimes with a specific point awarded, sometimes within the category of professional service. When I write (lots of!) letters of recommendation for hiring or promotion, I always note reviewing as a sign of the same.

Scholarly societies that publish journals can provide an incentive by offering reviewers discounts on membership or annual meeting registration fees.

I suspect that, for many, a significant (and sufficient) reward for peer reviewing comes from the opportunity to have influence over the publication success of papers that cite one’s own work favourably, or disfavourably — or papers that are supportive or critical of one’s most favourite theories, or least favourite theories — or papers authored by research rivals. http://www.musingsone.com/2015/03/why-be-reviewer.html

Besides the benefits that peer review can have, some negatives can also be noted. Peer review is slow and costly. It does not allow revisions of published papers, such as are supported by Arxiv.org and Vixra.org. Reviewers are not omniscient and are often biased. Other ways of judging published papers or preprints exist; one of them is discussion of the papers on discussion sites such as ResearchGate.net. Researchers who work in isolation, are retired, or work in industry or other non-scientific organizations have great difficulty publishing affordably in renowned peer-reviewed media. It is possible to publish freely and openly on Academia.edu or ResearchGate.net, or in the form of a Wikiversity project.

Oddly enough, I just today signed off on a 4th year review document that specifically discussed the candidate’s reviewing activity. We evaluate everyone on the three key parameters (scholarship, teaching, and service, where service is construed quite broadly). The memo notes that she has averaged 16 reviews per year: this was seen as a reflection of her stature in the field, but it was discussed in the service section of the document, as “service to the profession”. So it’s not true that it is not taken into consideration. My department, at least, sees it as one of the duties of faculty. That may have something to do with the fact that two faculty members are journal editors…

Cool! Is peer reviewing of papers explicitly stated in their job description as part of their “service” requirements? And how do you track this — do they self-report or do you use something like Publons?

In roles at different publishers I managed a good number of journals across disciplines, both small and large. At the smaller journals, peer reviewers who did a good job were promoted into an editor or board member role, or were asked to guest-edit themed issues. These are informal and effective ways to reward peer reviewers, but this seems to work best with smaller journals and communities, where the editors do not need algorithms to find suitable reviewers.
