Scrabble tiles spelling "Credit"
Image via 401(K) 2012

Offering career credit to researchers for performing peer review seems like a no-brainer, right? Peer review is essential for our system of research, and study after study confirms that researchers consider it tremendously important. Funding agencies and journal publishers alike rely on researchers to provide rigorous review to aid in making decisions about who to fund and which papers to publish. On the surface it would seem to make sense to formalize this activity as a part of the career responsibilities of an academic researcher. But as one delves into the specifics of creating such a system, some major roadblocks arise.

One such problem falls into the realm of volunteerism and motivation. Right now, most academics see performing peer review as a service to the community. It’s important to the advancement of the field, and so they volunteer their time. If instead we turn peer review into a mandatory career requirement that is rewarded with credit, it changes the nature of the behavior. If we set standards (you must do X peer reviews per year), people will then work to those standards rather than performing the more generous acts we see today, where good Samaritans (and good reviewers) take on much larger workloads.

Economists suggest that incentives (a form of reward) change motivation, and that some of this change will be actualized as real behavioral change. Educator Alfie Kohn describes how behaviors change when rewards are offered in one of his books on parenting:

…there are actually different kinds of motivation. Most psychologists distinguish between the intrinsic kind and the extrinsic kind. Intrinsic motivation basically means you like what you’re doing for its own sake, whereas extrinsic motivation means you do something as a means to an end — in order to get a reward or avoid a punishment…extrinsic motivation is likely to erode intrinsic motivation…The more that people are rewarded for doing something, the more likely they are to lose interest in whatever they had to do to get the reward.

Plugging peer review into a rote system of requirements may threaten both the participation levels and the rigor and enthusiasm with which many approach the task.

A second problem with peer review credit schemes (and perhaps I’m arguing against my own interests here) is the increased power it places in the hands of publishers and editors. Researchers have little control over whether they are asked to do peer reviews. If your career is dependent upon getting a journal to ask you to review a paper, what happens if you’re not on their list? An early career researcher who is not well-known in their field is at an automatic career disadvantage under such a scheme. Many are already resentful of the “kingmaking” ability of editors of journals like Science, Nature and Cell. Turning over even more power over academic career success to publishers might not be so well-received.

Perhaps the biggest problem of all comes when we ask the simple question that must always be asked when new changes to the academic career structure or the scholarly publishing ecosystem are proposed:  Who cares? Not “who cares” as in “peer review is unimportant and no one should care about it”, but “who cares” as in “who exactly are we asking to grant credit here?”

As we are constantly reminded, the two things that matter most to academic researchers are career advancement and funding (and the more cynical among us suspect that the former is primarily dependent on one’s ability to secure the latter).

If I were an administrator at a research institution, I’m not sure I’d want my researchers spending an enormous amount of their time helping to improve the papers of researchers at other institutions. If I were a particularly wise administrator able to see the big picture, I would understand the value of peer review and how it is necessary for the advancement of knowledge. So I’d know some amount of credit is due. But it’s not the primary reason I hired those researchers, nor is it something I want them spending a lot of their time doing. Their job is to do original research.

Similarly, a funding agency gives a researcher a grant to do research. A diabetes foundation is looking to fund research to cure the disease and likely wants fundees spending their time doing original research, not reviewing papers from other researchers. How much should they reward fundees for doing something other than what they’ve been funded to do? And back at that research institution, if much of the tenure decision (at least in the sciences) is based on how much funding one can bring in, then if peer review doesn’t bring in funding, it won’t matter all that much.

How much career credit should a researcher really expect to get for performing peer review? I suspect that at best it will be a small percentage of the overall picture, more likely a box on a checklist: did you do any peer review? If yes, then you get a small bonus amount of credit. No one is going to be hired, tenured, or funded based on a stellar record of peer reviewing lots and lots of papers. There’s a different job where you get rewarded for that — it’s called “editor”.

The proposed peer review credit systems currently under examination, both commercial and community-based, seem like overkill. Many offer extensive tracking, point systems, and review of reviewers, which may be unnecessarily complex for a yes/no question. As Joe Esposito has trained us to ask, this is at best a feature, certainly not a product nor a business.

Perhaps something along the lines of the work ORCID and CASRAI are doing will suffice in the end. Tag the activity to the researcher’s identifier and offer a simple yes/no or a tally of peer review events for the year. Do we really need anything more than that?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


37 Thoughts on "The Problem(s) With Credit for Peer Review"

There are many problems with peer review, not least of which, as you mention, are the trade-offs in time and coin for the reviewer and the researcher’s need to get coin (promotion/tenure, etc.). Kohn’s comments cut in many ways. For example, the need not just to publish, but to get articles published for the rewards, drives the market. The more acceptances, especially in “ranked” journals, the more credit and, presumably, more coin. When it comes to STEM, the reviewer is caught in a bind for many reasons, but the basic one is that with all the pressure to publish, few have time to really penetrate or even to validate the results of the authors. When chemists publish papers, they are supposed to list all their inputs, many of which cannot be duplicated. So peer review has a lot of blind faith thrown in, separate from the technical editing commentaries, where authors and reviewers alike tend to turn a blind eye to all the style, grammar, and persiflage made into an article with not much substance. In fact, the game is to see how many slices one can get out of a piece of research, often distributed in varied forms in a number of publications.

As with many “measures”, the measure itself becomes the goal. And as that is the goal, we now have more than 20K academic journals and several million articles targeting the measure.

With the rise of deep learning and increasingly capable AI, one would think we are approaching the point where these intelligent systems can do better than a first cut at academic articles, and maybe even write a review, since they can command a much larger database. Then maybe even the research can be done by AI. All that has to be done is an electronic transfer from libraries and subscribers to the publishers. It is said that if you can imagine your job being automated, it can be and will be.

Maybe then we will see Christensen’s disruption applied to the academic publishing industry?

Quantifying credit for peer review may be redundant in a system where editors ask leading authors to do the reviewing, leaving those who have published rarely (or not at all) completely out of the system.

Quantifying the number of reviews leaves us with another problem: it assumes that the time, effort, and expertise put into each review is constant across all journals and reviews. I may put a full day into reviewing a groundbreaking manuscript submitted to a top medical journal with strict guidelines but spend just 10-20 minutes skimming a manuscript submitted to a “sound science” journal that makes no novel claim. Should I receive just as much credit for each?

The issue of motivation is one I had not considered. That is definitely worth thinking about. In the few reviewer surveys I have done, the top two reasons to review are to see what is new in the field and to give back to the process. No one reviews papers to get tenure, although we are asked to send about a dozen letters a year “certifying” that an individual has done a certain number of reviews. Given the thousands of reviewers that we use, this seems like a very small number of people who care.

Phil raises another issue with credit: the quality of the review. Even with journals that have a pretty strict peer review form, some reviews are just not good. Then there is the problem of reviewers who review version 1 but refuse to review any subsequent versions. Do they get credit for the one review if they didn’t see the whole process through?

Every scholarly publishing conference these days has a session on rewarding reviewers and it certainly comes up at editorial board meetings. But it all seems to fall apart in these details.

Getting credit for reviewing does seem to work at the small scale (e.g. allowing an association member to apply reviewing towards CME and recertification). At this community level, incentives, rewards, and motivations all seem to work towards creating high-quality reviews that benefit both the author and reviewer. The model doesn’t seem to make much sense at a larger scale (e.g. counting up the number of reviews or applying this number toward a grand metrics score), however. I think there is merit for this idea, but it appears to be promoted at the wrong scale.

(Publisher’s-side perspective) I was glad to see your comment, because CME credit is what I was thinking about while reading this article. I’d be interested to hear the experiences of other publishers (or authors) with a system in which CME credit was offered for peer review.

I think there is a slight twist on this issue, which is more about retaining excellent reviewers and keeping them going rather than increasing the overall base of reviewers (which can be a complete headache). Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer). As Phil mentions, CME is a nice way to give good reviewers a little something tangible and appropriate for their reviews, but CME is commonly available for most physicians so it’s not a differentiator that is going to change behaviors (i.e., make more people desperate to become peer reviewers). Somewhat additive and non-exclusive credit like this can help retain reviewers without skewing the entire system in the way other incentives might.

So just how much does one pay? Do we have a sliding scale based on academic fame, length, “in-depthness”, etc.?

It seems to me that reviewing goes with the job just as research and writing does.

I am surprised no one has started a reverse OA journal in which the author sells his article to the highest bidder!

In short, is publication a money game for the reviewer/author?

The best idea I have heard recently for a way for publishers to reward and recognize an outstanding review of a manuscript came from a European colleague (I won’t say which country): “Why don’t you send the reviewer a small box of chocolates? It doesn’t cost much, says thank you, and doesn’t go on your income tax.”

There seem to be some extrinsic motives in peer review that have not been accounted for here, such as: a) reviewing the work of others in one’s field strengthens one’s own work, and b) P&T committee members do consider being asked to review such papers as an endorsement. It counts for something. So the publisher acknowledges the work of peer review and the researcher’s institution dispenses the rewards for that work however it sees fit.
This implicit contract is disintegrating as green and gold OA publishers lower their need and regard for peer review. Will this give rise to institutions paying more attention to post-publication peer review? Is all this talk about doing more to recognize pre-publication peer review anything more than rearranging the deck chairs?

Regarding: “Right now, most academics see performing peer review as a service to the community. It’s important to the advancement of the field and so they volunteer their time.”

I doubt that this is true. I have argued that the most common real reasons for reviewing are probably those that people are least likely to admit. Many (probably most) people, regardless of how busy they are, would agree to review, with eager anticipation, every time they are invited to review a paper that cites their own work favorably or unfavorably — or a paper that is supportive or critical of their most favorite theory, or least favorite theory — or a paper authored by a research rival. This is why most reviewers prefer single-blind review (anonymity). These motivations generate a peer-review bias problem that may be bigger and more systemic than we will ever be capable of detecting. It is the grand untestable hypothesis — and unspeakable for most researchers — thus ensuring that the peer-review system will forever remain far from perfect.

That hasn’t been my experience, both as a reviewer and as a journal editor (and as a publisher working with many journals). Reviewers are more often asked to look at papers in their general area of research, rather than the more rare event of reviewing a paper by a competitor or that directly cites them. That does happen, but it doesn’t account for all the reviews done where it is not the case. Also, when asked, reviewers usually talk about community service, about protecting the standards of quality in their field and the quid pro quo of providing a review because they know they will need someone else to do the same for them.

One regularly overlooked aspect of ‘why be a reviewer’ is the opportunity to enhance your reputation. This is true even for anonymous peer review – the editor always knows the identity of the reviewers, and their opinion of you counts. Across 20-30 reviews per year, you can get a reputation as thorough and objective, or as someone who is rude and careless. Since these editors will also sit on your grant panels, review your papers, and potentially become collaborators, being a good reviewer can make or break careers.

This is why developing explicit peer review credit seems like an exercise to please the bean counters – the community broadly knows who pulls their reviewer weight and who doesn’t, and there’s no number that will replace that.

The system of review is simply broken. Remove the incentive to publish garbage and the problem is fixed.

Post publication review by reviewers working within a system that rewards veracity over hype is the solution. In our presently connected world such a system will for the most part evolve naturally if we scientists can muster up the courage to withstand monied selection pressures.

Peer review lends itself to free riding. Those who choose not to engage in the practice nevertheless benefit from the system. This is similar to the way universities that do not support their own presses benefit from having presses at other universities provide the service of publication to their faculty.

I think there are two separate issues here.

First, I think giving credit for peer review is fine, for two reasons. It gives something tangible to those who do peer review, a sense of accomplishment that goes on their record (e.g., ORCiD). Further, it motivates others to engage in peer review because it becomes obvious when they have not (“I see from your ORCiD that you haven’t reviewed a paper for 3 years??”). The motivation to be held in respect in your field (avoiding something negative) is different from the motivation to obtain a reward mentioned in this post (obtaining something positive).

Making peer review mandatory or something required for career advancement, however, is something completely different. It is a bad idea for all the reasons mentioned in this article as well as for another important one not mentioned — people will accept peer review invitations not because they are qualified but because they have to. So I worry researchers will begin to review manuscripts or grants that are out of their specialty simply to fulfill a quota. That will of course deteriorate the quality of peer review that is essential to promote the field.

Cofounder of Publons here, the largest peer review recognition platform (39,000 reviewers getting credit for 120,000 reviews across 8,000 journals). Always good to see peer review recognition being discussed.

David, your theory is that giving credit to reviewers will reduce incentives to review (and review well), give undue power to publishers and editors, and that hirers and funders will reward those who do less peer review rather than more.

Our theory is more or less the opposite. The tens of thousands of researchers that have signed up to get credit for their review work, many of whom are already including their verified review record in promotion applications etc, are an early sign that reviewers value getting credit for peer review.

Either way, surely we can agree that running the experiment is the best way to find out.

If you can find a niche, then more power to you. Personally I think it’s an overly complicated and expensive solution to a problem that may not exist. If you find concrete evidence of researchers being promoted, hired or receiving funding based largely on a stellar record of peer reviewing, please do publicly share it.

One other note: I have heard concern from multiple sources that Publons is apparently publicly posting what are meant to be confidential peer reviews, in violation of the reviewers’ agreed-upon terms of service and as a complete betrayal of the confidentiality promised to the authors of those papers. If Publons hopes to be integrated across publishers’ systems, then this activity needs to be addressed.

Clarification on your last note: a small proportion of reviewers choose to publish some of their reviews, but we forbid this when the reviewed manuscript hasn’t been published (to protect the authors’ privacy), where the journal explicitly forbids it, or where the journal / author / editor formally requests a review to be hidden. To date no author has ever requested that we hide a published review of their manuscript.

We are working on improving our communication to editors on this as it is a question that comes up often. But the solution we’ve come up with balances the wishes of reviewers, authors, and editors pretty well, and it did not prove to be a barrier to launching pilot integrations with publishers like Wiley, SAGE, and the American Society for Microbiology in the last couple of months.

Responding to the point about this being a problem that may not exist (along with the other points raised in the article) might be better suited for a full blog post. Would you welcome a guest submission from us on this topic?

The problem with this policy is that it is “opt-out” rather than “opt-in”. If the journal in question does not have an open peer review system, then the authors publishing their reviews are in violation of the terms that they agreed to when performing the review. It is then up to journals or authors to issue a takedown notice to you to set this violation of confidentiality to right.

To be clear, I have heard several very angry comments about this from different publishers, and it certainly has harmed the reputation of Publons in their eyes. If any publishers are reading this post, you might consider sending Publons notice that you do not allow posting of your confidential paper reviews and request they take down any that are already up.

An anonymous publisher sent the message below, and granted permission for reposting here:

The problem I have with Publons is not just that this is an ‘opt-out’ system, it’s that Publons have made it implicit (but unfortunately not explicit) that they reserve the right not to comply with an opt-out by a publisher where that contradicts the view of an editor/researcher. Their T&Cs state:
“There may be cases where we agree with publishers to hide review content and ensure your review remains “blind”. In those cases we will clearly state why the content is not shown.”

It is unclear if Publons is reserving the right to ignore a publisher’s opt-out request, as it appears they are giving themselves that option.

In my view, the buck stops with the publisher in regard to maintaining standards at a journal, and it is the publisher that complies with COPE guidelines. The publisher’s view on confidentiality should be final.

Also, stating that no authors have requested a review be taken down would be better evidence if all of the authors were definitely aware that the review had gone up.

I have watched the growth of Publons with increasing concern. It’s clear that there’s very little peer-review experience behind the venture, but the big-name publishers and journals who are signing up to the service are giving it a credibility I find alarming.

Nowhere can I find the sorts of guidance to researchers I would expect to see (especially for those at an early-career stage) or considerations of ethical issues (of which there are many). A crucial factor is missing – I can’t find anything about review quality being taken into account. The scoring system for pre-publication review is:

“1 point for being on Publons
2 points for being open (ie. the review content is published)
2 points for being journal-verified
1 point for every endorsement the review receives on Publons.”

Verification involves just the submission and checking of ‘review receipts’, i.e. the ‘thank you for reviewing’ emails reviewers get from journals. These generally go out automatically from manuscript management systems and are no indicator of quality. So someone who has done ten rubbish reviews that were of no, little or even negative value to an editor/journal could get 50 points according to the first three indicators above. Someone who has provided a journal with two thoughtful, constructive reviews that have not only helped the editor, but provided immensely helpful feedback to the authors, will get only 10 points if open, 6 if not open.
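For concreteness, the arithmetic in these examples can be sketched as follows. This is a hypothetical reconstruction of the quoted point values, not official Publons code:

```python
# Hypothetical reconstruction of the quoted Publons point values.
def review_score(on_publons=True, open_review=False,
                 journal_verified=False, endorsements=0):
    score = 1 if on_publons else 0   # 1 point for being on Publons
    if open_review:
        score += 2                   # 2 points if the review content is published
    if journal_verified:
        score += 2                   # 2 points for journal verification
    score += endorsements            # 1 point per endorsement on Publons
    return score

# Ten poor but open, verified reviews with no endorsements:
print(10 * review_score(open_review=True, journal_verified=True))  # 50
# Two thoughtful reviews, open vs. not open:
print(2 * review_score(open_review=True, journal_verified=True))   # 10
print(2 * review_score(journal_verified=True))                     # 6
```

Note that nothing in this scoring touches review quality, which is exactly the objection.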

Publons notes that it’s already seeing cases where researchers are using its records to show academic activity. It’s very unlikely that those on hiring and promotion panels will know what a Publons score really means. They may well therefore look at publisher, society and journal endorsements, and think ‘must be OK if they’ve signed up to it’.

This is what Publons advises reviewers who haven’t received a ‘thank you for reviewing’ message to do:
“How can I verify my review if I haven’t received an editorial “thank you for reviewing” message?
In this case you can contact the editor of the review and ask them to verify the work you’ve done. You can read more about this process here. You can also send us screenshots (preferably PDF) of the reviews in your journal’s editorial management system. Please ensure your name, the journal’s name and date of submission are visible on the screenshots.”

Those with editorial experience will immediately recognise the implications of this.

The recent introduction of Impact Factor as a metric for peer reviewers is another cause for great concern. Review/reviewer quality isn’t linked to journal Impact Factor. Many society and niche journals have excellent peer-review processes and provide high-quality reviews. What will happen when IF comes into the equation? Researchers may have been, or still are, tied to publishing in certain journals for reasons related to IF rather than personal choice. But they have always been free to give their peer-review labour to the journals and editors who have earned their respect and loyalty, or to support their learned societies. How much harder is it going to be for journals that fall below a certain IF threshold to find reviewers?

Publons is being promoted as speeding up the process of peer review, accelerating science, getting scientific publishing moving faster. It is much more likely to slow it down, increase the work of editors and journal staff, and create chaos. It’s also a commercial product (with increasing numbers of commercial tie-ins) that will be hoping one day to be a valuable commodity. Think of the implications of that and for the information that is being transferred without everyone realising it. It could also, unless serious changes are made, potentially become the most easily gamed system in scholarly publishing.

Credit for peer review service should not be linked to impact factor. Period. I absolutely agree.

Irene, if you ever have any concerns about some aspect of Publons, please do get in touch with us. Our approach is designed to bring improvements for all peer review stakeholders, so we strive to be as inclusive as we can. Input is always welcomed.

I completely agree with the points about the need to take review quality into account, and the need not to increase the workload of already-overloaded editors. Incentivizing better quality review and making editors’ lives easier are both at the heart of the Publons mission.

There are at least a couple of ways we can encourage higher quality peer review. The first is to give credit for all peer review to help change the perception of reviewing from a chore to a source of pride (which seems to work). The second is to establish review quality metrics (based on editor and peer evaluations) to encourage good reviewing behaviours. While for now we are focusing on the former, this does not mean we have no plans for the latter — but we cannot put the cart before the horse. In the meantime, the tools available to those who wish to infer the quality of a researcher’s body of reviews are not far off the tools available for assessing the quality of a researcher’s body of published articles: look at their track record of reviewing/publishing and, where possible, read a selection of their work.

Regarding editor workload, everything we have engineered aims to reduce the admin workload for editors — this includes tools for finding and contacting suitable reviewers, tools to help detect reviewer fraud, and making the review verification process as hands-off as possible. Less than 0.5% of reviews on Publons are directly verified by editors (the two primary methods of verification require no editor effort at all). Taking a minute to verify 1 out of every 200 reviews you commission is a pretty small price to pay for having happy, motivated reviewers. To give just one example of a way in which reviewer recognition reduces editor workload, our data show that reviewers accept 30% more review requests when they know they’ll get credit for their efforts. (More robust data on the effect of reviewer credit on acceptance rates, review turnaround times, and review quality will come out of the pilots with Wiley, Sage, and ASM.)

The mentioned impact factor metric — the average impact factor of journals reviewed for — is one of many supporting metrics we recently released as an experiment to see which metrics reviewers find interesting and/or useful. The feedback has been mixed (see below), so expect changes soon. For more discussion on including journal impact factors in peer review metrics, see the recent paper published in Royal Society Open Science [1] or our blog post and discussion thread on the topic [2].


“More robust data on the effect of reviewer credit on acceptance rates, review turnaround times, and review quality will come out of the pilots with Wiley, Sage, and ASM”

Not true, sadly. These three metrics vary widely through the year, so a study comparing them in a time period ‘before Publons’ with a time period ‘during the Publons trial’ will be confounded by changing conditions over the year.

It’s therefore impossible to ascribe any improvement in reviewer agreement rate, turnaround time or quality to Publons: the improvement might be because e.g. the trial period is in the summer and the control period is the busier winter.

In short, the data collected from the pilots suffers from temporal pseudoreplication* and is not very robust at all. A more effective approach would have been to randomly assign papers to either the Publons trial or the status quo, but that would have been significantly more work for the editorial office.


The comparison I had in mind wasn’t ‘before Publons’ versus ‘after Publons’, but rather ‘reviewers that opt-in to Publons’ versus ‘reviewers that do not opt-in’ during the pilot period. Since the two groups aren’t randomised we’ll all still need to be careful with the inferences we make — but it should be better than any data that exist currently.

OK, but there’s also a lot of variability between reviewers as well. Maybe a combination approach that compares before trial and in trial metrics for both opt-in and don’t opt-in reviewers? That would be a bit like a BACI (Before After Control Impact) study. One would still have to be careful as opt-in is probably the group most likely to respond to the Publons incentives (because they opted in), so your observed effect size of Publons on the metrics would be a hefty overestimate for reviewers as a whole.
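The design being suggested here amounts to a difference-in-differences comparison; a minimal sketch, with entirely made-up acceptance rates, might look like this:

```python
# Difference-in-differences: change in the opt-in group minus change in
# the control (non-opt-in) group over the same before/during periods.
def did_estimate(opt_in_before, opt_in_during, control_before, control_during):
    return (opt_in_during - opt_in_before) - (control_during - control_before)

# Hypothetical mean review-acceptance rates:
effect = did_estimate(opt_in_before=0.40, opt_in_during=0.55,
                      control_before=0.42, control_during=0.45)
print(round(effect, 2))  # 0.12
```

Even this only controls for conditions shared by both groups (e.g., seasonal load); as noted above, self-selection into the opt-in group would still inflate the estimated effect for reviewers as a whole.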

Thanks for engaging in the discussion, Daniel, and I hope very much that you will reconsider the decision to include the Impact Factor metric.

I’m aware of the R-index paper you’ve linked to and have issues with it. The metric is based on an equation I’m not qualified to evaluate, but I don’t agree with two of the assumptions it includes – that the word count of a manuscript can be considered a proxy for review time, and that the Impact Factor of a journal to which a manuscript is submitted can be considered “a proxy for the impact of the prospective paper … as well as the reviewer’s prestige and standing in the field”. There are also a number of other statements/generalisations made for which no evidence is provided.

I echo David’s concerns above that Publons is an opt-out service. I am also hearing concerns from many other editors. Your website currently indicates that you have reviews from over 8300 journals, which represents nearly a third of the world’s ~28,000 active peer-reviewed English-language journals. The great majority will be unaware that they are ‘in’ Publons.

I am in total sympathy with the aims of Publons – I’ve spent the majority of my professional life trying to improve peer-review standards and finding ways to increase knowledge of peer review and acknowledgement of reviewers.

It would be good if you could provide clarification for some of the things you say on your website. For example, that “Our editors check every review we receive. In the case of questionable reviews our policy is to engage in a dialog with reviewers to improve their work.” You currently have over 124,000 reviews; could you outline here the checks you run? Also, how much time is devoted to checking each review? At even 10 minutes per review, this would add up to a workload of 20,683 person hours (2.36 person years). What size editorial team do you have doing this work, and what are their editorial qualifications? The staff team lists just eight people: the two co-founders, two software developers, two data analysts, and two undergrad students. It would help your credibility to provide this sort of information, so I hope you can do this both here and on the Publons website.
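The workload arithmetic above is internally consistent if “person years” are read as round-the-clock calendar years rather than working years — a quick sanity check (the 8,760-hour year and the exact review count of ~124,100 are my inferences from the figures quoted):

```python
reviews = 124_100          # roughly the "over 124,000" reviews quoted above
minutes_per_review = 10

hours = reviews * minutes_per_review / 60
print(round(hours))        # ~20,683 person hours

# Interpreting a "person year" as a full calendar year (24 h/day, 365 days):
print(round(hours / (24 * 365), 2))  # ~2.36 person years
```

At a standard ~2,000-hour working year, the same 20,683 hours would instead correspond to roughly ten full-time staff years, which makes the question about team size even more pointed.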

I think the five organisations ‘officially integrated’ with Publons – Wiley, SAGE, PeerJ, GigaScience, and eLife – have some responsibility to reassure the rest of the scholarly publishing world that they appreciate the sorts of concerns others are expressing and are working with Publons to introduce appropriate editorial and ethical standards.

Daniel, in case you hadn’t seen my follow-up comment of 24 June asking for details of your review-checking procedures, I posted a comment directly on your tumblr post.
I’ve also been wondering about the checks you run to ensure the journals you have reviews from are legitimate journals, not scam or questionable ones – it would be good if you could outline those too. There are now over 8600 journals listed on the Publons website, with over 100 added in just the past 2 days. The number of reviews has gone up by over 1000 in the past 2 days. Running appropriate checks on these sorts of numbers represents a very considerable workload.

Apologies for my slow response, Irene — busy busy busy. Working through your comments in order:

I agree with you on the R-index paper, although I’m wary of holding peer review metrics up to unrealistic standards — many of the same criticisms can be levelled at the various metrics we use to assess article quality too. On a more practical note, an issue I had with the R-index paper is its understatement of the difficulty of standardising (and gaining access to) editor evaluations of reviews. That strikes me as a tough ask. Still, any and all research on crediting peer reviewers is welcome!

I’m aware of (and appreciate) your work on improving the state of peer review. It is perhaps a failure on our part that we have not yet crossed paths — my cofounder Andrew is now based in London so I’ll suggest he get in touch with you.

The FAQ you’ve found about checking every review is embarrassingly out of date — I believe that was written in the days when our review count was in the double digits! We now have more automated checks to prevent authors from writing post-publication reviews of their own papers, signing up claiming to be someone they are not, etc. And as alluded to in an earlier comment, the vast majority of our review verification now occurs via the processing of emailed review receipts, or receiving data directly from the publisher. I’ve now fixed up our FAQs, so thanks for bringing that to our attention.

Re our journal checks, a journal’s presence on Publons is no endorsement; it just means a reviewer has added a review that was done for that journal. The additional transparency of being able to see a journal’s reviewers — who an interested party can then research and contact if necessary — can only help with the issue of “scam” journals. One of the things we’re working on at the moment is a journal profile page that better highlights a journal’s reviewers to help out with that.

That said, we are obviously careful with the journals we officially integrate with.

Quite a lively thread! That itself says something, perhaps about the necessity to vet this topic from many different perspectives.

We see intended and unintended consequences just about everywhere we apply a process and/or technology to a communication system. The comment raising CME (and CPD) is good — there are parallels. What are the unintended consequences of having introduced CME (and CME credit-tracking systems)? And (maybe as important), is there evidence that CME achieved its *intended* consequences?

I will be hosting a plenary panel at the September ALPSP meeting at Heathrow on just this topic. I hope it hasn’t been discussed at every conference, but the panel at this meeting will include a variety of stakeholders in the system. Publishers (or researchers) alone probably can’t drive an ecosystem change (though it does seem that funders can…). Suggestions on aspects of the problem that need new coverage are welcome!

A group of HighWire publishers recently discussed this topic, and I think the sense from the group (all publishers, both society and otherwise) was that peer review is part of a whole ecosystem in which community members exchange information, time, and value, and that siloing one part of the system (review-writing) from the others is not appropriate. I think doing so would make the system more transactional (not all bad, but not all good either) and might foster some of those unintended consequences.

John Sack, HighWire Press

One of the areas in which faculty are evaluated is service to the profession. Peer review counts. Not a huge amount, but it does count, and service to the profession is expected.

The best reward for reviewing, particularly for relatively inexperienced researchers, is that there is no better way to learn how to get your own manuscripts published and your grants funded.

Returning to the notion of incentives to encourage good peer reviews, I wonder if any publishers have offered incentives from their catalogs of products? Say a credit or points for an e-book download (or a real book) for each review scored by the editor as being highly relevant and timely? Points that could be accumulated and redeemed to offset open access options in hybrid journals or other APC charges? Or complimentary access to unsubscribed journals for a period of time? Few publishers are also in the chocolates business, but it seems like there could be some mutual benefit to publishers in encouraging loyalty and good will from reviewers (who are also authors), plus promoting their own products and services. Obviously, like airline miles and other loyalty-rewards programs, the devil would be in the details, and if not handled well it could backfire.
As an applied scientist, I seldom buy books anymore. Too many $80+ disappointments on my shelves that looked good thumbing through at a conference booth or looked good in an online store, but turned out not to be very useful. And I don’t cite what I never read. Maybe some points system for free books might give more exposure, to the mutual benefit of publishers, authors and reviewers?
Many of the postings on SK are from those in academic settings, often biomedical, with good library access. For these writers, the ability to access journals of interest or download e-books may be taken for granted. But among applied “scholarly” readers and writers, good online library access may be out of reach. While I presently enjoy excellent library access through my federal science-oriented employer, access to literature for most of my career has required endless email requests to authors, two-hour drives to my nearest public university, and even ILLs through the city library. At least in the applied sciences, access to publications could be a major incentive that publishers could give their editors to offer in encouraging good-quality, timely peer reviews.

This of course has long been a practice in rewarding scholars for reviews they write on monographs. Typically, reviewers are offered a choice of either a cash honorarium or free books. I think the idea of accumulating credits for article reviews has merit, and it could be tracked easily by the same editorial management programs that people here have been complaining so much about.

One threat Publons poses to publishers is that it can unmask the fact that some papers are published after undergoing peer review of inferior quality. Ever read a paper and wonder how it managed to get published?
Publons is therefore more than just a reviewer credit tool; it is a tool to verify the quality of the reviews that happen at each journal (at least those that allow the contents of the reviews to be made public).
One last note, as a Publons Advisor, I have found the organization very receptive to suggestions and comments.

The incentive for reviewers in getting credit for their reviews is actually very large: they get to tell the world what set of referees journal X is using. This means that if a journal uses a very restricted set of referees, always coming from a certain part of the world, the community will be able to figure this out and “incentivize” the journal to broaden its referee base.
