Scrabble tiles spelling "Credit"
Image via 401(K) 2012

Editor’s Note: Joe Esposito pointed out earlier this week that information is not a zero sum game, “Different metrics measure different things for different audiences.” That said, there is a point where too many metrics can muddle the picture we’re trying to see. “Because it can be measured,” is not a good enough reason for a metric to draw one’s limited attention. Five years ago I suggested using a panel of metrics to compensate for the flaws in each individual metric, a suggestion that has recently been revived by a SPARC white paper. At the time I urged caution: “As we assemble this panel though, it’s important that we don’t muddy the waters further with the impact equivalent of chart junk.  Just because you can measure something, it doesn’t automatically result in meaningful data.”

Do we need a metric for everything? Many activities of a researcher are done as a service to one’s community, not out of an expectation that they will lead to financial reward or career advancement. Do we really need to turn philanthropic volunteerism into a carefully tracked and rated competitive exercise? In light of that, and the recent new round of funding announced for Publons, I wanted to revisit the question of how carefully we need to track and rate peer review activities. Is this added complexity necessary when a simple yes/no checkbox would suffice?

Offering career credit to researchers for performing peer review seems like a no-brainer, right? Peer review is essential for our system of research, and study after study confirms that researchers consider it tremendously important. Funding agencies and journal publishers alike rely on researchers to provide rigorous review to aid in making decisions about who to fund and which papers to publish. On the surface it would seem to make sense to formalize this activity as a part of the career responsibilities of an academic researcher. But as one delves into the specifics of creating such a system, some major roadblocks arise.

One such problem falls into the realm of volunteerism and motivation. Right now, most academics see performing peer review as a service to the community. It’s important to the advancement of the field and so they volunteer their time. If instead we turn peer review into a mandatory career requirement that is rewarded with credit, it changes the nature of the behavior. If we set standards (you must do X peer reviews per year), people will then work to those standards rather than performing the more generous acts we see today, where good Samaritans (and good reviewers) take on much larger workloads.

Economists suggest that incentives (a form of reward) change motivation, and that some of this change will be actualized as real behavioral change. Educator Alfie Kohn describes how behaviors shift when rewards are offered in one of his books on parenting:

…there are actually different kinds of motivation. Most psychologists distinguish between the intrinsic kind and the extrinsic kind. Intrinsic motivation basically means you like what you’re doing for its own sake, whereas extrinsic motivation means you do something as a means to an end — in order to get a reward or avoid a punishment…extrinsic motivation is likely to erode intrinsic motivation…The more that people are rewarded for doing something, the more likely they are to lose interest in whatever they had to do to get the reward.

Plugging peer review into a rote system of requirements may threaten both the participation levels and the rigor and enthusiasm with which many approach the task.

A second problem with peer review credit schemes (and perhaps I’m arguing against my own interests here) is the increased power it places in the hands of publishers and editors. Researchers have little control over whether they are asked to do peer reviews. If your career is dependent upon getting a journal to ask you to review a paper, what happens if you’re not on their list? An early career researcher who is not well-known in their field is at an automatic career disadvantage under such a scheme. Many are already resentful of the “kingmaking” ability of editors of journals like Science, Nature and Cell. Turning over even more power over academic career success to publishers might not be so well-received.

Perhaps the biggest problem of all comes when we ask the simple question that must always be asked when new changes to the academic career structure or the scholarly publishing ecosystem are proposed:  Who cares? Not “who cares” as in “peer review is unimportant and no one should care about it”, but “who cares” as in “who exactly are we asking to grant credit here?”

As we are constantly reminded, the two things that matter most to academic researchers are career advancement and funding (and the more cynical among us suspect that the former is primarily dependent on one’s ability to secure the latter).

If I were an administrator at a research institution, I’m not sure I’d want my researchers spending an enormous amount of their time helping to improve the papers of researchers at other institutions. If I were a particularly wise administrator able to see the big picture, I would understand the value of peer review and how it is necessary for the advancement of knowledge. So I’d know some amount of credit is due. But it’s not the primary reason I hired those researchers, nor is it something I want them spending a lot of their time doing. Their job is to do original research.

Similarly, a funding agency gives a researcher a grant to do research. A diabetes foundation is looking to fund research to cure the disease and likely wants fundees spending their time doing original research, not reviewing papers from other researchers. How much should they reward fundees for doing something other than what they’ve been funded to do? And back at that research institution, if much of the tenure decision (at least in the sciences) is based on how much funding one can bring in, then if peer review doesn’t bring in funding, it won’t matter all that much.

How much career credit should a researcher really expect to get for performing peer review? I suspect that at best it will be a few percentage points of the overall picture, more likely a box on a checklist – did you do any peer review? If yes, then you get a small bonus amount of credit. No one is going to get hired, tenured, or funded based on a stellar record of peer reviewing lots and lots of papers. There’s a different job where you get rewarded for that — it’s called “editor”.

The proposed peer review credit systems currently under examination, both commercial and community-based, seem like overkill. Many systems offer extensive tracking, point systems, and review of reviewers, which may be unnecessarily complex for a yes/no question. As Joe Esposito has trained us to ask, this is at best a feature, certainly not a product nor a business.

Perhaps something along the lines of the work ORCID and CASRAI are doing will suffice in the end. Tag the activity to the researcher’s identifier and offer a simple yes/no or a tally of peer review events for the year. Do we really need anything more than that?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


25 Thoughts on "Revisiting: The Problem(s) With Credit for Peer Review"

There is one crucial silent assumption about peer review that you appear to make here, David. The same happens in almost all discussions about peer review, especially those suggesting that tallying the number of reviews done is somehow a good reward system.

The assumption is this: all peer reviews are equally good.

As an experienced editor you know that is not true. Some of them are brilliant. Some of them are garbage that hinders science. And that variance exists across the journal prestige ladder; it is not limited to small or poorly run journals.

In the traditional system, and in the tally-the-number reward schemes, what is the incentive to be excellent as a peer reviewer?

I hope nobody suggests it is to get back favors later from the editor, because that would be corruption, and surely we don’t have that at the core of the academic publishing system.

The coin at stake when you do a review is reputation: do a bad job or a good job and the editor will judge accordingly. Always decline the review request and the editor will judge accordingly. Over the course of many reviews for many journals, your ability as a reviewer coalesces into part of your broader reputation with the community. Because nobody is looking over researchers’ shoulders when they’re doing science, their approach to review has to stand in as a proxy for their approach to their own work (the other proxy is the quality of their papers). This isn’t remotely like corruption.

Well, in a setting that is private, and happens between a researcher and someone in a position of power, the words “judge accordingly” can include quite a lot of devils in the details…

I don’t make that assumption (and believe me, as a former Editor-in-Chief of a journal, I know how variable they can be). It is a good question, though: how can we drive better, more thoughtful reviews? I think, however, that it is a separate question from how we go about assessing the quality of someone’s research, and that the two should not be inextricably linked.

Just in case my comment appeared confrontational, I want to emphasize that I think David’s post above is great, and identifies real issues. Instead of saying you make any assumptions, I should have said that “the post does not mention that peer reviews are not born equal, and should not be rewarded equally”.

I agree with both points:
1) a collegial system should not be transformed into a competitive field, and
2) there must be equal opportunity for academic recognition.

Agree fully. Would add that not all (free) peer reviews are done by academics set on career advancement. Many real experts are out of academia, or have followed an alternating or coterminous academic and practical career and have nothing much left to prove but remain in love with their subject and their niche within their subject. I am one of the few experts left alive who specialises in Laos. That is known in appropriate circles and I am sought after rather than seeking after career advancement, which at 70 is a distant memory. Institutionalise peer review within academic career-advancement structure and you could lose more than you gain.

I want to underscore this comment re: the myopia of academic credit for reviewers. As academic options shrink or vanish for younger (and older) scientists, many are taking jobs at organizations that don’t emphasize academic credit at all, or that don’t utilize credit in the way posited by these systems/metrics. These systems almost seem nostalgic, in a way. Long-term funding shortfalls have curtailed careers for many scientists, and they find themselves on short-term contracts with CROs, teaching (where reviewer metrics don’t matter), working in industry, or working in unrelated fields.

There may be merit to giving credit for reviewing for those reviewers still in academia, but there is some risk — that is, if some reviewers are getting credit (academics) and other reviewers are asked to do the same work without credit (industry or teacher or contract researcher), the non-academic reviewers may check out of the activity altogether.

Interviewing scientists, there is a feeling that academia has broken a promise to them — trained them up for academic careers, then let them struggle and flail without support. This has not led to the most forgiving frame of mind among many younger scientists, who still love science but feel like academia led them on, trained them up, then abandoned them.

Creating an incentive for only part of the population has the potential to cleave and divide, rather than incentivize. This could cause the group without the incentive to disengage, especially if they already feel like academia broke an implicit trust with them, and therefore they harbor some resentment or hard feelings this incentive would only bring to mind.

I think it’s important not to portray scientists as a single homogeneous group, but as a very heterogeneous group that may include academic researchers (tenured/non-tenured), government researchers, educators, postdocs, and graduate students, among others, working in various environments, all with some variations on their reward systems. A graduate student leaving academia may benefit from listing reviewing on his/her resume when applying for a first job, whereas a career tenured professor may derive little (if any) benefit.

The market for academic publishing is big and diverse enough where working to benefit one group may be sufficient and not rejected out-of-hand.

Excellent piece that captures the inherent conflicts of peer review demands: they are imperfect, essential, mostly thankless, and often unfair. Researchers spending time on reviews are not spending that time getting their own research done, and so researchers want to limit their time doing them, and institutions/supervisors want their people getting their own work done, not spending time helping their colleagues/competitors get their work done. Yet, they know that when they submit their own work for review, the editors have to round up reviewers, hopefully good ones. If not enough volunteer to review, the whole system suffers with slow publication or poor quality reviews resulting in poorer quality publications. Commenters on this blog and other science publication oriented blogs often complain how lame peer review can be, but in all the author surveys I’ve seen, when the authors were asked whether they thought their final manuscripts were improved through the review process, the “yes” responses were resounding.

So why do I say peer reviews are often unfair? Because Prof. Big can skate along, turning down every peer review request without penalty, while Dr. Model Citizen dutifully returns several per year without reward. So would a Publons-like tracking and rating scheme help with that? I doubt it. However, no one minds a “thank you.” I don’t donate money to public broadcasting because they give away coffee mugs or proportionally more desirable gifts for bigger donations, but I might take them or at least appreciate the offer. Publishers have similar goodies to offer, and they already track reviewers in their systems. Give points for free books from their catalog, or credits for so many full-text downloads across their entire portfolio. Let them accumulate without expiration so they are actually of value. As long as the scheme isn’t stingy or a hassle for the recipient, I wager it would be appreciated. At my society-based journal, 10 acceptable reviews are worth an annual membership. No one will do reviews just for that, but if Dr. Model Citizen is doing the reviews anyway, an occasional tangible thank-you doesn’t hurt. I note that some government agencies (e.g., the U.S. Environmental Protection Agency) sometimes pay real money for external peer reviews of their massive documents (I recall ~$600 to $1000 USD depending on complexity). No one will make a living doing those reviews, but it is an interesting recognition that expert reviews take real work, there are limits to volunteerism, and review requests must compete with other demands on researchers’ time.

Reviewing of monographs already does come with such rewards. A typical honorarium for writing a report on a scholarly monograph is $150 in cash or $300 worth of books from the publisher’s list. While the amount of time it takes to read and write a report on a book is usually much greater than that required to review a journal article and the payment comes out to a fairly low wage per hour spent, still there is something tangible offered for the time spent. It is interesting that different schemes for peer review developed for journal articles and books in this respect.

One thing that no one has yet noted is the fact that peer review isn’t just about reviewing journal articles – important though that is. So while you’re right, David, that a funder or research institution might not want to ‘waste’ their researchers’ time on reviewing papers, they also need those same researchers to help do peer review for grant applications, hiring, promotion, and tenure decisions, and more. More tangentially, those same organizations are benefitting from the improvements made to their researchers’ output by peer review. Researchers (mostly) get this I think, but their organizations often don’t. There’s a job to be done in helping those organizations understand the bigger picture – how it benefits them specifically as well as the scholarly community more broadly.

I think you raise a really key factor in the dilemma here, and that’s the degree to which a researcher, a funder, or an institution should act in a self-interested manner: that is, how much should they focus on their own success/advancement versus sacrificing to contribute to the overall community? So many initiatives in academia these days face that same “free rider” problem.

Perhaps there’s a project in here: combining the work ORCID/CASRAI is doing in tagging peer review credit with FundRef and (at some point when there’s an accepted standard) institutional identifiers. Could a public listing of which funders and universities are acting at a deficit be helpful in shaming them into better behavior? Could be…

A similar problem about intrinsic/extrinsic motivation exists for high school students applying to college. How many students would engage in extensive community service if they didn’t think this was important for their getting admitted to college? As an alumni interviewer of applicants to my alma mater, I spend a lot of time trying to figure out which type of motivation exists for which community service activity. Is the student really interested in that activity, or is it just being done to pad the resume?

If a scientist is highly productive all arguments regarding a career are moot. Doing reviews becomes just part of the job. So I must ask who is clamoring for recognition?

To paraphrase Hubbard:

If you want something done, ask a busy person; the other kind has no time.

Kent: My experience is different from yours. I found that most scientists who were either in a PhD program or freshly minted did not have rose-colored glasses on, and realized the perils of getting a tenured position.

As a scientist rather than a librarian, I am relieved to see the lack of enthusiasm for constructing a metric for reviews. As an associate editor for many years, I can vouch for the very variable quality of reviews, and as a reviewer I know the vast differences in time and effort put into any one review. “Best thing since sliced bread” is easy; the really lousy is a bit less so (you have to say why). It is the ones in between that mop up the hours (maybe five pages of comments and reworking of analyses).
I am ancient enough not to bother about career or money. But in my generation, reviewing was seen both as a duty to the community and as an honour to be asked: someone thinks I am capable of applying a critical eye to this paper.
Is it possible that the push to make a metric for reviewing stems from editors and publishers finding it harder to find reviewers? I ask because I get asked repeatedly to review papers that have only a tenuous connection to my own work. I refuse many, but conversations with editors indicate that getting reviewers, especially for high-ranking journals that seek three or more for each paper, is increasingly difficult.

I think there is another key reason why people accept review assignments, or at least why I still do when I can: intellectual exchange. Reading and evaluating a piece of work is a silent seminar. And once you return the review, a good editor will ultimately let you know what’s happened with the piece. Sometimes, depending on the piece and the publication, I de-cloak, and then the author usually takes up the invitation for more exchange about the piece. Thinking is good work.

It’s also true that it doesn’t “count” for merit evaluations in the same way that publications do, but it does indicate the level of one’s engagement in the field (and recognition of same by editors/ others).

Hear, hear, Karin! I can’t resist this opportunity to post one of my favorite quotes about peer review, from a PhD student, which appeared in a blog post published on various blogs, including ORCID’s, during Peer Review Week 2015:
“Although I have had the opportunity to formally review only four or five papers, reviewing papers is one of my favorite things to do. First off, it is a good reminder that not all papers are born perfect, and when I am struggling to try and finish my own work and the prospect of a well-polished manuscript seems too far in the distance, it gives me hope. Second, is there a better opportunity to see what your colleagues are working on and thinking about than by reviewing their work? Third, the idea of being able to help shape the information released into the public sphere is a very enticing. Fourth, it is a great excuse to really think about the assumptions you and others make in your research…when you review, it is your responsibility to stop and think about why this is the way things are done. Fifth, thinking up alternative interpretations and then filtering through the data presented in the paper to determine the robustness of the conclusions is a rewarding challenge. Finally, reviewing papers provides an opportunity to slow-down and formulate a full, well-rounded opinion on something, something which happens unfortunately rarely in the life of the frantic modern scientist stuck in with the nitty gritty details of doing experiments. And I think that from a personal perspective, that final point of generating a sense of accomplishment in doing a good job in thinking things through to the end is probably the greatest motivation for me to review papers.”
The full post is here:

Really interesting discussion. As David mentioned in his comment above, respondents to our recent study indicated that they felt reviewing should be better acknowledged by research assessment bodies/their institution. They also indicated that they would spend more time reviewing if this was the case.

David’s point about extrinsic motivation (the danger of doing something as a means to an end) makes sense to me in the context of discounts/loyalty schemes etc. But, in my mind, there is a difference between “recognition” and “reward”. Respondents to our study suggested that “reward” in the form of payment and discounts was less valued as an incentive to accept an invitation to review than “acknowledgement” – both public (i.e. being named as one of the journal’s reviewers in an annual roll call) and private (a personal thank-you from the editor). Three of the top five most popular incentives to accept a review invitation were related to receiving feedback from the journal – on the quality of their review, learning about the decision outcome, and seeing other reviewer comments. Reviewers want to know that their contribution has been well received, and was worth the precious time they spent. If more recognition in the form of credit for a job well done relieves the pressure on existing reviewers by evening out the workload, attracting new reviewers, and enabling them to spend time reviewing well, then I think it’s worth pursuing.

In my experience, most researchers want to solve tough problems, teach good students, and supervise great ones. Most funding agencies want to fund great research, if they are big, national funding agencies with a broad remit, or to solve tough problems if they are smaller, focused agencies like the diabetes agency referred to above. Both groups will generally work with goodwill towards those aims.

But most young researchers now are living on short term contracts or hourly paid work. Funding agencies are facing increasing numbers of applications with decreasing (or static) funds.

More scrutiny (of everything that can be measured) is not the answer here. You are asking the wrong question.

Jonathan. Perhaps ever-more scrutiny is not the ‘answer’, but I think the subject question is more concerned with a trend towards less scrutiny, particularly in publications that set standards of academic excellence. We might imagine a situation where there is no scrutiny at all. This is already evident in the millions of self-publications on Amazon, where ‘peer review’ is often simply advertising. One likely result of a decline in scrutiny could be that the basics of what is accepted as reliability (‘fact’) would be eroded and devalued. When students, teachers or others interested in any subject ‘Google’ information or search in Wikipedia, they would be presented only with an array of non-refereed information and misinformation.

That more ‘knowledge’ than ever before is readily available almost instantly is generally good. If that knowledge loses any credibility, ready availability could become a negative. Expert peer review (not to be confused with Amazon reviews and statements on Facebook and Twitter) is fundamental to the growth and dissemination of knowledge and the setting of standards of quality. It deserves full recognition.

I agree wholeheartedly, Robert. I was reacting to the suggestion that people should be tracked and measured as they undertook peer review.
