Peer Review Monster
Image via Gideon Burton.

In September 2014, a group of more than 40 Australian journal editors submitted a letter entitled “Journal Reviewing and Editing: Institutional Support is Essential” to the Australian Research Council, the National Health & Medical Research Council of Australia, Universities Australia, the Australian academies, and all Australian Deputy Vice Chancellors of Research. In it, they argued that the review and editing of scholarly papers is a critical element of academics’ work and should be recognized as such by their institutions and funding bodies.

A couple of years before this, Sense About Science’s early career researcher group, Voice of Young Science (VOYS), wrote an open letter to the then CEO of the Higher Education Funding Council for England, Sir Alan Langlands, with a similar request – that the organization recognize the contribution that researchers make to science through the peer review of articles and grant applications.

Other than this commentary (by one of the signatories), both organizations have, to date, offered nothing more than brief acknowledgements.

Which is too bad because, despite recent criticism in some media (see, for example, this Slate post), peer review remains central to scholarly communication. In fact, there is evidence to suggest that researchers are increasingly satisfied with peer review (69% in Sense About Science’s 2009 survey on peer review, compared with 64% in 2007), that they believe it is trustworthy (the 2013 CIBER/University of Tennessee study on trust in scholarly communications states that “researchers agreed that peer reviewed journals were the most trusted information source of all”), and that it improves their research (91% according to Sense About Science again).

However, it’s hard to argue with the perception that peer review is facing, if not a crisis, then at least challenging times. With research output – and articles – continuing to grow at 3-4% annually, more reviewers than ever are needed.

The Australian letter highlights the concerns of today’s journal editors: “To help maintain the publication quality that Universities and the ARC expect and rely upon, research-active academic staff must be involved [in] peer-review or editorial activities.” The VOYS letter, meanwhile, reflects the concerns of the next generation of reviewers and editors, who believe that “Without recognition in the REF we risk reviewing of both papers and grant applications becoming a marginal activity and inevitably inconsistent and shoddy.”

Specific issues include:

  • The tension between the pressure for researchers to publish in peer-reviewed journals and the lack of support for peer review from their institutions and funders. Perhaps my favorite quote from the Australian letter is: “no Nobel Prize was won by an unpublished research work and the peers who reviewed, published and judged the work had to be people whose opinions mattered within the discipline.” Although peer review is rarely rewarded in monetary terms, researchers have historically been happy to participate in the process, mainly for altruistic reasons. For example, the 2009 Sense About Science survey found that 90% of respondents did so because they believe they are playing an active role in the community; only 16% said that increasing their chances of having future papers accepted is a reason to review. However, many institutions and funders fail to recognize or reward their researchers for the time they spend on peer review – whether reviewing themselves or training others.
  • The excessive value placed on publication output by institutions and funders, as opposed to other contributions researchers make to the scholarly endeavor. As the signatories of the Australian letter point out, “The ERA procedures effectively mean that certain research activities are rewarded while other academic activities are not; and that universities suffer financial consequences if their academic staff do not privilege [sic] the winning of large grants and publication of articles in prestigious, high quality journals over all other work.” But without the work of editors and peer reviewers (not to mention the many other contributors), research papers would never see the light of day. Ironically, this is especially true of articles published in journals with high rejection rates – which also typically happen to be exactly those aforementioned “prestigious, high quality journals”.
  • The increasing difficulty of finding researchers who are willing to undertake peer review. With research output continuing to increase at 3-4% annually, finding enough knowledgeable, well-trained reviewers is a challenge. VOYS suggests that reviewing should be “approached professionally and seriously, enabling senior researchers to spend time mentoring early career researchers like ourselves in these activities.” But without support from their institutions and funders, it’s increasingly difficult for senior researchers to prioritize training the next generation of peer reviewers over their own need to publish or perish.
  • The particular challenge of finding reviewers for special issues which, the Australian letter suggests, are increasing in frequency. According to the signatories, “For academics the publication of a Special Issue on a particular theme – especially if it reflects research output for an ARC grant, is deemed an achievement — while for editors the escalating difficulty of finding one reader who is prepared to evaluate the collection as a whole to see that it has integrity and thematic coherence is ignored.”

So, what to do? Both groups essentially make the same request, summed up in the Australian letter as follows:

First, there should be much more explicit requirement and recognition within Universities of the professional service requirements of academics. All academics engaged in publishing should also be involved in reviewing or similar activities, and Universities requiring staff to meet publication targets should also be setting professional service targets. Second, Universities need incentives to develop and maintain these professional service requirements. Inclusion of professional services to journals within the ERA framework is a key option to achieve this.

These modest requests, if implemented widely by institutions and funders, would have an immediate and positive impact on scholarly communications, especially in countries with a centralized approach to the evaluation of institutions for funding purposes, such as Australia and the UK.

Publishers – and societies – are well placed to help maximize this impact. For example, formal peer review training is something that many of our organizations are already investing in individually and would, I believe, be happy to support more rigorously, perhaps through our industry organizations, such as SSP. In particular, perhaps we should be looking to support peer review training for Chinese scholars, since much of the increase in submissions is coming from China. Continuing experimentation with different peer review models is also critical – again, something that publishers are taking the lead on. This includes portable peer review (see here and here, for example); new services such as PRE, which was recently acquired by STRIATUS/JBJS, publisher of the Journal of Bone & Joint Surgery; and post-publication review, for example, at F1000. Initiatives like CRediT, which proposes a contributor taxonomy as a way of facilitating recognition for all contributors to a research paper, not just the author(s), also have a role to play.

Then, of course, there’s also the thorny question of whether researchers should be paid to do peer review – and some certainly feel strongly that they should. The same Sense About Science survey found that reviewers were split on this topic – slightly more than half felt that some kind of payment in kind (such as a free subscription) would be appropriate, just over 40% wanted to be paid (though unsurprisingly this dropped to 6.5% if authors had to pay for it themselves), and 40% would be satisfied simply with some form of acknowledgement in the journal.

So, with somewhat belated thanks to VOYS and the group of Australian editors, here’s to what could be the start of a good, positive debate about how to ensure the essential contribution that peer review and peer reviewers make to scholarly communication is fully recognized, rewarded, and supported – by funders and institutions, promotion and hiring committees, publishers and societies.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

49 Thoughts on "Peer Review — Recognition Wanted!"

A huge part of what’s historically been missing is publication of reviews. Reviewers can sink many hours of work into producing a document that could be of interest to many researchers, but which is only ever seen by two — the original author and the handling editor. That reviews are habitually buried is one of those hangovers from the bad old days when space was limited by the physical size of the journals — but it’s a bad habit that we can break now that so much publication is online.

Some journals are addressing this. For example, the careful and detailed work that Heinrich Mallison did on my sauropod-neck paper in PeerJ can be found at https://peerj.com/articles/36/reviews/#version-0-1-review-1 and includes, crucially, a link to the extensively annotated version of the manuscript that he returned to us. That work can and should be cited as part of Mallison’s academic output.

Accepting that a small proportion of reviews genuinely do need to remain confidential, the great majority do not, and should contribute to their creators’ scholarly record.

(And, yes, of course reviewers should be thanked in the final versions of papers — by name unless they have elected to remain anonymous. It wouldn’t even occur to me not to do this.)

I am very skeptical about publishing reviews, and particularly so if the name of the reviewer is disclosed. We need reviewers to be objective, and to formulate their reviews with no ulterior motive. If my name is published along with my report when I am reviewing, I would have to consider how the report could possibly affect my career. This would be particularly important if I were a young scientist, looking for good academic jobs in the future, and I were reviewing someone senior. I would accept fewer referee assignments if my name were disclosed and my report published, I would have to spend more time on the ones I did accept, and I would always have to consider the possible political consequences of my report.

“We need reviewers to be objective, and to formulate their reviews with no ulterior motive.”

So far we agree. Where we disagree is that I think objective, helpful reviews are far more likely when they are to be made public and attributed than when they are secret and anonymous.

(I don’t know of any research that’s been done into which of us is right, but all the anecdata I’ve been hearing supports my contention. Open reviews tend to be more detailed and constructive. Hardly surprising that people behave better when they know their behaviour is going to be visible to the world.)

I’m with Mike on this one. There is something unseemly about an anonymous review. Is something lost when (if) everything is open (at least to the author)? Sure. That’s called a trade-off.

See the linked article below. Anonymity is important to allow negative reviews to be honestly written, particularly when speaking truth to power. That said, there are two separate issues here, anonymity of reviewers, and making the reviews (signed or unsigned) publicly available.

I agree that in principle open reviews should “be more detailed and constructive”, and I am sure that is frequently the case. However, I have seen first-hand so many times how important it can be for a young scientist to get the right friends in the right places, and how detrimental the opposite can be. This strongly affects people’s behavior at conferences, seminars, etc. In a perfect world, an open review would always be better, but in the real world, I am not so sure. It may be a cynical and negative viewpoint, but I am afraid that I think there are more dangers than advantages with open reviews. I would hate to see peer review also become part of power games. I subscribe to the views in the link posted by David below.

I’m not sure this is something we should be prescriptive about – there are pros and cons to both open and closed peer review, and in addition, just as some people are comfortable commenting publicly on blog posts and others aren’t, some researchers (and journals) will have good reasons to prefer their reviews to be publicly available or not.

Agreed–people should be free to be open with their reviews and identities, but it’s important we preserve the option of anonymity. Most of the journals that practice open review give reviewers the choice of signing or not signing their reviews. Not surprisingly, reviewers are much more likely to sign positive reviews and prefer to leave negative reviews anonymous.

Perhaps with that much work he should have been included as an author!

Another factor, in addition to training, is a method for acknowledging peer review activities. This is something the community is working on, via a CASRAI working group co-chaired by F1000 and ORCID, to determine a structure for citing peer review activities (see blog post: http://orcid.org/blog/2014/04/08/orcid-and-casrai-acknowledging-peer-review-activities). From a citation structure, methods can be developed and employed by publishers, funders, university databases, and profile systems that store research activities, to acknowledge and display peer review service. The working group is finalizing its recommendations today (!), and will be posting a final report in the coming weeks. One thing the group has found is that regardless of the type of peer review performed, it is citable. It will take a few adjustments (such as obtaining a DOI for the front matter pages that list peer reviewers, and ORCID iDs for reviewers), which some journals and funders (and system vendors) have already started to implement.

I fully agree with the described problems. However, I believe that the basic underlying problem has a wider scope, and more serious implications, than affecting peer review negatively.

As a scientist, I will constantly be evaluated in various ways. The evaluations that really matter for me are in connection with requests for funding, seeking promotion, and trying to get a new job. The importance of some of these evaluations can hardly be exaggerated. They can have a crucial impact on my career, and can have great repercussions for my entire life and for my family.

In such evaluations, there is really one thing that will truly matter; and that is research accomplishments, as manifested by good publications. As stated in the blog post, my activities as a reviewer will not matter at all. Worse, and more damaging, is that my accomplishments in things like teaching, diffusion of science, and work for the scientific community, will help me very little. A good teaching record will be a plus, but put in relation to the work I need to put in to be a good lecturer (good for the students that is!), the loss of time and energy from research means that it will always (ALWAYS!) be a net loss.

Thus, if I were to give advice to a young researcher who is eager to try a scientific career, it is possible that my best advice would be: “Do a minimum of teaching, and spend as little time and energy as you can on it. Don’t get involved in any form of popularization of science. Ignore review requests, unless you can see a direct career advantage in the particular request (making a good impression on a senior editor, getting valuable contacts).” Except that I will not give such advice, since I refuse to sink to such levels of cynicism and relativism. I do, however, strongly believe that following such advice would have a positive impact on a career.

I don’t believe paying referees will change much. That payment will never have a weight comparable to the career impact. Those who agree to review for altruistic reasons will do it regardless of whether you pay them or not. Having said that, rewarding referees is still good – not because it makes it easier to find reviewers, but because it is the right thing to do.

In order to truly change this, change has to occur in the review boards for the different types of evaluations that I mentioned above, and this change has to occur at the very top. Unfortunately, I do not see that happening anytime soon, and I have no concrete ideas for how to achieve such a change.

“In order to truly change this, change has to occur in the review boards for the different types of evaluations that I mentioned above, and this change has to occur at the very top.”

Thanks Anders, I agree and this is why this group of Australian editors are pushing their funders and institutions to acknowledge both the importance of peer review (and editing) and the amount of time they spend on it. See this quote from the letter:

“But there is one overarching factor for this trend – the exclusion of editing and assessing from the Excellence in Research for Australia assessment system. The critical issue here is the fact that (to quote from the website):

‘ERA outcomes inform the performance-based block funding that universities receive from Government to sustain excellence in research. This funding provides all our universities with a direct financial incentive to encourage and support world class research. ERA outcomes directly inform university funding under the Sustainable Research Excellence scheme.’

The ERA procedures effectively mean that certain research activities are rewarded while other academic activities are not; and that universities suffer financial consequences if their academic staff do not privilege the winning of large grants and publication of articles in prestigious, high quality journals over all other work.”

Excellent post, Alice. I recently delivered a lecture to journal editors in Brazil about the professionalization of scientific editing, and the first question I got was how they can convince their universities that being an editor is a worthwhile activity. The second question was about how to find quality reviewers.

We have ways of expanding our reviewer database but it doesn’t always help to dump 2,000 more names of unknown people into a database of 10,000 reviewer names. Editors have a tendency to choose the people they know and have already used. I am sure we have thousands of reviewers in the database that have never been invited to review for us.

The comments above from Anders really pinpoint the problem for those you want to have as reviewers. There is not enough bang for the buck – especially when there is no buck.

I do not think that paying reviewers is the answer. And publishing reviews or attaching reviewer names to published papers is not a universally accepted practice across all fields. One thing we do at ASCE is our Outstanding Reviewer Recognition program. Each editor can give us the names of reviewers that have done an outstanding job and we send them a certificate, a letter to their department chair recognizing their contribution, and a special call out on the journal website. We do this once a year and it has been very popular.

If I recall correctly, the Company of Biologists used to pay peer reviewers but discontinued the practice at the request of the reviewers. The token sum that they were able to offer (I think $25 or so) wasn’t worth the trouble that reviewers had to go through to receive the money. Similarly, a researcher recently griped about a token payment from OUP:
http://rajlaboratory.blogspot.com/2014/05/how-much-is-my-time-as-reviewer-worth.html

Phil Davis has written two excellent posts around the question of rewarding peer reviewers, both worth revisiting:
http://scholarlykitchen.sspnet.org/2013/02/22/rewarding-reviewers-money-prestige-or-some-of-both/

http://scholarlykitchen.sspnet.org/2014/05/28/what-motivates-reviewers-an-experiment-in-economics/

I read it. It’s just unseemly. I am not suggesting that the reviews and reviewers’ identities be made public, only that they be disclosed to the author. Wouldn’t you feel crummy writing a review about me without telling me?

No, I wouldn’t. I did it all the time as a researcher and still occasionally do peer review for journals. I do not want my identity exposed to you, as it might jeopardize both our professional and personal relationships. If I honestly think your paper is in error, or stinks, I need to be able to honestly say that to the journal’s editor without fear that you’ll take some retribution against me.

Researchers are human beings and they act with human emotions. It’s hard to be told that your life’s work is no good (no matter whether it is actually true). Science doesn’t select for nice personalities or friendly fairness. If you’ve ever been involved in departmental politics, you know how vicious and cutthroat researchers can be to their own colleagues. Taking revenge on a competitor, particularly one who has publicly thwarted you and called out the quality of your research, is sadly a likely scenario.

When I am approached with a complaint or issue regarding a paper, the first thing I need to figure out is what the history is between the people involved. Sometimes you find out that they were in a shouting match with each other at a conference last year, or they went to the same university and one was kicked out of the PhD program, or something like that. Basically, human drama.

Sadly, it is also true that there are times when the most influential people in certain fields are bullies. If you are being asked to review a paper by a very influential and well known bully, and the paper stinks, you likely don’t want to disclose your name on the review. Like I said, full out disclosure is not ready for prime time across all fields.

If some young guy asks me for a postdoc position, and if I know his thesis advisor, I will ask the latter for his opinion of the person. I want this advice to be for my eyes only. If it is disclosed to the applicant, a lot of the value is lost.

I have written very many letters of recommendation for former students and others, and I feel that I can be more objective if the letter is sent directly to the presumptive employer than if it has to go via the applicant.

It is less delicate to review a manuscript than a person, but it is still delicate.

We, at the British Journal of Educational Technology, use a different way of peer reviewing. Here we have a panel of over 600 reviewers and we send them the abstracts of new submissions for them to decide if they wish to review the full paper. Submissions, choice of reviewers, and referees’ reports and decisions are handled electronically. See Rushby, N. (2009). The BJET Reviewer Panel [editorial]. British Journal of Educational Technology, 40(6), 975–979.

I think we need to face several facts:
a) Both the publishers and the authors need throughput – articles. I have only seen one reviewer who has rejected an article because the author should have combined this “thin” work with one or more previous contributions (regardless of the merit as far as style and quality of the work).

b) It is interesting, as noted in a comment, that journals are seeing more “special issues”. This presents numerous problems, including singling out a paper in the invited collection and thinning out the number of reviewers willing to deal with single articles as opposed to the issue and article in context. And this becomes particularly tricky as more research crosses narrow disciplinary lines, often stretching across several disciplines.

c) Several comments remark that the only or primary route to promotion and tenure is to publish or perish, yet there are increasing calls to consider other contributions. Perhaps these institutions that acknowledge the value of what, in economic terms, might be called externalities, need to match rhetoric with action rather than arriving at decisions by old, default metrics.

I have only seen one reviewer who has rejected an article because the author should have combined this “thin” work with one or more previous contributions (regardless of the merit as far as style and quality of the work).

That seems rather silly. What does the reviewer want the author to do? Time-travel? My own recent paper Quantifying the effect of intervertebral cartilage on neutral posture in the necks of sauropod dinosaurs should have been a section of my previous PLOS ONE paper The Effect of Intervertebral Cartilage on Neutral Posture and Range of Motion in the Necks of Sauropod Dinosaurs, if only I’d thought of it in time (hence the very similar title). I noted that fact in the introduction. But what else was I to do? Having had an idea that should have been in the original paper, my only options were to publish it separately or forget it.

I find the current initiatives for assigning credit by tallying the number of reviews written, or worse, by counting the number of citations the target article gets, deeply flawed.

Two things:

1) Peer review contributions are not equal. Some are brilliant and advance science on their own merits, most are useful for both editors and authors, but some are an utterly useless waste of everybody’s time. Any recognition should reflect this variation in quality and importance.

2) The quality and importance mentioned above are independent of the soundness and importance of the manuscript under scrutiny. Any recognition should not depend on whether the manuscript is published or not. Otherwise we create an incentive to avoid destroying a flawed piece of research.

Good points, and they speak to the difficult notion of putting the idea of credit into action. If everyone agrees that peer review is important, how should researchers be credited for it? How much should that credit count? Does a university really want to give tenure to someone who spends all their time doing peer review rather than doing their own research? Why would a funding agency want to give a research grant to someone for doing things other than research? What “credit”, exactly, are the authors of this letter seeking?

Perhaps we should think more in terms of a strike against people who don’t participate in the community rather than a positive bonus for those who do what should be the norm.

I believe it is not credit in reality, but protected time and recognition that being a reviewer or an editor is an essential academic function. As an editor, I am grateful to all of my editors who give their time altruistically even though their respective universities don’t give a hoot.

Excellent point about negative reviews. All of the “credit” or recognition that has come forth thus far is for papers that have been published. I have seen extremely detailed reviews done for papers that have been declined. If reviews were published with ORCIDs and “credit” given to the reviewer, there would be an incentive to get papers you reviewed published.

To clarify, the idea of acknowledging peer review using DOIs and ORCIDs is to make visible ANY review activity (not necessarily the review itself), whether the paper is published or the grant awarded, etc. The proposed citation structure could support acknowledgement of review service to a journal in a specific timeframe (e.g., annually) without needing to connect the reviewer to the paper itself, in the case of double-blind review. It could also support connection between the reviewer, their review, and the paper being reviewed, in the case of open review. In either case, the person doing the reviewing gets acknowledged for their contribution, and the contribution (whether the review itself or a front matter page listing reviewers during a quarter or year) gets a DOI.
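To make that distinction concrete, here is a minimal sketch, in Python and purely for illustration, of what the two kinds of acknowledgement record might look like. The field names, DOIs, and the example ORCID iD below are hypothetical placeholders, not the working group’s actual schema.

# Minimal sketch of two hypothetical peer-review acknowledgement records,
# loosely following the CASRAI/ORCID idea described above.
# All field names, DOIs, and identifiers are illustrative only.

# 1) Journal-level acknowledgement (compatible with double-blind review):
#    the reviewer is credited for service to a journal over a stated period,
#    via a DOI assigned to the front-matter page that lists reviewers.
service_acknowledgement = {
    "reviewer_orcid": "0000-0002-1825-0097",              # example ORCID iD format
    "journal": "Journal of Hypothetical Studies",
    "review_period": "2014",
    "acknowledgement_doi": "10.9999/jhs.reviewers.2014",  # DOI of the reviewer-list page
}

# 2) Open review: the reviewer, their review, and the reviewed paper
#    are explicitly linked, and the review itself becomes citable.
open_review_record = {
    "reviewer_orcid": "0000-0002-1825-0097",
    "review_doi": "10.9999/jhs.review.12345",
    "reviewed_article_doi": "10.9999/jhs.article.12345",
    "review_type": "open",
}

# Case 1 credits the reviewer without exposing which paper they reviewed;
# case 2 links reviewer, review, and paper where the journal uses open review.
print(service_acknowledgement["acknowledgement_doi"])
print(open_review_record["review_doi"])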

I just don’t see what good this kind of acknowledgement would do. It would be like a pat on the back. I don’t find it probable that any evaluation board would take a “reviewer score” into account. The situation would still be as it is now. That is, what is best for my career will still be to ignore review requests, and to dedicate all my time to my research. We would still be relying on altruism.

Anders, what in your view would make a difference in elevating peer-review services to something visible to evaluation boards?

This is really a key question here–what is meant by “recognition”? We know that decisions on funding and career advancement are going to be primarily based on the research that one does (and in the case of tenure, research and the amount of funding one brings in). That shouldn’t change, you shouldn’t give a research grant to someone unless you think they will actually be able to accomplish the research. So what exactly are the authors of the letter asking for?

If pre-publication review really is as important to the progress of research as we’re all assuming, maybe we should see evaluation committees giving a high weight to the quality of a researcher’s peer-review contribution. Maybe we should even think about having specialist peer-reviewer posts, like we have specialist teaching posts. (Well, not really that last part, since you really need to be an active researcher in order to be a competent reviewer; but still, one could imagine researchers who spend a lot of their time performing reviews with the blessing of the community.)

But if you’re a funding agency, would you pay for this? And if so, wouldn’t that be separate from the evaluation done for awarding a research grant?

And if you’re a university, is there motivation for rewarding this activity? It probably takes vision of the big picture, which, given the financially-driven short-term thinking employed by many universities these days, may be something of a rarity.

But let’s say you’re a researcher at University X and you’re up for tenure. Do you want to be evaluated on the basis of your research plus have a requirement to do a lot of peer review, or just be evaluated on the basis of your research? Most would choose the easier path. Then there are the complications that being asked to do peer review is not something most researchers can control, it’s out of their hands. What if you work in a really specialized and brand new field where there aren’t many papers? Should you be penalized for only reviewing the few that come out? So much to unpack here, and no clear way it would work…

But if you’re a funding agency, would you pay for this?

That’s the key question. You could imagine (for example) the Wellcome Trust giving a grant to researchers to spend time peer-reviewing other Wellcome-funded work. But it would take a particularly benevolent funder to fund peer review in general, knowing that other funders’ projects would be gaining most of the benefit.

And if you’re a university, is there motivation for rewarding this activity?

Back when Universities were funded by the Government to create new knowledge, there would have been. (I guess that is how things were when the present peer-review-as-service-to-the-community model got started, and we’re still running on the fumes from that all-but-empty tank). Of course now that the Government thinks that universities should be businesses competing with each other, the days of such behaviour could be numbered.

Do you want to be evaluated on the basis of your research plus have a requirement to do a lot of peer review, or just be evaluated on the basis of your research? Most would choose the easier path.

Tragedy of the commons, right there.

So much to unpack here, and no clear way it would work…

Agreed.

As the main author of the submission, my view is that ‘recognition’ from the institution should be weighted more heavily in the ‘Workload’ calculations. These are used to allocate teaching and are measured (in most Australian universities) in ways that are then used as factors in promotion. Editing; reviewing; being the external examiner for a PhD thesis; reviewing manuscripts for publishers (which at least usually involves payment) and journals – these are aspects of academic work that are barely acknowledged in ‘Workload’ models. They are all lumped together as ‘service’, and people are not expected to spend their working hours engaged in service to the discipline. No time is given for a full-time academic to be an editor of a journal. Most universities have withdrawn the assistance they used to provide to journals – in the form of office space, administrative assistance, etc. But the real problem is time. Whereas in the past, editorship of a journal was acknowledged as a legitimate activity that might involve a reduction in teaching load, now editors are expected to do this in their ‘spare time’ and it is not valued as a legitimate academic activity by their employing institution.
Many of my colleagues stressed that appointment as an editor is also indicative of peer recognition within the broader discipline and should therefore be recognised within the ERA as a mark of prestige.

I find the quest for recognition for much of what one does rather petty. It can be said: if you want recognition, get a dog – it will always be happy to see you, will lick your hand, play with you if you want, and bark loudly in recognition that you have fed it!

If one becomes an academic, one knows early in the career that reviewing is part of the job – at least if one chooses to do it! The choice is yours…

I think there’s a big component of reviewer recognition that academics often miss: their identities are known to the editor, and the editor must judge the care and diligence of the reviewer.

This matters because the editor (more or less by definition) is a senior figure in the reviewer’s area, and they will doubtless be judging that reviewer’s work in other contexts. If they’re a careless reviewer, then they’re probably a careless researcher too. The reverse is true for diligent and insightful reviewers.

So, while this recognition isn’t public, the reviewer-editor interaction can have a profound effect on boosting or diminishing the reviewer’s reputation in their field.

Many journals recognize their reviewers in annual lists, preserving the important aspect of anonymity as it pertains to any particular paper but acknowledging contribution of important work. Also, many editors I’ve known have sent letters to deans or chairs praising the work of young scientists who are excellent reviewers, an unseen but important form of recognition.

Science is best when it is not about the reviewer or the author or the funder or the institution, but about the work.

The weakness with naming all the journal’s reviewers for that year is that it doesn’t distinguish the good from the bad, and it’s little more than an acknowledgment that they returned at least one review. Journals do sometimes publish a list of their 5-10 best reviewers, which is great, but this only gives recognition to a few people.

At Mol Ecol we’ve been publishing a list of our top 300 reviewers from the past year – this is about 8% of our total reviewer pool but they do 25% of our reviews. The list is here: http://www.molecularecologist.com/2014/05/mol-ecols-best-reviewers-2014/

Our hope is that this list covers a decent subset of the reviewer community, while being discriminating enough to be a meaningful form of recognition.

Peer review is paid for, of course, when monographs are involved, rather than articles. Rates vary among scholarly publishers, and commercial publishers tend to pay more than university presses (partly, I suppose, because reviewers think that such publishers, being commercial, can better afford to pay higher rates). But the fees are called “honoraria” for a reason and hardly compensate the faculty reviewers for all the time and effort it takes to read a manuscript and write a good report. Beyond this paltry financial reward, however, peer review is now given more academic credit when books are involved than when articles are involved. So the main issue raised by this article remains relevant to peer review of monographs also.

This is an excellent post. Thank you. Given the value that we place upon peer reviewed articles, it seems highly important that we acknowledge and ‘reward’ the peer reviewers.

Another great article with some salient points being made in the discussion.
Giving credit for peer review is what Publons.com is all about. We believe that a centralised third-party database of reviewers and their review history is of benefit to reviewers, authors, editors and publishers alike. Certainly being able to quantify one’s peer review work as research output incentivises the activity and making this information publicly available can provide very useful insight to editors in particular.
Read the thoughts of some of Publons’ top reviewers here: http://www.nature.com/news/the-scientists-who-get-credit-for-peer-review-1.16102

What about a system to provide CPE credits to reviewers for presentation to the university? This would still keep them anonymous but provide some evidence of their participation. We provide an extensive thank-you letter now.

I’m not sure where to write this comment now, with all the stacked comments and comments on comments. Anyway…

At least as far as my own field is concerned (fundamental physics), the reason that some researchers do agree to review manuscripts is that they feel that it is their duty, and nothing else really. When you are young, and you get your first request, you may also feel some pride – it is, after all, a recognition of peerage. However, this feeling wears off very fast.

There is another reason, namely if you know the editor. Then, you may feel that you are being asked to do him or her a personal favor, and might do it for that reason. I was an editor for different journals for 10-15 years, and I tried to avoid asking friends for reviews too often, for this reason. I didn’t want to abuse friendship.

Some editors have various forms of “recognition”. A journal I worked for gave good reviewers the possibility to choose a book from the editor’s catalogue. Some journals offer to give you a certificate stating that you have done valuable peer review. As we see in the comments above, there are many other schemes.

I think many of these reward schemes may be good, if only because giving some kind of recognition is a nice and decent thing to do. However, I don’t think it changes the way people accept or decline peer reviews much at all. I don’t think anyone would do it because of this recognition.

When I, as a scientist, get a request to do a review, I ask myself: “why should I do it?” The thing is, scientists are under a relentless pressure to perform – from review boards, from department heads, from research councils, from selection committees, from graduate students, colleagues, etc. And “to perform” means doing successful research, and nothing else really. Spending time reviewing articles only takes time away from this. Added to that, if I do a good job with a review, the chance (or rather the risk) that I will be asked again increases. I still think it is important that someone reviews papers, so I will accept from time to time, but I will be selective.

Some other colleagues will be more selective. The most ruthless careerists will ignore all requests that they think are below them, or that they don’t think have an immediate impact on their career. I’ve seen cases where someone writes a very positive review about a paper written by a famous scientist, and then they are very eager to make it known that they wrote the review. Even if the name is not published, there are many ways to leave your fingerprint in a review, and of course, you can always tell someone that you reviewed their paper. A review like that loses a lot of value, in my opinion.

The question of whether peer reviewing can somehow be given a career value has been touched upon. I think it is very difficult, unfortunately. To have real value, it would have to count at some, or many, of the different instances when I am being evaluated.

An independent funding foundation, trust, or institute will not take into account that I review papers, and I don’t think they should. Their purpose is to sponsor research, and they should pick the projects that they think will give them the best research output possible, in quality and quantity, for their money. The place where the argument could possibly be made for the career value of peer reviewing is when someone is getting hired, given tenure, or getting promoted. Or, possibly, when resources are being distributed internally by a university, or at a governmental level.

I personally think that one thing that may make a university consistently great, over many generations, is its ability to really be universal. A wide range of competencies are needed, including research, teaching, popularization of science, work with learned societies, and indeed peer review. In the best of worlds, departments would take this into consideration, and act accordingly. However, this very rarely happens in the real world.

When hiring a new lecturer or professor, any university will first and foremost look for someone who will produce excellent research, and who as a corollary will pull in lots of fresh funding for the university. If they do not do that, the excellent researcher will go somewhere else, and so will the money. Such is the rat race, and if I knew a way to break this vicious circle, I could probably become rich and famous.

I think that the publishing industry (which I am not really part of) will do best to accept that this is the nature of the beast, and to try to make the best of the situation. As long as we do peer review in the old, traditional way (and I don’t advocate anything else, at least not here and now), it will be difficult to get good referees. All you can do is appeal to people’s good nature and sense of duty.

Apart from that, mingling with the scientific community (at conferences, for example) and reiterating to them how important peer review is might make a (very) marginal difference. You just have to do it subtly enough not to provoke the comment “why should I do your reviews for free, when you charge me for reading/writing?”, or be prepared with a good diplomatic answer to that question.

Maybe a reward would have a very small effect also. At least, if you offer me a free book, I will accept it.
