Authors’ note: This post is co-authored with PLOS’s Chief Scientific Officer, Véronique Kiermer.
In a week devoted to exploring the role of personal and social identity in peer review, we’re going to take a deeper dive into the perennial question of open identities in peer review. Open peer review has been growing steadily, but its implementations take many different forms (in fact, Ross-Hellauer has catalogued 122 definitions). The call for increased transparency has been gathering pace over the past few years with increased funder interest, and there is growing support for publishing reviews and making them easily citable by providing DOIs. For example, a survey published in PLOS ONE in 2017 found that the majority of respondents supported most aspects of open review.
There is, however, far more divergence on the issue of whether reviewers should reveal their identities. Here, we’re first going to provide a quick recap of the pros and cons of open identities before diving into what we can learn from the experience of publishers who’ve implemented them.
The benefits of open identities
Proponents of open-identity (i.e., signed) peer review make the case that this form of transparency is an important component of open review:
- Accountability. When the identity of all participants is transparent, any potential competing interests are more readily apparent and everyone is publicly accountable for their actions.
- Credit. Peer review is challenging, time consuming and all too often, unacknowledged. Signed and published peer review offers reviewers an opportunity to claim credit for their work and is a step towards elevating peer review to a bona fide academic activity deserving of recognition and credit.
- Quality. Open identities could improve the quality of reviews by encouraging reviewers to be more thorough in their assessments. Research suggests that signed and published peer reviews are at least as good as blinded models, and other research has found improvements in specific areas like constructive feedback, comments on methods, length of review, and substantiating evidence to support the comments (see here for an example).
The case for greater caution
At this point, more people are probably making the case that the risks of open identities outweigh the benefits. The key issues can be summarized as follows:
- Bias. Bias in many forms is inherent in the peer review process (for example, gender bias in the selection of reviewers and in evaluation outcomes), a problem exacerbated by the fact that our reviewer pools are far too narrow to ensure that research is assessed fairly and with full consideration for different perspectives. Instead of revealing reviewers’ identities, masking authors’ identities in a double-anonymized system has been offered as a solution — even though the effectiveness of double-anonymized practice has been questioned. The implementation of double-anonymized peer review in some fields is further complicated because it is at odds with behaviors and policies that are otherwise favored, such as the posting of preprints, the sharing of data before submission, and the scrutiny of competing interests.
- Impact on those more vulnerable. The risk that open identities pose to those who are more vulnerable and hold less power is very real: consequences could range from outright retaliation in peer review of papers or grants to more subtle ones, such as not being favored for talks and prizes.
- Rigor and candor. Linked to this concern is another: that reviewers are likely to be less comfortable giving critical feedback, or that those who would have done so decline to participate, a concern that has been borne out by experience at medical journals (e.g., The BMJ). Additionally, editors and authors have expressed concerns about chains of traded favors, with reviewers giving a positive review in the hope of receiving one in the future.
Impact on readers
While most studies have focused on the impact of open identities on peer review outcomes, it’s also useful to ask what they mean for readers of published work. When it comes to evaluating published research, there is plenty of data to indicate that being peer-reviewed is a determinant of trust and credibility (see for example Nicholas and colleagues). But there is also data to suggest that people judge the quality of research outputs based on extrinsic characteristics of the work (perceptions of the reputation of the author, the lab, the institution: see Tenopir and colleagues). Our research also indicates that in some cases at least, researchers who encounter new publications in the course of their own research analyze the peer review reports to assess the credibility of the results. This is a double-edged sword for open identities. At the individual level, it helps us in assessing a paper outside our area of expertise, if an individual whose expertise we trust has reviewed it positively. But at the systems level, perpetuating these heuristics and extending them to the peer review reports can perpetuate bias and does not move us closer to evaluating research — and peer review — entirely on its own merit.
Lessons from publishers’ experience with open review
A small handful of publishers have been practicing some form of open review for fifteen to twenty years, including The BMJ, EMBO Journal and EGU. The BMJ was the first to disclose reviewer identities and publish peer reviewer reports after studying the effects of this form of transparency in randomized trials (example, example and example). Their studies, started two decades ago, essentially found no effect of open identities on the technical quality of reviews, little discernible effect on the likelihood of recommending acceptance, a decline in willingness to review, and a small positive effect on the tone and constructiveness of reviews. Convinced of the benefits of full transparency for its discipline, The BMJ has successfully operated a transparent process, with open reviewer identities, for many years.
Taking a very different approach, Nature conducted a trial, which it then made its normal practice, with the option to reveal reviewers’ identities at publication but without opening the reports. Both authors and reviewers need to agree for the reviewer names to be revealed. Overall, 80% of published Nature papers have at least one reviewer named. Because the choice is made at the time of publication and names are not associated with a specific report, the results tell us little about the impact on quality or accountability, but they are indicative of the desire for credit. In the trial, 55% of reviewers opted to be named, and there was no obvious difference in this choice based on gender or career stage. Interestingly, a quarter of authors declined to see the reviewers’ identities, which may be related to the concerns about trading favors.
A recent paper in PeerJ, based on PeerJ data, examines how the peer review process affects manuscripts. One interesting finding from this analysis is that reviewers who opt to sign their reviews write more subjective and positive reviews. While this is a limited dataset, it provides some evidence for the concern that social pressure can influence candor in an identified, publicly available review. This contrasts with the BMJ data, but it’s very possible that cultural differences between disciplines play an important role in these subtle effects on individual behavior.
When PLOS introduced a published peer review policy across all its journals in 2019, we considered the arguments for and against revealing identities, and we learned from those who had already implemented forms of transparent review. Like EMBO Journal — which pioneered this model in the life sciences — and PeerJ before us, we decided that publishing the content of the review was most important. Our peer review transparency models share important commonalities: all reviewers agree to make their report public, but they may choose to reveal their identity or to remain anonymous. The decision to publish the entire peer review history, including the peer review reports as well as the authors’ responses and the editorial decisions, rests with the authors, albeit with slight variations in implementation.
Despite the similarities in the models, the attitudes toward open reviewer identities vary materially across disciplines and journals. At EMBO Journal, although encouraged to reveal their identities, almost no reviewer does so. For PeerJ, looking at the last six months of all reviews to their main (biology) journal, 28% of reviewers choose to name themselves.
At PLOS, the last 21 months of data, since January 2020, show an interesting pattern across the seven PLOS journals (we’re not including our five new titles as we don’t yet have enough data). The highest proportion of signed reviews is in the medical sciences, with PLOS Medicine at 24%, followed by PLOS Neglected Tropical Diseases at 17%. Our core bioscience titles range from 14% at PLOS Biology and PLOS Computational Biology to 8% and 10% at PLOS Pathogens and PLOS Genetics, respectively. And PLOS ONE, with its broad mix of disciplines, comes in at 16%. Across PLOS journals, 22% of all manuscripts that are reviewed have at least one signed review.
We’ve also observed a robust demand for credit for peer review. We have since introduced a streamlined way for reviewers to get credit for their review regardless of whether they choose to reveal their identity. Through an integration with ORCID, reviewers can elect to automatically update their ORCID record with a proof of review certified by the PLOS journal. Across all PLOS journals in 2020-21, 46% of eligible reviewers opted in to ORCID review deposits, and some 44,390 review credits have been deposited with ORCID.
Journal-independent peer review
Recently, we’ve also seen traction for reviews made outside the typical framework of journal peer review. Preprint servers in particular offer the option for peers to critique a paper before its publication in a journal. While the vast majority of comments are private, usually sent directly by email to the authors (as shown in a bioRxiv survey), there are also a growing number of examples of robust discussions of preprints in other public forums where commentators’ identities are known (examples on Twitter). An organization like PREreview also creates a means for groups of Early Career Researchers to provide a consensus opinion — a mechanism by which these particularly vulnerable researchers might find strength in numbers to mitigate the risks. The influence of these forums on peer review outcomes has not yet been studied to our knowledge, but it is interesting that one step removed from journal publication decision-making, we see movement toward spontaneous peer review which, in many cases, involves open identities.
What comes next?
We believe that opening up the black box of peer review has benefits for all stakeholders. The transparent access to reviewers’ reports, authors’ responses and editorial decisions pioneered by EMBO Journal and The BMJ, for example, shows in our view how openness provides better service to both authors and readers. It also allows the peer review process itself to be studied by scholars with a diversity of perspectives, and we must continue to learn from these studies to improve the process.
But the value of revealing reviewer identities is more ambiguous. The data from existing implementations demonstrate hesitancy across research communities, to different extents. But equally importantly, our understanding of its impact and the potential unintended consequences — are there more positive reviews for well-established researchers? Are there discernible country, gender, or subject biases? — is limited.
Above all, building recognition of the activity of peer review as a bona fide academic output, and assigning credit for the quality of this service to the scientific community, is both paramount and challenging. It will require those assigning and recognizing the credit to consider the sensitivities associated with revealing identities.
While it’s clear that open identities carry some risk and more research is needed to understand their full impact, it’s too easy to focus on those risks without also acknowledging the ways in which the existing system already benefits those with status and privilege. Revealing or concealing reviewers’ identities alone won’t fix the bias problem (conscious or unconscious) inherent in peer review. Perhaps most urgent and important is increasing the diversity of the reviewer and editor pool. Most editors have long made efforts to include a variety of scientific expertise in peer review. But it’s clear from the data now emerging from journals that have opened up their peer review process for study that we still have much work to do to increase gender-, geography- and community-based representation.
We’ll close by circling back to the theme of this year’s Peer Review Week. We’ve focused primarily on individual identity. Yet we also note that those of us engaged in peer review rarely take time to reflect on the social and community impacts. Our own social identities and categorizations have a significant impact on how we perceive and understand those around us and, by extension, the research and scholarship we pursue and review. If our goal truly is to root out bias and increase trust and transparency, we might also start exploring these social dimensions in peer review.
17 Thoughts on "Open Reviewer Identities: Full Steam Ahead or Proceed with Caution?"
Great summary of achievements and lessons learned on the impact of identity transparency on reviewer performance.
It is worth mentioning here that we ran a deep-dive analysis of the Elsevier pilot of publishing peer review reports and its impact on reviewer performance, finding a strong correlation between the type of decision recommendation reviewers chose and their willingness to sign their report. Of the almost 10,000 review reports published during the Elsevier pilot, only 8% were signed, and those who signed had predominantly recommended ‘accept’ or ‘minor revision’. Despite most editors’ expectations back in 2016, we did not find any correlation between introducing the practice of publishing peer review reports alongside the articles and reviewers’ willingness to accept invitations, nor with review completion rates, for either women or men. More details can be found here: https://www.nature.com/articles/s41467-018-08250-2/
I want an honest review carried out by a qualified reviewer who will have no fear of retribution. It seems to me that blind review accomplishes the task while open review does not!
I think that’s what everyone wants. But if you’ve read the posts here over the past few days, you’ll know that anonymized review doesn’t guarantee that either and that there are ways for a whole range of biases to creep in. I think it’s important to acknowledge the problems of existing systems if we’re to understand how various forms of open review can and cannot provide solutions.
One way to achieve this, beyond identity transparency or anonymization, is to guide reviewers through the peer review process to the items you want them to focus on most. The free text/essay under the name of ‘comment to author’ doesn’t seem to be the best vehicle, given that academics hardly learn how to peer review in advance of accepting their first invitation.
As a MA student I was tasked with reviewing many articles and books which were submitted to my profs and criticized. So the claim, at least in my case, that one is not taught how to review is not so.
Nice post! I recalled a conference seminar for authors with several ecology journal EICs when one remarked that they discouraged signing reviews because it led authors to focus on the qualifications of the reviewer to criticize the work rather than the substance of the criticism. The remark stuck with me. I like the way PLOS approaches open peer review, which emphasizes the substance over who’s who. I doubt any but a tiny fraction of the most interested readers (critics mostly?) bother to read the reviews. Nature’s approach of listing reviewers’ names with no substance seems the worst. What’s the point? Bragging rights that they reviewed for a glam journal?
Of course anonymity does not guarantee the goal, but neither do open reviews. It seems to me the odds of attaining honest reviews when blind far outweigh the Pandora’s box of rancor.
I continue to struggle with the question of exactly what we mean when we say, “credit for peer review”. If it’s just registration (acknowledgment that person X wrote review Y for paper Z), then as you note in the post, the problem is largely solved, both anonymously through ORCID and directly through open signed reviews.
But as I asked back in 2015 (https://scholarlykitchen.sspnet.org/2015/06/17/the-problems-with-credit-for-peer-review/), who cares? That’s not meant to denigrate the essential work of reviewers, but rather to ask, once the work is registered as having been done by that person, does that mean anything to anyone who is able to reward them? Has there been any progress in anyone offering career rewards for reviewer activities? What exactly is the “credit” that is desired?
Important point, David. We have started work on precisely this as part of a Wellcome Trust grant. Referee credit is, in my view, one of the key gaps in the ecosystem, and the issue has not at all been solved, as you also suggest. I’d be happy to report on progress as we go along. Let me know if you want more info.
Thanks Bernd, would love to hear more (and when you’re ready, it would make a great Scholarly Kitchen post).
To me, the question is how to connect peer review with the things that matter to researchers, career advancement and funding. I’m not sure how to do that, or even if it should be done. Peer reviewing means you’re an active, supporting member of the community, so it should be a criterion to be eligible for jobs, promotions, tenure, or funding. But I can’t see a university hiring a researcher solely because they write really good peer reviews or a funder funding a project based not on the project itself but on the peer reviews the applicant has written about other projects.
PLOS ONE editors evaluate research on the basis of scientific validity, rigorous methodology, and high ethical standards, with the aim of making all well-conducted research freely available. In short, PLOS does not review for ideas.
https://journals.plos.org/plosone/journal-information
On the other hand Nature reviews for the following:
The criteria for publication of scientific papers (Articles) in Nature are that they:
- report original scientific research (the main results and conclusions must not have been published or submitted elsewhere)
- are of outstanding scientific importance
- reach a conclusion of interest to an interdisciplinary readership.
In short, it seems to me that Nature is more involved in scientific debate.
“Revealing or concealing reviewers’ identities alone won’t fix the bias problem (conscious or unconscious) inherent in peer review.”
We see this claim of inherent bias repeated here again and again, but I can’t remember anyone ever explaining what is meant by it. What biases do you believe are “inherent” to the peer-review process and in what ways are those biases detrimental to the process? Please be specific.
Interesting read, which quite shockingly fails to mention the biggest publisher that pioneered “open identities” for reviewers: Frontiers.
Certainly no slight intended to Frontiers or the many others who have implemented forms of open review (F1000, MDPI, BMC, Elsevier…and no doubt others I will now be in trouble for not calling out specifically!). This post wasn’t intended to be an exhaustive review, but rather to focus on studies and data that help us learn from implementations to date.
In 1999, the BMC journals began experimenting with ways to make the peer review process more transparent, publishing reviewer names in their medical journals. The publication of peer review reports alongside a ‘pre-publication history’ began a few years later for the medical journals of the BMC Series. My BMC colleagues published a report in BMJ Open (https://bmjopen.bmj.com/content/5/9/e008707) on the effect of open peer review in countering bias when authors suggested the referees, and this could be of interest to readers of this blog.
In 2020, on the basis of feedback from the community that echoes many of the points above, the medical journals of the BMC Series and BMC Medicine adopted transparent peer review. This means that for all published manuscripts, the reviewer reports continue to be published along with the authors’ responses to the reviewers, but reviewer names are only published if the reviewer chooses to reveal their identity. Later in 2020, we went a step further, and almost all BMC Series titles now operate transparent peer review, including BMC Chemistry.