Authors’ note: This post is co-authored with PLOS’s Chief Scientific Officer, Véronique Kiermer.
In a week devoted to exploring the role of personal and social identity in peer review, we’re going to take a deeper dive into the perennial question of open identities in peer review. Open peer review has been growing steadily, but its implementations take many different forms (in fact, Ross-Hellauer has catalogued 122 definitions). The call for increased transparency has been gathering pace over the past few years with increased funder interest, and there is growing support for publishing reviews and making them easily citable by providing DOIs. For example, a survey published in PLOS ONE in 2017 found high levels of support for most aspects of open review among respondents.
There is, however, far more divergence on the issue of whether reviewers should reveal their identities. Here, we’re first going to provide a quick recap of the pros and cons of open identities before diving into what we can learn from the experience of publishers who’ve implemented them.
The benefits of open identities
Proponents of open-identity (i.e., signed) peer review make the case that this form of transparency is an important component of open review:
- Accountability. When the identity of all participants is transparent, any potential competing interests are more readily apparent and everyone is publicly accountable for their actions.
- Credit. Peer review is challenging, time-consuming and, all too often, unacknowledged. Signed and published peer review offers reviewers an opportunity to claim credit for their work and is a step towards elevating peer review to a bona fide academic activity deserving of recognition and credit.
- Quality. Open identities could improve the quality of reviews by encouraging reviewers to be more thorough in their assessments. Research suggests that signed and published peer reviews are at least as good as blinded models, and other research has found improvements in specific areas like constructive feedback, comments on methods, length of review, and substantiating evidence to support the comments (see here for an example).
The case for greater caution
At this point, probably more people are making the case that the risks of open identities outweigh the benefits. The key issues can be summarized as follows:
- Bias. Bias in many forms is inherent in the peer review process (for example, gender bias in reviewer selection and in evaluation outcomes), a problem exacerbated by the fact that our reviewer pools are far too narrow to ensure that research is assessed fairly and with full consideration of different perspectives. Instead of revealing reviewers’ identities, masking authors’ identities in a double-anonymized system has been offered as a solution, even though the effectiveness of double-anonymized practice has been questioned. Implementing double-anonymized peer review in some fields is further complicated because it is at odds with otherwise favored behaviors and policies, such as posting preprints, sharing data before submission, and scrutinizing competing interests.
- Impact on those more vulnerable. The risk open identities pose to those who are more vulnerable and hold less power is very real: consequences could range from outright retaliation in peer review of papers or grants to subtler ones, such as being passed over for talks and prizes.
- Rigor and candor. Linked to this concern is another: that reviewers are likely to be less comfortable giving critical feedback, or that those who would have done so decline to participate, a concern borne out by experience at medical journals (e.g., The BMJ). Additionally, editors and authors have expressed concerns about chains of traded favors: giving a positive review in the hope of receiving one in the future.
Impact on readers
While most studies have focused on the impact of open identities on peer review outcomes, it’s also useful to ask what they mean for readers of published work. When it comes to evaluating published research, there is plenty of data to indicate that being peer-reviewed is a determinant of trust and credibility (see for example Nicholas and colleagues). But there is also data to suggest that people judge the quality of research outputs based on extrinsic characteristics of the work (perceptions of the reputation of the author, the lab, the institution: see Tenopir and colleagues). Our research also indicates that, in some cases at least, researchers who encounter new publications in the course of their own research analyze the peer review reports to assess the credibility of the results. This is a double-edged sword for open identities. At the individual level, it helps us assess a paper outside our area of expertise if an individual whose expertise we trust has reviewed it positively. But at the systems level, perpetuating these heuristics and extending them to the peer review reports can perpetuate bias and does not move us closer to evaluating research — and peer review — entirely on its own merits.
Lessons from publishers’ experience with open review
A small handful of publishers have been practicing some form of open review for fifteen to twenty years, including The BMJ, EMBO Journal and EGU. The BMJ was the first to disclose reviewer identities and publish peer reviewer reports after studying the effects of this form of transparency in randomized trials (example, example and example). Their studies, started two decades ago, essentially found that open identities had no effect on the technical quality of reviews, little discernible effect on the likelihood to recommend acceptance, a decline in willingness to review, and a small positive effect on the tone and constructiveness of reviews. Convinced of the benefits of full transparency for their disciplines, The BMJ has successfully operated a transparent process, with open reviewer identities, for many years.
Taking a very different approach, Nature conducted a trial, which it then made its normal practice, with the option to reveal reviewers’ identities at publication but without opening the reports. Both authors and reviewers need to agree for the reviewer names to be revealed. Overall, 80% of published Nature papers have at least one reviewer named. Because the choice is made at the time of publication and names are not associated with a specific report, the results tell us little about the impact on quality or accountability, but they’re indicative of the desire for credit. In the trial, 55% of reviewers opted to be named, and there was no obvious difference in this choice based on gender or career stage. Interestingly, a quarter of authors declined to see the reviewers’ identities, which may be related to the concerns about trading favors.
A recent paper in PeerJ based on PeerJ data examines how the peer review process impacts manuscripts, and one interesting finding from this analysis is that reviewers who opt to sign their reviews write more subjective and positive reviews. While this is a limited dataset, it provides some evidence for the concern that social pressure can influence the candor of an identified, publicly available review. This contrasts with the BMJ data, but it’s very possible that cultural differences between disciplines play an important role in these subtle effects on individual behaviors.
When PLOS introduced a published peer review policy across all its journals in 2019, we considered the arguments for and against revealing identities, and we learned from those who had already implemented forms of transparent review. Like EMBO Journal — who pioneered this model in the life sciences — and PeerJ before us, we decided that publishing the content of the review was most important. Our peer review transparency models share important commonalities: all reviewers agree to make their report public, but they may choose to reveal their identity or to remain anonymous. The decision to publish the entire peer review history, including the peer review reports but also the authors’ responses and the editorial decisions, rests with the authors — albeit with slight variations in implementation.
Despite the similarities in the models, attitudes toward open reviewer identities vary materially across disciplines and journals. At EMBO Journal, although encouraged to reveal their identities, almost no reviewer does so. At PeerJ, looking at all reviews over the last six months at their main (biology) journal, 28% of reviewers chose to name themselves.
At PLOS, the last 21 months of data since January 2020 show an interesting pattern across the seven PLOS journals (we’re not including our five new titles as we don’t yet have enough data). The highest proportions of signed reviews are in the medical sciences, with PLOS Medicine at 24%, followed by PLOS Neglected Tropical Diseases at 17%. Our core bioscience titles range from 14% at PLOS Biology and PLOS Computational Biology to 8% and 10%, respectively, at PLOS Pathogens and PLOS Genetics. And PLOS ONE, with its broad mix of disciplines, comes in at 16%. Across PLOS journals, 22% of all manuscripts that are reviewed have at least one signed review.
We’ve also observed a robust demand for credit for peer review. We have since introduced a streamlined way for reviewers to get credit for their reviews regardless of whether they choose to reveal their identity. Through an integration with ORCID, reviewers can elect to automatically update their ORCID record with a proof of review certified by the PLOS journal. Across all PLOS journals in 2020–21, 46% of eligible reviewers opted in to ORCID review deposits, and some 44,390 review credits have been deposited with ORCID.
Journal-independent peer review
Recently, we’ve also seen traction for reviews made outside the typical framework of journal peer review. Preprint servers in particular offer the option for peers to critique a paper before its publication in a journal. While the vast majority of comments are private, usually sent directly by email to the authors (as shown in a bioRxiv survey), there is also a growing number of examples of robust discussions of preprints in other public forums where commentators’ identities are known (examples on Twitter). An organization like PREreview also creates means for groups of Early Career Researchers to provide a consensus opinion — a mechanism by which these particularly vulnerable researchers might find strength in numbers to mitigate the risks. The influence of these forums on peer review outcomes has not yet been studied, to our knowledge, but it is interesting that, one step removed from the journal publication decision-making, we see movement towards spontaneous peer review that, in many cases, involves open identities.
What comes next?
We believe that opening up the black box of peer review has benefits for all stakeholders. The transparent access to reviewers’ reports, authors’ responses and editorial decisions pioneered by EMBO Journal and The BMJ, for example, shows in our view how openness provides better service to both authors and readers. It also allows the peer review process itself to be studied by scholars with a diversity of perspectives, and we must continue to learn from these studies to improve the process.
But the value of revealing reviewer identities is more ambiguous. The data from existing implementations demonstrate hesitancy across research communities, to differing extents. But equally importantly, our understanding of its impact and the potential unintended consequences — are there more positive reviews for well-established researchers? Are there discernible country, gender, or subject biases? — is limited.
Above all, building recognition of the activity of peer review as a bona fide academic output, and assigning credit for the quality of this service to the scientific community, is both paramount and challenging. It will require those assigning and recognizing the credit to consider the sensitivities associated with revealing identities.
While it’s clear that open identities carry some risk and more research is needed to understand their full impact, it’s too easy to focus on those risks without also acknowledging the ways in which the existing system already benefits those with status and privilege. Revealing or concealing reviewers’ identities alone won’t fix the bias problem (conscious or unconscious) inherent in peer review. Perhaps most urgent and important is increasing the diversity of the reviewer and editor pool. Most editors have long made efforts to include a variety of scientific expertise in peer review. But it’s clear from the data now emerging from journals that have opened up their peer review process for study that we still have much work to do to increase gender-, geography- and community-based representation.
We’ll close by circling back to the theme of this year’s Peer Review Week. We’ve focused primarily on individual identity. Yet we also note that those of us engaged in peer review rarely take time to reflect on the social and community impacts. Our own social identities and categorizations have a significant impact on how we perceive and understand those around us and, by extension, the research and scholarship we pursue and review. If our goal truly is to root out bias and increase trust and transparency, we might also start exploring these social dimensions in peer review.