Timed perfectly to coincide with today’s official launch of this year’s Peer Review Week (save the date: September 11-17, 2017), BioMed Central and Digital Science yesterday published a report on “What might peer review look like in 2030?”
Based on last November’s SpotOn London conference (which, sadly, I couldn’t attend myself), the report makes seven recommendations for the research community in the coming years:
- Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
- Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
- Experiment with different and new models of peer review, particularly those that increase transparency
- Invest in reviewer training programs
- Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
- Improve recognition for review by funders, institutions, and publishers
- Use technology to support and enhance the peer review process, including automation
Since the theme of this year’s Peer Review Week is “Transparency in Review,” I’m going to focus primarily on how the SpotOn report covers this important topic. As well as being specifically called out in recommendation (3), the need for improved transparency is also something of an underlying theme, implicit in several other recommendations.
Transparency is a — or perhaps even the — key element in open scholarship. As my colleagues Josh Brown, Tom Demeranville, and I wrote in a recent article: “Truly open research is also transparent.”
Transparency in peer review is not, however, without its challenges. Many journals continue to use some form of blind peer review and, while open peer review has many fans, it may never (in its current manifestations, at least) be appropriate for every discipline and community. However, it should be perfectly possible to have a fully transparent peer review process while maintaining anonymity for reviewers and/or authors. From a quick (and admittedly very unscientific!) look at a small sample of journals, there is definitely room for improvement, both in how easy it is to find information about the review process and in how well that information describes the “why” as well as the “how” of the process.
Journals that use more open forms of peer review are, unsurprisingly, more — well — open about their process and, in particular, about why they believe open review is a good thing. For example, F1000 Research has created this (sadly all-male) video, while ScienceOpen provides extensive guidelines for reviewers, including a clear and helpful checklist.
Greater transparency can also provide opportunities for greater recognition for reviewers — something which the SpotOn report identifies as another key challenge (recommendation 6). One example that’s especially close to home for me is the option for organizations — publishers and others, such as associations, funders, or universities — to connect review information to a researcher’s ORCID record (full disclosure, I am Director of Community Engagement & Support for ORCID). In the case of F1000 Research, this includes linking the DOI for the review itself to the record, while for journals with a closed peer review process, the information can be very sparse — ORCID iD, organization name, and date (e.g., year). A good example of a third-party service in this space is Publons. By enabling researchers to “track, verify and showcase” their peer review contributions they are allowing them to easily and transparently share those contributions with others. (Per their announcement yesterday, they’re also helping address recommendation (4) — investing in peer review training!)
I was especially interested in Elizabeth Moylan’s suggestion in the SpotOn report that, “Many individuals are converging in advocating a ‘living’ article, where sharing what researchers are going to do (pre-registration) and how they do it (data) in addition to the narrative that is their research article could radically reshape the publishing landscape. However, it remains to be seen where peer review would fit into this new landscape; alongside preprints, pre-publication or post-publication or at all stages of the publishing process…” This sort of approach could certainly encourage more transparency about the whole research process, and it also has the potential to increase collaboration. I look forward to hearing more!
On the other hand, I can’t help but feel a little concerned about the potential use of AI in the peer review process. While I can see that some aspects of the process could be made more efficient by increased automation — such as reviewer selection and portable review — is AI a step too far? Chadwick C DeVoss sums up the pros and cons in his contribution to the report: “On the one hand, automating a process that determines what we value as ‘good science’ has risks and is full of ethical dilemmas…If we dehumanize that process, we need to be wary about what values we allow artificial intelligence to impart…On the other hand, automated publishing would expedite scientific communication…Additionally, human bias is removed, making automated publishing an unbiased approach.”
Certainly, if we put machines in charge of even some of the peer review process itself, transparency would be even more essential than it is today. I can’t imagine that many of us would like to be in a Google-like situation of knowing nothing about the algorithms being used to determine the quality of a paper, or about when, why, how, and by whom those algorithms were created and updated.
Last but not least, greater transparency can, of course, be of great value in increasing diversity. Studies like this one by Jory Lerback and Brooks Hanson provide evidence of bias in the review process — in this case, showing that “women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.” Other studies have shown that, for example, Chinese researchers are far less likely and US researchers far more likely to be reviewers. These kinds of studies don’t just shed light on a long-suspected problem in peer review; they also point to a solution to a well-known problem: the need for a larger pool of qualified peer reviewers.
I strongly encourage you to read the SpotOn report in full. And what better call to action than these words from Rachel Burley and Elizabeth Moylan in the Foreword: “publishers will have to proactively partner with the wider community if we are to see real industry-wide improvements” — something that Peer Review Week aims to facilitate!