Timed perfectly to coincide with today’s official launch of this year’s Peer Review Week (hold the date: September 11-17, 2017), BioMed Central and Digital Science yesterday published a report on “What might peer review look like in 2030?”

Based on last November’s SpotOn London conference (which, sadly, I couldn’t attend myself), the report makes seven recommendations for the research community in the coming years:

  1. Find new ways of matching expertise and reviews by better identifying, verifying and inviting peer reviewers (including using AI)
  2. Increase diversity in the reviewer pool (including early career researchers, researchers from different regions, and women)
  3. Experiment with different and new models of peer review, particularly those that increase transparency
  4. Invest in reviewer training programs
  5. Find cross-publisher solutions to improve efficiency and benefit all stakeholders, such as portable peer review
  6. Improve recognition for review by funders, institutions, and publishers
  7. Use technology to support and enhance the peer review process, including automation

Since the theme of this year’s Peer Review Week is “Transparency in Review,” I’m going to focus primarily on how the SpotOn report covers this important topic. As well as being specifically called out in recommendation (3), the need for improved transparency is an underlying theme, implicit in several other recommendations.

Transparency is a — or perhaps even the — key element in open scholarship. As my colleagues Josh Brown, Tom Demeranville, and I wrote in a recent article: “Truly open research is also transparent.”

Transparency in peer review is not, however, without its challenges. Many journals continue to use some form of blind peer review and, while open peer review has many fans, it may not ever (in its current manifestations, at least) be appropriate for every discipline and community. However, it should be perfectly possible to have a fully transparent peer review process, while maintaining anonymity for reviewers and/or authors. From a quick (and admittedly very unscientific!) look at a small sample of journals, there is definitely room for improvement both in terms of how easy it is to find information about the review process, and how well that information describes the “why” as well as the “how” of the process.

Journals that use more open forms of peer review are, unsurprisingly, more — well — open about their process and, in particular, about why they believe open review is a good thing. For example, F1000 Research has created this (sadly all-male) video, while ScienceOpen provides extensive guidelines for reviewers, including a clear and helpful checklist.

Greater transparency can also provide opportunities for greater recognition for reviewers — something which the SpotOn report identifies as another key challenge (recommendation 6). One example that’s especially close to home for me is the option for organizations — publishers and others, such as associations, funders, or universities — to connect review information to a researcher’s ORCID record (full disclosure: I am Director of Community Engagement & Support for ORCID). In the case of F1000 Research, this includes linking the DOI for the review itself to the record, while for journals with a closed peer review process, the information can be very sparse — ORCID iD, organization name, and date (e.g., year). A good example of a third-party service in this space is Publons. By enabling researchers to “track, verify and showcase” their peer review contributions, Publons allows them to easily and transparently share those contributions with others. (Per their announcement yesterday, they’re also helping address recommendation (4) — investing in peer review training!)
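To make that contrast concrete, here is a minimal, purely illustrative sketch in Python of the two levels of review metadata described above. The field names and values are my own placeholders, not the actual ORCID peer-review schema or API; they simply show how much richer the record can be when the review itself is openly published.

```python
# Illustrative placeholders only; not the real ORCID peer-review schema.

# Open review (e.g., the F1000 Research model): the review itself has a DOI,
# so the researcher's record can link directly to it.
open_review_item = {
    "reviewer_orcid_id": "0000-0000-0000-0000",        # placeholder iD
    "convening_organization": "F1000 Research",
    "completion_year": 2017,
    "review_doi": "10.xxxx/example-review",            # placeholder DOI for the review itself
}

# Closed (blind) review: only the minimum needed to credit the activity,
# with no link to the review content or the reviewed article.
closed_review_item = {
    "reviewer_orcid_id": "0000-0000-0000-0000",        # placeholder iD
    "convening_organization": "Example Journal Publisher",
    "completion_year": 2017,                            # year only
}
```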

I was especially interested in Elizabeth Moylan’s suggestion in the SpotOn report that, “Many individuals are converging in advocating a ‘living’ article, where sharing what researchers are going to do (pre-registration) and how they do it (data) in addition to the narrative that is their research article could radically reshape the publishing landscape. However, it remains to be seen where peer review would fit into this new landscape; alongside preprints, pre-publication or post-publication or at all stages of the publishing process…” This sort of approach could certainly encourage more transparency about the whole research process, as well as potentially increasing collaboration. I look forward to hearing more!

On the other hand, I can’t help but feel a little concerned about the potential use of AI in the peer review process. While I can see that some aspects of the process could be made more efficient by increased automation — such as reviewer selection and portable review — is AI a step too far? Chadwick C DeVoss sums up the pros and cons in his contribution to the report: “On the one hand, automating a process that determines what we value as ‘good science’ has risks and is full of ethical dilemmas…If we dehumanize that process, we need to be wary about what values we allow artificial intelligence to impart…On the other hand, automated publishing would expedite scientific communication…Additionally, human bias is removed, making automated publishing an unbiased approach.”

Certainly, if we put machines in charge of even some of the peer review process itself, transparency would be even more essential than it is today. I can’t imagine that many of us would like to be in a Google-like situation of not knowing anything about the algorithms being used to determine the quality of a paper, nor being informed when, why, how, and by whom those algorithms were created and are updated.

Last but not least, greater transparency can, of course, be of great value in increasing diversity. Studies like this one by Jory Lerback and Brooks Hanson provide evidence of bias in the review process — in this case, showing that “women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.” Other studies have shown that, for example, Chinese researchers are far less likely, and US researchers far more likely, to be reviewers. These kinds of studies don’t just shed light on a long-suspected problem in peer review; they also point to a solution to another well-known problem: the need for a larger pool of qualified peer reviewers.

I strongly encourage you to read the SpotOn report in full. And what better call to action than these words from Rachel Burley and Elizabeth Moylan in the Foreword: “publishers will have to proactively partner with the wider community if we are to see real industry-wide improvements” — something that Peer Review Week aims to facilitate!

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

12 Thoughts on "The Future of Peer Review"

“automated publishing [using AI to evaluate articles] would expedite scientific communication”

Indeed, but communication with whom? More bots? This reminds me of Springer’s proud announcement that they had developed software to detect gibberish papers written by SCIgen. I had to check that it wasn’t an April Fools’ joke, but there was no irony in the discussion. Since the obvious solution to catching SCIgen gibberish is for someone to actually read it, and that option wasn’t even mentioned, this made me realize that a large amount of “science communication” isn’t written with the expectation that anyone will really read it; it’s just churned out to fill CVs that, in turn, won’t be read closely.

And so, this is relevant to the “living article” notion wherein authors pre-register their article, post data, post drafts on preprint servers, and then undergo post-publication review. Who’s got time for all that? Just in STEM, aren’t there somewhere between 1 and 4 million articles published per year? (I’ve seen this range; I’m sure someone in this community would know the right number.) Per the recent commentary on commenting, except for a small fraction of popular authors with active followings, something really new or controversial (CRISPR, cold fusion, arsenic life), or when someone smells blood (PubPeer), the vast majority of “living articles” will live alone and forsaken.

“…Additionally, human bias is removed, making automated publishing an unbiased approach.” First, humans program AI and humans set the parameters. We cannot escape human bias. Second, AI tends to move towards the fringes – just look at the issues Google is having with YouTube and AdWords.
The solution for the real biases listed above – the preponderance of US male reviewers – is more outreach, encouragement and training by all stakeholders. It falls on the shoulders of research advisers, universities and institutions, government and private research labs, societies, NGOs and publishers to expand the pool of potential reviewers. As one of my first initiatives as managing editor for a society, I am instituting a reviewer training seminar at an upcoming meeting. I welcome the efforts of Publons and others in this arena.

This is something I struggle with as well — if we’re asking someone to review a paper (is this any good?), then isn’t that inherently biased? If we’re asking someone for a qualitative review (whether a paper review, a letter of recommendation or a hiring/funding decision), then we are essentially asking for an opinion. Even the simpler question, “is this scientifically sound?” is a qualitative, individual opinion (you may think that paper’s conclusions are supported by the data but I think they missed a key control). Fooling ourselves into thinking these types of opinions can be machine generated in a quantitative manner is how we ended up over-relying on the Impact Factor rather than reading and rendering personal opinions on the actual papers themselves.

If you’re interested, contact me via email (david.crotty at oup.com) and I’d be happy to connect you with a few research societies that have peer review training programs connected with their journals.

These suggestions cannot hurt. Just this week my colleague and I had a manuscript rejected because we chose an underlying theory from Goffman but the reviewer wanted Foucault. God forbid we choose a theorist who did not have a sick, twisted mind, who Chomsky observed had no moral capacity.

What about peer reviewing of monographs? How does it, or should it, differ from journal peer reviewing?

Are we sure that transparency will actually enlarge the reviewer pool? In my area of social science, double blind peer review is the norm and, personally, I won’t review for any journal that doesn’t respect my anonymity. This is partly why some journals in the social science and health field end up publishing rather poor qualitative work – many of us who are established scholars simply prefer to allocate our reviewing commitment elsewhere. Similarly, I refuse all review requests from OA journals that simply operate a threshold notion of publishability. If they can get a market and attract papers, that’s fine but I don’t want to waste my time on papers that are simply ‘good enough’. I am only willing to agree to a certain number of reviews in any given month – and I want to invest my efforts in papers that have some prospect of making a difference to the field.

The distinction I see here is that the process, the mechanism by which the peer review is set up, run, and tracked, should be more transparent, not necessarily the identities of the reviewers of that article. Since reviewers are already (conceivably) getting information about how to review for the journal when they are invited or agree to review, this transparency would be in service to readers or potential authors: an indicator that (i) the journal has a well-defined and rigorous peer review system in place, and (ii) that the system works, because readers are provided with the review criteria and turnaround times. Either of those reasons seems more valuable for early-career researchers or those not familiar with the journal, but at worst more information about the journal is being shared, and at best more credibility is being lent to the journal.

Hi Robert, as Nick says, the point is not that the peer review itself needs to be open (though for some disciplines and communities that seems like a good option) but that the process should be transparent.
I also believe that increased transparency (not necessarily by identifying individuals, but rather through the use of demographics) can help enable a wider pool of reviewers, because it will allow editors to cast a wider net by proactively seeking reviewers from groups that are currently under-represented, such as women, ECRs, and Chinese researchers.

Showing our conservatism here! AI/machine-reading can certainly help make peer review and other publishing processes more efficient, so why not try it? With respect to automation – if all that is happening in the future is that an immediately published preprint is then commented upon to validate it – then this can and should be easily automated, and the publishing industry has to adapt/evolve or die. However, if we really do value the opinions of experts, then peer review is the main aspect of the publishing process we have to focus on and defend, for all types of articles and proposals.

I think the seven recommendations are mostly sensible. In the face of increasing “crisis talk”, which often originates from outside of academia, most of the recommendations are actually quite conservative. True, there are new labels such as AI, but otherwise I could have read a similar list from some editorial published in the 1990s or even 1980s.

Using AI and ML for improved match-making is something I find especially interesting. This is also something that journals and publishers could jointly implement without interfering with the daily work of scholars.
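As an illustration of what even a very simple matching approach could look like, here is a minimal sketch in Python (my own example, not anything specified in the report). It ranks candidate reviewers against a manuscript abstract by TF-IDF cosine similarity using scikit-learn; a real system would obviously need richer signals, such as citation networks, conflict-of-interest checks, and workload balancing.

```python
# Minimal sketch of expertise-based reviewer matching (illustrative only).
# Assumes each candidate reviewer is represented by the concatenated text of
# their recent abstracts; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_reviewers(manuscript_abstract, reviewer_profiles):
    """Return (reviewer, similarity) pairs, best match first.

    reviewer_profiles: dict mapping reviewer name to text of their past work.
    """
    names = list(reviewer_profiles)
    corpus = [manuscript_abstract] + [reviewer_profiles[n] for n in names]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)


# Toy usage (hypothetical data):
# rank_reviewers("CRISPR off-target effects in zebrafish",
#                {"Reviewer A": "gene editing CRISPR zebrafish development",
#                 "Reviewer B": "survey methods in social science"})
```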

But then again, what I actually see is mostly disappointing. Take Publons as an example. The platform they provide seems like a replica of a social media site, and all the typical trappings of social media are present: mostly empty content, “reviews” that are misrepresented by obscure metrics, peers who are “peer reviewing” themselves, “reviews” that amount to a few sentences, and so forth. Didn’t ResearchGate already demonstrate that “gamification” and “badges” do not work in academia? Nor are such things substitutes for old-fashioned peer review.

Colleagues may be interested in the peer reviewing system that was used by the British Journal of Educational Technology for many years prior to a change of editorship in 2016 (see Hartley, Cowan & Rushby, 2016). Here a panel of regular volunteer reviewers was established and updated annually. Each month the editor (Nick Rushby) e-mailed the panel a list of anonymous abstracts from recent submissions and invited interested people to bid to review them. This system had the advantage for the editor that he did not have to select different reviewers for every paper deemed appropriate for the journal, and for reviewers that they could choose to review papers they felt comfortable reviewing, or indeed would like to read. For more details see:
Hartley, J., Cowan, J., & Rushby, N. (2016). Peer choice – does reviewer self-selection work? Learned Publishing, 29, 27-29. doi:10.1002/leap.1010

I apologize if I sound negative… but speaking as a basic optimist who’s done quite a bit of research into reviewer training programs: it’s nice that faith in them persists undaunted, but so far none have been shown to have any lasting real-world value (other than a transient burst of self-confidence among the trainees). Let’s devise and develop one that can actually produce the promised results; then the time invested would be well worthwhile.
