Michael Eisen recently became the Editor in Chief of eLife, and he has wasted no time in laying out his vision for the future of eLife and for peer review more generally. In a series of tweets on July 27th, he presented two new initiatives that eLife will be launching in the near future. I’ve pasted these tweets below (concatenated into paragraphs) and added commentary of my own: I think one initiative is actually a rehash of a fairly common and not particularly effective practice, and the other could be a game changer.
I’ll say at the outset – as I’ve said many times before – I think journals are an anachronism — a product of the historical accident that the printing press was invented before the Internet. I want to get rid of them. More specifically, I want to get rid of pre-publication peer-review and the whole “submit – review – accept/reject – repeat” paradigm through which we evaluate works of science and the scientists who produced them. This system is bad for science and bad for scientists.
The review-reject-review cycle really is one of the big inefficiencies in the current system, and arises from a conflict between two of journals’ main roles: the journal uses peer review to improve research articles, but only wants to expend that effort on articles that are likely to pass through its filter for subject area, robustness, and novelty. Weaker or out-of-scope articles are ideally rejected quickly so that they can go somewhere else.
Evaluating works at a single fixed point in time, and recording the results of this evaluation in a journal title is an absurd and ineffective way to evaluate science — it’s also slow, expensive, biased, unfair and distorting.
This theme comes up a bit later – the idea that the article receives a (semi-)formal peer review process at multiple time points, from posting on a preprint server to being ‘published’ somewhere, and even afterwards. What’s less clear is whether the article should be revised by the authors each time.
A clear alternative is emerging — we should let scientists publish their work when they want to on @biorxivpreprint or the equivalent, and then review and curate these works in many different ways, throughout the useful lifetime of a paper. In this future world we’ll still have peer review — and maybe things that look a bit like journals to organize it — but it will differ in many crucial ways:
1) It won’t be exclusive — anyone, or any group, should be able to review a paper, so long as they do so constructively. It’s silly to leave something as important as evaluating the validity and contributions of a work based on the judgment of 3 people
Anyone involved in journal management can tell you that peer review never happens spontaneously – the best one can hope for is scattershot commenting from people with varying levels of expertise. The system that Prof. Eisen views as an anachronism is the way it is precisely because quality peer review is so damned hard to organize.
The system has several key strengths: a) a third party (the Editor) selects the reviewers and acts as a guarantor of the reviewers’ expertise; b) because three or so people have agreed to review, the Editorial Office can chase them until they return their comments; c) the responsibility for deciding whether the article can be published (in that journal) lies solely with the Editor and these reviewers, and not with some diffuse group with limited responsibility and engagement. That responsibility focuses minds and promotes quality assessment.
Moreover, all articles need to go through peer review, but commenters generally only focus on a small fraction of articles (e.g., in a presentation earlier this year, CSHL Press’ John Inglis stated that only around 8% of articles on bioRxiv receive comments). One could counter that articles with no comments are of little importance and their flaws are inconsequential, but that’s not true in many situations. For example, decisions about new clinical practices are based on all of the relevant literature, and it would be dangerous if most of these articles had not been thoroughly reviewed and corrected beforehand.
As we see below, Prof. Eisen is indeed picturing a more structured peer review process at multiple time points, but these review processes will only be effective if Editorial Office infrastructure exists to support them.
2) It won’t just happen once — there’s value in assessing a paper at the time it’s published, but also in looking at it a year, 5 years, 100 years later.
Articles are assessed all the time: whenever a researcher reads one, whenever a review article is written, or whenever a thesis/hiring/tenure/funding committee looks at the work of an applicant. The next steps are what matter more for peer review: the reader’s opinions are written out, those comments are communicated to the authors (either signed, or anonymously with the reader’s expertise guaranteed by a third party), and the authors revise their article in response to the comments.
What would this process look like outside of the structure of pre-publication journal peer review? Would the authors really bother to update their article if it has already been published? What if they revise it, and post a new version, and then receive another (unsolicited) set of peer reviews? Would they really put in the effort to revise it again, particularly when there’s no net benefit to them? What happens if authors have to update a core finding of a landmark article (e.g. E=mc^3)? All of the subsequent literature must then be updated, which could descend into chaos. This scenario of cascading revisions is prevented under the current system by the tradition of a ‘version of record’, where the published version is the definitive version, and it’s very hard to change. Changes, or new results, are written up as a new, separate article, with its own version of record. If we stick to having versions of record, putting the effort into a structured peer review process for a published article is unsatisfying because there’s no prospect of the article being updated in response.
3) We won’t record the results of peer review in journal titles — it’s really crazy that we reduce something as multidimensional as the assessment of a work of science to a single yes/no decision (especially given how poisonous this system has become).
As mentioned above, acceptance to a journal is a signal to readers that effectively says “4-6 of your colleagues (reviewers, Editor(s), Chief Editor) thought this article was sufficiently relevant, robust and important to be included in our journal”. That helps the reader filter the literature much more effectively than just trawling through random articles on bioRxiv.
However, this filter signal is currently limited because articles can only be accepted to one journal. Although he didn’t mention them explicitly, I suspect this is where Prof. Eisen sees overlay journals coming in: a preprint can be peer reviewed by any entity that resembles an editorial board and accepted or rejected for their particular title. Another overlay journal can come and do the same; acceptance at both overlay journals signals that the article should be of interest to both sets of readers. By this view, publication in a journal is just a label, and anyone with Editorial Office infrastructure and an Editorial board can make labels.
Life is more complicated for overlay journals when we consider the stability offered by the version of record. Which overlay journal gets to host the ‘version of record’? If we decide nobody, then multiple different versions of the same article will appear in different overlay journals. Should researchers cite both? Or just the version that best supports their current argument? These issues are not insurmountable, but it will be a while before a consistent ‘best practice’ is established.
Prof. Eisen next moves onto his initiatives:
Initiative 1: Publish, for accepted papers, a statement from editors explaining why the editors selected it for @eLife. The goal is to shift people’s attention away from the journal title as a measure of a paper’s value — which we all know doesn’t work — and towards the specific contributions of a given work of science.
This sounds potentially useful, provided that the editors won’t find it too burdensome, and that it doesn’t degenerate into broad statements about how the article meets the journal’s criteria for importance, robustness, and relevance.
If we do this right, we hope it will become a standard for all journals. And while it won’t immediately get rid of journal titles, we hope it will begin to undermine them.
Many journals ask authors to supply importance statements about their work, and these don’t seem to have led to the demise of journals. Indeed, having Editor-written statements may make journals that can afford such an effort even more attractive to authors. I therefore don’t see this initiative moving the needle in the way Prof. Eisen hopes.
Of course this only goes part way, as it will only apply to papers we accept. In the long run we don’t want to accept or reject papers — rather we just want to assess them all by simply saying what the reviewers and editors think of the work without any kind of seal of approval. Which leads to …
One of the things that I find most frustrating about pretty much all journals (save those like @PLOSONE) is “triage” or “desk rejects” or whatever you want to call it — the process by which editors decide whether a paper should go out for review. If you accept the current journal system, there is a certain logic to this — if a paper has little chance of being published in a journal even after peer review, it’s a waste of everyone’s time and effort to review it. The problem is, of course, that it’s really hard to make a judgment about the audience, impact, value — whatever criteria you care about — of a paper without reading and thinking about it in detail — i.e. peer reviewing it. So the process ends up being incredibly subjective and immensely frustrating to authors, who feel like their work wasn’t given a fair shake. And this subjective pre-judging is really impactful, serving as it does as a gateway to journals that can make or break your career.
A much saner system is, as I said above, to simply review everything and instead of deciding which pigeon hole a paper belongs in, just publish the review along with the paper. And Initiative 2 is exactly this — once we get the details worked out, @eLife will begin offering a service by which we will review papers posted on @biorxivpreprint without triage (there will be limited capacity, but it will be 1st come 1st served).
We will then review the paper just like we do papers submitted in the traditional system — but instead of sharing the reviews only with the authors, they will be posted back to @biorxivpreprint, and will be written to be useful to the public. Some of these papers will be, perhaps following revisions, published in @eLife, so people can still participate in the traditional publishing system even as they’re also stepping into the future.
I’m admittedly a biased observer (hence my decision to write this post), but this is a great and radical step forward. Prof. Eisen is essentially proposing to convert the eLife Editorial board into a version of Axios Review. Axios was an independent peer review organization I founded in 2014 and folded in 2017. The proposed service also has a fair amount in common with F1000 Research.
As I mentioned above, peer review has two main processes: ‘improve’ via peer review and revision, and ‘filter’ via acceptance and publication in a particular title. Journals only want to dedicate their ‘improve’ effort to articles that have a chance of being published with them, so they apply the crude ‘filter’ of desk rejections first. Prof. Eisen is suggesting that eLife apply their ‘improve’ effort to a much wider range of articles (anyone with a preprint that asks), and then ‘filter’ some into eLife. For articles coming via this route, eLife essentially becomes an overlay journal.
I honestly think this could be a great success: the chance of acceptance at eLife will be a strong draw for many authors, with quality reviews being the pay-off for authors who aren’t selected. They may even want to introduce the extra step of acting as a brokerage for the review metadata (e.g., reviewer identities), passing those and the reviews along to other journals that might be interested in the article. Axios Review found that 85% of articles submitted to an interested journal ended up being accepted, with half not being re-reviewed by the journal.
The availability of quality assessments for a higher proportion of preprints will further entrench the preprint system, and will hopefully spur the formation of other overlay journals that make use of these reviews (particularly if they can access the review metadata).
There are three big questions. First, who pays for this? eLife’s backers have a lot of resources, and it’s possible that they’re willing to bankroll a peer review service for tens of thousands of articles per year because it fits with their mission. This may work for well-funded fields like the biomedical areas covered by eLife, but who would cover costs for less well-funded areas of research? There are inherent risks in letting funders assess their own research, as there’s pressure to reaffirm the wisdom of one’s funding decisions, but that’s less of an issue here because only a fraction of the reviewed articles will end up in eLife. If this approach is very successful, I envision they will eventually charge a submission fee of a few hundred dollars to cover the costs of managing the peer review process. Again, this favors well-funded researchers and fields.
Second, how will the service handle reviewer anonymity? The research community tells us again and again that anonymity is vital for permitting frank assessment and protecting reviewers (particularly those from outside the white male establishment) from retribution. Prof. Eisen envisions the reviews from the eLife review service being reposted to bioRxiv, but will all the reviewer identities be visible? If not, then it is hard for readers to assess the expertise behind a review. On the other hand, enforcing open identities in peer review will deter a substantial fraction of potential reviewers and may blunt criticism. Instead, I would recommend allowing authors to decide whether their reviews are posted to bioRxiv, and letting reviewers decide whether their reviews are signed. In either case, the bioRxiv entry for that article should still note that it has been reviewed by eLife and that the reviews and metadata (reviewer/Editor identities) are available to legitimate third parties, such as other overlay journals.
Third, will the eLife review service get swamped by thousands of low-quality articles? All but the worst should still get reviewed (as that’s what this service is about), but as anyone associated with a megajournal can tell you, getting thousands of boring, poorly written articles through peer review is very challenging. This issue may be worse for a service, as lazy authors may just solicit reviewer feedback rather than thinking hard about the article themselves.
While the authors of these low-quality articles might be deterred by a submission fee, my experience with Axios showed that it’s researchers in high-profile labs who are most opposed to paying for peer review: many simply cannot grasp that while reviewers and editors are volunteers, the editorial office staff are paid professionals and every manuscript goes through their hands multiple times. A fee could therefore end up steering away the high-quality articles that keep the Editors and reviewers engaged and enthusiastic.
I suspect that eLife will keep this service free for as long as possible while gauging demand and getting a handle on the feasibility of handling large numbers of submissions. It’s certainly a very interesting development, and eLife has the prestige and experience to make it a success.