Michael Eisen recently became the Editor in Chief of eLife, and he has wasted no time in laying out his visions for the future of eLife and for peer review more generally. In a series of tweets on July 27th, he presented two new initiatives that eLife will be launching in the near future. I’ve pasted these tweets below (concatenated into paragraphs) and added commentary of my own: I think one initiative is actually a rehash of a fairly common and not particularly effective practice, and the other could be a game changer.

I’ll say at the outset – as I’ve said many times before – I think journals are an anachronism — a product of the historical accident that the printing press was invented before the Internet. I want to get rid of them. More specifically, I want to get rid of pre-publication peer-review and the whole “submit – review – accept/reject – repeat” paradigm through which we evaluate works of science and the scientists who produced them. This system is bad for science and bad for scientists.

The review-reject-review cycle really is one of the big inefficiencies in the current system, and arises from a conflict between two of journals’ main roles: the journal uses peer review to improve research articles, but only wants to expend that effort on articles that are likely to pass through its filter for subject area, robustness, and novelty. Weaker or out-of-scope articles are ideally rejected quickly so that they can go somewhere else.

Evaluating works at a single fixed point in time, and recording the results of this evaluation in a journal title is an absurd and ineffective way to evaluate science — it’s also slow, expensive, biased, unfair and distorting.

This theme comes up a bit later – the idea that the article receives a (semi-)formal peer review at multiple time points between posting on a preprint server and being ‘published’ somewhere, and even after that. What’s less clear is whether the article should be revised by the authors each time.

A clear alternative is emerging — we should let scientists publish their work when they want to on @biorxivpreprint or the equivalent, and then review and curate these works in many different ways, throughout the useful lifetime of a paper.  In this future world we’ll still have peer review — and maybe things that look a bit like journals to organize it — but it will differ in many crucial ways:

1) It won’t be exclusive — anyone, or any group, should be able to review a paper, so long as they do so constructively. It’s silly to leave something as important as evaluating the validity and contributions of a work based on the judgment of 3 people

Anyone involved in journal management can tell you that peer review never happens spontaneously – the best one can hope for is scattershot commenting from people with varying levels of expertise. The system that Prof. Eisen views as an anachronism is the way it is precisely because quality peer review is so damned hard to organize.

The system has several key strengths: a) a third party (the Editor) selects the reviewers and acts as a guarantor of the reviewers’ expertise; b) because three or so people have agreed to review, the Editorial Office can chase them until they return their comments; and c) the responsibility for deciding whether the article can be published (in that journal) lies solely with the Editor and these reviewers, and not some diffuse group with limited responsibility and engagement. That responsibility focuses minds and promotes quality assessment.

Moreover, all articles need to go through peer review, but commenters generally only focus on a small fraction of articles (e.g., in a presentation earlier this year, CSHL Press’ John Inglis stated that only around 8% of articles on bioRxiv receive comments). One could counter that articles with no comments are of little importance and their flaws are inconsequential, but that’s not true in many situations. For example, decisions about new clinical practices are based on all of the relevant literature, and it would be dangerous if most of these articles had not been thoroughly reviewed and corrected beforehand.

As we see below, Prof. Eisen is indeed picturing a more structured peer review process at multiple time points, but these review processes will only be effective if Editorial Office infrastructure exists to support them.

2) It won’t just happen once — there’s value in assessing a paper at the time it’s published, but also in looking at it a year, 5 years, 100 years later.

Articles are assessed all the time, whenever a researcher reads one, a review article is written, or a thesis/hiring/tenure/funding committee looks at the work of an applicant. The next steps are what matter more for peer review: the reader’s opinions are written out, those comments are communicated to the authors (either signed, or anonymously with the reader’s expertise guaranteed by a third party), and the authors revise their article in response to the comments.

What would this process look like outside of the structure of pre-publication journal peer review? Would the authors really bother to update their article if it has already been published? What if they revise it, post a new version, and then receive another (unsolicited) set of peer reviews? Would they really put in the effort to revise it again, particularly when there’s no net benefit to them? What happens if authors have to update a core finding of a landmark article (e.g., E=mc^2 turns out to be E=mc^3)? All of the subsequent literature must then be updated, which could descend into chaos. This scenario of cascading revisions is prevented under the current system by the tradition of a ‘version of record’, where the published version is the definitive version and it’s very hard to change. Changes, or new results, are written up as a new, separate article with its own version of record. If we stick to having versions of record, putting the effort into a structured peer review process for a published article is unsatisfying because there’s no prospect of the article being updated in response.

3) We won’t record the results of peer review in journal titles — it’s really crazy that we reduce something as multidimensional as the assessment of a work of science to a single yes/no decision (especially given how poisonous this system has become).

As mentioned above, acceptance to a journal is a signal to readers that effectively says “4-6 of your colleagues (reviewers, Editor(s), Chief Editor) thought this article was sufficiently relevant, robust, and important to be included in our journal”. That helps the reader filter the literature much more effectively than just trawling through random articles on bioRxiv.

However, this filter signal is currently limited because articles can only be accepted to one journal. Although he didn’t mention them explicitly, I suspect this is where Prof. Eisen sees overlay journals coming in: a preprint can be peer reviewed by any entity that resembles an editorial board and accepted or rejected for their particular title. Another overlay journal can come and do the same; acceptance at both overlay journals signals that the article should be of interest to both sets of readers. By this view, publication in a journal is just a label, and anyone with Editorial Office infrastructure and an Editorial board can make labels.

Life is more complicated for overlay journals when we consider the stability offered by the version of record. Which overlay journal gets to host the ‘version of record’? If we decide nobody, then multiple different versions of the same article will appear in different overlay journals. Should researchers cite both? Or just the version that best supports their current argument? These issues are not insurmountable, but it will be a while before a consistent ‘best practice’ is established. 

Prof. Eisen next moves onto his initiatives:

Initiative 1: Publish, for accepted papers, a statement from editors explaining why the editors selected it for @eLife. The goal is to shift people’s attention away from the journal title as a measure of a paper’s value — which we all know doesn’t work — and towards the specific contributions of a given work of science. 

This sounds potentially useful, provided the editors don’t find it too burdensome and it doesn’t degenerate into broad statements about how the article meets the journal’s criteria for importance, robustness, and relevance.

If we do this right, we hope it will become a standard for all journals. And while it won’t immediately get rid of journal titles, we hope it will begin to undermine them.

Many journals ask authors to supply importance statements about their work, and these don’t seem to have led to the demise of journals. Indeed, having Editor-written statements may make journals that can afford such an effort even more attractive to authors. I therefore don’t see this initiative moving the needle in the way Prof. Eisen hopes.

Of course this only goes part way, as it will only apply to papers we accept. In the long run we don’t want to accept or reject papers — rather we just want to assess them all by simply saying what the reviewers and editors think of the work without any kind of seal of approval. Which leads to … 

One of the things that I find most frustrating about pretty much all journals (save those like @PLOSONE) is “triage” or “desk rejects” or whatever you want to call it — the process by which editors decide whether a paper should go out for review. If you accept the current journal system, there is a certain logic to this — if a paper has little chance of being published in a journal even after peer review, it’s a waste of everyone’s time and effort to review it. The problem is, of course, that it’s really hard to make a judgment about the audience, impact, value — whatever criteria you care about — of a paper without reading and thinking about it in detail — i.e. peer reviewing it. So the process ends up being incredibly subjective and immensely frustrating to authors, who feel like their work wasn’t given a fair shake. And this subjective pre-judging is really impactful, serving as it does as a gateway to journals that can make or break your career.

A much saner system is, as I said above, to simply review everything and instead of deciding which pigeon hole a paper belongs in, just publish the review along with the paper. And Initiative 2 is exactly this — once we get the details worked out, @eLife will begin offering a service by which we will review papers posted on @biorxivpreprint without triage (there will be limited capacity, but it will be 1st come 1st served).

We will then review the paper just like we do papers submitted in the traditional system — but instead of sharing the reviews only with the authors, they will be posted back to @biorxivpreprint, and will be written to be useful to the public. Some of these papers will be, perhaps following revisions, published in @eLife, so people can still participate in the traditional publishing system even as they’re also stepping into the future.

I’m admittedly a biased observer (hence my decision to write this post), but this is a great and radical step forward. Prof. Eisen is essentially proposing to convert the eLife Editorial board into a version of Axios Review. Axios was an independent peer review organization I founded in 2014 and folded in 2017. The proposed service also has a fair amount in common with F1000 Research.

As I mentioned above, journals have two main roles: ‘improve’ via peer review and revision, and ‘filter’ via acceptance and publication in a particular title. Journals only want to dedicate their ‘improve’ effort to articles that have a chance of being published with them, so they apply the crude ‘filter’ of desk rejections first. Prof. Eisen is suggesting that eLife apply their ‘improve’ effort to a much wider range of articles (anyone with a preprint who asks), and then ‘filter’ some into eLife. For articles coming via this route, eLife essentially becomes an overlay journal.

I honestly think this could be a great success: the chance of acceptance at eLife will be a strong draw for many authors, with quality reviews being the pay-off for authors whose articles aren’t selected. eLife may even want to introduce the extra step of acting as a brokerage for the review metadata (e.g., reviewer identities), passing those and the reviews along to other journals that might be interested in the article. Axios Review found that 85% of articles submitted to an interested journal ended up being accepted, with half not being re-reviewed by the journal.

The availability of quality assessments for a higher proportion of preprints will further entrench the preprint system, and will hopefully spur the formation of other overlay journals that make use of these reviews (particularly if they can access the review metadata). 

There are three big questions. First, who pays for this? eLife’s backers have a lot of resources, and it’s possible that they’re willing to bankroll a peer review service for tens of thousands of articles per year because it fits with their mission. This may work for well-funded fields like the biomedical areas covered by eLife, but who would cover the costs for less well-funded areas of research? There are inherent risks in letting funders assess their own research, as there’s pressure to reaffirm the wisdom of one’s funding decisions, but that’s less of an issue here because only a fraction of the reviewed articles will end up in eLife. If this approach is very successful, I envision they will eventually charge a submission fee of a few hundred dollars to cover the costs of managing the peer review process. Again, this favors well-funded researchers and fields.

Second, how will the service handle reviewer anonymity? The research community tells us again and again that anonymity is vital for permitting frank assessment and protecting reviewers (particularly those from outside the white male establishment) from retribution. Prof. Eisen envisions the reviews from the eLife review service being reposted to bioRxiv, but will all the reviewer identities be visible? If not, then it is hard for readers to assess the expertise behind each review. On the other hand, enforcing open identities in peer review will deter a substantial fraction of potential reviewers and may blunt criticism. Instead, I would recommend allowing authors to decide whether their reviews are posted to bioRxiv, and letting reviewers decide whether their reviews are signed. In either case, the bioRxiv entry for that article should still note that it has been reviewed by eLife and that the reviews and metadata (reviewer/Editor identities) are available to legitimate third parties, such as other overlay journals.

Third, will the eLife review service get swamped by thousands of low-quality articles? All but the worst should still get reviewed (as that’s what this service is about), but as anyone associated with a megajournal can tell you, getting thousands of boring, poorly written articles through peer review is very challenging. This issue may be worse for a service, as lazy authors may just solicit reviewer feedback rather than thinking hard about the article themselves. 

While the authors of these low-quality articles might be deterred by a submission fee, my experience with Axios showed that it’s researchers in high-profile labs who are much more opposed to paying for peer review: many simply cannot grasp that while reviewers and editors are volunteers, the editorial office staff are paid professionals, and every manuscript goes through their hands multiple times. A fee could therefore end up steering away the high-quality articles that keep the Editors and reviewers engaged and enthusiastic.

I suspect that eLife will keep this service free for as long as possible while gauging demand and getting a handle on the feasibility of handling large numbers of submissions. It’s certainly a very interesting development, and eLife has the prestige and experience to make it a success. 

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Discussion

12 Thoughts on "Two New Initiatives at eLife To Start the Eisen Era"

These ideas seem to miss an incredibly important part of our information ecosystem – there are people who are not scholars who are paying attention and may well inject themselves into the review stream.
This raises two huge problems: (1) the popular press as well as political partisans grabbing onto research results and convincing the public and politicians to act on them before any real high-quality vetting has taken place. Imagine the flood of garbage science about climate change, racial differences, anti-LGBT, etc. being mixed in with serious science/social science on these preprint platforms, and the media picking them up. Consider just the anti-vaxxers and how difficult it has been to kill the recycling of one retracted article and the public health consequences.
(2) those same kinds of non-scholarly-motivated people flooding the review sections with biased comments: “anyone, or any group, should be able to review a paper, so long as they do so constructively.” Who judges ‘constructively’? Take a look at the comments attached to so many science articles on the NYT and WashPost websites, for a warning about what’s going to happen. When 5 serious reviews are drowned in 2,000 ignorant partisan ones, what value is this system? The current system has flaws, but eliminating the very idea of pre-publication expert filter review in today’s social/political climate is most definitely the wrong way to go.

In the literary classic Kim by Rudyard Kipling, there is a character introduced as a letter writer in a bazaar, whose shingle identifies him as an M.D. (failed). The logic is that he is advertising to his customers that he passed the entrance exams to Med School but could not graduate. By analogy, it will be interesting to see how many @biorxivpreprint submitters volunteer for a PUBLIC label of Rejected by @eLife …

Perhaps this is an area where machine learning can play a role, by performing some of this triage process. Promising to manually review every submission doesn’t seem scalable, whereas models can be built to verify the document structure, methods, citations and stats to at least provide a baseline report/suggestions for improvement.
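As a purely hypothetical illustration of the kind of automated check this comment describes (not anything eLife or bioRxiv actually runs), something like the following could flag structural gaps before any human reviewer is involved; the expected section names and the regular expressions are illustrative assumptions only:

```python
import re

# Sections a typical research manuscript is expected to contain.
# These names are illustrative assumptions, not a real journal checklist.
EXPECTED_SECTIONS = ["abstract", "introduction", "methods",
                     "results", "discussion", "references"]

def triage_report(manuscript_text: str) -> dict:
    """Return a crude baseline report on a manuscript's structure."""
    text = manuscript_text.lower()
    missing = [s for s in EXPECTED_SECTIONS if s not in text]
    # Very rough proxies for statistical reporting and citation density.
    reports_p_values = bool(re.search(r"p\s*[<=]\s*0?\.\d+", text))
    approx_citations = len(re.findall(r"\([A-Za-z]+ et al\.?,? \d{4}\)",
                                      manuscript_text))
    return {
        "missing_sections": missing,
        "reports_p_values": reports_p_values,
        "approx_citation_count": approx_citations,
    }

# Example: a manuscript missing most sections gets flagged immediately.
print(triage_report("Abstract ... Results: p < 0.05 (Smith et al., 2018)"))
```

Even a crude report like this could route obviously incomplete submissions back to authors before anyone is asked to spend reviewing time on them.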

I’d really like to hear about how retraction works in this vision. How is the literature “purged” … or isn’t it?

Two major problems with the “eLife as reviewer of bioRxiv” idea. First, about 32% of bioRxiv papers consistently don’t get published in a peer-reviewed venue (https://thegeyser.substack.com/p/biorxiv-and-abandoned-preprints). This suggests that 1/3 of the effort will be wasted, finding thousands of preprints that don’t belong in the literature and spending time on them. Second, my recent sampling of 1,200+ published papers based on preprints showed that 57% of the papers were posted AFTER they were submitted to the journal that would ultimately publish them (https://thegeyser.substack.com/p/biorxiv-authors-mostly-post-after). The number of days of posting after submission has also doubled in the past few years, from 20 to 40, suggesting that authors are more cautious about using bioRxiv for pre-publication review. If this sampling reflects a larger truth about how bioRxiv is being used by authors, then eLife could be reviewing preprints that are already spoken for. Added together, for every 100 preprints, 32 could be predictably unacceptable; of the remainder, about 40 would be spoken for. That would leave eLife with 28 preprints to look at for every 100, and no way to know which 28 of the 100 would be available to them or good enough to look at. And the rate of available and viable papers is probably lower than that, because only 29% of papers published from preprints I sampled were deposited in bioRxiv more than 10 days before being submitted to the journal that would ultimately publish them.
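Making the back-of-envelope arithmetic in this comment explicit (using its 32% and 57% figures, and its rounding of 57% of 68 up to about 40):

\[
100 - 32 = 68, \qquad 68 \times 0.57 \approx 39\text{--}40, \qquad 68 - 40 \approx 28
\]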

If I were Eisen, I’d think twice about devoting a lot of eLife resources to reviewing papers that are on what is increasingly a marketing platform for authors of papers that are either substandard or already well on their way to acceptance elsewhere. As for the speculation that potential publication in eLife would be a strong draw for authors, eLife has consistently accounted for 5% of published preprints over the past 5 years. OUP, Nature, Elsevier, PLOS, and BMC all publish more papers with associated preprints in bioRxiv.

I think they’d only be reviewing articles at the authors’ request, rather than spontaneously, so (presumably) these won’t have been submitted anywhere else just yet. So, I don’t think we can safely predict what will happen in this experiment from a sampling of current bioRxiv usage.

If that’s the case, based on what I see as far as eLife’s appeal to bioRxiv authors, that’s even worse news, as only 5% of the preprints that result in published papers end up with eLife. Also, how is this different, then? Isn’t “review upon author request” just “submission”?

The critical difference is that in this case both “submission” and “rejection” are very very public and known to all…

Perhaps we should start a journals program that consists of rejected papers, bogus papers, nonsense papers, what if papers, and is it possible papers!

From a pragmatic point of view, this effort is similar to many attempts to effect change in scholarly communications, in that it assumes that by building a tool or offering a service, the culture of the research community will change in order to use that tool/service. Unfortunately, culture, and how things “are”, tends to hold sway over how things “should” be, and no matter how useful the tool/service, the motivations for using it won’t exist until the culture drives them.

In this case, as was pointed out above in a comment by Mike Fainzilber, the public nature of the critical reviews is problematic. Science, and research in general, runs on a reputation economy — what jobs you get and how much funding you receive largely depend on what your colleagues think of you and your work. No one seems eager to share their flaws and mistakes publicly, which is what is being risked here. It is one thing to share the reviews on a piece of work that is seen as successful and acceptable, and quite another to publicly be the source of something that is labeled flawed and unacceptable. Considering that there are many other journals of eLife’s stature that one could send an article to for review which wouldn’t result in a permanent public record of rejection, I’m not sure how many will deem it worth the risk.

I’d also be concerned that if it took off, it would consolidate power considerably among a small number of scientists and editors. If every paper in the life sciences is run through an eLife-directed public review process, then does that small elite group become the arbiters for all work being done in the field? Does this concentration further drive subjectivity rather than reducing it?

One other question — I’m still trying to understand the problem that overlay journals solve. They certainly relieve the journal from hosting the paper itself, but given that an overlay journal must have its own website hosting tables of contents and information, and must have its own submission/peer review system, how much of a savings is incurred by not hosting a pdf of the paper itself? If, as Tim suggests here, the advantage would be that a paper could be published in many different journals to reach many different audiences, then is it better to have one generic version of the paper for all audiences, or instead to allow authors to publish their paper multiple times, with each version geared toward a specialized audience reached by the journal in question?
