Founded in 2012 with funding from Howard Hughes Medical Institute, the Max Planck Society and Wellcome, eLife spent the first seven years of its life in relatively traditional form: that is to say, a highly selective journal, albeit with an article processing charge (APC) business model and an innovative, collaborative peer review model. When Mike Eisen was appointed as Editor in Chief in 2019, many of us expected this to change. And indeed, Mike set out his vision and intentions early on via this and other tweets:
I’ll say at the outset – as I’ve said many times before – I think journals are an anachronism — a product of the historical accident that the printing press was invented before the Internet. I want to get rid of them. More specifically, I want to get rid of pre-publication peer-review and the whole “submit – review – accept/reject – repeat” paradigm through which we evaluate works of science and the scientists who produced them. This system is bad for science and bad for scientists.
And we’ve not been disappointed – eLife recently announced a radical shift in its model. From next year, eLife will no longer make accept/reject decisions at the end of the peer-review process; rather, all papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife Assessment and public reviews. Authors will also be able to include a response to the assessment and reviews. The decision on what to do next will then be entirely in the hands of the author, whether that’s to revise and resubmit or to declare the paper the final Version of Record (VOR).
Many tweets have flown across the worlds of scholarly communication and research over the past few weeks; a number in support, but as always, many to point out the perceived flaws. And so today in my conversation with Damian Pattinson, Executive Director of eLife, I want to focus on understanding the principles underlying this change, the problems that eLife is trying to solve, and what we should watch out for to learn from this bold move.
Whether or not it succeeds in this exact form or is the right solution for all journals and publishers, it’s the kind of radical experimentation that we need. As I discussed in my recent post, scholarly publishing has stagnated and it’s become too easy to shift the responsibility for this to other stakeholders. While there are many positive movements elsewhere (such as the Agreement on Reforming Research Assessment in Europe and the National Academies’ Roundtable on Aligning Incentives for Open Science in the USA), eLife’s new model shows that publishers can (and should) lead with solutions too. I hope that it’s one of a growing number – too many global challenges depend on a faster, more open, and more equitable system for sharing research.
It seems to me that the primary problem eLife is seeking to disrupt with this model is the tight coupling of peer review and dissemination (publication). Combined with the dysfunctional academic system of rewards and incentives, this is a key driver of problems in the current system whether we’re talking about speed, efficiency, or bias. Does this align with your thinking — are there other problems you’re seeking to address with these changes?
Damian: Yes, I think that’s it, although I don’t think we should underestimate the importance of speed. We have all become so used to a world in which research is held up for months, if not years, and seem to have forgotten that these delays can literally cost lives. The pandemic showed us that fast access to research is not only possible but necessary, and we feel that this needs to be the standard for the entire scientific enterprise, not just for disease outbreaks. So, if we’re moving to a world where research is shared as soon as it is ready, then you also need a system where that research is vetted and reviewed as rapidly as possible. This is the idea behind Reviewed Preprints, which can be available within a few weeks of a work being shared. This is obviously a dramatic improvement on the timeframes we’ve all become accustomed to.
I’m interested to hear more about the eLife Assessments. We know that the current system encourages researchers to shape their research for specific journals (and often to excessive reviewer demands) and to focus on novel rather than robust results. Many had hoped that preprints would disrupt this, but so far researchers still want the signifier that comes with a journal brand. How do you see the eLife Assessments fitting into this framework — are you hoping that they can shift behavior more fundamentally (and in what ways)?
Damian: eLife Assessments are intended to free researchers from the restraints of arbitrary concepts of journal brand. Clearly, we have our own brand, but we want it to signify high-quality peer review rather than our perception of whether a particular discovery is interesting. Assessments allow editors to describe papers based on their significance and strength of evidence, using a predefined vocabulary, so that readers can see what a paper is like without having to rely on their own notions of what a particular journal title signifies. This to me is the most exciting part of the whole process — we’ve talked for years about how it’s impossible for journals to give equal weight to novelty and rigor in their publishing decisions, and novelty has always won out (since that’s the thing journal metrics more easily convey). By describing work on both axes, and removing the binary accept/reject decision, we’re creating possibilities for totally new forms of assessment.
I agree that this is a really important point. We spend a lot of time bemoaning impact factors yet there’s not been that much experimentation to find viable alternatives. But at this stage, an eLife Assessment isn’t going to fit into the frustratingly narrow assessment frameworks used by institutions, funders, and others on whom careers depend. The stakes are especially high for early career researchers. That you’re launching this experiment from a trusted and respected brand will likely mitigate some of this. But thinking back to the early years at PLOS, one of the key enablers was the number of established, high-profile scientists who chose to move their work to PLOS. How are you thinking about this process of culture change at eLife?
Damian: Yes, this does require behavior change, and we’ve been working with experts from other industries to learn how we can best go about this. One thing we’ve learned is that it’s much easier to do when you already have the infrastructure to offer a genuine alternative, and a strong commitment to make it work, rather than just talking about the need for change. Everyone agrees that assessment criteria need to change, but the people doing the assessment need to know what else to look at. So, we have spoken to numerous funders and institutions to ask them to use our eLife Assessments as part of their evaluation process, and we’ll be making various announcements in the coming weeks to highlight the wide support we’ve received. We’re also seeking submissions from researchers who have published with eLife in the past and who now have the opportunity to show their support for this system. As Hilal Lashuel recently pointed out in The Scientist, this is a collective effort that will only succeed if everyone agrees to try alternative systems.
eLife will remain a selective journal. There clearly must be some filtering and I like the plan for editors to both select preprints for review and guarantee expert review. It also seems to me that this addresses the challenge of post-publication review to date: very few people bother! But some have expressed concerns about this initial selection process and lack of clarity about how these decisions will be made or what the criteria are for “identifying papers where the reviews will be of greatest public value”. How would you respond to concerns that editors could become “unduly powerful gatekeepers”?
Damian: They’re only unduly powerful gatekeepers if people keep thinking of this model as being a gate. The point is we are trying to move away from a world where “getting into” the journal has any value whatsoever. The only value is in the review we do. Now, of course some papers are more likely to get attention than others, and therefore have more of a need to be reviewed than others. So, we are using our limited capacity to focus on those papers for now. In the future we would like to go beyond our existing offering and review more papers, but we can’t be expected to do that from the outset. The important thing is that there is no link between the venue and the assessment, and therefore the idea of “getting into” eLife needs to be seen as far less significant than what we actually say about the paper.
I was very interested to see your approach to the VOR, which is currently very hard to change. In our recent interviews with different stakeholders at PLOS, we certainly heard diminished value placed on the traditional VOR concept and a desire for a more flexible, iterative process. How is eLife thinking about evolution of the VOR? Do you imagine that you will have articles that stop at the Reviewed Preprint stage? Given that this will be citable, what is the need for the VOR?
Damian: Yes, we really don’t place much importance on the Version of Record. Authors have little interest in, or even knowledge of, it as a concept. However, the current system does still place a lot of value on it, most notably the indexing services (PubMed, Web of Science, etc.), which, for now, are only willing to index VORs and not Reviewed Preprints.
That may change in time and that would obviously again change the value of the VOR in the eyes of the authors, but at the moment we do see the need for delivering a VOR in order to get the work indexed. We are also still doing a lot of work to create a production-level VOR. Our Reviewed Preprints are a re-rendering of the bioRxiv XML, which is quite different from the schema we use at eLife. We therefore re-typeset papers at the VOR stage and do a lot of checks to ensure they meet our production and editorial standards. Over time we plan to move some of these checks upstream to be included in the Reviewed Preprint rather than the VOR.
One thing I do like about the VOR is that it allows an author to say they’re “finished”. We often hear from authors that they are concerned that in a world of multiple versions, they will be expected to constantly update their papers, which will stop them from moving on with their research. Declaring a VOR allows them to say they consider the work done, and any new findings would be reported in a new paper.
Tell me more about the business model. You’re reducing the fee from $3,000 to $2,000 — is the goal for eLife to be sustainable without continued funding from your sponsors? It seems to me that, if an author follows the new process through to VOR, eLife will still be undertaking the same set of activities so I’m wondering how you’re able to reduce the cost?
Damian: Yes, we still have a goal to be financially sustainable, and we think we can do that at this price point. The reduced cost comes from the fact that we are now charging more authors for publication of a Reviewed Preprint, and so can split the costs more widely. In the old system we were only charging publication fees to accepted authors, so the costs for all the papers we reviewed and rejected were being borne by those authors. This is a problem that has been highlighted with APCs in general and one that we feel we have a chance to improve in this new system.
Because we will now be asking for payment for all papers we review, about 40% more authors will pay for a Reviewed Preprint and, as a result, any individual author pays a lower fee. Of course, we’d like to move beyond author-paid publication fees altogether and will work with funders to come up with centralized payment systems. But in the meantime, this system does at least offer a more equitable approach to publication charges. And of course, we’ll continue to provide waivers for any authors who are unable to pay.
Response from researchers has been mixed. It’s been great to see the new model embraced by many, but it’s also received some significant criticism, perhaps most strongly in a THE opinion piece which described it as a “bait and switch”. Some are simply concerned about losing the “significant prestige” afforded by eLife’s old model. But I’m sure that you’ve thought through both potential researcher concerns about public criticism and ways in which they might “game” the new process (as discussed by some of my fellow Chefs in the latest issues of The Brief). How would you respond to these concerns?
Damian: There is no change in prestige. We continue to review using the same consultative review model, and the same people, as before. All that changes is how we confer that prestige — instead of doing it at the journal level, we’re doing it at the article level. So, getting into eLife is not the thing people should be hoping for — it’s a positive, even a glowing, assessment of their paper. As people get used to that notion, the incentive to game the system diminishes substantially. Authors are incentivized to make changes in order to improve their assessment, since assessments are openly visible on their paper. If they choose to declare a VOR without revising, their assessments will reflect this.
So, what does success look like for eLife? When you look back in 3-4 years’ time, how will you measure both the impact for eLife itself and the ecosystem more widely?
Damian: Well first we want to prove the model. That means delivering hundreds of Reviewed Preprints each month and giving authors the control to choose how they revise and index their work. But we also want to offer the system to other organizations who wish to move to preprint review workflows, in particular learned societies, who already can deliver expert review and are looking for new publishing options. Our funding from Wellcome and, more recently, CZI (in partnership with PREreview) is expressly designated to build the preprint review ecosystem, and our current technology roadmap is focused on delivering end-to-end, open-source publishing to anyone who wishes to participate. This includes peer review management (from Coko’s Kotahi system), Sciety (our preprint review aggregation tool), Reviewed Preprint display (based on Stencila’s Encoda tools), and VOR display (our Continuum system). We hope that in 3-4 years we’ll see a thriving ecosystem of preprint reviewing communities, with funders and institutions using their outputs to make decisions on funding and hiring based on the quality of an author’s research, not the journals they end up publishing in.