Years ago I was at a meeting that featured a talk by Cliff Lynch. I don’t remember where it was, and I don’t remember most of what he said. But one of his comments, which seemed (but may not have been) off the cuff, really stuck with me: “The purpose of collaboration is not to collaborate.” This principle had never occurred to me before, but once expressed in that way it seemed perfectly obvious: collaboration is something we engage in not for the purpose of “being collaborative,” but for the purpose of accomplishing tasks and solving problems that are best accomplished and solved in a collaborative way. And it then occurred to me that there were other concepts and principles that, similarly, are really means to an end but are sometimes invoked as if they were ends in themselves: innovation is one; change is another.
And another is disruption, which brings me to the topic of today’s post. But first, a little bit of background:
As many Scholarly Kitchen readers will know, Clarivate recently suspended indexing of eLife in its widely used Web of Science database. Clarivate explains its reason for doing so in an addendum to eLife’s entry in the Web of Science Master Journal List. Search for “eLife” there, and it comes up with the tag “On Hold”; click on that tag and you get the following explanation:
Concerns have been raised about the quality of the content published in this journal. The journal is being re-evaluated according to our selection criteria; new content will not be indexed during the course of the re-evaluation.
When the evaluation is complete, the publisher will be informed of the outcome and the journal will either:
- be removed from coverage if it no longer meets the quality criteria
- or remain covered if it continues to meet the quality criteria.
If the journal meets the quality criteria, any missing content will be indexed. If the journal is removed from coverage, content will not be backfilled. In the most serious cases of breaches in a journal’s editorial standards, as determined by our selection criteria, published content may be removed from Web of Science. Timeframes for completing a re-evaluation will depend on the particular circumstances of each case.
More specifically, the quality-control issue that led to eLife’s suspension was the fact that in 2023, eLife changed its publishing model such that it no longer makes publication decisions based on the outcome of peer review. As explained on its “Editorial Process” page, eLife subjects submissions to an initial review by editors; a paper deemed worthy of review is then sent to peer reviewers, and is subsequently published alongside its reviews, regardless of the reviewers’ assessment of its quality or validity. This practice falls afoul of Web of Science’s longstanding policy of giving “cover-to-cover coverage” only to journals that reject articles that don’t pass peer review. Clarivate uses the term “validated” to describe articles that have successfully passed peer review; journals that do publish non-validated articles alongside validated ones can get partial indexing as long as they are able to “provide a feed of content that is limited to articles that have been validated by peer review” – but these journals are not eligible for a Journal Impact Factor (JIF).
The response from eLife was largely defiant: “This decision from Web of Science stifles attempts to show how publishing and peer review can be improved using open science principles,” the platform provider said in a public statement the day after the suspension went into effect.
Subsequently, Clarivate notified eLife that its evaluation had led to the conclusion that “the cohort of content considered to have been ‘validated by peer review’ passes our 24 quality criteria. Therefore, we can continue to cover eLife in the Web of Science Core Collection. As per our policy, eLife will be partially indexed in ESCI, subject to the provision of the appropriate feed” (i.e., a feed limited to content that has successfully passed peer review).
Now, I’m not interested in litigating the upsides and downsides of traditional peer review here. This is an issue that deserves to be discussed, analyzed, and evaluated from multiple perspectives, and there’s no reason to expect (or hope for) universal agreement as to exactly how peer review should be done, or who should do it and when, or even whether it offers a net benefit to the scholarly community and the wider world.
What I do want to discuss, briefly, is a very interesting reaction from the Declaration on Research Assessment (DORA) that was published this week under the title “Clarivate’s actions regarding eLife: DORA’s response.”
In this document, DORA invokes the need for “constant innovation and renewal” in scholarly publishing, characterizing innovation as being “at the core of eLife’s policies and processes,” and accuses Clarivate of seeking to “(punish) innovation,” “shut down innovation,” and “hold the academic community to ransom.” It praises eLife for having “disrupted the traditional model of scholarly publishing” and laments Clarivate’s move as exemplifying “the overt challenges to disrupt and innovate in the scholarly publishing system.”
In other words, DORA’s statement takes it as given that “innovation” and “disruption” are worthy ends in themselves, and that Clarivate’s position is invalid because it represents a lack of support for those things. At no point does DORA’s statement address the substantive issue at the heart of the dispute between eLife and Clarivate, which is about quality and validity in scientific publishing, and the effectiveness of eLife’s publishing model in helping to safeguard it. Instead, the statement focuses entirely on eLife’s value as a “disruptor” of and “innovator” in the existing system – as if disruption and innovation were somehow, in themselves, ultimate goods that outweigh any possible negative outcomes particular attempts at innovation and disruption might lead to.
I was also struck by the way that DORA’s statement framed Clarivate’s actions as simply a reactionary response to a threat of disruption, rather than as a critical response to a genuine issue that needed to be addressed.
But this is a common problem, one that I’ve encountered among library managers and leaders from time to time throughout my career. Think about all the times a person in your organization has resisted change and immediately been dismissed as someone who just “hates change.” Of course, that may be accurate; the person in question may be genuinely reactionary. But it may also be that the person in question is perfectly in favor of change in principle, and simply objects to this particular change based on actual critical analysis of the reality on the ground.
Because obviously, there’s no such thing as “change” in the abstract. There are only specific changes to specific things, which may be for the better or for the worse. To defend the latter on the basis that they constitute change, and that change is good by definition, would obviously be ridiculous. And yet when we have conferred halos on particular words and concepts – “change,” “innovation,” “disruption,” etc. – it’s all too tempting to do just that, and to respond in a reflexive, knee-jerk manner whenever we encounter resistance to a particular example of change, innovation, or disruption. The problem, of course, is that when we react to resistance in this way, we lose the opportunity to hear what may be important insights about the implications of our proposed plan of action.
Circling back to DORA, eLife, and Clarivate: if you really believe (as both eLife and DORA claim to) that the Journal Impact Factor is “increasingly irrelevant” and is not needed to “show the impact that [journals] have within their communities,” then fair enough – you’ll find many who agree with you. Publish in whatever way you believe makes the most sense, and don’t worry about whether you have a JIF. But this is one of those things that you can’t really have both ways: if you care about having a JIF, then you have to publish in a way that doesn’t undermine the internal validity of the JIF. If you don’t believe that’s the right way to publish, and publicly proclaim that you “have never supported the Impact Factor and never wanted one,” then getting outraged about being denied a JIF doesn’t seem to make much sense.
Personally, I wish godspeed to eLife in its publishing project and continue to be happy to see a diversity of publishing models flourishing in the scholarly communication ecosystem. And I don’t see how eLife and DORA can possibly object to Clarivate’s decision not to fully index eLife and give it a JIF, given that eLife’s publishing model falls well outside the longstanding, well-publicized, and clearly explained criteria that Clarivate has established for indexed publications.
Discussion
26 Thoughts on "Disruption As an End in Itself: eLife’s Suspension and DORA’s Response"
Thank you, Rick. I couldn’t agree more. I’ve been, dare I say, flabbergasted at the outrage directed at Clarivate on this issue. Depositing content to Web of Science, and the entire JIF process, are entirely voluntary. Journals are not required to have a JIF. But if they choose to have one, then it only makes sense that they follow the rules. eLife, DORA, and others may not like the rules – and I respect that. Getting outraged over not getting something eLife claimed they never wanted highlights chutzpah at its finest.
Disclaimer: I have no horse in this race – neither Clarivate nor DORA.
Having a JIF is not voluntary – applying for coverage in Web of Science is.
eLife applied for and pursued indexing in Science Citation Index Expanded. Once selected, they became eligible for listing in JCR – the only restriction being that Clarivate (Thomson Reuters at the time) have complete and accurate data to support the calculation.
DORA signers do not have the choice to not have a JIF – they can and do choose not to promote/depend on/celebrate the fact. A little nit-picky, but a relevant distinction. Institutions sign DORA, after all.
It’s not that eLife was removed from JCR because of their publication model; rather, it’s because the (current) JIF calculation would be vastly distorted by Clarivate’s proposed method of indexing (only fully peer-reviewed items, not the screened-and-awaiting-peer-review items). An incorrect metric suits none of the parties, particularly given the kind of monstrous upward swing that would be the likely outcome. Okay…perhaps that would suit DORA just fine. Bad data, and the irresponsible publication of bad data, would undermine Clarivate’s proclaimed “gold standard” data indexing.
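(For readers who don’t live with the calculation: the standard two-year JIF is, roughly,

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items published in years } Y{-}1 \text{ and } Y{-}2}
\]

If only the validated-by-peer-review subset of eLife’s output counted as citable items in the denominator, while citations to everything eLife posts still accrued to the numerator, the ratio would swing sharply upward – that’s the distortion I mean. This is the standard public definition in rough form, not Clarivate’s exact implementation.)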
To my mind, the following are the questions for Clarivate (because eLife is not the first or last alternative model of publishing they will have to deal with):
– is it only journals that are indexed in Web of Science?
– is eLife a “journal” given its publishing model?
– is there a journal-entity that can be gleaned from eLife content? (Which is what I think they are trying to do with considering a distinct feed of passed-peer-review content for ESCI.)
– is there a way to fully utilize the content posted in eLife? Maybe items that have completed and passed peer review are in an “eLife Journal” in ESCI, and items that have not completed or have failed peer review are in an “eLife platform” in the preprints index?
– can/should eLife be listed in JCR with other metrics – like the ESCI and AHCI journals were for a couple of years (before they were JIF’ed)?
As an industry, we all need to grapple with the more fundamental questions that are being raised by many disruptions over the past decade or more.
One interesting thing about eLife is that it does not refer to itself as a “journal” anywhere that I can see. Instead, it refers to itself as a “platform.” (From its “about” page: “Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.”) Wikipedia refers to eLife as a “publisher.”
Having a JIF is voluntary in the sense that being indexed in Web of Science is voluntary — as you point out, Marie, you have to apply to get your publication indexed in WoS. But I don’t believe it’s possible to ask for indexing without being given a JIF — though, as eLife has, you can change your publication practices such that you’re left partially “in” but no longer eligible for a JIF.
Rick – Yep, there is no option (to the best and most current of my knowledge) to be indexed in SCIE, SSCI, ESCI, or AHCI without being included in JCR. Other content – like books and proceedings – is included as a source of citations, but not as journals with published metrics. So, the journals they index are included in the journal metrics product.
While eLife does not currently refer to its enterprise as a journal, it used to: https://web.archive.org/web/20120613143904/http://www.elifesciences.org/
It was part of their original plan, an OA journal.
If, because of their changed approach, they are no longer a “journal” – then…as you said, “Publish in whatever way you believe makes the most sense, and don’t worry about whether you have a JIF.”
I wonder if they stopped calling themselves a “journal” when Mike Eisen took over as editor-in-chief — given his clear and public contempt for the “journal” as a concept. Since his firing last year, though, eLife don’t seem to have done anything to reclaim the identity of “journal” for their product.
I agree with just about everything Rick says here, but I think part of the reason (if not justification) for the rather odd complaining by eLife and DORA is the confusion that happens when a commercial solution becomes an industry standard.
When people start to think of something like a commercially-managed index or ranking system as part of the scholarly infrastructure, then they get upset when it makes any change, especially one that might seem exclusionary. There is a similar outcry when an organisation like Microsoft or Adobe does something odd with their ‘industry standard’ products. (I had a frustrating few years in the late 1990s trying to sell a library management system that didn’t use the ‘industry standard’ Oracle database. In the end we spent a lot of money creating an Oracle version, to get onto shortlists, but people always actually implemented the faster and cheaper ISAM version in the end.)
The answer, it seems to me, is not to complain after the fact, but to resist monopolies (however useful) and encourage community collaborations (however clumsy). Or live with the commercial offerings and their (potentially reactionary or commercial) behaviours.
Agreed. And it’s even worse when a commercial solution to one problem gets adopted as an industry standard for solving a very different problem — and then the solution provider gets blamed by the industry for providing a solution that isn’t fit for purpose. The JIF was never intended to measure the quality of content in a journal; it’s intended to measure the impact of the journal’s publications on its discipline by counting citations. But since an article may get cited for either positive or negative reasons — cited either as an example of good or of bad scholarship — the JIF is not, and was never intended as, a metric of quality. It does, arguably, measure “importance” — but the scholarly world (and, to some degree, the governments that support it) has been conflating impact, importance, and quality for generations and probably always will.
Now I’m getting flashbacks to the time when I transitioned from libraries to publishing, and went to a meeting where publishing people were saying that libraries should not be using ISBNs, as they were just a PUBLISHING identifier.
There’s a big difference, though, because an ISBN really does function as a unique identifier for an edition and binding of a particular book. I don’t think anyone believes that the ISBN means anything other than that. The JIF, however, is a measure of citation frequency that is routinely used as if it were a measure of quality. That seems to me like a radically different issue.
Hi Rick,
I am surprised by the penultimate paragraph that suggests eLife “cares about having a JIF” – I don’t remember reading anything where eLife or its staff even remotely suggested that they care about having a JIF. And, as I understand it, having a JIF and getting indexed in WoS are separate things to some extent, as “partial indexing” does not get a journal its IF.
I believe eLife’s and DORA’s “knee-jerk” reaction is not about not getting a JIF but about Clarivate deciding that “specific changes are for the better/worse” in scientific publishing. eLife, led by active academics, is far better positioned to judge what is better for the scientific community than Clarivate (a viewpoint I agree with as well). But it’s also an unfortunate reality that the scientific community depends on such commercial indices to validate their initiatives.
And though not all innovation and disruption ends up being for the good, the current publishing ecosystem is such that no meaningful change can be brought about without disrupting the status quo. In that sense, even if disruption isn’t the end, it is at minimum a necessary means of effecting meaningful change in the system!
I don’t remember reading anything where eLife or its staff even remotely suggested that they care about having a JIF.
As I noted in this post, eLife has publicly said that they “never wanted” a JIF, and I have no reason to doubt their sincerity — although their responses (and DORA’s) since the initial delisting and the final decision suggest a bit more ambivalence about that. If they genuinely don’t care, it’s odd that their responses to losing it have been so strident.
I believe eLife’s and DORA’s “knee-jerk” reaction is not about not getting a JIF but about Clarivate deciding that “specific changes are for the better/worse” in scientific publishing.
Please note that I did not characterize eLife’s and DORA’s responses as “knee-jerk.”
That said, I don’t see any evidence that Clarivate has set itself up as an arbiter of what is “better/worse in scientific publishing.” Clarivate responded to eLife’s changes in publishing practice by changing its indexing of eLife, in accordance with its longstanding and well-known standard.
eLife, led by active academics, is far better positioned to judge what is better for the scientific community than Clarivate.
I agree, and I suspect Clarivate would as well. To my knowledge, Clarivate has never claimed for itself the role of judge over what is better for the scientific community. It offers a service, which the scientific community has chosen (en masse) to use — though, unfortunately, often for purposes other than those it was designed to serve.
That said, I don’t see any evidence that Clarivate has set itself up as an arbiter of what is “better/worse in scientific publishing.”
Isn’t the very act of choosing what is worthy of indexing the act of an “arbiter” who identifies what is of better quality and worse quality? Of course one might try to differentiate between being an arbiter of content and an arbiter of publishing models, but in judging a publishing platform (normally a journal) that has just changed its publishing model as suddenly not worthy of inclusion, aren’t you making a comment about the quality of the content itself?
Publish in whatever way you believe makes the most sense, and don’t worry about whether you have a JIF.
Perhaps I’ve missed this point in the comments or something in the argumentation, but I think it’s important to recognize that there is an additional reason why eLife may desire to be included in Clarivate apart from JIF. Many scholars use Web of Science as their search interface for finding relevant content for their research. Our library recently proposed discontinuing our Web of Science license (in favor of Scopus) due to costs. A vocal cohort of scholars was quite displeased, because this was their research tool of choice. In other words, eLife (and DORA) may be worried less about JIF and more about their research becoming invisible to a certain group of scholars, because it is no longer indexed.
It’s still an option for eLife to have the majority of its articles indexed. Are scholars really clamoring to find “incomplete” and “inadequate” articles in WoS? (Side note: I’m fascinated that when it comes to eLife, people are dismayed the articles won’t be in WoS, but when I wrote about how it can be hard to find Green OA copies of paywalled content, I was assured no one uses anything but Google Scholar and other free databases!)
The point about Green Access is not that it is without imperfections, but that it is able to exist alongside all other options, costs nothing, builds on what one assumes should be best practice (i.e., archiving final pre-publication knowledge products), and provides a handy alternative for those without the funds or subscriptions. And it reminds publishers that there is a low-cost alternative that might one day come in handy when libraries, universities, funders, and others start to get serious about negotiating down the absurdly high prices they are paying publishers. I just don’t know why you maintain this campaign against Green OA, which either on its own or alongside other modes can do no harm and provides a cost-effective alternative.
Isn’t the very act of choosing what is worthy of indexing the act of an “arbiter” who identifies what is of better quality and worse quality?
In this case, it’s the act of an arbiter who identifies which journals publish articles validated by peer review and which ones don’t, and indexes the former. Clearly, Clarivate believes that there’s value in providing an index of peer-reviewed content. That’s not the same thing as setting oneself up as an “arbiter of what is better/worse in scientific publishing” more generally.
I think it’s important to recognize that there is an additional reason why eLife may desire to be included in Clarivate apart from JIF. Many scholars use Web of Science as their search interface for finding relevant content for their research.
Agreed — and there’s also a good reason for Web of Science to decline to index research that has not been validated by peer review. It seems to me that Clarivate’s approach in this case represents a reasonable compromise, though apparently both DORA and eLife disagree. (What would not be reasonable, obviously, would be for eLife to expect Clarivate to adopt a whole new, lower, standard for inclusion based on eLife’s desire to publish content not validated by peer review.)
I agree that publishers want as many access points as possible for their content, and that WoS data feeds into who knows how many Clarivate products, services, custom analyses, rankings, etc.
Tangentially, regarding your vocal WoS users, were COUNTER stats like cost per query shared? Cost per query is often a big eye-opener for users. It can be several times the cost (not just pennies, but many dollars per query) compared to full-text article/chapter views. That’s often the case, at least, for libraries that don’t give huge real estate to WoS as a ‘featured’ resource in the right rail of every A-Z page, a ‘best bet’ in nearly every libguide, and market it in trainings as a search starting point. A library that does all of that, on the other hand, shouldn’t be surprised that it has cultivated a loyal, vocal user base for shockingly expensive, curated metadata and abstracts ($300,000! https://doi.org/10.7191/jeslib.959).
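To make the arithmetic concrete (hypothetical numbers, not figures from any particular library):

\[
\text{cost per query} = \frac{\$300{,}000 \text{ annual license}}{50{,}000 \text{ COUNTER-reported searches}} = \$6 \text{ per search},
\]

versus well under a dollar per view for a full-text package of comparable cost that logs several hundred thousand article or chapter views.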
I guess I am a bit of a traditionalist, but I do think that properly managed and supervised peer review is absolutely core to quality science and scholarship. There is an admirable anti-elitist and egalitarian streak of idealism in the entire Open Access movement. It reminds me of the early days of the internet. But the model has been captured by some bad actors who are making a fortune by subverting the other key criteria of quality science and scholarship (beyond access) such as adequate peer review, independent editorial control, and dealing with paper mills, citation cartels, and authorships for sale. After these developments I have a new admiration for more traditional publishing models (including diamond and green access), even while I support the idealism of greater access and a moderation of the financial margins enjoyed by the legacy publishers.
Thanks for this Rick. The decision by Clarivate reinforces our concerns around the use of the Impact Factor as a proxy for quality. I’ll outline our primary scientific arguments with Clarivate’s policy elsewhere, but I did want to correct two things from this article and the ensuing comments, here. First, Clarivate’s policy regarding partial indexing is not longstanding – it was written in direct response to our announcement to change our model. Secondly, we did not apply to be indexed by Web of Science; we were told that we were being added in 2013. Despite our objections, they began scraping our website for content in an inaccurate and inconsistent manner, so we chose to send a feed directly to them to ensure consistency. We did ask to not be given an Impact Factor, but this request was declined. As has been noted, our frustrations are less about losing the Impact Factor – rather, our concerns are around Clarivate overreaching in deciding how research should be reviewed and published.
First, Clarivate’s policy regarding partial indexing is not longstanding – it was written in direct response to our announcement to change our model.
Thanks for that correction, Damian. Obviously, what is longstanding is Clarivate’s policy not to index content that is not validated by peer review — so this hybrid approach is an attempt to meet eLife halfway.
Secondly, we did not apply to be indexed by Web of Science; we were told that we were being added in 2013.
So in 2013, was Clarivate’s selection process different from what it is today, or did they somehow bypass their normal selection process for eLife?
As has been noted, our frustrations are less about losing the Impact Factor – rather, our concerns are around Clarivate overreaching in deciding how research should be reviewed and published.
By “overreaching in deciding how research should be reviewed and published,” do you mean “declining to index content that has not been validated by peer review”? Or is Clarivate doing something else that you object to?
It’s the fact that Clarivate are deciding the definition of ‘validation’. All our papers go through rigorous expert review, and the outputs of these reviews are published, along with an editorial decision (in the form of an Assessment), as part of the paper itself. To us that is far more validated than a paper that just appears in a journal with no indication that it has had any expert review at all.
It sounds to me like the root of your dispute, then, is that you see simply being subjected to a publicly documented peer-review process as “validation,” regardless of the reviewers’ ultimate finding. In Clarivate’s view, however, it seems that “validation” doesn’t consist simply in being reviewed — it consists in being reviewed and found at least “adequate” in terms of evidence and argumentation. The latter seems to me like the more reasonable position; calling a study “valid” clearly implies not just that experts looked at it, but that they looked at it and found it to have met a threshold level of scientific validity.
In that case why does much invalid research keep getting published? Why is there a reproducibility crisis? It’s because journals are trying to insist that everything they publish has passed some kind of threshold of validity, while giving no information about what those thresholds are. Everyone knows that this is not how science actually works, and our model is attempting to solve this huge problem by transparently reporting the strengths and weaknesses of each paper, and reporting the editorial decision in a slightly different way (not accept or reject, but rather through a series of terms that describe strength of evidence and significance). It would be perfectly easy for indexers to capture this nuance (we even include our terms in the paper’s metadata), so it is disappointing that Clarivate have chosen not to do so.
In that case why does much invalid research keep getting published? Why is there a reproducibility crisis?
Personally, I would say it’s because no system is perfect and not everyone is equally competent or honest. My own view would be that rather than abandoning traditional peer review, we should be doing it more competently and honestly. (I mean, people keep dying of cancer. Does that mean cancer treatment doesn’t work?)
But the thing is, you guys at eLife are more than welcome to do peer review in whatever way makes the most sense to you. Again, godspeed. What doesn’t seem reasonable to me is expecting Clarivate to adopt a new standard of indexing based on your desire to publish content that peer reviewers have deemed scientifically inadequate.
In all the flurry over the Clarivate/eLife JIF exclusion, I’ve yet to see an explanation of why partial indexing of articles precludes a JIF score. Lots of journals with JIFs, including high flyers such as Science, Nature, and JAMA, publish non-peer-reviewed articles that are excluded from the JIF calculations, such as news, editorials, or book reviews, alongside the classic peer-reviewed articles that are counted. And F1000 has a publish-first, review-later model, which seems a lot like the model that got eLife into trouble at Clarivate. Was a root of this dispute that eLife wasn’t providing a feed that distinguished between articles that had or hadn’t passed peer review? Or Clarivate’s algorithms not being able to parse that? This seems fixable, without the need for shouting.
I can’t speak for Clarivate in this regard — hopefully a representative will chime in here with an authoritative response. But I agree that this situation seems fixable without the need for shouting, particularly if (as Damian insists above) what has been taken away from eLife is something they never wanted in the first place and asked never to be given.
There is an integrity crisis in science and scholarship journal publishing on account of the way “bad actors” have grasped the opportunity of the open access movement to establish for-profit, high-volume, low-quality (often fraudulent) companies monetising the “publish or perish” dynamic in tertiary and other institutions, often in emerging economies. It has got to the stage where previously generous funders like Gates now refuse to pay APCs, such has been the misuse of the system. This is the other side of the OA and nuanced-peer-review debate.