Counting publications by their date of online publication, instead of their date of print publication, will inflate the next round of Journal Impact Factor (JIF) scores.
This is the central takeaway from a January 28th press release and full report by Clarivate, the metrics company that publishes the Journal Citation Reports. The next reports are expected in June 2021.
The move to a counting model based on the date of electronic publication has been long expected. Nevertheless, the new model may lead to unexpected swings in JIF scores and journal rankings because Clarivate lacks online publication data for about half of the journals it indexes. More importantly, the disparity falls along publisher lines: at present, Clarivate has e-pub dates for Springer Nature journals, for example, but not for Elsevier journals.
Why changing the date changes the calculation
The JIF calculation is based on the calendar year of publication. This is not an issue when a paper is e-published in, say, March 2020 and assigned to the journal's August 2020 issue. It does pose a problem, however, when a paper is e-published in one year (e.g., on 16 November 2019) but assigned to an issue in another (e.g., January 2020). The old model counts this as a 2020 publication; the new model counts it as a 2019 publication. Journals that employ a continuous online publication model without issue designations are not affected.
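The two counting rules can be sketched in a few lines. The function name and structure here are my own illustration, not Clarivate's implementation:

```python
from datetime import date

def counting_year(epub_date: date, issue_date: date, model: str) -> int:
    """Return the calendar year a paper counts toward in the JIF calculation.

    Under the old (print) model a paper counts in its issue year;
    under the new (Early Access) model it counts in its e-pub year.
    """
    return epub_date.year if model == "early_access" else issue_date.year

# The example from the text: e-published 16 November 2019,
# assigned to the January 2020 issue.
epub, issue = date(2019, 11, 16), date(2020, 1, 1)
print(counting_year(epub, issue, "print"))         # 2020 under the old model
print(counting_year(epub, issue, "early_access"))  # 2019 under the new model
```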
The authors of the report, Marie McVeigh, Head of Editorial Integrity, and Dr. Nandita Quaderi, Editor-in-Chief for the Web of Science, consider the effects of adopting two new counting models: a Retroactive model, in which e-pub dates would also be applied to previously indexed content; and a Prospective model, in which counting by e-pub date starts from 2020 onward. After careful comparison of the two models, they conclude:
“We have chosen to implement the prospective model as the retroactive model would create two populations of journals that are differentially affected based only on when Clarivate began accepting their EA [Early Access] content, not on any change in the citation or publication dynamics of the journal itself. Imposing a counting disadvantage on a subset of journals while providing a citation benefit to all would be a poor approach.”
In other words, given their lack of comprehensive e-pub data for all journals, the Retroactive model is a very bad idea. However, the report glosses over any potential bias that will arise when they implement the Prospective model, concluding that their new choice would generally lead to better performance numbers for all journals:
“The prospective model will increase the net number of cited references in the JCR 2020 data, leading to a broadly distributed increase in JIF numerators.”
I asked McVeigh and Quaderi about potential bias in calculating the next round of JIF scores when just half of the journals they index include any Early Access data. And, if there appears to be bias, how will it affect the ranking of journals?
At the time of this writing, I have not received a response, so I decided to do my own back-of-the-envelope calculations based on three journal scenarios. Those who wish to modify my assumptions with their own journal data can download the spreadsheet and report their findings in the Comments section below.
Three Journal Scenarios:
- Multidisciplinary Medical Journal. This high impact (JIF=25) journal publishes 500 papers per year, with an average of 40 references per paper, 5% of which are self-citations. The lag time between EA publication and print designation is just one month. Expected change: Under Clarivate’s new model, the JIF will rise by just one-third of one percent, or by 0.083 points, to JIF 25.083.
- Subspecialty Science Journal. This moderately performing (JIF=4) journal publishes 250 papers/yr., 40 references/paper, 20% self-citation rate, and a 3 mo. lag time. Expected change: Its JIF score will rise by 25% (by a full point) to JIF 5.000.
- Regional/Niche Journal. This journal (JIF=2) publishes 50 papers/yr. with an average of 40 references per paper, half of which are self-citations. Lag time is 6 mo. Expected change: Its JIF score rises by 250% (by 5 full points) to 7.000.
In essence, journals with high levels of self-citation and long lag times between e-publication and print designation are particularly sensitive to massive shifts in their next JIF score. However, even journals with relatively low self-citation rates and short publication lags are affected enough to shift their ranking above competitors that have no e-publication data in the Web of Science. And, given that indexing of e-publication dates is publisher-dependent, we are likely to see some big winners and big losers if Clarivate continues with its new counting model.
Clarivate prides itself on being a publisher-neutral source of data and metrics. This is one of the arguments publishers often use for siding with Clarivate over competing indexes like Scopus, which is owned by Elsevier. However, by not acknowledging this bias, Clarivate seems to be choosing sides, implementing a counting model that puts half of its journals at a distinct disadvantage.
To me, it’s puzzling why Clarivate appears to be rushing this new JIF model with piecemeal data. By not waiting until more publishers receive the same level of indexing, Clarivate puts its reputation in the scientific and publishing communities at risk. The value offered by the Impact Factor will be reduced, as it will not accurately reflect relative performance for at least the next two years as different sets of journals are subject to different measurement criteria. We’ve waited years for Clarivate to catch up with the realities of scientific publication. We can wait one more.
Update: On Feb 1, the original Clarivate report was moved to a different URL. Marie McVeigh and Nandita Quaderi were removed as contacts and replaced with Editorial.Relations@clarivate.com.