Transitioning to a model that counts papers by their date of online publication, rather than their print publication date, will inflate the next round of Journal Impact Factor (JIF) scores.

This is the central take-away from a January 28th press release and full report [1] by Clarivate, the metrics company that publishes the Journal Citation Reports. The next reports are expected in June 2021.


The move to a counting model based on the date of electronic publication has been long-expected. Nevertheless, the new model may lead to unexpected swings in JIF scores and journal rankings because Clarivate lacks online publication data for about half of the journals it indexes. More importantly, the disparity is determined by publisher. At present, Clarivate has e-pub dates for Springer-Nature journals, but not for Elsevier journals, for example.

Why changing the date changes the calculation

The Journal Impact Factor (JIF) calculation is based on the calendar year of publication. This is not an issue when a paper is e-published, say, in March 2020 and assigned to the journal’s August 2020 issue. It does pose a problem, however, when a paper is e-published in one year (e.g., on 16 November 2019) but assigned to an issue in another year (e.g., January 2020). The old model counts this as a 2020 publication, while the new model counts it as a 2019 publication. Journals that employ a continuous online publication model without issue designation are not affected.
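To make the counting difference concrete, here is a minimal sketch in Python (the field names are my own, purely illustrative) of how the same paper is assigned to different JIF years under the two models:

```python
from datetime import date

def publication_year(epub_date, issue_date, model):
    """Return the year a paper is counted under, for a given counting model.

    model: "print"  -- the old JIF model, which counts by issue/print year
           "online" -- the new model, which counts by the Early Access (e-pub) year
    """
    return epub_date.year if model == "online" else issue_date.year

# The example from the text: e-published 16 November 2019, assigned to a January 2020 issue.
paper = {"epub": date(2019, 11, 16), "issue": date(2020, 1, 1)}

print(publication_year(paper["epub"], paper["issue"], "print"))   # 2020 (old model)
print(publication_year(paper["epub"], paper["issue"], "online"))  # 2019 (new model)
```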

The authors of the report, Marie McVeigh, Head of Editorial Integrity, and Dr. Nandita Quaderi, Editor-in-Chief for the Web of Science, consider the effects of adopting two new counting models: a Retroactive model, in which Early Access dates already in the database are applied to content from prior years; and a Prospective model, in which Early Access dates are counted only from 2020 onward. After comparing the two models, they conclude:

“We have chosen to implement the prospective model as the retroactive model would create two populations of journals that are differentially affected based only on when Clarivate began accepting their EA [Early Access] content, not on any change in the citation or publication dynamics of the journal itself. Imposing a counting disadvantage on a subset of journals while providing a citation benefit to all would be a poor approach.”

In other words, given their lack of comprehensive e-pub data for all journals, the Retroactive model is a very bad idea. However, the report glosses over any potential bias that will arise when they implement the Prospective model, concluding that their new choice would generally lead to better performance numbers for all journals:

“The prospective model will increase the net number of cited references in the JCR 2020 data, leading to a broadly distributed increase in JIF numerators.”

Net neutral?

I asked McVeigh and Quaderi about potential bias in calculating the next round of JIF scores when just half of the journals they index include any Early Access data. And, if there appears to be bias, how will it affect the ranking of journals?

At the time of this writing, I have not received a response, so I decided to do my own back-of-the-envelope calculations based on three journal scenarios; a sketch of the arithmetic appears after the list. Those who wish to modify my assumptions with their own journal data can download the spreadsheet and report their findings in the Comments section below.

Three Journal Scenarios:

  1. Multidisciplinary Medical Journal. This high impact (JIF=25) journal publishes 500 papers per year, with an average of 40 references per paper, 5% of which are self-citations. The lag time between EA publication and print designation is just one month. Expected change: Under Clarivate’s new model, the JIF will rise by just one-third of one percent, or by 0.083 points, to JIF 25.083.
  2. Subspecialty Science Journal. This moderately performing (JIF=4) journal publishes 250 papers/yr., 40 references/paper, 20% self-citation rate, and a 3 mo. lag time. Expected change: Its JIF score will rise by 25% (by a full point) to JIF 5.000.
  3. Regional/Niche Journal. This journal (JIF=2) publishes 50 papers/yr. with an average of 40 references per paper, half of which are self-citations. Lag time is 6 mo. Expected change: Its JIF score rises by 250% (by 5 full points) to 7.000.
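For readers who would rather check the numbers in code than in the spreadsheet, here is a minimal sketch of the arithmetic behind the three scenarios. It assumes, as a simplification, that the only change is the extra journal self-citations attached to papers falling inside the e-publication-to-print lag window, spread over the standard two-year denominator; it is an illustration of my assumptions, not Clarivate’s method:

```python
def jif_change(papers_per_year, refs_per_paper, self_cite_rate, lag_months, current_jif):
    """Back-of-the-envelope estimate of the JIF shift under the new counting model.

    Assumption: the extra numerator citations are the journal self-citations
    carried by papers sitting in the e-pub-to-print lag window, while the
    two-year denominator (2 * papers_per_year) stays the same.
    """
    extra_citations = papers_per_year * (lag_months / 12) * refs_per_paper * self_cite_rate
    delta = extra_citations / (2 * papers_per_year)
    return current_jif + delta, delta

scenarios = [
    # (name, papers/yr, refs/paper, self-citation rate, lag in months, current JIF)
    ("Multidisciplinary Medical", 500, 40, 0.05, 1, 25.0),
    ("Subspecialty Science", 250, 40, 0.20, 3, 4.0),
    ("Regional/Niche", 50, 40, 0.50, 6, 2.0),
]

for name, papers, refs, rate, lag, jif in scenarios:
    new_jif, delta = jif_change(papers, refs, rate, lag, jif)
    print(f"{name}: JIF {jif} -> {new_jif:.3f} (+{delta:.3f})")

# Multidisciplinary Medical: JIF 25.0 -> 25.083 (+0.083)
# Subspecialty Science: JIF 4.0 -> 5.000 (+1.000)
# Regional/Niche: JIF 2.0 -> 7.000 (+5.000)
```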

In essence, journals with high levels of self-citation and long lag times between e-publication and print designation are particularly sensitive to massive shifts in their next JIF score. However, even journals with relatively low self-citation rates and short publication lags are affected enough to shift their ranking above competitors who do not have any e-publication data in the Web of Science. And, given that indexing of e-publication date is publisher-dependent, we are likely to see some big winners and big losers if Clarivate continues with their new counting model.


Clarivate prides itself on being a publisher-neutral source of data and metrics. This is one of the arguments publishers often use for siding with Clarivate over competing indexes like Scopus, which is owned by Elsevier. However, by not acknowledging bias, Clarivate seems to be choosing sides by implementing a game model that puts half of its journals at a distinct disadvantage.

To me, it’s puzzling why Clarivate appears to be rushing this new JIF model with piecemeal data. By not waiting until more publishers receive the same level of indexing, Clarivate puts its reputation in the scientific and publishing communities at risk. The value offered by the Impact Factor will be reduced, as it will not accurately reflect relative performance for at least the next two years as different sets of journals are subject to different measurement criteria. We’ve waited years for Clarivate to catch up with the realities of scientific publication. We can wait one more.

Update: [1] On Feb 1, the original Clarivate report was moved to a different URL. Marie McVeigh and Nandita Quaderi were removed as contacts and replaced with Editorial.Relations@clarivate.com 

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

14 Thoughts on "Changing Journal Impact Factor Rules Creates Unfair Playing Field For Some"

It’s pointless tinkering with the deeply failed model that JIF has always presented. Why the 2-year citation window, for example? ‘Impact’ should account for longevity rather than transience – ‘Deep Impact’ if you like. Then there’s the ‘borrowed glory’ enjoyed by papers published in a high JIF journal without themselves garnering similar citations. Then the problem of simple size-of-the-field effects over other measures of ‘quality’. The list of problems associated with indices based on the journal rather than the papers themselves, or perhaps their authors, is too long to ignore. More distracting chatter about JIF is just that. Instead, if you have a platform, campaign for proper assessment and avoid shallow numerical ‘descriptors’ that – as JIF has for years – take on a spurious and unjustified life of their own. Once they’re put on a spreadsheet, such numbers assume a reality that defies logic.

I have to disagree (respectfully!). As I see it, it’s a matter of sheer practicality in situations that your rank-and-file academic does not have to worry about, but others do. I have been on many research grant assessing committees, or been a reviewer for same, and also on promotion applications (or reviewer of same), and as an assessor you are faced with hundreds (even thousands) of cited articles that are brought to bear by applicants to burnish their credentials and strengthen their case. It is simply impractical, indeed impossible, to evaluate such a volume of citations, across a range of journals and disciplines. There are ways around this. For example, in one system I am aware of, research productivity/quality is based not just on the full spectrum of output but the applicant is required to provide their “best four”. Yes, you could read those. Also, in many applications people provide citation counts on individual outputs, but it is also helpful to know whether the journal concerned is highly cited (or potentially predatory). So, yes, there are shortcomings with JIF, but in many academic/administrative tasks something like it is useful – indeed it makes such tasks humanly tractable. If you don’t like JIF, I feel you should come up with a respectable alternative. For example, some have suggested a Google Scholar-based indicator, which would be much more comprehensive, and over a longer time period (say, 5 years). But it still would be a quantitative measure, I am afraid. I don’t think you can get away from it, unless you are in some tiny disciplinary or scholarly area where everybody knows the game – but then that has its problems too!

This is exactly how the world works. JIF is important because it saves people time. Time is the scarce resource.

I’m wondering if the change is a reaction to Scopus providing monthly updates to their Citescore metric. It’s not easy to do this if the publication date for articles is subject to change. If the dates are fixed, one could easily (or at least more easily) provide more frequent updates.

Phil – I wonder if your back-of-the-envelope calculation (focussing on changes in citations – i.e. changes to the NUMERATOR of the JIF equation) is missing the main point here – namely the effect of moving from print publication date to online publication date on the DENOMINATOR of the JIF equation.
A few years ago I alerted Marie McVeigh at Clarivate to a new form of ‘gaming’ by the editors of certain journals to artificially inflate their JIF. This involves putting accepted papers in an online queue for one or even two years before they are eventually published in print. For a journal with, say, a 2-year online queue (and yes, there is at least one in a ‘top’ journal close to the area in which I work!), a paper that appears online (i.e. made available as ‘Early access’) accumulates citations for the journal (adding to the numerator) for the first two years but does not ‘count’ in the denominator. Only when it is finally published (in print) in the 3rd year does it enter the denominator, while the citations it earns over the next two years continue to count in the numerator. Moreover, as a ‘Year 3’ paper, it is now earning citations at a far more rapid rate than a ‘Year 1’ paper. Do the maths and a back-of-the-envelope calculation suggests the effect of a 2-year queue could be to more than double the JIF. As I pointed out to Marie, this means that JIF figures are now not worth the paper they’re written on, let alone being accurate to 3 decimal places as Clarivate always claims! I also noted that the on-line queue stratagem seemingly broke no rules so nobody could be accused of any misconduct.
I assume Clarivate have now (somewhat belatedly) introduced this change to overcome the effect of such ‘gaming’ by unscrupulous journal editors. If so, it is to be greatly welcomed. It should reveal which journals have been playing the online queue game to boost their JIF – “It’s only when the tide goes out that you learn who’s been swimming naked.”
PS Further details can be found in B.R. Martin, 2016, ‘Editors’ JIF-Boosting Stratagems – Which Are Legitimate and Which Not?’, Research Policy, 45, pp. 1-7.
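For anyone who wants to reproduce the maths, here is a minimal steady-state sketch of the argument; the citation-age curve is purely hypothetical, and it assumes citations to online-queued papers are credited to the journal’s numerator while those papers enter the denominator only once printed:

```python
# cites_by_age[a] = average citations a paper receives a calendar years after
# first appearing online (a = 0 is the year it goes online). Hypothetical curve
# with slow early uptake and a peak around years 3-4.
cites_by_age = [0.5, 2.0, 4.0, 5.0, 4.0, 3.0]

def steady_state_jif(queue_years, cites_by_age):
    """Per-paper JIF under a print-based count with an online queue of
    `queue_years` before print publication, assuming equal-sized yearly cohorts.
    """
    # Papers printed in Y-1 and Y-2 went online queue_years earlier, so they are
    # older and cite-richer in the JIF year than their print year suggests.
    printed = cites_by_age[1 + queue_years] + cites_by_age[2 + queue_years]
    # Papers posted online in Y-1 ... Y-queue_years are not yet printed but
    # their citations are still credited to the journal's numerator.
    queued = sum(cites_by_age[a] for a in range(1, queue_years + 1))
    return (printed + queued) / 2.0

print(steady_state_jif(0, cites_by_age))  # 3.0  (no queue)
print(steady_state_jif(2, cites_by_age))  # 7.5  (2-year queue: 2.5x, i.e. more than double)
```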

100% agree. It is what I was thinking when reading the post. The issue then is whether this “new” JIF will change how some journals publish. It seems that online open access journals may “benefit” from this new calculation of the JIF.

This would be a great chance for Clarivate to include an “uncertainty” (for things that they control) in the JIF and this post shows how it might be calculated in part. Currently JIF is reported to three significant digits past the decimal (as if!). Obviously variation/uncertainty in captured publication date (preprint availability affects citations also and there’s information on that) as well as the distribution of papers published in a journal over a year and effects of that are something that Clarivate could “easily” estimate; this would be a measure of some of the systematic errors that would inform a lot of silly comparisons related to JIF. It would not cover disciplinary or content type (reviews vs. other content) variability, which would only add to the uncertainty. This would make their “rankings” much less relevant though; thus, Phil, I disagree on the purported “value,” which is tied to its “fake-ness.” One could do the same for other indices (5-yr, etc.). Imagine a scientific organization and number being, well, scientific.

CWTS Journal Indicators includes a “stability interval” around each journal metric that attempts to measure the variability of the citation data used for each score. They use the term “reliability” in their methods explanation, but I think it may mislead some to believe that the data are not accurate. For small journals, a single highly-cited paper can radically boost a journal-level metric.

https://www.journalindicators.com/indicators
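An interval of this kind can be approximated by resampling a journal’s papers with replacement (a bootstrap); the sketch below uses hypothetical citation counts and is only meant to show why a single highly-cited paper in a small journal produces a wide interval:

```python
import random

def bootstrap_stability_interval(citations_per_paper, n_samples=10_000, coverage=0.95):
    """Approximate a stability interval around a journal's mean citations per
    paper by resampling its papers with replacement (a simple bootstrap).
    """
    n = len(citations_per_paper)
    means = sorted(
        sum(random.choices(citations_per_paper, k=n)) / n
        for _ in range(n_samples)
    )
    lo = means[int((1 - coverage) / 2 * n_samples)]
    hi = means[int((1 + coverage) / 2 * n_samples) - 1]
    return lo, hi

# A small journal where one highly cited paper dominates the score (hypothetical data).
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]
print(sum(citations) / len(citations))          # point estimate: 14.1
print(bootstrap_stability_interval(citations))  # wide interval, e.g. roughly (2, 38); varies by run
```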

It seems like this argument would have mattered more 10-15 years ago. At this late stage, who really cares about JIFs anymore, other than publishers and editors who tout them as markers of “quality” and university bean-counters who use them as a lazy shortcut to judge faculty output? I just don’t think that librarians, who make the money decisions, put much stock in this silly metric anymore. We spend more time trying to convince authors and deans to ignore them. Impact is based on articles, not journals. It defies belief why so many people still don’t understand that.

Chapeau!
I really don’t mind if a useless metric is changing a bit. Imagine someone reveals that the Scottish ell was 95 cm instead of 94 cm …

The only problem is that publishers, editors and bean-counters still have more influence than librarians.

The tree analogy would work more like this: Measure Elms, Maples, and Pines using the old Imperial model but Ashes, Beeches, and Cedars using the metric system; then express both in numbers with no units.

We all know that journal metrics, whether JIF or CiteScore, are flawed. Maybe we should be thinking about the whole changing concept of what a journal is. For traditional journals with monthly or quarterly publication cycles, recognizing early access publications and their citations is finally taking advantage of the capabilities of electronic publishing.

We need a new metric that is controlled by academia and universities, not private for-profit companies. We need initiatives like Plan S for this purpose.

In the days of old, JIF had a huge influence on what titles libraries would subscribe to. Then came the aggregators and libraries could no longer choose individual journals without paying a penalty. Then, about the same time, came the internet and libraries felt less responsible to keep up ordering costly bundles of journals – let the patrons request the articles by email to the authors! Then came the archiving of many pre-digital issues of journals and their availability in PDF format, and that spelt the end of people actually going to the library other than to say hello to the librarian. Now we are seeing the emergence of something foundationally different: a continuous publication model, the disappearance of consecutive page numbering within a “volume” or “issue,” the use of “eLocators,” and the elimination of a print edition (other than “print on demand,” or the creation of market-driven weekly or monthly editions such as what we see with The BMJ).

So who still cares about JIF? Investors who don’t know any better. Industry executives who don’t know any better. Promotions committee members who don’t know any better. And, unfortunately, faculty who don’t know any better. Readers don’t care, as long as they find what they are looking for. PubMed is the great equalizer in making the world aware of content (Impact Factor? Shmimpact Factor! https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2921314/).

On the academic side, there is a greater understanding of the importance of trying to assess an individual’s productivity using article-level and author-level metrics. We should work on making those better.
