On May 20, Clarivate Analytics announced that it had created a new metric for comparing the relative citation performance of journals across different disciplines. It calls this metric the Journal Citation Indicator (JCI).

Not a very descriptive name, is it?

According to Clarivate’s blog post, written by Martin Szomszor, Director of the newly revived Institute for Scientific Information:

The Journal Citation Indicator provides a single journal-level metric that can be easily interpreted and compared across disciplines.

The JCI has several benefits when compared against the standard Journal Impact Factor (JIF): it is based on a journal’s citation performance across three full years of citation data rather than a single-year snapshot of performance over the previous two years. In addition, Clarivate promises to provide a JCI score for all journals in its Core Collection, even those journals that do not currently receive a JIF score.

The JCI also avoids the numerator-denominator problem of the JIF, where ALL citations to a journal are counted in the numerator, but only “citable items” (Articles and Reviews) are counted in the denominator. The JCI focuses entirely on Articles and Reviews.
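
To see why the mismatch matters, here is a toy calculation with invented numbers (a minimal sketch, not real journal data, and it leaves out the field normalization that the JCI adds on top): counting every citation in the numerator while only Articles and Reviews appear in the denominator inflates the ratio compared with restricting both sides to the same document types.

```python
# Toy illustration of the JIF numerator/denominator mismatch.
# All numbers are invented for demonstration purposes.

citations_received = {
    "articles_and_reviews": 900,    # citations to "citable items"
    "editorials_and_letters": 100,  # citations to front matter, etc.
}
items_published = {
    "articles_and_reviews": 300,
    "editorials_and_letters": 50,
}

# JIF-style ratio: ALL citations in the numerator, citable items only in the denominator.
jif_style = sum(citations_received.values()) / items_published["articles_and_reviews"]

# Consistent counting (the JCI's approach): both sides restricted to Articles and Reviews.
consistent = citations_received["articles_and_reviews"] / items_published["articles_and_reviews"]

print(f"All citations / citable items: {jif_style:.2f}")   # 3.33
print(f"Articles & Reviews only:       {consistent:.2f}")  # 3.00
```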

Finally, like a good indicator, the JCI is easy to interpret. Average performance is set to 1.0, so a journal that receives a JCI score of 2.5 performed two-and-a-half times better than average, while a journal with a score of 0.5 performed only half as well.
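
As a rough sketch of how a score centered on 1.0 can be produced (my simplified reading of the general approach to field normalization, not Clarivate’s published formula), each paper’s citation count is divided by the average expected for comparable papers, and the journal’s score is the mean of those ratios:

```python
# Simplified sketch of a field-normalized journal score, where 1.0 = average.
# An illustration of the general approach, not Clarivate's exact formula.

def normalized_score(observed, expected):
    """Mean ratio of observed citations to the average citations expected
    for comparable papers; 1.0 means exactly average performance."""
    ratios = [o / e for o, e in zip(observed, expected)]
    return sum(ratios) / len(ratios)

# Hypothetical journal with three papers, each compared to its own baseline.
observed_citations = [10, 4, 1]
expected_citations = [4.0, 4.0, 2.0]

print(round(normalized_score(observed_citations, expected_citations), 2))  # 1.33
```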

To me, JCI’s biggest weakness is Clarivate’s bold claim that it achieved normalization across disciplines.

The concept and applied mathematics for comparing the performance of journals across disciplines were described more than ten years ago by Leiden University researcher Henk Moed as the Source Normalized Impact per Paper (SNIP) and implemented shortly thereafter in Elsevier’s Scopus. SNIP is also featured as a prominent metric on Journal Indicators, a free service from Leiden University.

Field normalization is a revolutionary idea that struggles when implemented in a legacy system

SNIP was pretty revolutionary for its time, as it abandoned the old model of manually classifying journals into established subject categories. Under SNIP, a journal’s subject field is defined as the set of papers citing that journal. This approach allows for a fluid definition of field that changes over time as science progresses and journals evolve. In contrast, the JCI still relies on the subject classification assigned to a journal when it was first added to the Web of Science database. Clarivate currently uses 235 subject categories.
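
To make the contrast concrete, here is a deliberately simplified sketch of the two ways of defining a journal’s field: a SNIP-style definition derived from whichever papers currently cite the journal, versus a fixed category list assigned when the journal entered the database. The data structures are hypothetical stand-ins, not either company’s actual implementation.

```python
# Simplified contrast between a citing-side field definition (SNIP-style)
# and a fixed subject-category assignment (JCI-style).
# The data structures below are hypothetical stand-ins.

def snip_style_field(journal, citation_links):
    """The field is the set of papers citing the journal, so it shifts
    automatically as citation patterns change over time."""
    return {citing for citing, cited in citation_links if cited == journal}

def jci_style_field(journal, assigned_categories):
    """The field is the fixed category list assigned when the journal
    was added to the database."""
    return assigned_categories[journal]

citation_links = [
    ("glaciology_paper_2021", "Nature Climate Change"),
    ("energy_policy_paper_2022", "Nature Climate Change"),
]
assigned_categories = {
    "Nature Climate Change": ["Meteorology & Atmospheric Sciences",
                              "Environmental Sciences"],
}

print(snip_style_field("Nature Climate Change", citation_links))
print(jci_style_field("Nature Climate Change", assigned_categories))
```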

Some of Clarivate’s subject categories are extremely large and diverse, while others are very small and narrow. For example, Economics contains 373 journals, covering broad multidisciplinary titles to regional and historical reviews, while Engineering gets its own multidisciplinary category plus 13 unique sub-disciplines, from Electrical & Electronic Engineering (266 titles) to Marine Engineering (14 titles). Andrology (a medical specialty that focuses on the reproductive issues of men) gets its own field that includes just 8 titles.

Given the idiosyncratic history and development of Clarivate’s journal classification system, about one-third of journals are assigned to more than one category. Nature Climate Change is classified under Meteorology & Atmospheric Sciences AND Environmental Sciences; Molecular Ecology is classified into three disciplines (Evolutionary Biology, Ecology, Biochemistry & Molecular Biology); and the journal Oncogene is classified under four disciplines (Oncology, Genetics & Heredity, Biochemistry & Molecular Biology, and Cell Biology). On its own, multi-field classification makes a lot of sense; however, it will create confusion when JCI scores are first released.

The logical solution would be to create multiple JCI scores for these multi-field journals; however, this would contradict Clarivate’s goal of creating a single JCI score for each journal. Clarivate’s solution, buried in small type at the bottom of page two of their documentation, is to compare these multi-classified journals to “the mean normalized citation impact across all categories assigned.” So, the JCI for Nature Climate Change will be calculated using the journals classified under Meteorology & Atmospheric Sciences PLUS the journals classified under Environmental Sciences. Oncogene’s score will be based on journals in four fields. For the JCI to make much sense, a user would need to know the disciplinary classification structure for each journal.
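
A rough sketch of how I read that sentence in the documentation: a paper’s normalized impact is computed against each category assigned to its journal and then averaged, so the resulting score depends on every field the journal is cross-listed in. The baseline citation rates below are invented purely for illustration.

```python
# Rough sketch of the multi-category case, on the reading that a paper's
# normalized impact is averaged across every category assigned to its journal.
# The baseline citation rates are invented for illustration only.

expected_citations = {  # hypothetical mean citations per comparable paper
    "Oncology": 8.0,
    "Genetics & Heredity": 6.0,
    "Biochemistry & Molecular Biology": 5.0,
    "Cell Biology": 5.0,
}

def multi_category_impact(paper_citations, categories):
    """Mean of the paper's normalized impact across all assigned categories."""
    ratios = [paper_citations / expected_citations[c] for c in categories]
    return sum(ratios) / len(ratios)

# A paper with 8 citations looks exactly average against Oncology alone,
# but above average once the three lower-cited categories are averaged in.
print(round(8 / expected_citations["Oncology"], 2))                  # 1.0
print(round(multi_category_impact(8, list(expected_citations)), 2))  # 1.38
```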

As a result, JCI scores may conflict with common sense. For example, the journal Oncogene may receive a much better JCI score than higher-performing journals in Oncology because its JCI will be calculated using three additional fields that perform worse than Oncology. Similarly, a marine sciences journal may perform much worse than other Oceanography journals if it is cross-listed in Environmental Sciences, a higher-performing category.

The JCI is also problematic in that it is black-boxed. A journal editor or publisher who wanted to validate a JCI score would need a complete three-year citation record for every paper in every journal within a discipline PLUS every paper from the fields in which the journal was cross-listed. While the raw citation record is transparent, the entire dataset and methodology for recreating the metric are essentially off limits for most users. This transparency and replicability problem is not limited to the JCI, but extends to most citation metrics, like SNIP, Eigenfactor, Normalized Eigenfactor, and Article Influence Score, among others.

The Veg-O-Matic of the metrics world?

The marketing materials for the JCI contain apparent contradictions about what the metric can do. It promises “a single journal-level metric that can be easily interpreted and compared across disciplines,” while at the same time hedging on what its normalization techniques can actually deliver:

The normalization steps make it more reasonable to compare journals across disciplines, but careful judgement is still required.

I asked Ludo Waltman, professor of Quantitative Science Studies at Leiden University, whether Clarivate was making exaggerated claims about field normalization:

This is something all bibliometricians are struggling with. Field normalization does not make things perfectly comparable across fields, but it does make things more comparable. Explaining what field normalization does and does not accomplish therefore is not at all trivial […] I don’t think I managed to solve the problem of how to give a clear, easy-to-understand, and accurate explanation of what field normalization does and does not do!

Just another tool in the metrics toolbox

The Journal Citation Reports, Clarivate’s annual publication on the citation performance of journals, contains an entire toolbox of metrics.

Given the perennial criticism that the Impact Factor — Clarivate’s go-to hammer for comparative journal metrics — is being misused for tasks it was never designed to do, it is understandable that the company would want to develop yet another tool. “Here is a whole box of tools,” they would argue, “Use them responsibly!” Nevertheless, Clarivate also needs to avoid misrepresenting and overselling the JCI. It should start by giving the JCI a new — and more descriptive — name, something like “Field Normalized Impact Factor” (FNIF) so it is clearer to users what it is and how it was designed to be used.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

2 Thoughts on "Journal Citation Indicator. Just Another Tool in Clarivate’s Metrics Toolbox?"

The problem with JIF numbers is the limited time frame used to generate them. Papers which are cited frequently in the first two years after their publication are just as likely to be making bold, unsupportable statements as they are to be reporting solid, verifiable, repeatable work. Remember Pons and Fleischmann? Who was citing their work in 1993, other than editorialists writing about the dangers of hyperbole in the peer-reviewed journals?

The truly impactful journals are the ones publishing the foundational results upon which entire fields of research are based, but for the very reason that the work is so fundamental it tends to generate citations later, and for much longer, than that two-year snapshot Clarivate focuses on.

Three years of data is better than two, but it’s still looking at what made a splash rather than what made a difference. Compare five- and ten-year impact factors and the top journals in a given field can look very different.

This is a lucid explanation of Clarivate’s announcement. Many thanks for putting this out so quickly.
