The case of Land Degradation & Development and Solid Earth, two journals recently involved in self-citation and citation cartel practices, underscored a fundamental principle of the Journal Citation Reports (JCR) — that citation manipulation is treated as acceptable practice as long as it does not reach a critical threshold. So, what is that threshold?
The JCR uses several guidelines to make suppression decisions but is not explicit about thresholds that would automatically suppress a journal.
Using public suppression information provided by the JCR, along with prior-year data captured by the Internet Archive, I was able to reverse-engineer JCR's thresholds for suppression. I should note that the JCR only began publishing these critical data points in 2013.
In the last three JCR editions, journals suppressed for self-citation had at least 50% of their Impact Factor-directed citations come from the journal itself (horizontal line, Figure 1). The percent distortion of category rank, that is, how much self-citation changed the journal's ranking within its subject category, starts at 20% (vertical line), with one exception: the journal Foundations of Science (Springer) had a self-citation rate of 69% in 2013, which distorted its rank by only 13%.
In 2015, Land Degradation & Development's self-citation rate was 33%, well below the threshold of 50%. Result: no suppression.
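To make the inferred decision rule concrete, here is a minimal Python sketch of the self-citation thresholds described above. The 50% and 20% cutoffs come from the observed suppression decisions; the function name and the assumption that both conditions must hold jointly are mine, not Clarivate's, and the Foundations of Science case shows the rank-distortion cutoff is not applied strictly.

```python
def flag_self_citation(self_cite_rate: float, rank_distortion: float) -> bool:
    """Hypothetical reconstruction of the JCR self-citation suppression rule.

    self_cite_rate:  fraction of Impact Factor-directed citations that come
                     from the journal itself (observed threshold: >= 50%)
    rank_distortion: fractional change in category rank caused by
                     self-citation (observed threshold: >= 20%, with at
                     least one exception in the 2013 data)
    """
    return self_cite_rate >= 0.50 and rank_distortion >= 0.20


# Land Degradation & Development in 2015: 33% self-citation rate.
print(flag_self_citation(0.33, 0.20))  # False: below the 50% threshold
```

Under this sketch, the journal's 2015 numbers fall comfortably outside the suppression zone, matching the JCR's actual decision.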
While fewer in number, journals suppressed for citation stacking (also known as a citation cartel) illustrate two minimal thresholds: at least 80% of the citations from another journal are directed at papers published in the previous two years (vertical line, Figure 2), and these citations must account for at least 15% (horizontal line) of the total citations used to calculate the target journal's Impact Factor.
In 2015, citations from Solid Earth to Land Degradation & Development (26%) were above the horizontal threshold; however, they did not exceed the vertical threshold (68%). Result: no suppression.
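The citation stacking rule can be sketched the same way. Again, the 80% and 15% cutoffs are the observed minimums from past suppressions; the joint-condition logic and naming are my assumptions.

```python
def flag_citation_stacking(recent_fraction: float, donor_share: float) -> bool:
    """Hypothetical reconstruction of the JCR citation stacking rule.

    recent_fraction: share of the donor journal's citations aimed at the
                     target's previous two publication years
                     (observed threshold: >= 80%)
    donor_share:     donor citations as a share of the total citations used
                     to calculate the target's Impact Factor
                     (observed threshold: >= 15%)
    """
    return recent_fraction >= 0.80 and donor_share >= 0.15


# Solid Earth -> Land Degradation & Development, 2015: 68% recent, 26% share.
print(flag_citation_stacking(0.68, 0.26))  # False: 68% < 80%
```

One threshold was crossed (26% > 15%) but the other was not, which is consistent with the JCR's decision not to suppress.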
Whereas the combined effects of self-citation and citation stacking more than doubled Land Degradation & Development’s 2015 Impact Factor, neither practice was sufficient on its own to result in suppression. Based on my 2016 calculations, it is unlikely that either journal will be suppressed in the upcoming 2016 JCR report.
Reverse-engineering JCR decisions confirmed what I concluded in my last post — that threshold levels for suppression are set extremely high, and that JCR editors are unwilling to consider malfeasance, even when it is staring them in the face. If this is just a numbers game, shouldn't the JCR simply make its criteria for suppression explicit and fully transparent? Why should it provide "guidelines" instead of figures? Would we all be better off with rules rather than rules-of-thumb?
Scopus, a literature index that generates journal-level metrics, has explicit performance benchmarks for journals. For example, journals are flagged when their self-citation rates are two or more times higher than those of peer journals within their field. Wim Meester, Head of Content Strategy for Scopus, explained by email that flagged journals then go to content selection and advisory boards to determine whether they should remain in the Scopus index. As of May 2017, 303 titles have been discontinued from Scopus, 42 of them for not meeting metrics guidelines.
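The Scopus benchmark is a relative rule rather than an absolute one, which a short sketch makes clear. How Scopus aggregates the peer rate is not stated in my correspondence; using the field median is my assumption for illustration.

```python
def scopus_self_citation_flag(journal_rate: float, field_median_rate: float) -> bool:
    """Hypothetical sketch of the Scopus self-citation benchmark: flag a
    journal when its self-citation rate is two or more times the typical
    rate among peer journals in its field (aggregation method assumed).
    """
    return journal_rate >= 2 * field_median_rate


# A journal with a 33% self-citation rate in a field where peers average 10%
# would be flagged for review; it would not be flagged if peers averaged 20%.
print(scopus_self_citation_flag(0.33, 0.10))  # True
print(scopus_self_citation_flag(0.33, 0.20))  # False
```

Note that crossing the benchmark only triggers review by the content selection and advisory boards; it does not automatically discontinue the title.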
The Scopus team did discuss whether to discontinue indexing Land Degradation & Development and Solid Earth. Their decision to continue indexing, however, was made entirely in-house and did not involve an external advisory board.
Meester explained that discontinuing content indexing stops "the forward flow" of citation data. Discontinued journals do not show up in Scopus' journal metrics or in other services that use Scopus data (CWTS Journal Indicators and SJR). In contrast, journals suppressed from the JCR continue to be indexed; they are simply barred from receiving an Impact Factor and other related metrics until reevaluation, often the following year.
Ludo Waltman, researcher at the Centre for Science and Technology Studies (CWTS) of Leiden University, which hosts CWTS Journal Indicators, argues that title suppression decisions need to be done by the scientific communities and not by the indexers. As he explained by email:
The rules for suppressing titles will always be arbitrary and debatable. As producers of journal metrics, we are simply not in the position to develop such rules, because we have insufficient background knowledge of what might be going on in specific scientific communities. I think we should therefore provide metrics for all journals in the database that we work with, even suspicious journals, but I also think we should provide additional statistics that can be used to get an impression of possible gaming of the metrics.
In sum, we are left with two models for curating citation data: the JCR model, where in-house editors determine which titles are evaluated; and the Scopus model, where editorial decisions are made by external groups of academics who advise on content selection. Neither model is perfect. Is one preferable over the other?
More importantly, now that we know how suppression decisions are made, should metrics companies suppress titles at all, or simply make the underlying data more transparent?