
If you want to increase your journal’s impact factor, the easiest way is through an editorial. Because editorials are exempt from the process that sends manuscripts through editorial and external peer review (sometimes through several rounds), an editor can cite scores of articles from his own journal and have the piece published quickly.

It is therefore not surprising to find editorials like “The NHJ 2012 in retrospect: which articles are cited most?” written by E.E. Van der Wall, editor of the Netherlands Heart Journal. Published in December 2012, the brief editorial contains 25 self-citations to the NHJ, 24 of which cite articles published in 2010 and 2011, the window from which the journal’s next impact factor will be calculated.

Now, while others enthralled by navel-gazing bibliometrics may attempt some analysis or experiment designed around their own articles, the editor of the NHJ doesn’t attempt to hide his intention, which is, quite unapologetically, to increase his journal’s impact factor. Having analyzed what gets cited in his journal, Van der Wall is direct and forthcoming about his editorial intentions:

As can be learned from the Web of Science, guidelines generally yield a large number of cites. So hopefully we will publish a vast amount of national guidelines in the future, increasing the number of cites and thereby improving our impact factor.

If this is not enough, Van der Wall concludes his editorial with the following New Year’s wish:

I wish you all a nice, productive, and citable 2013.

Editors serve simultaneously as gatekeepers and cheerleaders for their journal. Whereas editors for the most prestigious journals in the field can focus entirely on the former, the vast majority of editors are required to spend a significant amount of their time marketing their journal to potential authors. As editors know very well, one of the best market indicators for a journal is its impact factor, and the simplest, cheapest, and fastest way to influence the impact factor is through the editorial. Through selfless self-citation, editors are merely exploiting a loophole in the metrics system much like corporations exploit loopholes in the taxation system.

Is it time to close that loophole?

One solution to stem the tide of editorials like Van der Wall’s is to stop counting citations from editorials in the calculation of a journal’s impact factor. However, this creates an entirely new problem: What, exactly, is an editorial? Editors have found ways to publish full research articles under the editorial rubric, such as this one, which originally included an additional 67 self-citations before they were removed as a result of controversy. Journals publish many, many different types of articles under a wide array of names. What was once labeled an editorial under an old counting model will simply be renamed a “perspective” or some other designation under a new model. Thomson Reuters, the group responsible for calculating the journal impact factor, shouldn’t have to negotiate with journal editors on what constitutes an editorial.

An alternative solution would be to designate those texts written by editors as editorials, but this too creates a new set of problems. Editorial positions change regularly, many journals have scores of deputy and associate editors, and external authors may always be brought in to serve as guest-editors.

To me, Thomson Reuters needs to adopt a simple, arbitrary standard on what is counted as a citable object, based entirely on the number of references and not the designation of the article type, author status, or some other fuzzy standard such as page length or the presence of figures or an abstract. I say “arbitrary” because the number of references in a document cannot be considered a reliable reflection of its content type.

If editorials were counted as citable objects when they cite more than three papers, for instance, an editorial like Van der Wall’s would likely not have been written. Or it would have been written, but with the papers he refers to listed in a footnote rather than as full citations. Editors would resist the temptation to engage in navel-gazing bibliometrics if they knew that their editorial was being counted as a paper and not as a denominator-exempt editorial. Many editors would return to writing about science, and not the counting of science.
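
To make the rule concrete, here is a minimal sketch of how a reference-count test might work; the threshold, labels, and counts below are purely illustrative, not Thomson Reuters’ actual procedure:

# Illustrative sketch only: decide what counts as a "citable object" purely by
# the number of references a document contains, not by its article-type label.
REFERENCE_THRESHOLD = 3  # the hypothetical cut-off discussed above

def is_citable(n_references):
    # A document counts in the impact factor denominator
    # if it cites more than REFERENCE_THRESHOLD papers.
    return n_references > REFERENCE_THRESHOLD

# A 40-reference research article, a 25-reference editorial, and a 2-reference editorial
documents = [("research-article", 40), ("editorial", 25), ("editorial", 2)]

denominator = sum(1 for _, n_refs in documents if is_citable(n_refs))
print(denominator)  # 2 -- the citation-heavy editorial now counts; the short one does not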

The move to abolish the well-meaning but anachronistic policy of giving editorials a privileged position within the citation counting process would not solve overzealous self-citation practices. Self-citation is still a considerable issue for some journals, although there is already a solution for this problem in place: journals that engage in egregious self-citation are delisted from the Journal Citation Reports for two years. No editor wishes to be known as the one responsible for having his journal denied an impact factor. The punishment is well fitted to the crime.

Adopting an arbitrary number that distinguishes citable objects from non-citable objects would have the consequence of setting hard guidelines for editors. If the limit is three references, you can expect to see the majority of editorials written with exactly three self-citations. If the number is zero, editorials will simply reference papers as footnotes. Hard guidelines may limit the expression of the editor, but the lack of guidelines, in my opinion, is demonstrably worse. The current method of calculating the journal impact factor provides editors with a loophole to game the system.

The editorial loophole needs to close.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

29 Thoughts on "Netherlands Heart Journal Editor Delivers Dutch Citation Treat"

As you say, even if content with more than three citations were automatically classed as citable and therefore counted in the denominator of the Impact Factor, the benefit of publishing this type of editorial would still outweigh the cost for smaller journals with fewer Impact Factor citations. Nor would it deal with the problem of editors encouraging authors to cite their journal. If Thomson Reuters made the Impact Factor without self-citations more prominent in the Journal Citation Reports, with the ability to rank journals by this metric, it could have the effect of discouraging editor-inflated self-citation, beyond the threat of delisting that only applies in extreme cases.

Having clearly defined rules as to what defines each type of content in the Web of Science would be an improvement. I have met editors who believe that competing journal X has a cosy relationship with someone at Thomson Reuters and that is why certain parts of their journal are classed as non-citable content. Having an open and level playing field for article classification would remove the perception of such gaming.

Absolutely, ignoring self-citation in the calculation of the Journal Impact Factor would completely take away the incentive for the editor to manipulate it through the editorial. However, it would also have the effect of eliminating valid citations from one research article (or review) to another within the same journal, and I’m not sure this is a fair trade-off.

I would argue that tightening the rules over what constitutes each content type is much more difficult than it sounds and is responsible for creating the kind of loophole that editors are exploiting. My argument is to get rid of the classification scheme altogether and decide what is a citable object by the number of references it contains, not by a myriad of other characteristics. It is those fuzzy guidelines that create an uneven playing field for some journals.

I would not be in favour of removing self-citations from the JIF, but as Thomson already produces a JIF excluding self-citations, making this more prominent would allow users to make a clear judgement about the effect that self-citation has on a journal’s ranking and JIF.

After commenting in haste, I agree that it would be very difficult to apply simple systematic rules of content classification, but the classification scheme in Web of Science can be useful for searching for specific types of content, and as the diversity of indexed content increases this might become more relevant to users. There is no reason, save historic continuation, why the JCR should continue to rely exclusively on these classifications, and simple rules could be applied to split content into the three types that already exist (Article, Review, and non-citable).

One problem is that many papers published in a given journal cite numerous prior papers in that journal. Most journals are focused on a specific scientific issue, so a lot of the prior work is there. Typically many citations refer to prior work on the issue, so a lot of self-citation is natural and good.

Phil: Could you explain your proposal in lay terms? I do not know what “citable object” means.

An impact factor that excluded self-citation would be interesting, but it is a different metric. One might call it external impact or some such, because it excludes the impact on the local community typically found in a journal.

Citable items are the number of papers that factor into the denominator of the Impact Factor calculation. At present, editorials (along with news, errata, letters, etc) are excluded. For a detailed explanation, see the following article by McVeigh and Mann:

The Journal Impact Factor Denominator: Defining Citable (Counted) Items
by Marie E. McVeigh, MS and Stephen J. Mann
JAMA. 2009;302(10):1107-1109. doi:10.1001/jama.2009.1301
http://jama.jamanetwork.com/article.aspx?articleid=184527
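
In rough terms, and with entirely made-up numbers, the arithmetic behind a 2012 impact factor looks like this:

# Simplified, illustrative arithmetic for a 2012 impact factor (numbers are made up)
citations_2012_to_2010_2011 = 500  # citations received in 2012 to items published in 2010-2011
citable_items_2010_2011 = 200      # articles and reviews published in 2010-2011 (the denominator)

impact_factor_2012 = citations_2012_to_2010_2011 / citable_items_2010_2011
print(impact_factor_2012)  # 2.5

# Editorials, letters, news items, and errata are excluded from the denominator,
# yet the citations they make to recent articles in the same journal still feed
# the numerator -- hence the loophole.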

And the classification of citable and non-citable objects is, in any case, privately negotiated.

To quote from

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0030291

“During the course of our discussions with Thompson Scientific, PLoS Medicine’s potential impact factor— based on the same articles published in the same year—seesawed between as much as 11 (when only research articles are entered into the denominator) to less than 3 (when almost all article types in the magazine section are included, as Thomson Scientific had initially done—wrongly, we argued, when comparing such article types with comparable ones published by other medical journals). At the time of writing this editorial, we do not know exactly where our 2005 impact factor has settled. But whatever it turns out to be, as you might guess from this editorial, we feel the time has come for the process of “deciding” a journal’s impact factor to be debated openly. Something that affects so many people’s careers and the future of departments and institutions cannot be kept a secret any longer.”

I’d be interested in knowing the current distribution of the count of cited references in items that are deemed citable vs. non-citable. Is there evidence in the data to back up the anecdotal assertions from certain editors that there are preferences shown by ISI in how they treat items from different journals/publishers?

One of the points of editorials or editors’ notes is to highlight or add some perspective to content published in the journal. Publishing an editorial about an important study appearing in the journal, and perhaps citing the authors’ work (also possibly published in that journal) leading up to the watershed paper, should be allowed, and I do think those citations should count. I can also see a place for an annual editorial that talks about the most cited or most downloaded articles as a sort of “best of the best” year-end review. I think the readers would like that. In this case, I am not thinking those citations should count. Several of my journal editors publish an editor’s note in each issue literally called “In this Issue” where they discuss each paper, but we do not list those papers in the references. I agree that asking TR to decide which citations count and which do not is not the answer. It seems the self-citation policing may be the only way around this.

“In this Issue” types of editorials are useful, and there is no problem with them because the within-year citations in these editorials are not counted by the impact factor. Phil’s suggestion of limiting citable objects to those with 3 or more references is reasonable. Another way to do it would be to calculate the fraction of citations by object that contribute to the impact factor for that journal, and exclude those records whose fraction is above a certain threshold (like Phil’s example above of 24/25). The threshold would need to be set high enough to allow honest citation to recent (1-2 yr old) articles from the journal. Of course, the threshold would need to be set by studying a large set of WoS or Scopus data. Any grad students out there want to try this?
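
Roughly, the screen described above might look like the following; the threshold value is purely illustrative and would need calibrating on real data:

# Illustrative sketch of the threshold idea above (names and numbers are hypothetical)
FRACTION_THRESHOLD = 0.8  # would need calibrating against a large WoS or Scopus sample

def exceeds_threshold(window_self_citations, total_references):
    # Flag a record whose references are overwhelmingly self-citations that fall
    # inside the journal's current impact factor window
    if total_references == 0:
        return False
    return window_self_citations / total_references > FRACTION_THRESHOLD

print(exceeds_threshold(24, 25))  # True -- 24/25 = 0.96 exceeds the threshold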

Good article, pointing out a potentially serious problem. I’m sure many “cheerleader” editors like me are saying, “Dang! Why didn’t I think of that?” 😉

Self-cites in citable articles usually are legitimate, as the peer-review process provides some sort of check. But, why should self-cites in non-citable articles be counted? There’s just too much incentive here to push the limits, and the system would be fairer without them.

A very simple rule Thomson Reuters could implement would be to not count self-cites in non-citable articles.

In theory that would be the way to do it. But in the JCR the numerator is not linked to the denominator, i.e. they do not know which cites are aimed at/originating from which document types, they only know that cites are aimed at/originated from the journal as a whole.

I agree this seems true for the reported data. But is it true for the more basic raw data? That is, when reading an article and recording citations, does the recorder (person or computer) know whether the article is citable or not? Maybe someone from T-R could answer this?

An extreme example of the problem discussed by Phil can be found here: http://dx.doi.org/10.1016/j.cortex.2010.01.004. This editorial published in Cortex in 2010 gives 235 self citations to publications in the same journal in 2008 and 2009. Ironically, the editorial is entitled ‘The impact of self-citation’. In the same year, Cortex has also published editorials with 234, 231, and 97 self citations.

In the article by Reedijk, J. and Moed, H.F. (2008), “Is the impact of journal impact factors decreasing?”, Journal of Documentation 64, 183-192, we give several striking examples of the possible effect of what we term “editorial self-citations”. In these cases, editors seemed to be much less open than the editor of the Netherlands Heart Journal. We even describe a case in which one and the same editorial, citing mainly 1-2 year old articles from a recently launched journal of a particular publisher, was published by the same authors, though in different author sequences, in four other journals from the same publisher.

As a rule, citation analysis of a scientific-scholarly journal should only include citations from peer-reviewed articles to peer-reviewed articles. Editorials should be discarded, both as a source and as a target of citations. For instance, the SNIP journal metric included in Scopus is based on this principle. The citations in the editorial in Neth. Heart Journal are NOT included in the SNIP counts. From a technical perspective, this type of citation analysis can be carried out only if citations are linked to journals on an item-by-item basis rather than on the basis of the cited journal titles.

Henk F. Moed
Director, Informetric Research Group,
Elsevier, Amsterdam, Netherlands
Email: h.moed@elsevier.com

In principle, discarding citations to, and from, editorials makes much sense. However, it is predicated upon reliably identifying “editorial” material. Under such restrictions, an editor may simply reclassify an article as a “review”. In the blog post Emergence of a Citation Cartel, I detail several of these documents classified as reviews.

I believe the only way to solve the problem is to ignore the article classification system entirely when it comes to calculating citation scores. Identifying a citable object must be based on whether (or how many) citations are made in the document. Any other method creates ambiguity. And it is that ambiguity that allows editors to exploit the system.

Since editorials rarely are cited themselves, this would effectively kill editorials with more than N citations, where N is the “citable” limit. Not a bad solution at all, and fairly easy for T-R to implement, it seems.

In my humble view, Henk brings an important point to the discussion, namely the use of other metrics (he suggests SNIP) which are not affected by the problem under discussion. Other possible choices could be the Eigenfactor (EF) and the Article Influence (AI) score, which are included in the JCR. EF and AI completely discard self-citations and are obviously immune to the discussed issue.

I would even go a step further in this direction. In fact, as Phil pointed out, it is somewhat difficult to reliably identify “editorials”, which is a counter-argument for the use of SNIP alone, and he is also correct in stating that there are plenty of legitimate reasons for an author to include self-citations to a journal, which can be seen as a negative point in the use of EF or AI alone to evaluate the impact of a journal. As such, one solution to “close the loophole” could be, instead of suggesting changes in the definition of the IF (everyone has his/her own opinion on this matter and there are pros and cons for every solution), to simply use, as suggested by several authors, more than one indicator (for instance, IF and SNIP, or IF, EF, and AI). The information one should look for at this point is no longer the absolute value of the indicator or the ranking of the journals with respect to a single indicator, but the correlation among those quantities.

In other words, let any EiC or journal decide on their policy of self-citations and on the use of editorials. This may influence IF, but will not do so for (say) the AI. If the rankings with respect to the 2 indices are very different, this could be a good indication that there may be something not appropriate in the behavior of the EiC/journal.

Self citation in the manner done in this journal, drawing attention to the most-cited papers, is not inappropriate at all. It’s an entirely legitimate thing for an EiC to do and quite likely of interest to the journal’s readers. What’s not appropriate is to regard the impact factor as a proxy for quality, particularly not the quality of individual articles. Highlighting which articles have been cited most lifts them from the morass of averages that the impact factor is.

Has anybody considered the possibility that the editorial may have been a deliberate provocation, born out of a deep scepticism of the status of importance given to the impact factor? The blatant way in which it was done may well point to such a motive.

Jan Velterop

So, let me get this straight. You’re simultaneously arguing that what he did was perfectly fine and appropriate for an EIC, but also that it was such a blatant misdeed it must be a deliberate provocation?

No, it’s not a misdeed. It’s openly, indeed blatantly, demonstrating that the system can be gamed so easily and is therefore a dangerous thing to take as seriously as is being done.

Your definition of ‘blatant’ may be unduly negative. Mine is simply “without any attempt at concealment; completely obvious”.

Then why did you characterize it as a provocation? Why would something totally normal and within bounds be provocative? You’re trying to have it both ways — blatantly.

I think he means it was not a misdeed in that it was not intended to boost the IF but rather to highlight the problem, which might be a good deed. I had the same impression.

If it is indeed a deliberate provocation, it’s a damned subtle one. One would think that if the editor was trying to make a specific point, it would have been more effective to clearly state that point, rather than (if your read is correct), subtly implying it through something of a “meta” parody of what an editor would do if he were trying to game the system.

Writing an article that says “the system is corrupt, here’s how one takes advantage of that corruption” would read very differently than this article’s plea for help, “We have therefore two recommendations for authors of publications in our journal: 1) send your good work to our journal (preferably originals and reviews), and 2) also cite our journal in other indexed journals when there are publications in our journal on the same topic.”

I guess the EiC is Dutch. It’s a well-kept secret, but we Dutch are incredibly subtle.

So then not so much a “blatant” provocation as a slyly hidden one? It seems that if this were the intent, there are much more effective ways of making a protest statement, particularly ones that wouldn’t call into question the editor’s ethical stance.
