If you want to increase your journal’s impact factor, the easiest way is through an editorial. Exempt from the process that sends manuscripts through editorial and external peer review (sometimes through several rounds), an editorial allows an editor to cite scores of articles from his or her own journal and to be published quickly.
It is therefore not surprising to find editorials like “The NHJ 2012 in retrospect: which articles are cited most?” written by E.E. Van der Wall, editor of the Netherlands Heart Journal. Published in December 2012, the brief editorial contains 25 self-citations to the NHJ, 24 of which cite articles published between 2010 and 2011 — the window from which the journal’s next impact factor will be calculated.
Now, while others enthralled by navel-gazing bibliometrics may attempt some analysis or experiment designed around their own articles, the editor of the NHJ doesn’t attempt to hide his intention, which is, without remorse, to increase his journal’s impact factor. Having analyzed what gets cited in his journal, Van der Wall is direct and forthcoming about his editorial intentions:
As can be learned from the Web of Science, guidelines generally yield a large number of cites. So hopefully we will publish a vast amount of national guidelines in the future, increasing the number of cites and thereby improving our impact factor.
If this is not enough, Van der Wall concludes his editorial with the following New Year’s wish:
I wish you all a nice, productive, and citable 2013.
Editors serve simultaneously as gatekeepers and cheerleaders for their journal. Whereas editors for the most prestigious journals in the field can focus entirely on the former, the vast majority of editors are required to spend a significant amount of their time marketing their journal to potential authors. As editors know very well, one of the best market indicators for a journal is its impact factor, and the simplest, cheapest, and fastest way to influence the impact factor is through the editorial. Through selfless self-citation, editors are merely exploiting a loophole in the metrics system much like corporations exploit loopholes in the taxation system.
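The arithmetic behind the loophole is simple. The two-year impact factor divides citations received by a journal’s articles from the previous two years by the number of “citable items” it published in that window, and editorials are excluded from the denominator. A minimal sketch, using hypothetical numbers (the 24 self-citations match Van der Wall’s editorial, but the baseline figures are invented for illustration):

```python
def impact_factor(citations_to_window, citable_items):
    """Two-year journal impact factor: citations received this year
    to items from the previous two years, divided by the number of
    'citable items' published in that same two-year window."""
    return citations_to_window / citable_items

# Hypothetical journal: 200 citable items, 300 incoming citations.
baseline = impact_factor(300, 200)  # 1.5

# One editorial adds 24 self-citations pointing into the window.
# Because the editorial itself is not a citable item, the numerator
# grows while the denominator stays fixed.
boosted = impact_factor(300 + 24, 200)  # 1.62
```

The asymmetry is the whole trick: a single denominator-exempt editorial moves the metric at zero cost.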
Is it time to close that loophole?
One solution to stem the tide of editorials like Van der Wall’s is to stop counting citations from editorials in the calculation of a journal’s impact factor. However, this creates an entirely new problem: What, exactly, is an editorial? Editors have found ways to publish full research articles under the editorial rubric, such as this one, which originally included an additional 67 self-citations before they were removed as a result of controversy. Journals publish many, many different types of articles under a wide array of names. What was once labeled an editorial under an old counting model will simply be renamed “perspective” or some other designation under a new model. Thomson Reuters, the group responsible for calculating the journal impact factor, shouldn’t have to negotiate with journal editors over what constitutes an editorial.
An alternative solution would be to designate only those texts written by editors as editorials, but this too creates a new set of problems. Editorial positions change regularly, many journals have scores of deputy and associate editors, and external authors may always be brought in to serve as guest editors.
To me, Thomson Reuters needs to adopt a simple, arbitrary standard for what counts as a citable object, based entirely on the number of references and not on the designation of the article type, author status, or other fuzzy standards such as page length or the presence of figures or an abstract. I say “arbitrary” because the number of references in a document cannot be considered a reliable reflection of its content type.
If editorials were counted as citable objects when they cite more than three papers, for instance, editorials like Van der Wall’s would likely not be written. Or they would still be written, but with the papers they refer to listed in footnotes rather than as full citations. Editors would resist the temptation to engage in navel-gazing bibliometrics if they knew that their editorial was being counted as a paper and not as a denominator-exempt editorial. Many editors would return to writing about science, and not the counting of science.
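The proposed rule can be sketched in a few lines. The cutoff of three references is the hypothetical threshold from the example above, not an actual Thomson Reuters policy:

```python
# Classify a document as a "citable item" purely by its reference
# count, ignoring article type, author status, and other fuzzy signals.
CITABLE_THRESHOLD = 3  # hypothetical cutoff discussed in the text

def is_citable(num_references):
    """A document citing more than the threshold counts toward the
    impact-factor denominator, whatever its label says."""
    return num_references > CITABLE_THRESHOLD

# A brief editorial with three references stays denominator-exempt;
# a citation-heavy "editorial" is counted as a paper.
assert not is_citable(3)
assert is_citable(25)
```

Because the rule looks only at a number the publisher cannot plausibly dispute, it sidesteps any negotiation over what an “editorial” is.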
The move to abolish the well-meaning but anachronistic policy of giving editorials a privileged position in the citation-counting process would not, by itself, solve overzealous self-citation. Self-citation remains a considerable issue for some journals, although a solution for that problem is already in place: journals that engage in egregious self-citation are delisted from the Journal Citation Reports for two years. No editor wishes to be known as the one responsible for having his journal denied an impact factor. The punishment is well fitted to the crime.
Adopting an arbitrary number that distinguishes citable from non-citable objects would have the consequence of setting hard guidelines for editors. If three references is the limit, you can expect to see most editorials written with exactly three self-citations. If the number is zero, editorials will simply reference papers in footnotes. Hard guidelines may limit the expression of the editor, but the lack of guidelines, in my opinion, is demonstrably worse. The current method of calculating the journal impact factor provides editors with a loophole to game the system.
The editorial loophole needs to close.