“Time out, in the corner” by Ken Wilcox via Flickr

Thomson Reuters released the 2011 edition of the Journal Citation Report (JCR) on Thursday, promoting increased coverage of regional journals and listing 526 new journals receiving their first journal impact factor. Far less conspicuous was a list of 51 journals that were suspended from this year’s report due to “anomalous citation patterns.”

“Anomalous citation patterns” is typically a euphemism for systematic self-citation.

Rampant self-citation is easy to identify, and it can be achieved through several premeditated strategies. Editors may attempt to coerce authors into citing the journal as a condition of manuscript acceptance, a practice that is reportedly widespread among some business and marketing journals. A less intrusive tactic is for an editor to publish a short review (listed as an editorial) that cites every article the journal published in the prior two years (the window from which the impact factor is calculated), or to contrive a bibliometric study that is limited to the papers published in one’s own journal and cites each of them as a data point.
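For readers unfamiliar with the arithmetic being gamed here, the impact factor is simply a ratio over that two-year window: citations received this year to the journal’s content from the previous two years, divided by the number of citable items published in those years. A minimal sketch in Python, with invented numbers:

```python
# The 2011 impact factor is the number of 2011 citations to a journal's
# 2009-2010 content, divided by the number of citable items (articles
# and reviews) the journal published in 2009-2010. All figures here
# are made up for illustration.

citations_2011_to_2009_2010 = 1200  # citations received in 2011
citable_items_2009_2010 = 400       # articles and reviews published

impact_factor_2011 = citations_2011_to_2009_2010 / citable_items_2009_2010
print(f"2011 JIF: {impact_factor_2011:.3f}")  # 3.000
```

An editorial that cites every article from the prior two years adds one citation per article to the numerator while, as an editorial, adding nothing to the denominator of citable items, which is exactly why the tactic pays off.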

Since 2004, the JCR has calculated self-citation rates, reporting both their percentage and their contribution to a journal’s impact factor. For some journals, self-citation can reach nearly 100%. Because self-citation is so easy to detect, it is also easy to take punitive action. Each year, dozens of journals are suspended from the JCR; many are reinstated several years later with self-citation rates that resemble those of other journals in their field.
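The accounting behind those figures is just as simple. A sketch of how the self-citation rate and the JCR’s “impact factor without self cites” relate, again with invented counts:

```python
# Split the impact factor numerator into self-citations (the journal
# citing its own recent articles) and citations arriving from other
# journals. All counts below are hypothetical.

self_cites = 500
external_cites = 700
citable_items = 400

self_citation_rate = self_cites / (self_cites + external_cites)
jif = (self_cites + external_cites) / citable_items
jif_without_self_cites = external_cites / citable_items

print(f"Self-citation rate: {self_citation_rate:.0%}")  # 42%
print(f"JIF: {jif:.2f}; without self-cites: {jif_without_self_cites:.2f}")
```

A large gap between the two impact factor figures is the red flag that makes journal-level self-citation so easy to spot.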

This year, the list of 51 suspended journals included three of the four titles that engaged in citation behavior resembling a citation cartel. Citation cartels are exceedingly difficult to detect, since they represent coordinated efforts among several journals to cite one another.
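To see why, note that each member of a cartel can keep its own self-citation rate looking normal while the group inflates one another. Catching this requires examining citation flows between journals rather than within them. Purely as an illustration (the journals, counts, and threshold below are all invented), a crude heuristic might flag pairs of journals that direct an outsized share of their outgoing citations at each other:

```python
# Toy cartel screen: flag journal pairs that send each other an
# unusually large share of their outgoing citations. All data and
# the threshold are invented for illustration.

import itertools

# cites[a][b] = citations from journal a to journal b
cites = {
    "Journal A": {"Journal A": 40, "Journal B": 300, "Journal C": 5},
    "Journal B": {"Journal A": 280, "Journal B": 35, "Journal C": 10},
    "Journal C": {"Journal A": 8, "Journal B": 12, "Journal C": 50},
}

THRESHOLD = 0.5  # flag if over half of outgoing cites go to one partner

for a, b in itertools.combinations(cites, 2):
    out_a = sum(n for target, n in cites[a].items() if target != a)
    out_b = sum(n for target, n in cites[b].items() if target != b)
    share_ab = cites[a][b] / out_a if out_a else 0.0
    share_ba = cites[b][a] / out_b if out_b else 0.0
    if share_ab > THRESHOLD and share_ba > THRESHOLD:
        print(f"Possible cartel: {a} <-> {b} "
              f"({share_ab:.0%} and {share_ba:.0%} of outgoing cites)")
```

In practice any such screen would need field-normalized baselines, since heavy citation exchange between two closely related, legitimate journals is common; that ambiguity is what makes cartels so much harder to sanction than plain self-citation.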

It would be tempting at this point to begin a long diatribe against the impact factor: why it represents the hegemony of commercial interests over the needs of scientists, or why we should abandon journal-level metrics for article-level metrics. I’ll leave it to others to take these positions.

To me, the suspension of journals from the JCR represents a serious consequence for editors who wish to manipulate an important rating, and a warning to others that such behavior is clearly unacceptable. No editor wants to be known as the one responsible for getting the journal kicked off the JCR. Being suspended from the JCR does not mean that a journal cannot publish; it means only that it cannot send an important quality signal to the scientific community. The fact that most journals are ultimately reinstated in later years suggests that suspension provides a strong incentive for editors to abide by community norms of acceptable behavior. Suspension from the JCR may be a punitive action, but it comes with a second chance.

Thomson Reuters, the parent company behind the JCR, has a long-standing vested interest in keeping the impact factor relevant and salient in the minds of academics and those who evaluate them. But it can do this only by working to maintain the integrity of the citation network and to preserve the collective belief that a citation represents some form of accomplishment and peer recognition. Thomson Reuters does not take a moral position on what constitutes acceptable behavior but relies on the norms of the scientific community to protect against “anomalous citation patterns.”

Whether or not you agree with the impact factor as a credible indicator of scientific importance, the process of sanctioning journals that attempt to game the system helps maintain the integrity of the scientific publishing enterprise for everyone.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

19 Thoughts on "Citation Cartel Journals Denied 2011 Impact Factor"

I think you are using the term “citation cartel” incorrectly. It doesn’t refer to a journal trying to increase its impact factor by increasing citations.

Instead, citation cartels refer to groups of scholars who tend to cite others in the group while at the same time excluding those not in the cartel.

Here’s my definition:

A group of scholarly authors who agree on particular scientific or research methods, definitions, or conclusions and who only cite each other or other authors in agreement with them and who neglect to cite authors who disagree with the group’s preferred methods, definitions, or conclusions. (from: http://metadata.posterous.com/definition-citation-cartel)

If you re-read Georg Franck’s 1999 article, I think you will see that he is referring to scholars rather than journals or publishers as members of cartels.

Franck’s description of a citation cartel emphasizes that there are tactics for increasing citations that have nothing to do with the merit of the paper. I use the term “cartel” in a very traditional sense to describe how cooperative agreements among journals can lead to mutual citation benefits. To me, your definition works for the purpose of distorting the literature and creating unfounded authority, not for maximizing incoming citations. For an example of this, see Steven Greenberg’s paper on beta amyloid structures in the brain and their relationship to Alzheimer’s disease:

Greenberg SA. 2009. How citation distortions create unfounded authority: analysis of a citation network. BMJ 339: b2680. http://dx.doi.org/10.1136/bmj.b2680

Absolutely! And you’ll note that we’re not a scholarly journal, nor have we applied to receive an Impact Factor. As unpaid writers, the only reward we receive is your attention, for which I am truly grateful.

Why on earth would they limit this to telling subscribers which journals are in the naughty corner? Making the list public would be a greater disincentive to “anomalous citation patterns.” Per Wowter’s comment below, do they not have enough faith in their systems to avoid false positives?

DR – the list is available on the internet with some careful googling (as are the lists for all previous years). Not that this excuses the JCR from not making it public itself!

Dear Mr. Van Noorden,

Apparently, my googling is not careful enough. Could you provide me with that list? I am working on a self-citation parameter and want to try to extend it to a cartel parameter.

Best wishes,

Tobias Opthof

I wonder if it is always a citation cartel in the negative sense. Take, for instance, the Revista Ciencia Agronomica. That is a Portuguese-language journal in a relatively niche area: agricultural sciences. In SJR you can see that external citations are keeping pace with self-citations, at around 50%: http://www.scimagojr.com/journalsearch.php?q=16900154709&tip=sid I am not really stunned by that figure.
Still, it was delisted from the JCR and WoS as a result. A pity for the smaller languages present in WoS.

As editor of the Journal of Medical Internet Research (JMIR), I am disappointed by your repeated accusation that, in publishing our editorial “Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact,” we published a “contrived bibliometric study.” JMIR is a leading and respected journal in its discipline and was never flagged as one of the self-citing journals discussed in this blog post, so your potshot is inappropriate and factually incorrect, and can only be explained by your well-known scepticism about the value of open access and altmetrics. I am happy to discuss any methodological concerns you have, but I am getting tired of having to respond to smear campaigns and polemics.

Here are the facts:

1. In the 2011 JCR Science Edition, the Journal of Medical Internet Research is – for the third time – ranked #1 in medical informatics and #2 in health services research by impact factor (2011 JIF: 4.4). The runner-up in medical informatics (Journal of the American Medical Informatics Association, JAMIA, formerly published by Elsevier, now BMJ) has an impact factor of 3.6, so it is not even close (the spread was even bigger in the previous year). The 5-year impact factor of JMIR is 5.357, compared to 4.329 for JAMIA.
2. JMIR is the leading journal in its field even when journals are ranked by the “impact factor without self-cites” (an adjusted metric which is also published by Thomson Reuters). In 2011, this adjusted impact factor is 3.278 for JMIR, compared to 2.359 for JAMIA (the runner-up in the medical informatics category).
3. The proportion of self-cites in JMIR has always been lower than that of its main competitors. For example, in the 2011 JCR, the proportion of self-cites was 25%, compared to 34% at JAMIA. This is because we hardly publish ANY editorials, letters, or commentaries discussing our own articles. The one editorial you refer to as a “contrived bibliometric study” was an exception, and as the numbers of self-cites over the past years show, JMIR is at the lower end of the journals within the medical informatics discipline.
4. Within days of publication of the editorial, which you call a “contrived bibliometric study,” we published a corrigendum and moved the references to the discussed JMIR articles to an appendix (with the sole purpose of not “inflating” the impact factor). But even had we left these references in the article, the proportion of self-cites would still not have raised any concerns; in fact, it would have been comparable to that of the #2 ranked journal, JAMIA (36%), and JMIR would certainly not be on the list of journals disqualified from the JCR discussed in this blog post.
5. The study you repeatedly tried to discredit (“Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact”) was not a “contrived study,” but a prospective, carefully planned, multiyear study on the predictive value of tweets (“tweetations”) for citations, which is already widely discussed and cited as a seminal study and an early example of the potential value of “altmetrics” to complement citation metrics. Google Scholar already lists 12 citations only 6 months after the study was published. The study was tweeted over 1,000 times. Hardly evidence of a “contrived” study with no impact. We are also aware of other studies that confirm our results with other datasets.

The fact that fixation on citations and the impact factor as a sole metric leads to behaviour shifts or questionable practices at some of our competitor journals/publishers is regrettable, but data published in the JCR (in 2012 and previous years) clearly show that this is not the case for JMIR, which has fewer self-citations than the runner-up, published by BMJ Publishing.
JMIR – an independent open access journal – has firmly established itself as a leading journal in its field, and we have achieved this through hard work and by making articles more visible and accessible than our competitor journals do. We do not need to engage in impact factor manipulation to achieve this position, nor are we fixated on citations as the sole metric of impact, as our editorial suggesting a “tweetation impact (twimpact) factor” also illustrates.

As the current publisher of The Scientific World Journal, which is one of the titles that was suspended from the 2011 JCR, I would like to reply to the description of this situation as a “citation cartel.” It is unfortunately true that two articles were published in The Scientific World Journal with excessive citations to the journal Cell Transplantation, and these have subsequently been retracted on the grounds that they violate the journal’s Policy Against Citation Manipulation (http://www.tswj.com/policies/). Both articles were written by members of the Cell Transplantation Editorial Board, and the Editor who accepted them for publication in The Scientific World Journal (and who has since left its Editorial Board) is one of the Section Editors for Cell Transplantation.

While this situation (which is explained on The Scientific World Journal’s website at http://www.tswj.com/statement/) is very regrettable, it is incorrect to describe it as a “citation cartel,” since there have never been any articles with excessive citations to The Scientific World Journal published in Cell Transplantation or Medical Science Monitor. It appears that a number of Editors from Cell Transplantation worked together to exploit their positions on the Editorial Boards of other journals (including The Scientific World Journal) in order to boost the citation count of Cell Transplantation, but without the involvement of any other Editorial Board Members or the former publisher of The Scientific World Journal. We very much agree that better safeguards should have been in place to prevent this sort of article from being accepted for publication, and after Hindawi took over the publication of The Scientific World Journal last year we implemented a number of changes to the journal’s editorial workflow that should prevent any similar cases in the future. More recently, we developed a tool that our in-house staff now use to check every manuscript submitted to any of our journals in order to detect possible cases of citation manipulation before the article is sent for peer review.

While we very much regret the fact that The Scientific World Journal will not receive an Impact Factor for the current year, we appreciate the need for Thomson Reuters to take a firm stance against any manipulation of the citation record, which is an issue that we take very seriously as a publisher.

Paul Peters

If a pitcher is found with pine tar on his glove, he’s tossed outta the game and suspended. “Anomalous citation patterns,” like pine tar on the glove, are a mark of cheating. Hooray for Thomson Reuters for keeping the playing field level.
